GoDaddy Transferred A Domain By Mistake And Refused To Fix It via @sejournal, @martinibuster

GoDaddy is alleged to have transferred a domain name away from its longtime registrant without the proper authorization and required documentation. The victim spent nearly ten hours with customer service, only to be told that there was nothing GoDaddy could do to fix the problem.

Domain Transfer Happened On A Saturday

Interestingly, the rogue domain transfer happened on a Saturday, which could be an important detail because some domain registrars outsource their customer service on weekends, and I have heard of other occasions where mistakes occurred due to weaker quality control. I know of a case where high-value domain names worth six to seven figures were stolen on a weekend: an attacker manipulated the weekend customer service staff into changing the email address on the account, enabling the thief to transfer all of the one- and two-word domains to another account.

What happened with this specific domain was not a case of robbery but something worse. A weekend customer service person made a mistake processing a legitimate domain name change for another GoDaddy customer, and instead of initiating the change on the correct domain, they transferred the victim’s domain.

Compounding the error, GoDaddy’s weekend customer service failed to follow their own protocol for preventing unauthorized transfers, thereby allowing the domain to be transferred to someone else.

32 Calls And Nearly 10 Hours On The Phone

The process of getting GoDaddy to reverse its mistake was a bureaucratic nightmare: the victim placed thirty-two phone calls and spent 9.6 hours on the phone with GoDaddy’s customer service.

“Lee called GoDaddy on Sunday. They confirmed the domain was no longer in his account but could not say where it went due to privacy concerns. They told him to email undo@godaddy.com. He did but did not receive any type of response when emailing that address. Of course Lee didn’t really feel like this was the appropriate level of urgency for this issue. He asked for a supervisor who was even less helpful. Lee was not happy. He may have said some hurtful things to GoDaddy’s support personnel during this call. That first call lasted 2 hours, 33 minutes, and 14 seconds.

On Monday morning, Lee and a coworker started working in earnest on this issue because there was still no update from GoDaddy. Calling in yielded a different agent who told Lee to email transferdisputes@godaddy.com instead. By Tuesday the address had changed again to artreview@godaddy.com. The instructions shifted by the day. It seemed like every GoDaddy tech support person had a slightly different recommendation.”

Compounding the frustration, every call to GoDaddy generated a new case number, with none of the case numbers tied to any of the previous ones.

GoDaddy’s Response

After four days of trying to get through to someone at GoDaddy who could resolve the problem, the company finally responded with the following resolution:

“After investigating the domain name(s) in question, we have determined that the registrant of the domain name(s) provided the necessary documentation to initiate a change of account. … GoDaddy now considers this matter closed.”

GoDaddy’s response contained links to how to dispute a domain name change at ICANN (the global organization that coordinates the domain name system), instructions on how to look up domain name registration information, and a customer support page about contacting legal representation.

That’s it.

Error Fixed, But Not By GoDaddy

The person who wrote about the issue said that they contacted a friend within GoDaddy who was able to get the matter handled properly. Even so, the error was ultimately fixed not by GoDaddy but by the innocent person who discovered someone else’s domain name in their GoDaddy account.

As previously stated, the entire fiasco began with a mistake on GoDaddy’s part while handling a legitimate domain change request: GoDaddy processed the change against the victim’s domain instead of the domain named in the request. The person who ended up with the victim’s domain name in their account contacted the victim, and between the two of them they began the process of transferring the domain back to its rightful registrant.

Domain Name Ownership Is Non-Existent

A common mistake made by many developers and business owners is believing that they own a domain name. That is incorrect: nobody owns a domain name. Domain names are registered but never owned. The registration entitles the registrant to use the domain name, but they never actually own it. That is how the domain name system works, and it’s part of the reason why this issue played out the way it did. However, the problem in this case was due solely to a mistake by GoDaddy.

The post that detailed the nightmare refers to GoDaddy’s “domain ownership protection” services, but that’s not actually what the product is called. There is no such thing as domain name ownership protection.

What GoDaddy sells is a Domain Protection service that protects against unauthorized transfers and accidental expiration. The victim paid for that protection, but because the error was GoDaddy’s own mistake, the protection did nothing for the victim: the domain change went through without the proper documentation.

Read the blog post about how GoDaddy made a mistake and not only failed to fix the problem but didn’t even acknowledge having made it:

GoDaddy Gave a Domain to a Stranger Without Any Documentation

Featured Image by Shutterstock/AVA Bitter

Google’s AI Overviews Cut Organic Clicks 38%, Field Study Finds via @sejournal, @MattGSouthern

A randomized field experiment finds Google’s AI Overviews reduce organic clicks to external websites by 38% on queries where they appear, while self-reported search satisfaction stays nearly unchanged when the summaries are removed.

The working paper by researchers at the Indian School of Business and Carnegie Mellon University was posted to SSRN this month. Authors Saharsh Agarwal and Ananya Sen describe it as the first randomized field experiment to test how AI Overviews affect user behavior in a real browsing environment.

How The Experiment Worked

Agarwal and Sen built a Chrome extension that randomly assigned 1,065 U.S. participants to one of three groups. People were recruited from Prolific and used Chrome on desktop. They also had to meet minimum browsing-history thresholds, so the sample reflects active desktop Chrome users rather than all Google users.

The control group saw Google Search normally. A “Hide AIO” group had the extension remove AI Overviews in real time. A third group was redirected to Google’s AI Mode for all searches. The study ran for two weeks per participant between January and February 2026.

Researchers pre-registered the experiment with the AEA RCT Registry before data collection. Over 95% of users in the Hide AIO group did not detect any changes during the study.

What The Researchers Found

AI Overviews appeared on 42% of queries. Removing them increased outbound clicks from 0.38 to 0.61 per search, meaning the summaries reduced outbound organic clicks by 38% on triggered queries, with zero-click searches rising from 54% to 72%.
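The 38% figure follows directly from those per-search click rates. A minimal sketch of the arithmetic, using only the numbers reported in the paper:

```python
# Outbound clicks per search, as reported in the study.
with_aio = 0.38     # AI Overviews shown (control)
without_aio = 0.61  # AI Overviews hidden by the extension

# Relative reduction attributable to AI Overviews on triggered queries.
reduction = (without_aio - with_aio) / without_aio
print(f"{reduction:.0%}")  # 38%
```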

Effects were strongest when AI Overviews appeared at the top of the page, which occurred 85% of the time. Removing top-position AI Overviews nearly doubled outbound clicks, while removing lower-placed ones had no effect.

Sponsored clicks and search frequency remained steady, indicating substitution between AI Overviews and organic visits.

The User Experience Finding

The endline survey used a 1-to-5 Likert scale to assess participants’ search experience. Responses from the control and Hide AIO groups were nearly identical across all measures, including satisfaction, information quality, and ease of finding information.

The researchers wrote that AI Overviews “divert traffic away from publishers without delivering measurable improvements in user experience.”

How AI Mode Compared

Participants directed to AI Mode had lower outbound click rates, higher zero-click rates, and lower satisfaction at endline compared to other groups.

The authors note that these results are exploratory, as higher attrition, extension uninstalls, and workarounds may have influenced the outcomes.

Why This Matters

Independent measurements of the impact of AI Overviews on traffic have mostly been correlational. Pew Research found users click 8% of the time with AI Overviews, compared to 15% without. Ahrefs analyzed GSC data and reported a 58% drop in click-through rate for top-ranking pages when AI Overviews appeared.

This experiment adds a different approach by randomly assigning users to see AI Overviews or not, isolating the causal effect.

Google VP Liz Reid claims AI Overviews cut “bounce clicks,” but provides no data backing the user-benefit side. The Agarwal and Sen paper tested a related question with a randomized design, finding no measurable change in satisfaction or ease of finding information.

Looking Ahead

The paper is a draft on SSRN and has not been peer-reviewed. The authors plan to add more results, and we will provide an update if the findings change.

AI Overview CTR Fell 61%, But Clicks Didn’t Collapse via @sejournal, @MattGSouthern

Brand-cited AI Overview CTR fell 61% from Q3 to Q4, according to a new report from Seer Interactive, but the clicks on those pages barely moved.

The drop looks alarming on a dashboard, though it isn’t quite what it seems. Seer’s analysis of 5.47 million queries across 53 brands shows what’s actually happening.

What Happened In Q4

In September, brand-cited pages in AI Overviews received 15.8 million impressions and 398,798 clicks, with a CTR of 2.52%.

In October, impressions doubled to 33.1 million, and clicks increased slightly to 400,271, but CTR dropped to 1.21% as rapid impression growth outpaced clicks.

This isn’t a performance collapse but a math artifact: impressions grew much faster than clicks, so the ratio fell.

November Is A Different Story

November’s impressions rose to 39.5 million, but clicks dropped to 301,783, and CTR fell to 0.76%.

Something pulled clicks down while visibility increased, and Seer’s data can’t explain why. Across Q4, the two patterns combine into the single 61% figure, which is why it’s important to analyze months separately in Search Console data.
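Running the reported numbers makes both patterns visible. A quick sketch with Seer’s monthly figures hard-coded:

```python
# CTR = clicks / impressions. Monthly figures as reported by Seer.
months = {
    "September": (398_798, 15_800_000),
    "October": (400_271, 33_100_000),
    "November": (301_783, 39_500_000),
}

for month, (clicks, impressions) in months.items():
    print(f"{month}: {clicks / impressions:.2%} CTR")

# September: 2.52% CTR -- the Q3 baseline
# October:   1.21% CTR -- clicks roughly flat, impressions doubled
# November:  0.76% CTR -- clicks fell while impressions kept rising
```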

What Seer Can’t Tell You

The agency is clear on one limit: it can’t determine whether the October impression surge was due to Google serving AI Overviews for more queries where brands were already cited, or because the brands earned citations through their SEO. Both explanations fit, and neither can be confirmed without a detailed analysis of the account.

Websites with similar data face the same ambiguity. Growing impressions are good if earned, but noise if they result from Google’s decisions. Your dashboard might not clarify this without account-level query analysis.

How This Fits With Past AIO CTR Coverage

Several studies show lower CTRs when AI Overviews appear. Ahrefs analyzed 146 million results and found a 20.5% AIO trigger rate, which was higher for informational and question queries.

A SISTRIX analysis in Germany reported a 59% drop in CTR at position one with AIOs, and Pew Research found that U.S. users clicked 8% of the time with AIOs versus 15% without.

Seer’s October data raises the question of whether a falling CTR on cited pages always means fewer clicks or can indicate greater visibility with the same click count.

Other Findings Worth Noting

Brand-cited pages get about 120% more clicks per impression than uncited pages on AIO SERPs, but cited pages still lag pages on no-AIO SERPs by 38%. A citation helps, but it doesn’t restore pre-AIO performance.

Seer reports that organic CTR on AIO SERPs rose from 1.3% in December 2025 to 2.4% in February 2026, but calls this a leveling off rather than a recovery and advises against forecasting based on two months’ data.

Why This Matters

A falling CTR in your Q4 data doesn’t necessarily mean you’re losing clicks; check impressions for the same period before assuming there’s a problem.

Benchmarks show general trends, but your data tells your specific story. If clicks stay flat or grow faster than impressions, it’s a different issue than actual decline.

Looking Ahead

The main thing to watch is whether added AI Overview visibility starts driving more clicks, or whether cited pages continue absorbing more impressions without much traffic upside.

If that pattern holds, the value of being cited may look different from what CTR alone suggests. You may need to separate visibility, clicks, and citation coverage before deciding whether AI Overview exposure is helping or simply changing how performance gets measured.


Featured Image: TaniaKitura/Shutterstock

Google Pushes “Bounce Clicks” Explanation For AI Overview Traffic Loss via @sejournal, @MattGSouthern

Google’s head of Search, Liz Reid, told Bloomberg’s Odd Lots podcast that AI Overviews are reducing “bounce clicks” from publisher pages, continuing an argument she has made in public appearances since last year.

Reid appeared on the April 23 episode of Odd Lots. Hosts Joe Weisenthal and Tracy Alloway asked how AI Overviews affect publisher traffic and ad revenue.

What Reid Said

Reid described what she called “bounce clicks” as the category of clicks AI Overviews are reducing.

She said users who quickly click and return to search no longer need to visit the page because they get the fact from the Overview. Those wanting to read longer still click through. She acknowledged fewer ad clicks for some queries but said increased query volume balances this. The argument aligns with Reid’s points in other public appearances.

The Pattern

Reid published a Google blog post in August stating that organic click volume from Google Search to websites was “relatively stable” year-over-year and that “quality clicks,” defined as visits where users don’t quickly click back, had increased.

In an October Wall Street Journal interview, she explicitly used the phrase “bounced clicks” and said that ad revenue with AI Overviews had been relatively stable.

The Bloomberg appearance makes the same basic case Reid made in August, describing some lost clicks as low-value visits where users would have quickly returned to Search.

What Reid Didn’t Say

In none of those three appearances has Reid provided supporting data.

Her August blog post included no charts, percentages, or year-over-year comparisons. On Bloomberg, she told Weisenthal and Alloway that Google tracks whether people come to search more often as one of its key signals, without providing numbers.

Weisenthal and Alloway asked about traffic and monetization, but the interview didn’t include follow-up questions requesting evidence for Reid’s explanation.

Google has not publicly shared data that would let outside observers test that distinction.

What Independent Data Shows

Chartbeat data published in the Reuters Institute’s Journalism and Technology Trends and Predictions 2026 report found that global publisher Google search traffic dropped by roughly a third. Google Discover referrals fell 21% year-over-year across more than 2,500 publisher websites.

Seer Interactive’s analysis found that organic click-through rate for queries with AI Overviews fell from 1.76% in 2024 to 0.61% in 2025, a 61% drop. Seer noted those queries tend to be informational searches that historically had lower CTRs.

Pew Research Center’s study of 68,000 real search queries found users clicked on results 8% of the time when AI Overviews appeared, compared with 15% when they did not.

Digital Content Next, a trade body whose members include the New York Times, Condé Nast, and Vox, reported a median 10% year-over-year decline in Google search referrals across 19 member publishers between May and June 2025. DCN CEO Jason Kint said at the time that the member data offered “ground truth” about what was happening to publisher traffic.

Why This Matters

Reid’s “bounce clicks” description answers a question the data raises, but it answers it without data of its own. That’s worth keeping in mind when evaluating any public claim from a platform that controls the measurements.

A business owner can’t verify from Reid’s Bloomberg appearance whether AI Overviews are cutting only low-value clicks or cutting across query types. The independent data measures total clicks and click-through rates, not the subset of clicks Reid describes as low-value. If Google has internal data that separates the two, it hasn’t shared it in the eight months since the August blog post.

Looking Ahead

Reid said that Google measures how often people return to Search. That signal tracks Google’s retention. Publishers need a traffic metric, but Google hasn’t shared one. Until it does, “bounce clicks” should be treated as a claim rather than a finding.

Google’s Robots.txt Docs Expand, Deep Links Get Rules, EU Steps In – SEO Pulse via @sejournal, @MattGSouthern

Welcome to the week’s Pulse: updates affect how deep links appear in your snippets, how your robots.txt gets parsed, how agentic features work in Search, and how the EU’s data-sharing rules apply to AI chatbots.

Here’s what matters for you and your work.

Google Lists Best Practices For Read More Deep Links

Google updated its snippet documentation with a new section on “Read more” deep links in Search results. The documentation lists three best practices that can increase the likelihood of these links appearing.

Key facts: Content must be immediately visible to a human on page load, and content hidden behind expandable sections or tabbed interfaces can reduce the likelihood of these links appearing. Sections should use H2 or H3 headings. The snippet text needs to match the content that appears on the page, and pages with content loaded after scrolling or interaction may further reduce the likelihood.

Why This Matters

The three practices are the first specific guidance Google has published on this feature. Sites using expandable FAQ sections, tabbed product detail areas, or scroll-triggered content for core information may see fewer deep links in their snippets compared with sites that render the same content on page load.

The guidance matches a pattern Google has applied to other Search features. Content that renders without user interaction is more likely to appear in enhanced display.

Slobodan Manić, founder of No Hacks, made a related observation on LinkedIn:

“The documentation is framed around one snippet behavior (read more deep links in search results), but the language Google chose reads as a general preference. ‘Content immediately visible to a human’ is the structural instruction, not a read-more-specific tip.”

Manić’s point extends his April 16 IMHO interview with Managing Editor Shelley Walsh, where he argued that most websites are structurally broken for AI agents. He argues that search crawlers and AI agents now face the same structural problem, and the audit is the same for both.

For existing pages, the audit question is whether key information is contained within a click-to-expand element. If a page already has a “Read more” deep link for one section, that section’s structure serves as a guide to what works. For other sections on the same page, replicating that structure may also improve their chances.

Google describes the guidance as best practices that can “increase the likelihood” of deep links appearing. That hedging matters because this is not a list of requirements, and following all three may not guarantee the links appear.

Read our full coverage: Google Lists Best Practices For Read More Deep Links

Google May Expand Its Robots.txt Unsupported Rules List

Google may add rules to its robots.txt documentation based on analysis of real-world data collected through HTTP Archive. Gary Illyes and Martin Splitt described the project on the latest Search Off the Record podcast.

Key facts: Google’s team analyzed the most frequently unsupported rules in robots.txt files across millions of URLs indexed by the HTTP Archive. Illyes said the team plans to document the top 10 to 15 most-used unsupported rules beyond user-agent, allow, disallow, and sitemap. He also said the parser may expand the typos it accepts for disallow, though he did not commit to a timeline or name specific typos.

Why This Matters

If Google documents more unsupported directives, sites using custom or third-party rules will have clearer guidance on what Google ignores.

Anyone maintaining a robots.txt file with rules beyond user-agent, allow, disallow, and sitemap should audit for directives that have never worked for Google. The HTTP Archive data is publicly queryable on BigQuery, so the same distribution Google used is available to anyone who wants to examine it.

The typo tolerance is the more speculative part. Illyes’ phrasing implies that the parser already accepts some misspellings of “disallow,” and more may be honored over time. Audit any spelling variants now and correct them, rather than relying on Google’s typo tolerance.

Read our full coverage: Google May Expand Unsupported Robots.txt Rules List

EU Proposes Google Share Search Data With Rivals And AI Chatbots

The European Commission sent preliminary findings proposing that Google share search data with rival search engines across the EU and EEA, including AI chatbots that qualify as online search engines under the DMA. The measures are not yet binding, with a public consultation open until May 1 and a final decision due by July 27.

Key facts: The proposal covers four data categories shared on fair, reasonable, and non-discriminatory terms. The categories are ranking, query, click, and view data. Eligibility extends to AI chatbot providers that meet the DMA’s definition of online search engines. If the Commission maintains eligibility through the final decision, qualifying providers could gain access to anonymized Google Search data under the Commission’s proposed terms.

Why This Matters

This proposal explicitly extends search-engine data-sharing eligibility to AI chatbots under the DMA. If the eligibility survives the consultation, the regulatory category of “search engine” now includes products that most search marketing work has treated as a separate category.

The consequences vary depending on where you operate. For sites optimizing for EU/EEA visibility, the change could broaden the scope of where anonymized search signals flow. AI products competing with Google in that market could use the data to improve their retrieval and ranking systems, which could, in turn, affect which content they cite.

Outside the EU, the direct regulatory effect is zero. The category definition is a different matter. How the Commission draws the line between “AI chatbot” and “AI chatbot that qualifies as a search engine” is likely to be cited in future proceedings.

The eligibility question is the story to watch through May 1. If the Commission narrows the AI chatbot criteria in response to consultation feedback, the implications stay regulatory. If it holds the line, that would set a material precedent for how AI search is classified.

Read our full coverage: Google May Have To Share Search Data With Rivals

Google Adds New Task-Based Search Features

Google introduced new Search features that continue its evolution toward task completion. Users can now track individual hotel price drops via a new toggle in Search, and Google is adding the ability to launch AI agents directly from AI Mode.

Key facts: Hotel price tracking is available globally through a toggle in the search bar. When prices drop for a tracked hotel, Google sends an email alert. The AI agent launched from AI Mode allows users to initiate tasks handled by AI within the search interface. Rose Yao, a Google Search product leader, posted about the features on X.

Why This Matters

Each task-based feature moves a process that previously started on another site into Google’s own surface. Hotel price tracking has existed at the city level for months. Expansion to individual hotels adds a new signal that users can set inside Google rather than on hotel or aggregator sites.

Direct-booking visibility depends on being inside Google’s ecosystem. Sites relying on price-drop alerts as a return-trigger for users may see some of that engagement reallocated to Google’s tracking UI. For hotel brands, this raises the stakes for ensuring individual hotel pages are fully populated in Google Business Profile and hotel feeds.

On LinkedIn, Daniel Foley Carter connected the feature to a broader pattern:

“Google’s AI overviews, AI mode and now in-frame functionality for SERP + SITE is just Google eating more and more into traffic opportunities. Everything Google told US not to do its doing itself. SPAM / LOW VALUE CONTENT – don’t resummarise other peoples content – Google does it.”

The AI agent launch is more speculative. Google has not published detailed documentation explaining what kinds of tasks users can delegate or how sources get cited. The feature confirms that agentic search, described by Sundar Pichai as “search as an agent manager,” is appearing incrementally in Search rather than as a single launch.

Read Roger Montti’s full coverage: Google Adds New Task-Based Search Features

Theme Of The Week: The Rules Are Getting Written

Each story this week spells out something that was previously implicit or underway.

Google signaled plans to expand what its robots.txt documentation covers. The company listed specific practices that can increase the likelihood of “Read more” deep links appearing. The European Commission proposed measures that extend search-engine data-sharing eligibility to AI chatbots under the DMA. And task-based features that Sundar Pichai described in interviews are rolling out as toggles in the search bar.

For your day-to-day, the ground gets firmer. Fewer questions are judgment calls. What does and doesn’t qualify, what Google supports, and what counts as a search engine to a regulator are all getting written down. That works to your advantage when it means clearer audit criteria, and against you when “we weren’t sure” is no longer a defensible answer.



Featured Image: [Photographer]/Shutterstock

Google Won’t Act On Spam Reports If They Contain Personal Information via @sejournal, @martinibuster

Google updated their spam reporting documentation to make it clearer that spam reports are not wholly confidential and that it’s possible for personally identifiable information to be shared with the sites receiving a manual action.

Change In Response To Feedback

Google’s changelog noted that they were updating the spam reporting form based on feedback they’d received about personal information contained in the spam report that is shared with spammy sites that receive a manual action (formerly known as a penalty).

The update contains a new notice that spam reports containing personal information will not be processed.

The changelog noted:

“Clarifying when and why we may take manual action based on spam reports
What: Further clarified when and why we may take manual action based on spam reports.
Why: To address feedback we received about the change on using spam reports to take manual action.”

Google removed the following from their documentation:

“If we issue a manual action, we send whatever you write in the submission report verbatim to the site owner to help them understand the context of the manual action. We don’t include any other identifying information when we notify the site owner; as long as you avoid including personal information in the open text field, the report remains anonymous.”

The above wording was replaced with the following:

“Don’t include any personally identifying information in your submission. To comply with regulations, we must send the submission text to the site owner to help them understand the context of a manual action, if one is issued.

Because of this, we won’t process your submission if we determine it contains personally identifying information to protect privacy. Not including such information fully ensures your information is safe and prevents your submission from being discarded.”

Action Moving Forward

It’s good that Google won’t proceed with a manual action if the report contains personal information, since that text would otherwise be shared with the spammer. The practical takeaway: if you’re submitting spam reports to Google, don’t name your site, business name, personal name, or anything else that you don’t want the affected spammer to know.

Read the updated documentation here:

Report spam, phishing, or malware

Learn more about Google’s spam reporting tool: Google Just Made It Easy For SEOs To Kick Out Spammy Sites

Featured Image by Shutterstock/andre_dechapelle

Google May Expand Unsupported Robots.txt Rules List via @sejournal, @MattGSouthern

Google may expand the list of unsupported robots.txt rules in its documentation based on analysis of real-world robots.txt data collected through HTTP Archive.

Gary Illyes and Martin Splitt described the project on the latest episode of Search Off the Record. The work started after a community member submitted a pull request to Google’s robots.txt repository proposing two new tags be added to the unsupported list.

Illyes explained why the team broadened the scope beyond the two tags in the PR:

“We tried to not do things arbitrarily, but rather collect data.”

Rather than add only the two tags proposed, the team decided to look at the top 10 or 15 most-used unsupported rules. Illyes said the goal was “a decent starting point, a decent baseline” for documenting the most common unsupported tags in the wild.

How The Research Worked

The team used HTTP Archive to study what rules websites use in their robots.txt files. HTTP Archive runs monthly crawls across millions of URLs using WebPageTest and stores the results in Google BigQuery.

The first attempt hit a wall. The team “quickly figured out that no one is actually requesting robots.txt files” during the default crawl, meaning the HTTP Archive datasets don’t typically include robots.txt content.

After consulting with Barry Pollard and the HTTP Archive community, the team wrote a custom JavaScript parser that extracts robots.txt rules line by line. The custom metric was merged before the February crawl, and the resulting data is now available in the custom_metrics dataset in BigQuery.
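The podcast doesn’t reproduce the metric’s code, but the line-by-line extraction it describes is simple to sketch. Here’s a minimal re-sketch in Python (the actual HTTP Archive metric is written in JavaScript), matching any field-colon-value line the way the episode describes:

```python
import re
from collections import Counter

# A robots.txt rule is one "field: value" line; field names are case-insensitive.
RULE_PATTERN = re.compile(r"^\s*([A-Za-z][A-Za-z0-9_-]*)\s*:\s*(.*)$")

def count_fields(robots_txt: str) -> Counter:
    """Count field usage line by line. Broken files that return HTML
    instead of plain text simply produce few or no matches."""
    fields = Counter()
    for line in robots_txt.splitlines():
        line = line.split("#", 1)[0]  # drop trailing comments
        match = RULE_PATTERN.match(line)
        if match:
            fields[match.group(1).lower()] += 1
    return fields

sample = "User-agent: *\nDisallow: /private/\nCrawl-delay: 10\nNoindex: /tmp/"
print(count_fields(sample))
# Counter({'user-agent': 1, 'disallow': 1, 'crawl-delay': 1, 'noindex': 1})
```

Aggregating those counters across millions of files yields the distribution Illyes described, with user-agent, allow, and disallow at the top and a long tail after them.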

What The Data Shows

The parser extracted every line that matched a field-colon-value pattern. Illyes described the resulting distribution:

“After allow and disallow and user agent, the drop is extremely drastic.”

Beyond those three fields, rule usage falls into a long tail of less common directives, plus junk data from broken files that return HTML instead of plain text.

Google currently supports four fields in robots.txt. Those fields are user-agent, allow, disallow, and sitemap. The documentation says other fields “aren’t supported” without listing which unsupported fields are most common in the wild.

Google has clarified that unsupported fields are ignored. The current project extends that work by identifying specific rules Google plans to document.

The top 10 to 15 most-used rules beyond the four supported fields are expected to be added to Google’s unsupported rules list. Illyes did not name specific rules that would be included.

Typo Tolerance May Expand

Illyes said the analysis also surfaced common misspellings of the disallow rule:

“I’m probably going to expand the typos that we accept.”

His phrasing implies the parser already accepts some misspellings. Illyes didn’t commit to a timeline or name specific typos.

Why This Matters

Search Console already surfaces some unrecognized robots.txt tags. If Google documents more unsupported directives, that could make its public documentation more closely reflect the unrecognized tags people already see surfaced in Search Console.

Looking Ahead

The planned update would affect Google’s public documentation and how disallow typos are handled. Anyone maintaining a robots.txt file with rules beyond user-agent, allow, disallow, and sitemap should audit for directives that have never worked for Google.

The HTTP Archive data is publicly queryable on BigQuery for anyone who wants to examine the distribution directly.
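For those who want to try it, the query would look something like the sketch below. The table name and JSON path are assumptions for illustration (HTTP Archive’s schema changes over time, so verify against its current documentation before running, and note that BigQuery bills by bytes scanned):

```python
# Hypothetical sketch of pulling the robots.txt field distribution from
# HTTP Archive. `httparchive.all.pages` and the '$.robots_txt.rules' JSON
# path are assumptions, not confirmed names from the podcast.
from google.cloud import bigquery

client = bigquery.Client()

QUERY = """
SELECT
  LOWER(JSON_VALUE(rule, '$.name')) AS field,
  COUNT(*) AS occurrences
FROM `httparchive.all.pages`,
  UNNEST(JSON_QUERY_ARRAY(custom_metrics, '$.robots_txt.rules')) AS rule
WHERE date = '2026-02-01'  -- the February crawl mentioned in the episode
  AND client = 'desktop'
GROUP BY field
ORDER BY occurrences DESC
LIMIT 20
"""

for row in client.query(QUERY).result():
    print(f"{row.field}: {row.occurrences}")
```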


Featured Image: Screenshot from YouTube.com/GoogleSearchCentral, April 2026.

Google Adds View-Through Conversion Optimization To Demand Gen via @sejournal, @MattGSouthern

Google announced two updates to Demand Gen ahead of Google Marketing Live.

View-through conversion (VTC) optimization is now available for Demand Gen campaigns in Google Ads. This setting lets campaigns optimize toward view-through conversions on YouTube.

Google is also expanding Commerce Media Suite to support Demand Gen inventory in Google Ads. This adds Google Ads to existing Commerce Media Suite support in Display & Video 360 and Search Ads 360.

What’s New

VTC Optimization

When enabled, VTC optimization lets Demand Gen campaigns optimize toward view-through conversions on YouTube. A view-through conversion happens when a user sees an ad, doesn’t click, but later converts.

Commerce Media Suite

With the Google Ads expansion, advertisers can use retailers’ first-party catalog and conversion data to reach shoppers. Inventory covers YouTube, Discover, and Gmail.

The Performance Claim

In the announcement, Google cited Fospha’s Demand Gen and YouTube Playbook, a third-party vendor report. Fospha attributes an 18% higher share of new-customer conversions to Demand Gen versus the paid media average. Coverage spans 127 retail brands across fashion, cosmetics, and consumer goods from 2024 to 2025.

Fospha is a marketing attribution vendor with a commercial interest in measurement across advertising platforms. Google didn’t publish its own performance data alongside the announcement.

Why This Matters

VTC optimization brings Demand Gen closer to the capabilities advertisers already use on other ad platforms. For teams running Demand Gen alongside video campaigns on those platforms, the optimization setup no longer has to differ by channel.

The Commerce Media Suite expansion gives Google Ads advertisers access to retailer first-party catalog and conversion data.

Since last year, Google has added Demand Gen optimization levers, including in-store sales optimization and shoppable CTV. VTC optimization and Commerce Media Suite support continue that pattern.

Looking Ahead

This announcement lands ahead of Google Marketing Live, where Google says more Demand Gen solutions will follow.

OpenAI’s Crawler Docs Now List OAI-AdsBot For ChatGPT Ads via @sejournal, @MattGSouthern

OpenAI’s public crawler documentation now lists OAI-AdsBot, a bot that may visit pages submitted as ChatGPT ads to check policy compliance and help determine ad relevance.

The entry sits alongside OAI-SearchBot, GPTBot, and ChatGPT-User on OpenAI’s crawler docs page, bringing the documented bot count to four.

OpenAI states that OAI-AdsBot only visits pages submitted as ads and that the data it collects isn’t used to train its generative AI foundation models.

What The Bot Does

Per OpenAI’s docs, OAI-AdsBot may visit an ad’s landing page after the ad gets submitted. The bot checks whether the page complies with OpenAI’s ad policies. It may also use content from the landing page to help decide when to show the ad to ChatGPT users.

The bot identifies itself with the user-agent string Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko); compatible; OAI-AdsBot/1.0; +https://openai.com/adsbot.

OAI-SearchBot and GPTBot are both at version 1.3, per OpenAI’s docs. OAI-AdsBot only visits pages submitted as ad landing pages, not the wider web.

What The Bot Doesn’t Do

Data collected by OAI-AdsBot isn’t used to train generative AI foundation models. That keeps OAI-AdsBot out of GPTBot’s territory, which handles training data collection.

It also keeps OAI-AdsBot separate from OpenAI’s other bots. OAI-SearchBot surfaces content in ChatGPT search, while ChatGPT-User fetches pages during user-initiated browsing, and OAI-AdsBot is limited to ad validation.

OAI-SearchBot and GPTBot can be controlled independently through robots.txt. ChatGPT-User is user-initiated, and the company notes that robots.txt rules may not apply to it. The OAI-AdsBot entry doesn’t say how the bot treats robots.txt.

No Public IP List Yet

OpenAI publishes IP range files for its three earlier bots at openai.com/searchbot.json, openai.com/gptbot.json, and openai.com/chatgpt-user.json. At the time of publication, no equivalent openai.com/adsbot.json file appears in OpenAI’s docs.

Without a published list, verifying a real OAI-AdsBot visit becomes harder. User-agent strings can be spoofed, and the IP lists give you a way to cross-check for the other three OpenAI bots. For OAI-AdsBot, that cross-check isn’t available.
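For the three bots that do publish ranges, the cross-check takes a few lines. A minimal sketch, assuming the JSON files follow the prefixes/ipv4Prefix structure used in gptbot.json:

```python
import ipaddress
import json
import urllib.request

# Published IP range files for OpenAI's three earlier bots. No equivalent
# adsbot.json exists at the time of writing, so OAI-AdsBot can't be
# verified this way yet.
RANGE_FILES = {
    "OAI-SearchBot": "https://openai.com/searchbot.json",
    "GPTBot": "https://openai.com/gptbot.json",
    "ChatGPT-User": "https://openai.com/chatgpt-user.json",
}

def is_verified_openai_bot(ip: str, bot: str) -> bool:
    """Check whether `ip` falls inside the published ranges for `bot`."""
    with urllib.request.urlopen(RANGE_FILES[bot]) as resp:
        data = json.load(resp)
    addr = ipaddress.ip_address(ip)
    return any(
        addr in ipaddress.ip_network(prefix["ipv4Prefix"])
        for prefix in data.get("prefixes", [])
        if "ipv4Prefix" in prefix
    )

# Example: a log line claims to be GPTBot; verify the source IP.
print(is_verified_openai_bot("203.0.113.7", "GPTBot"))  # False for this test IP
```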

Why This Matters

OAI-AdsBot has two audiences. Advertisers buying placements on ChatGPT need the bot to reach their landing pages; otherwise, the ad may not validate. Anyone tracking AI bot activity in server logs gets a new user-agent to watch, one tied to paid inventory rather than search or training.

Aggressive bot protection through Cloudflare, Akamai, or similar tools may block OAI-AdsBot before it reaches the page. That could create validation friction for advertisers who use strict bot-mitigation tools.

Looking Ahead

ChatGPT’s ad program has moved fast since OpenAI started testing ads on Feb. 9. As access opens up to more advertisers, OAI-AdsBot traffic will start showing up in more server logs. Watch for an eventual IP range file at openai.com/adsbot.json if OpenAI chooses to publish one. For now, the user-agent string is what you have to work with.


Featured Image: Blossom Stock Studio/Shutterstock

The Facts About Google Click Signals, Rankings, And SEO via @sejournal, @martinibuster

Clicks as a ranking-related signal have been a subject of debate for over twenty years, although nowadays most SEOs understand that clicks are not a direct ranking factor. The simple truth about clicks is that they are raw data that, perhaps surprisingly, is processed in a way similar to human rater scores.

Clicks Are A Raw Signal

The DOJ Antitrust memorandum opinion from September 2025 mentions clicks as a “raw signal” that Google uses. It also categorizes content and search queries as raw signals. This is important because a raw signal is the lowest-level data point, one that is processed into higher-level ranking signals or used for training a model like RankEmbed and its successor, RankEmbedBERT.

Those are considered raw signals because they are:

  • Directly observed
  • But not yet interpreted or used as training data

The DOJ document quotes professor James Allan, who gave expert testimony on behalf of Google:

“Signals range in complexity. There are “raw” signals, like the number of clicks, the content of a web page, and the terms within a query.

…These signals can be created with simple methods, such as counting occurrences (e.g., how many times a web page was clicked in response to a particular query). Id. at 2859:3–2860:21 (Allan) (discussing Navboost signal)”

He then contrasts the raw signals with how they are processed:

“At the other end of the spectrum are innovative deep-learning models, which are machine-learning models that discern complex patterns in large datasets.

Deep models find and exploit patterns in vast data sets. They add unique capabilities at high cost.”

Professor Allan explains that “top-level signals” are used to produce the “final” scores for a web page, including popularity and quality.

Raw Signals Are Data To Be Further Processed

Navboost is mentioned several times in the September 2025 antitrust document as popularity data. It’s not mentioned in the context of clicks having a ranking effect on individual sites.

It’s referred to as a way to measure popularity and intent:

“…popularity as measured by user intent and feedback systems including Navboost/Glue…”

And elsewhere, in the context of explaining why some of the Navboost data is privileged:

“They are ‘popularity as measured by user intent and feedback systems including Navboost/Glue’…”

And in describing the proposed data-sharing remedy:

“Under the proposed remedy, Google must make available to Qualified Competitors …the following datasets:

1. User-side Data used to build, create, or operate the GLUE statistical model(s);

2. User-side Data used to train, build, or operate the RankEmbed model(s); and

3. The User-side Data used as training data for GenAI Models used in Search or any GenAI Product that can be used to access Search.

Google uses the first two datasets to build search signals and the third to train and refine the models underlying AI Overviews and (arguably) the Gemini app.”

Clicks, like human rater scores, are just a raw signal used further up the algorithm chain, either to train AI models to better match web pages to queries or to generate a quality or relevance signal that is then added to the rest of the ranking signals by a ranking engine or a rank modifier engine.

70 Days Of Search Logs

The DOJ document makes reference to using 70 days of search logs. But that’s just eleven words in a larger context.

Here is the part that is frequently quoted:

“70 days of search logs plus scores generated by human raters”

I get it, it’s simple and direct. But there is more context to it:

“RankEmbed and its later iteration RankEmbedBERT are ranking models that rely on two main sources of data: [Redacted]% of 70 days of search logs plus scores generated by human raters and used by Google to measure the quality of organic search results.”

The 70 days of search logs are not click data used directly for ranking purposes in Google Search, AI Mode, or Gemini. They are aggregate data, further processed to train specialized AI models like RankEmbedBERT, which in turn rank web pages based on natural language analysis.

That part of the DOJ document does not claim that Google is directly using click data for ranking search results. It’s data, like the human rater data, that’s used by other systems for training data or to be further processed.

What Is Google’s RankEmbed?

RankEmbed is a natural language approach to identifying relevant documents and ranking them.

The same DOJ document explains:

“The RankEmbed model itself is an AI-based, deep-learning system that has strong natural-language understanding. This allows the model to more efficiently identify the best documents to retrieve, even if a query lacks certain terms.”

It’s trained on less data than previous models. The data partially consists of query terms and web page pairs:

“…RankEmbed is trained on 1/100th of the data used to train earlier ranking models yet provides higher quality search results.

…Among the underlying training data is information about the query, including the salient terms that Google has derived from the query, and the resultant web pages.”

That’s training data for training a model to recognize how query terms are relevant to web pages.

The same document explains:

“The data underlying RankEmbed models is a combination of click-and-query data and scoring of web pages by human raters.”

It’s crystal clear that in the context of this specific passage, it’s describing the use of click data (and human rater data) to train AI models, not to directly influence rankings.

What About Google’s Click Ranking Patent?

Way back in 2006, Google filed a patent related to clicks called Modifying search result ranking based on implicit user feedback. The invention is about the mathematical formula for creating a “measure of relevance” out of the aggregated raw data of clicks (plural).

The patent distinguishes between the creation of the signal and the act of ranking itself. The “measure of relevance” is output to a ranking engine, which can then add it to existing ranking scores to rank search results for new searches.

Here’s what the patent describes:

“A ranking Sub-system can include a rank modifier engine that uses implicit user feedback to cause re-ranking of search results in order to improve the final ranking presented to a user of an information retrieval system.

User selections of search results (click data) can be tracked and transformed into a click fraction that can be used to re-rank future search results.”

That “click fraction” is a measure of relevance. The invention described in the patent isn’t about tracking the click; it’s about the mathematical measure (the click fraction) that results from combining all those individual clicks together. That includes the Short Click, Medium Click, Long Click, and the Last Click.

Technically, it’s called the LCC (Long Clicks divided by Clicks) fraction. It’s “clicks” plural because it’s making decisions based on the sums of many clicks (aggregate), not the individual click.

That click fraction is an aggregate because:

  • Summation:
    The “first number” used for ranking is the sum of all those individual weighted clicks for a specific query-document pair.
  • Normalization:
    It takes that sum and divides it by the total count of all clicks (the “second number”).
  • Statistical Smoothing:
    The system applies “smoothing factors” to this aggregate number to ensure that a single click on a “rare” query doesn’t unfairly skew the results, which also guards against spam.

That 2006 patent describes its weighting formula like this:

“A base LCC click fraction can be defined as:

LCC_BASE = [#WC(Q,D)] / [#C(Q,D) + S0]

where #WC(Q,D) is the sum of weighted clicks for a query-URL pair, #C(Q,D) is the total number of clicks (ordinal count, not weighted) for the query-URL pair, and S0 is a smoothing factor.”

That formula describes summing and dividing the data from many users to create a single score for a document. The “query-URL” pair is a “bucket” of data that stores the click behavior of every user who ever typed that specific query and clicked that specific search result. The smoothing factor is the anti-spam part that includes not counting single clicks on rare search queries.
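A minimal sketch of that aggregation for a single query-URL pair, with hypothetical click weights and smoothing value (the patent does not publish the actual numbers):

```python
# Illustrative weights: long clicks count fully, short clicks barely.
# Both the weights and S0 below are assumptions, not values from the patent.
CLICK_WEIGHTS = {"short": 0.1, "medium": 0.5, "long": 1.0}
S0 = 5.0  # smoothing factor

def base_click_fraction(clicks: list[str]) -> float:
    """clicks: one duration label per recorded click on this query-URL pair."""
    weighted_sum = sum(CLICK_WEIGHTS[c] for c in clicks)  # #WC(Q,D)
    total = len(clicks)                                   # #C(Q,D)
    return weighted_sum / (total + S0)

# Ten long clicks score far higher than ten short ones, and the smoothing
# factor keeps a single click on a rare query from producing a strong score.
print(base_click_fraction(["short"] * 10))  # ~0.07
print(base_click_fraction(["long"] * 10))   # ~0.67
print(base_click_fraction(["long"]))        # ~0.17
```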

Even way back in 2006, clicks were just raw data, transformed across multiple stages of aggregation into a statistical measure of relevance before ever reaching the ranking stage. In this patent, the clicks themselves are not ranking factors that directly influence whether a site ranks. They are used in aggregate as a measure of relevance, which in turn is fed into another engine for ranking.

By the time the information reaches the ranking engine, the raw data has been transformed from individual user actions into an aggregate measure of relevance.

  • Thinking about clicks in relation to ranking is not as simple as “clicks drive search rankings.”
  • Clicks are just raw data.
  • Clicks are used to train AI systems like RankEmbedBERT.
  • Clicks do not directly influence search results. They have always been raw data: the starting point for systems that use the data in aggregate to create a signal that is then mixed into Google’s ranking decision-making systems.
  • So yes, like human rater data, raw click data is processed to create a signal or to train AI systems.

Read the DOJ memorandum in PDF form here.

Read about four research papers about CTR.

Read the 2006 Google patent, Modifying search result ranking based on implicit user feedback.

Featured Image by Shutterstock/Carkhe