Are You Still Optimizing for Rankings? AI Search May Not Care. [Webinar] via @sejournal, @hethr_campbell

No ranking data. No impression data. 

So, how do you measure success when AI-generated answers appear and disappear, prompt by prompt?

With search optimization changing this significantly, many brands are struggling to understand what SEO success now looks like and how to achieve it.

Some Brands Are Winning in Search. Others? Invisible.

If your content isn’t appearing in AI-generated responses, like AI Overviews, ChatGPT, or Perplexity, you’re already losing ground to competitors.

👉 RSVP: Learn from the brands still dominating SERPs through AI search

In This Free Webinar, You’ll Learn:

  • Data-backed insights on what drives visibility and performance in AI search
  • A proven framework to drive results in AI search, and why this approach works
  • Purpose-built content strategies for driving success in Answer Engine Optimization (AEO) and Generative Engine Optimization (GEO)

This webinar helps enterprise SEOs and executives move from “I don’t know what’s happening in AI search” to “I have a data-driven strategy to compete and win.”

This session is designed for:

  • Marketing managers and SEO strategists looking to stay ahead.
  • Brand leaders managing performance visibility across platforms.
  • Content teams building for modern search behaviors.

You’ll walk away with a usable playbook and a better understanding of how to optimize for the answer, not the query.

Learn from what today’s winning brands are doing right.

Secure your spot, plus get the recording sent straight to your inbox if you can’t make it live.

Explaining The Great Decoupling To C-Level via @sejournal, @TaylorDanRW

Something important is happening in Google Search.

If you’ve looked at your website data in Google Search Console, you may have noticed something odd. Your pages are showing up more often, but fewer people are clicking through.

These two signals – impressions and clicks – used to rise and fall together. Now, they’re drifting apart.

We call this “The Great Decoupling.”

Screenshot from Jim Thornton (with permission to use), The SEO Community Slack Group, June 2025

And it’s not just your business. This is happening across industries and most website types.

It became more noticeable as Google rolled out something called AI Overviews – automated summaries that answer questions directly in search results.

If your site traffic from search is falling, but your rankings look fine, this article will help explain why.

We’ll examine the changes, their causes, how they manifest in analytics tools, and the responses of leading companies.

What’s Happening And Why It Matters

The Great Decoupling describes a new disconnect. Your website can appear more often in search results but get fewer clicks.

Until now, that pattern was expected only when the SERP included featured snippets or other special content blocks from Google.

We’ve seen this clearly in client data during the first half of 2025.

Screenshot from Itamar Bauer (with permission to use), Studio Hawk, June 2025

Near the end of 2024, impressions and clicks were still closely linked. But by early 2025, impressions kept going up while clicks went down.

The click-through rate, the percentage of people who click, dropped sharply.

This trend is widespread. Whether your site is an ecommerce store, a B2B company, or a blog, the same thing is happening – more visibility but less engagement.

Google’s Martin Splitt has said that when your pages are shown in AI Overviews, you may get more impressions but fewer clicks.

He also said that people might still convert later, perhaps after seeing your brand in search results, even if they never click the first time.

So, we’re in a new “normal”: impressions alone no longer signal opportunity. It’s what happens after the impression that counts.

Why This Is Happening

Google’s move toward AI-powered results is driving this change. The most significant shift is the introduction of AI Overviews.

AI Overviews are summaries shown at the top of search results.

Instead of a list of websites, Google provides an instant answer. That answer is generated from various sources across the web, including yours, without requiring the user to click.

Your Content May Appear Twice, But It Only Gets One Chance To Earn A Click

Your site may show up as both a traditional link and as part of the AI Overview. That boosts impressions but often reduces clicks. People get what they need from the overview.

Less Friction Means Fewer Visits

The AI Overview gives users what they want quickly. However, if their need is met before they reach your site, your traffic will drop.

Some Search Terms Are Hit Harder Than Others

Generic questions, how-tos, and mid-funnel queries are more likely to trigger AI Overviews. These are often top-of-funnel keywords marketers use to drive discovery.

On the other hand, brand searches and high-intent queries are more resilient.

The point is that it’s not just about where you rank. It’s about whether Google decides to answer the question for the user without needing you.

Zero-Click Search Isn’t New

This isn’t entirely new. For years, Google has provided users with quick answers. Featured snippets, “People Also Ask” boxes, and knowledge panels all reduced the need to click.

AI Overviews are just the next step. They are more advanced, appear more often, and answer a broader range of questions. But the principle is the same: reduce the effort for the user.

We’ve adapted before. We can adapt again. However, this shift is more significant and impacts multiple stages of the customer journey, necessitating a more strategic approach.

What This Looks Like In Your Analytics

In Google Search Console, the gap between impressions and clicks is clear. In Google Analytics 4, you see the impact on your traffic and behavior metrics.

Organic Traffic Is Falling

Your GA4 report shows fewer sessions from Google, even though your rankings haven’t changed. That’s the result of fewer clicks.

Engagement May Look Better

Because fewer but more qualified visitors are reaching your site, session length and conversion rates may look stronger. But overall reach is down.

Attribution Becomes Less Clear

GA4 does not show traffic that came through AI Overviews separately.

Some visitors might return later and be counted as “direct” traffic. Others won’t be tracked at all. This makes it more challenging to attribute SEO’s role in brand discovery.

To understand what’s happening, you need to look at GSC and GA4 together. One shows the visibility. The other shows the outcomes.
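As a rough illustration of reading the two sources together, the sketch below computes monthly CTR from GSC performance rows and flags months where CTR has fallen sharply against an earlier baseline. The column names ("date", "clicks", "impressions") and the 25% drop threshold are assumptions for illustration, not a GSC export contract.

```python
# Minimal sketch: spotting impression/click decoupling in GSC-style data.
# Field names and the drop threshold are assumptions; adjust to your export.
from collections import defaultdict

def monthly_ctr(rows):
    """Aggregate clicks and impressions by month and compute CTR."""
    totals = defaultdict(lambda: [0, 0])  # month -> [clicks, impressions]
    for row in rows:
        month = row["date"][:7]  # "YYYY-MM"
        totals[month][0] += int(row["clicks"])
        totals[month][1] += int(row["impressions"])
    return {m: round(c / i, 4) for m, (c, i) in sorted(totals.items()) if i}

def flag_decoupling(ctr_by_month, drop=0.25):
    """Return months where CTR fell by `drop` (25%) or more vs. the first month."""
    months = list(ctr_by_month)
    base = ctr_by_month[months[0]]
    return [m for m in months[1:] if ctr_by_month[m] <= base * (1 - drop)]

rows = [
    {"date": "2024-11-03", "clicks": "120", "impressions": "2400"},
    {"date": "2025-02-10", "clicks": "90", "impressions": "3600"},
]
ctr = monthly_ctr(rows)
print(ctr)                   # CTR per month
print(flag_decoupling(ctr))  # months with a sharp CTR drop
```

Pairing the flagged months with GA4 session trends for the same period shows whether falling clicks are translating into falling outcomes.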

How I Think You Should Adjust & Act

The most forward-thinking businesses are making strategic shifts to protect and grow their visibility. Here are four things they’re doing:

1. Strengthen Brand

When users search for you by name, Google is less likely to intervene. These clicks are holding steady and, in some cases, growing.

Investing in brand and trust is generic advice being thrown around a lot right now. Instead, look at your brand in the context of the full user journey, and consider how much scope AI platforms have to alter that journey before your brand is even discovered.

Image from author, June 2025

This also means understanding how well-known your brand is before a user even reaches the “top of the funnel,” and whether previous positive brand touchpoints, or the sentiment and stories of other users online, make them more likely to steer toward your brand.

2. Publish Content That AI Can’t Copy

If your content is generic, Google’s AI can summarize it. If it’s unique, based on experience, data, or opinion, it’s much harder to replace.

Focus on:

  • Original research.
  • Customer stories.
  • Side-by-side product comparisons.
  • Tools and calculators.
  • Real customer feedback.

3. Build Around Topics, Not Keywords

Create clusters of related content around a theme. This signals authority to search engines and gives users more reasons to explore your site.

4. Turn Product Pages Into Useful Resources

Don’t just list specs. Add real information that helps the buyer:

  • FAQs.
  • Reviews.
  • Comparison tables.
  • Guides and videos.

You want to help buyers forecast their experience with the product or service and deepen their understanding of your brand.

Be upfront about as much information as possible, as a negative brand or product experience can be damaging in the long run.

Why SEO Still Matters

Yes, SEO remains highly relevant despite the rise of AI.

While AI tools are changing how search works and how users find answers, they haven’t replaced the need for a smart, well-executed SEO strategy.

SEO is evolving and becoming more important in new ways.

AI Needs High-Quality Content To Learn From

AI Overviews don’t invent answers. They draw from trusted online sources. That means Google still relies on high-quality, well-optimized content to build its responses.

SEO helps ensure your content meets the standards of E-E-A-T: experience, expertise, authoritativeness, and trustworthiness.

Search Engines Still Rank Pages

Even with AI features in search results, users still scroll through traditional listings and click on websites.

SEO ensures your content performs well in these results, whether it’s in the top 10 links, a featured snippet, or a “People Also Ask” box.

AI Enhances, Not Replaces, SEO

AI tools can automate certain aspects of SEO, such as keyword research and content suggestions. But they don’t replace strategic thinking.

SEO experts continue to guide site architecture, content structure, technical fixes, and intent-based optimization – tasks that AI can’t fully handle alone.

SEO isn’t going away; it’s becoming more sophisticated.

The businesses that succeed will be the ones that blend innovative tools with strategic thinking and treat SEO as a long-term investment in visibility and value.

The new wave of SEO isn’t just about driving traffic. It’s about showing up where your customers are asking questions, building credibility, and creating a footprint that supports all your other channels.

  • Visibility builds trust. Even if someone doesn’t click, seeing your name in search results reinforces brand awareness.
  • SEO feeds other channels. The insights you gain from search (what people ask, how they ask it, and what ranks) help shape your messaging everywhere else.
  • Strong content earns attention. Helpful, original content can drive engagement on-site, across social media, and in sales conversations.
  • Cost-effective acquisition. SEO remains one of the most cost-effective ways to acquire leads, especially for branded and high-intent queries.

Search may not deliver the same volume of clicks, but it still shapes perception, influence, and decision-making.

SEO remains one of the most effective ways to stay visible and valuable in an increasingly AI-driven world.

Change What You Measure

The Great Decoupling is not just an SEO story. It’s a business visibility story. More people may see your brand, but fewer will visit your site.

That means you can’t just measure success by traffic. You need to consider engagement, recall, and brand strength.

Search is becoming a reputation game. If people trust you, they’ll find you, even if they don’t click the first time.

The companies that win won’t be the ones who chase rankings; they’ll be the ones who earn attention. Attention is potentially the “new click.”

Featured Image: Master1305/Shutterstock

How generative AI could help make construction sites safer

Last winter, during the construction of an affordable housing project on Martha’s Vineyard, Massachusetts, a 32-year-old worker named Jose Luis Collaguazo Crespo slipped off a ladder on the second floor and plunged to his death in the basement. He was one of more than 1,000 construction workers who die on the job each year in the US, making it the most dangerous industry for fatal slips, trips, and falls.

“Everyone talks about [how] ‘safety is the number-one priority,’” entrepreneur and executive Philip Lorenzo said during a presentation at Construction Innovation Day 2025, a conference at the University of California, Berkeley, in April. “But then maybe internally, it’s not that high priority. People take shortcuts on job sites. And so there’s this whole tug-of-war between … safety and productivity.”

To combat the shortcuts and risk-taking, Lorenzo is working on a tool for the San Francisco–based company DroneDeploy, which sells software that creates daily digital models of work progress from videos and images, known in the trade as “reality capture.”  The tool, called Safety AI, analyzes each day’s reality capture imagery and flags conditions that violate Occupational Safety and Health Administration (OSHA) rules, with what he claims is 95% accuracy.

That means that for any safety risk the software flags, there is 95% certainty that the flag is accurate and relates to a specific OSHA regulation. Launched in October 2024, it’s now being deployed on hundreds of construction sites in the US, Lorenzo says, and versions specific to the building regulations in countries including Canada, the UK, South Korea, and Australia have also been deployed.

Safety AI is one of multiple AI construction safety tools that have emerged in recent years, from Silicon Valley to Hong Kong to Jerusalem. Many of these rely on teams of human “clickers,” often in low-wage countries, to manually draw bounding boxes around images of key objects like ladders, in order to label large volumes of data to train an algorithm.

Lorenzo says Safety AI is the first one to use generative AI to flag safety violations, which means an algorithm that can do more than recognize objects such as ladders or hard hats. The software can “reason” about what is going on in an image of a site and draw a conclusion about whether there is an OSHA violation. This is a more advanced form of analysis than the object detection that is the current industry standard, Lorenzo claims. But as the 95% success rate suggests, Safety AI is not a flawless and all-knowing intelligence. It requires an experienced safety inspector as an overseer.  

A visual language model in the real world

Robots and AI tend to thrive in controlled, largely static environments, like factory floors or shipping terminals. But construction sites are, by definition, changing a little bit every day. 

Lorenzo thinks he’s built a better way to monitor sites, using a type of generative AI called a visual language model, or VLM. A VLM is an LLM with a vision encoder, allowing it to “see” images of the world and analyze what is going on in the scene. 

Using years of reality capture imagery gathered from customers, with their explicit permission, Lorenzo’s team has assembled what he calls a “golden data set” encompassing tens of thousands of images of OSHA violations. Having carefully stockpiled this specific data for years, he is not worried that even a billion-dollar tech giant will be able to “copy and crush” him.

To help train the model, Lorenzo has a smaller team of construction safety pros ask strategic questions of the AI. The trainers input test scenes from the golden data set to the VLM and ask questions that guide the model through the process of breaking down the scene and analyzing it step by step the way an experienced human would. If the VLM doesn’t generate the correct response—for example, it misses a violation or registers a false positive—the human trainers go back and tweak the prompts or inputs. Lorenzo says that rather than simply learning to recognize objects, the VLM is taught “how to think in a certain way,” which means it can draw subtle conclusions about what is happening in an image. 

Examples from nine categories of safety risks at construction sites that Safety AI can detect. COURTESY DRONEDEPLOY

As an example, Lorenzo says VLMs are much better than older methods at analyzing ladder usage, which is responsible for 24% of the fall deaths in the construction industry. 

“With traditional machine learning, it’s very difficult to answer the question of ‘Is a person using a ladder unsafely?’” says Lorenzo. “You can find the ladders. You can find the people. But to logically reason and say ‘Well, that person is fine’ or ‘Oh no, that person’s standing on the top step’—only the VLM can logically reason and then be like, ‘All right, it’s unsafe. And here’s the OSHA reference that says you can’t be on the top rung.’”

Answers to multiple questions (Does the person on the ladder have three points of contact? Are they using the ladder as stilts to move around?) are combined to determine whether the ladder in the picture is being used safely. “Our system has over a dozen layers of questioning just to get to that answer,” Lorenzo says. DroneDeploy has not publicly released its data for review, but he says he hopes to have his methodology independently audited by safety experts.  
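The idea of layering several narrow judgments into one verdict can be sketched as follows. Everything here is hypothetical: the questions, the answer values, and the references are invented for illustration, and in a real system each answer would come from a visual language model prompted on site imagery, not a hand-written dict.

```python
# Illustrative sketch of combining per-question answers into a safety
# verdict. Questions and references are invented examples, not
# DroneDeploy's actual prompt set.
LADDER_CHECKS = [
    # (question, answer that indicates a violation, illustrative reference)
    ("Is a person standing on the top rung?", True, "1926.1053(b)(13)"),
    ("Does the person maintain three points of contact?", False, "ladder-use guidance"),
    ("Is the ladder being used as stilts to move around?", True, "improper use"),
]

def assess_ladder(answers):
    """Combine per-question answers (question -> bool) into a verdict."""
    violations = [
        (question, ref)
        for question, bad_answer, ref in LADDER_CHECKS
        if answers.get(question) == bad_answer
    ]
    return {"safe": not violations, "violations": violations}

# Example: the model reports that the worker is on the top rung.
answers = {
    "Is a person standing on the top rung?": True,
    "Does the person maintain three points of contact?": True,
    "Is the ladder being used as stilts to move around?": False,
}
print(assess_ladder(answers))
```

The value of the layered approach is that each question stays simple enough for the model to answer reliably, while the combination captures a judgment no single object detector could make.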

The missing 5%

Using vision language models for construction AI shows promise, but there are “some pretty fundamental issues” to resolve, including hallucinations and the problem of edge cases, those anomalous hazards for which the VLM hasn’t trained, says Chen Feng. He leads New York University’s AI4CE lab, which develops technologies for 3D mapping and scene understanding in construction robotics and other areas. “Ninety-five percent is encouraging—but how do we fix that remaining 5%?” he asks of Safety AI’s success rate.

Feng points to a 2024 paper called “Eyes Wide Shut?”—written by Shengbang Tong, a PhD student at NYU, and coauthored by AI luminary Yann LeCun—that noted “systematic shortcomings” in VLMs.  “For object detection, they can reach human-level performance pretty well,” Feng says. “However, for more complicated things—these capabilities are still to be improved.” He notes that VLMs have struggled to interpret 3D scene structure from 2D images, don’t have good situational awareness in reasoning about spatial relationships, and often lack “common sense” about visual scenes.

Lorenzo concedes that there are “some major flaws” with LLMs and that they struggle with spatial reasoning. So Safety AI also employs some older machine-learning methods to help create spatial models of construction sites. These methods include the segmentation of images into crucial components and photogrammetry, an established technique for creating a 3D digital model from a 2D image. Safety AI has also trained heavily in 10 different problem areas, including ladder usage, to anticipate the most common violations.

Even so, Lorenzo admits there are edge cases that the LLM will fail to recognize. But he notes that for overworked safety managers, who are often responsible for as many as 15 sites at once, having an extra set of digital “eyes” is still an improvement.

Aaron Tan, a concrete project manager based in the San Francisco Bay Area, says that a tool like Safety AI could be helpful for these overextended safety managers, who will save a lot of time if they can get an emailed alert rather than having to make a two-hour drive to visit a site in person. And if the software can demonstrate that it is helping keep people safe, he thinks workers will eventually embrace it.  

However, Tan notes that workers also fear that these types of tools will be “bossware” used to get them in trouble. “At my last company, we implemented cameras [as] a security system. And the guys didn’t like that,” he says. “They were like, ‘Oh, Big Brother. You guys are always watching me—I have no privacy.’”

Older doesn’t mean obsolete

Izhak Paz, CEO of a Jerusalem-based company called Safeguard AI, has considered incorporating VLMs, but he has stuck with the older machine-learning paradigm because he considers it more reliable. The “old computer vision” based on machine learning “is still better, because it’s hybrid between the machine itself and human intervention on dealing with deviation,” he says. To train the algorithm on a new category of danger, his team aggregates a large volume of labeled footage related to the specific hazard and then optimizes the algorithm by trimming false positives and false negatives. The process can take anywhere from weeks to over six months, Paz says.

With training completed, Safeguard AI performs a risk assessment to identify potential hazards on the site. It can “see” the site in real time by accessing footage from any nearby internet-connected camera. Then it uses an AI agent to push instructions on what to do next to the site managers’ mobile devices. Paz declines to give a precise price tag, but he says his product is affordable only for builders at the “mid-market” level and above, specifically those managing multiple sites. The tool is in use at roughly 3,500 sites in Israel, the United States, and Brazil.

Buildots, a company based in Tel Aviv that MIT Technology Review profiled back in 2020, doesn’t do safety analysis but instead creates once- or twice-weekly visual progress reports of sites. Buildots also uses the older method of machine learning with labeled training data. “Our system needs to be 99%—we cannot have any hallucinations,” says CEO Roy Danon. 

He says that gaining labeled training data is actually much easier than it was when he and his cofounders began the project in 2018, since gathering video footage of sites means that each object, such as a socket, might be captured and then labeled in many different frames. But the tool is high-end—about 50 builders, most with revenue over $250 million, are using Buildots in Europe, the Middle East, Africa, Canada, and the US. It’s been used on over 300 projects so far.

Ryan Calo, a specialist in robotics and AI law at the University of Washington, likes the idea of AI for construction safety. Since experienced safety managers are already spread thin in construction, however, Calo worries that builders will be tempted to automate humans out of the safety process entirely. “I think AI and drones for spotting safety problems that would otherwise kill workers is super smart,” he says. “So long as it’s verified by a person.”

Andrew Rosenblum is a freelance tech journalist based in Oakland, CA.

The Download: how AI could improve construction site safety, and our Roundtables conversation with Karen Hao

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How generative AI could help make construction sites safer

More than 1,000 construction workers die on the job each year in the US, making it the most dangerous industry for fatal slips, trips, and falls.

A new AI tool called Safety AI could help to change that. It analyzes the progress made on a construction site each day, and flags conditions that violate Occupational Safety and Health Administration rules, with what its creator Philip Lorenzo claims is 95% accuracy.


Lorenzo says Safety AI is the first one of multiple emerging AI construction safety tools to use generative AI to flag safety violations. But as the 95% success rate suggests, Safety AI is not a flawless and all-knowing intelligence. Read the full story.

—Andrew Rosenblum

Roundtables: Inside OpenAI’s Empire with Karen Hao

Earlier this week, we held a subscriber-only Roundtable discussion with author and former MIT Technology Review senior editor Karen Hao about her new book Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI.

You can watch her conversation with our executive editor Niall Firth here—and if you aren’t already, you can subscribe to us here.

MIT Technology Review Narrated: The tech industry can’t agree on what open-source AI means. That’s a problem.

What counts as ‘open-source AI’? The answer could determine who gets to shape the future of the technology.

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 China’s digital IDs are coming
And they’re unlikely to stay voluntary for long. (Economist $)
+ The country’s AI models are becoming increasingly popular worldwide. (WSJ $)

2 Donald Trump has mused about using DOGE to deport Elon Musk
Musk’s comments about the President’s ‘Big Beautiful Bill’ have touched a nerve. (Axios)
+ Turns out AI models are quite good at fact checking Trump. (WP $)
+ DOGE’s tech takeover threatens the safety and stability of our critical data. (MIT Technology Review)

3 Google must pay California’s Android users $314.6m
After a jury ruled it had misused their data. (Reuters)

4 Many AI detectors overpromise and underdeliver
But that hasn’t stopped Californian colleges from investing millions in them. (Undark)
+ What’s next for college writing? Nothing good. (New Yorker $)
+ Educators are working out how to integrate AI into computer science. (NYT $)
+ AI-text detection tools are really easy to fool. (MIT Technology Review)

5 Google is making its first foray into fusion
The world’s first grid-scale fusion power plant is due to come online in the 2030s. (NBC News)
+ Google will buy half its output. (TechCrunch)
+ Inside a fusion energy facility. (MIT Technology Review)

6 China is banning certain portable batteries from flights
In the wake of two major manufacturers recalling millions of power banks. (NYT $)
+ The ban is catching travellers out. (SCMP)

7 The deepfake economy is spiralling out of control
Small business owners are drowning in online scams. (Insider $)

8 Chipmaking companies are attractive prospects for investors
And they’re likely to be better bets. (WSJ $)
+ OpenAI has denied that it plans to use Google’s in-house chip. (Reuters)

9 How cancer studies in dogs could help develop treatments for humans
The disease presents very similarly across both species. (Knowable Magazine)
+ Cancer vaccines are having a renaissance. (MIT Technology Review)

10 X is planning to task AI agents with writing Community Notes
Thankfully, humans will still review them. (Bloomberg $)
+ Why does AI hallucinate? (MIT Technology Review)

Quote of the day

“Missionaries will beat mercenaries.”

—OpenAI CEO Sam Altman takes aim at Meta’s recent spree of attempting to hire his staff, Wired reports.

One more thing

The world’s next big environmental problem could come from space

In September, a unique chase took place in the skies above Easter Island. From a rented jet, a team of researchers captured a satellite’s last moments as it fell out of space and blazed into ash across the sky, using cameras and scientific equipment. Their hope was to gather priceless insights into the physical and chemical processes that occur when satellites burn up as they fall to Earth at the end of their missions.

This kind of study is growing more urgent. The number of satellites in the sky is rapidly rising—with a tenfold increase forecast by the end of the decade. Letting these satellites burn up in the atmosphere at the end of their lives helps keep the quantity of space junk to a minimum. But doing so deposits satellite ash in the Earth’s atmosphere. This metallic ash could potentially alter the climate, and we don’t yet know how serious the problem is likely to be. Read the full story.

—Tereza Pultarova

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The new Running Man film looks pretty good, even if it is without Arnold.
+ Maybe it’s just not worth trying to understand our dogs after all.
+ Cynthia Erivo, who knows a thing or two about belting out a tune, really loves The Thong Song, and who can blame her?
+ Show your face, colossal squid!

BNPL Loans to Impact Credit Scores

The Fair Isaac Corporation, better known as FICO, is launching new credit scores that incorporate buy-now-pay-later loans, potentially influencing the behavior of consumers, providers, and merchants.

The shift could impact ecommerce conversion rates, average order values, and repeat purchases if consumers reconsider how they use BNPL services or become ineligible.

Some BNPL providers already report repayment data, but the new FICO score models represent the first standardized effort to incorporate BNPL loans into mainstream credit scoring.

For ecommerce merchants, the change could highlight a need to monitor how shoppers pay and may introduce uncertainty at checkout.

FICO’s new BNPL credit scoring could impact merchant revenue.

Why It Matters

FICO’s decision to include BNPL data addresses lender demand for better visibility into repayment behavior and the widespread use of BNPL loans.

Specifically, a joint FICO and Affirm study “confirmed that a unique consumer behavior associated with BNPL loans is the potential for a large number of these loans to be opened within a short period.”

For FICO’s primary customers (financial institutions), consumers who take out multiple BNPL loans are a higher risk.

Critics argue that traditional scoring models, such as FICO’s, do not reflect the realities of modern consumer finance and fail to account for newer forms of financial behavior.

As a result, according to critics, traditional credit scoring models may penalize actions that aren’t inherently risky.

Negative Impact

One merchant concern is that BNPL plans will feel less like casual payment tools and more like formal loans. That perception, in turn, could lead to a measurable shift in consumer behavior.

For example, shoppers who used BNPL as a risk-free way to split payments may hesitate when those loans become visible to lenders. For some, the mere possibility of a credit impact could cause them to abandon the cart.

This concern is not unfounded. Imagine a conscientious shopper who pays for a credit monitoring service. The shopper has been using BNPL for convenience, but now, after buying a new couch online via Affirm, Afterpay, or Klarna, the change in debt load triggers a five-point decline in their FICO score.

A second merchant concern is related to the behavior cited by FICO: shoppers taking several BNPL loans in a short period. The new reporting could impact revenue. Klarna may not approve a BNPL loan for a new appliance the same day a shopper used Affirm to buy a new end table. The appliance merchant gets one less sale.

Positive Impact

The use of credit scores is widespread, and monitoring BNPL behavior could have positive impacts, too.

For example, BNPL loans can now help establish or improve credit profiles for consumers with thin or no credit history.

The aforementioned FICO and Affirm study suggested that shoppers with five or more BNPL loans would typically see their scores remain stable or increase under the new model.

A good BNPL repayment history could boost FICO scores and encourage responsible shoppers — particularly younger adults or new credit users — to continue buying via BNPL, especially for higher-ticket items.

Plus, improved BNPL reporting could result in lower merchant fees. Ecommerce businesses often pay more for BNPL transactions than for standard payment card checkouts. The change to how these loans impact credit scores might push BNPL providers to compete harder on fees.

What to Do

Earth-shattering or not, FICO’s new scoring is a reminder for ecommerce merchants to understand how payment options and fees impact profits.

It’s as easy as monitoring a few key metrics, including:

  • Conversion rates. How payment options impact conversions.
  • AOV. What is the average order value for shoppers using BNPL vs. cards?
  • Repeat sales. Does BNPL affect repeat purchases and customer lifetime value?
  • Returns. Is there a relationship between returns and the payment methods used?
  • Checkouts. Does the BNPL checkout rate change after FICO’s new scores take effect?
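As a starting point, most of these metrics can be pulled straight from order data. Here is a minimal sketch, assuming a list of order records with hypothetical fields (`payment_method`, `total`, `returned`) — adapt the field names to your own platform's export:

```python
from collections import defaultdict

def bnpl_metrics(orders):
    """Compare order counts, AOV, and return rates by payment method.

    `orders` is a list of dicts with hypothetical fields:
    payment_method ("bnpl" or "card"), total, and returned (bool).
    """
    totals = defaultdict(list)
    returns = defaultdict(int)
    for o in orders:
        totals[o["payment_method"]].append(o["total"])
        if o["returned"]:
            returns[o["payment_method"]] += 1
    report = {}
    for method, amounts in totals.items():
        report[method] = {
            "orders": len(amounts),
            "aov": round(sum(amounts) / len(amounts), 2),
            "return_rate": round(returns[method] / len(amounts), 3),
        }
    return report

orders = [
    {"payment_method": "bnpl", "total": 120.0, "returned": False},
    {"payment_method": "bnpl", "total": 80.0, "returned": True},
    {"payment_method": "card", "total": 50.0, "returned": False},
]
print(bnpl_metrics(orders))
```

Tracking these numbers before and after FICO's new scores roll out makes any shift in BNPL behavior visible in your own data, not just in industry headlines.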

Cloudflare Sparks SEO Debate With New AI Crawler Payment System via @sejournal, @MattGSouthern

Cloudflare’s new “pay per crawl” initiative has sparked a debate among SEO professionals and digital marketers.

The company has introduced a default AI crawler-blocking system alongside new monetization options for publishers.

This enables publishers to charge AI companies for access, which could impact how web content is consumed and valued in the age of generative search.

Cloudflare’s New Default: Block AI Crawlers

The system, now in private beta, blocks known AI crawlers by default for new Cloudflare domains.

Publishers can choose one of three access settings for each crawler:

  1. Allow – Grant unrestricted access
  2. Charge – Require payment at the configured, domain-wide price
  3. Block – Deny access entirely

      Crawlers that attempt to access blocked content will receive a 402 Payment Required response. Publishers set a flat, sitewide price per request, and Cloudflare handles billing and revenue distribution.
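The three access settings map onto simple HTTP outcomes. As a sketch of how a paying crawler might branch on the response — note the `crawler-price` header name here is a hypothetical placeholder, not a confirmed Cloudflare header:

```python
def crawl_decision(status_code, headers, budget_per_request):
    """Decide what a paying crawler does with a pay-per-crawl response.

    Assumes a hypothetical "crawler-price" header carrying the flat
    per-request price in USD; the real header name may differ.
    """
    if status_code == 200:
        return "fetched"  # Allow: unrestricted access
    if status_code == 402:  # Charge: 402 Payment Required
        price = float(headers.get("crawler-price", "inf"))
        return "retry-with-payment" if price <= budget_per_request else "skip"
    return "blocked"  # Block: denied entirely (e.g., 403)

print(crawl_decision(402, {"crawler-price": "0.01"}, budget_per_request=0.05))
```

The 402 status code itself is standard HTTP, long reserved for exactly this kind of payment negotiation and rarely used until now.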

      Cloudflare wrote:

      “Imagine asking your favorite deep research program to help you synthesize the latest cancer research or a legal brief, or just help you find the best restaurant in Soho — and then giving that agent a budget to spend to acquire the best and most relevant content.”

      Technical Details & Publisher Adoption

      The system integrates directly with Cloudflare’s bot management tools and works alongside existing WAF rules and robots.txt files. Authentication is handled using Ed25519 key pairs and HTTP message signatures to prevent spoofing.

      Cloudflare says early adopters include major publishers like Condé Nast, Time, The Atlantic, AP, BuzzFeed, Reddit, Pinterest, Quora, and others.

      While the current setup supports only flat pricing, the company plans to explore dynamic and granular pricing models in future iterations.

      SEO Community Shares Concerns

      While Cloudflare’s new controls can be changed manually, several SEO experts are concerned about the impact of making the system opt-out rather than opt-in.

      “This won’t end well,” wrote Duane Forrester, Vice President of Industry Insights at Yext, warning that businesses may struggle to appear in AI-powered answers without realizing crawler access is being blocked unless a fee is paid.

      Lily Ray, Vice President of SEO Strategy and Research at Amsive Digital, noted the change is likely to spark urgent conversations with clients, especially those unaware that their sites might now be invisible to AI crawlers by default.

      Ryan Jones, Senior Vice President of SEO at Razorfish, expressed that most of his client sites actually want AI crawlers to access their content for visibility reasons.

      Some Say It’s a Necessary Reset

      Some in the community welcome the move as a long-overdue rebalancing of content economics.

      “A force is needed to tilt the balance back to where it once was,” said Pedro Dias, Technical SEO Consultant and former member of Google’s Search Quality team. He suggests that the current dynamic favors AI companies at the expense of publishers.

      Ilya Grigorik, Distinguished Engineer and Technical Advisor at Shopify, praised the use of cryptographic authentication, saying it’s “much needed” given how difficult it is to distinguish between legitimate and malicious bots.

      Under the new system, crawlers must authenticate using public key cryptography and declare payment intent via custom HTTP headers.

      Looking Ahead

      Cloudflare’s pay-per-crawl system formalizes a new layer of negotiation over who gets to access web content, and at what cost.

      For SEO pros, this adds complexity: visibility may now depend not just on ranking, but on crawler access settings, payment policies, and bot authentication.

      While some see this as empowering publishers, others warn it could fragment the open web, where content access varies based on infrastructure and paywalls.

      If generative AI becomes a core part of how people search, and the pipes feeding that AI are now toll roads, websites will need to manage visibility across a growing patchwork of systems, policies, and financial models.


      Featured Image: Roman Samborskyi/Shutterstock

      YouTube Adds New Viewer Metrics To Track Audience Loyalty via @sejournal, @MattGSouthern

      YouTube is rolling out a new audience analytics feature that replaces the “returning viewers” metric with more detailed viewer categories.

      The update introduces three viewer types: new, casual, and regular. This is designed to help creators better understand who’s engaging with their content and how often.

      Breaking Down The New Viewer Categories

      YouTube now segments viewers into:

      • New viewers: People watching your content for the first time within the selected time period.
      • Casual viewers: Those who’ve watched between one and five months out of the past year.
      • Regular viewers: Viewers who have returned consistently for six or more months over the past 12 months.
      Screenshot from: YouTube.com/CreatorInsider, July 2025.

      In an announcement, YouTube clarifies:

      “These new categories provide a more nuanced understanding of viewer engagement and are not a direct equivalent of the previous returning viewers metric.”

      There are no changes to the definition of new viewers. The new segmentation applies across all video formats, including Shorts, VOD, and livestreams.
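Translated into logic, the segmentation is a simple bucketing on how many of the trailing 12 months a viewer watched. A sketch — the thresholds follow YouTube's published definitions, while the function name and the zero-months treatment of new viewers are simplifications of mine:

```python
def viewer_segment(months_watched_last_12):
    """Bucket a viewer by months watched in the past 12 months.

    0 months  -> "new"     (first watch falls inside the selected period)
    1-5       -> "casual"
    6-12      -> "regular"
    """
    if months_watched_last_12 >= 6:
        return "regular"
    if months_watched_last_12 >= 1:
        return "casual"
    return "new"

print(viewer_segment(8))
```

Seen this way, it's clear why "regular" is a high bar: a viewer must show up in at least half the months of the year to qualify.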

      What This Means

      The switch to more granular segmentation addresses a long-standing limitation in YouTube’s analytics.

      Previously, creators could only distinguish between new and returning viewers. That was a binary distinction that didn’t capture the full range of audience engagement.

      Now, with casual and regular viewer categories, creators can identify which viewers are sporadically engaged versus those who form a loyal base.

      YouTube cautioned that many channels may see a smaller percentage of regular viewers than expected, stating:

      “Regular viewers is a high bar to reach as it signifies viewers who have consistently returned to watch your content for 6 months or more in the past year.”

      Strategies For Building A Loyal Audience

      YouTube suggests that maintaining a strong base of regular viewers requires consistent publishing and community engagement.

      The platform recommends the following tactics:

      • Use community posts to stay visible between uploads
      • Respond to viewer comments
      • Host live premieres and join live chats
      • Maintain brand consistency across videos

      These strategies reflect broader trends in the creator economy, where sustained engagement is becoming more valuable than viral reach.

      Looking Ahead

      The new segmentation is now rolling out globally on both desktop and mobile, with availability expanding to all creators in the coming weeks.

      For marketers and brands, the added granularity offers a clearer picture of a creator’s influence and audience loyalty.

      As YouTube continues refining its analytics tools, the emphasis is shifting from raw numbers to actionable insights that help creators grow sustainable channels.



      Is Your Conversion Data Misleading You? 7 Common Google Ads Tracking Issues

      Conversion tracking tends to be one of those things advertisers set up once and then forget about, until something fails – big time.

      But in my 16 years of experience running Google Ads, I can confidently say it’s the single most important factor affecting PPC results. Long before a campaign fails outright, when results first start lagging, faulty conversion tracking is almost always to blame.

      So, whether you want to improve performance, or save a campaign that’s heading towards collapse, the starting point should be the same. Check your conversion data.

      Conversion data will only be useful for you if it’s accurate. Serious missteps can happen if you rely on Google Ads to optimize performance when it has misleading or incomplete conversion tracking.

      If your numbers are wrong, you’ll end up scaling the wrong campaigns, pausing the ones generating a positive return, or misjudging return on ad spend (ROAS) altogether – and this happens more often than you think.

      Here are seven of the most common causes of inaccurate or inconsistent conversion data in Google Ads, and what you can do to fix each one.

      1. Conversion Tracking Isn’t Set Up Properly

      Conversion tracking is often missing, duplicated, or firing in the wrong place. This is still one of the most common issues, and it can be the most damaging.

      For example, suppose you track conversions on a thank-you page and a user refreshes that page three times. Your backend will have one sale, but in Google Ads, you’ll see three.

      Using reports like Repeat Rate is a great way to catch that error and ensure you fix it sooner rather than later.
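One common fix is to deduplicate on a transaction ID before counting, so a refreshed thank-you page can't register twice. A minimal sketch, assuming each thank-you pageview carries a hypothetical `order_id` field (in practice you'd pass this through the dataLayer to your conversion tag):

```python
def count_conversions(pageviews):
    """Count thank-you pageviews once per order_id.

    `pageviews` is a list of dicts with a hypothetical "order_id" key;
    refreshes repeat the same order_id and are ignored.
    """
    seen = set()
    conversions = 0
    for pv in pageviews:
        if pv["order_id"] not in seen:
            seen.add(pv["order_id"])
            conversions += 1
    return conversions

views = [{"order_id": "A1"}, {"order_id": "A1"},
         {"order_id": "A1"}, {"order_id": "B2"}]
print(count_conversions(views))  # 2 orders, despite three views of A1
```

Comparing the raw pageview count against this deduplicated count gives you a rough repeat rate to sanity-check against your backend sales.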

      When tracking is unreliable, it’s impossible to optimize performance accurately. Campaign decisions are made on incomplete signals, and smart bidding models won’t have the data they need to learn effectively.

      Start by ensuring your conversion actions in Google Ads are appropriately defined.

      Use Google Tag Manager to centralize tracking across pages and platforms, and confirm accurate tag firing using Google’s Tag Assistant or built-in diagnostics.

      2. Tracking Low-Value Or Secondary Conversions

      Not all user actions are created equal – at least not when it comes to Google Ads optimization.

      Metrics like scroll depth, time on site, or video engagement can be helpful, but they shouldn’t be treated as primary conversion events in your ad account.

      These types of interactions are better as supporting metrics (secondary conversions). They can offer insights into how users engage with your landing page or website.

      This type of information is valuable, but it does not belong in the core set of conversion actions used to drive bidding decisions in Google Ads.

      When Google optimizes towards actions that don’t directly tie to revenue or qualified leads, you risk directing your budget towards activities that look great on a dashboard but don’t move the needle in your business.

      Instead, focus on tracking high-intent actions in your Google Ads account, like purchases, form submissions, or phone calls, and use the supporting metrics to help improve the user experience.

      3. Data Doesn’t Match Between Google Ads And GA4

      Discrepancies between platforms are expected, but that doesn’t mean they should be ignored. It’s common to see Google Ads report one number and Google Analytics 4 report another for the same conversion event.

      The root cause typically comes down to attribution model differences, reporting windows, or inconsistent event definitions.

      To reduce confusion, first ensure your Google Ads and GA4 accounts are correctly linked. Then, audit the attribution models in both platforms and understand how each system defines and credits conversions.

      GA4 uses data-driven attribution by default, and Google Ads now also defaults to data-driven attribution for most accounts, though some accounts may still use last-click or another model. Align conversion settings as much as possible to maintain consistency in your reporting.

      4. GCLID Is Missing Or Broken

      Google Ads can’t attribute conversions to a specific click if the GCLID isn’t passed through correctly, which causes in-platform results to underreport conversions.

      This issue tends to result from redirects, link shorteners, or forms that strip URL parameters.

      Fixing it starts with enabling auto-tagging in your account. Then, confirm that the GCLID is retained throughout the user journey, especially when forms span multiple pages or involve third-party integrations.

      Customer relationship management (CRM) systems and custom landing pages are often the culprits, so work with your developers to make sure GCLID values persist and aren’t overwritten.
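A typical server-side pattern is to capture the gclid from the landing URL and carry it through the form flow (hidden field or first-party cookie) so your CRM can return it with any offline conversion. A standard-library sketch of the capture step:

```python
from urllib.parse import urlparse, parse_qs

def extract_gclid(landing_url):
    """Pull the gclid query parameter from a landing page URL, if present."""
    params = parse_qs(urlparse(landing_url).query)
    return params.get("gclid", [None])[0]

url = "https://example.com/lp?utm_source=google&gclid=TeSter-123"
gclid = extract_gclid(url)
print(gclid)  # persist this value in a hidden form field or cookie
```

The key point: once the gclid is stored server-side, redirects and multi-page forms can no longer strip it, because you are no longer relying on the URL to carry it.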

      5. Privacy Settings And Consent Mode Are Blocking Data

      Unfortunately, privacy compliance has introduced new gaps in attribution. If a user declines consent, Google’s tags may not fire, leaving conversions untracked.

      This is particularly relevant in regions governed by GDPR, like the EU, and similar regulations.

      Consent Mode helps bridge the gap. It adjusts how tags behave based on user permissions, allowing for some modeled conversion data even without full cookie acceptance.

      Pair that with first-party data strategies and server-side tagging where appropriate.

      Note that modeled conversions may take time to appear and won’t fully restore lost data, especially for smaller datasets or stricter consent regimes. But they will help fill in the blanks responsibly.

      6. Offline Conversions Are Delayed Or Missing

      Offline conversions – like phone sales or in-store transactions – can be imported into Google Ads.

      But if you’re inconsistent with your upload process or if it lacks the proper identifiers, those conversions won’t map to the original ad click.

      Set up a schedule to upload offline conversions regularly, ideally on a daily or weekly basis. Include GCLID information and a timestamp with each entry to preserve click-level attribution.

      Once the data is uploaded, monitor for errors inside the Google Ads interface. Minor mismatches in format or missing fields can stop conversions from registering entirely.
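Because minor format mismatches are a common failure point, it helps to generate the upload file programmatically rather than by hand. A sketch that writes rows in the commonly documented column layout — treat the exact column names as an assumption to verify against Google's current import template:

```python
import csv
import io

def build_upload(rows):
    """Serialize offline conversions to CSV for Google Ads import.

    Column names follow Google's commonly documented template; verify
    them against the current import format before uploading.
    """
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["Google Click ID", "Conversion Name", "Conversion Time",
                     "Conversion Value", "Conversion Currency"])
    for r in rows:
        writer.writerow([r["gclid"], r["name"], r["time"],
                         r["value"], r["currency"]])
    return buf.getvalue()

csv_text = build_upload([{
    "gclid": "TeSter-123", "name": "Phone Sale",
    "time": "2025-07-01 14:32:00", "value": 199.0, "currency": "USD",
}])
print(csv_text)
```

Generating the file this way keeps the GCLID and timestamp attached to every row, which is what preserves click-level attribution.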

      7. Tagging Conflicts Or Technical Errors

      Even when tracking is conceptually correct, technical issues can block it from functioning.

      Conflicting scripts, outdated plugins, or misplaced tags can all prevent conversion events from firing properly. These problems often go undetected until someone audits the data or sees a sudden drop in conversions.

      Use Tag Assistant or Google Tag Manager’s Preview Mode to audit your implementation regularly.

      Avoid conditional loading unless absolutely necessary, and coordinate with developers when other platforms – like Meta, HubSpot, or Salesforce – are active on the same pages.

      Final Thoughts

      Conversion tracking doesn’t exist in a vacuum, and it’s your job to make sure it plays well with the rest of your stack.

      Incomplete conversion data is a strategic liability. Feeding Google Ads AI the right signals can mean the difference between PPC growth and stagnation.

      By consistently auditing your setup and addressing these common issues, you’ll build cleaner data, glean better insights, and track your way to better performance.


      Featured Image: TetianaKtv/Shutterstock

      New Google AI Mode: Everything You Need To Know & What To Do Next via @sejournal, @lorenbaker

      Is your SEO strategy ready for Google’s new AI Mode?

      Is your 2025 SERP strategy in danger?

      What’s changed between traditional search mode and AI mode? 

      Will Google’s New AI Mode Hurt Your Traffic?

      Watch our on-demand webinar, where we explored early implications for click-through rates, organic visibility, and content performance, so you can:

      • Spot AIO SERP triggers: Identify search types most likely to spark AI Overviews.
      • Analyze impact: Find out which industries are being hit hardest.
      • Audit AIO brand mentions: See which domains are dominating AI-generated answers.
      • Optimize visibility: Update your SEO strategy to stay competitive.
      • Accurately track AI traffic: Measure shifts in click-through rates, visibility, and content performance.

      In this session, Nick Gallagher, SEO Lead at Conductor, gave actionable SEO guidance for this new era of search engine results pages (SERPs).

      Get recommendations for optimizing content to stay competitive as AI-generated answers grow in prominence.

      Google’s New AI Mode: Learn To Analyze, Adapt & Optimize

      Don’t wait for the SERPs to leave you behind.

      Watch on-demand to uncover if AI Mode will hurt your traffic, and what to do about it.

      View the slides below or check out the full webinar for all the details.

      Join Us For Our Next Webinar!

      The Data Reveals: What It Takes To Win In AI Search

      Register now to learn how to steer clear of modern SEO strategies that no longer work.