OpenAI And Perplexity Set To Battle Google For Browser Dominance via @sejournal, @martinibuster

Credible rumors are circulating that OpenAI is developing a browser. However, the timing of the anonymous tip is curious, because Perplexity coincidentally announced they are releasing a browser named Comet.

It’s a longstanding tradition in Silicon Valley for companies to try to overshadow a competitor’s announcement with a competing announcement of their own, and the timing of OpenAI’s anonymous rumor seems more than coincidental. For example, OpenAI leaked rumors of its own competing search engine on February 15, 2024, the exact same date that Google officially announced Gemini 1.5. It’s a thing.

According to Reuters:

“OpenAI is close to releasing an AI-powered web browser that will challenge Alphabet’s (GOOGL.O) market-dominating Google Chrome, three people familiar with the matter told Reuters.

The browser is slated to launch in the coming weeks, three of the people said, and aims to use artificial intelligence to fundamentally change how consumers browse the web. It will give OpenAI more direct access to a cornerstone of Google’s success: user data.”

Perplexity Comet

According to TechCrunch, Perplexity’s Comet browser comes with its Perplexity AI search engine as the default. The browser includes an AI agent called Comet Assistant that can help with everyday tasks like summarizing emails and navigating the web. Comet will be released first to its $200/month subscribers and to a list of VIPs invited to try it out.

There’s something old-school about Google, Perplexity, and OpenAI battling it out for browser dominance, a technological space that continues to be relevant to users and remains, alongside pop-ups, one of the few constants of the Internet.

ChatGPT Recommendations Potentially Influenced By Hacked Sites via @sejournal, @MattGSouthern

An investigation by SEO professional James Brockbank reveals that ChatGPT may be recommending businesses based on content from hacked websites and expired domains.

The findings aren’t a comprehensive study but the result of personal testing and observations. Brockbank, who serves as Managing Director at Digitaloft, says his report emerged from exploring how brands gain visibility in ChatGPT’s responses.

His analysis suggests that some actors are successfully gaming the system by publishing content on compromised or repurposed domains that retain high authority signals.

This content, despite being irrelevant or deceptive, can surface in ChatGPT-generated business recommendations.

Brockbank wrote:

“I believe that the more we understand about why certain citations get surfaced, even if these are spammy and manipulative, the better we understand how these new platforms work.”

How Manipulated Content Appears In ChatGPT Responses

Brockbank identified two main tactics that appear to influence ChatGPT’s business recommendations:

1. Hacked Websites

In multiple examples, ChatGPT surfaced gambling recommendations that traced back to legitimate websites that had been compromised.

One case involved a California-based domestic violence attorney whose site was found hosting a listicle about online slots.

Other examples included a United Nations youth coalition website and a U.S. summer camp site. They were both seemingly hijacked to host gambling-related content, including pages using white text on a white background to evade detection.
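The white-on-white trick is crude enough that even a simplistic check can surface it. The sketch below is an illustrative Python example (not a tool from Brockbank's report) that flags elements whose inline style sets text and background to the same color; a real audit would parse the DOM and resolve external stylesheets as well.

```python
import re

def _normalize(value: str) -> str:
    """Normalize a CSS color to a six-digit hex string for comparison."""
    if value == "white":
        return "#ffffff"
    if len(value) == 4:  # expand shorthand #fff -> #ffffff
        return "#" + "".join(ch * 2 for ch in value[1:])
    return value

def find_hidden_text(html: str) -> list[str]:
    """Flag tags whose inline style sets text color and background color
    to the same value, the classic white-on-white cloaking trick.
    Only inline styles are checked in this sketch."""
    hits = []
    for match in re.finditer(r'<[^>]+style="([^"]*)"[^>]*>', html):
        style = match.group(1).lower().replace(" ", "")
        color = re.search(r'(?<!background-)color:(#[0-9a-f]{3,6}|white)', style)
        bg = re.search(r'background(?:-color)?:(#[0-9a-f]{3,6}|white)', style)
        if color and bg and _normalize(color.group(1)) == _normalize(bg.group(1)):
            hits.append(match.group(0))
    return hits
```

Running this over a page that hides gambling copy in `<div style="color:#fff; background-color:#ffffff;">` would flag that element, while normally styled content passes through.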

2. Expired Domains

The second tactic involves acquiring expired domains with strong backlink profiles and rebuilding them to promote unrelated content.

In one case, Brockbank discovered a site with over 9,000 referring domains from sources like BBC, CNN, and Bloomberg. The domain, once owned by a UK arts charity, had been repurposed to promote gambling.

Brockbank explained:

“There’s no question that it’s the site’s authority that’s causing it to be used as a source. The issue is that the domain changed hands and the site totally switched up.”

He also found domains that previously belonged to charities and retailers now being used to publish casino recommendations.

Why This Content Is Surfacing

Brockbank suggests that ChatGPT favors domains with perceived authority and recent publication dates.

Additionally, he finds that ChatGPT’s recommendation system may not sufficiently evaluate whether the content aligns with the original site’s purpose.

Brockbank observed:

“ChatGPT prefers recent sources, and the fact that these listicles aren’t topically relevant to what the domain is (or should be) about doesn’t seem to matter.”

Brockbank acknowledges that being featured in authentic “best of” listicles or media placements can help businesses gain visibility in AI-generated results.

However, leveraging hacked or expired domains to manipulate source credibility crosses an ethical line.

Brockbank writes:

“Injecting your brand or content into a hacked site or rebuilding an expired domain solely to fool a language model into citing it? That’s manipulation, and it undermines the credibility of the platform.”

What This Means

While Brockbank’s findings are based on individual testing rather than a formal study, they surface a real concern: ChatGPT may be citing manipulated sources without fully understanding their origins or context.

The takeaway isn’t just about risk; it’s also about responsibility. As platforms like ChatGPT become more influential in how users discover businesses, building legitimate authority through trustworthy content and earned media will matter more than ever.

At the same time, the investigation highlights an urgent need for AI companies to improve how their systems detect and filter deceptive content. Until that happens, both users and businesses should approach AI-generated recommendations with a dose of skepticism.

Brockbank concluded:

“We’re not yet at the stage where we can trust ChatGPT recommendations without considering where it’s sourced these from.”

For more insights, see the original report at Digitaloft.


Featured Image: Mijansk786/Shutterstock

Ahrefs Study Finds No Evidence Google Penalizes AI Content via @sejournal, @MattGSouthern

A large-scale analysis by Ahrefs of 600,000 webpages finds that Google neither rewards nor penalizes AI-generated content.

The report, authored by Si Quan Ong and Xibeijia Guan, provides a data-driven examination of AI’s role in search visibility. It challenges ongoing speculation that using generative tools could hurt rankings.

How the Study Was Conducted

Ahrefs pulled the top 20 ranking URLs for 100,000 random keywords from its Keywords Explorer database.

The content of each page was analyzed using Ahrefs’ own AI content detector, built into its Page Inspect feature in Site Explorer.

The result was a dataset of 600,000 URLs, making this a comprehensive study on AI-generated content and search performance.

Key Findings

Majority of Top Pages Include AI Content

The data shows AI is already a fixture in high-ranking pages:

  • 4.6% of pages were classified as entirely AI-generated
  • 13.5% were purely human-written
  • 81.9% combined AI and human content

Those mixed pages broke down further, measured as a share of all pages analyzed:

  • Minimal AI (1-10%): 13.8%
  • Moderate AI (11-40%): 40%
  • Substantial AI (41-70%): 20.3%
  • Dominant AI (71-99%): 7.8%

These findings align with a separate Ahrefs survey from its “State of AI in Content Marketing” report, in which 87% of marketers reported using AI to assist in creating content.

Ranking Impact: Correlation Close to Zero

Perhaps the most significant data point is the correlation between AI usage and Google ranking position, which was just 0.011. In practical terms, this indicates no relationship.

The report states:

“There is no clear relationship between how much AI-generated content a page has and how highly it ranks on Google. This suggests that Google neither significantly rewards nor penalizes pages just because they use AI.”

This echoes Google’s own public stance from February 2023, in which the company clarified that it evaluates content based on quality, not whether AI was used to produce it.

Subtle Trends at the Top

While the overall correlation is negligible, Ahrefs notes a slight trend among #1 ranked pages: they tend to have less AI content than those ranking lower.

Pages with minimal AI usage (0–30%) showed a faint preference for top spots. However, the report emphasizes that this isn’t strong enough to suggest a ranking factor, but rather a pattern worth noting.

Fully AI-generated content did appear in top-20 results but rarely ranked #1, reinforcing the challenge of creating top-performing pages using AI alone.

Key Takeaways

For content marketers, the Ahrefs study provides data-driven reassurance: using AI does not inherently risk a Google penalty.

At the same time, the rarity of pure AI content at the top suggests human oversight still matters.

The report suggests that most successful content today is created using a blend of human input and AI support.

In the words of the authors:

“Google probably doesn’t care how you made the content. It simply cares whether searchers find it helpful.”

The authors compare the state of content creation to the post-nuclear era of steel manufacturing. Just as there’s no longer any manufactured steel untouched by radiation, there may soon be no content untouched by AI.

Looking Ahead

Ahrefs’ findings indicate that content creators can confidently treat AI as a tool, not a threat. While Google remains focused on helpful, high-quality pages, how that content is made matters less than whether it meets user needs.

Relying Too Much On AI Is Backfiring For Businesses via @sejournal, @MattGSouthern

As more companies race to adopt generative AI tools, some are learning a hard lesson: when used without oversight or expertise, these tools can cause more problems than they solve.

From broken websites to ineffective marketing copy, the hidden costs of AI mistakes are adding up, forcing businesses to bring in professionals to clean up the mess.

AI Delivers Mediocrity Without Supervision

Sarah Skidd, a product marketing manager and freelance writer, was hired to revise the website copy generated by an AI tool for a hospitality company, according to a report by the BBC.

Instead of the time- and cost-savings the client expected, the result was 20 hours of billable rewrites.

Skidd told the BBC:

“[The copy] was supposed to sell and intrigue but instead it was very vanilla.”

This isn’t an isolated case. Skidd said other writers have shared similar stories. One told her that 90% of their workload now consists of editing AI-generated text that falls flat.

The issue isn’t just quality. According to a study by researchers Anders Humlum and Emilie Vestergaard, real-world productivity gains from AI chatbots are far below expectations.

Although controlled experiments show improvements of over 15%, most users report time savings of just 2.8% of their work hours on average.

Cutting Corners Can Lead To Problems

The risks go beyond boring copy. Sophie Warner, co-owner of Create Designs, a UK-based digital agency, says she’s seen a wave of clients suffer avoidable problems after trying to use AI tools like ChatGPT for quick fixes.

Warner tells the BBC:

“Now they are going to ChatGPT first.”

And that’s often when things go wrong.

In one case, a client used AI-generated code to update an event page. The shortcut crashed their entire website, causing three days of downtime and a $485 repair bill.

Warner says even larger clients encounter similar issues but hesitate to admit AI was involved, making diagnosis harder and more expensive.

Warner added:

“The process of correcting these mistakes takes much longer than if professionals had been consulted from the beginning.”

Training & Infrastructure Matter More Than Tools

The Danish research paper by Humlum and Vestergaard finds businesses that offer AI training and establish internal guidelines see better (if still modest) results.

Workers with employer support saved slightly more time, about 3.6% of work hours compared to 2.2% without guidance.

Even then, the productivity benefits don’t seem to trickle down. The study found no measurable changes in earnings, hours worked, or job satisfaction for 97% of AI users surveyed.

Prof. Feng Li, associate dean for research and innovation at Bayes Business School, told the BBC:

“Human oversight is essential. Poor implementation can lead to reputational damage, unexpected costs—and even significant liabilities.”

The Gap Between AI Speed & Human Standards

Kashish Barot, a copywriter based in Gujarat, India, told the BBC she spends her time editing AI-generated content for U.S. clients.

She says many underestimate what it takes to produce effective writing.

Barot says:

“AI really makes everyone think it’s a few minutes’ work. However, good copyediting, like writing, takes time because you need to think and not just curate like AI.”

The research backs this up: marketers and software developers report slightly higher time savings when employers support AI use, but gains for teachers and accountants are negligible.

While AI tools may speed up certain tasks, they still require human judgment to meet brand standards and audience needs.

Key Takeaways

The takeaway for businesses? AI isn’t a shortcut to quality. Without proper training, strategy, and infrastructure, even the most powerful tools fall short.

What many companies overlook is that AI’s success depends less on the technology itself and more on the people using it, and whether they’ve been equipped to use it well.

Rushed adoption may save time upfront, but it leads to more expensive problems down the line. Whether it’s broken code, off-brand messaging, or public-facing content that lacks nuance, the cost of fixing AI mistakes can quickly outweigh the perceived savings.

For marketers, developers, and business leaders, the lesson is: AI can help, but only when human expertise stays in the loop.


Featured Image: Roman Samborskyi/Shutterstock

This Is Why AI Won’t Take Your Job (Yet) via @sejournal, @SequinsNsearch

SEO has died a thousand times this year alone, and the buzzword that resonates across every boardroom (and let’s be honest, everywhere else) is “AI.”

With Google releasing several AI-powered views over the past year and a half, along with its latest SearchGPT rival, AI Mode, we are witnessing a traffic erosion that is very hard to counteract if we stay stuck in our traditional view of our role as search professionals.

And it is only natural that the debate we keep hearing is the same: Is AI eventually going to take our jobs? In a stricter sense, it probably will.

SEO, as we know it, has transformed drastically. It will keep evolving, forcing people to take on new skills and have a broader, multichannel strategy, along with clear and prompt communication to stakeholders who might still be confused about why clicks keep dropping while impressions stay the same.

The next year is expected to bring changes and probably some answers to this debate.

But in the meantime, I was able to draw some predictions, based on my own study investigating humans’ ability to discern AI, to see if the “human touch” really has an advantage over it.

Why This Matters For Us Now

Knowing if people can recognize AI matters for us because people’s behavior changes when they know they’re interacting with it, as compared to when they don’t.

A 2023 study by Yunhao Zhang and Renée Richardson Gosline compared content created by humans, AI, and hybrid approaches for marketing copy and persuasive campaigns.

What they noticed is that when the source was undisclosed, participants preferred AI-generated content, a result that was reversed when they knew how the content was created.

It’s as if transparency about using AI added a layer of distrust to the interaction, rooted in the common wariness reserved for any new and relatively unknown experience.

At the end of the day, we have consumed human-written content for centuries, but generative AI has been scaled only in the past few years, so this wasn’t even a challenge we were exposed to before.

Similarly, Gabriele Pizzi from the University of Bologna showed that when people interact with an AI chatbot in a simulated shopping environment, they are more likely to consider the agent as competent (and, in turn, trust it with their personal information) when the latter looks more human as compared to “robotic.”

And as marketers, we know that trust is the ultimate seal not only to get a visit and a transaction, but also to form a lasting relationship with the user behind the screen.

So, if recognizing AI content changes the way we interact with it and make decisions, do we still retain the human advantage when AI material gets so close to reality that it is virtually indistinguishable?

Your Brain Can Discriminate AI, But It Doesn’t Mean We Are Infallible Detectors

Previous studies have shown that humans display a feeling of discomfort, known as the uncanny valley, when they see or interact with an artificial entity with semi-realistic features.

This negative feeling manifests physiologically as higher activity of our sympathetic nervous system (the division responsible for our “fight or flight” response) before participants can verbally report on it, or are even aware of it.

It’s a measure of their “gut feeling” towards a stimulus that mimics human features, but does not succeed in doing so entirely.

The uncanny valley phenomenon arises from the fact that our brain, being used to predicting patterns and filling in the blanks based on our own experience, sees these stimuli as “glitches” and spots them as outliers in our known library of faces, bodies, and expressions.

The deviation from the norm and the uncertainty in labeling these “uncanny” stimuli can be triggering from a cognitive perspective, which manifests in higher electrodermal activity (shortened as EDA), a measure of psychological arousal that can be measured with electrodes on the skin.

Based on this evidence, it is realistic to hypothesize that our brain can spot AI before making any active discrimination, and that we can see higher EDA in relation to faces generated with AI, especially when there is something “off” about them.

It is unclear, though, at what level of realism we stop displaying a distinctive response, so I wanted to find that out with my own research.

Here are the questions I set out to answer with my study:

  1. Do we have an in-built pre-conscious “detector” system for AI, and at what point of realistic imitation does it stop responding?
  2. If we do, does it guide our active discrimination between AI and human content?
  3. Is our ability to discriminate influenced by our overall exposure to AI stimuli in real life?

And most of all, can any of the answers to these questions predict the next challenges we’ll face in search and marketing?

To answer these questions, I measured the electrodermal activity of 24 participants between 25 and 65 years old as they were presented with neutral, AI-generated, and human-generated images, and checked for any significant differences in responses to each category.

My study ran in three phases, one for each question I had:

  1. A first task where participants viewed neutral, AI, and human static stimuli on a screen, with no action required, while their electrodermal activity was recorded. This was intended to measure the automatic, pre-conscious response to the stimuli presented.
  2. A second behavioral task, where participants had to press a button to categorize the faces that they had seen into AI- vs. human-generated, as fast and accurately as they could, to measure their conscious discrimination skills.
  3. A final phase where participants declared their demographic range and their familiarity with AI on a self-reported scale across five questions. This gave me a self-reported “AI-literacy” score for each participant that I could correlate with any of the other measures obtained from the physiological and behavioral tasks.
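The article does not say which statistical test was used to check for significant differences between conditions, so purely as an illustration, here is one generic way such a comparison could be run: a permutation test on the difference of mean EDA responses, shown with hypothetical numbers.

```python
import random
import statistics

def permutation_test(group_a, group_b, n_iter=10_000, seed=0):
    """Two-sided permutation test on the difference of means.
    Repeatedly shuffles the pooled observations and counts how often a
    random split produces a difference at least as extreme as observed."""
    rng = random.Random(seed)
    observed = statistics.mean(group_a) - statistics.mean(group_b)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:])
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / n_iter  # approximate p-value

# Hypothetical per-participant mean EDA responses (microsiemens),
# not the study's actual data.
human_faces = [0.42, 0.51, 0.39, 0.47, 0.55, 0.44, 0.49, 0.53]
ai_faces = [0.31, 0.35, 0.29, 0.38, 0.33, 0.36, 0.30, 0.34]
p = permutation_test(human_faces, ai_faces)
```

With clearly separated groups like these, the p-value comes out well below 0.05, which is the kind of outcome that would support a claim of significantly higher activation for one condition.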

And here is what I found:

  • Participants showed a significant difference in pre-conscious activation between conditions; in particular, the EDA was significantly higher for human faces than for AI faces (both hyper-realistic and CGI faces). This would support the hypothesis that our brain can tell the difference between AI and human faces before we even initiate a discrimination task.
  • The higher activation for human faces contrasts with the older literature showing higher activation for uncanny valley stimuli, and this could be related to either our own habituation to CGI visuals (meaning they are not triggering outliers anymore), or the automatic cognitive effort involved in trying to extrapolate the emotion of human neutral faces. As a matter of fact, the limitation of EDA is that it tells us something is happening in our nervous system, but it doesn’t tell us what: higher activity could be related to familiarity and preference, negative emotional states, or even cognitive effort, so more research on this is needed.
  • Exposure and familiarity with AI material correlated with higher accuracy when participants had to actively categorize faces into AI-generated and human, supporting the hypothesis that the more we are exposed to AI, the better we become at spotting subtle differences.
  • People were much faster and more accurate at categorizing “uncanny valley” stimuli into the AI-generated bucket, but struggled with hyper-realistic faces, miscategorizing them as human in 22% of cases.
  • Active discrimination was not guided by pre-conscious activation. Although a difference in autonomic activity can be seen for AI and human faces, this did not correlate with how fast or accurate participants were. In fact, it can be argued that participants “second-guessed” their own instincts when they knew they had to make a choice.

And yet, the biggest result of all was something I noticed on the pilot I ran before the real study: When the participant is familiar with the brand or the product presented, it’s how they feel about it that guides what we see at the neural level, rather than the automatic response to the image presented.

So, while our brain can technically “tell the difference,” our emotions, familiarity with the brand, the message, and expectations are all factors that can heavily skew our own attitude and behavior, essentially making our discrimination (automatic or not) almost irrelevant in the cascade of evaluations we make.

This has massive implications not only in the way we retain our existing audience, but also in how we approach new ones.

We are now at a stage where understanding what our user wants beyond the immediate query is even more vital, and we have a competitive advantage if we can identify all of this before they explicitly express their needs.

The Road To Survival Isn’t Getting Out Of The Game. It’s Learning The New Rules To Play By

So, does marketing still need real people?

It definitely does, although it’s hard to see that now that every business is ignited by the fear of missing out on the big AI opportunity and distracted by new shiny objects populating the web every day.

Humans thrive on change – that’s how we learn and grow new connections and associations that help us adapt to new environments and processes.

Ever heard of the word neuroplasticity? While it might just sound like a fancy term for learning, it is quite literally the ability of your brain to reshape as a result of experience.

That’s why I think AI won’t take our jobs. We are focusing on AI’s fast progress in the ability to ingest content and recreate outputs that are virtually indistinguishable from our own, but we are not paying attention to our own power to evolve on this new playing field.

AI will keep on moving, but so will the needle of our discernment and our behavior towards it, based on the experiences that we build with new processes and material.

My results already indicate how familiarity with AI plays a role in how good we are at recognizing it, and in a year’s time, even the EDA results might change as a function of progressive exposure.

Our skepticism and distrust toward AI are rooted in its unknown sides, paired with a lot of the misuse we’ve seen as a by-product of fast, virtually unregulated growth.

The nature of our next interactions with AI will shape our behavior.

I think this is our opportunity as an industry to create valuable AI-powered experiences without sacrificing the quality of our work, our ethical responsibilities toward the user, and our relationship with them. It’s a slower process, but one worth undertaking.

So, even if, at the beginning, I approached this study as a man vs. the machine showdown, I believe we are heading toward the man and the machine era.

Far from the “use AI for everything” approach we tend to see around, below is a breakdown of where I see a (supervised) integration of AI into our jobs as unproblematic, and where I think it still has no place in its current state.

Use: Anything That Provides Information, Facilitates Navigation, And Streamlines User Journeys

  • For example, testing product descriptions based on the features that already reside in the catalog, or providing summaries of real users’ reviews that highlight pros and cons straight away.
  • Virtual try-ons and enabling recommended products based on similarity.
  • Automating processes like identifying internal link opportunities, categorizing intent, and combining multiple data sources for better insights.
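As a rough illustration of the internal-linking automation mentioned above, the sketch below flags pages that mention another page’s target keyword without yet linking to it. The pages, keywords, and URLs are hypothetical, and a production tool would crawl real HTML rather than a hand-built dict.

```python
# Hypothetical site data: each page has a target keyword, body text,
# and a list of internal links it already contains.
pages = {
    "/running-shoes": {
        "keyword": "running shoes",
        "body": "Our guide to marathon training covers gear and pacing.",
        "links": ["/marathon-training"],
    },
    "/marathon-training": {
        "keyword": "marathon training",
        "body": "Choosing the right running shoes matters as much as mileage.",
        "links": [],
    },
}

def link_opportunities(pages: dict) -> list[tuple[str, str]]:
    """Return (source_url, target_url) pairs where the source page
    mentions the target's keyword but does not link to it yet."""
    suggestions = []
    for src, src_data in pages.items():
        for dst, dst_data in pages.items():
            if src == dst or dst in src_data["links"]:
                continue  # skip self-links and existing links
            if dst_data["keyword"] in src_data["body"].lower():
                suggestions.append((src, dst))
    return suggestions

print(link_opportunities(pages))
```

Here the marathon-training page mentions “running shoes” without linking to that page, so it surfaces as the one suggestion.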

Avoid: Anything That’s Based On Establishing A Connection Or Persuading The User

  • This includes any content that fakes expertise and authority in the field. The current technology (and the lack of regulation) even allows for AI influencers, but bear in mind that your brand authenticity is still your biggest asset to preserve when the user is looking to convert. The pitfalls of deceiving them when they expect organic content are greater than just losing a click. This is the work you can’t automate.
  • Similarly, generating reviews or user-generated content at scale to convey legitimacy or value. If you know this is what your users want to get more information on, then you cannot meet their doubts with fake arguments. Gaming tactics are short-lived in marketing because people learn to discern and actively avoid them once they realize they are being deceived. Humans crave authenticity and real peer validation of their decisions because it makes them feel safe. If we ever reach a point where, as a collective, we feel we can trust AI, then it might be different, but that’s not going to happen when most of its current use is dedicated to tricking users into a transaction at all cost, rather than providing the necessary information they need to make an informed decision.
  • Replacing experts and quality control. If it backfired for customer-favorite Duolingo, it will likely backfire for you, too.

The New Goals We Should Be Setting

Here’s where a new journey starts for us.

The collective search behavior has already changed not only as a consequence of any AI-powered view on the SERP that makes our consumption of information and decision-making faster and easier, but also as a function of the introduction of new channels and forms of content (the “Search Everywhere” revolution we hear all about now).

This brings us to new goals as search professionals:

  • Be omnipresent: It’s now the time to work with other channels to improve organic brand awareness and be in the mind of the user at every stage of the journey.
  • Remove friction: Now that we can get answers right off the search engine results page without even clicking to explore more, speed is the new normal, and anything that makes the journey slower is an abandonment risk. Getting your customers what they want straight off the bat (being transparent with your offer, removing unnecessary steps to find information, and improving user experience to complete an action) prevents them from going to seek better results from competitors.
  • Preserve your authenticity: Users want to trust you and feel safe in their choices, so don’t fall into the hype of scalability that could harm your brand.
  • Get to know your customers deeper: Keyword data is no longer enough. We need to know their emotional states when they search, what their frustrations are, and what problems they are trying to solve. And most of all, how they feel about our brand, our product, and what they expect from us, so that we can really meet them where they are before a thousand other options come into play.

We’ve been there before. We’ll adapt again. And I think we’ll come out okay (maybe even more skilled) on the other side of the AI hype.

More Resources:


Featured Image: Stock-Asso/Shutterstock

Are You Still Optimizing for Rankings? AI Search May Not Care. [Webinar] via @sejournal, @hethr_campbell

No ranking data. No impression data. 

So, how do you measure success when AI-generated answers appear and disappear, prompt by prompt?

With these significant changes to how we optimize for search, many brands are seeking to understand how to achieve SEO success.

Some Brands Are Winning in Search. Others? Invisible.

If your content isn’t appearing in AI-generated responses, like AI Overviews, ChatGPT, or Perplexity, you’re already losing ground to competitors.

👉 RSVP: Learn from the brands still dominating SERPs through AI search

In This Free Webinar, You’ll Learn:

  • Data-backed insights on what drives visibility and performance in AI search
  • A proven framework to drive results in AI search, and why this approach works
  • Purpose-built content strategies for driving success in Answer Engine Optimization (AEO) and Generative Engine Optimization (GEO).

This webinar helps enterprise SEOs and executives move from “I don’t know what’s happening in AI search” to “I have a data-driven strategy to compete and win.”

This session is designed for:

  • Marketing managers and SEO strategists looking to stay ahead.
  • Brand leaders managing performance visibility across platforms.
  • Content teams building for modern search behaviors.

You’ll walk away with a usable playbook and a better understanding of how to optimize for the answer, not the query.

Learn from what today’s winning brands are doing right.

Secure your spot, plus get the recording sent straight to your inbox if you can’t make it live.

Cloudflare Sparks SEO Debate With New AI Crawler Payment System via @sejournal, @MattGSouthern

Cloudflare’s new “pay per crawl” initiative has sparked a debate among SEO professionals and digital marketers.

The company has introduced a default AI crawler-blocking system alongside new monetization options for publishers.

This enables publishers to charge AI companies for access, which could impact how web content is consumed and valued in the age of generative search.

Cloudflare’s New Default: Block AI Crawlers

The system, now in private beta, blocks known AI crawlers by default for new Cloudflare domains.

Publishers can choose one of three access settings for each crawler:

  1. Allow – Grant unrestricted access
  2. Charge – Require payment at the configured, domain-wide price
  3. Block – Deny access entirely

Crawlers that attempt to access blocked content will receive a 402 Payment Required response. Publishers set a flat, sitewide price per request, and Cloudflare handles billing and revenue distribution.
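A minimal sketch of how such a gate might behave, assuming a hypothetical per-crawler policy table and a made-up header name (Cloudflare’s actual implementation details may differ):

```python
from http import HTTPStatus

# Hypothetical policy table mirroring the three settings described
# above; "charge" uses a single flat, sitewide price.
POLICY = {"GoodBot": "allow", "PayBot": "charge", "BadBot": "block"}
FLAT_PRICE_USD = 0.01  # illustrative figure, not a real Cloudflare price

def respond_to_crawler(user_agent: str, paid: bool = False):
    """Return (status_code, headers) for a crawl request. Unknown
    crawlers are blocked by default, matching the opt-out design."""
    action = POLICY.get(user_agent, "block")
    if action == "allow" or (action == "charge" and paid):
        return HTTPStatus.OK, {}
    if action == "charge":
        # 402 Payment Required, with an illustrative price header.
        return HTTPStatus.PAYMENT_REQUIRED, {"crawler-price": f"{FLAT_PRICE_USD:.2f} USD"}
    return HTTPStatus.FORBIDDEN, {}
```

A charging publisher would answer an unpaid `PayBot` request with 402 and the quoted price, and serve the content once payment is settled.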

Cloudflare wrote:

“Imagine asking your favorite deep research program to help you synthesize the latest cancer research or a legal brief, or just help you find the best restaurant in Soho — and then giving that agent a budget to spend to acquire the best and most relevant content.”

      Technical Details & Publisher Adoption

      The system integrates directly with Cloudflare’s bot management tools and works alongside existing WAF rules and robots.txt files. Authentication is handled using Ed25519 key pairs and HTTP message signatures to prevent spoofing.
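The Ed25519 primitive behind that authentication can be demonstrated with the third-party `cryptography` package. This is a minimal sketch of signing and verification only; the message contents here are a simplified stand-in, not Cloudflare's actual HTTP message signature format.

```python
# Minimal Ed25519 sign/verify sketch using the third-party "cryptography"
# package. The signed message is an illustrative stand-in, not Cloudflare's
# real HTTP message signature payload.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # crawler operator's key pair
public_key = private_key.public_key()       # the half shared for verification

# Sign the components of an outgoing request (simplified stand-in).
message = b"GET /article host=example.com crawler=my-bot"
signature = private_key.sign(message)

# The receiving edge verifies the signature against the known public key;
# verify() raises InvalidSignature if the request was spoofed or altered.
public_key.verify(signature, message)
```

Because the private key never leaves the crawler operator, a spoofed bot that merely copies a user-agent string cannot produce a valid signature, which is the property Grigorik praises later in this piece.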

      Cloudflare says early adopters include major publishers like Condé Nast, Time, The Atlantic, AP, BuzzFeed, Reddit, Pinterest, Quora, and others.

      While the current setup supports only flat pricing, the company plans to explore dynamic and granular pricing models in future iterations.

      SEO Community Shares Concerns

      While Cloudflare’s new controls can be changed manually, several SEO experts are concerned about the impact of making the system opt-out rather than opt-in.

“This won’t end well,” wrote Duane Forrester, Vice President of Industry Insights at Yext, warning that businesses may struggle to appear in AI-powered answers without realizing that crawler access to their sites is blocked unless a fee is paid.

      Lily Ray, Vice President of SEO Strategy and Research at Amsive Digital, noted the change is likely to spark urgent conversations with clients, especially those unaware that their sites might now be invisible to AI crawlers by default.

      Ryan Jones, Senior Vice President of SEO at Razorfish, expressed that most of his client sites actually want AI crawlers to access their content for visibility reasons.

      Some Say It’s a Necessary Reset

      Some in the community welcome the move as a long-overdue rebalancing of content economics.

      “A force is needed to tilt the balance back to where it once was,” said Pedro Dias, Technical SEO Consultant and former member of Google’s Search Quality team. He suggests that the current dynamic favors AI companies at the expense of publishers.

      Ilya Grigorik, Distinguished Engineer and Technical Advisor at Shopify, praised the use of cryptographic authentication, saying it’s “much needed” given how difficult it is to distinguish between legitimate and malicious bots.

      Under the new system, crawlers must authenticate using public key cryptography and declare payment intent via custom HTTP headers.

      Looking Ahead

      Cloudflare’s pay-per-crawl system formalizes a new layer of negotiation over who gets to access web content, and at what cost.

      For SEO pros, this adds complexity: visibility may now depend not just on ranking, but on crawler access settings, payment policies, and bot authentication.

      While some see this as empowering publishers, others warn it could fragment the open web, where content access varies based on infrastructure and paywalls.

      If generative AI becomes a core part of how people search, and the pipes feeding that AI are now toll roads, websites will need to manage visibility across a growing patchwork of systems, policies, and financial models.


      Featured Image: Roman Samborskyi/Shutterstock

      Study: Google AI Mode Shows 91% URL Change Across Repeat Searches via @sejournal, @MattGSouthern

      A new study analyzing 10,000 keywords reveals that Google’s AI Mode delivers inconsistent results.

      The research also shows minimal overlap between AI Mode sources and traditional organic search rankings.

      Published by SE Ranking, the study examines how AI Mode performs in comparison to Google’s AI Overviews and the top 10 organic search results.

      “The average overlap of exact URLs between the three datasets was just 9.2%,” the study notes, illustrating the volatility.

      Highlights From The Study

      AI Mode Frequently Pulls Different Results

      To test consistency, researchers ran the same 10,000 keywords through AI Mode three times on the same day. The results varied most of the time.

      In 21.2% of cases, there were no overlapping URLs at all between the three sets of responses.

Domain-level consistency was slightly higher: only 14.7% of cases showed no overlapping domains, indicating that AI Mode may cite different pages from the same websites.
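SE Ranking's exact formula isn't given in the excerpt, but the distinction between exact-URL and domain-level consistency across repeated runs can be measured roughly as follows. This is an illustrative sketch with made-up URLs, not the study's methodology.

```python
# Illustrative sketch of exact-URL vs. domain-level overlap across repeated
# runs of the same query. Example URLs are hypothetical.
from urllib.parse import urlparse

def shared_fraction(result_sets):
    """Fraction of items present in every run, out of all items seen."""
    common = set(result_sets[0]).intersection(*result_sets[1:])
    union = set().union(*result_sets)
    return len(common) / len(union) if union else 0.0

def to_domains(result_sets):
    """Collapse each run's URLs to their hostnames."""
    return [{urlparse(u).netloc for u in run} for run in result_sets]

runs = [
    ["https://a.com/x", "https://b.com/y"],
    ["https://a.com/x", "https://b.com/z"],  # same domain, different page
    ["https://a.com/x", "https://c.com/w"],
]
url_consistency = shared_fraction(runs)              # exact-URL overlap
dom_consistency = shared_fraction(to_domains(runs))  # domain-level overlap
```

In this toy example the domain-level figure comes out higher than the exact-URL figure for the same runs, which mirrors the pattern the study reports: AI Mode returns to the same sites more reliably than it returns to the same pages.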

      Minimal Overlap With Organic Results

      Only 14% of URLs in AI Mode responses matched the top 10 organic search results for the same queries. When looking at domain-level matches, overlap increased to 21.9%.

      In 17.9% of queries, AI Mode provided zero overlap with organic URLs, suggesting its selections could be independent of Google’s ranking algorithms.

      Most Links Come From Trusted Domains

      On average, each AI Mode response contains 12.6 citations.

The most common format is block links (90.8%), followed by in-text links (8.9%) and AI Mode SERP-style links (0.3%), which resemble traditional search engine results pages (SERPs).

      Despite the volatility, some domains consistently appeared across all tests. The top-cited sites were:

      1. Indeed (1.8%)
      2. Wikipedia (1.6%)
      3. Reddit (1.5%)
      4. YouTube (1.4%)
      5. NerdWallet (1.2%)

Taken together, Google’s own properties were cited more often than any single external domain, accounting for 5.7% of all links. These were mostly Google Maps business profiles.

      Differences From AI Overviews

      Comparing AI Mode to AI Overviews, researchers found an average URL overlap of just 10.7%, with domain overlap at 16%.

      This suggests the two systems operate under different logic despite both being AI-driven.

      What This Means For Search Marketers

      The high volatility of AI Mode results presents new challenges and new opportunities.

      Because results can vary even for identical queries, tracking visibility is more complex.

However, this fluidity also creates more openings for exposure. Unlike traditional search results, where a small set of top-ranking pages often dominates, AI Mode appears to refresh its citations frequently.

      That means publishers with relevant, high-quality content may have a better chance of appearing in AI Mode answers, even if they’re not in the organic top 10.

      To adapt to this environment, SEOs and content creators should consider:

      • Prioritizing domain-wide authority and topical relevance
      • Diversifying content across trusted platforms
      • Optimizing local presence through tools like Google Maps
      • Monitoring evolving inclusion patterns as AI Mode develops

      For more, see the full study from SE Ranking.


      Featured Image: Roman Samborskyi/Shutterstock

      DeepSeek App Faces Ban In Germany For Illegal Transfer Of User Data via @sejournal, @martinibuster

      German data protection official Meike Kamp has filed a formal request that Apple and Google remove the DeepSeek app from their respective app stores for the illegal transfer of users’ personal data to China, in violation of European Union law.

      Meike Kamp, the Commissioner for Data Protection and Freedom of Information, previously requested in May that DeepSeek voluntarily comply with the legal requirements for data transfer to other countries, stop the transfer of data altogether, or remove their app from the Apple and Google app stores.

After DeepSeek failed to respond to those requests, the official took the next step of filing a report of illegal content with both Apple and Google, which will now examine the report and decide DeepSeek’s future on their platforms.

      The data protection commissioner stated (translated from original German):

“The transfer of user data by DeepSeek to China is unlawful. DeepSeek has not been able to convincingly prove to my authority that the data of German users is protected in China at a level equivalent to that of the European Union. Chinese authorities have extensive access rights to personal data within the sphere of influence of Chinese companies. In addition, DeepSeek users in China do not have enforceable rights and effective remedies guaranteed in the European Union.

      I have therefore informed Google and Apple, as operators of the largest app platforms, about the violations and expect a blocking to be checked as soon as possible.”

      Takeaways

      • Enforcement of Data Privacy Laws
        Germany is taking formal steps to enforce EU data privacy regulations by targeting app distribution channels (Apple and Google).
      • International Data Transfer Violations
        DeepSeek is accused of transferring personal user data to China without ensuring protections as required by EU standards.
      • China’s Data Access
        The lack of enforceable user rights and legal remedies in China is a central concern, due to the government’s extensive access rights over data held by Chinese companies.
      • Escalation of Regulatory Action
        A report of illegal content was sent to Apple and Google after DeepSeek ignored a voluntary compliance request.
      • Decision Pending At Apple And Google
        Apple and Google will assess the reported violation and have the option to block the DeepSeek app in Germany.

      Germany’s data protection official has formally requested that Apple and Google remove the DeepSeek app from their app stores due to illegal data transfers of German users’ personal information to China. The request follows concerns over Chinese government access to sensitive user data, after DeepSeek failed to comply with EU data protection standards.

      Featured Image by Shutterstock/Mijansk786

      The State Of AI In Marketing: 6 Key Findings From Marketing Leaders via @sejournal, @theshelleywalsh

      AI is being rapidly implemented, but that doesn’t mean it’s being used effectively.

      The current lack of clear benchmarks and data about AI usage has meant that everyone has been operating in the dark.

      This led us to create our first State Of AI In Marketing report, so that chief marketing officers and marketing decision-makers can have insights to make better informed decisions as they navigate the fast-moving developments in our industry.

      We asked eight key questions about generative AI in marketing to a selection of U.S.-based decision-makers and leaders.

We received 155 responses, mostly from senior marketers, directors, and C-suite executives, offering fresh insights into how industry leaders perceive AI and how they are using it right now.

      While some marketers are unlocking major gains in efficiency, others are struggling with poor output quality, lack of brand voice consistency, and legal uncertainties.

      Our whitepaper presents their responses, broken down across five core themes:

      • Which AI tools are most broadly adopted among marketers.
      • How marketers are using AI.
      • The results they’re seeing.
      • The challenges they face.
      • Where they plan to invest next.

      Whether you’re leading a team or building a roadmap, this report is designed to help you benchmark your AI strategy to make confident decisions as our industry moves at an unprecedented pace.

      6 Key Findings From The Report

      1. ChatGPT Is Currently Dominating The Tools

      Over 83% of marketers said ChatGPT has positively impacted their efficiency or effectiveness.

      But it’s not the only player: Tools like Gemini, Perplexity, Claude, and Canva AI are also making their mark, with many marketers assembling AI “stacks” that combine different strengths across platforms.

      2. Content Has Seen The Most Tangible Impact From AI

Unsurprisingly, respondents reported the most impact around content: 64.5% saw value in content creation, 43.9% in content optimization, and 43.9% in idea generation.

However, AI is not replacing creativity; it’s augmenting it. Marketing teams are using tools to speed up work, optimize it, and break through creative blocks, not to replace human insight.

      The report also shares other key areas where AI has delivered the most value to those surveyed.

      3. Time Savings Were The Single Most Improved Outcome

      The majority of respondents (76.8%) cited time savings as the biggest improvement since adopting AI.

Marketers are gaining back hours and reallocating that time to more strategic work, enhancing productivity and efficiency.

      4. Direct ROI-Linked Results Are Lacking

      While operational efficiency is clearly impacted, strategic metrics like customer lifetime value, lead quality, and attribution remain largely unchanged.

      In other words, AI is streamlining how we work, but not necessarily improving what we deliver without human oversight and a sound strategy.

      5. Output Quality Remains A Top Concern

      More than half (54.2%) of respondents identified inaccurate, unreliable, or inconsistent output quality as the biggest limitation in using AI for marketing.

      This highlights a central theme that AI still requires substantial human oversight to produce marketing-ready content.

      6. Misinformation Is The No. 1 Concern

      The most cited concern about AI’s rise in marketing wasn’t job loss; it was the risk of misinformation.

      A full 62.6% of respondents flagged AI-generated misinformation as their top worry, revealing the importance of trust, accuracy, and reputation for AI-powered content.

      The report also highlights the other areas of concern where marketers are experiencing limitations and inefficiencies.

      More Key Findings In The State Of AI Report

      Marketing Leaders Are Planning To Invest In These Key Areas

      Marketing decision-makers surveyed are prioritizing AI investments where value has already been proven. The report breaks down how much of that investment is across analytics, customer experience, SEO, marketing attribution, or content production, amongst other areas.

      How Marketing Leaders Are Restructuring Their Teams

      The report findings also indicate whether and how our respondents restructured to accommodate AI within their organization.

Where The Biggest Impact Will Be Over The Next 12 Months

Possibly the most insightful section is where respondents shared their thoughts on what AI’s biggest impact on marketing will be over the next 12 months.

      Many expect a content explosion, where the market is flooded with AI-generated assets, raising the bar for originality and quality.

Others foresee a reshaped search industry and reduced roles, with those who don’t embrace AI getting left behind.

      But, not all forecasts are negative. Several marketers believe AI will level the playing field for small businesses, increase access to high-quality tools, and empower individuals to do the work of many.

      You can find many more comments and predictions in the full report.

      The State Of AI In Marketing Report For 2025 Can Help Shed Light

      Right now is one of the most challenging times our industry has faced, and marketing leaders have hard decisions to make.

      Hopefully, this whitepaper will help to shed light on how and where leaders can move forward.

      In the report, you can find:

      • A real-world look at what tools marketers are using now and how they’re stacking them.
• Insights into which areas of marketing AI is transforming, and where it’s still falling short.
      • A clear view into future investments, from personalization to performance tracking.
      • Actionable recommendations for how to evaluate, deploy, and govern AI effectively.
      • Perspectives from marketers navigating AI’s challenges.

      Download the full AI in marketing survey report to make confident decisions in your AI implementation strategy.


      Featured Image: Paulo Bobita/Search Engine Journal