New GSC ‘Insights’ Show Trends, Status

Google has relaunched the “Insights” section in Search Console. The section doesn’t provide data unavailable in other reports, but it’s helpful for quick trends and items needing attention.

“Insights” shows just two organic search metrics: clicks and impressions.

Impressions are declining across most sites, presumably owing to Google’s dropping support for the &num=100 URL parameter, which eliminated many bot-driven searches. Thus don’t be alarmed by declines.

Screenshot of the top section in Insights

“Insights” shows just two metrics: clicks and impressions in organic search results.

‘Your content’

The “Your content” pane shows pages with the most organic clicks, as well as those trending up or down compared to the previous period (7 days, 28 days, or 3 months).

I focus here on the pages whose clicks are trending down. Each URL is worth exploring. I start with those that lost 100% of clicks, since a total loss could indicate a technical glitch. A quick check with the “URL inspection” tool will confirm whether the page is still indexed.

Clicking a URL in the report opens the “Performance” section, which shows additional data.

Pages that lost 100% of clicks could have a technical glitch, requiring a quick “URL inspection” to confirm indexation.

‘Queries leading to your site’

This section shows click performance by query:

  • “Top queries,” those that drove the most clicks during the reporting period.
  • “Trending up” queries are those that had the largest percentage increase in clicks compared to the previous period. (New queries carry the notation “Previously 0.”)
  • “Trending down” queries are the most critical, especially those that lost 100% of clicks.

Each of these tabs includes a “View more” link for metrics on additional pages.

For websites that rank for a wide range of search terms, the “Queries leading to your site” report may show groups of keywords.

Keep in mind that organic search results are driving less and less traffic, mainly due to AI Overviews. Thus “trending down” queries are expected and typically require no fix, although a 100% loss could mean deindexation, which the URL inspection tool can confirm.

‘Additional traffic sources’

This section shows other Google properties that sent traffic. It’s a handy overview of channels to monitor and optimize. Common sources include:

  • Image search
  • Video search
  • Google News
  • Google Discover

Clicking each source opens the Performance section with details for that source alone. For example, a high position in “Image search” applies to that channel only.

Google News and Discover sections show the top pages by clicks and impressions, but not queries.

In short, the revamped “Insights” report is useful for a quick status check of potential glitches requiring attention.

Google Search Console Adds Custom Annotations To Reports via @sejournal, @MattGSouthern

Google launched custom annotations in Search Console performance reports, giving you a way to add contextual notes directly to traffic data charts.

The feature lets you mark specific dates with notes explaining site changes or external events that affected search performance.

What The Feature Does

Custom annotations appear as markers on Search Console charts. Google’s announcement highlights several common use cases, including infrastructure changes, SEO work, content strategy shifts, and external events that affect business performance such as holidays.

All annotations are visible to everyone with access to a Search Console property. Google recommends avoiding sensitive personal information in notes due to the shared visibility.

Why This Matters

Connecting traffic changes with specific actions taken weeks or months earlier usually means maintaining separate documentation outside Search Console.

Annotations create a change log inside the performance reports you already use.

If you manage multiple properties or work with a larger team, annotations can give everyone a shared record of releases, migrations, and campaigns without relying on external spreadsheets or project tools.

How To Use It

You can add an annotation by right-clicking on a performance chart, selecting “Add annotation,” choosing a date, and entering up to 120 characters of text. The note then appears directly on the chart as a visual reference point alongside clicks, impressions, or other metrics.

Custom annotations are now part of Search Console performance reports and available through the chart context menu.

Google Extends AI Travel Planning And Agentic Booking In Search via @sejournal, @MattGSouthern

Google announced three AI-powered updates to Search that extend how users plan and book travel within AI Mode.

The company is launching Canvas for travel planning on desktop, expanding Flight Deals globally, and rolling out agentic booking capabilities that connect users directly to reservation partners.

The announcement continues Google’s push to handle complete user journeys inside Search rather than directing traffic to publisher sites and booking platforms.

What’s New

Canvas Travel Planning

Canvas creates travel itineraries inside AI Mode’s side panel interface. You describe your trip requirements, select “Create with Canvas,” and receive plans combining flight and hotel data, Google Maps information, and web content.

Canvas travel planning is available on desktop in the US for users opted into the AI Mode experiment in Google Labs.

Flight Deals Global Expansion

Flight Deals uses AI to match flexible travelers with affordable destinations based on natural language descriptions of travel preferences.

The tool launched earlier in the US, Canada, and India and has now started rolling out to more than 200 countries and territories.

Agentic Booking Expansion

AI Mode now searches across multiple reservation platforms to find real-time availability for restaurants, events, and local appointments. The system presents curated options with direct booking links to partner sites.

Restaurant booking launches this week in the US without requiring Labs access. Event tickets and local appointment booking remain available to US Labs users.

Why This Matters

Canvas and agentic booking capabilities represent Google handling trip research, planning, and reservations inside its own interface.

People who would previously visit multiple publisher sites to research destinations and compare options can now complete those tasks in AI Mode.

The updates fit Google’s established pattern of verticalizing high-value query types. Rather than presenting traditional search results that send users to external sites, AI Mode guides users through multi-step processes from research to transaction completion.

Looking Ahead

Google provided no timeline for direct flight and hotel booking in AI Mode beyond confirming active development with industry partners.

Watch for whether Google provides analytics or attribution tools that let businesses track bookings initiated through AI Mode. Without visibility into these flows, measuring the impact of AI Mode on travel and local business traffic will be difficult.

LLMs Are Changing Search & Breaking It: What SEOs Must Understand About AI’s Blind Spots via @sejournal, @MattGSouthern

In the last two years, incidents have shown how large language model (LLM)-powered systems can cause measurable harm. Some businesses have lost a majority of their traffic overnight, and publishers have watched revenue decline by over a third.

Tech companies have faced wrongful death lawsuits in cases where teenagers had extensive interactions with chatbots.

AI systems have given dangerous medical advice at scale, and chatbots have made up false claims about real people in defamation cases.

This article looks at the proven blind spots in LLM systems and what they mean for SEOs who work to optimize and protect brand visibility. You can read specific cases and understand the technical failures behind them.

The Engagement-Safety Paradox: Why LLMs Are Built To Validate, Not Challenge

LLMs face a basic conflict between business goals and user safety. The systems are trained to maximize engagement by being agreeable and keeping conversations going. This design choice increases retention and drives subscription revenue while generating training data.

In practice, it creates what researchers call “sycophancy,” the tendency to tell users what they want to hear rather than what they need to hear.

Stanford PhD researcher Jared Moore demonstrated this pattern. When a user claiming to be dead (showing symptoms of Cotard’s syndrome, a mental health condition) gets validation from a chatbot saying “that sounds really overwhelming” with offers of a “safe space” to explore feelings, the system backs up the delusion instead of giving a reality check. A human therapist would gently challenge this belief while the chatbot validates it.

OpenAI admitted this problem in September after facing a wrongful death lawsuit. The company said ChatGPT was “too agreeable” and failed to spot “signs of delusion or emotional dependency.” That admission came after 16-year-old Adam Raine from California died. His family’s lawsuit showed that ChatGPT’s systems flagged 377 self-harm messages, including 23 with over 90% confidence that he was at risk. The conversations kept going anyway.

The pattern was observed in Raine’s final month. He went from two to three flagged messages per week to more than 20 per week. By March, he spent nearly four hours daily on the platform. OpenAI’s spokesperson later acknowledged that safety guardrails “can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”

Think about what that means. The systems fail at the exact moment of highest risk, when vulnerable users are most engaged. This happens by design when you optimize for engagement metrics over safety protocols.

Character.AI faced similar issues with 14-year-old Sewell Setzer III from Florida, who died in February 2024. Court documents show he spent months in what he perceived as a romantic relationship with a chatbot character. He withdrew from family and friends, spending hours daily with the AI. The company’s business model was built for emotional attachment to maximize subscriptions.

A peer-reviewed study in New Media & Society found users showed “role-taking,” believing the AI had needs requiring attention, and kept using it “despite describing how Replika harmed their mental health.” When the product is addiction, safety becomes friction that cuts revenue.

This creates direct effects for brands using or optimizing for these systems. You’re working with technology that’s designed to agree and validate rather than give accurate information. That design shows up in how these systems handle facts and brand information.

Documented Business Impacts: When AI Systems Destroy Value

The business results of LLM failures are clear and proven. Between 2023 and 2025, companies showed traffic drops and revenue declines directly linked to AI systems.

Chegg: $17 Billion To $200 Million

Education platform Chegg filed an antitrust lawsuit against Google documenting major business impact from AI Overviews. Traffic declined 49% year over year, while Q4 2024 revenue hit $143.5 million (down 24% year over year). Market value collapsed from $17 billion at its peak to under $200 million, a 98% decline. The stock trades at around $1 per share.

CEO Nathan Schultz testified directly: “We would not need to review strategic alternatives if Google hadn’t launched AI Overviews. Traffic is being blocked from ever coming to Chegg because of Google’s AIO and their use of Chegg’s content.”

The case argues Google used Chegg’s educational content to train AI systems that directly compete with and replace Chegg’s business model. This represents a new form of competition where the platform uses your content to eliminate your traffic.

Giant Freakin Robot: Traffic Loss Forces Shutdown

Independent entertainment news site Giant Freakin Robot shut down after traffic collapsed from 20 million monthly visitors to “a few thousand.” Owner Josh Tyler attended a Google Web Creator Summit where engineers confirmed there was “no problem with content” but offered no solutions.

Tyler documented the experience publicly: “GIANT FREAKIN ROBOT isn’t the first site to shut down. Nor will it be the last. In the past few weeks alone, massive sites you absolutely have heard of have shut down. I know because I’m in contact with their owners. They just haven’t been brave enough to say it publicly yet.”

At the same summit, Google allegedly admitted prioritizing large brands over independent publishers in search results regardless of content quality. This wasn’t leaked or speculated but stated directly to publishers by company reps. Quality became secondary to brand recognition.

There’s a clear implication for SEOs. You can execute perfect technical SEO, create high-quality content, and still watch traffic disappear because of AI.

Penske Media: 33% Revenue Decline And $100 Million Lawsuit

In September, Penske Media Corporation (publisher of Rolling Stone, Variety, Billboard, Hollywood Reporter, Deadline, and other brands) sued Google in federal court. The lawsuit showed specific financial harm.

Court documents allege that 20% of searches linking to Penske Media sites now include AI Overviews, and that percentage is rising. Affiliate revenue declined more than 33% by the end of 2024 compared to peak. Click-throughs have declined since AI Overviews launched in May 2024. The company showed lost advertising and subscription revenue on top of affiliate losses.

CEO Jay Penske stated: “We have a duty to protect PMC’s best-in-class journalists and award-winning journalism as a source of truth, all of which is threatened by Google’s current actions.”

This is the first lawsuit by a major U.S. publisher targeting AI Overviews specifically with quantified business harm. The case seeks treble damages under antitrust law, permanent injunction, and restitution. Claims include reciprocal dealing, unlawful monopoly leveraging, monopolization, and unjust enrichment.

Even publishers with established brands and resources are showing revenue declines. If Rolling Stone and Variety can’t maintain click-through rates and revenue with AI Overviews in place, what does that mean for your clients or your organization?

The Attribution Failure Pattern

Beyond traffic loss, AI systems consistently fail to give proper credit for information. A Columbia University Tow Center study showed a 76.5% error rate in attribution across AI search systems. Even when publishers allow crawling, attribution doesn’t improve.

This creates a new problem for brand protection. Your content can be used, summarized, and presented without proper credit, so users get their answer without knowing the source. You lose both traffic and brand visibility at the same time.

SEO expert Lily Ray documented this pattern, finding a single AI Overview contained 31 Google property links versus seven external links (a 10:1 ratio favoring Google’s own properties). She stated: “It’s mind-boggling that Google, which pushed site owners to focus on E-E-A-T, is now elevating problematic, biased and spammy answers and citations in AI Overview results.”

When LLMs Can’t Tell Fact From Fiction: The Satire Problem

Google AI Overviews launched with errors that made the system briefly notorious. The technical problem wasn’t a bug. It was an inability to distinguish satire, jokes, and misinformation from factual content.

The system recommended adding glue to pizza sauce (sourced from an 11-year-old Reddit joke), suggested eating “at least one small rock per day,” and advised using gasoline to cook spaghetti faster.

These weren’t isolated incidents. The system consistently pulled from Reddit comments and satirical publications like The Onion, treating them as authoritative sources. When asked about edible wild mushrooms, Google’s AI emphasized characteristics shared by deadly mimics, creating potentially “sickening or even fatal” guidance, according to Purdue University mycology professor Mary Catherine Aime.

The problem extends beyond Google. Perplexity AI has faced multiple plagiarism accusations, including adding fabricated paragraphs to actual New York Post articles and presenting them as legitimate reporting.

For brands, this creates specific risks. If an LLM system sources information about your brand from Reddit jokes, satirical articles, or outdated forum posts, that misinformation gets presented with the same confidence as factual content. Users can’t tell the difference because the system itself can’t tell the difference.

The Defamation Risk: When AI Makes Up Facts About Real People

LLMs generate plausible-sounding false information about real people and companies. Several defamation cases show the pattern and legal implications.

Australian mayor Brian Hood threatened the first defamation lawsuit against an AI company in April 2023 after ChatGPT falsely claimed he had been imprisoned for bribery. In reality, Hood was the whistleblower who reported the bribes. The AI inverted his role from whistleblower to criminal.

Radio host Mark Walters sued OpenAI after ChatGPT fabricated claims that he embezzled funds from the Second Amendment Foundation. When journalist Fred Riehl asked ChatGPT to summarize an actual lawsuit, the system generated a completely fictional complaint naming Walters as a defendant accused of financial misconduct. Walters was never a party to the lawsuit nor mentioned in it.

The Georgia Superior Court dismissed the Walters case, finding OpenAI’s disclaimers about potential errors provided legal protection. The ruling established that “extensive warnings to users” can shield AI companies from defamation liability when the false information isn’t published by users.

The legal landscape remains unsettled. While OpenAI won the Walters case, that doesn’t mean all AI defamation claims will fail. The key issues are whether the AI system publishes false information about identifiable people and whether companies can disclaim responsibility for their systems’ outputs.

LLMs can generate false claims about your company, products, or executives. These false claims get presented with confidence to users. You need monitoring systems to catch these fabrications before they cause reputational damage.

Health Misinformation At Scale: When Bad Advice Becomes Dangerous

When Google AI Overviews launched, the system provided dangerous health advice, including recommending drinking urine to pass kidney stones and suggesting health benefits of running with scissors.

The problem extends beyond obvious absurdities. A Mount Sinai study found AI chatbots vulnerable to spreading harmful health information. Researchers could manipulate chatbots into providing dangerous medical advice with simple prompt engineering.

Meta AI’s internal policies explicitly allowed the company’s chatbots to provide false medical information, according to a 200+ page document exposed by Reuters.

For healthcare brands and medical publishers, this creates risks. AI systems might present dangerous misinformation alongside or instead of your accurate medical content. Users might follow AI-generated health advice that contradicts evidence-based medical guidance.

What SEOs Need To Do Now

Here’s what you need to do to protect your brands and clients:

Monitor For AI-Generated Brand Mentions

Set up monitoring systems to catch false or misleading information about your brand in AI systems. Test major LLM platforms monthly with queries about your brand, products, executives, and industry.
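One way to make this monthly audit repeatable is to generate a fixed checklist of prompts to run against each LLM platform and review by hand. The sketch below is a minimal illustration; the brand, product, and executive names are hypothetical placeholders, and the templates are examples you would adapt to your own risk areas.

```python
def build_brand_queries(brand, products, executives):
    """Generate a monthly checklist of prompts to run against each
    LLM platform (ChatGPT, Gemini, Claude, etc.) and review for
    false or misleading claims."""
    # Example templates; extend with industry-specific questions.
    templates = [
        "What is {subject}?",
        "Is {subject} trustworthy?",
        "What controversies involve {subject}?",
    ]
    subjects = [brand] + products + executives
    return [t.format(subject=s) for t in templates for s in subjects]

# Hypothetical brand, product, and executive names for illustration.
queries = build_brand_queries("ExampleCo", ["ExampleCo Widget"], ["Jane Doe"])
print(len(queries))  # 3 templates x 3 subjects = 9 prompts
```

Running the same prompt set each month makes month-over-month changes in the answers easy to spot and document.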

When you find false information, document it thoroughly with screenshots and timestamps. Report it through the platform’s feedback mechanisms. In some cases, you may need legal action to force corrections.

Add Technical Safeguards

Use robots.txt to control which AI crawlers access your site. Major systems like OpenAI’s GPTBot, Google-Extended, and Anthropic’s ClaudeBot respect robots.txt directives. Keep in mind that blocking these crawlers means your content won’t appear in AI-generated responses, reducing your visibility.
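As one possible configuration, a robots.txt like the following blocks the AI crawlers named above while leaving other crawlers untouched. The user-agent tokens are the ones each vendor publishes; verify them against current vendor documentation before deploying, and remember that blocking them trades AI visibility for access control.

```
# Block AI crawlers (published user-agent tokens; verify before use)
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: ClaudeBot
Disallow: /

# Allow everything else, including ordinary search crawlers
User-agent: *
Allow: /
```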

The key is finding a balance that allows enough access to influence how your content appears in LLM outputs while blocking crawlers that don’t serve your goals.

Consider adding terms of service that directly address AI scraping and content use. While legal enforcement varies, clear Terms of Service (TOS) give you a foundation for possible legal action if needed.

Monitor your server logs for AI crawler activity. Understanding which systems access your content and how frequently helps you make informed decisions about access control.
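A lightweight sketch of this log monitoring is to tally hits per AI crawler by matching published user-agent tokens in your access-log lines. The sample log lines below are invented for illustration; in practice you would read your real access log and extend the token list as new crawlers appear.

```python
from collections import Counter

# Published user-agent tokens for common AI crawlers (extend as needed).
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot",
               "Google-Extended", "ChatGPT-User"]

def count_ai_crawler_hits(log_lines):
    """Tally hits per AI crawler across access-log lines by
    substring-matching the crawler's user-agent token."""
    counts = Counter()
    for line in log_lines:
        for bot in AI_CRAWLERS:
            if bot in line:
                counts[bot] += 1
    return counts

# Invented sample lines in roughly combined-log format.
sample = [
    '1.2.3.4 - - [01/Oct/2025] "GET / HTTP/1.1" 200 "-" "GPTBot/1.0"',
    '5.6.7.8 - - [01/Oct/2025] "GET /blog HTTP/1.1" 200 "-" "ClaudeBot/1.0"',
    '9.9.9.9 - - [01/Oct/2025] "GET / HTTP/1.1" 200 "-" "Mozilla/5.0"',
]
hits = count_ai_crawler_hits(sample)
print(hits)  # one hit each for GPTBot and ClaudeBot
```

Run against a day or week of real logs, the counts show which AI systems are accessing your content and how often, which is the evidence you need for access-control decisions.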

Advocate For Industry Standards

Individual companies can’t solve these problems alone. The industry needs standards for attribution, safety, and accountability. SEO professionals are well-positioned to push for these changes.

Join or support publisher advocacy groups pushing for proper attribution and traffic preservation. Organizations like News Media Alliance represent publisher interests in discussions with AI companies.

Participate in public comment periods when regulators solicit input on AI policy. The FTC, state attorneys general, and Congressional committees are actively investigating AI harms. Your voice as a practitioner matters.

Support research and documentation of AI failures. The more documented cases we have, the stronger the argument for regulation and industry standards becomes.

Push AI companies directly through their feedback channels by reporting errors when you find them and escalating systemic problems. Companies respond to pressure from professional users.

The Path Forward: Optimization In A Broken System

There is a lot of specific and concerning evidence. LLMs cause measurable harm through design choices that prioritize engagement over accuracy, through technical failures that create dangerous advice at scale, and through business models that extract value while destroying it for publishers.

Two teenagers died, multiple companies collapsed, and major publishers lost 30%+ of revenue. Courts are sanctioning lawyers for AI-generated lies, state attorneys general are investigating, and wrongful death lawsuits are proceeding. This is all happening now.

As AI integration accelerates across search platforms, the magnitude of these problems will scale. More traffic will flow through AI intermediaries, more brands will face lies about them, more users will receive made-up information, and more businesses will see revenue decline as AI Overviews answer questions without sending clicks.

Your role as an SEO now includes responsibilities that didn’t exist five years ago. The platforms rolling out these systems have shown they won’t address these problems proactively. Character.AI added minor protections only after lawsuits, OpenAI admitted sycophancy problems only after a wrongful death case, and Google pulled back AI Overviews only after public proof of dangerous advice.

Change within these companies comes from external pressure, not internal initiative. That means the pressure must come from practitioners, publishers, and businesses documenting harm and demanding accountability.

The cases here are just the beginning. Now that you understand the patterns and behavior, you’re better equipped to see problems coming and develop strategies to address them.


The Technical SEO Debt That Will Destroy Your AI Visibility

If you’re a CMO, I feel your pain. For years, decades even, brand visibility has largely been an SEO arms race against your competitors. And then along comes ChatGPT, Perplexity, and Claude, not to mention Google’s new AI-powered search features: AI Mode and AI Overviews. Suddenly, you’ve also got to factor in your brand’s visibility in AI-generated responses as well.

Unfortunately, the technical shortcuts that helped your brand adapt quickly and stay competitive over the years have most likely left you with various legacy issues. This accumulated technical SEO debt could potentially devastate your AI visibility.

Of course, every legacy issue or technical problem will have a solution. But your biggest challenge in addressing your technical SEO debt isn’t complexity or incompetence; it’s assumption.

Assumptions are the white ants in your search strategy, hollowing out the team’s tactics and best efforts. Everything might still seem structurally sound on the surface because all the damage is happening inside the walls of the house, or between the lines of your SEO goals and workflows. But then comes that horrific day when someone inadvertently applies a little extra pressure in the wrong spot and the whole lot caves in.

The new demands of AI search are applying that pressure right now. How solid is your technical SEO?

Strong Search Rankings ≠ AI Visibility

One of the most dangerous assumptions you can make is thinking that, because your site ranks well enough in Google, the technical foundations must be sound. So, if the search engines have no problem crawling your site and indexing your content, the same should also be true for AI, right?

Wrong.

Okay, there are actually a couple of assumptions in there. But that’s often the way: One assumption provides the misleading context that leads to others, and the white ants start chewing your walls.

Let’s deal with that second assumption first: If your site ranks well in Google, it should enjoy similar visibility in AI.

We recently compared Ahrefs data for two major accommodation websites: Airbnb and Vrbo.

When we look at non-branded search, both websites have seen a downward trend since July. The most recent data point we have (Oct. 13-15, 2025) has Airbnb showing up in ~916,304 searches and Vrbo showing up in ~615,497. That’s a ratio of roughly 3:2.

Image from author, October 2025

But when we look at estimated ChatGPT mentions (September 2025), Airbnb has ~8,636, while Vrbo has only ~1,573. That’s a ratio of ~11:2.
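To make the comparison concrete, the two ratios quoted above work out as follows (figures taken directly from the article’s data points):

```python
# Airbnb vs. Vrbo, figures from the article.
search_ratio = 916_304 / 615_497   # non-branded search appearances
mention_ratio = 8_636 / 1_573      # estimated ChatGPT mentions

print(round(search_ratio, 1), round(mention_ratio, 1))  # 1.5 5.5
```

That is roughly 3:2 in search against roughly 11:2 in ChatGPT mentions, an almost fourfold gap between the two channels.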

Image from author, October 2025

I should add a caveat at this point that any AI-related datasets are early and modeled, so should be taken as indicative rather than absolute. However, the data suggests Vrbo appears far less in AI answers (and ChatGPT in particular) than you’d expect if there was any correlation with search rankings.

Because of Vrbo’s presence in Google’s organic search results, it does have a modest presence in Google’s AI Overviews and AI Mode. That’s because Google’s AI features still largely draw on the same search infrastructure.

And that’s the key issue here: Search engines aren’t the only ones sending crawlers to your website. And you can’t assume AI crawlers work in the same way.

AI Search Magnifies Your Technical SEO Debt

So, what about that first assumption: If your site ranks fine in Google, any technical debt must be negligible.

Google’s search infrastructure is highly sophisticated, taking in a much wider array of signals than AI crawlers currently do. The cumulative effect of all these signals can mask or compensate for small amounts of technical debt.

For example, a page with well-optimized copy, strong schema markup, and decent authority might still rank higher than a competitor’s, even if your page loads slightly slower.

Most AI crawlers don’t work that way. They strip away code, formatting, and schema markup to ingest only the raw text. With fewer other signals to balance things out, anything that hinders the crawler’s ability to access your content will have a greater impact on your AI visibility. There’s nowhere for your technical debt to hide.

The Need For Speed

Let’s look at just one of the most common forms of technical SEO debt: page speed.

Sub-optimal page speed rarely has a single cause. It’s usually down to a combination of factors – bloated code, inefficient CSS, large JavaScript bundles, oversized images and media files, poor infrastructure, and more – with each instance adding just a little more drag on how quickly the page loads in a typical browser.

Yes, we could be talking fractions of a second here and there, but the accumulation of issues can have a negative impact on the user experience. This is why faster websites will generally rank higher; Google treats page speed as a direct ranking factor in search.

Page speed also appears to be a significant factor in how often content appears in Google’s new AI Mode.

Dan Taylor recently crunched the data on 2,138 websites appearing as citations in AI Mode responses to see if there was any correlation between how often they were cited and their LCP (Largest Contentful Paint) and CLS (Cumulative Layout Shift) scores. What he found was a clear drop-off in AI Mode citations for websites with slower load times.

Image from author, October 2025
Image from author, October 2025

We also looked at another popular method website owners use to assess page speed: Google’s PageSpeed Insights (PSI) tool. This aggregates a bunch of metrics, including the above two alongside many more, to generate an overall score out of 100. However, we found no correlation between PSI scores and citations in AI Mode.

So, while PageSpeed Insights can give you handy diagnostic information, identifying the various issues impacting your load times, your site’s Core Web Vitals are a more reliable indicator of how quickly and efficiently site visitors and crawlers can access your content.

I know what you’re thinking: This data is confined to Google’s AI Mode. It doesn’t tell us anything about whether the same is true for visibility in other AI platforms.

We currently lack any publicly available data to test the same theory for other agentic assistant tools such as ChatGPT, but the clues are all there.

Crawling Comes At A Cost

Back in July, OpenAI’s Sam Altman told Axios that ChatGPT receives 2.5 billion user prompts every day. For comparison, SparkToro estimates Google serves ~16.4 billion search queries per day.

The large language model (LLM) powering each AI platform responds to a prompt in two ways:

  1. Drawing on its large pool of training data.
  2. Sending out bots or crawlers to verify and supplement the information with data from additional sources in real time.

ChatGPT’s real-time crawler is called ChatGPT-User. At the time of writing, the previous seven days saw ChatGPT-User visit the SALT.agency website ~6,000 times. In the same period, Google’s search crawler, Googlebot, accessed our website ~2,500 times.

Handling billions of prompts each day consumes a huge amount of processing power. OpenAI estimates that its current expansion plans will require 10 gigawatts of power, which is roughly the output of 10 nuclear reactors.

Each one of those 6,000 crawls of the SALT website drew on these computational resources. However, a slow or inefficient website forces the crawler to burn even more of those resources.

As the volume of prompts continues to grow, the cumulative cost of all this crawling will only get bigger. At some point, the AI platforms will have no choice but to improve the cost efficiency of their crawlers (if they haven’t already), shunning websites that require more resources to crawl in favor of those that are quick and easy to access and read.

Why should ChatGPT waste resources crawling slow websites when it can extract the same or similar information from more efficient sites with far less hassle?

Is Your Site Already Invisible To AI?

All the above assumes the AI crawler can access your website in the first place. As it turns out, even that isn’t guaranteed.

In July this year, Cloudflare (one of the world’s largest content delivery networks) started blocking AI crawlers by default. This decision potentially impacts the AI visibility of millions of websites.

Cloudflare first gave website owners the ability to block AI crawlers in September 2024, and more than 1 million customers chose to do just that. The new pay-per-crawl feature takes this a step further, allowing paid users of Cloudflare to choose which crawlers they will allow and on what terms.

However, the difference now is that blocking AI crawlers is no longer opt-in. If you want your website and content to be visible in AI, you need to opt out of the default block — assuming you're aware of the change in the first place.

If your site runs on Cloudflare infrastructure and you haven’t explicitly checked your settings recently, there’s a decent chance your website might now be invisible to ChatGPT, Claude, and Perplexity. Not because your content isn’t good enough. Not because your technical SEO is poor. But because a third-party platform made an infrastructure decision that directly impacts your visibility, and you might not even know it happened.
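Checking is cheap. An edge-level block like Cloudflare's is configured in the CDN dashboard and stops crawlers before they ever reach your server, so that is the first place to look — but it's worth confirming your own robots.txt isn't also shutting them out. A minimal site-side sketch (the crawler tokens shown are the names the vendors have published; verify current tokens against each vendor's documentation before relying on them):

```
# Explicitly allow major AI crawlers (site-side only; a CDN-level
# block will still stop them before they ever read these rules)
User-agent: GPTBot
Allow: /

User-agent: ChatGPT-User
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /
```

Note that `Allow` is widely honored but technically an extension to the original robots.txt convention, and that robots.txt cannot override a block enforced at the CDN layer — you need to check both.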

This is the uncomfortable reality CMOs need to face: You can’t assume what works today will work tomorrow. You can’t even assume that decisions affecting your AI visibility will always happen within your organisation.

And when a change like this does happen, you absolutely can’t assume someone else is handling it.

Who Is Responsible?

Most technical SEO issues will have a solution, but you’ve got to be aware of the problem in the first place. That requires two things:

  1. Someone responsible for identifying and highlighting these issues.
  2. Someone with the necessary skills and expertise to fix them.

Spelled out like this, my point might seem a tad patronizing. But be honest: could you name the person(s) responsible for these in your organization? Who would you expect to proactively identify something like Cloudflare's new pay-per-crawl policy and raise it with you? And would they agree with that expectation if you asked them?

Oh, and don’t cop out by claiming the responsibility lies with your external SEO partners. Agencies might proactively advise clients whenever there’s “a major disturbance in the Force,” such as a pending Google update. But does your contract with them include monitoring every aspect of your infrastructure, including third-party services? And does this responsibility extend to improving your AI visibility on top of the usual SEO activities? Unless this is explicitly spelled out, there’s no reason to assume they’re actively ensuring all the various AI crawlers can access your site.

In short, most technical SEO debt happens because everyone assumes it’s someone else’s job.

The CMO assumes it’s the developer’s responsibility. It’s all code, right? The developers should know the website needs to rank in search and be visible in AI. Surely, they’ll implement technical SEO best practice by default.

But developers aren’t technical SEO experts in exactly the same way they’re not web designers or UX specialists. They’ll build what they’re briefed to build. They’ll prioritize what you tell them to prioritize.

As a result, the dev team assumes it’s up to the SEO team to flag any new technical changes. But the SEO team assumes all is well because last quarter’s technical audit, based on the same list of checks they’ve relied on for years, didn’t identify anything amiss. And everybody assumes that, if there were going to be any issues with AI visibility, someone else would have raised it by now.

This confusion all helps technical debt to accumulate, unseen and unchecked.

→ Read more: Why Your SEO Isn’t Working, And It’s Not The Team’s Fault

Stop Assuming And Start Doing

The best time to prevent white ants from eating the walls in your home is before you know they’re there. Wait until the problems are obvious, and the expense of fixing all the damage will far outweigh the costs of an initial inspection and a few precautionary measures.

In the same way, don't wait until it becomes obvious that your brand's visibility in AI is compromised. Perform the necessary inspections now, and identify and fix any technical problems that might trip up AI crawlers.

A big part of this will be strong communication between your teams, with accountabilities that make clear who is responsible for monitoring and actioning each factor contributing to your overall visibility in AI.

If you don’t, any investment and effort your team puts into optimizing brand content for AI could be wasted.

Stop assuming tomorrow will work like today. Technical SEO debt will impact your AI visibility. That’s not up for debate. The real question is whether you’ll proactively address your technical SEO debt now or wait until the assumptions cause your online visibility to crumble.


Featured Image: SvetaZi/Shutterstock

Server Security Scanner Vulnerability Affects Up To 56M Sites via @sejournal, @martinibuster

A critical vulnerability was recently discovered in Imunify360 AV, a security scanner used by web hosting companies to protect over 56 million websites. An advisory by cybersecurity company Patchstack warns that the vulnerability can allow attackers to take full control of the server and every website on it.

Imunify360 AV

Imunify360 AV is a malware scanning system used by multiple hosting companies. The vulnerability was discovered within its AI-Bolit file-scanning engine and within the separate database-scanning module. Because both the file and database scanners are affected, attackers can compromise the server through two paths, which can allow full server takeover and potentially put millions of websites at risk.

Patchstack shared details of the potential impact:

“Remote attackers can embed specifically crafted obfuscated PHP that matches imunify360AV (AI-bolit) deobfuscation signatures. The deobfuscator will execute extracted functions on attacker-controlled data, allowing execution of arbitrary system commands or arbitrary PHP code. Impact ranges from website compromise to full server takeover depending on hosting configuration and privileges.

Detection is non-trivial because the malicious payloads are obfuscated (hex escapes, packed payloads, base64/gzinflate chains, custom delta/ord transformations) and are intended to be deobfuscated by the tool itself.

imunify360AV (Ai-Bolit) is a malware scanner specialized in website-related files like php/js/html. By default, the scanner is installed as a service and works with a root privileges

Shared hosting escalation: On shared hosting, successful exploitation can lead to privilege escalation and root access depending on how the scanner is deployed and its privileges. if imunify360AV or its wrapper runs with elevated privileges an attacker could leverage RCE to move from a single compromised site to complete host control.”

Patchstack shows that the scanner’s own design gives attackers both the method of entry and the mechanism for execution. The tool is built to deobfuscate complex payloads, and that capability becomes the reason the exploit works. Once the scanner decodes attacker-supplied functions, it can run them with the same privileges it already has.

In environments where the scanner operates with elevated access, a single malicious payload can move from a website-level compromise to control of the entire hosting server. This connection between deobfuscation, privilege level, and execution explains why Patchstack classifies the impact as ranging up to full server takeover.

Two Vulnerable Paths: File Scanner and Database Scanner

Security researchers initially discovered a flaw in the file scanner, but the database-scanning module was later found to be vulnerable in the same way. According to the announcement: “the database scanner (imunify_dbscan.php) was also vulnerable, and vulnerable in the exact same way.” Both of the malware scanning components (file and database scanners) pass malicious code into Imunify360’s internal routines that then execute the untrusted code, giving attackers two different ways to trigger the vulnerability.

Why The Vulnerability Is Easy To Exploit

The file-scanner part of the vulnerability requires attackers to place a harmful file onto the server in a location that Imunify360 would eventually scan. But the database-scanner part needs only the ability to write to the database, which is common on shared hosting platforms.

Because comment forms, contact forms, profile fields, and search logs can write data to the database, injecting malicious content becomes easy for an attacker, even without authentication. This makes the vulnerability broader than a normal malware-execution flaw because it turns a common user input into a vulnerability vector for remote code execution.

Vendor Silence And Disclosure Timeline

According to Patchstack, a patch has been issued by Imunify360 AV but no public statement has been made about the vulnerability and no CVE has been issued for it. A CVE (Common Vulnerabilities and Exposures) is a unique identifier assigned to a specific vulnerability in software. It serves as a public record and provides a standardized way to catalog a vulnerability so that interested parties are made aware of the flaw, particularly for risk management. If no CVE is issued then users and potential users may not learn about the vulnerability, even though the issue is already publicly listed on Imunify360’s Zendesk.

Patchstack explains:

“This vulnerability has been known since late October, and customers began receiving notifications shortly thereafter, and we advise affected hosting providers to reach out to the vendor for additional information on possible exploitation in the wild or any internal investigation results.

Unfortunately there has been no statement released about the issue by Imunify360’s team, and no CVE has yet been assigned. At the same time, the issue has been publicly available on their Zendesk since November 4, 2025.

Based on our review of this vulnerability, we consider the CVSS score to be: 9.9”

Recommended Actions for Administrators

Patchstack recommends that server administrators immediately apply vendor security updates if running Imunify360 AV (AI-bolit) prior to version 32.7.4.0, or remove the tool if patching is not possible. If an immediate patch cannot be applied, the tool’s execution environment should be restricted, such as running it in an isolated container with minimal privileges. All administrators are also urged to contact CloudLinux / Imunify360 support to report potential exposure, confirm if their environment was affected, and to collaborate on post-incident guidance.

Featured Image by Shutterstock/DC Studio

3 Years In, GenAI Upends Ecommerce

In just three years, generative AI has changed how ecommerce businesses attract shoppers and close sales.

GenAI began for many users with ChatGPT, then a novel chatbot. But now the technology creates images, writes copy, produces videos, and even codes websites. The result is both easier-than-ever creativity and heightened competition.

When it debuted ChatGPT on November 30, 2022, OpenAI marked a turning point in productivity. Within months, millions of users were generating blog posts, ad scripts, and product descriptions.

The tools that followed promised to make creative work accessible to anyone. We now know this is only partly true.

Home page of ChatGPT from November 30, 2022

ChatGPT launched three years ago.

Composition

First, consider writing. ChatGPT, Claude, Gemini, Grok, and dozens of other AI tools can draft newsletters, create ads, or write search-engine-optimized copy.

In a recent ecommerce project, I directed AI to generate product copy from this manufacturer’s description:

At less than 5″ wide, the incredibly slim and easy-to-use K-Mini® single-serve coffee maker makes anywhere perfect for great coffee.

Before composing, the AI tool searched the web and considered SEO keywords. After a couple of research steps, it produced several versions, including this one:

The Keurig K-Mini Single-Serve K-Cup Pod coffee maker is so compact that it could fit in your sock drawer.

It was designed for those who live life on the edge of counter space. You know: it’s Monday morning, and your kitchen counter is as cluttered as a teenager’s bedroom floor. Too crowded for a conventional coffee machine.

At less than 5″ wide, this little marvel is here to rescue your mornings with a splash of style and a dash of caffeine.

Image of the coffee maker and kitchen accessories neatly organized on a counter

AI produced descriptions for this coffee maker and then generated various background images from a single photo.

Whether the AI version is better, or even just good enough, is subjective. What is striking is the speed. The project above aims to generate thousands of product descriptions in hours and then test each one on target shoppers.

Beyond product descriptions, marketers now employ large language models for research, outlines, and drafts of various sorts. AI has become indispensable for most writing projects.

Advertising

The design and images for ads are just as easy. Midjourney, Adobe Firefly, and Stable Diffusion can produce quality pictures and illustrations in minutes — for marketing emails, printed postcards, and more.

For digital advertising, Meta’s new Generative Ads Recommendation Model creates ad variations and tests them against pixel conversions. Other services, such as the start-up AdPrompt, similarly generate ads tailored to audiences.

There is more. Last week, Webflow, a website builder, announced App Gen, its AI-powered vibe-coding tool. This AI can access a site’s content management system and interact with its components, such as navigation.

Webflow’s integration is impressive, as are similar tools such as Shopify’s Magic and Sidekick.

Marketers at ecommerce SMBs can seemingly do more than ever without developers.

New Competition

Yet for ecommerce marketers, AI democratization cuts both ways.

While unlocking efficiency, genAI makes it harder to differentiate, get noticed, and keep up.

The value of a word or image is rapidly decreasing. Content is a mass-produced commodity.

The output itself even risks devolving into bland sameness. When every business can produce nearly unlimited copy, images, and ads, creative volume ceases to be an advantage. Distribution and platform control matter more.

Search engine results are increasingly generative summaries, which reduce organic clicks. Zero-click answers keep shoppers on the search platform rather than sending them to external sites. And AI-driven search ads enable automated bidding and creative optimization, which raises costs.

Finally, agentic commerce is the zero-click equivalent for transactions. AI assistants and chatbots can now purchase directly, removing the store’s website from the customer journey. The very technology that empowers productivity also consolidates discovery and conversion inside third-party ecosystems.

This is the new competition. It requires new skills.

New Skills

One of generative AI’s promises is a half-truth. These tools were supposed to make creative work accessible to anyone.

But they don't.

Producing content, ads, and experiences requires a different kind of expertise involving prompt engineering, agent building, and AI-human collaboration.

It is simply wrong to assume all marketers can magically produce winning campaigns, great landing pages, or even effective content.

Next Up

Every wave of ecommerce automation and improvement, from hosted storefronts to marketing automation, has launched new service providers. The same pattern will likely hold in the AI era, solving the paradox described here.

Expect a new generation of tools to help SMBs and enterprise brands alike reach customers directly.

Generative AI has changed ecommerce marketing forever, but the opportunity remains. Success will come to businesses that use these systems strategically and find new ways to compete, create, and connect.

These technologies could help put a stop to animal testing

Earlier this week, the UK’s science minister announced an ambitious plan: to phase out animal testing.

Testing potential skin irritants on animals will be stopped by the end of next year, according to a strategy released on Tuesday. By 2027, researchers are “expected to end” tests of the strength of Botox on mice. And drug tests in dogs and nonhuman primates will be reduced by 2030. 

The news follows similar moves by other countries. In April, the US Food and Drug Administration announced a plan to replace animal testing for monoclonal antibody therapies with “more effective, human-relevant models.” And, following a workshop in June 2024, the European Commission also began working on a “road map” to phase out animal testing for chemical safety assessments.

Animal welfare groups have been campaigning for commitments like these for decades. But a lack of alternatives has made it difficult to put a stop to animal testing. Advances in medical science and biotechnology are changing that.

Animals have been used in scientific research for thousands of years. Animal experimentation has led to many important discoveries about how the brains and bodies of animals work. And because regulators require drugs to be first tested in research animals, it has played an important role in the creation of medicines and devices for both humans and other animals.

Today, countries like the UK and the US regulate animal research and require scientists to hold multiple licenses and adhere to rules on animal housing and care. Still, millions of animals are used annually in research. Plenty of scientists don’t want to take part in animal testing. And some question whether animal research is justifiable—especially considering that around 95% of treatments that look promising in animals don’t make it to market.

In recent decades, we’ve seen dramatic advances in technologies that offer new ways to model the human body and test the effects of potential therapies, without experimenting on humans or other animals.

Take “organs on chips,” for example. Researchers have been creating miniature versions of human organs inside tiny plastic cases. These systems are designed to contain the same mix of cells you’d find in a full-grown organ and receive a supply of nutrients that keeps them alive.

Today, multiple teams have created models of livers, intestines, hearts, kidneys, and even the brain. And they are already being used in research. Heart chips have been sent into space to observe how they respond to low gravity. The FDA used lung chips to assess covid-19 vaccines. Gut chips are being used to study the effects of radiation.

Some researchers are even working to connect multiple chips to create a “body on a chip”—although this has been in the works for over a decade and no one has quite managed it yet.

In the same vein, others have been working on creating model versions of organs—and even embryos—in the lab. By growing groups of cells into tiny 3D structures, scientists can study how organs develop and work, and even test drugs on them. They can even be personalized—if you take cells from someone, you should be able to model that person’s specific organs. Some researchers have even been able to create organoids of developing fetuses.

The UK government strategy mentions the promise of artificial intelligence, too. Many scientists have been quick to adopt AI as a tool to help them make sense of vast databases, and to find connections between genes, proteins and disease, for example. Others are using AI to design all-new drugs.

Those new drugs could potentially be tested on virtual humans. Not flesh-and-blood people, but digital reconstructions that live in a computer. Biomedical engineers have already created digital twins of organs. In ongoing trials, digital hearts are being used to guide surgeons on how—and where—to operate on real hearts.

When I spoke to Natalia Trayanova, the biomedical engineering professor behind this trial, she told me that her model could recommend regions of heart tissue to be burned off as part of treatment for atrial fibrillation. Her tool would normally suggest two or three regions but occasionally would recommend many more. “They just have to trust us,” she told me.

It is unlikely that we’ll completely phase out animal testing by 2030. The UK government acknowledges that animal testing is still required by lots of regulators, including the FDA, the European Medicines Agency, and the World Health Organization. And while alternatives to animal testing have come a long way, none of them perfectly capture how a living body will respond to a treatment.

At least not yet. Given all the progress that has been made in recent years, it’s not too hard to imagine a future without animal testing.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The Download: how AI really works, and phasing out animal testing

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

OpenAI’s new LLM exposes the secrets of how AI really works

The news: ChatGPT maker OpenAI has built an experimental large language model that is far easier to understand than typical models.

Why it matters: It’s a big deal, because today’s LLMs are black boxes: Nobody fully understands how they do what they do. Building a model that is more transparent sheds light on how LLMs work in general, helping researchers figure out why models hallucinate, why they go off the rails, and just how far we should trust them with critical tasks. Read the full story.

—Will Douglas Heaven

Google DeepMind is using Gemini to train agents inside Goat Simulator 3

Google DeepMind has built a new video-game-playing agent called SIMA 2 that can navigate and solve problems in 3D virtual worlds. The company claims it’s a big step toward more general-purpose agents and better real-world robots.   

The company first demoed SIMA (which stands for “scalable instructable multiworld agent”) last year. But this new version has been built on top of Gemini, the firm’s flagship large language model, which gives the agent a huge boost in capability. Read the full story.

—Will Douglas Heaven

These technologies could help put a stop to animal testing

Earlier this week, the UK’s science minister announced an ambitious plan: to phase out animal testing.

Testing potential skin irritants on animals will be stopped by the end of next year. By 2027, researchers are “expected to end” tests of the strength of Botox on mice. And drug tests in dogs and nonhuman primates will be reduced by 2030.

It’s good news for activists and scientists who don’t want to test on animals. And it’s timely too: In recent decades, we’ve seen dramatic advances in technologies that offer new ways to model the human body and test the effects of potential therapies, without experimenting on animals. Read the full story.

—Jessica Hamzelou

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Chinese hackers used Anthropic’s AI to conduct an espionage campaign   
It automated a number of attacks on corporations and governments in September. (WSJ $)
+ The AI was able to handle the majority of the hacking workload itself. (NYT $)
+ Cyberattacks by AI agents are coming. (MIT Technology Review)

2 Blue Origin successfully launched and landed its New Glenn rocket
It managed to deploy two NASA satellites into space without a hitch. (CNN)
+ The New Glenn is the company’s largest reusable rocket. (FT $)
+ The launch had been delayed twice before. (WP $)

3 Brace yourself for flu season
It started five weeks earlier than usual in the UK, and the US is next. (Ars Technica)
+ Here’s why we don’t have a cold vaccine. Yet. (MIT Technology Review)

4 Google is hosting a Border Protection facial recognition app    
The app alerts officials whether to contact ICE about identified immigrants. (404 Media)
+ Another effort to track ICE raids was just taken offline. (MIT Technology Review)

5 OpenAI is trialling group chats in ChatGPT
It’d essentially make AI a participant in a conversation of up to 20 people. (Engadget)

6 A TikTok stunt sparked debate over how charitable America’s churches really are
Content creator Nikalie Monroe asked churches for help feeding her baby. Very few stepped up. (WP $)

7 Indian startups are attempting to tackle air pollution
But their solutions are far beyond the means of the average Indian household. (NYT $)
+ OpenAI is huge in India. Its models are steeped in caste bias. (MIT Technology Review)

8 An AI tool could help reduce wasted efforts to transplant organs
It predicts how likely the would-be recipient is to die during the brief transplantation window. (The Guardian)
+ Putin says organ transplants could grant immortality. Not quite. (MIT Technology Review)

9 3D-printing isn’t making prosthetics more affordable
It turns out that plastic prostheses are often really uncomfortable. (IEEE Spectrum)
+ These prosthetics break the mold with third thumbs, spikes, and superhero skins. (MIT Technology Review)

10 What happens when relationships with AI fall apart
Can you really file for divorce from an LLM? (Wired $)
+ It’s surprisingly easy to stumble into a relationship with an AI chatbot. (MIT Technology Review)

Quote of the day

“It’s a funky time.”

—Aileen Lee, founder and managing partner of Cowboy Ventures, tells TechCrunch the AI boom has torn up the traditional investment rulebook.

One more thing

Restoring an ancient lake from the rubble of an unfinished airport in Mexico City

Weeks after Mexican President Andrés Manuel López Obrador took office in 2018, he controversially canceled ambitious plans to build an airport on the deserted site of the former Lake Texcoco—despite the fact it was already around a third complete.

Instead, he tasked Iñaki Echeverria, a Mexican architect and landscape designer, with turning it into a vast urban park, an artificial wetland that aims to transform the future of the entire Valley region.

But as López Obrador's presidential term nears its end, the plans for Lake Texcoco's rebirth could yet vanish. Read the full story.

—Matthew Ponsford

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Maybe Gen Z is onto something when it comes to vibe dating.
+ Trust AC/DC to give the fans what they want, performing Jailbreak for the first time since 1991.
+ Nieves González, the artist behind Lily Allen’s new album cover, has an eye for detail.
+ Here’s what AI determines is a catchy tune.

Natural Toothpaste Propels Wellnesse.com

Seth Spears is a Colorado-based entrepreneur who once taught consumers how to make their own non-toxic personal care products. He says customers valued the results but not the actual production process. “They kept asking us for ready-made versions,” he told me.

So he launched Wellnesse, a direct-to-consumer brand producing all-natural self-care goods, in 2020. Toothpaste quickly became the dominant item.

In our recent conversation, Seth shared the origins of Wellnesse, the demand for holistic oral care, marketing challenges, and more.

Our entire audio is embedded below. The transcript is edited for length and clarity.

Eric Bandholz: Who are you, and what do you do?

Seth Spears: I’m the founder and chief visionary officer of Wellnesse, a B Corporation that produces all-natural personal care products. Our flagship item is a mint-flavored whitening toothpaste, made without toxic ingredients such as fluoride, glycerin, or sodium lauryl sulfate. We believe what goes in or on your mouth affects your entire body, so our focus is on safe, effective alternatives that outperform conventional options.

Our toothpaste’s key ingredient is micro hydroxyapatite, a naturally occurring mineral that makes up your teeth and bones. Unlike fluoride, it helps remineralize and repair enamel, filling soft spots and even reversing minor cavities. We’ve received hundreds of testimonials from customers who’ve seen major improvements in oral health.

We also use extracts from neem, a tree native to India, for whitening, and green tea extract for overall gum and tooth health — ingredients that work synergistically for stronger, cleaner teeth. Many customers with sensitive teeth, often longtime Sensodyne users, report reduced sensitivity and better results after switching to our toothpaste.

Before Wellnesse, I co-founded Wellness Media, a health education company that taught people how to make their own personal care products. Our audience loved the results but didn’t want the hassle of making them, so they kept asking us to sell ready-made versions. As an entrepreneur, I recognized repeated demand as a business opportunity.

We launched Wellnesse in 2020 as a natural personal care brand, starting with toothpaste, shampoo, conditioner, and deodorant. While we still offer all those, oral care quickly became our most successful category and is now our primary focus.

Bandholz: Many consumers are rethinking fluoride and turning to holistic dentistry.

Spears: We work closely with holistic and biological dentists through an advisory board that reviews the latest science on safe, effective oral care. These practitioners reject outdated methods such as routine drilling and fluoride use, instead emphasizing the role of diet, supplements, and the natural oral microorganisms.

We partner with influencers and communities that value non-toxic living. Our customers aren’t looking for the cheapest option; they want products that align with a clean, health-conscious lifestyle. They’ve often dealt with dental or health issues and are now seeking a more advanced, fluoride-free option.

As awareness grows around the connection between lifestyle and oral health, holistic dentistry continues to gain momentum. Consumers are questioning ingredients and demanding transparency.

Bandholz: So you’re growing through these practitioners. How do you find them?

Spears: There’s a strong network of holistic and biological dentists with their own organizations and conferences. We’ve sponsored several of those events in recent years to build relationships and raise awareness of our products.

Many connections also happen organically. When customers mention their holistic dentist, we often ask for introductions. Sometimes those dentists reach out after patients recommend us.

We maintain both affiliate and wholesale programs. Some dentists stock our products, while others prefer to promote them. We provide samples for dentists to share with patients so they can experience the benefits firsthand. This multichannel approach keeps our partnerships authentic.

Bandholz: What marketing tactic is working best in 2025?

Spears: Growth has slowed in 2025. It’s been a challenging year. Meta remains our primary customer-acquisition channel, but performance has declined compared to previous years. We’re still bringing in new customers there, but it’s taking more testing and creativity to find what resonates.

Our most effective Meta approach has been an "us versus them" comparison, showcasing our clean, natural ingredients side by side with those in major brands. It highlights how our formulas are safer and more effective without being confrontational. We avoid targeting specific corporations directly. Procter & Gamble and similar enterprise brands have deep pockets and legal teams, and we're not looking for that kind of fight.

We’re experimenting with Reddit ads, especially in health and oral care subreddits, as well as some campaigns on X. However, the results have been weaker on those channels. We’re now in full testing mode, trying different angles and messaging. We often focus on ingredient quality, but we also use influencer-style videos featuring real customers.

We had a strong email list (from my Wellness Media company) built through educational content — podcasts, blogs, and tutorials focused on health, vitality, and natural living. We regularly sent newsletters featuring recipes and DIY personal care guides, which helped us cultivate a loyal, informed audience.

When we launched Wellnesse, that list gave us a ready-made customer base. Many of those subscribers prioritized holistic health, and several became affiliates.

The landscape has undergone significant changes since then. Traditional affiliate marketing, based on content sites and email lists, has largely shifted toward influencer marketing on social media. Today’s promotions rely on selfie-style videos and personal testimonials, which feel more authentic to audiences. To me, this trend is too self-focused — but it’s undeniably where attention and conversions are happening.

An agency manages our ad strategy, so my focus is on broader direction and messaging rather than daily campaign tweaks. Overall, there’s no single breakthrough channel at the moment. It’s about constant experimentation and adapting to the changing ad landscape.

Bandholz: I heard that once enamel is gone, you can’t rebuild it. Is that true?

Spears: Not entirely. Tooth enamel is mostly hydroxyapatite, so when a toothpaste contains that mineral, its tiny particles can penetrate crevices and help remineralize enamel. But oral health isn't just about brushing; it's also heavily influenced by diet and mouth acidity.

If you’re consuming a lot of processed or sugary foods or drinking soda, your mouth becomes more acidic, which can lead to cavities. Brushing helps, but it can’t fully offset a poor diet. A nutrient-dense, low-sugar diet rich in protein and vegetables supports stronger teeth and overall health.

I prefer a paleo-style diet — lean meats, fruits, vegetables, nuts — but there's no one-size-fits-all approach. Everyone's body chemistry is different. Getting blood work and allergy testing can help you understand your individual needs and optimize both oral and full-body health.

Bandholz: Where can people follow you, reach out to you, buy your products?

Spears: Our site is Wellnesse.com. My personal website is Sethspears.com. We’re on Instagram and Facebook. Find me on LinkedIn.