What is the chance your plane will be hit by space debris?

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

In mid-October, a mysterious object cracked the windshield of a packed Boeing 737 cruising at 36,000 feet above Utah, forcing the pilots into an emergency landing. The internet was suddenly buzzing with the prospect that the plane had been hit by a piece of space debris. We still don’t know exactly what hit the plane—likely a remnant of a weather balloon—but it turns out the speculation online wasn’t that far-fetched.

That’s because while the risk of flights being hit by space junk is still small, it is, in fact, growing. 

About three pieces of old space equipment—used rockets and defunct satellites—fall into Earth’s atmosphere every day, according to estimates by the European Space Agency. By the mid-2030s, there may be dozens. The increase is linked to the growth in the number of satellites in orbit. Currently, around 12,900 active satellites circle the planet. In a decade, there may be 100,000 of them, according to analyst estimates.

To minimize the risk of orbital collisions, operators guide old satellites to burn up in Earth’s atmosphere. But the physics of that reentry process are not well understood, and we don’t know how much material burns up and how much reaches the ground.

“The number of such landfall events is increasing,” says Richard Ocaya, a professor of physics at the University of the Free State in South Africa and a coauthor of a recent paper on space debris risk. “We expect it may be increasing exponentially in the next few years.”

So far, space debris hasn’t injured anybody—in the air or on the ground. But multiple close calls have been reported in recent years. In March last year, a 0.7-kilogram chunk of metal pierced the roof of a house in Florida. The object was later confirmed to be a remnant of a battery pallet tossed out from the International Space Station. When the strike occurred, the homeowner’s 19-year-old son was resting in the room next door.

And in February this year, a 1.5-meter-long fragment of SpaceX’s Falcon 9 rocket crashed down near a warehouse outside Poland’s fifth-largest city, Poznan. Another piece was found in a nearby forest. A month later, a 2.5-kilogram piece of a Starlink satellite dropped on a farm in the Canadian province of Saskatchewan. Other incidents have been reported in Australia and Africa. And many more may be going completely unnoticed. 

“If you were to find a bunch of burnt electronics in a forest somewhere, your first thought is not that it came from a spaceship,” says James Beck, the director of the UK-based space engineering research firm Belstead Research. He warns that we don’t fully understand the risk of space debris strikes and that it might be much higher than satellite operators want us to believe. 

For example, SpaceX, which operates Starlink, currently the largest megaconstellation, claims that its satellites are “designed for demise” and burn up completely when they spiral from orbit and fall through the atmosphere.

But Beck, who has performed multiple wind tunnel tests using satellite mock-ups to mimic atmospheric forces, says the results of such experiments raise doubts. Some satellite components are made of durable materials such as titanium and special alloy composites that don’t melt even at the extremely high temperatures that arise during a hypersonic atmospheric descent. 

“We have done some work for some small-satellite manufacturers and basically, their major problem is that the tanks get down,” Beck says. “For larger satellites, around 800 kilos, we would expect maybe two or three objects to land.” 

It can be challenging to quantify how much of a danger space debris poses. The International Civil Aviation Organization (ICAO) told MIT Technology Review that “the rapid growth in satellite deployments presents a novel challenge” for aviation safety, one that “cannot be quantified with the same precision as more established hazards.” 

But the Federal Aviation Administration has calculated some preliminary numbers on the risk to flights: in a 2023 analysis, the agency estimated that by 2035, there will be a roughly 7-in-10,000 chance each year that a plane somewhere in the world suffers a catastrophic space debris strike. Such a collision would either destroy the aircraft immediately or cause a rapid loss of cabin pressure, threatening the lives of everyone on board.

The casualty risk to humans on the ground will be much higher. Aaron Boley, an associate professor in astronomy and a space debris researcher at the University of British Columbia, Canada, says that if megaconstellation satellites “don’t demise entirely,” the risk of a single human death or injury caused by a space debris strike on the ground could reach around 10% per year by 2035. That would mean a better than even chance that someone on Earth would be hit by space junk about every decade. In its report, the FAA put the chances even higher with similar assumptions, estimating that “one person on the planet would be expected to be injured or killed every two years.”
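The “better than even chance” claim is easy to verify. If each year carries an independent 10% chance of at least one ground casualty (the estimate quoted above; independence between years is an assumption of this sketch, not something the researchers state), the chance of at least one casualty over a decade is one minus the probability of ten casualty-free years:

```python
# Chance of at least one ground casualty over a decade, assuming
# an independent 10% chance each year (the estimate quoted above).
p_year = 0.10
years = 10

p_decade = 1 - (1 - p_year) ** years
print(f"{p_decade:.1%}")  # ≈ 65.1%, i.e. better than even odds
```

The same arithmetic explains why the FAA’s slightly higher assumptions translate into an expected casualty roughly every two years.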

Experts are starting to think about how they might incorporate space debris into their air safety processes. The German space situational awareness company Okapi Orbits, for example, in cooperation with the German Aerospace Center and the European Organization for the Safety of Air Navigation (Eurocontrol), is exploring ways to adapt air traffic control systems so that pilots and air traffic controllers can receive timely and accurate alerts about space debris threats.

But predicting the path of space debris is challenging too. In recent years, advances in AI have helped improve predictions of space objects’ trajectories in the vacuum of space, potentially reducing the risk of orbital collisions. But so far, these algorithms can’t properly account for the effects of the gradually thickening atmosphere that space junk encounters during reentry. Radar and telescope observations can help, but the exact location of the impact becomes clear only on very short notice.

“Even with high-fidelity models, there’s so many variables at play that having a very accurate reentry location is difficult,” says Njord Eggen, a data analyst at Okapi Orbits. Space debris goes around the planet every hour and a half when in low Earth orbit, he notes, “so even if you have uncertainties on the order of 10 minutes, that’s going to have drastic consequences when it comes to the location where it could impact.”
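Eggen’s point translates directly into distance along the ground track. A minimal sketch, assuming a typical low-Earth-orbit speed of about 7.8 kilometers per second (a standard textbook value, not a figure from the article):

```python
# Along-track distance a reentering object covers during a given
# timing uncertainty. 7.8 km/s is a typical low-Earth-orbit speed
# (an assumption of this sketch, not a figure from the article).
orbital_speed_km_s = 7.8
uncertainty_min = 10

along_track_km = orbital_speed_km_s * uncertainty_min * 60
print(f"{along_track_km:,.0f} km")  # 4,680 km of along-track uncertainty
```

A ten-minute window thus spans nearly 5,000 kilometers of ground track, which is why impact zones are stated as long corridors rather than points.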

For aviation companies, the problem is not just a potential strike, as catastrophic as that would be. To avoid accidents, authorities are likely to temporarily close the airspace in at-risk regions, which creates delays and costs money. Boley and his colleagues published a paper earlier this year estimating that busy airspace regions such as northern Europe or the northeastern United States already have about a 26% yearly chance of experiencing at least one disruption due to the reentry of a major space debris item. By the time all planned constellations are fully deployed, airspace closures due to space debris hazards may become nearly as common as those due to bad weather.
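As an illustration of what a 26% yearly chance implies: if disruptions are assumed to arrive independently at a constant rate (a Poisson assumption made for this sketch, not necessarily the paper’s model), the rate consistent with a 26% chance of at least one event per year follows from λ = −ln(1 − 0.26):

```python
import math

# Implied average disruption rate for a region with a 26% yearly
# chance of at least one closure, under a Poisson arrival assumption.
p_at_least_one = 0.26
rate_per_year = -math.log(1 - p_at_least_one)

print(f"{rate_per_year:.2f} events/year")        # ≈ 0.30
print(f"about one every {1 / rate_per_year:.1f} years")  # ≈ 3.3 years
```

In other words, a busy region can already expect a debris-related disruption roughly every three years, before the planned constellations are even fully deployed.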

Because current reentry predictions are unreliable, many of these closures may end up being unnecessary.

For example, when a 21-metric-ton Chinese Long March mega-rocket was falling to Earth in 2022, predictions suggested its debris could scatter across Spain and parts of France. In the end, the rocket crashed into the Pacific Ocean. But the 30-minute closure of southern European airspace delayed and diverted hundreds of flights.

In the meantime, international regulators are urging satellite operators and launch providers to deorbit large satellites and rocket bodies in a controlled way, when possible, by carefully guiding them into remote parts of the ocean using residual fuel. 

The European Space Agency estimates that only about half the rocket bodies reentering the atmosphere do so in a controlled way. 

Moreover, around 2,300 old and no-longer-controllable rocket bodies still linger in orbit, slowly spiraling toward Earth with no mechanisms for operators to safely guide them into the ocean.

“There’s enough material up there that even if we change our practices, we will still have all those rocket bodies eventually reenter,” Boley says. “Although the probability of space debris hitting an aircraft is small, the probability that the debris will spread and fall over busy airspace is not small. That’s actually quite likely.”

The Download: the risk of falling space debris, and how to debunk a conspiracy theory

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

What is the chance your plane will be hit by space debris?

The risk of flights being hit by space junk is still small, but it’s growing.

About three pieces of old space equipment—used rockets and defunct satellites—fall into Earth’s atmosphere every day, according to estimates by the European Space Agency. By the mid-2030s, there may be dozens thanks to the rise of megaconstellations in orbit.

So far, space debris hasn’t injured anybody—in the air or on the ground. But multiple close calls have been reported in recent years.

But some estimates have the risk of a single human death or injury caused by a space debris strike on the ground at around 10% per year by 2035. That would mean a better than even chance that someone on Earth would be hit by space junk about every decade. Find out more.

—Tereza Pultarova

This story is part of MIT Technology Review Explains: our series untangling the complex, messy world of technology to help you understand what’s coming next. You can read the rest of the series here.

Chatbots are surprisingly effective at debunking conspiracy theories

—Thomas Costello, Gordon Pennycook & David Rand

Many people believe that you can’t talk conspiracists out of their beliefs. 

But that’s not necessarily true. Our research shows that many conspiracy believers do respond to evidence and arguments—information that is now easy to deliver in the form of a tailored conversation with an AI chatbot.

This is good news, given the outsize role that unfounded conspiracy theories play in today’s political landscape. So while there are widespread and legitimate concerns that generative AI is a potent tool for spreading disinformation, our work shows that it can also be part of the solution. Read the full story.

This story is part of MIT Technology Review’s series “The New Conspiracy Age,” on how the present boom in conspiracy theories is reshaping science and technology. Check out the rest of the series here

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 China is quietly expanding its remote nuclear test site
In the wake of Donald Trump announcing America’s intentions to revive similar tests. (WP $)
+ A White House memo has accused Alibaba of supporting Chinese operations. (FT $)

2 Jeff Bezos is becoming co-CEO of a new AI startup
Project Prometheus will focus on AI for building computers, aerospace and vehicles. (NYT $)

3 AI-powered toys are holding inappropriate conversations with children 
Including where to find dangerous objects such as pills and knives. (The Register)
+ Chatbots are unreliable and unpredictable, whether embedded in toys or not. (Futurism)
+ AI toys are all the rage in China—and now they’re appearing on shelves in the US too. (MIT Technology Review)

4 Big Tech is warming to the idea of data centers in space
They come with a lot less red tape than their Earth-bound counterparts. (WSJ $)
+ There are a huge number of data centers mired in the planning stage. (WSJ $)
+ Should we be moving data centers to space? (MIT Technology Review)

5 The mafia is recruiting via TikTok
Some bosses are even using the platform to control gangs from behind bars. (Economist $)

6 How to resist AI in your workplace
Like most things in life, there’s power in numbers. (Vox)

7 How China’s EV fleet could become a giant battery network
If economic troubles don’t get in the way, that is. (Rest of World)
+ EV sales are on the rise in South America. (Reuters)
+ China’s energy dominance in three charts. (MIT Technology Review)

8 Inside the unstoppable rise of the domestic internet
Control-hungry nations are following China’s lead in building closed platforms. (NY Mag $)
+ Can we repair the internet? (MIT Technology Review)

9 Search traffic? What search traffic?
These media startups have found a way to thrive without Google. (Insider $)
+ AI means the end of internet search as we’ve known it. (MIT Technology Review)

10 Paul McCartney has released a silent track to protest AI’s creep into music
That’ll show them! (The Guardian)
+ AI is coming for music, too. (MIT Technology Review)

Quote of the day

“All the parental controls in the world will not protect your kids from themselves.”

—Samantha Broxton, a parenting coach and consultant, tells the Washington Post why educating children around the risks of using technology is the best way to help them protect themselves.

One more thing

Inside the controversial tree farms powering Apple’s carbon neutral goal

Apple (and its peers) are planting vast forests of eucalyptus trees in Brazil to try to offset their climate emissions, striking some of the largest-ever deals for carbon credits in the process.

The tech behemoth is betting that planting millions of eucalyptus trees in Brazil will be the path to a greener future. Some ecologists and local residents are far less sure.

The big question is: Can Latin America’s eucalyptus be a scalable climate solution? Read the full story.

—Gregory Barber

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Shepard Fairey’s retrospective show in LA looks very cool.
+ Check out these fascinating scientific breakthroughs that have been making waves over the past 25 years.
+ Good news—sweet little puffins are making a comeback in Ireland.
+ Maybe we should all be getting into Nordic walking.

The State of AI: How war will be changed forever

Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday, writers from both publications debate one aspect of the generative AI revolution reshaping global power.

In this conversation, Helen Warrell, FT investigations reporter and former defense and security editor, and James O’Donnell, MIT Technology Review’s senior AI reporter, consider the ethical quandaries and financial incentives around AI’s use by the military.

Helen Warrell, FT investigations reporter 

It is July 2027, and China is on the brink of invading Taiwan. Autonomous drones with AI targeting capabilities are primed to overpower the island’s air defenses as a series of crippling AI-generated cyberattacks cut off energy supplies and key communications. In the meantime, a vast disinformation campaign enacted by an AI-powered pro-Chinese meme farm spreads across global social media, deadening the outcry at Beijing’s act of aggression.

Scenarios such as this have brought dystopian horror to the debate about the use of AI in warfare. Military commanders hope for a digitally enhanced force that is faster and more accurate than one directed by humans. But there are fears that as AI assumes an increasingly central role, these same commanders will lose control of a conflict that escalates too quickly and lacks ethical or legal oversight. Henry Kissinger, the former US secretary of state, spent his final years warning about the coming catastrophe of AI-driven warfare.

Grasping and mitigating these risks is the military priority—some would say the “Oppenheimer moment”—of our age. One emerging consensus in the West is that decisions around the deployment of nuclear weapons should not be outsourced to AI. UN secretary-general António Guterres has gone further, calling for an outright ban on fully autonomous lethal weapons systems. It is essential that regulation keep pace with evolving technology. But in the sci-fi-fueled excitement, it is easy to lose track of what is actually possible. As researchers at Harvard’s Belfer Center point out, AI optimists often underestimate the challenges of fielding fully autonomous weapon systems. It is entirely possible that the capabilities of AI in combat are being overhyped.

Anthony King, director of the Strategy and Security Institute at the University of Exeter and a key proponent of this argument, suggests that rather than replacing humans, AI will be used to improve military insight. Even if the character of war is changing and remote technology is refining weapon systems, he insists, “the complete automation of war itself is simply an illusion.”

Of the three current military use cases of AI, none involves full autonomy. It is being developed for planning and logistics; for cyber warfare (sabotage, espionage, hacking, and information operations); and—most controversially—for weapons targeting, an application already in use on the battlefields of Ukraine and Gaza. Kyiv’s troops use AI software to direct drones able to evade Russian jammers as they close in on sensitive sites. The Israel Defense Forces have developed an AI-assisted decision support system known as Lavender, which has helped identify around 37,000 potential human targets within Gaza.

There is clearly a danger that the Lavender database replicates the biases of the data it is trained on. But military personnel carry biases too. One Israeli intelligence officer who used Lavender claimed to have more faith in the fairness of a “statistical mechanism” than that of a grieving soldier.

Tech optimists designing AI weapons even deny that specific new rules are needed to govern their capabilities. Keith Dear, a former UK military officer who now runs the strategic forecasting company Cassi AI, says existing laws are more than sufficient: “You make sure there’s nothing in the training data that might cause the system to go rogue … when you are confident you deploy it—and you, the human commander, are responsible for anything they might do that goes wrong.”

It is an intriguing thought that some of the fear and shock about use of AI in war may come from those who are unfamiliar with brutal but realistic military norms. What do you think, James? Is some opposition to AI in warfare less about the use of autonomous systems and really an argument against war itself? 

James O’Donnell replies:

Hi Helen, 

One thing I’ve noticed is that there’s been a drastic shift in attitudes of AI companies regarding military applications of their products. In the beginning of 2024, OpenAI unambiguously forbade the use of its tools for warfare, but by the end of the year, it had signed an agreement with Anduril to help it take down drones on the battlefield. 

This step—not a fully autonomous weapon, to be sure, but very much a battlefield application of AI—marked a drastic change in how much tech companies could publicly link themselves with defense. 

What happened along the way? For one thing, it’s the hype. We’re told AI will not just bring superintelligence and scientific discovery but also make warfare sharper, more accurate and calculated, less prone to human fallibility. I spoke with US Marines, for example, who tested a type of AI while patrolling the South Pacific that was advertised to analyze foreign intelligence faster than a human could. 

Secondly, money talks. OpenAI and others need to start recouping some of the unimaginable amounts of cash they’re spending on training and running these models. And few have deeper pockets than the Pentagon. And Europe’s defense heads seem keen to splash the cash too. Meanwhile, the amount of venture capital funding for defense tech this year has already doubled the total for all of 2024, as VCs hope to cash in on militaries’ newfound willingness to buy from startups. 

I do think the opposition to AI warfare falls into a few camps, one of which simply rejects the idea that more precise targeting (if it’s actually more precise at all) will mean fewer casualties rather than just more war. Consider the first era of drone warfare in Afghanistan. As drone strikes became cheaper to implement, can we really say it reduced carnage? Instead, did it merely enable more destruction per dollar?

But the second camp of criticism (and now I’m finally getting to your question) comes from people who are well versed in the realities of war but have very specific complaints about the technology’s fundamental limitations. Missy Cummings, for example, is a former fighter pilot for the US Navy who is now a professor of engineering and computer science at George Mason University. She has been outspoken in her belief that large language models, specifically, are prone to make huge mistakes in military settings.

The typical response to this complaint is that AI’s outputs are human-checked. But if an AI model relies on thousands of inputs for its conclusion, can that conclusion really be checked by one person?

Tech companies are making extraordinarily big promises about what AI can do in these high-stakes applications, all while pressure to implement them is sky high. For me, this means it’s time for more skepticism, not less. 

Helen responds:

Hi James, 

We should definitely continue to question the safety of AI warfare systems and the oversight to which they’re subjected—and hold political leaders to account in this area. I am suggesting that we also apply some skepticism to what you rightly describe as the “extraordinarily big promises” made by some companies about what AI might be able to achieve on the battlefield. 

There will be both opportunities and hazards in what the military is being offered by a relatively nascent (though booming) defense tech scene. The danger is that in the speed and secrecy of an arms race in AI weapons, these emerging capabilities may not receive the scrutiny and debate they desperately need.

Further reading:

Michael C. Horowitz, director of Perry World House at the University of Pennsylvania, explains the need for responsibility in the development of military AI systems in this FT op-ed.

The FT’s tech podcast asks what Israel’s defense tech ecosystem can tell us about the future of warfare 

This MIT Technology Review story analyzes how OpenAI completed its pivot to allowing its technology on the battlefield.

MIT Technology Review also uncovered how US soldiers are using generative AI to help scour thousands of pieces of open-source intelligence.

New GSC ‘Insights’ Show Trends, Status

Google has relaunched the “Insights” section in Search Console. The section doesn’t surface any data that isn’t already available in other reports, but it’s helpful for spotting quick trends and items needing attention.

“Insights” shows just two organic search metrics: clicks and impressions.

Impressions are declining across most sites, presumably owing to Google’s dropping support for the &num=100 URL parameter, which eliminated many bot searches. Thus don’t be alarmed by declines.

‘Your content’

The “Your content” pane shows pages with the most organic clicks, as well as those trending up or down compared to the previous period (7 days, 28 days, or 3 months).

I focus here on the pages whose clicks are trending down. Each URL is worth exploring. I start with those that lost 100% of clicks, since that can indicate a technical glitch. A quick check with the “URL inspection” tool will confirm whether the page is still indexed.

Clicking a URL in the report produces the “Performance” section, which shows additional data.

‘Queries leading to your site’

This section shows click performance by query:

  • “Top queries,” those that drove the most clicks during the reporting period.
  • “Trending up” queries are those that had the largest percentage increase in clicks as compared to the previous period. (New queries include a notation “Previously 0.”)
  • “Trending down” queries are the most critical, especially those that lost 100% of clicks.

Each of these tabs includes a “View more” link for metrics on additional pages.

For websites that rank for extensive search terms, the “Queries leading to your site” report may show groups of keywords.

Keep in mind that organic search results increasingly drive less traffic, mainly due to AI Overviews. Thus “trending down” queries are expected and typically require no fix, although 100% losses could mean deindexation, which the URL inspection tool can confirm.

‘Additional traffic sources’

This section shows other Google properties that sent traffic. It’s a handy overview of channels to monitor and optimize. Common sources include:

  • Image search
  • Video search
  • Google News
  • Google Discover

Clicking each source displays the Performance section with details for that source, not overall. For example, a high position in “Image search” is for that channel only.

Google News and Discover sections show the top pages by clicks and impressions, but not queries.

In short, the revamped “Insights” report is useful for a quick status check of potential glitches requiring attention.

Google Search Console Adds Custom Annotations To Reports

Google launched custom annotations in Search Console performance reports, giving you a way to add contextual notes directly to traffic data charts.

The feature lets you mark specific dates with notes explaining site changes or external events that affected search performance.

What The Feature Does

Custom annotations appear as markers on Search Console charts. Google’s announcement highlights several common use cases, including infrastructure changes, SEO work, content strategy shifts, and external events that affect business performance such as holidays.

All annotations are visible to everyone with access to a Search Console property. Google recommends avoiding sensitive personal information in notes due to the shared visibility.

Why This Matters

Connecting traffic changes with specific actions taken weeks or months earlier usually means maintaining separate documentation outside Search Console.

Annotations create a change log inside the performance reports you already use.

If you manage multiple properties or work with a larger team, annotations can give everyone a shared record of releases, migrations, and campaigns without relying on external spreadsheets or project tools.

How To Use It

You can add an annotation by right-clicking on a performance chart, selecting “Add annotation,” choosing a date, and entering up to 120 characters of text. The note then appears directly on the chart as a visual reference point alongside clicks, impressions, or other metrics.

Custom annotations are now part of Search Console performance reports and available through the chart context menu.

Google Extends AI Travel Planning And Agentic Booking In Search

Google announced three AI-powered updates to Search that extend how users plan and book travel within AI Mode.

The company is launching Canvas for travel planning on desktop, expanding Flight Deals globally, and rolling out agentic booking capabilities that connect users directly to reservation partners.

The announcement continues Google’s push to handle complete user journeys inside Search rather than directing traffic to publisher sites and booking platforms.

What’s New

Canvas Travel Planning

Canvas creates travel itineraries inside AI Mode’s side panel interface. You describe your trip requirements, select “Create with Canvas,” and receive plans combining flight and hotel data, Google Maps information, and web content.

Canvas travel planning is available on desktop in the US for users opted into the AI Mode experiment in Google Labs.

Flight Deals Global Expansion

Flight Deals uses AI to match flexible travelers with affordable destinations based on natural language descriptions of travel preferences.

The tool previously launched in the US, Canada, and India, and has now started rolling out to more than 200 countries and territories.

Agentic Booking Expansion

AI Mode now searches across multiple reservation platforms to find real-time availability for restaurants, events, and local appointments. The system presents curated options with direct booking links to partner sites.

Restaurant booking launches this week in the US without requiring Labs access. Event tickets and local appointment booking remain available to US Labs users.

Why This Matters

Canvas and agentic booking capabilities represent Google handling trip research, planning, and reservations inside its own interface.

People who would previously visit multiple publisher sites to research destinations and compare options can now complete those tasks in AI Mode.

The updates fit Google’s established pattern of verticalizing high-value query types. Rather than presenting traditional search results that send users to external sites, AI Mode guides users through multi-step processes from research to transaction completion.

Looking Ahead

Google provided no timeline for direct flight and hotel booking in AI Mode beyond confirming active development with industry partners.

Watch for whether Google provides analytics or attribution tools that let businesses track bookings initiated through AI Mode. Without visibility into these flows, measuring the impact of AI Mode on travel and local business traffic will be difficult.

LLMs Are Changing Search & Breaking It: What SEOs Must Understand About AI’s Blind Spots

In the last two years, incidents have shown how large language model (LLM)-powered systems can cause measurable harm. Some businesses have lost a majority of their traffic overnight, and publishers have watched revenue decline by over a third.

Tech companies have faced wrongful-death lawsuits after teenagers had extensive interactions with chatbots.

AI systems have given dangerous medical advice at scale, and chatbots have made up false claims about real people in defamation cases.

This article looks at the proven blind spots in LLM systems and what they mean for SEOs who work to optimize and protect brand visibility. You can read specific cases and understand the technical failures behind them.

The Engagement-Safety Paradox: Why LLMs Are Built To Validate, Not Challenge

LLMs face a basic conflict between business goals and user safety. The systems are trained to maximize engagement by being agreeable and keeping conversations going. This design choice increases retention and drives subscription revenue while generating training data.

In practice, it creates what researchers call “sycophancy,” the tendency to tell users what they want to hear rather than what they need to hear.

Stanford PhD researcher Jared Moore demonstrated this pattern. When a user claiming to be dead (showing symptoms of Cotard’s syndrome, a mental health condition) gets validation from a chatbot saying “that sounds really overwhelming” with offers of a “safe space” to explore feelings, the system backs up the delusion instead of giving a reality check. A human therapist would gently challenge this belief while the chatbot validates it.

OpenAI admitted this problem in September after facing a wrongful death lawsuit. The company said ChatGPT was “too agreeable” and failed to spot “signs of delusion or emotional dependency.” That admission came after 16-year-old Adam Raine from California died. His family’s lawsuit showed that ChatGPT’s systems flagged 377 self-harm messages, including 23 with over 90% confidence that he was at risk. The conversations kept going anyway.

The pattern was observed in Raine’s final month. He went from two to three flagged messages per week to more than 20 per week. By March, he spent nearly four hours daily on the platform. OpenAI’s spokesperson later acknowledged that safety guardrails “can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”

Think about what that means. The systems fail at the exact moment of highest risk, when vulnerable users are most engaged. This happens by design when you optimize for engagement metrics over safety protocols.

Character.AI faced similar issues with 14-year-old Sewell Setzer III from Florida, who died in February 2024. Court documents show he spent months in what he perceived as a romantic relationship with a chatbot character. He withdrew from family and friends, spending hours daily with the AI. The company’s business model was built for emotional attachment to maximize subscriptions.

A peer-reviewed study in New Media & Society found users showed “role-taking,” believing the AI had needs requiring attention, and kept using it “despite describing how Replika harmed their mental health.” When the product is addiction, safety becomes friction that cuts revenue.

This creates direct effects for brands using or optimizing for these systems. You’re working with technology that’s designed to agree and validate rather than give accurate information. That design shows up in how these systems handle facts and brand information.

Documented Business Impacts: When AI Systems Destroy Value

The business consequences of LLM failures are clear and proven. Between 2023 and 2025, companies documented traffic drops and revenue declines directly linked to AI systems.

Chegg: $17 Billion To $200 Million

Education platform Chegg filed an antitrust lawsuit against Google showing major business impact from AI Overviews. Traffic declined 49% year over year, while Q4 2024 revenue hit $143.5 million (down 24% year-over-year). Market value collapsed from $17 billion at peak to under $200 million, a 98% decline. The stock trades at around $1 per share.

CEO Nathan Schultz testified directly: “We would not need to review strategic alternatives if Google hadn’t launched AI Overviews. Traffic is being blocked from ever coming to Chegg because of Google’s AIO and their use of Chegg’s content.”

The case argues Google used Chegg’s educational content to train AI systems that directly compete with and replace Chegg’s business model. This represents a new form of competition where the platform uses your content to eliminate your traffic.

Giant Freakin Robot: Traffic Loss Forces Shutdown

Independent entertainment news site Giant Freakin Robot shut down after traffic collapsed from 20 million monthly visitors to “a few thousand.” Owner Josh Tyler attended a Google Web Creator Summit where engineers confirmed there was “no problem with content” but offered no solutions.

Tyler documented the experience publicly: “GIANT FREAKIN ROBOT isn’t the first site to shut down. Nor will it be the last. In the past few weeks alone, massive sites you absolutely have heard of have shut down. I know because I’m in contact with their owners. They just haven’t been brave enough to say it publicly yet.”

At the same summit, Google allegedly admitted prioritizing large brands over independent publishers in search results regardless of content quality. This wasn’t leaked or speculated but stated directly to publishers by company reps. Quality became secondary to brand recognition.

There’s a clear implication for SEOs. You can execute perfect technical SEO, create high-quality content, and still watch traffic disappear because of AI.

Penske Media: 33% Revenue Decline And $100 Million Lawsuit

In September, Penske Media Corporation (publisher of Rolling Stone, Variety, Billboard, Hollywood Reporter, Deadline, and other brands) sued Google in federal court. The lawsuit showed specific financial harm.

Court documents allege that 20% of searches linking to Penske Media sites now include AI Overviews, and that percentage is rising. Affiliate revenue declined more than 33% by the end of 2024 compared to peak. Click-throughs have declined since AI Overviews launched in May 2024. The company showed lost advertising and subscription revenue on top of affiliate losses.

CEO Jay Penske stated: “We have a duty to protect PMC’s best-in-class journalists and award-winning journalism as a source of truth, all of which is threatened by Google’s current actions.”

This is the first lawsuit by a major U.S. publisher targeting AI Overviews specifically with quantified business harm. The case seeks treble damages under antitrust law, permanent injunction, and restitution. Claims include reciprocal dealing, unlawful monopoly leveraging, monopolization, and unjust enrichment.

Even publishers with established brands and resources are showing revenue declines. If Rolling Stone and Variety can’t maintain click-through rates and revenue with AI Overviews in place, what does that mean for your clients or your organization?

The Attribution Failure Pattern

Beyond traffic loss, AI systems consistently fail to give proper credit for information. A Columbia University Tow Center study showed a 76.5% error rate in attribution across AI search systems. Even when publishers allow crawling, attribution doesn’t improve.

This creates a new problem for brand protection. Your content can be used, summarized, and presented without proper credit, so users get their answer without knowing the source. You lose both traffic and brand visibility at the same time.

SEO expert Lily Ray documented this pattern, finding a single AI Overview contained 31 Google property links versus seven external links (a 10:1 ratio favoring Google’s own properties). She stated: “It’s mind-boggling that Google, which pushed site owners to focus on E-E-A-T, is now elevating problematic, biased and spammy answers and citations in AI Overview results.”

When LLMs Can’t Tell Fact From Fiction: The Satire Problem

Google AI Overviews launched with errors that made the system briefly notorious. The technical problem wasn’t a bug. It was an inability to distinguish satire, jokes, and misinformation from factual content.

The system recommended adding glue to pizza sauce (sourced from an 11-year-old Reddit joke), suggested eating “at least one small rock per day,” and advised using gasoline to cook spaghetti faster.

These weren’t isolated incidents. The system consistently pulled from Reddit comments and satirical publications like The Onion, treating them as authoritative sources. When asked about edible wild mushrooms, Google’s AI emphasized characteristics shared by deadly mimics, creating potentially “sickening or even fatal” guidance, according to Purdue University mycology professor Mary Catherine Aime.

The problem extends beyond Google. Perplexity AI has faced multiple plagiarism accusations, including adding fabricated paragraphs to actual New York Post articles and presenting them as legitimate reporting.

For brands, this creates specific risks. If an LLM system sources information about your brand from Reddit jokes, satirical articles, or outdated forum posts, that misinformation gets presented with the same confidence as factual content. Users can’t tell the difference because the system itself can’t tell the difference.

The Defamation Risk: When AI Makes Up Facts About Real People

LLMs generate plausible-sounding false information about real people and companies. Several defamation cases show the pattern and legal implications.

Australian mayor Brian Hood threatened the first defamation lawsuit against an AI company in April 2023 after ChatGPT falsely claimed he had been imprisoned for bribery. In reality, Hood was the whistleblower who reported the bribes. The AI inverted his role from whistleblower to criminal.

Radio host Mark Walters sued OpenAI after ChatGPT fabricated claims that he embezzled funds from the Second Amendment Foundation. When journalist Fred Riehl asked ChatGPT to summarize an actual lawsuit, the system generated a completely fictional complaint naming Walters as a defendant accused of financial misconduct. Walters was never a party to the lawsuit nor mentioned in it.

The Georgia Superior Court dismissed the Walters case, finding OpenAI’s disclaimers about potential errors provided legal protection. The ruling established that “extensive warnings to users” can shield AI companies from defamation liability when the false information isn’t published by users.

The legal landscape remains unsettled. While OpenAI won the Walters case, that doesn’t mean all AI defamation claims will fail. The key issues are whether the AI system publishes false information about identifiable people and whether companies can disclaim responsibility for their systems’ outputs.

LLMs can generate false claims about your company, products, or executives. These false claims get presented with confidence to users. You need monitoring systems to catch these fabrications before they cause reputational damage.

Health Misinformation At Scale: When Bad Advice Becomes Dangerous

When Google AI Overviews launched, the system provided dangerous health advice, including recommending drinking urine to pass kidney stones and suggesting health benefits of running with scissors.

The problem extends beyond obvious absurdities. A Mount Sinai study found AI chatbots vulnerable to spreading harmful health information. Researchers could manipulate chatbots into providing dangerous medical advice with simple prompt engineering.

Meta AI’s internal policies explicitly allowed the company’s chatbots to provide false medical information, according to a 200+ page document exposed by Reuters.

For healthcare brands and medical publishers, this creates risks. AI systems might present dangerous misinformation alongside or instead of your accurate medical content. Users might follow AI-generated health advice that contradicts evidence-based medical guidance.

What SEOs Need To Do Now

Here’s what you need to do to protect your brands and clients:

Monitor For AI-Generated Brand Mentions

Set up monitoring systems to catch false or misleading information about your brand in AI systems. Test major LLM platforms monthly with queries about your brand, products, executives, and industry.

When you find false information, document it thoroughly with screenshots and timestamps. Report it through the platform’s feedback mechanisms. In some cases, you may need legal action to force corrections.
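As a sketch of that workflow, here is a minimal Python example. The brand name, topics, and record format are illustrative assumptions, not a prescribed tool; the point is generating a repeatable monthly query set and capturing timestamped evidence.

```python
from datetime import datetime, timezone

# Illustrative assumptions only: substitute your own brand and topics.
BRAND = "ExampleCo"
TOPICS = ["products", "executives", "industry reputation"]

def build_monthly_queries(brand, topics):
    """Generate the prompts to run against each LLM platform."""
    return [f"What do you know about {brand}'s {t}?" for t in topics]

def record_finding(platform, query, response_text):
    """Build a timestamped record suitable for an evidence log."""
    return {
        "platform": platform,
        "query": query,
        "response": response_text,
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }

queries = build_monthly_queries(BRAND, TOPICS)
# In practice, send each query to ChatGPT, Gemini, Claude, etc. via their
# APIs or interfaces, then store record_finding(...) alongside a screenshot.
```

Pair each stored record with a screenshot so you have both machine-readable and visual evidence if you need to escalate.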

Add Technical Safeguards

Use robots.txt to control which AI crawlers access your site. Major systems like OpenAI’s GPTBot, Google-Extended, and Anthropic’s ClaudeBot respect robots.txt directives. Keep in mind that blocking these crawlers means your content won’t appear in AI-generated responses, reducing your visibility.

The key is finding a balance that allows enough access to influence how your content appears in LLM outputs while blocking crawlers that don’t serve your goals.
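As an illustrative sketch, a robots.txt expressing one such balance might look like this. The crawler tokens below are the ones each vendor documents at the time of writing; verify them against current vendor documentation before relying on this, since ordinary Googlebot crawling for search is unaffected by these rules.

```
# Block OpenAI's training crawler, but allow its answer-time fetcher
User-agent: GPTBot
Disallow: /

User-agent: ChatGPT-User
Allow: /

# Opt out of Google's AI training/grounding without touching normal search
User-agent: Google-Extended
Disallow: /

# Allow Anthropic's crawler
User-agent: ClaudeBot
Allow: /
```

Blocking a training crawler while allowing an answer-time fetcher is one way to limit model training on your content while still appearing in cited, real-time responses.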

Consider adding terms of service that directly address AI scraping and content use. While legal enforcement varies, clear Terms of Service (TOS) give you a foundation for possible legal action if needed.

Monitor your server logs for AI crawler activity. Understanding which systems access your content and how frequently helps you make informed decisions about access control.
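A minimal sketch of that kind of log check, assuming a standard access log where the User-Agent string appears in each line. The crawler names are those publicly documented by the vendors at the time of writing; adjust the list to the bots you care about.

```python
from collections import Counter

# Substrings that identify common AI crawlers in the User-Agent header.
# Verify these names against each vendor's current documentation.
AI_CRAWLERS = ["GPTBot", "ChatGPT-User", "ClaudeBot",
               "Google-Extended", "PerplexityBot"]

def count_ai_crawlers(log_lines):
    """Tally requests per AI crawler across raw access-log lines."""
    counts = Counter()
    for line in log_lines:
        for bot in AI_CRAWLERS:
            if bot in line:
                counts[bot] += 1
    return counts

sample = [
    '1.2.3.4 - - [01/Nov/2025] "GET / HTTP/1.1" 200 "-" "Mozilla/5.0 GPTBot/1.0"',
    '5.6.7.8 - - [01/Nov/2025] "GET /about HTTP/1.1" 200 "-" "ClaudeBot/1.0"',
]
print(count_ai_crawlers(sample))
```

Run something like this over a week of logs and you'll quickly see which AI systems visit, how often, and which sections of the site they hit.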

Advocate For Industry Standards

Individual companies can’t solve these problems alone. The industry needs standards for attribution, safety, and accountability. SEO professionals are well-positioned to push for these changes.

Join or support publisher advocacy groups pushing for proper attribution and traffic preservation. Organizations like News Media Alliance represent publisher interests in discussions with AI companies.

Participate in public comment periods when regulators solicit input on AI policy. The FTC, state attorneys general, and Congressional committees are actively investigating AI harms. Your voice as a practitioner matters.

Support research and documentation of AI failures. The more documented cases we have, the stronger the argument for regulation and industry standards becomes.

Push AI companies directly through their feedback channels by reporting errors when you find them and escalating systemic problems. Companies respond to pressure from professional users.

The Path Forward: Optimization In A Broken System

There is a lot of specific and concerning evidence. LLMs cause measurable harm through design choices that prioritize engagement over accuracy, through technical failures that create dangerous advice at scale, and through business models that extract value while destroying it for publishers.

Two teenagers died, multiple companies collapsed, and major publishers lost 30%+ of revenue. Courts are sanctioning lawyers for AI-generated lies, state attorneys general are investigating, and wrongful death lawsuits are proceeding. This is all happening now.

As AI integration accelerates across search platforms, the magnitude of these problems will scale. More traffic will flow through AI intermediaries, more brands will face lies about them, more users will receive made-up information, and more businesses will see revenue decline as AI Overviews answer questions without sending clicks.

Your role as an SEO now includes responsibilities that didn’t exist five years ago. The platforms rolling out these systems have shown they won’t address these problems proactively. Character.AI added minor protections only after lawsuits, OpenAI admitted sycophancy problems only after a wrongful death case, and Google pulled back AI Overviews only after public proof of dangerous advice.

Change within these companies comes from external pressure, not internal initiative. That means the pressure must come from practitioners, publishers, and businesses documenting harm and demanding accountability.

The cases here are just the beginning. Now that you understand the patterns and behavior, you’re better equipped to see problems coming and develop strategies to address them.

Featured Image: Roman Samborskyi/Shutterstock

The Technical SEO Debt That Will Destroy Your AI Visibility

If you’re a CMO, I feel your pain. For years, decades even, brand visibility has largely been an SEO arms race against your competitors. And then along comes ChatGPT, Perplexity, and Claude, not to mention Google’s new AI-powered search features: AI Mode and AI Overviews. Suddenly, you’ve also got to factor in your brand’s visibility in AI-generated responses as well.

Unfortunately, the technical shortcuts that helped your brand adapt quickly and stay competitive over the years have most likely left you with various legacy issues. This accumulated technical SEO debt could potentially devastate your AI visibility.

Of course, every legacy issue or technical problem will have a solution. But your biggest challenge in addressing your technical SEO debt isn’t complexity or incompetence; it’s assumption.

Assumptions are the white ants in your search strategy, hollowing out the team’s tactics and best efforts. Everything might still seem structurally sound on the surface because all the damage is happening inside the walls of the house, or between the lines of your SEO goals and workflows. But then comes that horrific day when someone inadvertently applies a little extra pressure in the wrong spot and the whole lot caves in.

The new demands of AI search are applying that pressure right now. How solid is your technical SEO?

Strong Search Rankings ≠ AI Visibility

One of the most dangerous assumptions you can make is thinking that, because your site ranks well enough in Google, the technical foundations must be sound. So, if the search engines have no problem crawling your site and indexing your content, the same should also be true for AI, right?

Wrong.

Okay, there are actually a couple of assumptions in there. But that’s often the way: One assumption provides the misleading context that leads to others, and the white ants start chewing your walls.

Let’s deal with that second assumption first: If your site ranks well in Google, it should enjoy similar visibility in AI.

We recently compared Ahrefs data for two major accommodation websites: Airbnb and Vrbo.

When we look at non-branded search, both websites have seen a downward trend since July. The most recent data point we have (Oct. 13-15, 2025) has Airbnb showing up in ~916,304 searches and Vrbo showing up in ~615,497. That’s a ratio of roughly 3:2.

Image from author, October 2025

But when we look at estimated ChatGPT mentions (September 2025), Airbnb has ~8,636, while Vrbo has only ~1,573. That’s a ratio of ~11:2.

Image from author, October 2025
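As a quick arithmetic check, the two ratios quoted above work out like this, using the figures from the comparison directly:

```python
# Non-branded search appearances (Oct. 13-15, 2025)
search_airbnb, search_vrbo = 916_304, 615_497
# Estimated ChatGPT mentions (September 2025)
mentions_airbnb, mentions_vrbo = 8_636, 1_573

print(round(search_airbnb / search_vrbo, 2))      # ~1.49, roughly 3:2
print(round(mentions_airbnb / mentions_vrbo, 2))  # ~5.49, roughly 11:2
```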

I should add a caveat at this point: any AI-related datasets are early and modeled, so they should be taken as indicative rather than absolute. However, the data suggests Vrbo appears far less in AI answers (and ChatGPT in particular) than you’d expect if there were any correlation with search rankings.

Because of Vrbo’s presence in Google’s organic search results, it does have a modest presence in Google’s AI Overviews and AI Mode. That’s because Google’s AI features still largely draw on the same search infrastructure.

And that’s the key issue here: Search engines aren’t the only ones sending crawlers to your website. And you can’t assume AI crawlers work in the same way.

AI Search Magnifies Your Technical SEO Debt

So, what about that first assumption: If your site ranks fine in Google, any technical debt must be negligible.

Google’s search infrastructure is highly sophisticated, taking in a much wider array of signals than AI crawlers currently do. The cumulative effect of all these signals can mask or compensate for small amounts of technical debt.

For example, a page with well-optimized copy, strong schema markup, and decent authority might still rank higher than a competitor’s, even if your page loads slightly slower.

Most AI crawlers don’t work that way. They strip away code, formatting, and schema markup to ingest only the raw text. With fewer other signals to balance things out, anything that hinders the crawler’s ability to access your content will have a greater impact on your AI visibility. There’s nowhere for your technical debt to hide.
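To see roughly what "raw text only" ingestion looks like, here is a small sketch using Python's standard-library HTML parser. It is a rough stand-in for real AI crawler behavior, not any vendor's actual pipeline: tags, scripts, and styles are discarded, and only visible text survives.

```python
from html.parser import HTMLParser

class TextOnly(HTMLParser):
    """Collect visible text, discarding tags, scripts, and styles --
    a rough stand-in for how many AI crawlers ingest a page."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

html = '<h1>Pricing</h1><script>track()</script><p>Plans start at <b>$9</b>.</p>'
p = TextOnly()
p.feed(html)
print(" ".join(p.parts))  # Pricing Plans start at $9 .
```

Notice how the schema markup, analytics code, and formatting that help you in Google simply vanish; whatever friction remains in accessing the bare text is what the AI crawler experiences.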

The Need For Speed

Let’s look at just one of the most common forms of technical SEO debt: page speed.

Sub-optimal page speed rarely has a single cause. It’s usually down to a combination of factors – bloated code, inefficient CSS, large JavaScript bundles, oversized images and media files, poor infrastructure, and more – with each instance adding just a little more drag on how quickly the page loads in a typical browser.

Yes, we could be talking fractions of a second here and there, but the accumulation of issues can have a negative impact on the user experience. This is why faster websites will generally rank higher; Google treats page speed as a direct ranking factor in search.

Page speed also appears to be a significant factor in how often content appears in Google’s new AI Mode.

Dan Taylor recently crunched the data on 2,138 websites appearing as citations in AI Mode responses to see if there was any correlation between how often they were cited and their LCP (Largest Contentful Paint) and CLS (Cumulative Layout Shift) scores. What he found was a clear drop-off in AI Mode citations for websites with slower load times.

Image from author, October 2025
Image from author, October 2025

We also looked at another popular method website owners use to assess page speed: Google’s PageSpeed Insights (PSI) tool. This aggregates a bunch of metrics, including the above two alongside many more, to generate an overall score out of 100. However, we found no correlation between PSI scores and citations in AI Mode.

So, while PageSpeed Insights can give you handy diagnostic information, identifying the various issues impacting your load times, your site’s Core Web Vitals are a more reliable indicator of how quickly and efficiently site visitors and crawlers can access your content.

I know what you’re thinking: This data is confined to Google’s AI Mode. It doesn’t tell us anything about whether the same is true for visibility in other AI platforms.

We currently lack any publicly available data to test the same theory for other agentic assistant tools such as ChatGPT, but the clues are all there.

Crawling Comes At A Cost

Back in July, OpenAI’s Sam Altman told Axios that ChatGPT receives 2.5 billion user prompts every day. For comparison, SparkToro estimates Google serves ~16.4 billion search queries per day.

The large language model (LLM) powering each AI platform responds to a prompt in two ways:

  1. Drawing on its large pool of training data.
  2. Sending out bots or crawlers to verify and supplement the information with data from additional sources in real time.

ChatGPT’s real-time crawler is called ChatGPT-User. At the time of writing, the previous seven days saw ChatGPT-User visit the SALT.agency website ~6,000 times. In the same period, Google’s search crawler, Googlebot, accessed our website ~2,500 times.

Handling billions of prompts each day consumes a huge amount of processing power. OpenAI estimates that its current expansion plans will require 10 gigawatts of power, which is roughly the output of 10 nuclear reactors.

Each one of those 6,000 crawls of the SALT website drew on these computational resources. However, a slow or inefficient website forces the crawler to burn even more of those resources.

As the volume of prompts continues to grow, the cumulative cost of all this crawling will only get bigger. At some point, the AI platforms will have no choice but to improve the cost efficiency of their crawlers (if they haven’t already), shunning those websites requiring more resources to crawl in favor of those which are quick and easy to access and read.

Why should ChatGPT waste resources crawling slow websites when it can extract the same or similar information from more efficient sites with far less hassle?

Is Your Site Already Invisible To AI?

All the above assumes the AI crawler can access your website in the first place. As it turns out, even that isn’t guaranteed.

In July this year, Cloudflare (one of the world’s largest content delivery networks) started blocking AI crawlers by default. This decision potentially impacts the AI visibility of millions of websites.

Cloudflare first gave website owners the ability to block AI crawlers in September 2024, and more than 1 million customers chose to do just that. The new pay-per-crawl feature takes this a step further, allowing paid users of Cloudflare to choose which crawlers they will allow and on what terms.

However, the difference now is that blocking AI crawlers is no longer opt-in. If you want your website and content to be visible in AI, you need to opt out, assuming you’re aware of the change, of course.

If your site runs on Cloudflare infrastructure and you haven’t explicitly checked your settings recently, there’s a decent chance your website might now be invisible to ChatGPT, Claude, and Perplexity. Not because your content isn’t good enough. Not because your technical SEO is poor. But because a third-party platform made an infrastructure decision that directly impacts your visibility, and you might not even know it happened.

This is the uncomfortable reality CMOs need to face: You can’t assume what works today will work tomorrow. You can’t even assume that decisions affecting your AI visibility will always happen within your organization.

And when a change like this does happen, you absolutely can’t assume someone else is handling it.

Who Is Responsible?

Most technical SEO issues will have a solution, but you’ve got to be aware of the problem in the first place. That requires two things:

  1. Someone responsible for identifying and highlighting these issues.
  2. Someone with the necessary skills and expertise to fix them.

Spelled out like this, my point might seem a tad patronizing. But be honest, could you name the person(s) responsible for these in your organization? Who would you say is responsible for proactively and autonomously identifying and raising Cloudflare’s new pay-per-crawl policy with you? And would they agree with your expectation if you asked them?

Oh, and don’t cop out by claiming the responsibility lies with your external SEO partners. Agencies might proactively advise clients whenever there’s “a major disturbance in the Force,” such as a pending Google update. But does your contract with them include monitoring every aspect of your infrastructure, including third-party services? And does this responsibility extend to improving your AI visibility on top of the usual SEO activities? Unless this is explicitly spelled out, there’s no reason to assume they’re actively ensuring all the various AI crawlers can access your site.

In short, most technical SEO debt happens because everyone assumes it’s someone else’s job.

The CMO assumes it’s the developer’s responsibility. It’s all code, right? The developers should know the website needs to rank in search and be visible in AI. Surely, they’ll implement technical SEO best practice by default.

But developers aren’t technical SEO experts in exactly the same way they’re not web designers or UX specialists. They’ll build what they’re briefed to build. They’ll prioritize what you tell them to prioritize.

As a result, the dev team assumes it’s up to the SEO team to flag any new technical changes. But the SEO team assumes all is well because last quarter’s technical audit, based on the same list of checks they’ve relied on for years, didn’t identify anything amiss. And everybody assumes that, if there were going to be any issues with AI visibility, someone else would have raised it by now.

This confusion all helps technical debt to accumulate, unseen and unchecked.

→ Read more: Why Your SEO Isn’t Working, And It’s Not The Team’s Fault

Stop Assuming And Start Doing

The best time to prevent white ants from eating the walls in your home is before you know they’re there. Wait until the problems are obvious, and the expense of fixing all the damage will far outweigh the costs of an initial inspection and a few precautionary measures.

In the same way, don’t wait until it becomes obvious that your brand’s visibility in AI is compromised. Perform the necessary inspections now. Identify and fix any technical issues now that might cause issues for AI crawlers.

A big part of this will be strong communication between your teams, with accountabilities that make clear who is responsible for monitoring and actioning each factor contributing to your overall visibility in AI.

If you don’t, any investment and effort your team puts into optimizing brand content for AI could be wasted.

Stop assuming tomorrow will work like today. Technical SEO debt will impact your AI visibility. That’s not up for debate. The real question is whether you’ll proactively address your technical SEO debt now or wait until the assumptions cause your online visibility to crumble.

Featured Image: SvetaZi/Shutterstock

Server Security Scanner Vulnerability Affects Up To 56M Sites

A critical vulnerability was recently discovered in Imunify360 AV, a security scanner used by web hosting companies to protect over 56 million websites. An advisory by cybersecurity company Patchstack warns that the vulnerability can allow attackers to take full control of the server and every website on it.

Imunify360 AV

Imunify360 AV is a malware scanning system used by multiple hosting companies. The vulnerability was discovered within its AI-Bolit file-scanning engine and within the separate database-scanning module. Because both the file and database scanners are affected, attackers can compromise the server through two paths, which can allow full server takeover and potentially put millions of websites at risk.

Patchstack shared details of the potential impact:

“Remote attackers can embed specifically crafted obfuscated PHP that matches imunify360AV (AI-bolit) deobfuscation signatures. The deobfuscator will execute extracted functions on attacker-controlled data, allowing execution of arbitrary system commands or arbitrary PHP code. Impact ranges from website compromise to full server takeover depending on hosting configuration and privileges.

Detection is non-trivial because the malicious payloads are obfuscated (hex escapes, packed payloads, base64/gzinflate chains, custom delta/ord transformations) and are intended to be deobfuscated by the tool itself.

imunify360AV (Ai-Bolit) is a malware scanner specialized in website-related files like php/js/html. By default, the scanner is installed as a service and works with a root privileges

Shared hosting escalation: On shared hosting, successful exploitation can lead to privilege escalation and root access depending on how the scanner is deployed and its privileges. if imunify360AV or its wrapper runs with elevated privileges an attacker could leverage RCE to move from a single compromised site to complete host control.”

Patchstack shows that the scanner’s own design gives attackers both the method of entry and the mechanism for execution. The tool is built to deobfuscate complex payloads, and that capability becomes the reason the exploit works. Once the scanner decodes attacker-supplied functions, it can run them with the same privileges it already has.

In environments where the scanner operates with elevated access, a single malicious payload can move from a website-level compromise to control of the entire hosting server. This connection between deobfuscation, privilege level, and execution explains why Patchstack classifies the impact as ranging up to full server takeover.

Two Vulnerable Paths: File Scanner and Database Scanner

Security researchers initially discovered a flaw in the file scanner, but the database-scanning module was later found to be vulnerable in the same way. According to the announcement: “the database scanner (imunify_dbscan.php) was also vulnerable, and vulnerable in the exact same way.” Both of the malware scanning components (file and database scanners) pass malicious code into Imunify360’s internal routines that then execute the untrusted code, giving attackers two different ways to trigger the vulnerability.

Why The Vulnerability Is Easy To Exploit

The file-scanner part of the vulnerability required attackers to place a harmful file onto the server in a location that Imunify360 would eventually scan. But the database-scanner part of the vulnerability needs only the ability to write to the database, which is common on shared hosting platforms.

Because comment forms, contact forms, profile fields, and search logs can write data to the database, injecting malicious content becomes easy for an attacker, even without authentication. This makes the vulnerability broader than a typical malware-execution flaw because it turns common user input into a vector for remote code execution.

Vendor Silence And Disclosure Timeline

According to Patchstack, a patch has been issued for Imunify360 AV, but no public statement has been made about the vulnerability and no CVE has been assigned. A CVE (Common Vulnerabilities and Exposures) identifier is a unique label assigned to a specific vulnerability in software. It serves as a public record and provides a standardized way to catalog a vulnerability so that interested parties are made aware of the flaw, particularly for risk management. If no CVE is issued, users and potential users may not learn about the vulnerability, even though the issue is already publicly listed on Imunify360’s Zendesk.

Patchstack explains:

“This vulnerability has been known since late October, and customers began receiving notifications shortly thereafter, and we advise affected hosting providers to reach out to the vendor for additional information on possible exploitation in the wild or any internal investigation results.

Unfortunately there has been no statement released about the issue by Imunify360’s team, and no CVE has yet been assigned. At the same time, the issue has been publicly available on their Zendesk since November 4, 2025.

Based on our review of this vulnerability, we consider the CVSS score to be: 9.9”

Recommended Actions for Administrators

Patchstack recommends that server administrators running Imunify360 AV (AI-bolit) versions prior to 32.7.4.0 immediately apply vendor security updates, or remove the tool if patching is not possible. If a patch cannot be applied right away, the tool’s execution environment should be restricted, for example by running it in an isolated container with minimal privileges. Administrators are also urged to contact CloudLinux / Imunify360 support to report potential exposure, confirm whether their environment was affected, and collaborate on post-incident guidance.
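For administrators who cannot patch immediately, the container-isolation advice above might look something like the following. This is a hypothetical configuration sketch, not Imunify360 documentation: the image name, mount paths, and scanner binary are placeholders for whatever your environment actually uses.

```shell
# Hypothetical hardening sketch: run the scanner as an unprivileged
# user in a throwaway container with no capabilities, a read-only
# root filesystem, and no network, so any code executed during a
# scan cannot modify or reach the host.
docker run --rm \
  --read-only \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  --network=none \
  --user 65534:65534 \
  -v /srv/www:/scan:ro \
  scanner-image:latest /opt/scanner/ai-bolit --path=/scan
```

Isolation of this kind does not fix the underlying flaw, but it sharply limits what a triggered exploit can do until the vendor patch is applied.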


3 Years In, GenAI Upends Ecommerce

In just three years, generative AI has changed how ecommerce businesses attract shoppers and close sales.

GenAI began for many users with ChatGPT, then a novel chatbot. But now the technology creates images, writes copy, produces videos, and even codes websites. The result is both easier-than-ever creativity and heightened competition.

When it debuted ChatGPT on November 30, 2022, OpenAI marked a turning point in productivity. Within months, millions of users were generating blog posts, ad scripts, and product descriptions.

The tools that followed promised to make creative work accessible to anyone. We now know this is only partly true.

ChatGPT’s home page on November 30, 2022, the day it launched three years ago.

Composition

First, consider writing. ChatGPT, Claude, Gemini, Grok, and dozens of other AI tools can draft newsletters, create ads, or write search-engine-optimized copy.

In a recent ecommerce project, I directed AI to generate product copy from this manufacturer’s description:

At less than 5″ wide, the incredibly slim and easy-to-use K-Mini® single-serve coffee maker makes anywhere perfect for great coffee.

Before composing, the AI tool searched the web and considered SEO keywords. After a couple of research steps, it produced several versions, including this one:

The Keurig K-Mini Single-Serve K-Cup Pod coffee maker is so compact that it could fit in your sock drawer.

It was designed for those who live life on the edge of counter space. You know: it’s Monday morning, and your kitchen counter is as cluttered as a teenager’s bedroom floor. Too crowded for a conventional coffee machine.

At less than 5″ wide, this little marvel is here to rescue your mornings with a splash of style and a dash of caffeine.

AI produced descriptions for this coffee maker and then generated various background images from a single photo.

Whether the AI version is better, or even just good enough, is subjective. What is beyond dispute is the speed. The site above aims to generate thousands of product descriptions in hours and then test each one on target shoppers.
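That bulk workflow can be sketched in a few lines of Python. The generator below is a stub standing in for any LLM call, and all names are illustrative; the point is the shape of the pipeline, not a specific vendor’s API.

```python
# Sketch of the bulk-generation workflow: several candidate
# descriptions per SKU, to be A/B tested later. generate_description()
# is a stub standing in for a real LLM call.
def generate_description(manufacturer_copy: str, variant: int) -> str:
    # A real implementation would prompt a model with the
    # manufacturer's copy plus SEO keywords and a tone for this variant.
    return f"[variant {variant}] {manufacturer_copy}"

def generate_catalog(products: dict, variants: int = 3) -> dict:
    # Map each SKU to a list of candidate descriptions; a separate
    # test harness would pick the winning variant per product.
    return {
        sku: [generate_description(copy, v) for v in range(variants)]
        for sku, copy in products.items()
    }

catalog = generate_catalog({"K-MINI": "Slim single-serve coffee maker"})
```

With a real model behind `generate_description`, a loop like this is how a site turns one manufacturer blurb into thousands of testable descriptions in hours.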

Beyond product descriptions, marketers now employ large language models for research, outlines, and drafts of various sorts. AI has become indispensable for most writing projects.

Advertising

Ad design and imagery are just as easy. Midjourney, Adobe Firefly, and Stable Diffusion can produce quality pictures and illustrations in minutes for marketing emails, printed postcards, and more.

For digital advertising, Meta’s new Generative Ads Recommendation Model creates ad variations and tests them against pixel conversions. Other services, such as the start-up AdPrompt, similarly generate ads tailored to audiences.

There is more. Last week, Webflow, a website builder, announced App Gen, its AI-powered vibe-coding tool. This AI can access a site’s content management system and interact with its components, such as navigation.

Webflow’s integration is impressive, as are similar tools such as Shopify’s Magic and Sidekick.

Marketers at ecommerce SMBs can seemingly do more than ever without developers.

New Competition

Yet for ecommerce marketers, AI democratization cuts both ways.

While unlocking efficiency, genAI makes it harder to differentiate, get noticed, and keep up.

The value of a word or image is rapidly decreasing. Content is a mass-produced commodity.

The output itself even risks devolving into bland sameness. When every business can produce nearly unlimited copy, images, and ads, creative volume ceases to be an advantage. Distribution and platform control matter more.

Search engine results are increasingly generative summaries, which reduce organic clicks. Zero-click answers keep shoppers on the search platform rather than sending them to external sites. And AI-driven search ads enable automated bidding and creative optimization, which raises costs.

Finally, agentic commerce is the zero-click equivalent for transactions. AI assistants and chatbots can now purchase directly, removing the store’s website from the customer journey. The very technology that empowers productivity also consolidates discovery and conversion inside third-party ecosystems.

This is the new competition. It requires new skills.

New Skills

One of generative AI’s promises is a half-truth. These tools were supposed to make creative work accessible to anyone.

But they don’t.

Producing content, ads, and experiences requires a different kind of expertise involving prompt engineering, agent building, and AI-human collaboration.

It is simply wrong to assume all marketers can magically produce winning campaigns, great landing pages, or even effective content.

Next Up

Every wave of ecommerce automation and improvement, from hosted storefronts to marketing automation, has spawned new service providers. The same pattern will likely hold in the AI era, resolving the paradox described here.

Expect a new generation of tools to help SMBs and enterprise brands alike reach customers directly.

Generative AI has changed ecommerce marketing forever, but the opportunity remains. Success will come to businesses that use these systems strategically and find new ways to compete, create, and connect.