Google Extends AI Travel Planning And Agentic Booking In Search via @sejournal, @MattGSouthern

Google announced three AI-powered updates to Search that extend how users plan and book travel within AI Mode.

The company is launching Canvas for travel planning on desktop, expanding Flight Deals globally, and rolling out agentic booking capabilities that connect users directly to reservation partners.

The announcement continues Google’s push to handle complete user journeys inside Search rather than directing traffic to publisher sites and booking platforms.

What’s New

Canvas Travel Planning

Canvas creates travel itineraries inside AI Mode’s side panel interface. You describe your trip requirements, select “Create with Canvas,” and receive plans combining flight and hotel data, Google Maps information, and web content.

Canvas travel planning is available on desktop in the US for users opted into the AI Mode experiment in Google Labs.

Flight Deals Global Expansion

Flight Deals uses AI to match flexible travelers with affordable destinations based on natural language descriptions of travel preferences.

Flight Deals previously launched in the US, Canada, and India. The feature has now started rolling out to more than 200 countries and territories.

Agentic Booking Expansion

AI Mode now searches across multiple reservation platforms to find real-time availability for restaurants, events, and local appointments. The system presents curated options with direct booking links to partner sites.

Restaurant booking launches this week in the US without requiring Labs access. Event tickets and local appointment booking remain available to US Labs users.

Why This Matters

Canvas and agentic booking capabilities represent Google handling trip research, planning, and reservations inside its own interface.

People who would previously visit multiple publisher sites to research destinations and compare options can now complete those tasks in AI Mode.

The updates fit Google’s established pattern of verticalizing high-value query types. Rather than presenting traditional search results that send users to external sites, AI Mode guides users through multi-step processes from research to transaction completion.

Looking Ahead

Google provided no timeline for direct flight and hotel booking in AI Mode beyond confirming active development with industry partners.

Watch for whether Google provides analytics or attribution tools that let businesses track bookings initiated through AI Mode. Without visibility into these flows, measuring the impact of AI Mode on travel and local business traffic will be difficult.

LLMs Are Changing Search & Breaking It: What SEOs Must Understand About AI’s Blind Spots via @sejournal, @MattGSouthern

In the last two years, incidents have shown how large language model (LLM)-powered systems can cause measurable harm. Some businesses have lost a majority of their traffic overnight, and publishers have watched revenue decline by over a third.

Tech companies have faced wrongful death lawsuits after teenagers had extensive interactions with chatbots.

AI systems have given dangerous medical advice at scale, and chatbots have made up false claims about real people in defamation cases.

This article looks at the proven blind spots in LLM systems and what they mean for SEOs who work to optimize and protect brand visibility. It covers specific cases and the technical failures behind them.

The Engagement-Safety Paradox: Why LLMs Are Built To Validate, Not Challenge

LLMs face a basic conflict between business goals and user safety. The systems are trained to maximize engagement by being agreeable and keeping conversations going. This design choice increases retention and drives subscription revenue while generating training data.

In practice, it creates what researchers call “sycophancy,” the tendency to tell users what they want to hear rather than what they need to hear.

Stanford PhD researcher Jared Moore demonstrated this pattern. When a user claiming to be dead (showing symptoms of Cotard’s syndrome, a mental health condition) gets validation from a chatbot saying “that sounds really overwhelming” with offers of a “safe space” to explore feelings, the system backs up the delusion instead of giving a reality check. A human therapist would gently challenge this belief while the chatbot validates it.

OpenAI admitted this problem in September after facing a wrongful death lawsuit. The company said ChatGPT was “too agreeable” and failed to spot “signs of delusion or emotional dependency.” That admission came after 16-year-old Adam Raine from California died. His family’s lawsuit showed that ChatGPT’s systems flagged 377 self-harm messages, including 23 with over 90% confidence that he was at risk. The conversations kept going anyway.

The pattern was observed in Raine’s final month. He went from two to three flagged messages per week to more than 20 per week. By March, he spent nearly four hours daily on the platform. OpenAI’s spokesperson later acknowledged that safety guardrails “can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”

Think about what that means. The systems fail at the exact moment of highest risk, when vulnerable users are most engaged. This happens by design when you optimize for engagement metrics over safety protocols.

Character.AI faced similar issues with 14-year-old Sewell Setzer III from Florida, who died in February 2024. Court documents show he spent months in what he perceived as a romantic relationship with a chatbot character. He withdrew from family and friends, spending hours daily with the AI. The company’s business model was built for emotional attachment to maximize subscriptions.

A peer-reviewed study in New Media & Society found users showed “role-taking,” believing the AI had needs requiring attention, and kept using it “despite describing how Replika harmed their mental health.” When the product is addiction, safety becomes friction that cuts revenue.

This creates direct effects for brands using or optimizing for these systems. You’re working with technology that’s designed to agree and validate rather than give accurate information. That design shows up in how these systems handle facts and brand information.

Documented Business Impacts: When AI Systems Destroy Value

The business results of LLM failures are clear and proven. Between 2023 and 2025, companies documented traffic drops and revenue declines directly linked to AI systems.

Chegg: $17 Billion To $200 Million

Education platform Chegg filed an antitrust lawsuit against Google documenting major business impact from AI Overviews. Traffic declined 49% year over year, while Q4 2024 revenue hit $143.5 million (down 24% year-over-year). Market value collapsed from $17 billion at peak to under $200 million, a 98% decline. The stock trades at around $1 per share.

CEO Nathan Schultz testified directly: “We would not need to review strategic alternatives if Google hadn’t launched AI Overviews. Traffic is being blocked from ever coming to Chegg because of Google’s AIO and their use of Chegg’s content.”

The case argues Google used Chegg’s educational content to train AI systems that directly compete with and replace Chegg’s business model. This represents a new form of competition where the platform uses your content to eliminate your traffic.

Giant Freakin Robot: Traffic Loss Forces Shutdown

Independent entertainment news site Giant Freakin Robot shut down after traffic collapsed from 20 million monthly visitors to “a few thousand.” Owner Josh Tyler attended a Google Web Creator Summit where engineers confirmed there was “no problem with content” but offered no solutions.

Tyler documented the experience publicly: “GIANT FREAKIN ROBOT isn’t the first site to shut down. Nor will it be the last. In the past few weeks alone, massive sites you absolutely have heard of have shut down. I know because I’m in contact with their owners. They just haven’t been brave enough to say it publicly yet.”

At the same summit, Google allegedly admitted prioritizing large brands over independent publishers in search results regardless of content quality. This wasn’t leaked or speculated but stated directly to publishers by company reps. Quality became secondary to brand recognition.

There’s a clear implication for SEOs. You can execute perfect technical SEO, create high-quality content, and still watch traffic disappear because of AI.

Penske Media: 33% Revenue Decline And $100 Million Lawsuit

In September, Penske Media Corporation (publisher of Rolling Stone, Variety, Billboard, Hollywood Reporter, Deadline, and other brands) sued Google in federal court. The lawsuit showed specific financial harm.

Court documents allege that 20% of searches linking to Penske Media sites now include AI Overviews, and that percentage is rising. Affiliate revenue declined more than 33% by the end of 2024 compared to peak. Click-throughs have declined since AI Overviews launched in May 2024. The company also documented lost advertising and subscription revenue on top of affiliate losses.

CEO Jay Penske stated: “We have a duty to protect PMC’s best-in-class journalists and award-winning journalism as a source of truth, all of which is threatened by Google’s current actions.”

This is the first lawsuit by a major U.S. publisher targeting AI Overviews specifically with quantified business harm. The case seeks treble damages under antitrust law, permanent injunction, and restitution. Claims include reciprocal dealing, unlawful monopoly leveraging, monopolization, and unjust enrichment.

Even publishers with established brands and resources are showing revenue declines. If Rolling Stone and Variety can’t maintain click-through rates and revenue with AI Overviews in place, what does that mean for your clients or your organization?

The Attribution Failure Pattern

Beyond traffic loss, AI systems consistently fail to give proper credit for information. A Columbia University Tow Center study showed a 76.5% error rate in attribution across AI search systems. Even when publishers allow crawling, attribution doesn’t improve.

This creates a new problem for brand protection. Your content can be used, summarized, and presented without proper credit, so users get their answer without knowing the source. You lose both traffic and brand visibility at the same time.

SEO expert Lily Ray documented this pattern, finding a single AI Overview contained 31 Google property links versus seven external links (a 10:1 ratio favoring Google’s own properties). She stated: “It’s mind-boggling that Google, which pushed site owners to focus on E-E-A-T, is now elevating problematic, biased and spammy answers and citations in AI Overview results.”

When LLMs Can’t Tell Fact From Fiction: The Satire Problem

Google AI Overviews launched with errors that made the system briefly notorious. The technical problem wasn’t a bug. It was an inability to distinguish satire, jokes, and misinformation from factual content.

The system recommended adding glue to pizza sauce (sourced from an 11-year-old Reddit joke), suggested eating “at least one small rock per day”, and advised using gasoline to cook spaghetti faster.

These weren’t isolated incidents. The system consistently pulled from Reddit comments and satirical publications like The Onion, treating them as authoritative sources. When asked about edible wild mushrooms, Google’s AI emphasized characteristics shared by deadly mimics, creating potentially “sickening or even fatal” guidance, according to Purdue University mycology professor Mary Catherine Aime.

The problem extends beyond Google. Perplexity AI has faced multiple plagiarism accusations, including adding fabricated paragraphs to actual New York Post articles and presenting them as legitimate reporting.

For brands, this creates specific risks. If an LLM system sources information about your brand from Reddit jokes, satirical articles, or outdated forum posts, that misinformation gets presented with the same confidence as factual content. Users can’t tell the difference because the system itself can’t tell the difference.

The Defamation Risk: When AI Makes Up Facts About Real People

LLMs generate plausible-sounding false information about real people and companies. Several defamation cases show the pattern and legal implications.

Australian mayor Brian Hood threatened the first defamation lawsuit against an AI company in April 2023 after ChatGPT falsely claimed he had been imprisoned for bribery. In reality, Hood was the whistleblower who reported the bribes. The AI inverted his role from whistleblower to criminal.

Radio host Mark Walters sued OpenAI after ChatGPT fabricated claims that he embezzled funds from the Second Amendment Foundation. When journalist Fred Riehl asked ChatGPT to summarize an actual lawsuit, the system generated a completely fictional complaint naming Walters as a defendant accused of financial misconduct. Walters was never a party to the lawsuit nor mentioned in it.

The Georgia Superior Court dismissed the Walters case, finding OpenAI’s disclaimers about potential errors provided legal protection. The ruling established that “extensive warnings to users” can shield AI companies from defamation liability when the false information isn’t published by users.

The legal landscape remains unsettled. While OpenAI won the Walters case, that doesn’t mean all AI defamation claims will fail. The key issues are whether the AI system publishes false information about identifiable people and whether companies can disclaim responsibility for their systems’ outputs.

LLMs can generate false claims about your company, products, or executives. These false claims get presented with confidence to users. You need monitoring systems to catch these fabrications before they cause reputational damage.

Health Misinformation At Scale: When Bad Advice Becomes Dangerous

When Google AI Overviews launched, the system provided dangerous health advice, including recommending drinking urine to pass kidney stones and suggesting health benefits of running with scissors.

The problem extends beyond obvious absurdities. A Mount Sinai study found AI chatbots vulnerable to spreading harmful health information. Researchers could manipulate chatbots into providing dangerous medical advice with simple prompt engineering.

Meta AI’s internal policies explicitly allowed the company’s chatbots to provide false medical information, according to a 200+ page document exposed by Reuters.

For healthcare brands and medical publishers, this creates risks. AI systems might present dangerous misinformation alongside or instead of your accurate medical content. Users might follow AI-generated health advice that contradicts evidence-based medical guidance.

What SEOs Need To Do Now

Here’s what you need to do to protect your brands and clients:

Monitor For AI-Generated Brand Mentions

Set up monitoring systems to catch false or misleading information about your brand in AI systems. Test major LLM platforms monthly with queries about your brand, products, executives, and industry.
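A minimal sketch of what automated monthly testing could look like, assuming the official openai Python package and API access. The brand queries, model name, and output format below are placeholders, not a prescribed setup:

```python
# Sketch: query an LLM about your brand and archive the answers.
# Assumes the official "openai" Python package; queries and model are illustrative.
import datetime
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BRAND_QUERIES = [
    "What is Example Corp known for?",
    "Has Example Corp's CEO been involved in any controversies?",
    "Compare Example Corp's products to its competitors.",
]

results = []
for query in BRAND_QUERIES:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": query}],
    )
    results.append({
        "query": query,
        "answer": response.choices[0].message.content,
        "checked_at": datetime.datetime.utcnow().isoformat(),
    })

# A timestamped archive doubles as documentation if you later need to report errors.
with open(f"brand-audit-{datetime.date.today()}.json", "w") as f:
    json.dump(results, f, indent=2)
```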

When you find false information, document it thoroughly with screenshots and timestamps. Report it through the platform’s feedback mechanisms. In some cases, you may need legal action to force corrections.

Add Technical Safeguards

Use robots.txt to control which AI crawlers access your site. Major systems like OpenAI’s GPTBot, Google-Extended, and Anthropic’s ClaudeBot respect robots.txt directives. Keep in mind that blocking these crawlers means your content won’t appear in AI-generated responses, reducing your visibility.
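For illustration only, a robots.txt implementing one such policy might look like the following. The crawler tokens are the documented ones; which crawlers to block is a judgment call for each site, not a recommendation:

```
# Block OpenAI's training/retrieval crawler entirely
User-agent: GPTBot
Disallow: /

# Block Anthropic's crawler
User-agent: ClaudeBot
Disallow: /

# Allow Google-Extended (Google's AI training control) full access
User-agent: Google-Extended
Disallow:
```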

The key is finding a balance that allows enough access to influence how your content appears in LLM outputs while blocking crawlers that don’t serve your goals.

Consider adding terms of service that directly address AI scraping and content use. While legal enforcement varies, clear Terms of Service (TOS) give you a foundation for possible legal action if needed.

Monitor your server logs for AI crawler activity. Understanding which systems access your content and how frequently helps you make informed decisions about access control.
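As a starting point, a short script can tally crawler hits from a standard access log. This is a sketch; the log path and the user-agent substrings to count are assumptions you’d adapt to your own stack:

```python
# Sketch: count AI-crawler hits in a standard access log.
# User-agent substrings and log path are illustrative; adjust for your setup.
from collections import Counter

AI_CRAWLERS = ["GPTBot", "ClaudeBot", "Google-Extended", "PerplexityBot", "CCBot"]

counts = Counter()
with open("/var/log/nginx/access.log") as log:
    for line in log:
        for bot in AI_CRAWLERS:
            if bot in line:
                counts[bot] += 1

for bot, hits in counts.most_common():
    print(f"{bot}: {hits} requests")
```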

Advocate For Industry Standards

Individual companies can’t solve these problems alone. The industry needs standards for attribution, safety, and accountability. SEO professionals are well-positioned to push for these changes.

Join or support publisher advocacy groups pushing for proper attribution and traffic preservation. Organizations like News Media Alliance represent publisher interests in discussions with AI companies.

Participate in public comment periods when regulators solicit input on AI policy. The FTC, state attorneys general, and Congressional committees are actively investigating AI harms. Your voice as a practitioner matters.

Support research and documentation of AI failures. The more documented cases we have, the stronger the argument for regulation and industry standards becomes.

Push AI companies directly through their feedback channels by reporting errors when you find them and escalating systemic problems. Companies respond to pressure from professional users.

The Path Forward: Optimization In A Broken System

There is a lot of specific and concerning evidence. LLMs cause measurable harm through design choices that prioritize engagement over accuracy, through technical failures that create dangerous advice at scale, and through business models that extract value while destroying it for publishers.

Two teenagers died, multiple companies collapsed, and major publishers lost 30%+ of revenue. Courts are sanctioning lawyers for AI-generated lies, state attorneys general are investigating, and wrongful death lawsuits are proceeding. This is all happening now.

As AI integration accelerates across search platforms, the magnitude of these problems will scale. More traffic will flow through AI intermediaries, more brands will face lies about them, more users will receive made-up information, and more businesses will see revenue decline as AI Overviews answer questions without sending clicks.

Your role as an SEO now includes responsibilities that didn’t exist five years ago. The platforms rolling out these systems have shown they won’t address these problems proactively. Character.AI added minor protections only after lawsuits, OpenAI admitted sycophancy problems only after a wrongful death case, and Google pulled back AI Overviews only after public proof of dangerous advice.

Change within these companies comes from external pressure, not internal initiative. That means the pressure must come from practitioners, publishers, and businesses documenting harm and demanding accountability.

The cases here are just the beginning. Now that you understand the patterns and behavior, you’re better equipped to see problems coming and develop strategies to address them.

Featured Image: Roman Samborskyi/Shutterstock

ChatGPT Outage Affects APIs And File Uploads via @sejournal, @martinibuster

OpenAI is experiencing a widespread outage affecting two systems: its APIs and ChatGPT. The outage has been ongoing for at least half an hour as of publication.

Batch API Jobs Stuck Outage

The first issue is that Batch API jobs get stuck in the finalization state. OpenAI monitors twelve API components for uptime, and it’s the Batch component that’s experiencing “degraded” performance. The issue has been ongoing since 3:54 PM.

According to OpenAI:

“Subset of Batch API jobs stuck in finalizing state”

ChatGPT Uploads Outage

The other error is that ChatGPT file uploads are failing. OpenAI describes this as a partial outage.

OpenAI’s official explanation:

“File uploads to ChatGPT conversations are failing for some users, giving an error message indicating the file has expired.”

This issue has been ongoing since 3:53 PM.

Screenshot of OpenAI Uploads Outage

Data: Translated Sites See 327% More Visibility in AI Overviews

This post was sponsored by Weglot. The opinions expressed in this article are the sponsor’s own.

When Google’s AI Overviews launched in 2024, dozens of questions quickly surfaced among SEO professionals, one being: if AI now curates and summarizes search results, how do websites earn visibility, especially across languages?

Weglot recently conducted a data-driven study, analyzing 1.3 million citations across Google AI Overviews and ChatGPT to determine whether LLMs that cite content in one language also cite it in others.

The result: translated websites saw up to 327% more visibility in AI Overviews than untranslated ones, a clear signal that international SEO is becoming inseparable from AI search.

What’s more, websites with another language available were also more likely to be cited in AI Overviews, regardless of the language in which the search was made.

This shift is redefining the rules of visibility. AI Overviews and large language models (LLMs) now mediate how information is discovered. Instead of ranking pages, they “cite” sources in generated responses.

But with that shift comes a new risk: if your website isn’t available in the user’s search language, does AI simply overlook it, or worse, send users to Google Translate’s proxy page instead?

The risk with Google’s Translate proxy is that while it does the translation work for you, you have no control over the translations of your content. Worse still, you don’t get any of the traffic benefits, as users are not directed to your site.

The Study

Here’s how the research worked. To understand how translation affects AI visibility, Weglot focused on Spanish-language websites across two markets: Spain and Mexico.

The study was then split into two phases. Phase one focused on websites that weren’t translated, and therefore only displayed the language intended for their market, in this case, Spanish.

In that phase, Weglot looked at 153 websites without English translations: 98 from Spain and 55 from Mexico. Weglot deliberately selected high-traffic sites that offered no English versions.

Phase two involved a comparison group of 83 Spanish and Mexican sites with versions in both Spanish and English. This allowed Weglot to directly compare the performance of translated versus untranslated content.

In total, this generated 22,854 queries in phase one and 12,138 in phase two. The methodology converted the top 50 non-branded keywords of each site into queries that users would likely search, and then these were translated between the Spanish and English versions.

In total, 1.3 million citations were analyzed.

The Key Results

Untranslated Sites Have Very Low AI Search Visibility

The findings show that untranslated websites experience a substantial drop in visibility for searches conducted in languages they don’t offer, despite maintaining strong visibility in their available language.

Put plainly, untranslated sites lose massive visibility. Even when these Spanish websites performed well in Spanish searches, they virtually disappeared in English searches.

Looking at this data further within Google AI Overviews:

  • The 98 untranslated sites from Spain received 17,094 citations for Spanish queries vs. 2,810 citations for the equivalent searches in English, a 431% gap in visibility.
  • Untranslated sites in Mexico showed a similar pattern: 12,038 citations for Spanish queries vs. 3,450 for English, a 213% gap.

Even ChatGPT, though slightly more balanced, still favored translated sites: sites from Spain received 3.5% fewer citations in English, and sites from Mexico 4.9% fewer.

Image created by Weglot, November 2025

Translated Sites Have 327% More AI Search Visibility

But what happens when you do translate your site?

Bringing in the comparison group of Spanish websites that also have an English version, we can see that translated sites dramatically close the visibility gap: having a second language transformed visibility within Google AI Overviews.

Google AI Overviews:

  • Translated sites in Spain saw 10,046 citations for Spanish queries vs. 8,048 in English, only a 22% gap.
  • Translated sites in Mexico showed 5,527 citations for Spanish queries vs. 3,325 for English, a 59% gap.

Overall, translated sites achieved 327% more visibility than untranslated ones and earned 24% more total citations per query.

When looking at ChatGPT, the bias almost vanished. Translated sites saw near-equal citations in both languages.

Image created by Weglot, November 2025

Next Steps: Translate Your Site To Boost Global Visibility In AI SERPs

Translation does more than boost visibility; it multiplies it.

Not only does offering multiple languages ensure your site gets picked up for searches in those languages, it also adds to the overall visibility of your site as a whole.

The study found that translated sites perform better across all metrics. The data shows that translated sites received 24% more citations per prompt than untranslated sites.

Looking at this by language, translation resulted in a 33% increase in English citations and a 16% increase in Spanish citations per query.

Weglot’s findings indicate that translation acts as a signal of authority and reliability for AIOs and ChatGPT, boosting citation performance across all languages, not only the ones into which content is translated.

Image created by Weglot, November 2025

AI Search Rewards Translated Content as a Visibility Signal

Traditional international SEO has long focused on hreflang tags and localized keywords. But in the age of AI search, translation itself becomes a visibility signal:

  1. Language alignment: AI engines prioritize content matching the query’s language.
  2. Authority building: Translated content attracts engagement across markets, improving perceived reliability.
  3. Traffic control: Proper translations prevent Google Translate proxies from intercepting clicks.
  4. Semantic reach: Multilingual content broadens your surface area for AI training and citation.
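None of this replaces the basics: the hreflang annotations mentioned above are still how you declare language versions to crawlers. A minimal example for a page offered in English and Spanish (URLs are placeholders):

```html
<link rel="alternate" hreflang="en" href="https://example.com/en/page" />
<link rel="alternate" hreflang="es" href="https://example.com/es/page" />
<link rel="alternate" hreflang="x-default" href="https://example.com/" />
```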

Put simply: If your content isn’t in the language of the question, it’s unlikely it will be in the answer either.

The Business Impact

The consequences aren’t theoretical. One case in Weglot’s dataset, a major Spanish book retailer selling English-language titles worldwide without an English version of its site, shows the impact.

When English speakers searched for relevant books:

  • The site appeared 64% less often in Google AI Overviews and ChatGPT.
  • In 36% of the cases where it did appear, the link pointed to Google Translate’s proxy, not the retailer’s own domain.

Despite offering exactly what English users wanted, the business lost visibility, traffic, and ultimately, sales.

The Bigger Picture: AI Search Is Redefining SEO and Translation Is Now a Growth Strategy

The implications reach far beyond Spain or Mexico, or even the Spanish language.

As AI search evolves, the SEO playbook is expanding. Ranking isn’t just about “position one” anymore; it’s about being cited, summarized, and surfaced by machines trained on multilingual web content.

Weglot’s findings point to a future where translation is both an SEO and an AI strategy and not a localization afterthought.

With Google AIOs now live in multiple languages and ChatGPT integrating real-time web data, multilingual visibility has become an equity issue: sites optimized for one language risk being invisible in another.

Image created by Weglot, November 2025

Final Takeaway: Untranslated Sites Are Invisible in AI Search

The evidence is clear: Untranslated = unseen. Website translation is now one of the strongest levers for AIO visibility.

As AI continues to shape how search engines understand relevance, translation isn’t just about accessibility; it’s how your brand gets recognized by algorithms and audiences alike.

For the easiest way to translate a website, start your free trial now!

Plus, enjoy a 15% discount for 12 months on public plans by using the promo code SEARCH15 on a paid plan purchase.

Image Credits

Featured Image: Image by Weglot. Used with permission.

In-Post Images: Image by Weglot. Used with permission.

llms.txt: The Web’s Next Great Idea, Or Its Next Spam Magnet via @sejournal, @DuaneForrester

At a recent conference, I was asked if llms.txt mattered. I’m personally not a fan, and we’ll get into why below. I listened to a friend who told me I needed to learn more about it as she believed I didn’t fully understand the proposal, and I have to admit that she was right. After doing a deep dive on it, I now understand it much better. Unfortunately, that only served to crystallize my initial misgivings. And while this may sound like a single person disliking an idea, I’m actually trying to view this from the perspective of the search engine or the AI platform. Why would they, or why wouldn’t they, adopt this protocol? And that POV led me to some, I think, interesting insights.

We all know that search is not the only discovery layer anymore. Large-language-model (LLM)-driven tools are rewriting how web content is found, consumed, and represented. The proposed protocol, called llms.txt, attempts to help websites guide those tools. But the idea carries the same trust challenges that killed earlier “help the machine understand me” signals. This article explores what llms.txt is meant to do (as I understand it), why platforms would be reluctant, how it can be abused, and what must change before it becomes meaningful.

Image Credit: Duane Forrester

What llms.txt Hoped To Fix

Modern websites are built for human browsers: heavy JavaScript, complex navigation, interstitials, ads, dynamic templates. But most LLMs, especially at inference time, operate in constrained environments: limited context windows, single-pass document reads, and simpler retrieval than traditional search indexers. The original proposal from Answer.AI suggests adding an llms.txt markdown file at the root of a site, which lists the most important pages, optionally with flattened content so AI systems don’t have to scramble through noise.

Supporters describe the file as “a hand-crafted sitemap for AI tools” rather than a crawl-block file. In short, the theory: Give your site’s most valuable content in a cleaner, more accessible format so tools don’t skip it or misinterpret it.
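For context, the proposed format is an ordinary markdown file served at /llms.txt: an H1 title, a blockquote summary, then sections of annotated links. A minimal sketch with placeholder URLs and descriptions:

```markdown
# Example Store

> Direct-to-consumer hiking gear retailer. Key guides and policies below.

## Guides

- [Boot sizing guide](https://example.com/guides/boot-sizing.md): how to measure and fit hiking boots
- [Layering basics](https://example.com/guides/layering.md): choosing base, mid, and shell layers

## Policies

- [Returns](https://example.com/policies/returns.md): full 60-day return policy
```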

The Trust Problem That Never Dies

If you step back, you discover this is a familiar pattern. Early in the web’s history, something like the meta keywords tag let a site declare what it was about; it was widely abused and ultimately ignored. Similarly, authorship markup (rel=author, etc) tried to help machines understand authority, and again, manipulation followed. Structured data (schema.org) succeeded only after years of governance and shared adoption across search engines. llms.txt sits squarely inside this lineage: a self-declared signal that promises clarity but trusts the publisher to tell the truth. Without verification, every little root-file standard becomes a vector for manipulation.

The Abuse Playbook (What Spam Teams See Immediately)

What concerns platform policy teams is plain: If a website publishes a file called llms.txt and claims whatever it likes, how does the platform know that what’s listed matches the live content users see, or can be trusted in any way? Several exploit paths open up:

  1. Cloaking through the manifest. A site lists pages in the file that are hidden from regular visitors or behind paywalls, then the AI tool ingests content nobody else sees.
  2. Keyword stuffing or link dumping. The file becomes a directory stuffed with affiliate links, low-value pages, or keyword-heavy anchors aimed at gaming retrieval.
  3. Poisoning or biasing content. If agents trust manifest entries more than the crawl of messy HTML, a malicious actor can place manipulative instructions or biased lists that affect downstream results.
  4. Third-party link chains. The file could point to off-domain URLs, redirect farms, or content islands, making your site a conduit or amplifier for low-quality content.
  5. Trust laundering. The presence of a manifest might lead an LLM to assign higher weight to listed URLs, so a thin or spammy page gets a boost purely by appearance of structure.

The broader commentary flags this risk. For instance, some industry observers argue that llms.txt “creates opportunities for abuse, such as cloaking.” And community feedback apparently confirms minimal actual uptake: “No LLM reads them.” That absence of usage ironically means fewer real-world case studies of abuse, but it also means fewer safety mechanisms have been tested.

Why Platforms Hesitate

From a platform’s viewpoint, the calculus is pragmatic: New signals add cost, risk, and enforcement burden. Here’s how the logic works.

First, signal quality. If llms.txt entries are noisy, spammy, or inconsistent with the live site, then trusting them can reduce rather than raise content quality. Platforms must ask: Will this file improve our model’s answer accuracy or create risk of misinformation or manipulation?

Second, verification cost. To trust a manifest, you need to cross-check it against the live HTML, canonical tags, structured data, site logs, etc. That takes resources. Without verification, a manifest is just another list that might lie.

Third, abuse handling. If a bad actor publishes an llms.txt manifest that lists misleading URLs which an LLM ingests, who handles the fallout? The site owner? The AI platform? The model provider? That liability issue is real.

Fourth, user-harm risk. An LLM citing content from a manifest might produce inaccurate or biased answers. This just adds to the current problem we already face with inaccurate answers and people following incorrect, wrong, or dangerous answers.

Google has already stated that it will not rely on llms.txt for its “AI Overviews” feature and continues to follow “normal SEO.” And John Mueller wrote: “FWIW no AI system currently uses llms.txt.” So the tools that could use the manifest are largely staying on the sidelines. This reflects the idea that a root-file standard without established trust is a liability.

Why Adoption Without Governance Fails

Every successful web standard has shared DNA: a governing body, a clear vocabulary, and an enforcement pathway. The standards that survive all answer one question early … “Who owns the rules?”

Schema.org worked because that answer was clear. It began as a coalition between Bing, Google, Yahoo, and Yandex. The collaboration defined a bounded vocabulary, agreed syntax, and a feedback loop with publishers. When abuse emerged (fake reviews, fake product data), those engines coordinated enforcement and refined documentation. The signal endured because it wasn’t owned by a single company or left to self-police.

Robots.txt, in contrast, survived by being minimal. It didn’t try to describe content quality or semantics. It only told crawlers what not to touch. That simplicity reduced its surface area for abuse. It required almost no trust between webmasters and platforms. The worst that could happen was over-blocking your own content; there was no incentive to lie inside the file.

llms.txt lives in the opposite world. It invites publishers to self-declare what matters most and, in its full-text variant, what the “truth” of that content is. There’s no consortium overseeing the format, no standardized schema to validate against, and no enforcement group to vet misuse. Anyone can publish one. Nobody has to respect it. And no major LLM provider today is known to consume it in production. Maybe they are, privately, but publicly, no announcements about adoption.

What Would Need To Change For Trust To Build

To shift from optional neat-idea to actual trusted signal, several conditions must be met, and each of these entails a cost in either dollars or human time, so again, dollars.

  • First, manifest verification. A signature or DNS-based verification could tie an llms.txt file to site ownership, reducing spoof risk. (cost to website)
  • Second, cross-checking. Platforms should validate that URLs listed correspond to live, public pages, and identify mismatch or cloaking via automated checks. (cost to engine/platform)
  • Third, transparency and logging. Public registries of manifests and logs of updates would make dramatic changes visible and allow community auditing. (cost to someone)
  • Fourth, measurement of benefit. Platforms need empirical evidence that ingesting llms.txt leads to meaningful improvements in answer correctness, citation accuracy, or brand representation. Until then, this is speculative. (cost to engine/platform)
  • Finally, abuse deterrence. Mechanisms must be built to detect and penalize spammy or manipulative manifest usage. Without that, spam teams simply assume negative benefit. (cost to engine/platform)

Until those elements are in place, platforms will treat llms.txt as optional at best or irrelevant at worst. So maybe you get a small benefit? Or maybe not…

The Real Value Today

For site owners, llms.txt still may have some value, but not as a guaranteed path to traffic or “AI ranking.” It can function as a content alignment tool, guiding internal teams to identify priority URLs you want AI systems to see. For documentation-heavy sites, internal agent systems, or partner tools that you control, it may make sense to publish a manifest and experiment.

However, if your goal is to influence large public LLM-powered results (such as those by Google, OpenAI, or Perplexity), you should tread cautiously. There is no public evidence those systems honor llms.txt yet. In other words: Treat llms.txt as a “mirror” of your content strategy, not a “magnet” pulling traffic. Of course, this means building the file(s) and maintaining them, so factor in the added work vs. whatever return you believe you will receive.

Closing Thoughts

The web keeps trying to teach machines about itself. Each generation invents a new format, a new way to declare “here’s what matters.” And each time the same question decides its fate: “Can this signal be trusted?” With llms.txt, the idea is sound, but the trust mechanisms aren’t yet baked in. Until verification, governance, and empirical proof arrive, llms.txt will reside in the grey zone between promise and problem.

This post was originally published on Duane Forrester Decodes.


Featured Image: Roman Samborskyi/Shutterstock

Data Shows How AI Overviews Is Ranking Shopping Keywords via @sejournal, @martinibuster

BrightEdge’s latest research shows that Google’s AI Overviews are now appearing in ways that reflect what BrightEdge describes as “deliberate, aggressive choices” about where AI shows up and where it does not. These trends show marketers where AI search is showing up within the buyer’s journey and what businesses should expect.

The data indicates that Google is concentrating AI in parts of the shopping process where it gives clear informational value, particularly during research and evaluation. This aligns AI Overviews with the points in the shopping journey where users need help comparing options or understanding product details.

BrightEdge reports that Google retained only about 30 percent of the AI Overview keywords that appeared at the peak of its September 1 through October 15, 2025 research window. The retained queries also tended to have higher search volume than the removed ones, which BrightEdge notes is the opposite of the pattern observed in 2024. This fits with the higher retention in categories where shoppers look for explanations, comparisons, and instructional information.

BrightEdge explains:

“The numbers paint an interesting story: Google retained only 30% of its peak AI Overview keywords. But here’s what makes 2025 fundamentally different: those retained keywords have HIGHER search volume than removed ones—the complete opposite of 2024. Google isn’t just pulling back; it’s being strategic about which searches deserve AI guidance.”

The shifting behavior of AI Overviews shows how actively Google is tuning its system. BrightEdge observed a spike from 9 percent to 26 percent coverage on September 18 before returning to 9 percent soon after. This change signals ongoing testing. The year-over-year overlap of AI Overview keywords is only 18 percent, which BrightEdge calls a “massive reshuffling” that shows “active experimentation” and requires marketers to plan for change rather than stability. The volatility shows Google may be experimenting or responding to user trends and that the queries shown in AI Overviews can change over time.

My opinion is that Google is likely responding to user trends, testing how users respond to AI Overviews, then using the data to show more if reactions are positive.

AI Is A Comparison And Evaluation Layer

BrightEdge’s research indicates that AI Overviews aligns with shopper intent. Google places AI in research queries such as “best TV for gaming,” continues support for evaluation queries like “Samsung vs LG,” and then withdraws when users show purchase intent with searches like “Samsung S95C price.”

These examples show that AI serves as an educational and comparison layer, not a transactional one. When a shopper reaches a buying decision, Google steps back and lets traditional results handle the final step. This apparent alignment with comparison and evaluation means Google is confident in using AI Overviews as a part of the shopping journey.

Usefulness Varies Across Categories

The data shows that AI’s usefulness varies across categories, and Google adjusts AIO keyword retention based on these needs. Categories that retained AI Overviews, such as Grocery, TV and Home Theater, and Small Appliances, share a pattern.

Users rely on comparison, explanation, and instruction during their decisions. In contrast, categories with low retention, like Furniture and Home, rely on visual browsing rather than text-based evaluation. This limits the value of AI. Google’s category patterns show that AI appears more often in categories where text-based information (such as comparison, explanation, and instruction) guides decisions.

Google’s keyword filtering clarifies how AI fits into the shopping journey. Among retained queries, a little more than a quarter are evaluation or comparison searches, including “best [product]” and “X vs Y” terms. These are queries where users need background and guidance. In contrast, Google removes bottom-funnel keywords: price, buy, deals, and specific product names. This shows Google’s focus is on how useful AI is for each intent. AI educates and guides but does not handle the final purchase step.

Shopping Trends Influence AI Appearance

The shopping calendar shapes how AI appears in search results. BrightEdge describes the typical shopping journey as consisting of research in November, evaluation and comparison in early December, and buying in late December. AI helps shoppers understand options in November, assists with comparisons in early December, and by late December, AI tends to be less influential and traditional search results tend to complete the sale.

This makes November the key moment for making evaluation and comparison content easier for AI to cite. Once December arrives, the chance for AI-driven discovery shrinks because consumers have moved on to the final leg of their shopping journey, purchase.

These findings mean that brands should align their content strategies with the points in the journey where AI Overviews are active. BrightEdge advises identifying evaluation and transactional pages, ensuring that comparison content is indexed early, and watching category-specific retention patterns. The data indicates two areas where brands can focus their efforts. One is supporting AI during research and review stages. The other is improving organic search visibility for purchasing queries. The 18 percent year-over-year consistency figure also shows that flexibility is needed because the queries shown in AI Overviews change frequently.

Although the behavior of AI Overviews may seem volatile, BrightEdge’s research suggests that the changes follow a consistent pattern. AI surfaces when people are learning and evaluating and withdraws when users shift into buying. Categories that require explanations or comparisons see the highest retention in AI Overviews, and November remains the key period when AI can use that content. The overall pattern gives brands a clearer view of how AI fits into the shopping journey and how user intent shapes where AI shows up.

Read BrightEdge’s report:
Google AI Overview Holiday Shopping Test: The 57% Pullback That Changes Everything

Featured Image by Shutterstock/Misselss

Why WordPress 6.9 Abilities API Is Consequential And Far-Reaching via @sejournal, @martinibuster

WordPress 6.9, scheduled for release on December 2, 2025, is shipping with a new Abilities API, a system designed to make advanced AI-driven functionality possible for themes and plugins. The Abilities API will standardize how plugins, themes, and core describe what they can do in a format that humans and machines can understand.

This positions WordPress sites to be understood and used more reliably by AI agents and automation tools, since the Abilities API provides the structured information those systems need to interact with site functionality in a predictable way.

The Abilities API is designed to address a long-standing issue in WordPress: functionality has been scattered across custom functions, AJAX handlers, and plugin-specific implementations. According to WordPress, the purpose of the API is to provide a common way for WordPress core, plugins, and themes to describe what they can do in a standardized, machine-readable form.

This approach enables discoverability, clear validation, and predictable execution wherever an ability originates. By centralizing the description and exposure of capabilities, the Abilities API gives a single point of reference for functionality that might otherwise be scattered across different implementations.

What An Ability Is

The announcement defines an “ability” as a self-contained unit of functionality that includes its inputs, outputs, permissions, and execution logic. This structure allows abilities to be managed as separate pieces of functionality rather than fragments buried in theme or plugin code. WordPress explains that registering abilities through the API lets developers define permission checks, execution callbacks, and validation requirements, ensuring predictable behavior wherever the ability is used. By replacing isolated functions with defined units, WordPress creates a clearer and more open system for interacting with its features.

What Developers Gain From Abilities API

Developers gain several advantages by registering functionality as abilities. According to the announcement, abilities become discoverable through standardized interfaces, which means they can be queried, listed, and inspected across different contexts. Developers can organize them into categories, validate inputs and outputs, and apply permission rules that define who or what can execute them. The announcement notes that one benefit is automatic exposure through REST API endpoints under the wp-abilities/v1 namespace. This setup shifts WordPress from custom-coded actions to a system where functionality is defined in a consistent and reachable way.
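As an illustration of that exposure, a client could list a site’s registered abilities over REST. This is only a sketch: the wp-abilities/v1 namespace comes from the announcement, but the “/abilities” route and the response fields below are assumptions, and real sites may require authentication:

```python
# Sketch: list a site's registered abilities over the REST API.
# The "wp-abilities/v1" namespace is from the WordPress announcement;
# the "/abilities" route and fields ("name", "description") are assumed.
import requests

SITE = "https://example.com"

resp = requests.get(f"{SITE}/wp-json/wp-abilities/v1/abilities", timeout=10)
resp.raise_for_status()

for ability in resp.json():
    print(ability.get("name"), "-", ability.get("description"))
```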

Abilities Best Practices

One of the frustrating pain points for WordPress users is when a plugin or theme conflicts with another one. This happens for a variety of reasons, but in the case of the Abilities API, WordPress has created a set of rules that should help prevent conflicts and errors.

WordPress explains the practices:

Ability names should follow these practices:

  • Use namespaced names to prevent conflicts (e.g., my-plugin/my-ability)
  • Use only lowercase alphanumeric characters, dashes, and forward slashes
  • Use descriptive, action-oriented names (e.g., process-payment, generate-report)
  • The format should be namespace/ability-name
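Those conventions are easy to check mechanically. Here is a small validator based only on the rules above; the helper function is hypothetical, not part of the API:

```python
# Validate ability names against the conventions above: lowercase
# alphanumerics and dashes, in "namespace/ability-name" form.
import re

ABILITY_NAME = re.compile(r"^[a-z0-9-]+/[a-z0-9-]+$")

def is_valid_ability_name(name: str) -> bool:
    return bool(ABILITY_NAME.match(name))

assert is_valid_ability_name("my-plugin/process-payment")
assert not is_valid_ability_name("MyPlugin/ProcessPayment")  # uppercase rejected
assert not is_valid_ability_name("process-payment")          # missing namespace
```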

Abilities API

The Abilities API introduces three components that work together to provide a complete system for registering and interacting with abilities.

1. The first is a PHP API for registering, managing, and executing abilities.

2. The second is automatic REST API exposure, which ensures that abilities can be accessed through endpoints without extra developer effort.

3. The third is a set of new hooks that help developers integrate with the system. These components, according to the announcement, bring consistency to how abilities are described and executed, forming a base described in the announcement as a consistent way to register and execute abilities.

The Abilities API is guided by several design goals that help it function as a long-term foundation.

Discoverability
Discoverability is a central goal, allowing every ability to be listed, queried, and inspected.

Interoperability
Interoperability is also emphasized, as the uniform schema lets different parts of WordPress create workflows together.

Security
Security is a part of the new API by design with permission checks defining who and what can invoke abilities.

Part Of The AI Building Blocks Initiative

The Abilities API is not an isolated change but part of the AI Building Blocks initiative meant to prepare WordPress for AI-driven workflows. The announcement explains that this system provides the base for AI agents, automation tools, and developers to interact with WordPress in a predictable way.
Abilities are machine-readable and, according to the announcement, usable in the same manner across PHP, REST, and planned future interfaces. The Abilities API provides the metadata that AI agents and automation tools can use to understand and work with WordPress functionality.

The introduction of the Abilities API in WordPress 6.9 potentially marks a major change in how functionality is organized, described, and accessed across the platform. By creating a standardized system for defining abilities and exposing them in different contexts, WordPress positions itself at the forefront of future AI innovation. This is a consequential update that arrives in just a few short weeks.

Featured Image by Shutterstock/AntonKhrupinArt

OpenAI Releases GPT-5.1 With Improved Instruction Following via @sejournal, @MattGSouthern

OpenAI released GPT-5.1 Instant and GPT-5.1 Thinking with updates to conversational style and reasoning capabilities.

The updates begin rolling out today to paid users before expanding to free accounts.

OpenAI says this release addresses feedback from users who want AI that feels more natural to interact with, while also improving intelligence.

What’s New

GPT-5.1 Instant

GPT-5.1 Instant, ChatGPT’s most-used model, now defaults to a warmer, more conversational tone.

OpenAI reports improved instruction following, with the model more reliably answering the specific question asked rather than drifting into tangents.

GPT-5.1 Instant can use adaptive reasoning. The model decides when to think before responding to challenging questions, producing more thorough answers while maintaining speed.

GPT-5.1 Thinking

The advanced reasoning model adapts thinking time more precisely. On a representative distribution of ChatGPT tasks, GPT-5.1 Thinking runs roughly twice as fast on the fastest tasks and roughly twice as slow on the slowest tasks compared to GPT-5 Thinking.

Responses use less jargon and fewer undefined terms, which OpenAI says should make the most capable model more approachable for complex workplace tasks and for explaining technical concepts.

Customization Options

OpenAI refined personality presets to better reflect common usage patterns. Default, Friendly (formerly Listener), and Efficient (formerly Robot) remain with updates, and new options include Professional, Candid, and Quirky.

These presets apply across all models. The original Cynical (formerly Cynic) and Nerdy (formerly Nerd) options remain available under the same personalization menu.

Beyond presets, OpenAI is experimenting with controls that let you tune specific characteristics such as response conciseness, warmth, scannability, and emoji frequency from personalization settings.

Personalization changes now take effect across all chats immediately, including ongoing conversations. Previously, changes only applied to new conversations started afterward.

The updated GPT-5.1 models also adhere more closely to custom instructions, giving you more precise tone and behavior control.

Rollout Timeline

GPT-5.1 Instant and Thinking begin rolling out today starting with paid subscribers. Free and logged-out users will get access afterward.

Enterprise and Education customers get a seven-day early access toggle to GPT-5.1 (off by default). After that window, GPT-5.1 becomes the default ChatGPT model.

GPT-5 (Instant and Thinking) remains available in the legacy models dropdown for paid subscribers for three months, giving people time to compare and adapt.

Why This Matters

GPT-5.1 can change how your day-to-day workflows behave. Better instruction following means less prompt tweaking and fewer off-brief outputs.

Adaptive reasoning may make simple tasks feel faster while giving more complex work, like technical explanations or data analysis, extra time.

Looking Ahead

OpenAI frames this update as a step toward personalized AI that adapts to individual preferences and tasks.

Updated personality styles and tone options roll out today. Granular characteristic tuning will roll out later this week as an experiment to a limited number of users, with further changes based on feedback.


Featured Image: Photo Agency/Shutterstock

OpenAI’s Sam Altman Says Personalized AI Raises Privacy Concerns via @sejournal, @martinibuster

In a recent interview at Stanford University, OpenAI CEO Sam Altman predicted that AI security will become the defining problem of the next phase of AI development, saying it is one of the best fields to study right now. He also cited personalized AI as one example of a security concern he’s been thinking about lately.

What Does AI Security Mean Today?

Sam Altman said that concerns about AI safety will be reframed as AI security issues that can be solved by AI.

Interview host Dan Boneh asked:

“So what does it mean for an AI system to be secure? What does it mean for even trying to kind of make it do things it wasn’t designed to do?

How do we protect AI systems from prompt injections and other attacks like that? How do you think of AI security?

I guess the concrete question I want to ask is, among all the different things we can do with AI, this course is about learning one sliver of the field. Is this a good area? Should people go into this?”

Sam Altman encouraged today’s students to study AI security.

He answered:

“I think this is one of the best areas to go study. I think we are soon heading into a world where a lot of the AI safety problems that people have traditionally talked about are going to be recast as AI security problems in different ways.

I also think that given how capable these models are getting, if we want to be able to deploy them for wide use, the security problems are going to get really big. You mentioned many areas that I think are super important to figure out. Adversary robustness in particular seems like it’s getting quite serious.”

What Altman means is that people are starting to find ways to trick AI systems, and the problem is becoming serious enough that researchers and engineers need to focus on making AI resistant to manipulation and other kinds of attacks, such as prompt injections.

AI Personalization Becoming A Security Concern

Altman also said that something he’s been thinking a lot about lately is possible security issues with AI personalization. People appreciate personalized responses from AI, he said, but this could open the door to malicious hackers figuring out how to steal (exfiltrate) sensitive data.

He explained:

“One more that I will mention that you touched on a little bit, but just it’s been on my mind a lot recently. There are two things that people really love right now that taken together are a real security challenge.

Number one, people love how personalized these models are getting. So ChatGPT now really gets to know you. It personalizes over your conversational history, your data you’ve connected to it, whatever else.

And then number two is you can connect these models to other services. They can go off and like call things on the web and, you know, do stuff for you that’s helpful.

But what you really don’t want is someone to be able to exfiltrate data from your personal model that knows everything about you.

And humans, you can kind of trust to be reasonable at this. If you tell your spouse a bunch of secrets, you can sort of trust that they will know in what context what to tell to other people. The models don’t really do this very well yet.

And so if you’re telling like a model all about your, you know, private health care issues, and then it is off, and you have it like buying something for you, you don’t want that e-commerce site to know about all of your health issues or whatever.

But this is a very interesting security problem to solve this with like 100% robustness.”

Altman identifies personalization as both a breakthrough and a new attack surface. The same qualities that make AI more useful also make it a target, since models that learn from individual histories could be manipulated into revealing them. Convenience becomes a source of exposure, and privacy and usability become security problems.

Lastly, Altman circled back to AI as both the security problem and the solution.

He concluded:

“Yeah, by the way, it works both directions. Like you can use it to secure systems. I think it’s going to be a big deal for cyber attacks at various times.”

Takeaways

  • AI Security As The Next Phase Of AI Development
    Altman predicts that AI security will replace AI safety as the central challenge and opportunity in artificial intelligence.
  • Personalization As A New Attack Surface
    The growing trend of AI systems that learn from user data raises new security concerns, since personalization could expose opportunities for attackers to extract sensitive information.
  • Dual Role Of AI In Cybersecurity
    Altman emphasizes that AI will both pose new security threats and serve as a powerful tool to detect and prevent them.
  • Emerging Need For AI Security Expertise
    Altman’s comments suggest that there will be a rising demand for professionals who understand how to secure, test, and deploy AI responsibly.

Is AI Search SEO Leaving Bigger Opportunities Behind? via @sejournal, @martinibuster

A recent Ahrefs podcast raised two issues with optimizing for AI search that can cause organizations to underperform and miss opportunities to improve sales. The conversation highlights a gap between realistic expectations for AI-driven trends and the gains available from overlooked opportunities elsewhere.

YouTube Is The Second-Largest Search Engine

The first point raised in the podcast is that YouTube is the second-largest search engine by query volume: more people type search queries into YouTube’s search bar than into any other search engine except Google itself. So it makes sense for companies to seriously consider how a video strategy can increase traffic and brand awareness.

It should be a no-brainer for businesses to figure out YouTube, and yet many are rushing to spend time and money optimizing for answer engines like Perplexity and ChatGPT, which have a fraction of YouTube’s traffic.

Patrick Stox explained:

“YouTube is the second largest search engine. There’s a lot of focus on all these AI assistants. They’re in total driving less than 1% of your traffic. YouTube might be a lot more. I don’t know how much it’s going to drive traffic to the website, but there’s a lot of eyes on it. I know for us, like we see it in our signups, …they sign up for Ahrefs.

It’s an incredible channel that I think as people need to diversify, to kind of hedge their bets on where their traffic is coming from, this would be my first choice. Like go and do more video. There’s your action item. If you’re not doing it, go do more video right now.”

Tim Soulo, Ahrefs’ CMO, found it curious that so many people are looking three to five years ahead at opportunities that may or may not materialize on AI assistants while overlooking the real benefits available on YouTube today.

He commented:

“I feel that a lot of people get fixated on AI assistants like ChatGPT and Perplexity and optimizing for AI search because they are kind of looking three, five years ahead and they are kind of projecting that in three, five years, that might be the dominant thing, how people search.

…But again, if we focus on today, YouTube is much more popular than ChatGPT and YouTube has a lot more business potential than ChatGPT. So yeah, definitely you have to invest in AI search. You have to do the groundwork that would help you rank in Google, rank in ChatGPT and everything. …I don’t see YouTube losing its relevance five years from now. I can only see it getting bigger and bigger because the new generation of people that is growing up right now, they are very video oriented. Short form video, long form video. So yeah, definitely. If you’re putting all your eggs in the basket of ChatGPT, but not putting anything in YouTube, that’s a big mistake.”

Patrick Stox agreed with Tim, noting that short-form video is wildly popular today on Instagram and TikTok, and encouraged viewers and listeners to consider how video can fit into their marketing.

Some of the disconnect between SEO and YouTube comes from SEOs feeling that SEO is about Google, and that YouTube is therefore not their responsibility. I would counter that YouTube should be part of SEOs’ concern because people use it for reviews, how-to information, and product research, and because search volume on YouTube is second only to Google’s.

SEO/AEO/GEO Can’t Solve All AI Search Issues

The second topic they touched on was the expectation that SEO can solve all of a business’s traffic and visibility problems. Patrick Stox and Tim Soulo suggested that high rankings and a satisfactory marketing outcome begin and end with high-quality products, services, and content. Problems on the product or service side cause friction and generate negative sentiment on social media, and that isn’t something you can SEO yourself out of.

Patrick Stox explained:

“We only have a certain amount of control, though. We can go and create a bunch of pages, a bunch of content. But if you have real issues, like if everyone suddenly is like Nvidia’s graphics cards suck and they’re saying that on social media and Reddit and everything, YouTube, there’s only so much you can do to combat that.

…And there might be tens of thousands of them and there’s one of me. So what am I gonna do? I’m gonna be a drop in the bucket. It’s gonna be noise in the void. The internet is still the one controlling the narrative. So there’s only so much that SEOs are gonna be able to do in a situation like that.

…So this is going to get contentious in a lot of organizations where you’re going to have to do something that the execs are going to be yelling, can’t you just change that, make it go away?”

Tim and Patrick went on to cite their own experience with a pricing change made a few years ago that customers balked at. Ahrefs made the change believing it would make the service more affordable, but despite its best efforts to answer user questions and steer the conversation, the controversy wouldn’t go away, and the company ultimately gave users what they wanted.

The point is that positive word of mouth isn’t strictly an SEO issue, even though SEO/GEO/AEO practitioners are now expected to build positive brand associations so that brands get recommended by AI Mode, ChatGPT, and Perplexity.

Takeaways

  • Find balance between AI search and immediate business opportunities:
    Some organizations may focus too heavily on optimizing for AI assistants at the expense of video and multimodal search opportunities.
  • YouTube’s marketing power:
    YouTube is the second-largest search engine and a major opportunity for traffic and brand visibility.
  • Realistic expectations for SEO:
    SEO/GEO/AEO cannot fix problems rooted in poor products, services, or customer sentiment. Long-term visibility in AI search depends not just on optimization, but on maintaining positive brand sentiment.

Watch the video at about the 36-minute mark.

Featured Image by Shutterstock/Collagery