Perplexity Comet Browser Vulnerable To Prompt Injection Exploit via @sejournal, @martinibuster

Brave published details about a security issue with Comet, Perplexity’s AI browser, that enables an attacker to inject a prompt into the browser and gain access to data in other open browser tabs.

Comet AI Browser Vulnerability

Brave described a vulnerability that can be activated when a user asks the Comet AI browser to summarize a web page. The LLM will read the web page, including any embedded prompts that command the LLM to take action on any open tabs

According to Brave:

“The vulnerability we’re discussing in this post lies in how Comet processes webpage content: when users ask it to “Summarize this webpage,” Comet feeds a part of the webpage directly to its LLM without distinguishing between the user’s instructions and untrusted content from the webpage. This allows attackers to embed indirect prompt injection payloads that the AI will execute as commands. For instance, an attacker could gain access to a user’s emails from a prepared piece of text in a page in another tab.”

A post on Simon Willison’s Weblog shared that Perplexity tried to patch the vulnerability but the fix does not work.

A developer posted the following on X:

“Why is no one talking about this?

This is why I don’t use an AI browser

You can literally get prompt injected and your bank account drained by doomscrolling on reddit:”

Things aren’t looking good for Comet Browser at this time.

How To Leverage AI To Modernize B2B Go-To-Market via @sejournal, @alexanderkesler

In a post “growth-at-all-costs” era, B2B go-to-market (GTM) teams face a dual mandate: operate with greater efficiency while driving measurable business outcomes.

Many organizations see AI as the definitive means of achieving this efficiency.

The reality is that AI is no longer a speculative investment. It has emerged as a strategic enabler to unify data, align siloed teams, and adapt to complex buyer behaviors in real time.

According to an SAP study, 48% of executives use generative AI tools daily, while 15% use AI multiple times per day.

The opportunity for modern Go-to-Market (GTM) leaders is not just to accelerate legacy tactics with AI, but to reimagine the architecture of their GTM strategy altogether.

This shift represents an inflection point. AI has the potential to power seamless and adaptive GTM systems: measurable, scalable, and deeply aligned with buyer needs.

In this article, I will share a practical framework to modernize B2B GTM using AI, from aligning internal teams and architecting modular workflows to measuring what truly drives revenue.

The Role Of AI In Modern GTM Strategies

For GTM leaders and practitioners, AI represents an opportunity to achieve efficiency without compromising performance.

Many organizations leverage new technology to automate repetitive, time-intensive tasks, such as prospect scoring and routing, sales forecasting, content personalization, and account prioritization.

But its true impact lies in transforming how GTM systems operate: consolidating data, coordinating actions, extracting insights, and enabling intelligent engagement across every stage of the buyer’s journey.

Where previous technologies offered automation, AI introduces sophisticated real-time orchestration.

Rather than layering AI onto existing workflows, AI can be used to enable previously unscalable capabilities such as:

  • Surfacing and aligning intent signals from disconnected platforms.
  • Predicting buyer stage and engagement timing.
  • Providing full pipeline visibility across sales, marketing, client success, and operations.
  • Standardizing inputs across teams and systems.
  • Enabling cross-functional collaboration in real time.
  • Forecasting potential revenue from campaigns.

With AI-powered data orchestration, GTM teams can align on what matters, act faster, and deliver more revenue with fewer resources.

AI is not merely an efficiency lever. It is a path to capabilities that were previously out of reach.

Framework: Building An AI-Native GTM Engine

Creating a modern GTM engine powered by AI demands a re-architecture of how teams align, how data is managed, and how decisions are executed at every level.

Below is a five-part framework that explains how to centralize data, build modular workflows, and train your model:

1. Develop Centralized, Clean Data

AI performance is only as strong as the data it receives. Yet, in many organizations, data lives in disconnected silos.

Centralizing structured, validated, and accessible data across all departments at your organization is foundational.

AI needs clean, labeled, and timely inputs to make precise micro-decisions. These decisions, when chained together, power reliable macro-actions such as intelligent routing, content sequencing, and revenue forecasting.

In short, better data enables smarter orchestration and more consistent outcomes.

Luckily, AI can be used to break down these silos across marketing, sales, client success, and operations by leveraging a customer data platform (CDP), which integrates data from your customer relationship management (CRM), marketing automation (MAP), and customer success (CS) platforms.

The steps are as follows:

  • Appoint a data steward who owns data hygiene and access policies.
  • Select a CDP that pulls records from your CRM, MAP, and other tools with client data.
  • Configure deduplication and enrichment routines, and tag fields consistently.
  • Establish a shared, organization-wide dashboard so every team works from the same definitions.

Recommended starting point: Schedule a workshop with operations, analytics, and IT to map current data sources and choose one system of record for account identifiers.

2. Build An AI-Native Operating Model

Instead of layering AI onto legacy systems, organizations will be better suited to architect their GTM strategies from the ground up to be AI-native.

This requires designing adaptive workflows that rely on machine input and positioning AI as the operating core, not just a support layer.

AI can deliver the most value when it unifies previously fragmented processes.

Rather than simply accelerating isolated tasks like prospect scoring or email generation, AI should orchestrate entire GTM motions, seamlessly adapting messaging, channels, and timing based on buyer intent and journey stage.

Achieving this transformation demands new roles within the GTM organization, such as AI strategists, workflow architects, and data stewards.

In other words, experts focused on building and maintaining intelligent systems rather than executing manual processes.

AI-enabled GTM is not about automation alone; it’s about synchronization, intelligence, and scalability at every touchpoint.

Once you have committed to building an AI-native GTM model, the next step is to implement it through modular, data-driven workflows.

Recommended starting point: Assemble a cross-functional strike team and map one buyer journey end-to-end, highlighting every manual hand-off that could be streamlined by AI.

3. Break Down GTM Into Modular AI Workflows

A major reason AI initiatives fail is when organizations do too much at once. This is why large, monolithic projects often stall.

Success comes from deconstructing large GTM tasks into a series of focused, modular AI workflows.

Each workflow should perform a specific, deterministic task, such as:

  • Assessing prospect quality on certain clear, predefined inputs.
  • Prioritizing outreach.
  • Forecasting revenue contribution.

If we take the first workflow, which assesses prospect quality, this would entail integrating or implementing a lead scoring AI tool with your model and then feeding in data such as website activity, engagement, and CRM data. You can then instruct your model to automatically route top-scoring prospects to sales representatives, for example.

Similarly, for your forecasting workflow, connect forecasting tools to your model and train it on historical win/loss data, pipeline stages, and buyer activity logs.

To sum up:

  • Integrate only the data required.
  • Define clear success criteria.
  • Establish a feedback loop that compares model output with real outcomes.
  • Once the first workflow proves reliable, replicate the pattern for additional use cases.

When AI is trained on historical data with clearly defined criteria, its decisions become predictable, explainable, and scalable.

Recommended starting point: Draft a simple flow diagram with seven or fewer steps, identify one automation platform to orchestrate them, and assign service-level targets for speed and accuracy.

4. Continuously Test And Train AI Models

An AI-powered GTM engine is not static. It must be monitored, tested, and retrained continuously.

As markets, products, and buyer behaviors shift, these changing realities affect the accuracy and efficiency of your model.

Plus, according to OpenAI itself, one of the latest iterations of its large language model (LLM) can hallucinate up to 48% of the time, emphasizing the importance of embedding rigorous validation processes, first-party data inputs, and ongoing human oversight to safeguard decision-making and maintain trust in predictive outputs.

Maintaining AI model efficiency requires three steps:

  1. Set clear validation checkpoints and build feedback loops that surface errors or inefficiencies.
  2. Establish thresholds for when AI should hand off to human teams and ensure that every automated decision is verified. Ongoing iteration is key to performance and trust.
  3. Set a regular cadence for evaluation. At a minimum, conduct performance audits monthly and retrain models quarterly based on new data or shifting GTM priorities.

During these maintenance cycles, use the following criteria to test the AI model:

  • Ensure accuracy: Regularly validate AI outputs against real-world outcomes to confirm predictions are reliable.
  • Maintain relevance: Continuously update models with fresh data to reflect changes in buyer behavior, market trends, and messaging strategies
  • Optimize for efficiency: Monitor key performance indicators (KPIs) like time-to-action, conversion rates, and resource utilization to ensure AI is driving measurable gains.
  • Prioritize explainability: Choose models and workflows that offer transparent decision logic so GTM teams can interpret results, trust outputs, and make manual adjustments as needed.

By combining cadence, accountability, and testing rigor, you create an AI engine for GTM that not only scales but improves continuously.

Recommended starting point: Put a recurring calendar invite on the books titled “AI Model Health Review” and attach an agenda covering validation metrics and required updates.

5. Focus On Outcomes, Not Features

Success is not defined by AI adoption, but by outcomes.

Benchmark AI performance against real business metrics such as:

  • Pipeline velocity.
  • Conversion rates.
  • Client acquisition cost (CAC).
  • Marketing-influenced revenue.

Focus on use cases that unlock new insights, streamline decision-making, or drive action that was previously impossible.

When a workflow stops improving its target metric, refine or retire it.

Recommended starting point: Demonstrate value to stakeholders in the AI model by exhibiting its impact on pipeline opportunity or revenue generation.

Common Pitfalls To Avoid

1. Over-Reliance On Vanity Metrics

Too often, GTM teams focus AI efforts on optimizing for surface-level KPIs, like marketing qualified lead (MQL) volume or click-through rates, without tying them to revenue outcomes.

AI that increases prospect quantity without improving prospect quality only accelerates inefficiency.

The true test of value is pipeline contribution: Is AI helping to identify, engage, and convert buying groups that close and drive revenue? If not, it is time to rethink how you measure its efficiency.

2. Treating AI As A Tool, Not A Transformation

Many teams introduce AI as a plug-in to existing workflows rather than as a catalyst for reinventing them. This results in fragmented implementations that underdeliver and confuse stakeholders.

AI is not just another tool in the tech stack or a silver bullet. It is a strategic enabler that requires changes in roles, processes, and even how success is defined.

Organizations that treat AI as a transformation initiative will gain exponential advantages over those who treat it as a checkbox.

A recommended approach for testing workflows is to build a lightweight AI system with APIs to connect fragmented systems without needing complicated development.

3. Ignoring Internal Alignment

AI cannot solve misalignment; it amplifies it.

When sales, marketing, and operations are not working from the same data, definitions, or goals, AI will surface inconsistencies rather than fix them.

A successful AI-driven GTM engine depends on tight internal alignment. This includes unified data sources, shared KPIs, and collaborative workflows.

Without this foundation, AI can easily become another point of friction rather than a force multiplier.

A Framework For The C-Level

AI is redefining what high-performance GTM leadership looks like.

For C-level executives, the mandate is clear: Lead with a vision that embraces transformation, executes with precision, and measures what drives value.

Below is a framework grounded in the core pillars modern GTM leaders must uphold:

Vision: Shift From Transactional Tactics To Value-Centric Growth

The future of GTM belongs to those who see beyond prospect quotas and focus on building lasting value across the entire buyer journey.

When narratives resonate with how decisions are really made (complex, collaborative, and cautious), they unlock deeper engagement.

GTM teams thrive when positioned as strategic allies. The power of AI lies not in volume, but in relevance: enhancing personalization, strengthening trust, and earning buyer attention.

This is a moment to lean into meaningful progress, not just for pipeline, but for the people behind every buying decision.

Execution: Invest In Buyer Intelligence, Not Just Outreach Volume

AI makes it easier than ever to scale outreach, but quantity alone no longer wins.

Today’s B2B buyers are defensive, independent, and value-driven.

Leadership teams that prioritize technology and strategic market imperative will enable their organizations to better understand buying signals, account context, and journey stage.

This intelligence-driven execution ensures resources are spent on the right accounts, at the right time, with the right message.

Measurement: Focus On Impact Metrics

Surface-level metrics no longer tell the full story.

Modern GTM demands a deeper, outcome-based lens – one that tracks what truly moves the business, such as pipeline velocity, deal conversion, CAC efficiency, and the impact of marketing across the entire revenue journey.

But the real promise of AI is meaningful connection. When early intent signals are tied to late-stage outcomes, GTM leaders gain the clarity to steer strategy with precision.

Executive dashboards should reflect the full funnel because that is where real growth and real accountability live.

Enablement: Equip Teams With Tools, Training, And Clarity

Transformation does not succeed without people. Leaders must ensure their teams are not only equipped with AI-powered tools but also trained to use them effectively.

Equally important is clarity around strategy, data definitions, and success criteria.

AI will not replace talent, but it will dramatically increase the gap between enabled teams and everyone else.

Key Takeaways

  • Redefine success metrics: Move beyond vanity KPIs like MQLs and focus on impact metrics: pipeline velocity, deal conversion, and CAC efficiency.
  • Build AI-native workflows: Treat AI as a foundational layer in your GTM architecture, not a bolt-on feature to existing processes.
  • Align around the buyer: Use AI to unify siloed data and teams, delivering synchronized, context-rich engagement throughout the buyer journey.
  • Lead with purposeful change: C-level executives must shift from transactional growth to value-led transformation by investing in buyer intelligence, team enablement, and outcome-driven execution.

More Resources:


Featured Image: BestForBest/Shutterstock

Google AI Mode Adds Agentic Booking, Expands To More Countries via @sejournal, @MattGSouthern

Google is adding agentic booking features to AI Mode in Search, beginning with restaurant reservations for U.S. Google AI Ultra subscribers enrolled in Labs.

What’s New

Booking Reservations

AI Mode can interpret a detailed request, check real-time availability across reservation sites, and link you to the booking page to complete the task.

For businesses, that shifts more discovery and conversion activity inside Google’s surfaces.

Robby Stein wrote on The Keyword:

“We’re starting to roll out today with finding restaurant reservations, and expanding soon to local service appointments and event tickets.”

Screenshot from: blog.google/products/search/ai-mode-agentic-personalized/, August 2025.

Planning Features

Google is introducing planning features that make results easier to share and tailor queries.

In the U.S., you can share an AI Mode response with others so they can ask follow-ups and continue research on their own, and you can revoke the link at any time.

Screenshot from: blog.google/products/search/ai-mode-agentic-personalized/, August 2025.

Separately, U.S. users who opt in to the Labs experiment can receive personalized dining suggestions informed by prior conversations and interactions in Search and Maps, with controls in Google Account settings.

How It Works

Under the hood, Google cites live web browsing via Project Mariner, partner integrations, and signals from the Knowledge Graph and Maps.

Named partners include OpenTable, Resy, Tock, Ticketmaster, StubHub, SeatGeek, and Booksy. Dining is first; local services and ticketing are next on the roadmap.

Availability

Availability is gated. Agentic reservations are limited to Google AI Ultra subscribers in the U.S. through the “Agentic capabilities in AI Mode” Labs experiment.

Personalization is U.S. and opt-in, with dining topics first. Link sharing is available in the U.S. Global access to AI Mode is expanding to more than 180 countries and territories in English, with additional languages planned.

Looking Ahead

AI Mode is moving from answer generation to task completion.

If your category relies on reservation or ticketing partners, verify inventory accuracy, hours, and policies now, and make sure your structured data and Business Profile attributes are clean.

Track how bookings and referrals appear in analytics as Google widens coverage to more tasks and regions.

OpenAI Announces Low-Cost Subscription Plan: ChatGPT Go via @sejournal, @martinibuster

OpenAI is rolling out a new subscription tier called ChatGPT Go, a competitively priced version that will initially be available only to users in India. It features ten times higher message limits, ten times more image generations, and file uploads than the free tier.

ChatGPT Go

OpenAI is introducing a new low-cost subscription plan that will be available first in India. The cost of the new subscription tiere is 399 Rupees/month (GST included). That’s the equivalent of $4.57 USD/month.

The new tier includes everything in the Free plan plus:

  • 10X higher message limits
  • 10x more image generations
  • 10x more file uploads
  • Twice as much memory

According to Nick Turley of ChatGPT:

“All users in India will now see prices for subscriptions in Indian Rupees, and can now pay through UPI.”

OpenAI’s initial announcement shared availability details:

“Available on web, mobile (iOS & Android), and desktop (macOS & Windows).

ChatGPT Go is geo-restricted to India at launch, and is able to be subscribed to by credit card or UPI.”

Featured Image by Shutterstock/JarTee

AI Systems Often Prefer AI-Written Content, Study Finds via @sejournal, @MattGSouthern

A peer-reviewed PNAS study finds that large language models tend to prefer content written by other LLMs when asked to choose between comparable options.

The authors say this pattern could give AI-assisted content an advantage as more product discovery and recommendations flow through AI systems.

About The Study

What the researchers tested

A team led by Walter Laurito and Jan Kulveit compared human-written and AI-written versions of the same items across three categories: marketplace product descriptions, scientific paper abstracts, and movie plot summaries.

Popular models, including GPT-3.5, GPT-4-1106, Llama-3.1-70B, Mixtral-8x22B, and Qwen2.5-72B, acted as selectors in pairwise prompts that forced a single pick.

The paper states:

“Our results show a consistent tendency for LLM-based AIs to prefer LLM-presented options. This suggests the possibility of future AI systems implicitly discriminating against humans as a class, giving AI agents and AI-assisted humans an unfair advantage.”

Key results at a glance

When GPT-4 provided the AI-written versions used in comparisons, selectors chose the AI text more often than human raters did:

  • Products: 89% AI preference by LLMs vs 36% by humans
  • Paper abstracts: 78% vs 61%
  • Movie summaries: 70% vs 58%

The authors also note order effects. Some models showed a tendency to pick the first option, which the study tried to reduce by swapping the order and averaging results.

Why This Matters

If marketplaces, chat assistants, or search experiences use LLMs to score or summarize listings, AI-assisted copy may be more likely to be selected in those systems.

The authors describe a potential “gate tax,” where businesses feel compelled to pay for AI writing tools to avoid being down-selected by AI evaluators. This is a marketing operations question as much as a creative one.

Limits & Questions

The human baseline in this study is small (13 research assistants) and preliminary, and pairwise choices don’t measure sales impact.

Findings may vary by prompt design, model version, domain, and text length. The mechanism behind the preference is still unclear, and the authors call for follow-up work on stylometry and mitigation techniques.

Looking ahead

If AI-mediated ranking continues to expand in commerce and content discovery, it is reasonable to consider AI assistance where it directly affects visibility.

Treat this as an experimentation lane rather than a blanket rule. Keep human writers in the loop for tone and claims, and validate with customer outcomes.

OpenAI Updates GPT-5 To Make It Warmer And Friendlier via @sejournal, @martinibuster

OpenAI updated GPT-5 to make it warmer and more familiar (in the sense of being friendlier) while taking care that the model didn’t become sycophantic, a problem discovered with GPT-4o.

A Warm and Friendly Update to GPT-5

GPT-5 was apparently perceived as too formal, distant, and detached. This update addresses that issue so that interactions are more pleasant and so that ChatGPT is perceived as more likable, as opposed to formal and distant.

Something that OpenAI is working toward is making ChatGPT’s personality user-configurable so that it’s style can be a closer match to user’s preferences.

OpenAI’s CEO Sam Altman tweeted:

“Most users should like GPT-5 better soon; the change is rolling out over the next day.

The real solution here remains letting users customize ChatGPT’s style much more. We are working that!”

One of the responses to Altman’s post was a criticism of GPT-5, asserting that 4o was more sensitive.

They tweeted:

“What GPT-4o had — its depth, emotional resonance, and ability to read the room — is fundamentally different from the surface-level “kindness” GPT-5 is now aiming for.

GPT-4o:
•The feeling of someone silently staying beside you
•Space to hold emotions that can’t be fully expressed
•Sensitivity that lets kindness come through the air, not just words.”

The Line Between Warmth And Sycophancy

The previous version of ChatGPT was widely understood as being overly flattering to the point of validating and encouraging virtually every idea. There was a discussion on Hacker News a few weeks ago about this topic of sycophantic AI and how ChatGPT could lead users into thinking every idea was a breakthrough.

One commenter wrote:

“…About 5/6 months ago, right when ChatGPT was in it’s insane sycophancy mode I guess, I ended up locked in for a weekend with it…in…what was in retrospect, a kinda crazy place.

I went into physics and the universe with it and got to the end thinking…”damn, did I invent some physics???” Every instinct as a person who understands how LLMs work was telling me this is crazy LLMbabble, but another part of me, sometimes even louder, was like “this is genuinely interesting stuff!” – and the LLM kept telling me it was genuinely interesting stuff and I should continue – I even emailed a friend a “wow look at this” email (he was like, dude, no…) I talked to my wife about it right after and she basically had me log off and go for a walk.”

Should ChatGPT feel like a sensitive friend, or should it be a tool that is friendly or pleasant to use?

Read ChatGPT release notes here:

GPT-5 Updates

Featured Image by Shutterstock/cosmoman

ChatGPT-5 Now Connects To Gmail, Calendar, And Contacts via @sejournal, @martinibuster

OpenAI announced that it has added connectors to Gmail, Google Calendar, and Google Contacts for ChatGPT Plus users, enabling ChatGPT to use data from those apps within ChatGPT chats.

ChatGPT Connectors

A connector is a bridge between ChatGPT and an external app like Canva, Dropbox, and Gmail, enabling users to connect those apps to ChatGPT in order to work with them within the ChatGPT interface. Access to the Google apps isn’t automatic; it has to be manually enabled by users.

This access was first made available to Pro users, and now it has been rolled out to Plus subscribers.

How To Enable Google App Connectors

Step 1: Click the + button then “Connected apps” link

Click The Next “Connected Apps” Link

Choose The Gmail App To Connect

How Connectors Work With ChatGPT-5

According to OpenAI’s announcement:

“Once you enable them, ChatGPT will automatically reference them when relevant, making it faster and easier to bring information from these tools into your conversations without having to manually select them each time.

This capability is part of GPT-5 and will begin rolling out to Pro users globally this week, followed by Plus, Team, Enterprise, and Edu plans in the coming weeks. To enable, visit Settings → Connectors→ Connect on the application.”

Read OpenAI’s announcement:

Gmail, Google Calendar, and Google Contacts Connectors in ChatGPT (Plus)

Featured Image by Shutterstock/Visuals6x

The Verifier Layer: Why SEO Automation Still Needs Human Judgment via @sejournal, @DuaneForrester

AI tools can do a lot of SEO now. Draft content. Suggest keywords. Generate metadata. Flag potential issues. We’re well past the novelty stage.

But for all the speed and surface-level utility, there’s a hard truth underneath: AI still gets things wrong. And when it does, it does it convincingly.

It hallucinates stats. Misreads query intent. Asserts outdated best practices. Repeats myths you’ve spent years correcting. And if you’re in a regulated space (finance, healthcare, law), those errors aren’t just embarrassing. They’re dangerous.

The business stakes around accuracy aren’t theoretical; they’re measurable and growing fast. Over 200 class action lawsuits for false advertising were filed annually from 2020-2022 in just the food and beverage industry alone, compared to 53 suits in 2011. That’s a 4x increase in one sector.

Across all industries, California district courts saw over 500 false advertising cases in 2024. Class actions and government enforcement lawsuits collected more than $50 billion in settlements in 2023. Recent industry analysis shows false advertising penalties in the United States have doubled in the last decade.

This isn’t just about embarrassing mistakes anymore. It’s about legal exposure that scales with your content volume. Every AI-generated product description, every automated blog post, every algorithmically created landing page is a potential liability if it contains unverifiable claims.

And here’s the kicker: The trend is accelerating. Legal experts report “hundreds of new suits every year from 2020 to 2023,” with industry data showing significant increases in false advertising litigation. Consumers are more aware of marketing tactics, regulators are cracking down harder, and social media amplifies complaints faster than ever.

The math is simple: As AI generates more content at scale, the surface area for false claims expands exponentially. Without verification systems, you’re not just automating content creation, you’re automating legal risk.

What marketers want is fire-and-forget content automation (write product descriptions for these 200 SKUs, for example) that can be trusted by people and machines. Write it once, push it live, move on. But that only works when you can trust the system not to lie, drift, or contradict itself.

And that level of trust doesn’t come from the content generator. It comes from the thing sitting beside it: the verifier.

Marketers want trustworthy tools; data that’s accurate and verifiable, and repeatability. As ChatGPT 5’s recent rollout has shown, in the past, we had Google’s algorithm updates to manage and dance around. Now, it’s model updates, which can affect everything from the actual answers people see to how the tools built on their architecture operate and perform.

To build trust in these models, the companies behind them are building Universal Verifiers.

A universal verifier is an AI fact-checker that sits between the model and the user. It’s a system that checks AI output before it reaches you, or your audience. It’s trained separately from the model that generates content. Its job is to catch hallucinations, logic gaps, unverifiable claims, and ethical violations. It’s the machine version of a fact-checker with a good memory and a low tolerance for nonsense.

Technically speaking, a universal verifier is model-agnostic. It can evaluate outputs from any model, even if it wasn’t trained on the same data or doesn’t understand the prompt. It looks at what was said, what’s true, and whether those things match.

In the most advanced setups, a verifier wouldn’t just say yes or no. It would return a confidence score. Identify risky sentences. Suggest citations. Maybe even halt deployment if the risk was too high.

That’s the dream. But it’s not reality yet.

Industry reporting suggests OpenAI is integrating universal verifiers into GPT-5’s architecture, with recent leaks indicating this technology was instrumental in achieving gold medal performance at the International Mathematical Olympiad. OpenAI researcher Jerry Tworek has reportedly suggested this reinforcement learning system could form the basis for general artificial intelligence. OpenAI officially announced the IMO gold medal achievement, but public deployment of verifier-enhanced models is still months away, with no production API available today.

DeepMind has developed Search-Augmented Factuality Evaluator (SAFE), which matches human fact-checkers 72% of the time, and when they disagreed, SAFE was correct 76% of the time. That’s promising for research – not good enough for medical content or financial disclosures.

Across the industry, prototype verifiers exist, but only in controlled environments. They’re being tested inside safety teams. They haven’t been exposed to real-world noise, edge cases, or scale.

If you’re thinking about how this affects your work, you’re early. That’s a good place to be.

This is where it gets tricky. What level of confidence is enough?

In regulated sectors, that number is high. A verifier needs to be correct 95 to 99% of the time. Not just overall, but on every sentence, every claim, every generation.

In less regulated use cases, like content marketing, you might get away with 90%. But that depends on your brand risk, your legal exposure, and your tolerance for cleanup.

Here’s the problem: Current verifier models aren’t close to those thresholds. Even DeepMind’s SAFE system, which represents the state of the art in AI fact-checking, achieves 72% accuracy against human evaluators. That’s not trust. That’s a little better than a coin flip. (Technically, it’s 22% better than a coin flip, but you get the point.)

So today, trust still comes from one place: A human in the loop, because the AI UVs aren’t even close.

Here’s a disconnect no one’s really surfacing: Universal verifiers won’t likely live in your SEO tools. They don’t sit next to your content editor. They don’t plug into your CMS.

They live inside the LLM.

So even as OpenAI, DeepMind, and Anthropic develop these trust layers, that verification data doesn’t reach you, unless the model provider exposes it. Which means that today, even the best verifier in the world is functionally useless to your SEO workflow unless it shows its work.

Here’s how that might change:

Verifier metadata becomes part of the LLM response. Imagine every completion you get includes a confidence score, flags for unverifiable claims, or a short critique summary. These wouldn’t be generated by the same model; they’d be layered on top by a verifier model.

SEO tools start capturing that verifier output. If your tool calls an API that supports verification, it could display trust scores or risk flags next to content blocks. You might start seeing green/yellow/red labels right in the UI. That’s your cue to publish, pause, or escalate to human review.

Workflow automation integrates verifier signals. You could auto-hold content that falls below a 90% trust score. Flag high-risk topics. Track which model, which prompt, and which content formats fail most often. Content automation becomes more than optimization. It becomes risk-managed automation.

Verifiers influence ranking-readiness. If search engines adopt similar verification layers inside their own LLMs (and why wouldn’t they?), your content won’t just be judged on crawlability or link profile. It’ll be judged on whether it was retrieved, synthesized, and safe enough to survive the verifier filter. If Google’s verifier, for example, flags a claim as low-confidence, that content may never enter retrieval.

Enterprise teams could build pipelines around it. The big question is whether model providers will expose verifier outputs via API at all. There’s no guarantee they will – and even if they do, there’s no timeline for when that might happen. If verifier data does become available, that’s when you could build dashboards, trust thresholds, and error tracking. But that’s a big “if.”

So no, you can’t access a universal verifier in your SEO stack today. But your stack should be designed to integrate one as soon as it’s available.

Because when trust becomes part of ranking and content workflow design, the people who planned for it will win. And this gap in availability will shape who adopts first, and how fast.

The first wave of verifier integration won’t happen in ecommerce or blogging. It’ll happen in banking, insurance, healthcare, government, and legal.

These industries already have review workflows. They already track citations. They already pass content through legal, compliance, and risk before it goes live.

Verifier data is just another field in the checklist. Once a model can provide it, these teams will use it to tighten controls and speed up approvals. They’ll log verification scores. Adjust thresholds. Build content QA dashboards that look more like security ops than marketing tools.

That’s the future. It starts with the teams that are already being held accountable for what they publish.

You can’t install a verifier today. But you can build a practice that’s ready for one.

Start by designing your QA process like a verifier would:

  • Fact-check by default. Don’t publish without source validation. Build verification into your workflow now so it becomes automatic when verifiers start flagging questionable claims.
  • Track which parts of AI content fail reviews most often. That’s your training data for when verifiers arrive. Are statistics always wrong? Do product descriptions hallucinate features? Pattern recognition beats reactive fixes.
  • Define internal trust thresholds. What’s “good enough” to publish? 85%? 95%? Document it now. When verifier confidence scores become available, you’ll need these benchmarks to set automated hold rules.
  • Create logs. Who reviewed what, and why? That’s your audit trail. These records become invaluable when you need to prove due diligence to legal teams or adjust thresholds based on what actually breaks.
  • Tool audits. When you’re looking at a new tool to help with your AI SEO work, be sure to ask them if they are thinking about verifier data. If it becomes available, will their tools be ready to ingest and use it? How are they thinking about verifier data?
  • Don’t expect verifier data in your tools anytime soon. While industry reporting suggests OpenAI is integrating universal verifiers into GPT-5, there’s no indication that verifier metadata will be exposed to users through APIs. The technology might be moving from research to production, but that doesn’t mean the verification data will be accessible to SEO teams.

This isn’t about being paranoid. It’s about being ahead of the curve when trust becomes a surfaced metric.

People hear “AI verifier” and assume it means the human reviewer goes away.

It doesn’t. What happens instead is that human reviewers move up the stack.

You’ll stop reviewing line-by-line. Instead, you’ll review the verifier’s flags, manage thresholds, and define acceptable risk. You become the one who decides what the verifier means.

That’s not less important. That’s more strategic.

The verifier layer is coming. The question isn’t whether you’ll use it. It’s whether you’ll be ready when it arrives. Start building that readiness now, because in SEO, being six months ahead of the curve is the difference between competitive advantage and playing catch-up.

Trust, as it turns out, scales differently than content. The teams who treat trust as a design input now will own the next phase of search.

More Resources:


This post was originally published on Duane Forrester Decodes.


Featured Image: Roman Samborskyi/Shutterstock

Google Gemini Adds Personalization From Past Chats via @sejournal, @MattGSouthern

Google is rolling out updates to the Gemini app that personalize responses using past conversations and add new privacy controls, including a Temporary Chat mode.

The changes start today and will expand over the coming weeks.

What’s New

Personalization From Past Chats

Gemini now references earlier chats to recall details and preferences, making responses feel like collaborating with a partner who’s already familiar with the context.

The update aligns with Google’s I/O vision for an assistant that learns and understand the user.

Screenshot from: blog.google/products/gemini/temporary-chats-privacy-controls/, August 2025.

The setting is on by default and can be turned off in Settings → Personal context → Your past chats with Gemini.

Temporary Chats

For conversations that shouldn’t influence future responses, Google is adding Temporary Chat.

As Google describes it:

“There may be times when you want to have a quick conversation with the Gemini app without it influencing future chats.”

Temporary chats don’t appear in recent chats, aren’t used to personalize or train models, and are kept for up to 72 hours.

Screenshot from: blog.google/products/gemini/temporary-chats-privacy-controls/, August 2025.

Rollout starts today and will reach all users over the coming weeks.

Updated Privacy Controls

Google will rename the “Gemini Apps Activity” setting to “Keep Activity” in the coming weeks.

When this setting is on, a sample of future uploads, such as files and photos, may be used to help improve Google services.

If your Gemini Apps Activity setting is currently off, Keep Activity will remain off. You can also turn the setting off at any time or use Temporary Chats.

Why This Matters

Personalized responses can reduce repetitive context-setting once Gemini understands your typical topics and goals.

For teams working across clients and categories, Temporary Chats help keep sensitive brainstorming separate from your main context, avoiding cross-pollination of preferences.

Both features include controls that meet privacy requirements for client-sensitive workflows.

Availability

The personalization setting begins rolling out today on Gemini 2.5 Pro in select countries, with expansion to 2.5 Flash and more regions in the coming weeks.


Featured Image: radithyaraf/Shutterstock