Google Quietly Raised Ad Prices, Court Orders More Transparency via @sejournal, @MattGSouthern

Google raised ad prices incrementally through internal “pricing knobs” that advertisers couldn’t detect, according to federal court documents.

  • Google raised ad prices 5-15% at a time using “pricing knobs” that made increases look like normal auction fluctuations.
  • Google’s surveys showed advertisers noticed higher costs but didn’t realize Google was causing the increases.
  • A federal judge now requires Google to publicly disclose auction changes that could raise advertiser costs.
What To Expect At NESS 2025: Surviving The AI-First Era via @sejournal, @NewsSEO_

This post was sponsored by NESS. The opinions expressed in this article are the sponsor’s own.

For anyone who isn’t paying attention to news SEO because they feel it isn’t relevant to their niche – think again.

The foundations of SEO are underpinned by publishing content. Therefore, news SEO is relevant to all SEO. We are all publishers online.

John Shehata and Barry Adams are the experts within this vertical and, between them, have experience working with most of the top news publications worldwide.

Together, they founded the News and Editorial SEO Summit (NESS) in 2021, and in the last four years, the SEO industry has seen the most significant and rapid changes since it began 30 years ago.

I spoke to both John and Barry to get their insights into some of the current issues SEOs face, how SEO can survive this AI-first era, and to get a preview of the topics to be discussed at their upcoming fifth NESS event to be held on October 21-22, 2025.

You can watch the full interview at the end of this article.

SEO Repackaged For The AI Era

I started by noting that, at the recent Google Search Central Live in Thailand, Gary Illyes said there is no difference between GEO, AEO, and SEO. I asked Barry what he thought about this, and whether the introduction of AI Mode would continue taking traffic away from publishers.

Surprisingly, Barry agreed with Google, saying, “It’s SEO. It’s just SEO. I fully agree with what the Googlers are saying on this front, and it’s not often that I fully agree with Googlers.”

He went on to say, “I have yet to find any LLM optimization strategy that is not also an SEO strategy. It’s just SEO repackaged for the AI era so that agencies can charge more money without actually creating any more added value.”

AI Mode Is A Threat To Publisher Traffic

While AI Overviews have drawn significant attention, Barry identifies AI Mode as a more serious threat to publisher traffic.

Unlike AI Overviews, which still display traditional search results alongside AI-generated summaries, AI Mode creates an immersive conversational experience that encourages users to continue their search journey within Google’s ecosystem.

Barry warns that if AI Mode becomes the default search experience, it could be “insanely damaging for the web because it’s just going to make a lot of traffic evaporate without any chance of recovery.”

He added, “If you can maintain your traffic from search at the moment, you’re already doing better than most.”

Moving Up The Value Chain

At NESS, John will be speaking about how to survive this AI-first era, and I asked him for a preview of how SEOs can survive what is happening right now.

John highlighted a major issue: “Number one, I think SEOs need to move up the value chain. And I have been saying this for a long time, SEOs cannot be only about keywords and rankings. It has to be much bigger than that.”

He then went on to talk about three key areas as solutions: building topical authority, traffic diversification, and direct audience relationships.

“They [news publishers] need to think about revenue diversification as well as going back to some traditional revenue streams, such as events or syndication. They also need to build their own direct relationships with users, either through apps or newsletters. And newsletters never got the attention they deserve in any of the different brands I’m familiar with, but now it’s gaining more traction. It’s extremely important.”

Quality Journalism Is Crucial For Publishers

Despite the AI disruption, both John and Barry stress that technical SEO fundamentals remain important, but only up to a point.

“You have to make sure the foundations are in place,” Barry notes, but he believes the technical can only take you so far. After that, investment in content is critical.

“When those foundations are at the level where there’s not much value in getting further optimization, then the publisher has to do the hard work of producing the content that builds the brand. The foundation can only get you so far. But if you don’t have the foundation, you are building a house on quicksand and you’re not going to be able to get much traction anyway.”

John also noted that “it’s important to double down on technical elements of the site.” He went on to say, “While I think you need to look at your schema, your speed, all of the elements, the plumbing, just to make sure that whatever channel you work with has good access and good understanding of your data.”

Barry concluded by reaffirming the importance of content quality. “The content is really what needs to shine. And if you don’t have that in place, if you don’t have that unique brand voice, that quality journalism, then why are you in business in the first place?”

The AI Agents Question

James Carson and Marie Haynes are both speaking about AI agents at NESS 2025, and when I asked Barry and John about the introduction of AI agents into newsrooms, the conversation was both optimistic and cautious.

John sees significant potential for AI to handle research tasks, document summarization, and basic content creation for standardized reporting like market updates or sports scores.

“A lot of SEO teams are using AI to recommend Google Discover headlines that intrigue curiosity, checking certain SEO elements on the site and so on. So I think more and more we have seen AI integrated not to write the content itself, but to guide the content and optimize the efficiency of the whole process,” John commented.

However, Barry remains skeptical about current AI agent reliability for enterprise environments.

“You cannot give an AI agent your credit card details to start shopping on your behalf, and then it just starts making things up and ends up spending thousands of your dollars on the wrong things … The AI agents are nowhere near that maturity level yet and I’m not entirely sure they will ever be at that maturity level because I do think the current large language model technology has fundamental limitations.”

John countered that “AI agents can save us hundreds of hours, hundreds.” He went on to say, “These three elements together, automation, AI agents, and human supervision together can be a really powerful combination, but not AI agent completely solo. And I agree with Barry, it can lead to disastrous consequences.”

Looking Forward

The AI-first era demands honest acknowledgment of changed realities. Easy search traffic growth is over, but opportunities exist for publishers willing to adapt strategically.

Success requires focusing on unique value propositions, building direct audience relationships, and maintaining technical excellence while accepting that traditional growth metrics may no longer apply.

The future belongs to publishers who understand that survival means focusing on their audience and building authentic connections with people who value their specific perspective and expertise.

Watch the full interview below.


If you’re a news publisher, or an SEO, you cannot afford to miss the fifth NESS on October 21-22, 2025.

SEJ readers have a special 20% discount on tickets. Just use the code “SEJ2025” at the checkout here.

Headline speakers include Marie Haynes, Mike King, Lily Ray, Kevin Indig, and of course John Shehata and Barry Adams.

Over two days, 20 speakers will represent top news publishers, including Carly Steven (Daily Mail), Maddie Shepherd (CBS), Christine Liang (The New York Times), and Jessie Willms (The Guardian), among others.

Check out the full schedule here.


Featured Image: Shelley Walsh/Search Engine Journal/ NESS

The CMO & SEO: Staying Ahead Of The Multi-AI Search Platform Shift (Part 1)

Some of the critical questions that are top of mind for both SEOs and CMOs as we head into a multi-search world are: Where is search going to develop? Is ChatGPT a threat or an opportunity? Is optimizing for large language models (LLMs) the same as optimizing for search engines?

In this two-part interview series, I try to answer these questions to provide some clear direction and focus to help navigate considerable change.

What you will learn:

  • Ecosystem Evolution: While it is still a Google-first world, learn where native AI search platforms are growing and what this means.
  • Opportunity vs. Threat: Why AI platforms create unprecedented brand visibility opportunities while demanding new return on investment (ROI) thinking.
  • LLM Optimization Strategy: Why SEO has become more vital than ever, regardless of the AI and Search platform, and where specific nuances to optimize for lie.
  • CMO Priorities: Why authority and trust signals matter more than ever in AI-driven search.
  • Organizational Alignment: Why CMOs need to integrate marketing, PR, and technical teams for cohesive AI-first search strategies.

Where Do You Think The Current Search Ecosystem Might Develop In The Next 6 Months?

To answer the first question, I think we are witnessing something really fascinating right now. The search landscape is undergoing a fundamental transformation that will accelerate significantly over the next six months.

While Google still dominates with about 90% market share, AI-powered search platforms are experiencing explosive growth that is impossible to ignore.

Let me put this in perspective. ChatGPT is showing 21% month-over-month growth and is on track to hit 700 million weekly active users.

Claude and Perplexity are posting similar numbers at 21% and 19% growth, respectively. But here is what has caught my attention: Grok has seen over 1,000% month-over-month growth (source: BrightEdge Generative Parser and DataCube analysis, July 2025).

Sure, it is starting from a tiny base, but that trajectory makes it the dark horse to watch. Meanwhile, DeepSeek continues its gradual decline following its January surge, which highlights the volatility in this emerging market. I will share more on that later.

In A Google-First World, User Behavior Is Also Evolving On Multiple AI Platforms

What is particularly interesting is how user behavior is evolving. People are not just switching from Google to AI search — they are starting to mix and match platforms based on their specific needs. I am seeing users turn to:

  • ChatGPT for deep research.
  • Perplexity for quick facts.
  • Claude when they need reliable information.
  • Google when they want comprehensive breadth.
Image from BrightEdge, August 2025

The CMO AI And SEO Mindset Shift

From a marketing perspective, this creates a massive change in thinking. SEO is not just about Google anymore – though that is still where most of the focus needs to be.

Marketers will need to consider optimizing for multiple AI engines, each with its own distinct data ingestion pipelines. For ChatGPT and Claude, you need clear, structured, cited content that AI models can safely reuse. For Perplexity, timeliness, credibility, and brevity matter more than traditional keyword density.

It is no longer about optimizing just for clicks; it is about optimizing for influence and citations and making sure you appear in the proper context at the right moment within all these distinct types of AI experiences.

The Search Bot To AI User Agent Revolution

ChatGPT and its ChatGPT-User agent are leading the charge.

In July, BrightEdge’s analysis revealed that real-time page requests from the ChatGPT-User agent nearly doubled. In other words, reliance on real-time web searches to answer questions almost doubled within a single month.

For example, suppose you are looking to compare “Apple Watch vs. Fitbit” from current reviews. In that case, the ChatGPT user agent is acting as your browsing assistant and operating on your behalf, which is fundamentally different from traditional search engines and crawlers.

Image from BrightEdge, August 2025

In summary, I believe the next six months will establish what I term a “multi-AI search world.” Users will become increasingly comfortable switching between platforms fluidly based on what they need in that moment. The opportunity here is massive for early adopters who figure out cross-platform optimization.

Is The Rise Of AI Platforms Like ChatGPT An Opportunity Or A Threat That CMOs Need To Be Aware Of?

It is all opportunity.

Each AI platform is carving out its own distinct identity. Google is doubling down on AI Overviews and AI Mode. ChatGPT is making this fascinating transition from conversational Q&A into full web search integration.

Perplexity is cementing itself as the premier “answer engine” with its citation-first, mobile-focused approach, and they are planning deeper integrations with news providers and real-time data.

Claude is expanding beyond conversation into contextual search with superior fact-checking capabilities, while Microsoft’s Bing Copilot is positioning itself as this search-plus-productivity hybrid that seamlessly blends document generation with web search.

The rise of AI platforms represents both a transformative opportunity and a strategic challenge that CMOs must navigate with sophistication and strategic foresight.

Learn More: How Enterprise Search And AI Intelligence Reveal Market Pulse

CMOs And The Shift From Ranking To Referencing And Citations

And that brings me to a huge mindset shift: We are moving from “ranking” to “referencing.” AI summaries do not just display the top 10 links; they reference and attribute sites within the answer itself.

Being cited within an AI summary can be more impactful than just ranking high in traditional blue links. So, CMOs need to start tracking not just where they rank, but where and how their content gets referenced and cited by AI everywhere.

Technical Infrastructure Requirements And CMOs Leaning Into SEO Teams

On the technical side, structured data and clear information architecture are no longer nice-to-haves – they are foundational. AI relies on this structure to surface accurate information, so schema.org markup, clean technical SEO, and machine-readable content formats are essential.

Image from BrightEdge, August 2025

Brands, The CMO, And The Authority And Trust Premium

Here is something that is becoming critical: Authority and brand trust matter more than ever. AI tends to pull from sites it considers authoritative, trustworthy, and frequently cited. This puts a premium on long-term brand-building, thought leadership, and reputation management across all digital channels.

You need to focus on those E-E-A-T signals (Experience, Expertise, Authoritativeness, Trustworthiness) for both humans and AI algorithms.

The CMOs’ SEO And AI Competitive Advantage

The CMOs who are proactively adapting to these shifts – rethinking measurement, technical SEO, brand trust, and cross-team integration – are the ones positioning their enterprises for continued visibility and influence.

The move to AI-driven search is rapid, but savvy enterprise marketers are seeing this as an opportunity to deepen brand engagement and become a trusted source for both human users and AI engines.

It is challenging, but the potential upside for brands that get this right is enormous.

It is a whole new way of thinking about ROI.

Learn More: How AI Search Should Be Shaping Your CEO’s & CMO’s Strategy

Do You Think Optimizing For LLMs Is The Same As Search Engines, As Google Suggests?

Following Google Search Central Live in Thailand, and Gary’s advice that SEOs don’t need to optimize separately for GEO, I think Gary is absolutely right. Putting acronym debates behind us, foundational SEO remains the same, particularly for Google Search.

SEO has never been more vital, and AI is accelerating the need for specialists in this area. Your website still needs to be fast, mobile-friendly, and technically sound. Search engines and AI systems alike need to crawl and index your content efficiently. Technical optimizations like proper URL structures, XML sitemaps, clean code, and fast loading times are still paying dividends.

The CMO, SEO, And LLM Optimization Fundamentals

Now, when we talk about optimizing for LLMs, the reality is similar: success still lies in core SEO – primarily technical SEO – and content fundamentals.

Strong internal linking helps AI crawlers understand how your pages connect. Make sure all pages are easily crawlable. Answer related questions throughout your content using clear headings, schema markup, and FAQ sections, and figure out what people are trying to accomplish so you can give them the answer and become the cited source in AI results.
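
For the schema markup piece, here is a minimal sketch of how FAQ content could be expressed as schema.org FAQPage JSON-LD. The questions, answers, and the TypeScript wrapper are illustrative placeholders rather than a recommendation from the article; validate the generated markup (for example, with Google's Rich Results Test) before deploying it.

```typescript
// Minimal sketch: generate schema.org FAQPage JSON-LD for a page's FAQ section.
// The questions and answers below are placeholders, not real site content.
interface Faq {
  question: string;
  answer: string;
}

const faqs: Faq[] = [
  { question: "How often is the data updated?", answer: "The report is refreshed monthly." },
  { question: "Can I export the results?", answer: "Yes, results can be exported as CSV." },
];

const faqPageSchema = {
  "@context": "https://schema.org",
  "@type": "FAQPage",
  mainEntity: faqs.map((faq) => ({
    "@type": "Question",
    name: faq.question,
    acceptedAnswer: { "@type": "Answer", text: faq.answer },
  })),
};

// Embed the JSON-LD in the page so crawlers and AI systems can parse it.
const scriptTag = `<script type="application/ld+json">${JSON.stringify(faqPageSchema, null, 2)}</script>`;
console.log(scriptTag);
```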

LLM Platform-Specific Differentiation

However, as more brands are being discovered and interpreted across multiple AI platforms, it is also vital to understand that each has its own interface, logic, and way of shaping brand perceptions.

Each platform has developed distinct strengths: ChatGPT Search provides a comprehensive narrative context. Perplexity shines with visual integration and related content. Google AI Overview excels at structured, hierarchical information.

Here is a nuanced example. When users ask comparison questions like “what’s the best?,” ChatGPT and Google’s approaches are similar. But when users ask action-oriented questions like “how do I?,” they part ways dramatically. ChatGPT acts like a trusted coach for decision-making, while Google AI remains the research assistant.

Image from BrightEdge, August 2025

Trust Signal Variations

Different platforms also show distinct trust signal patterns. Google’s AI Overviews tend to cite review sites and community sources like Reddit, effectively asking, “What does the community think?”

ChatGPT appears to favor retail sources more frequently, effectively asking, “Where can you buy it?” This suggests these platforms are developing different approaches to trust and authority validation.

Three-Phase AI Optimization Framework For The CMO And Marketing Teams

Here is a framework for organizations to follow; a minimal sketch of the tracking steps appears after the list.

  • Start by tracking your AI and brand presence across multiple AI engines. Monitor how your visibility evolves over time through citations and mentions across AI Overviews, ChatGPT, and beyond.
  • Next, focus on understanding variations in brand mentions across key prompts. Quickly identify which prompts from ChatGPT, AI Overviews, and other AI search engines generate brand mentions so you can optimize your content efficiently.
  • Finally, dive deeper into specific prompts to understand why AI systems recommend brands. Utilizing sentiment analysis provides precise insights into which brand attributes each AI engine favors.
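
As a rough illustration of the first two steps, the sketch below counts brand mentions in AI responses you have already collected for a set of prompts. The engine names, prompts, responses, and brand are placeholders; in practice, this data would come from your own monitoring workflow or a platform such as BrightEdge.

```typescript
// Minimal sketch: count brand mentions per AI engine and per prompt
// from responses collected elsewhere (all values here are placeholders).
interface EngineResponse {
  engine: string;   // e.g., "ChatGPT", "AI Overviews", "Perplexity"
  prompt: string;   // the question that was asked
  response: string; // the answer text returned by that engine
}

function countMentions(responses: EngineResponse[], brand: string) {
  const byEngine = new Map<string, number>();
  const mentioningPrompts = new Set<string>();
  // Assumes the brand name contains no regex special characters.
  const pattern = new RegExp(`\\b${brand}\\b`, "i");

  for (const r of responses) {
    if (pattern.test(r.response)) {
      byEngine.set(r.engine, (byEngine.get(r.engine) ?? 0) + 1);
      mentioningPrompts.add(`${r.engine}: ${r.prompt}`);
    }
  }
  return { byEngine, mentioningPrompts };
}

// Example usage with placeholder data.
const sample: EngineResponse[] = [
  { engine: "ChatGPT", prompt: "best running shoes", response: "Popular picks include Acme Run ..." },
  { engine: "AI Overviews", prompt: "best running shoes", response: "Reviewers often mention ..." },
];
console.log(countMentions(sample, "Acme Run"));
```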

Learn More: The Triple-P Framework: AI & Search Brand Presence, Perception & Performance

The CMO: AI, Search, And Cross-Team Integration Thinking

One thing I am seeing work well is tighter integration across marketing and communications teams. Paid and organic strategies must align more than ever because ads and organic AI overviews often get presented together – your messaging, branding, and targeted intent need to be entirely consistent.

Plus, your PR and content teams need better coordination because off-site mentions in media, reviews, and authoritative sites directly influence who gets cited in AI summaries.

Conclusion: Embracing The Multi-AI Search Transformation

The CMOs who are proactively adapting to the shifts are positioning their organizations for sustained competitive advantage in this evolving landscape.

Big picture, to put this all in perspective:

The 3 Big Questions From CMOs On AI And Search

  1. Will AI kill Google? No, it has turbocharged it.
  2. Is SEO dead? No, it’s actually more important than ever. AI is reshaping search, which means we need to understand what this transformation entails. Generative Engine Optimization (GEO) builds upon core SEO foundations and requires more integrated, higher-quality technical approaches.
  3. Does everything change? The more things change, the more they stay the same.

In Part 2 of this series, topics covered will include the future of traditional SERP search and how agentic SEO might change the search funnel. Learn how these changes impact the role of SEO and all teams that fall under the CMO remit.


Featured Image: jd8/Shutterstock

The Behavioral Data You Need To Improve Your Users’ Search Journey via @sejournal, @SequinsNsearch

We’re more than halfway through 2025, and SEO has already changed names many times to take into account the new mission of optimizing for the rise of large language models (LLMs): We’ve seen GEO (Generative Engine Optimization) floating around, AEO (Answer Engine Optimization), and even LEO (LLM Engine Optimization) has made an appearance in industry conversations and job titles.

However, while we are all busy finding new nomenclatures to factor in the machine part of the discovery journey, there is someone else in the equation that we risk forgetting about: the end beneficiary of our efforts, the user.

Why Do You Need Behavioral Data In Search?

Behavioral data is vital to understand what leads a user to a search journey, where they carry it out, and what potential points of friction might be blocking a conversion action, so that we can better cater to their needs.

And if we learned anything from the documents leaked during the Google trial, it is that user signals might actually be one of the many factors that influence rankings. This has never been fully confirmed by the company’s spokespeople, but it has also been uncovered by Mark Williams-Cook in his analysis of Google exploits and patents.

With search becoming more and more personalized, and data about users becoming less transparent now that simple search queries are expanding into full funnel conversations on LLMs, it’s important to remember that – while individual needs and experiences might be harder to isolate and cater for – general patterns of behavior tend to stick across the same population, and we can use some rules of thumb to get the basics right.

Humans often operate on a few basic principles aimed at preserving energy and resources, even in search:

  • Minimizing effort: following the path of least resistance.
  • Minimizing harm: avoiding threats.
  • Maximizing gain: seeking opportunities that present the highest benefit or rewards.

So while Google and other search channels might change the way we think about our daily job, the secret weapon we can use to future-proof our brands’ organic presence is to isolate some data about behavior, as it is, generally, much more predictable than algorithm changes.

What Behavioral Data Do You Need To Improve Search Journeys?

I would narrow it down to data that cover three main areas: discovery channel indicators, built-in mental shortcuts, and underlying users’ needs.

1. Discovery Channel Indicators

The days of starting a search on Google are long gone.

According to Google’s Messy Middle research, the exponential increase in available information and channels has driven a shift from linear search behaviors to a loop of exploration and evaluation that guides our purchase decisions.

And since users now have an overwhelming number of channels they can consult to research a product or a brand, it’s also harder to cut through the noise. By knowing more about these channels, we can make sure our strategy is laser-focused across content and format alike.

Discovery channel indicators give us information about:

  • How users are finding us beyond traditional search channels.
  • The demographic that we reach on some particular channels.
  • What drives their search, and what they are mostly engaging with.
  • The content and format that are best suited to capture and retain their attention in each one.

For example, we know that TikTok tends to be consulted for inspiration and to validate experiences through user-generated content (UGC), and that Gen Z and Millennials on social apps are increasingly skeptical of traditional ads (with skipping rates of 99%, according to a report by Bulbshare). What they favor instead is authentic voices, so they will seek out first-hand experiences on online communities like Reddit.

Knowing the different channels that users reach us through can inform organic and paid search strategy, while also giving us some data on audience demographics, helping us capture users that would otherwise be elusive.

So, make sure your channel data is mapped to reflect these new discovery channels, especially if you are relying on custom analytics. Not only will this ensure that organic gets the credit it is owed, but it will also reveal untapped potential you can lean into as searches become less and less trackable.

This data should be easily available to you via the referral and source fields in your analytics platform of choice, and you can also integrate a “How did you hear about us” survey for users who complete a transaction.
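
If you rely on custom analytics, one lightweight way to reflect these newer discovery channels is to bucket referrer hostnames before reporting. The sketch below is a minimal illustration; the hostname-to-channel map is an example, not an exhaustive list, and the exact referrer values you see will depend on your setup.

```typescript
// Minimal sketch: bucket referrer hostnames into discovery channels
// for custom analytics. The hostname list is illustrative, not exhaustive.
const channelMap: Record<string, string> = {
  "chat.openai.com": "LLM / ChatGPT",
  "chatgpt.com": "LLM / ChatGPT",
  "perplexity.ai": "LLM / Perplexity",
  "www.tiktok.com": "Social / TikTok",
  "www.reddit.com": "Community / Reddit",
  "www.google.com": "Organic search / Google",
};

function classifyReferrer(referrerUrl: string): string {
  if (!referrerUrl) return "Direct / untracked";
  try {
    const host = new URL(referrerUrl).hostname;
    return channelMap[host] ?? `Other referral (${host})`;
  } catch {
    return "Direct / untracked";
  }
}

console.log(classifyReferrer("https://chatgpt.com/"));          // "LLM / ChatGPT"
console.log(classifyReferrer("https://www.reddit.com/r/SEO/")); // "Community / Reddit"
```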

And don’t forget about language models: With the recent rise in queries that start a search and complete an action directly on LLMs, it’s even harder to track all search journeys. This shifts our mission from being relevant for one specific query at a time to being visible for every intent we can cover.

This is even more important when we realize that everything contributes to the transactional power of a query, irrespective of how the search intent is traditionally labelled, since someone might decide to evaluate our offers and then drop out due to the lack of sufficient information about the brand.

2. Built-In Mental Shortcuts

The human brain is an incredible organ that allows us to perform several tasks efficiently every day, but its cognitive resources are not infinite.

This means that when we are carrying out a search, probably one of many of the day, while we are also engaged in other tasks, we can’t allocate all of our energy into finding the most perfect result among the infinite possibilities available. That’s why our attentional and decisional processes are often modulated by built-in mental shortcuts like cognitive biases and heuristics.

These terms are sometimes used interchangeably to refer to imperfect, yet efficient decisions, but there is a difference between the two.

Cognitive Biases

Cognitive biases are systematic, mostly unconscious errors in thinking that affect the way we perceive the world around us and form judgments. They can distort the objective reality of an experience, and the way we are persuaded into an action.

One common example of this is the serial position effect, which is made up of two biases: When we see an array of items in a list, we tend to remember best the ones we see first (primacy bias) and last (recency bias). And since cognitive load is a real threat to attention, especially now that we live in the age of 24/7 stimuli, primacy and recency biases are the reason why it’s recommended to lead with the core message, product, or item if there are a lot of options or content on the page.

Primacy and recency not only affect recall in a list, but also determine the elements that we use as a reference to compare all of the alternative options against. This is another effect called anchoring bias, and it is leveraged in UX design to assign a baseline value to the first item we see, so that anything we compare against it can either be perceived as a better or worse deal, depending on the goal of the merchant.

Among many others, some of the most common biases are:

  • Distance and size effects: As numbers increase in magnitude, it becomes harder for humans to make accurate judgments, which is why some tactics recommend using bigger digits for savings rather than fractions of the same value.
  • Negativity bias: We tend to remember and assign more emotional value to negative experiences rather than positive ones, which is why removing friction at any stage is so important to prevent abandonment.
  • Confirmation bias: We tend to seek out and prefer information that confirms our existing beliefs, and this is not only how LLMs operate to provide answers to a query, but it can be a window into the information gaps we might need to cover.

Heuristics

Heuristics, on the other hand, are rules of thumb that we employ as shortcuts at any stage of decision-making, and help us reach a good outcome without going through the hassle of analyzing every potential ramification of a choice.

A known heuristic is the familiarity heuristic, which is when we choose a brand or a product that we already know, because it cuts down on every other intermediate evaluation we would otherwise have to make with an unknown alternative.

Loss aversion is another common heuristic, showing that on average we are more likely to choose the least risky option among two with similar returns, even if this means we might miss out on a discount or a short-term benefit. An example of loss aversion is when we choose to protect our travels for an added fee, or prefer products that we can return.

There are more than 150 biases and heuristics, so this is not an exhaustive list – but in general, getting familiar with which ones are most common among our users helps us smooth out the journey for them.

Isolating Biases And Heuristics In Search

Below, you can see how some queries can already reveal subtle biases that might be driving the search task.

Confirmation bias:
  • Is [brand/product] the best for this [use case]?
  • Is this [brand/product/service] better than [alternative brand/product/service]?
  • Why is [this service] more efficient than [alternative service]?

Familiarity heuristic:
  • Is [brand] based in [country]?
  • [Brand]’s HQs
  • Where do I find [product] in [country]?

Loss aversion:
  • Is [brand] legit?
  • [brand] returns
  • Free [service]

Social proof:
  • Most popular [product/brand]
  • Best [product/brand]

You can use Regex to isolate some of these patterns and modifiers directly in Google Search Console, or you can explore other query tools like AlsoAsked.

If you’re working with large datasets, I recommend using a custom LLM or creating your own model for classifications and clustering based on these rules, so it becomes easier to spot a trend in the queries and figure out priorities.
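
As a starting point before reaching for an LLM, a simple rule-based pass over your Search Console query export can already tag queries with the bias or heuristic they hint at. The sketch below is a minimal example; the regex patterns are illustrative, derived from the sample queries above, and equivalent patterns can also be adapted for Google Search Console’s custom regex filter.

```typescript
// Minimal sketch: tag search queries with the bias/heuristic they hint at.
// Patterns are illustrative examples based on the sample queries above.
const biasPatterns: Record<string, RegExp> = {
  confirmationBias: /\b(best for|better than|why is .* (better|more efficient))\b/i,
  familiarityHeuristic: /\b(based in|headquarters|hq|where (do i|can i) (find|buy))\b/i,
  lossAversion: /\b(legit|scam|refund|returns?|free|cancel)\b/i,
  socialProof: /\b(most popular|best|top rated|reviews?)\b/i,
};

function classifyQuery(query: string): string[] {
  return Object.entries(biasPatterns)
    .filter(([, pattern]) => pattern.test(query))
    .map(([bias]) => bias);
}

// Example usage with illustrative queries.
console.log(classifyQuery("is acme legit"));          // ["lossAversion"]
console.log(classifyQuery("most popular crm tools")); // ["socialProof"]
```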

These observations will also give you a window into the next big area.

3. Underlying Users’ Needs

While biases and heuristics can manifest a temporary need in a specific task, one of the most beneficial aspects that behavioral data can give us is the need that drives the starting query and guides all of the subsequent actions.

Underlying needs don’t only become apparent from clusters of queries, but from the channels used in the discovery and evaluation loop, too.

For example, if we see high prominence of loss aversion based on our queries, paired with low conversion rates and high traffic on UGC videos for our product or brand, we can infer that:

  • Users need reassurance on their investment.
  • There is not enough information to cover this need on our website alone.

Trust is a big decision-mover, and one of the most underrated needs that brands often fail to fulfill as they take their legitimacy for granted.

However, sometimes we need to take a step back and put ourselves in the users’ shoes in order to see everything with fresh eyes from their perspective.

By mapping biases and heuristics to specific users’ needs, we can plan for cross-functional initiatives that span beyond pure SEO and are beneficial for the entire journey from search to conversion and retention.

How Do You Obtain Behavioral Data For Actionable Insights?

In SEO, we are used to dealing with a lot of quantitative data to figure out what’s happening on our channel. However, there is much more we can uncover via qualitative measures that can help us identify the reason something might be happening.

Quantitative data is anything that can be expressed in numbers: This can be time on page, sessions, abandonment rate, average order value, and so on.

Tools that can help us extract quantitative behavioral data are:

  • Google Search Console & Google Merchant Center: Great for high-level data like click-through rates (CTRs), which can flag mismatches between the user intent and the page or campaign served, as well as cannibalization instances and incorrect or missing localization.
  • Google Analytics, or any custom analytics platform your brand relies on: These give us information on engagement metrics, and can pinpoint issues in the natural flow of the journey, as well as points of abandonment. My suggestion is to set up custom events tailored to your specific goals, in addition to the default engagement metrics, like sign-up form clicks or add to cart (see the sketch after this list).
  • Heatmaps and eye-tracking data: Both of these can give us valuable insights into visual hierarchy and attention patterns on the website. Heatmapping tools like Microsoft Clarity can show us clicks, mouse scrolls, and position data, uncovering not only areas that might not be getting enough attention, but also elements that don’t actually work. Eye-tracking data (fixation duration and count, saccades, and scan-paths) integrate that information by showing what elements are capturing visual attention, as well as which ones are often not being seen at all.
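
To make the custom-events suggestion above concrete, here is a minimal sketch using GA4’s gtag.js. It assumes the standard gtag snippet is already installed on the page; the selectors, parameter values, and product details are placeholders, and the event names follow GA4’s recommended events.

```typescript
// Minimal sketch: send custom engagement events to GA4 via gtag.js.
// Assumes the standard gtag.js snippet is already installed on the page.
declare function gtag(command: "event", eventName: string, params?: Record<string, unknown>): void;

// Track sign-up form submissions (GA4 recommended event: "sign_up").
document.querySelector("#signup-form")?.addEventListener("submit", () => {
  gtag("event", "sign_up", { method: "newsletter_form" });
});

// Track add-to-cart clicks (GA4 recommended event: "add_to_cart").
document.querySelector(".add-to-cart")?.addEventListener("click", () => {
  gtag("event", "add_to_cart", {
    currency: "USD",
    value: 39.99, // placeholder value
    items: [{ item_id: "SKU_123", item_name: "Example product" }],
  });
});
```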

Qualitative data, on the other hand, cannot be expressed in numbers as it usually relies on observations. Examples include interviews, heuristic assessments, and live session recordings. This type of research is generally more open to interpretation than its quantitative counterpart, but it’s vital to make sure we have the full picture of the user journey.

Qualitative data for search can be extracted from:

  • Surveys and CX logs: These can uncover common frustrations and points of friction for returning users and customers, which can guide better messaging and new page opportunities.
  • Scrapes of Reddit, Trustpilot, and online communities conversations: These give us a similar output as surveys, but expand the analysis of blockers to conversion to users that we haven’t acquired yet.
  • Live user testing: The least scalable but sometimes the most rewarding option, as it cuts down on the inference required to interpret quantitative data, especially when methods are combined (for example, live sessions can be paired with eye-tracking and narrated by the user at a later stage via Retrospective Think-Aloud, or RTA).

Behavioral Data In The AI Era

In the past year, our industry has been really good at two things: sensationalizing AI as the enemy that will replace us, and highlighting its big failures on the other end. And while it’s undeniable that there are still massive limitations, having access to AI presents unprecedented benefits as well:

  • We can use AI to easily tie up big behavioral datasets and uncover actionables that make the difference.
  • Even when we don’t have much data, we can generate a synthetic dataset based on a sample of our own data or a public one, to spot existing patterns and promptly respond to users’ needs.
  • We can generate predictions that can be used proactively for new initiatives to keep us ahead of the curve.

How Do You Leverage Behavioral Data To Improve Search Journeys?

Start by creating a series of dynamic dashboards with the measures you can obtain for each one of the three areas we talked about (discovery channel indicators, built-in mental shortcuts, and underlying users’ needs). These will allow you to promptly spot behavioral trends and collect actions that can make the journey smoother for the user at every step, since search now spans beyond the clicks on site.

Once you get new insights for each area, prioritize your actions based on expected business impact and effort to implement.

And bear in mind that behavioral insights are often transferable to more than one section of the website or the business, which can maximize returns across several channels.

Lastly, set up regular conversations with your product and UX teams. Even if your job title keeps you in search, business success is often channel-agnostic. This means that we shouldn’t only treat the symptom (e.g., low traffic to a page), but curate the entire journey, and that’s why we don’t want to work in silos on our little search island.

Your users will thank you. The algorithm will likely follow.


Featured Image: Roman Samborskyi/Shutterstock

Interaction To Next Paint: 9 Content Management Systems Ranked via @sejournal, @martinibuster

Interaction to Next Paint (INP) is a meaningful Core Web Vitals metric because it represents how quickly a web page responds to user input. It is so important that the HTTPArchive has a comparison of INP across content management systems. The following are the top content management systems ranked by Interaction to Next Paint.

What Is Interaction To Next Paint (INP)?

INP measures how responsive a web page is to user interactions during a visit. Specifically, it measures interaction latency, which is the time between when a user clicks, taps, or presses a key and when the page visually responds.

This is a more accurate measurement of responsiveness than the older metric it replaced, First Input Delay (FID), which only captured the first interaction. INP is more comprehensive because it evaluates all clicks, taps, and key presses on a page and then reports a representative value based on the longest meaningful latency.

The INP score is representative of the page’s responsive performance. For that reason, extreme outliers are filtered out of the calculation so that the score reflects typical worst-case responsiveness.

Web pages with poor INP scores create a frustrating user experience that increases the risk of page abandonment. Fast responsiveness enables a smoother experience that supports higher engagement and conversions.

INP Scores Have Three Ratings:

  • Good: Below or at 200 milliseconds
  • Needs Improvement: Above 200 milliseconds and below or at 500 milliseconds
  • Poor: Above 500 milliseconds
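
To see where your own pages fall against these thresholds, you can measure INP in the field with Google’s open-source web-vitals JavaScript library. The sketch below is minimal; the “/analytics” endpoint is a placeholder for wherever you collect field data, and you could equally log values to the console.

```typescript
// Minimal sketch: measure INP for real visits with the web-vitals library
// and rate each value against the thresholds listed above.
import { onINP } from "web-vitals";

function rateINP(value: number): "good" | "needs improvement" | "poor" {
  if (value <= 200) return "good";
  if (value <= 500) return "needs improvement";
  return "poor";
}

onINP((metric) => {
  // metric.value is the interaction latency in milliseconds.
  const payload = JSON.stringify({
    name: metric.name,
    value: metric.value,
    rating: rateINP(metric.value),
  });
  // "/analytics" is a placeholder endpoint; swap in your own collector.
  navigator.sendBeacon("/analytics", payload);
});
```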

Content Management System INP Champions

The latest Interaction to Next Paint (INP) data shows that all major content management systems improved from June to July, but only incrementally.

Joomla posted the largest gain with a 1.12% increase in sites achieving a good score. WordPress followed with a 0.88% increase in the number of sites posting a good score, while Wix and Drupal improved by 0.70% and 0.64%.

Duda and Squarespace also improved, though by smaller margins of 0.46% and 0.22%. Even small percentage changes can reflect real improvements in how users experience responsiveness on these platforms, so it’s encouraging that every publishing platform in this comparison is improving.

CMS INP Ranking By Monthly Improvement

  1. Joomla: +1.12%
  2. WordPress: +0.88%
  3. Wix: +0.70%
  4. Drupal: +0.64%
  5. Duda: +0.46%
  6. Squarespace: +0.22%

Which CMS Has The Best INP Scores?

Month-to-month improvement shows who is doing better, but that’s not the same as which CMS is doing the best. The July INP results show a different ranking order of content management systems when viewed by overall INP scores.

Squarespace leads with 96.07% of sites achieving a good INP score, followed by Duda at 93.81%. This is a big difference from the Core Web Vitals rankings, where Duda is consistently ranked number one. When it comes to arguably the most important Core Web Vital metric, Squarespace takes the lead as the number one ranked CMS for Interaction to Next Paint.

Wix and WordPress are ranked in the middle with 87.52% and 86.77% of sites showing a good INP score, while Drupal, with a score of 86.14%, is ranked in fifth place, just a fraction behind WordPress.

Ranking in sixth place in this comparison is Joomla, trailing the other five with a score of 84.47%. That score is not so bad considering that it’s only two to three percentage points behind Wix and WordPress.

CMS INP Rankings for July 2025

  1. Squarespace – 96.07%
  2. Duda: 93.81%
  3. Wix: 87.52%
  4. WordPress: 86.77%
  5. Drupal: 86.14%
  6. Joomla: 84.47%

These rankings show that even platforms that lag in INP performance, like Joomla, are still improving, and Joomla could eventually best the other platforms if it keeps up its rate of improvement.

In contrast, Squarespace, which already performs well, posted the smallest gain. This indicates that performance improvement is uneven, with systems advancing at different speeds. Nevertheless, the latest Interaction to Next Paint (INP) data shows that all six content management systems in this comparison improved from June to July. That upward performance trend is a positive sign for publishers.

What About Shopify’s INP Performance?

Shopify has strong Core Web Vitals performance, but how well does it compare to these six content management systems? This might seem like an unfair comparison because shopping platforms require features, images, and videos that can slow a page down. But Duda, Squarespace, and Wix offer ecommerce solutions, so it’s actually a fair and reasonable comparison.

We see that the rankings change when Shopify is added to the INP comparison:

Shopify Versus Everyone

  1. Squarespace: 96.07%
  2. Duda: 93.81%
  3. Shopify: 89.58%
  4. Wix: 87.52%
  5. WordPress: 86.77%
  6. Drupal: 86.14%
  7. Joomla: 84.47%

Shopify is ranked number three. Now look at what happens when we compare the three shopping platforms against each other:

Top Ranked Shopping Platforms By INP

  1. BigCommerce: 95.29%
  2. Shopify: 89.58%
  3. WooCommerce: 87.99%

BigCommerce is the number-one-ranked shopping platform for the important INP metric among the three in this comparison.

Lastly, we compare the INP performance scores for all the platforms together, leading to a surprising comparison.

CMS And Shopping Platforms Comparison

  1. Squarespace: 96.07%
  2. BigCommerce: 95.29%
  3. Duda: 93.81%
  4. Shopify: 89.58%
  5. WooCommerce: 87.99%
  6. Wix: 87.52%
  7. WordPress: 86.77%
  8. Drupal: 86.14%
  9. Joomla: 84.47%

All three ecommerce platforms feature in the top five rankings of content management systems, which is remarkable given the resource-intensive demands of ecommerce websites. WooCommerce, a WordPress-based shopping platform, ranks in position five, but it’s so close to Wix that the two are virtually tied.

Takeaways

INP measures the responsiveness of a web page, making it a meaningful indicator of user experience. The latest data shows that while every CMS is improving, Squarespace, BigCommerce, and Duda outperform all other content platforms in this comparison by meaningful margins.

All of the platforms in this comparison show high percentages of good INP scores. The number-four-ranked Shopify is only 6.49 percentage points behind the top-ranked Squarespace, and 84.47% of the sites published with the bottom-ranked Joomla show a good INP score. These results show that all platforms are delivering a quality experience for users.

View the results here (must be logged into a Google account to view).

Featured Image by Shutterstock/Roman Samborskyi

Make AI Writing Work for Your Content & SERP Visibility Strategy [Webinar] via @sejournal, @hethr_campbell

Are your AI writing tools helping or hurting your SEO performance?

Join Nadege Chaffaut and Crystie Bowe from Conductor on September 17, 2025, for a practical webinar on creating AI-informed content that ranks and builds trust.

You’ll Learn How To:

  • Engineer prompts that produce high-quality content
  • Keep your SEO visibility and credibility intact at scale
  • Build authorship and expertise into AI content workflows

Why You Can’t Miss This Session

AI can be a competitive advantage when used the right way. This webinar will give you the frameworks and tactics to scale content that actually performs.

Register Now

Sign up to get actionable strategies for AI content. Can’t make it live? Register anyway, and we’ll send you the full recording.

Therapists are secretly using ChatGPT. Clients are triggered.

Declan would never have found out his therapist was using ChatGPT had it not been for a technical mishap. The connection was patchy during one of their online sessions, so Declan suggested they turn off their video feeds. Instead, his therapist began inadvertently sharing his screen.

“Suddenly, I was watching him use ChatGPT,” says Declan, 31, who lives in Los Angeles. “He was taking what I was saying and putting it into ChatGPT, and then summarizing or cherry-picking answers.”

Declan was so shocked he didn’t say anything, and for the rest of the session he was privy to a real-time stream of ChatGPT analysis rippling across his therapist’s screen. The session became even more surreal when Declan began echoing ChatGPT in his own responses, preempting his therapist. 

“I became the best patient ever,” he says, “because ChatGPT would be like, ‘Well, do you consider that your way of thinking might be a little too black and white?’ And I would be like, ‘Huh, you know, I think my way of thinking might be too black and white,’ and [my therapist would] be like, ‘Exactly.’ I’m sure it was his dream session.”

Among the questions racing through Declan’s mind was, “Is this legal?” When Declan raised the incident with his therapist at the next session—“It was super awkward, like a weird breakup”—the therapist cried. He explained he had felt they’d hit a wall and had begun looking for answers elsewhere. “I was still charged for that session,” Declan says, laughing.

The large language model (LLM) boom of the past few years has had unexpected ramifications for the field of psychotherapy, mostly due to the growing number of people substituting the likes of ChatGPT for human therapists. But less discussed is how some therapists themselves are integrating AI into their practice. As in many other professions, generative AI promises tantalizing efficiency savings, but its adoption risks compromising sensitive patient data and undermining a relationship in which trust is paramount.

Suspicious sentiments

Declan is not alone, as I can attest from personal experience. When I received a recent email from my therapist that seemed longer and more polished than usual, I initially felt heartened. It seemed to convey a kind, validating message, and its length made me feel that she’d taken the time to reflect on all of the points in my (rather sensitive) email.

On closer inspection, though, her email seemed a little strange. It was in a new font, and the text displayed several AI “tells,” including liberal use of the Americanized em dash (we’re both from the UK), the signature impersonal style, and the habit of addressing each point made in the original email line by line.

My positive feelings quickly drained away, to be replaced by disappointment and mistrust, once I realized ChatGPT likely had a hand in drafting the message—which my therapist confirmed when I asked her.

Despite her assurance that she simply dictates longer emails using AI, I still felt uncertainty over the extent to which she, as opposed to the bot, was responsible for the sentiments expressed. I also couldn’t entirely shake the suspicion that she might have pasted my highly personal email wholesale into ChatGPT.

When I took to the internet to see whether others had had similar experiences, I found plenty of examples of people receiving what they suspected were AI-generated communiqués from their therapists. Many, including Declan, had taken to Reddit to solicit emotional support and advice.

So had Hope, 25, who lives on the east coast of the US, and had direct-messaged her therapist about the death of her dog. She soon received a message back. It would have been consoling and thoughtful—expressing how hard it must be “not having him by your side right now”—were it not for the reference to the AI prompt accidentally preserved at the top: “Here’s a more human, heartfelt version with a gentle, conversational tone.”

Hope says she felt “honestly really surprised and confused.” “It was just a very strange feeling,” she says. “Then I started to feel kind of betrayed. … It definitely affected my trust in her.” This was especially problematic, she adds, because “part of why I was seeing her was for my trust issues.”

Hope had believed her therapist to be competent and empathetic, and therefore “never would have suspected her to feel the need to use AI.” Her therapist was apologetic when confronted, and she explained that because she’d never had a pet herself, she’d turned to AI for help expressing the appropriate sentiment. 

A disclosure dilemma 

Betrayal or not, there may be some merit to the argument that AI could help therapists better communicate with their clients. A 2025 study published in PLOS Mental Health asked therapists to use ChatGPT to respond to vignettes describing problems of the kind patients might raise in therapy. Not only was a panel of 830 participants unable to distinguish between the human and AI responses, but AI responses were rated as conforming better to therapeutic best practice. 

However, when participants suspected responses to have been written by ChatGPT, they ranked them lower. (Responses written by ChatGPT but misattributed to therapists received the highest ratings overall.) 

Similarly, Cornell University researchers found in a 2023 study that AI-generated messages can increase feelings of closeness and cooperation between interlocutors, but only if the recipient remains oblivious to the role of AI. The mere suspicion of its use was found to rapidly sour goodwill.

“People value authenticity, particularly in psychotherapy,” says Adrian Aguilera, a clinical psychologist and professor at the University of California, Berkeley. “I think [using AI] can feel like, ‘You’re not taking my relationship seriously.’ Do I ChatGPT a response to my wife or my kids? That wouldn’t feel genuine.”

In 2023, in the early days of generative AI, the online therapy service Koko conducted a clandestine experiment on its users, mixing in responses generated by GPT-3 with ones drafted by humans. They discovered that users tended to rate the AI-generated responses more positively. The revelation that users had unwittingly been experimented on, however, sparked outrage.

The online therapy provider BetterHelp has also been subject to claims that its therapists have used AI to draft responses. In a Medium post, photographer Brendan Keen said his BetterHelp therapist admitted to using AI in their replies, leading to “an acute sense of betrayal” and persistent worry, despite reassurances, that his data privacy had been breached. He ended the relationship thereafter. 

A BetterHelp spokesperson told us the company “prohibits therapists from disclosing any member’s personal or health information to third-party artificial intelligence, or using AI to craft messages to members to the extent it might directly or indirectly have the potential to identify someone.”

All these examples relate to undisclosed AI usage. Aguilera believes time-strapped therapists can make use of LLMs, but transparency is essential. “We have to be up-front and tell people, ‘Hey, I’m going to use this tool for X, Y, and Z’ and provide a rationale,” he says. People then receive AI-generated messages with that prior context, rather than assuming their therapist is “trying to be sneaky.”

Psychologists are often working at the limits of their capacity, and levels of burnout in the profession are high, according to 2023 research conducted by the American Psychological Association. That context makes the appeal of AI-powered tools obvious. 

But lack of disclosure risks permanently damaging trust. Hope decided to continue seeing her therapist, though she stopped working with her a little later for reasons she says were unrelated. “But I always thought about the AI Incident whenever I saw her,” she says.

Risking patient privacy

Beyond the transparency issue, many therapists are leery of using LLMs in the first place, says Margaret Morris, a clinical psychologist and affiliate faculty member at the University of Washington.

“I think these tools might be really valuable for learning,” she says, noting that therapists should continue developing their expertise over the course of their career. “But I think we have to be super careful about patient data.” Morris calls Declan’s experience “alarming.” 

Therapists need to be aware that general-purpose AI chatbots like ChatGPT are not approved by the US Food and Drug Administration and are not HIPAA compliant, says Pardis Emami-Naeini, assistant professor of computer science at Duke University, who has researched the privacy and security implications of LLMs in a health context. (HIPAA is a set of US federal regulations that protect people’s sensitive health information.)

“This creates significant risks for patient privacy if any information about the patient is disclosed or can be inferred by the AI,” she says.

In a recent paper, Emami-Naeini found that many users wrongly believe ChatGPT is HIPAA compliant, creating an unwarranted sense of trust in the tool. “I expect some therapists may share this misconception,” she says.

As a relatively open person, Declan says, he wasn’t completely distraught to learn how his therapist was using ChatGPT. “Personally, I am not thinking, ‘Oh, my God, I have deep, dark secrets,’” he said. But it did still feel violating: “I can imagine that if I was suicidal, or on drugs, or cheating on my girlfriend … I wouldn’t want that to be put into ChatGPT.”

When using AI to help with email, “it’s not as simple as removing obvious identifiers such as names and addresses,” says Emami-Naeini. “Sensitive information can often be inferred from seemingly nonsensitive details.”

She adds, “Identifying and rephrasing all potential sensitive data requires time and expertise, which may conflict with the intended convenience of using AI tools. In all cases, therapists should disclose their use of AI to patients and seek consent.” 

A growing number of companies, including Heidi Health, Upheal, Lyssn, and Blueprint, are marketing specialized tools to therapists, such as AI-assisted note-taking, training, and transcription services. These companies say they are HIPAA compliant and store data securely using encryption and pseudonymization where necessary. But many therapists are still wary of the privacy implications—particularly of services that necessitate the recording of entire sessions.

“Even if privacy protections are improved, there is always some risk of information leakage or secondary uses of data,” says Emami-Naeini.

A 2020 hack on a Finnish mental health company, which resulted in tens of thousands of clients’ treatment records being accessed, serves as a warning. People on the list were blackmailed, and subsequently the entire trove was publicly released, revealing extremely sensitive details such as people’s experiences of child abuse and addiction problems.

What therapists stand to lose

In addition to violation of data privacy, other risks are involved when psychotherapists consult LLMs on behalf of a client. Studies have found that although some specialized therapy bots can rival human-delivered interventions, advice from the likes of ChatGPT can cause more harm than good.

A recent Stanford University study, for example, found that chatbots can fuel delusions and psychopathy by blindly validating a user rather than challenging them, as well as suffer from biases and engage in sycophancy. The same flaws could make it risky for therapists to consult chatbots on behalf of their clients. They could, for example, baselessly validate a therapist’s hunch, or lead them down the wrong path.

Aguilera says he has played around with tools like ChatGPT while teaching mental health trainees, such as by entering hypothetical symptoms and asking the AI chatbot to make a diagnosis. The tool will produce lots of possible conditions, but it’s rather thin in its analysis, he says. The American Counseling Association recommends that AI not be used for mental health diagnosis at present.

A study published in 2024 of an earlier version of ChatGPT similarly found it was too vague and general to be truly useful in diagnosis or devising treatment plans, and it was heavily biased toward suggesting people seek cognitive behavioral therapy as opposed to other types of therapy that might be more suitable.

Daniel Kimmel, a psychiatrist and neuroscientist at Columbia University, conducted experiments with ChatGPT where he posed as a client having relationship troubles. He says he found the chatbot was a decent mimic when it came to “stock-in-trade” therapeutic responses, like normalizing and validating, asking for additional information, or highlighting certain cognitive or emotional associations.

However, “it didn’t do a lot of digging,” he says. It didn’t attempt “to link seemingly or superficially unrelated things together into something cohesive … to come up with a story, an idea, a theory.”

“I would be skeptical about using it to do the thinking for you,” he says. Thinking, he says, should be the job of therapists.

Therapists could save time using AI-powered tech, but this benefit should be weighed against the needs of patients, says Morris: “Maybe you’re saving yourself a couple of minutes. But what are you giving away?”

Can an AI doppelgänger help me do my job?

Everywhere I look, I see AI clones. On X and LinkedIn, “thought leaders” and influencers offer their followers a chance to ask questions of their digital replicas. OnlyFans creators are having AI models of themselves chat, for a price, with followers. “Virtual human” salespeople in China are reportedly outselling real humans. 

Digital clones—AI models that replicate a specific person—package together a few technologies that have been around for a while now: hyperrealistic video models to match your appearance, lifelike voices based on just a couple of minutes of speech recordings, and conversational chatbots increasingly capable of holding our attention. But they’re also offering something the ChatGPTs of the world cannot: an AI that’s not smart in the general sense, but that ‘thinks’ like you do. 

Who are they for? Delphi, a startup that recently raised $16 million from funders including Anthropic and actor/director Olivia Wilde’s venture capital firm, Proximity Ventures, helps famous people create replicas that can speak with their fans in both chat and voice calls. It feels like MasterClass—the platform for instructional seminars led by celebrities—vaulted into the AI age. On its website, Delphi writes that modern leaders “possess potentially life-altering knowledge and wisdom, but their time is limited and access is constrained.”

It has a library of official clones created by famous figures that you can speak with. Arnold Schwarzenegger, for example, told me, “I’m here to cut the crap and help you get stronger and happier,” before informing me cheerily that I’ve now been signed up to receive the Arnold’s Pump Club newsletter. Even if his or other celebrities’ clones fall short of Delphi’s lofty vision of spreading “personalized wisdom at scale,” they at least seem to serve as a funnel to find fans, build mailing lists, or sell supplements.

But what about for the rest of us? Could well-crafted clones serve as our stand-ins? I certainly feel stretched thin at work sometimes, wishing I could be in two places at once, and I bet you do too. I could see a replica popping into a virtual meeting with a PR representative, not to trick them into thinking it’s the real me, but simply to take a brief call on my behalf. Afterward, I could review a recording or summary of the call to see how it went.

To find out, I tried making a clone. Tavus, a Y Combinator alum that raised $18 million last year, will build a video avatar of you (plans start at $59 per month) that can be coached to reflect your personality and can join video calls. These clones have the “emotional intelligence of humans, with the reach of machines,” according to the company. “Reporter’s assistant” does not appear on the company’s site as an example use case, but it does mention therapists, physician assistants, and other roles that could benefit from an AI clone.

For Tavus’s onboarding process, I turned on my camera, read through a script to help it learn my voice (which also acted as a waiver, with me agreeing to lend my likeness to Tavus), and recorded one minute of me just sitting in silence. Within a few hours, my avatar was ready. Upon meeting this digital me, I found it looked and spoke like I do (though I hated its teeth). But faking my appearance was the easy part. Could it learn enough about me and what topics I cover to serve as a stand-in with minimal risk of embarrassing me?

Via a helpful chatbot interface, Tavus walked me through how to craft my clone’s personality, asking what I wanted the replica to do. It then helped me formulate instructions that became its operating manual. I uploaded three dozen of my stories that it could use as a reference for what I cover. It might have benefited from having more of my content—interviews, reporting notes, and the like—but I would never share that data, for a host of reasons, not least that the other people who appear in it have not consented to their sides of our conversations being used to train an AI replica.

So in the realm of AI—where models learn from entire libraries of data—I didn’t give my clone all that much to learn from, but I was still hopeful it had enough to be useful. 

Alas, conversationally it was a wild card. It acted overly excited about story pitches I would never pursue. It repeated itself, and it kept saying it was checking my schedule to set up a meeting with the real me, which it could not do as I never gave it access to my calendar. It spoke in loops, with no way for the person on the other end to wrap up the conversation. 

These are common early quirks, Tavus’s cofounder Quinn Favret told me. The clones typically rely on Meta’s Llama model, which “often aims to be more helpful than it truly is,” Favret says, and developers building on top of Tavus’s platform are often the ones who set instructions for how the clones finish conversations or access calendars.

For my purposes, it was a bust. To be useful to me, my AI clone would need to show at least some basic instincts for understanding what I cover, and at the very least not creep out whoever’s on the other side of the conversation. My clone fell short.

Such a clone could be helpful in other jobs, though. If you’re an influencer looking for ways to engage with more fans, or a salesperson for whom work is a numbers game, a clone could give you a leg up, and it might just work. You run the risk that your replica could go off the rails or embarrass the real you, but the tradeoffs might be reasonable. 

Favret told me some of Tavus’s bigger customers are companies using clones for health-care intake and job interviews. Replicas are also being used in corporate role-play, for practicing sales pitches or having HR-related conversations with employees, for example.

But companies building clones are promising that they will be much more than cold-callers or telemarketing machines. Delphi says its clones will offer “meaningful, personal interactions at infinite scale,” and Tavus says its replicas have “a face, a brain, and memories” that enable “meaningful face-to-face conversations.” Favret also told me a growing number of Tavus’s customers are building clones for mentorship and even decision-making, like AI loan officers who use clones to qualify and filter applicants.

Which is sort of the crux of it. Teaching an AI clone discernment, critical thinking, and taste—never mind the quirks of a specific person—is still the stuff of science fiction. That’s all fine when the person chatting with a clone is in on the bit (most of us know that Schwarzenegger’s replica, for example, will not coach me to be a better athlete).

But as companies polish clones with “human” features and exaggerate their capabilities, I worry that people chasing efficiency will start using their replicas at best for roles that are cringeworthy, and at worst for making decisions they should never be entrusted with. In the end, these models are designed for scale, not fidelity. They can flatter us, amplify us, even sell for us—but they can’t quite become us.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

What health care providers actually want from AI

In a market flooded with AI promises, health care decision-makers are no longer dazzled by flashy demos or abstract potential. Today, they want pragmatic and pressure-tested products. They want solutions that work for their clinicians, staff, patients, and their bottom line.

To gain traction in 2025 and beyond, AI developers must deliver the real-world solutions health care providers are looking for right now.

Solutions that fix real problems

Hospitals and health systems are looking at AI-enabled solutions that target their most urgent pain points: staffing shortages, clinician burnout, rising costs, and patient bottlenecks. These operational realities keep leadership up at night, and AI solutions must directly address them.

For instance, hospitals and health systems are eager for AI tools that can reduce documentation burden for physicians and nurses. Natural language processing (NLP) solutions that auto-generate clinical notes or streamline coding to free up time for direct patient care are far more compelling pitches than generic efficiency gains. Similarly, predictive analytics that help optimize staffing levels or manage patient flows can directly improve operational workflows and throughput.
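To make the documentation use case concrete, here is a minimal, hypothetical sketch of how a note-drafting tool might turn a visit transcript into a draft SOAP note using the OpenAI Python SDK. The model name and prompt are assumptions, this is not a description of any vendor’s product, and a real clinical deployment would require a HIPAA-eligible environment, a business associate agreement, and clinician review of every draft.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def draft_soap_note(transcript: str) -> str:
    """Draft a SOAP-format clinical note from a visit transcript.

    Illustrative only: real products run in HIPAA-eligible environments
    and keep a clinician in the loop to review and sign every note.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a clinical documentation assistant. Summarize the "
                    "visit transcript as a SOAP note (Subjective, Objective, "
                    "Assessment, Plan). Do not invent findings."
                ),
            },
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(draft_soap_note("Clinician: What brings you in today? Patient: ..."))
```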

Ultimately, if an AI solution doesn’t target these critical issues and deliver tangible benefits, it’s unlikely to capture serious buyer interest.

Demonstrate real-world results

AI solutions need validation in environments that mirror actual care settings. The first step toward that is to leverage high-quality, well-curated real-world data to drive reliable insights and avoid misleading results when building and refining AI models. 

Then, hospitals and health systems need evidence that the solution does what it claims to do, for instance through independent third-party validation, pilot projects, peer-reviewed publications, or documented case studies.

Mayo Clinic Platform offers a rigorous independent process in which clinical, data science, and regulatory experts evaluate a solution’s intended use, proposed value, and clinical and algorithmic performance. This review gives innovators the credibility they need to win the confidence of health-care leaders.

Integration with existing systems

With so many demands, health-care IT leaders have little patience for standalone AI tools that create additional complexity. They want solutions that integrate seamlessly into existing systems and workflows. Compatibility with major electronic health record (EHR) platforms, robust APIs, and smooth data ingestion processes are now baseline requirements.
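In practice, much of this compatibility rests on standard interfaces: most major EHR platforms now expose patient data through HL7 FHIR REST APIs. The sketch below reads a single Patient resource from a hypothetical FHIR server with Python’s requests library; the base URL, token, and patient ID are placeholders rather than any specific vendor’s endpoint.

```python
import requests

# Placeholder values: a real integration would use the EHR vendor's FHIR base
# URL and an OAuth 2.0 access token obtained via SMART on FHIR authorization.
FHIR_BASE = "https://ehr.example.org/fhir"
ACCESS_TOKEN = "replace-with-oauth-token"


def get_patient(patient_id: str) -> dict:
    """Fetch a FHIR Patient resource as JSON."""
    response = requests.get(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Accept": "application/fhir+json",
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    patient = get_patient("example-id")
    print(patient.get("name"), patient.get("birthDate"))
```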

Custom integrations that require significant IT resources—or worse, create duplicative work—are deal breakers for many organizations already stretched thin. The less disruption an AI solution introduces, the more likely it is to gain traction. This is why solution developers are turning to platforms like Mayo Clinic Platform Solutions Studio, a program that provides seamless integration, a single implementation, expert guidance to reduce risk, and a simplified process to accelerate solution adoption among health-care providers.

Explainability and transparency

The importance of trust cannot be overstated when it comes to health care, and transparency and explainability are critical to establishing trust in AI. As AI models grow more complex, health-care providers recognize that simply knowing what an algorithm predicts isn’t enough. They also need to understand how it arrived at that insight.

Health-care organizations are increasingly wary of black-box AI systems whose logic remains opaque. Instead, they’re demanding solutions that offer clear, understandable explanations clinicians can relay confidently to peers, patients, and regulators.

As McKinsey research shows, organizations that embed explainability into their AI strategy not only reduce risk but also see higher adoption, better performance outcomes, and stronger financial returns. Solution developers that can demystify their models, provide transparent performance metrics, and build trust at every level will have a significant edge in today’s health-care market.
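What an “understandable explanation” looks like varies by model, but even a simple, model-agnostic technique such as permutation feature importance can show which inputs drive a prediction. The sketch below, using scikit-learn on synthetic data, is a generic illustration rather than any vendor’s method; the dataset and feature names are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a readmission-risk dataset; feature names are invented.
rng = np.random.default_rng(0)
feature_names = ["age", "prior_admissions", "num_medications", "length_of_stay"]
X = rng.normal(size=(1000, len(feature_names)))
# Make the label depend mostly on prior_admissions and length_of_stay.
y = ((0.8 * X[:, 1] + 0.6 * X[:, 3] + rng.normal(scale=0.5, size=1000)) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.3f}")
```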

Clear ROI and low implementation burden

Hospitals and health systems want to know precisely how quickly an AI solution will pay for itself, how much staff time it will save, and what costs it will help offset. The more specific and evidence-backed the answers, the higher the rate of adoption.

Solution developers that offer comprehensive training and responsive support are far more likely to win deals and keep customers satisfied over the long term.

Alignment with regulatory and compliance needs

As AI adoption grows, so does regulatory scrutiny. Health-care providers are increasingly focused on ensuring that any new solution complies with HIPAA, data privacy laws, and emerging guidelines around AI governance and bias mitigation.

Solution developers that can proactively demonstrate compliance provide significant peace of mind. Transparent data handling practices, rigorous security measures, and alignment with ethical AI principles are all becoming essential selling points as well.

A solution developer that understands health care

Finally, it’s not just about the technology. Health-care providers want partners that genuinely understand the complexities of clinical care and hospital operations. They’re looking for partners that speak the language of health care, grasp the nuances of change management, and appreciate the realities of delivering patient care under tight margins and high stakes.

Successful AI vendors recognize that even the best technology must fit into a highly human-centered and often unpredictable environment. Long-term partnerships, not short-term sales, are the goal.

Delivering true value with AI

To earn providers’ trust and investment, AI developers must focus relentlessly on solving real problems, demonstrating proven results, integrating without friction, and maintaining transparency and compliance.

Those that deliver on these expectations will have the chance to help shape the future of health care.

This content was produced by Mayo Clinic Platform. It was not written by MIT Technology Review’s editorial staff.

How healthcare accelerator programs are changing care

As healthcare faces mounting pressures, from rising costs and an aging population to widening disparities, forward-thinking innovations are more essential than ever.

Accelerator programs have proven to be powerful launchpads for health tech companies, often combining resources, mentorship, and technology that startups otherwise would not have access to. By joining these fast-moving platforms, startups are better able to rapidly innovate, enhance, and scale their healthcare solutions, bringing transformative approaches to hospitals and patients faster.

So why are healthcare accelerators becoming essential to the industry’s evolution? Below are the key reasons these programs are reshaping health innovation, and how they are helping to make care more personalized, proactive, and accessible.

Empowering growth and scaling impact       

Healthcare accelerator programs offer a powerful combination of guidance, resources, and connections to help early-stage startups grow, scale, and succeed in a complex industry. 

Participants typically benefit from: 

  • Expert mentorship from seasoned healthcare professionals, entrepreneurs, and industry leaders to navigate clinical, regulatory, and business challenges
  • Access to valuable resources such as clinical data, testing environments, and technical infrastructure to refine and validate health tech solutions
  • Strategic support for growth including investor introductions, partnership opportunities, and go-to-market guidance to expand reach and impact 

Speeding up innovation 

Accelerators help startups and early-stage companies bring their solutions to market faster by streamlining the path through one of the most complex industries: healthcare. Traditionally, innovation in this space is slowed by regulatory hurdles, extended sales cycles, clinical validation requirements, and fragmented data systems.  

Through structured support, accelerators help companies refine their product-market fit, navigate compliance and regulatory landscapes, integrate with healthcare systems, and gather the clinical evidence needed to build trust and credibility. They also open doors to early pilot opportunities, customer feedback, and strategic partnerships, compressing what could take years into just a few months.

By removing barriers and accelerating critical early steps, these programs enable digital health innovators to reach the market more efficiently, with stronger solutions and a clearer path to impact. 

Connecting startups with key stakeholders 

Today, many accelerator programs are developed by large healthcare organizations that are driving change from within. These programs are especially beneficial to startups because their parent organizations have strong partnerships with hospitals, pharma companies, insurance providers, and regulators. This gives startups a chance to validate their ideas in real-world settings, gather clinical feedback early, and scale more effectively.

Many accelerators also bring together people from different fields, including doctors, engineers, data scientists, and designers, encouraging fresh perspectives on persistent problems like chronic disease management, preventative care, data interoperability, and patient engagement.

Breaking barriers to global expansion 

Healthcare accelerator programs act as gateways for international digital health companies looking to enter the U.S. market, often considered one of the most complex and highly regulated healthcare landscapes in the world. These programs provide tailored support to navigate U.S. compliance standards, understand payer and provider dynamics, and adapt offerings to meet the needs of U.S. patients and care delivery models.

Through market-specific mentorship, strategic introductions, and access to a robust health innovation ecosystem, accelerators help international startups overcome geographic and regulatory barriers, enabling global ideas to scale and make an impact where they’re needed most. 

Building the future of healthcare

The role of healthcare accelerator programs extends far beyond startup support. They are helping to redefine how innovation happens, shifting it from isolated efforts to collaborative ecosystems of change. By bridging gaps between early-stage technology and real-world implementation, these programs play a critical role in making healthcare more personalized, preventative, and equitable.

As the digital transformation of healthcare continues, accelerator programs will remain indispensable in cultivating the next generation of breakthroughs, ensuring that bold ideas are not only born, but brought to life in meaningful, measurable ways.

Spotlight: Mayo Clinic Platform_Accelerate

One standout example of this innovation-forward approach is Mayo Clinic Platform_Accelerate, a 30-week accelerator program designed to help health tech startups reach market readiness. Participants gain access to de-identified clinical data, prototyping labs, and guidance from experts across clinical, regulatory, and business domains.

By combining Mayo Clinic’s legacy of clinical excellence with a forward-thinking innovation model, the Mayo Clinic Platform_Accelerate program helps promising startups to refine their solutions and prepare for meaningful scale, transforming how care is delivered across the continuum.

Finding value in accelerator programs

In a time when healthcare must evolve faster than ever, accelerator programs have become vital to the industry’s future. By supporting early-stage innovators with the tools, mentorship, and networks they need to succeed, these programs are paving the way for smarter, safer, and more connected care.

Whether tackling chronic disease, reimagining patient engagement, or unlocking the power of data, the startups nurtured in accelerator programs are helping to shape a more resilient and responsive health system, one innovation at a time.

This content was produced by Mayo Clinic Platform. It was not written by MIT Technology Review’s editorial staff.