Google’s AI Max for Search campaigns is now available worldwide in beta across Google Ads, Google Ads Editor, Search Ads 360, and the Google Ads API.
AI Max packages Google’s AI features as a one-click suite inside Search campaigns. New built-in experiments allow you to test the impact with minimal setup.
Image Credit: Google
What’s New
One-Click Experiments
AI Max is positioned as a faster path to smarter optimization inside Search campaigns.
New one-click experiments are integrated in the campaign flow, so you can compare performance without rebuilding campaigns.
Availability spans all major surfaces, including the API for teams that automate workflows.
How The Built-In Experiments Work
AI Max experiments are run within the same Search campaign by splitting traffic between a control (with AI Max off) and a trial (with AI Max on).
Since the test doesn’t clone the campaign, you’ll avoid sync errors and can ramp up faster. Once the experiment ends, review the performance and decide whether to apply the change or discard it.
Controls You Can Tweak During A Test
By default, your experiment starts with Search term matching and Asset optimization enabled, but it’s easy to customize these settings.
You can choose to turn off Search term matching at the ad group level or disable Asset optimization at the campaign level if that better suits your goals.
For more control over your landing pages, consider using URL exclusions at the campaign level and URL inclusions at the ad group level.
Brand controls are also available for added flexibility: you can set brand inclusions or exclusions at the campaign level, and specify brand inclusions within ad groups.
The “locations of interest” feature at the ad group level offers more geographic targeting precision.
Reporting Surfaces
Results appear under Experiments with an expanded Experiment summary.
AI Max also adds transparency across reports. These include “AI Max” match-type indicators in Search terms and Keywords reports, plus combined views that show the matched term, headlines, and landing URLs.
Auto-Apply Option
If you want, you can set the experiment to auto-apply when results are favorable. Otherwise, apply manually from the Experiments table or enable AI Max from Campaign settings after the test concludes.
Setup Limits To Know
You can’t create an AI Max experiment via this flow if the campaign:
Has legacy features like text customization (old ACA), brand inclusions/exclusions, or ad-group location inclusion already configured
Targets the Display Network
Uses a Portfolio bid strategy
Uses Shared budgets
Coming Soon: Text Guidelines
Google is working on a feature that will provide text guidelines to help AI create brand-safe content that meets your business needs.
This will be available to more advertisers this fall for both AI Max and Performance Max. In the meantime, stick to your usual brand approvals and policy checks.
If you’re already handling Search at scale, the API support simplifies standardizing experiments and comparing results to your existing setup.
Looking Ahead
Expect more controls around creative and safety as text guidelines roll out. Until then, low-lift experiments let you measure AI Max without committing your entire account.
I’ve been extremely antsy to publish this study. Consider it the AIO usability study 1.5, with new insights. You’ll also want to stay tuned for our first AI Mode usability study! It’s coming in a few weeks (make sure to subscribe so you don’t miss it).
Since March, everyone’s been asking the same question: “Are AI Overviews killing our conversions?”
Our 2025 usability study gives a clearer answer than the hot takes you’ll see on LinkedIn and X (Twitter).
In May 2025, I published significant findings from the first comprehensive UX study of AI Overviews (AIOs). Today, I’m presenting you with new insights from that study based on a cutting-edge RAG system that analyzed over 100,000 words of transcription.
The most significant, stand-out finding from that study: People use AI Overviews to get oriented and save time.
Then, for any search that involves a transaction or high-stakes decision-making, searchers validate outside Google, usually with trusted brands or authority domains.
Net-net: AIO is a preview layer. Blue links still close. Before we dive in, you need to hear these insights from Garrett French, CEO of Xofu, who financed this study:
“What lit me up most from this latest work from Kevin: We have direct insight now into an “anchor pattern” of AIO behavior.
In this usability study, we discovered that users rarely voice distrust of AI Overviews directly – instead they hesitate, refine, or click out.
Therefore, hesitation itself is the loudest signal to us.
We see the same in complex, transition-enabling purchase-committee buying (B2B and B2C): Procurement stalls without lifecycle clarity, engineers stall without specs, IT stalls without validation.
These aren’t complaints. They’re unresolved, unanswered, and even unknown questions that have NEVER shown themselves in KW demand.
As content marketers, we have never held ourselves systematically accountable to answering them.
Customer service logs – as an example of one surface for discovering friction – expose the same hesitations in traceable form through repeated chats, escalations, deployment blocks, etc.
Customer service logs are one surface; AIOs are another.
But the real source of truth is always contextual audience friction.
Answering these “friction-inducing,” unasked latent questions gives us a way to read those signals and design content that truly moves decisions forward.”
What The Study Actually Found:
Organic results are the most trusted and most consistently successful destination across tasks.
Sponsored results are noticed but actively skipped due to low trust.
In-SERP answers quickly resolved roughly 85% of straightforward factual questions.
Users often use AIO as a preview or shortcut, then click out to finish or validate (on brand sites, YouTube, coupon portals, and the like).
Shopping carousels aid discovery more than closure. Expect reassessment clicks.
Trust splits by stakes: Low-stakes search journeys often end in the AIO, while finance or health pushes people to known authorities like PayPal, NIH, or Mayo Clinic.
Age and device matter. Younger users, especially on smartphones, accept AIOs faster; older cohorts favor blue links and authority domains.
When the AIO is wrong or feels generic, people bail. We logged 12 unique “AIO is misleading/wrong” flags in higher-stakes contexts.
(Interested in diving deeper into the first findings from this study or need a refresher? Read the first full iteration of the UX study of AIOs.)
Why This Matters For The Bottom Line
In my earlier analysis, I argued that top-of-funnel visibility had more downstream impact than our marketing analytics ever credited. I also argued that demand doesn’t just disappear because clicks shrink.
This study’s behavior patterns support that: AIO satisfies quick lookup intent, but purchase intent still routes through external validation and brand trust – aka clicks. Participants in this study shared thoughts aloud, like:
“There’s the AI results, but I’d rather go straight to PayPal’s own site.”
“Mayo Clinic at the top of results, that’s where I’d go. I trust Mayo Clinic more than an AI summary.”
And that preserves downstream conversions (when you show up in the right places and have earned authority).
Image Credit: Kevin Indig
Deeper Insights: Secondary Findings You Need To See
Recently, I worked with Eric Van Buskirk (the research director of the study) and his team over at Clickstream Solutions to do a deeper analysis of the May 2025 findings.
Using an advanced RAG-driven AI system, we analyzed all 91,559 (!) words of the transcripts from recorded user sessions across 275 task instances.
This is important to understand: We were able to find new insights from this study because Eric has built cutting-edge technology.
Our new RAG system analyzes structured fields like SERP Features, AIO satisfaction, or user reactions from transcriptions and annotations. It creates a retrieval layer and uses ChatGPT-5 for semantic search.
The result is faster, more rigorous, and more transparent research. Every claim can be traced to data rows and transcript quotes, patterns are checked across the full dataset, and visual evidence is a query away.
(To sum that all up in plain language: Eric’s custom-built advanced RAG-driven AI system is wildly cool and extremely effective.)
Practical benefits:
Auditable insights: Conclusions map back to exact data slices.
Speed: Test a hypothesis in minutes instead of re-reading sessions.
Scale: Triangulate transcripts, coded fields, and outcomes across all participants.
Fit for the AI era: Clean structure and trustworthy signals mirror how retrieval systems pick sources, which aligns with our broader stance on visibility and trust.
Here’s what we found:
The data verified four distinct AIO Intent Patterns.
Key SERP features drove more engagement than others.
Core brands shape trust in AIOs.
About The New RAG System
We rebuilt the analysis on a retrieval-augmented system so answers come from the study data, not model guesswork. The backbone lives on structured fields with full transcripts and annotations, indexed in a lightweight database and paired with bucketed data for cohort filtering and cross-checks.
Core components:
Dataset ingestion and cleaning.
Retrieval layer based on hybrid keyword + semantic search.
Auto-coded sentiment to turn speech into consistent, queryable signals.
Validation loop to minimize hallucination.
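The retrieval layer described above can be sketched in a few lines. To be clear, this is a toy stand-in, not the study’s actual pipeline (which isn’t public): simple keyword matching plays the role of BM25, and a deterministic hashing vector stands in for a real sentence-embedding model.

```python
import hashlib
import math
from collections import Counter

def keyword_score(query: str, doc: str) -> float:
    """Fraction of query tokens found in the document (toy BM25 stand-in)."""
    q = Counter(query.lower().split())
    d = set(doc.lower().split())
    hits = sum(count for tok, count in q.items() if tok in d)
    return hits / max(sum(q.values()), 1)

def embed(text: str, dims: int = 64) -> list[float]:
    """Deterministic hashing 'embedding'; a real system would use a model."""
    v = [0.0] * dims
    for tok in text.lower().split():
        v[int(hashlib.md5(tok.encode()).hexdigest(), 16) % dims] += 1.0
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def hybrid_search(query: str, docs: list[str], alpha: float = 0.5) -> list[str]:
    """Blend keyword and semantic scores; alpha weights the keyword side."""
    qv = embed(query)
    scored = sorted(
        ((alpha * keyword_score(query, d)
          + (1 - alpha) * cosine(qv, embed(d)), d) for d in docs),
        reverse=True,
    )
    return [d for _, d in scored]
```

A call like `hybrid_search("trust AI overview", transcript_rows)` would then return transcript rows ranked by blended relevance, which is the basic move behind the hybrid keyword + semantic retrieval described above.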
Which AIO Intent Patterns Were Verified Through The Data
One of the biggest secondary findings from the AIO usability study is that the AIO Intent Patterns aren’t just “gut feelings” anymore – they’re statistically validated, built from measurable behavior.
Before some of you roll your eyes and declare, “here’s yet another newly created SEO/marketing buzzword,” hear me out: The patterns we discovered in the data weren’t exactly search personas, and they weren’t exactly search intents, either.
Therefore, we’re using the phrase “AIO Intent Pattern” to distinguish these concepts from one another.
Here’s how I define them: AIO Intent Patterns are statistically validated clusters of user behavior – like dwell, scroll, refinements, and sentiment – that define how people respond to AIOs. They’re recurring, measurable behaviors that describe how people interact with AI Overviews, whether they accept, validate, compare, or reject them.
And, again, these patterns aren’t exactly search intents or queries, but they’re not exactly user profiles either.
Instead, these patterns represent a set of behaviors (that appeared throughout our data) carried out by users to validate AIOs in different and distinct ways. So that’s why we’ve called the individual behavioral patterns “validations” below.
By running a RAG-driven coding pass across 250+ task instances, we were able to quantify four different behavioral patterns of engagement with AIOs:
Efficiency-first validations that reward clean, extractable facts (accepting of AIOs).
Trust-driven validations that convert only with credibility (validate AIOs).
Comparative validations that use AIOs but compare with multiple sources.
Skeptical rejections that automatically distrust AIOs for high-stakes queries.
What matters most here is that these aren’t arbitrary labels.
Statistical tests showed the differences in dwell time, scrolling, and refinements between the four groups were far too large to be random.
To put it plainly: These are real AIO use behavioral segments or AIO use intents you can plan for.
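To make that claim concrete, here is what such a test looks like in miniature. The dwell-time numbers below are synthetic, chosen only to echo the group averages reported in this article (they are not the study’s raw data), and the F-statistic is a plain one-way ANOVA built from the standard library:

```python
from statistics import mean

def f_statistic(groups):
    """One-way ANOVA F: between-group variance over within-group variance."""
    all_vals = [x for g in groups for x in g]
    grand = mean(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Synthetic dwell times (seconds) echoing the averages reported in this article:
efficiency  = [12, 14, 15, 13, 16]  # ~14s, efficiency-first
trust       = [55, 60, 52, 58, 61]  # ~57s, trust-driven
comparative = [44, 47, 50, 43, 46]  # ~45s, comparative
skeptical   = [5, 7, 4, 6, 8]       # short or nonexistent, skeptical

F = f_statistic([efficiency, trust, comparative, skeptical])
print(round(F, 1))  # a large F means between-group differences dwarf within-group noise
```

With separation like this, the F-statistic lands far above any plausible critical value, which is the sense in which differences between groups are “far too large to be random.”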
Let’s look at each one.
1. Efficiency-First Validations
These are validations where users are seeking a shortcut: They dip into AIOs for fast fact lookups, skim for one answer, and move on.
Efficiency-driven validations thrive on content that’s concise, scannable, and fact-rich. Typical queries that are resolved directly in the AIO include:
“1 cup in ml”
“how to take a screenshot on Mac”
“UTC to CET converter”
“what is robots.txt”
“email regex example”
Below, you can check out two examples of “efficiency-first validation” task actions from the study.
“Okay, so I like the summary at the top. And I would go ahead and follow these instructions and only come back to a search if they didn’t work.”
“I just had to go straight to the AI overview… and I liked that answer. It gave me the information I needed, organized and clear. Found it.”
Our data shows an average dwell time of just 14 seconds for this group overall, with almost no scrolling or refinements.
Users that have an efficiency-first intent for their queries have a neutral to positive sentiment toward AIOs – with no hesitation flags – because AIOs scratch the efficiency-intent itch quickly.
For this behavioral pattern, the AIO often is the final answer – especially on mobile – and if they do click, it’s usually the first clear, extractable source.
👉 Optimization tips for this validation group:
Compress key facts into crisp TLDRs, FAQs, and schema so AIO can surface them.
Place definitions, checklists, and example blocks near the top of your page.
Use simple tables and step lists that can be lifted cleanly.
Ensure brand mentions and key facts appear high on the page for visibility.
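As one concrete (and hypothetical) way to act on the schema tip above, a page’s FAQ block can be emitted as schema.org FAQPage JSON-LD. The helper name and the question-and-answer content here are illustrative, not prescribed by the study:

```python
import json

def faq_jsonld(pairs):
    """Build a schema.org FAQPage JSON-LD string from (question, answer) pairs."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }, indent=2)

snippet = faq_jsonld([
    ("What is robots.txt?",
     "A plain-text file at the site root that tells crawlers which URLs they may fetch."),
])
print(f'<script type="application/ld+json">{snippet}</script>')
```

The output drops into the page `<head>` as a script tag, giving crawlers the same crisp, extractable facts the tips above recommend placing near the top of the page.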
2. Trust-Driven Validations
These validations are full of caution. Users with trust-driven intents engage with AIOs but rarely stop there.
They’ll skim the overview, hesitate, and then click out to an authority domain to validate what they saw, like in this example below:
The user shares that “…at the top, it gave me a really good description on how to transfer money. But I still clicked the PayPal link because it was directly from the official site. That’s what I went with – I trust that information to be more accurate.”
Typical queries that trigger this validation pattern include:
“PayPal buyer protection rules”
“Mayo Clinic strep symptoms”
“Is creatine safe long term”
“Stripe refund timeline”
“GDPR consent requirements example”
Our study data verifies that, in trust-driven mode, users scroll more (2.7x on average), dwell longer (~57 seconds), and often flag uncertainty. What they want is authority.
These users have a high rate of hesitation flags in their search experiments. Their sentiment is mixed – often neutral, sometimes anxious or frustrated – and their confidence is only medium to low.
For these searches, the AIO is a starting point, not the destination. They’ll click out to Mayo Clinic, PayPal, Stripe, or other trusted domains to validate.
👉 Optimization tips for this validation group:
Reinforce trust scaffolding on your landing pages: expert reviewers, citations, and last-reviewed dates.
Mirror official terminology and link to primary sources.
Add “What to do next” boxes that align with authority guidance.
Build strong E-E-A-T signals since credibility is the conversion lever here.
3. Comparative Validations
Users with this intent actively lean into the AIO, whether for classic comparative queries (think “Ahrefs vs Semrush for content teams”) or to compare informational resources and get clarity on the “best” of something. They expand, scroll, refine, and use interactive features – but they don’t stop there.
Instead, they explore across multiple sources, hopping to YouTube reviews, Reddit threads, and vendor sites before making a decision.
Example queries that reveal AIO comparative validation behavior:
“Notion vs Obsidian for teams”
“Best mirrorless camera under 1000”
“How to change a bike tire”
“Standing desk benefits vs risks”
“Programmatic SEO examples B2B”
“How to install a nest thermostat”
Here’s an example using a “how to” search, where the user is comparing sources for the best way to receive the most accurate information:
“The AI Overview gave me clear step-by-step instructions that matched what I expected. But since it was a physical DIY task, I still preferred to branch out to watch a video for confirmation.”
On average, searchers looking for comparative validations in the AIO dwell for 45+ seconds, scroll 4-5 times, and often open multiple tabs.
Their AIO sentiment is positive, and their confidence is high, but they still want to compare.
If this feels familiar – like classic transactional or commercial search intents – it’s because it is related.
If you’ve been doing SEO for any length of time, you’ve likely created some of these “versus” or “comparison” pages. You’ve also likely created “how to” content with step-by-step guidance, like how to install a flatscreen TV on your wall.
Before AIOs, your target users would find themselves there if you ranked well in search.
But now, the AIO frames the landscape first, and the decision comes after weighing pros and cons across information sources to find the best solution.
👉 Optimization tips for this validation group:
Publish structured comparison pages with decision tables and use-case breakdowns.
Pair each page with short demo videos, social proof, and credible community posts to echo your takeaways.
Include “Who it is for” and “Who it isn’t for” sections to reduce ambiguity.
Seed content in YouTube and forums that AIOs (and users) can pick up.
4. Skeptical Rejections
Searchers with a make-or-break intent? They’re the outright AIO skeptical rejectors.
When stakes are high – health, finance, or legal … the typical YMYL (Your Money, Your Life) stuff – they don’t trust AIO to get it right.
Users may scan the summary briefly, but they quickly move to authoritative sources like government sites, hospitals, or financial institutions.
Common queries where this rejection pattern shows up:
“Metformin dosage for PCOS”
“How to file taxes as a freelancer in Germany”
“Credit card chargeback rights EU”
“Infant fever when to go to ER”
“LLC vs GmbH legal liability”
For this search intent, the dwell time in an AIO is short or nonexistent, and their sentiment often skews negative.
They show determination to bypass the AI layer in favor of direct authority validation.
👉 Optimization tips for this validation group:
Prioritize citations and mentions from highly trusted domains so AIOs lean on you indirectly.
Align your pages with the language and categories used by official sources.
Add explicit disclaimers and clear subheadings to strengthen authority signals.
For YMYL topics, focus on being cited rather than surfaced as the final answer.
SERP Features That Drove Engagement
Our RAG-driven analysis of the usability data verified that not all SERP features are created equal.
When we cut the data down to only features with meaningful engagement – which our study defined as ≥5 seconds of dwell time across at least 10 instances – only four SERP feature findings stood out.
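That engagement cut (≥5 seconds of dwell across at least 10 instances) is a simple filter. Here’s a hedged sketch with synthetic rows standing in for the study’s data; feature names and counts are illustrative:

```python
from collections import defaultdict

def meaningful_features(rows, min_dwell=5, min_instances=10):
    """Keep SERP features with >= min_instances interactions of >= min_dwell seconds."""
    counts = defaultdict(int)
    for feature, dwell in rows:
        if dwell >= min_dwell:
            counts[feature] += 1
    return {f for f, n in counts.items() if n >= min_instances}

# Synthetic (feature, dwell-seconds) rows standing in for the study's task instances:
rows = ([("organic", 30)] * 12              # long dwells, many instances -> kept
        + [("featured_snippet", 10)] * 11   # kept
        + [("sponsored", 2)] * 15           # seen often, but dwell < 5s -> dropped
        + [("shopping_carousel", 8)] * 4)   # too few instances -> dropped
print(sorted(meaningful_features(rows)))
```

Features that fail either threshold fall out of the analysis entirely, which is why only a handful of findings survive the cut.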
(I’ll give you a moment to take a few wild guesses regarding the outcomes … and then you’ll see if you’re right.)
Drumroll please. 🥁🥁🥁
(Okay, moment over. Here we go.)
1. Organic Results Are Still The Backbone
Whenever our study participants gave the classic blue links more than a passing glance, they almost always found success.
Transcripts from the study make it explicit: Users trusted official sites, government domains, and familiar authority brands, as one participant’s quote demonstrates:
“Mayo Clinic at the top of results, that’s where I’d go. I trust Mayo Clinic more than an AI summary.”
What about social or community sites that showed up in the organic blue-link results?
Reddit and YouTube were the social or community platforms found in the SERP that were mentioned most by study participants.
Reddit had 45 unique mentions across the entire study. Overall, seeing a Reddit result in organic results produces a user sentiment that is mostly positive, with some users feeling neutral toward the inclusion of Reddit in search, and very few negative comments about Reddit results.
YouTube had 20 unique mentions across the entire study. The sentiment toward YouTube inclusion in SERP results was overwhelmingly positive (19 out of 20 of those instances had a positive user sentiment). The emotions flagged from the study participants around YouTube results included happy/satisfied or curious/exploring.
There was a very clear theme across the study that appeared when social or community sites popped up in organic results:
Reddit was invoked when participants wanted community perspective, usually in comparison tasks. Confidence was high because Reddit validated nuance, but AIO trust was weak (users bypassed AIOs to Reddit instead).
YouTube was used as a visual validator, especially in product or technical comparison tasks. Users expressed positive sentiment and high satisfaction, even when explicit trust wasn’t verbalized. They treated YouTube as a natural step after the AIOs/organic SERP results.
2. Sponsored Results Barely Register
People saw them, but rarely acted on them. “I don’t like going to sponsored sites” was a common refrain.
High visibility, but low trust.
3. Shopping Carousels Aid Discovery But Not Closure
Participants clicked into Shopping carousels for product ideas, but often bounced back out to reassess with external sites.
The carousel works as a catalog – not a closer.
4. Featured Snippets Continue To Punch Above Their Weight
For straightforward factual lookups, snippets had an ~85% engagement success rate.
They were efficient and final for fact-based queries like [example] and [example].
⚠️ Important note: Even though Google is replacing Featured Snippets with AIOs, it’s clear that this way of receiving information within the SERP drives high engagement. While the SERP feature may be in the process of being discontinued, the data shows users like engaging with snippets. The takeaway: If you were often appearing in featured snippets and are now often appearing as AIO citations, keep up the good work to continue earning visibility there, because it still matters.
SERP Features x AIO Intent Patterns
When you layer the intent patterns onto different persona groups, the search behaviors become sharper:
Younger users on mobile leaned heavily on AIO and snippets, often stopping there if the stakes were low. → That’s the hallmark of efficiency-first validations (quick fact lookups) and comparative validations (scrolling, refining, and treating AIO as the main lens).
Older users consistently bypassed AI elements in favor of organic authority results. → This is classic behavior for trust-driven validations, when users click out to brands like PayPal or the Mayo Clinic, and skeptical rejections, when users distrust AIO altogether for high-stakes tasks.
Transactional queries – money, health, booking – nearly always pushed people toward trusted brands, regardless of what AIO or ads surfaced. → This connects directly to trust-driven validations (users who need authority reinforcement to fulfill their search intent) and skeptical rejections (users who reject AIO in YMYL contexts because AIOs don’t meet the intent behind the behavior).
What this shows is that, for SEOs, the priority isn’t about chasing every feature and “winning them all.”
Take this as an example:
“The AI overview didn’t pop up, so I used the search results. These were mostly weird websites, but CNBC looked trustworthy. They had a comparison of different platforms like CardCash and GCX, so I went with CNBC because they’re a trusted source.”
Your job is to match intent (as always):
Earn extractable presence in AIOs for quick facts,
Reinforce trust scaffolding on authority-driven organic pages, and
Treat Shopping and Sponsored slots as visibility and awareness plays rather than conversion levers.
Which Brands Shaped Trust In AIOs
AIOs don’t stand on their own; they borrow credibility from the brands they surface – whether you like it or not.
Emerging platforms (Raise, CardCash, GameFlip, Kade Pay) gained traction primarily because an AIO surfaced them, not because of prior awareness.
👉 Why it matters: Brand trust is the glue between AIO exposure and user action.
Here’s a quick paraphrase of this user’s exploration: We’re looking for places to sell gift cards for instant payment. Platforms like Raise, Gift Card Granny, or CardCash come up. On CardCash, I tried a $10 7-Eleven card, and the offer was $8.30. So they ‘tax’ you for selling. That’s good to know – but it shows you can sell gift cards for cash, and CardCash is one option.
In this instance, the AIO surfaced CardCash. The user didn’t know about it before this search. They explored it in detail, but trust friction (“they tax you”) shaped whether they’d actually use it.
For SEOs, this means three plays running in tandem:
Win mentions in AIOs by ensuring your content is structured, scannable, and extractable.
Strengthen authority off-site so when users validate (or reject the AIO), they land on your pages with confidence.
Build topical authority in your niche through comprehensive persona-based topic coverage and valuable information gain across your topics. (This can be a powerful entry point or opportunity for teams competing against larger brands.)
What does this all mean for your own tactical optimizations?
But here’s the most crucial thing to take away from this analysis today:
With this information in mind, you can now go to your stakeholders and guide them to look at all your prompts, queries, and topics with fresh eyes.
You need to determine:
Which of the target queries/topics are quick answers?
Which of the target queries/topics are instances where people need more trust and assurance?
When do your ideal users expect to explore more, based on the target queries/topics?
This will help you set expectations accordingly and measure success over time.
Featured Image: Paulo Bobita/Search Engine Journal
In my previous article, “Closing the Digital Performance Gap,” I made the case that web effectiveness is a business issue, not a marketing metric. The website is no longer just a reflection of your brand – it is your brand. If it’s not delivering measurable business results, that’s a leadership problem, not a team problem.
But there’s a deeper issue underneath that: Who actually owns web performance?
The truth is, many companies don’t have a good answer. Or they think they do until something breaks. The SEO team doesn’t own the infrastructure. The dev team isn’t briefed on platform changes. The content team isn’t looped in until after a redesign. Visibility drops, conversions dip, and someone asks, “Why isn’t our SEO team performing?”
Because they don’t own the full system. No one does.
If we want to close the digital performance gap, we must address this root problem: lack of accountability.
The Fallacy Of Distributed Ownership
The idea that “everyone owns the website” likely stems from early digital transformation initiatives, where cross-functional collaboration was encouraged to break down departmental silos. The intent was to foster shared responsibility across departments – but the unintended consequence was diffused accountability.
It sounds collaborative, but in practice, it often means no one is fully accountable for performance.
Here’s how it typically breaks down:
IT owns infrastructure and hosting.
Marketing owns content and campaigns.
SEO owns visibility – but not implementation.
UX owns experience – but not findability.
Legal owns compliance – but limits usability.
Product owns the content management system (CMS) – but doesn’t track SEO.
Each group is doing its job, often with excellence. But the result? Disconnected execution. Strategy gets lost in translation, and performance stalls.
Case in point: For a global alcohol brand, a site refresh had legal requirements mandating an age verification gate before users could access the site. That was the extent of their specification. IT built the gate exactly to spec: a page prompting visitors to enter their birthdate via three pull-down menus for Month, Day, and Year, with that date checked against the U.S. legal drinking age. UX and creative delayed launch for weeks while debating the optimal wording, positioning, and color scheme.
Once launched, website traffic, both direct and from organic search, dropped to zero for several key reasons:
Analytics were not set up to track visits before and after the age gate.
Search engines can’t input a birthdate, so they were blocked.
The age requirement was set to the U.S. standard, rejecting visitors from other countries who were younger than that but of legal drinking age at home.
Because everything was done in silos, no one had considered these critical details.
When we finally got all stakeholders in a room, agreed on the issues, and sorted through them, we redesigned the system:
Search engines were recognized and allowed to bypass the age requirement.
The age requirement and date format were adapted to the user’s location.
UX developed multiple variations and tested abandonment.
Analytics captured pre- and post-gate performance.
UX used the data to validate new landing page formats.
The result? A compliant, user-friendly, and search-accessible module that could be reused globally. Visibility, conversions, and compliance all improved dramatically. But we lost months and millions in potential traffic simply because no one owned the whole picture.
Without centralized accountability, the site was optimized in parts but underperforming as a whole.
The AI Era Raises The Stakes
This kind of siloed ownership might have been manageable in the old “10 blue links” era. But in an AI-first world – where Google and other platforms synthesize content into answers, summarize brands, and bypass traditional click paths – every decision across your digital operation impacts your visibility, trust, and conversion.
Search visibility today depends on structured data, crawlable infrastructure, content relevance, and citation-worthiness. If even one of these is out of alignment, you lose shelf space in the AI-driven SERP. And chances are, the team responsible for the weak link doesn’t even know they’re part of the problem.
Why Most SEO Advice Falls Short
I’ve seen well-meaning advice to “improve your SEO strategy” fall flat – because it assumes the SEO team has control over all the necessary elements. They don’t.
You can’t fix crawl issues if you can’t talk to the dev team.
You can’t win AI citations if your content team doesn’t structure or enrich their pages.
You can’t build authority if your legal or PR teams strip bios and outbound references.
To create sustained performance, companies need to designate real ownership over web effectiveness. That doesn’t mean centralizing every task – but it does mean centralizing accountability.
Here are three practical approaches:
1. Establish A Digital Center Of Excellence (CoE)
A CoE provides governance, guidance, and support across business units and regions. It ensures that:
Standards are defined and enforced.
Platforms are chosen and maintained with shared goals.
2. Appoint A Digital Experience Owner (DEO)
Think of this role like a Commissioning Authority in construction – one that ensures every component works together to meet the original performance spec. A DEO:
Connects the dots between dev, SEO, UX, and content.
Advocates for platform investment and cross-team prioritization.
3. Build Shared KPIs Across Departments
Most teams optimize for what they’re measured on. If the SEO team is judged on rankings but not revenue, and the content team is judged on output but not visibility, you get misaligned efforts. Create chained KPIs that reflect end-to-end performance.
Characteristics Of A Performance-Driven Model
Companies that close the accountability gap tend to share these traits:
Unified Taxonomy and Tagging – so content is findable and trackable.
Structured Governance – clear roles and escalation paths across teams.
Shared Dashboards – everyone sees the same numbers, not vanity metrics.
Tech Stack Discipline – fewer, better tools with cross-functional usage.
Scenario Planning – AI, zero-click SERPs, and platform volatility are modeled, not ignored.
Final Thought: Performance Requires Ownership
If you’re serious about web effectiveness, you need more than skilled people and good tools. You need a system where someone is truly accountable for how the site performs – across traffic, visibility, UX, conversion, and AI resilience.
This doesn’t mean a top-down mandate. It means orchestrated ownership with clear roles, measurable outcomes, and a strategic anchor.
It’s time to stop asking the SEO team to fix what they don’t control.
It’s time to build a framework where the web is everyone’s responsibility – and someone’s job.
Let’s make web performance a leadership priority, not a guessing game.
In a digital-first era, customer loyalty can no longer be taken for granted. It can't be bought or bribed; it must be earned through intentional action. Still, content marketers can build consumer trust when given the right framework and strategy.
Undoubtedly, technology will continue to evolve, and as it does, so will customer expectations. Content marketing leaders are put in a tough position: they must strike a delicate balance between leveraging technological innovation and ensuring human connection remains at the forefront.
Your customers crave human-centric connection, and new research reveals consumers are rewarding the businesses that prioritize transparency, personalization, and ethical AI usage. The brands that put their customers at the heart of their business and truly understand what motivates them to take action will win.
Recent research from Forsta, surveying more than 4,000 consumers across the U.S. and UK, highlights a rising trend: Customers are increasingly willing to pay more, stay longer, and advocate for brands they trust.
Trust isn’t just a soft metric that’s nice to sporadically review. Instead, it’s becoming one of the most prominent ways to assess business performance and drive long-term value. For content marketing leaders, this marks a shift in the playbook, which we’ll delve into throughout this post.
Using research-backed insights, we’ll examine five strategies to build consumer trust in an increasingly competitive environment to drive growth and forge stronger customer relationships.
How To Build Trust Through Content Marketing
Cost effectiveness is no longer as persuasive as it once was. In fact, according to the aforementioned study, 71% of consumers (U.S. – 71%, UK – 72%) would rather choose a business they trust with their data over one that’s more affordable.
That staggering figure alone highlights a notable shift in what drives purchasing decisions. Slashing prices doesn’t move the needle; trust does.
For content marketing leaders, a significant opportunity is within reach. Consumers are telling us exactly what they want, dispelling any preconceived notions. They want to buy from businesses that respect their privacy, communicate openly, and personalize their experiences in a way that resonates with them individually.
Trust has evolved to become the cornerstone of modern brand-building, and content marketers should adapt and evolve to earn business.
1. Personalize With Purpose
Content marketers understand the importance of personalizing customer experiences. For example, sending a mass email to your audience without proper segmentation or targeting is about as useless as shouting into a void.
Additionally, given the astounding rise of AI, personalization is now easier than ever to achieve. And since personalization remains a top consumer demand, it's no longer a nice-to-have. It's a must.
However, consumers aren't simply handing over their personal information in exchange for custom-tailored experiences. They're becoming more attuned to how businesses use their data and, in turn, more selective about sharing personal information.
If the value exchange isn’t obvious, transparent, or respectful, consumers may second-guess engaging with your business.
The study asked respondents what mattered most when it came to personalization, and the answer may surprise you: The majority stated efficiency.
The most appreciated personalized experience isn’t targeted ads or dynamic pricing; it goes back to the basics. Consumers want personalization that’s efficient and responsive when they seek help. They want to feel heard and supported without being passed from agent to agent.
This finding flips traditional personalization logic on its head. Instead of focusing solely on selling products or services, content marketing leaders must also examine how personalized support can reduce friction and enhance the customer journey.
Key Takeaway: Shift how you think about personalization. It’s no longer about “attention-grabbing” but rather “value-delivering.”
Use both structured and unstructured data to identify where your greatest opportunities lie, from examining your reviews to your chat logs. Then, write content that addresses those concerns to educate and empower your target audience.
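As a concrete illustration of mining unstructured feedback, here is a minimal Python sketch (with invented review snippets) that tallies recurring terms. A real pipeline would use proper text analytics or topic modeling, but even a rough keyword count can surface recurring themes like checkout friction or slow support:

```python
from collections import Counter
import re

# Hypothetical review snippets standing in for real review or chat-log exports.
reviews = [
    "Checkout took forever and support never answered my chat.",
    "Love the product, but the checkout flow kept erroring out.",
    "Support chat was slow; I waited 20 minutes for a reply.",
]

# Very rough keyword tally; ignore short and common filler words.
stopwords = {"the", "and", "but", "my", "a", "for", "was", "i", "out", "kept"}
words = Counter(
    w
    for review in reviews
    for w in re.findall(r"[a-z]+", review.lower())
    if w not in stopwords and len(w) > 3
)

# Recurring terms point at the themes worth addressing in content.
print(words.most_common(3))
```

Terms like "checkout" and "support" rise to the top here, which is exactly the kind of signal that can drive a content plan addressing real customer concerns.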
2. Be Transparent About AI Usage
AI is already redefining how businesses operate and how they engage with consumers. From leveraging AI tools to create search engine-optimized content outlines to performing keyword research to ensure content aligns with search intent, AI enables scale and speed humans simply can’t match.
But customers are still wary of what’s AI and what’s not. When they feel deceived, trust erodes, and so too can revenue. The study found that 38% of consumers (U.S. – 38%, UK – 40%) would lose trust in a brand if they discovered AI-generated content or interactions weren’t disclosed.
Customers want to know when and where AI is being used, and this information shouldn't be buried. Your AI policies should be front and center, easy to find on your landing pages and in your website's privacy policy.
Key Takeaway: AI isn’t a replacement for human writers, but should rather be viewed as a helpful assistant. Brands must clearly disclose AI usage, offer opt-outs when appropriate, and stay away from using AI to fully draft content.
3. Ensure Every Experience Is A Positive One
Customer loyalty is fragile. Negative experiences are remembered, and businesses may not get a second chance to right their wrongs, as evidenced by the following finding.
More than 60% of consumers (U.S. – 63%, UK – 62%) said they would stop buying from a brand after just one or two negative experiences. This leaves little opportunity for error before customers take their hard-earned money elsewhere.
This raises the question: What types of mistakes are unforgivable? It's often not the major mistakes that you'd expect, but rather the accumulation of small grievances.
Over half of consumers (U.S. – 53%, UK – 51%) said that inconveniences like long checkout lines or slow customer service can do more damage than something you'd expect to be more catastrophic, like sending out an email for a sale that's no longer active.
The little things add up, and customers are quick to move on even if it happens just once.
Key Takeaway: Marketing and customer experience leaders must build feedback loops to catch and fix small annoyances before they grow into bigger issues that hit your business's bottom line.
Both teams should stay aligned to ensure nothing falls through the cracks, such as a faulty form on a gated content’s landing page or a broken call-to-action (CTA) link in an ebook.
4. Focus On Human Connection
Despite the rise of digital tools, the data is clear: Consumers still want and value human interaction. A chatbot may help to solve a quick issue, but many want to speak to and engage with an actual human. If this isn’t an option, your business runs the risk of creating a trust deficit with potential customers.
Unsurprisingly, over half (58%) of U.S. respondents said they value the ability to talk to a real person when they need support. Customers don't want to get stuck in a phone tree; they want real support in real time.
This doesn't mean abandoning digital transformation; it means balancing it with empathy. Human connection is valued throughout all stages of the customer journey, whether a customer is engaging with a social post or responding to a promotional email. Make human connection seamless and simple.
Key Takeaway: Digital tools can be helpful for enabling quick support, but they shouldn’t eliminate the option for human connection, especially when escalation is necessary. Invest in omnichannel experiences that offer the best of both worlds.
5. Ensure Value In Exchange For Data
Consumers are still willing to share their data, but only if they believe they’ll get something worthwhile out of it.
Banks, for example, are largely seen as trustworthy, with 69% of U.S. and 81% of UK consumers agreeing they trust banks to handle their data responsibly.
In contrast, social media platforms and AI tools (like ChatGPT, Gemini, Perplexity, and more) rank lowest when it comes to trust.
For content marketing leaders, this adds a layer of complexity to strategies for success. We know customers do want personalized experiences, but it comes with conditions. They expect brands to use their data only for meaningful interactions, not for profit or intrusive profiling.
The value exchange must be evident, meaning content standards must be set high. Content can no longer be drafted just to meet a quota or stuff in keywords.
In addition to drafting relevant and helpful content that matches search intent, marketers should clearly disclose:
What data you collect.
What consumers will get in exchange for it.
How you protect it.
Why you collect it.
Key Takeaway: Make data transparency a part of your brand promise. Clearly disclose the benefit consumers will receive in exchange for their personal information. Create content that resonates with your audience, solves their pain points, and offers them clear value.
Framework For Turning Trust Into A Strategic Asset
To truly operationalize trust, marketing leaders must move beyond surface-level gestures and embed it into every layer of their customer journey. Trust must no longer be treated as a compliance issue but rather as a growth strategy.
Brands that build a reputation for responsible data use, transparent AI disclosure, exceptional customer experiences, and genuine human connection will stand out in today's marketplace.
Key actions for content marketing leaders to take include:
Audit CX for friction: Map key points of failure across your digital journey. Understand the types of content that are converting best and what needs reassessment. Continually measure content marketing performance to identify what’s landing well with your audience.
Be radically transparent: From AI disclosures to privacy policies, it’s better to overcommunicate to your audience. Share how and when AI is used.
Use AI responsibly: AI simply can’t match the expertise, strength, and emotion of human writers. Therefore, it should be used as an aid rather than a crutch when it comes to drafting content.
Reframe personalization: Personalization is a must, but not at the cost of frustrating customers. Use personalization strategically, ensuring it serves utility over novelty.
Empower cross-functional teams: Every team should have visibility into shared trust key performance indicators (KPIs) so each team understands how they can help grow consumer trust.
The future of marketing isn’t just about accelerating AI, personalization, or even digital transformation. It’s about trust.
Trust is what turns first-time buyers into lifelong advocates. It’s what enables brands to charge a premium, recover from mistakes, and stand out in crowded markets. In an era where consumer skepticism is high, trust must be earned through every stage of the customer journey, from first click to collecting payment.
For content marketing leaders, the takeaway is clear: Trust is your brand’s most valuable asset. Invest in it wisely.
Google removed outdated structured data documentation, but instead of returning a 404 response, it chose to redirect the old URLs to a changelog that links back to those same URLs, creating an infinite loop between the two pages. That is technically not a soft 404, but it is an unusual use of a 301 redirect for a missing web page, and not how SEOs typically handle missing pages and 404 server responses. Did Google make a mistake?
Google Removed Structured Data Documentation
Google quietly published a changelog note announcing that it had removed obsolete structured data documentation. The removal was first announced three months ago in June, and today the obsolete documentation was finally taken down.
The missing pages are for the following structured data that is no longer supported:
Course info
Estimated salary
Learning video
Special announcement
Vehicle listing
Those pages are completely missing. Gone, and likely never coming back. The usual procedure in that kind of situation is to return a 404 Page Not Found server response. But that’s not what is happening.
Instead of a 404 response, Google is returning a 301 redirect to the changelog. What makes this setup odd is that Google links back to the missing web page from the changelog, which then redirects back to the changelog, creating an infinite loop between the two pages.
Screenshot Of Changelog
In the above screenshot I’ve underlined in red the link to the Course Info structured data.
The words “course info” are a link to this URL: https://developers.google.com/search/docs/appearance/structured-data/course-info
Which redirects right back to the changelog here: https://developers.google.com/search/updates#september-2025
Which of course contains the links to the five URLs that no longer exist, essentially causing an infinite loop.
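To make the crawl trap concrete, here is a small Python sketch that models the two pages as a hop graph and detects the cycle. The URLs are the real ones from this article, but the hop map and traversal logic are purely illustrative of what a crawler might do, not a description of any actual crawler:

```python
# Model each URL's effective "next hop": a 301 Location header for the
# removed doc, or (for the changelog) the dead link it points back to.
DOC = "https://developers.google.com/search/docs/appearance/structured-data/course-info"
CHANGELOG = "https://developers.google.com/search/updates#september-2025"

hops = {
    DOC: CHANGELOG,   # 301 redirect from the removed page
    CHANGELOG: DOC,   # in-page link back to the removed page
}

def find_cycle(start: str, hops: dict) -> list[str]:
    """Follow hops from `start`; return the path once a URL repeats."""
    seen, path, url = set(), [], start
    while url in hops:
        if url in seen:
            path.append(url)
            return path  # cycle detected: the path ends where it began
        seen.add(url)
        path.append(url)
        url = hops[url]
    return path  # dead end, no cycle

print(find_cycle(DOC, hops))  # doc -> changelog -> doc: the two-page loop
```

Any crawler following this pair of pages bounces between them forever unless it keeps exactly this kind of visited-set, which is why the setup wastes crawl effort even though each individual hop is a valid 301.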
It’s not a good user experience and it’s not good for crawlers. So the question is, why did Google do that?
301 redirects are an option for pages that are missing, so Google is technically correct to use one. However, 301 redirects are typically used to point “to a more accurate URL,” which generally means a redirect to a replacement page – one that serves the same or a similar purpose.
Technically they didn’t create a soft 404. But the way they handled the missing pages creates a loop that sends crawlers back and forth between a missing web page and the changelog. It seems that it would have been a better user and crawler experience to instead link to the June 2025 blog post that explains why these structured data types are no longer supported rather than create an infinite loop.
I don’t think it’s anything most SEOs or publishers would do, so why does Google think it’s a good idea?
For multi-location brands, local search has always been competitive. But 2025 has introduced a new player: AI.
From AI Overviews to Maps Packs, how consumers discover your stores is evolving, and some brands are already pulling ahead.
Robert Cooney, VP of Client Strategy at DAC, and Kyle Harris, Director of Local Optimization, have spent months analyzing enterprise local search trends. Their findings reveal clear gaps between brands that merely appear and those that consistently win visibility across hundreds of locations.
Multi-generational search habits are shifting. Brands that align content to real consumer behavior capture more attention.
The next wave of “agentic search” is coming, and early preparation is the key to staying relevant.
This webinar is your chance to see these insights in action. Walk away with actionable steps to protect your visibility, optimize local presence, and turn AI-driven search into a growth engine for your stores.
📌 Register now to see how enterprise brands are staying ahead of AI in local search. Can’t make it live? Sign up and we’ll send the recording straight to your inbox.
In Silicon Valley’s imagined future, AI models are so empathetic that we’ll use them as therapists. They’ll provide mental-health care for millions, unimpeded by the pesky requirements for human counselors, like the need for graduate degrees, malpractice insurance, and sleep. Down here on Earth, something very different has been happening.
Last week, we published a story about people finding out that their therapists were secretly using ChatGPT during sessions. In some cases it wasn’t subtle; one therapist accidentally shared his screen during a virtual appointment, allowing the patient to see his own private thoughts being typed into ChatGPT in real time. The model then suggested responses that his therapist parroted.
It's my favorite AI story of late, probably because it captures so well the chaos that can unfold when people actually use AI the way tech companies have all but told them to.
As the writer of the story, Laurie Clarke, points out, it’s not a total pipe dream that AI could be therapeutically useful. Early this year, I wrote about the first clinical trial of an AI bot built specifically for therapy. The results were promising! But the secretive use by therapists of AI models that are not vetted for mental health is something very different. I had a conversation with Clarke to hear more about what she found.
I have to say, I was really fascinated that people called out their therapists after finding out they were covertly using AI. How did you interpret the reactions of these therapists? Were they trying to hide it?
In all the cases mentioned in the piece, the therapist hadn’t provided prior disclosure of how they were using AI to their patients. So whether or not they were explicitly trying to conceal it, that’s how it ended up looking when it was discovered. I think for this reason, one of my main takeaways from writing the piece was that therapists should absolutely disclose when they’re going to use AI and how (if they plan to use it). If they don’t, it raises all these really uncomfortable questions for patients when it’s uncovered and risks irrevocably damaging the trust that’s been built.
In the examples you’ve come across, are therapists turning to AI simply as a time-saver? Or do they think AI models can genuinely give them a new perspective on what’s bothering someone?
Some see AI as a potential time-saver. I heard from a few therapists that notes are the bane of their lives. So I think there is some interest in AI-powered tools that can support this. Most I spoke to were very skeptical about using AI for advice on how to treat a patient. They said it would be better to consult supervisors or colleagues, or case studies in the literature. They were also understandably very wary of inputting sensitive data into these tools.
There is some evidence AI can deliver more standardized, “manualized” therapies like CBT [cognitive behavioral therapy] reasonably effectively. So it’s possible it could be more useful for that. But that is AI specifically designed for that purpose, not general-purpose tools like ChatGPT.
What happens if this goes awry? What attention is this getting from ethics groups and lawmakers?
At present, professional bodies like the American Counseling Association advise against using AI tools to diagnose patients. There could also be more stringent regulations preventing this in future. Nevada and Illinois, for example, have recently passed laws prohibiting the use of AI in therapeutic decision-making. More states could follow.
OpenAI’s Sam Altman said last month that “a lot of people effectively use ChatGPT as a sort of therapist,” and that to him, that’s a good thing. Do you think tech companies are overpromising on AI’s ability to help us?
I think that tech companies are subtly encouraging this use of AI because clearly it’s a route through which some people are forming an attachment to their products. I think the main issue is that what people are getting from these tools isn’t really “therapy” by any stretch. Good therapy goes far beyond being soothing and validating everything someone says. I’ve never in my life looked forward to a (real, in-person) therapy session. They’re often highly uncomfortable, and even distressing. But that’s part of the point. The therapist should be challenging you and drawing you out and seeking to understand you. ChatGPT doesn’t do any of these things.
The rising popularity of AI is driving an increase in electricity demand so significant it has the potential to reshape our grid. Energy consumption by data centers has gone up by 80% from 2020 to 2025 and is likely to keep growing. Electricity prices are already rising, especially in places where data centers are most concentrated.
Yet many people, especially in Big Tech, argue that AI will be, on balance, a positive force for the grid. They claim that the technology could help get more clean power online faster, run our power system more efficiently, and predict and prevent failures that cause blackouts.
This story is a part of MIT Technology Review’s series “Power Hungry: AI and our energy future,” on the energy demands and carbon costs of the artificial-intelligence revolution.
There are early examples where AI is helping already, including AI tools that utilities are using to help forecast supply and demand. The question is whether these big promises will be realized fast enough to outweigh the negative effects of AI on local grids and communities.
A delicate balance
One area where AI is already being used for the grid is in forecasting, says Utkarsha Agwan, a member of the nonprofit group Climate Change AI.
Running the grid is a balancing act: Operators have to understand how much electricity demand there is and turn on the right combination of power plants to meet it. They optimize for economics along the way, choosing the sources that will keep prices lowest for the whole system.
That makes it necessary to look ahead hours and in some cases days. Operators consider factors such as historical data (holidays often see higher demand) and the weather (a hot day means more air conditioners sucking up power). These predictions also consider what level of supply is expected from intermittent sources like solar panels.
There’s little risk in using AI tools in forecasting; it’s often not as time sensitive as other applications, which can require reactions within seconds or even milliseconds. A grid operator might use a forecast to determine which plants will need to turn on. Other groups might run their own forecasts as well, using AI tools to decide how to staff a plant, for example. The tools also can’t physically control anything. Rather, they can be used alongside more conventional methods to provide more data.
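As a toy illustration of the regression-style forecasting described above, the sketch below fits invented historical peak-demand observations against afternoon temperature and predicts tomorrow's peak. All numbers are made up for illustration; real operator models blend many more signals (calendars, solar output, weather ensembles) and are vastly more sophisticated:

```python
# Invented historical observations: (afternoon temperature °C, peak demand MW).
history = [
    (22, 310), (25, 340), (28, 375), (31, 420), (34, 470),
]

n = len(history)
mean_t = sum(t for t, _ in history) / n
mean_d = sum(d for _, d in history) / n

# Ordinary least-squares fit for demand = a + b * temperature.
b = sum((t - mean_t) * (d - mean_d) for t, d in history) / sum(
    (t - mean_t) ** 2 for t, _ in history
)
a = mean_d - b * mean_t

forecast_temp = 30  # tomorrow's forecast high, also invented
print(f"predicted peak: {a + b * forecast_temp:.0f} MW")
```

The point is not the model (a one-variable regression is far too crude for a real grid) but the workflow: historical data in, a demand estimate out, which operators then use to decide which plants to commit.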
Today, grid operators make a lot of approximations to model the grid, because the system is so incredibly complex that it’s impossible to truly know what’s going on in every place at every time. Not only are there a whole host of power plants and consumers to think about, but there are considerations like making sure power lines don’t get overloaded.
Working with those estimates can lead to some inefficiencies, says Kyri Baker, a professor at the University of Colorado Boulder. Operators tend to generate a bit more electricity than the system uses, for example. Using AI to create a better model could reduce some of those losses and allow operators to make decisions about how to control infrastructure in real time to reach a closer match of supply and demand.
She gives the example of a trip to the airport. Imagine there’s a route you know will get you there in about 45 minutes. There might be another, more complicated route that could save you some time in ideal conditions—but you’re not sure whether it’s better on any particular day. What the grid does now is the equivalent of taking the reliable route.
“So that’s the gap that AI can help close. We can solve this more complex problem, fast enough and reliably enough that we can possibly use it and shave off emissions,” Baker says.
In theory, AI could be used to operate the grid entirely without human intervention. But that work is largely still in the research phase. Grid operators are running some of the most critical infrastructure in this country, and the industry is hesitant to mess with something that’s already working, Baker says. If this sort of technology is ever used in grid operations, there will still be humans in the loop to help make decisions, at least when it’s first deployed.
Planning ahead
Another fertile area for AI is planning future updates to the grid. Building a power plant can take a very long time—the typical time from an initial request to commercial operation in the US is roughly four years. One reason for the lengthy wait is that new power plants have to demonstrate how they might affect the rest of the grid before they can connect.
An interconnection study examines whether adding a new power plant of a particular type in a particular place would require upgrades to the grid to prevent problems. After regulators and utilities determine what upgrades might be needed, they estimate the cost, and the energy developer generally foots the bill.
Today, those studies can take months. They involve trying to understand an incredibly complicated system, and because they rely on estimates of other existing and proposed power plants, only a few can happen in an area at any given time. This has helped create the years-long interconnection queue, a long line of plants waiting for their turn to hook up to the grid in markets like the US and Europe. The vast majority of projects in the queue today are renewables, which means there’s clean power just waiting to come online.
AI could help speed this process, producing these reports more quickly. The Midcontinent Independent System Operator, a grid operator that covers 15 states in the central US, is currently working with a company called Pearl Street to help automate these reports.
AI won’t be a cure-all for grid planning; there are other steps to clearing the interconnection queue, including securing the necessary permits. But the technology could help move things along. “The sooner we can speed up interconnection, the better off we’ll be,” says Rob Gramlich, president of Grid Strategies, a consultancy specializing in transmission and power markets.
There’s a growing list of other potential uses for AI on the grid and in electricity generation. The technology could monitor and plan ahead for failures in equipment ranging from power lines to gear boxes. Computer vision could help detect everything from wildfires to faulty lines. AI could also help balance supply and demand in virtual power plants, systems of distributed resources like EV chargers or smart water heaters.
While there are early examples of research and pilot programs for AI from grid planning to operation, some experts are skeptical that the technology will deliver at the level some are hoping for. “It’s not that AI has not had some kind of transformation on power systems,” Climate Change AI’s Agwan says. “It’s that the promise has always been bigger, and the hope has always been bigger.”
Some places are already seeing higher electricity prices because of power needs from data centers. The situation is likely to get worse. Electricity demand from data centers is set to double by the end of the decade, reaching 945 terawatt-hours, roughly the annual demand from the entire country of Japan.
The infrastructure growth needed to support AI load growth has outpaced the promises of the technology, “by quite a bit,” says Panayiotis Moutis, an assistant professor of electrical engineering at the City College of New York. Higher bills caused by the increasing energy needs of AI aren’t justified by existing ways of using the technology for the grid, he says.
“At the moment, I am very hesitant to lean on the side of AI being a silver bullet,” Moutis says.
Correction: This story has been updated to correct Moutis’s affiliation.
Earlier this year, when my colleague Casey Crownhart and I spent six months researching the climate and energy burden of AI, we came to see one number in particular as our white whale: how much energy the leading AI models, like ChatGPT or Gemini, use up when generating a single response.
This fundamental number remained elusive even as the scramble to power AI escalated to the White House and the Pentagon, and as projections showed that in three years AI could use as much electricity as 22% of all US households.
The problem with finding that number, as we explain in our piece published in May, was that AI companies are the only ones who have it. We pestered Google, OpenAI, and Microsoft, but each company refused to provide its figure. Researchers we spoke to who study AI’s impact on energy grids compared it to trying to measure the fuel efficiency of a car without ever being able to drive it, making guesses based on rumors of its engine size and what it sounds like going down the highway.
But then this summer, after we published, a strange thing started to happen. In June, OpenAI’s Sam Altman wrote that an average ChatGPT query uses 0.34 watt-hours of energy. In July, the French AI startup Mistral didn’t publish a number directly but released an estimate of the emissions generated. In August, Google revealed that answering a question to Gemini uses about 0.24 watt-hours of energy. The figures from Google and OpenAI were similar to what Casey and I estimated for medium-size AI models.
So with this newfound transparency, is our job complete? Did we finally harpoon our white whale, and if so, what happens next for people studying the climate impact of AI? I reached out to some of our old sources, and some new ones, to find out.
The numbers are vague and chat-only
The first thing they told me is that there’s a lot missing from the figures tech companies published this summer.
OpenAI’s number, for example, did not appear in a detailed technical paper but rather in a blog post by Altman that leaves lots of unanswered questions, such as which model he was referring to, how the energy use was measured, and how much it varies. Google’s figure, as Crownhart points out, refers to the median amount of energy per query, which doesn’t give us a sense of the more energy-demanding Gemini responses, like when it uses a reasoning model to “think” through a hard problem or generates a really long response.
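The median-versus-tail point is easy to see with a few lines of Python on invented per-query figures: the median barely moves even when a handful of expensive reasoning-model responses dominate the total energy used.

```python
import statistics

# Illustrative, made-up per-query energy draws in watt-hours: most chat
# replies are cheap, but a few long reasoning responses cost far more.
queries_wh = [0.2] * 90 + [0.3] * 8 + [5.0, 8.0]

median = statistics.median(queries_wh)
mean = statistics.fmean(queries_wh)

print(f"median: {median:.2f} Wh, mean: {mean:.3f} Wh")
# The median sits at the cheap queries while the tail inflates the mean,
# so a median alone understates what the expensive responses consume.
```

This is why a single median figure, like the one Google published, tells us little about the distribution's expensive tail.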
The numbers also refer only to interactions with chatbots, not the other ways that people are becoming increasingly reliant on generative AI.
“As video and image becomes more prominent and used by more and more people, we need the numbers from different modalities and how they measure up,” says Sasha Luccioni, AI and climate lead at the AI platform Hugging Face.
This is also important because the figures for asking a question to a chatbot are, as expected, undoubtedly small—the same amount of electricity used by a microwave in just seconds. That’s part of the reason AI and climate researchers don’t suggest that any one individual’s AI use creates a significant climate burden.
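A quick back-of-envelope check of these claims, using the 0.34 Wh-per-query figure cited earlier and OpenAI's reported 2.5 billion prompts per day. The microwave wattage is an assumed typical value, not something from either company's disclosures:

```python
WH_PER_QUERY = 0.34        # Altman's stated average for a ChatGPT query
MICROWAVE_W = 1100         # assumed typical microwave power draw

# How long would a microwave run on one query's worth of energy?
seconds_of_microwave = WH_PER_QUERY / MICROWAVE_W * 3600
print(f"one query ~= {seconds_of_microwave:.1f} s of microwave use")

# Aggregate across OpenAI's reported daily prompt volume.
PROMPTS_PER_DAY = 2.5e9
daily_mwh = WH_PER_QUERY * PROMPTS_PER_DAY / 1e6  # Wh -> MWh
print(f"fleet-wide: ~{daily_mwh:,.0f} MWh per day")
```

So a single query really is microwave-seconds of electricity, while the fleet-wide total lands in the hundreds of megawatt-hours per day, which is why the individual-use and aggregate framings lead to such different conclusions.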
A full accounting of AI’s energy demands—one that goes beyond what’s used to answer an individual query to help us understand its full net impact on the climate—would require application-specific information on how all this AI is being used. Ketan Joshi, an analyst for climate and energy groups, acknowledges that researchers don’t usually get such specific information from other industries but says it might be justified in this case.
“The rate of data center growth is inarguably unusual,” Joshi says. “Companies should be subject to significantly more scrutiny.”
We have questions about energy efficiency
Companies making billion-dollar investments in AI have struggled to square this growth in energy demand with their sustainability goals. In May, Microsoft said that its emissions had soared by over 23% since 2020, owing largely to AI, even as the company has promised to be carbon negative by 2030. "It has become clear that our journey towards being carbon negative is a marathon, not a sprint," Microsoft wrote.
Tech companies often justify this emissions burden by arguing that soon enough, AI itself will unlock efficiencies that will make it a net positive for the climate. Perhaps the right AI system, the thinking goes, could design more efficient heating and cooling systems for a building, or help discover the minerals required for electric-vehicle batteries.
But there are no signs yet that AI has actually been used to do these things. Companies have shared anecdotes about using AI to find methane emission hot spots, for example, but they haven't been transparent enough for us to know whether these successes outweigh the surges in electricity demand and emissions that Big Tech has produced in the AI boom. In the meantime, more data centers are planned, and AI's energy demand continues to rise.
The ‘bubble’ question
One of the big unknowns in the AI energy equation is whether society will ever adopt AI at the levels that figure into tech companies’ plans. OpenAI has said that ChatGPT receives 2.5 billion prompts per day. It’s possible that this number, and the equivalent numbers for other AI companies, will continue to soar in the coming years. Projections released last year by the Lawrence Berkeley National Laboratory suggest that if they do, AI alone could consume as much electricity annually as 22% of all US households by 2028.
But this summer also saw signs of a slowdown that undercut the industry's optimism. OpenAI's launch of GPT-5 was largely considered a flop, even by the company itself, and that flop led critics to wonder whether AI might be hitting a wall. When a group at MIT found that 95% of businesses are seeing no return on their massive AI investments, stocks floundered. The expansion of AI-specific data centers might be an investment that's hard to recoup, especially as revenues for AI companies remain elusive.
One of the biggest unknowns about AI’s future energy burden isn’t how much a single query consumes, or any other figure that can be disclosed. It’s whether demand will ever reach the scale companies are building for or whether the technology will collapse under its own hype. The answer will determine whether today’s buildout becomes a lasting shift in our energy system or a short-lived spike.
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
Meet the AI honorees on our 35 Innovators Under 35 list for 2025
Each year, we select 35 outstanding individuals under the age of 35 who are using technology to tackle tough problems in their respective fields.
Our AI honorees include people who steer model development at Silicon Valley’s biggest tech firms and academic researchers who develop new techniques to improve AI’s performance.
Check out all of our AI innovators here, and the full list—including our innovator of the year—here.
How Yichao “Peak” Ji became a global AI app hitmaker
When Yichao Ji—also known as “Peak”—appeared in a launch video for Manus in March, he didn’t expect it to go viral. Speaking in fluent English, the 32-year-old introduced the AI agent built by Chinese startup Butterfly Effect, where he serves as chief scientist.
The video was not an elaborate production, but something about Ji's delivery, and the vision behind the product, cut through the noise. The product, then still an early preview available only through invite codes, spread from the Chinese internet to the world in a matter of days. Within a week of its debut, Manus had attracted a waiting list of around 2 million people.
Despite his relative youth, Ji has over a decade of experience building products that merge technical complexity with real-world usability. That earned him credibility—and put him at the forefront of a rising class of Chinese technologists with global ambitions. Read the full story.
—Caiwei Chen
Help! My therapist is secretly using ChatGPT
In Silicon Valley’s imagined future, AI models are so empathetic that we’ll use them as therapists. They’ll provide mental-health care for millions, unimpeded by the pesky requirements for human counselors, like the need for graduate degrees, malpractice insurance, and sleep. Down here on Earth, something very different has been happening.
Last week, we published a story about people finding out that their therapists were secretly using ChatGPT during sessions. In some cases it wasn’t subtle; one therapist accidentally shared his screen during a virtual appointment, allowing the patient to see his own private thoughts being typed into ChatGPT in real time.
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
What’s next in tech: the breakthroughs that matter
Some technologies reshape industries, whether we’re ready or not.
Join us for our next LinkedIn Live event on September 10 as our editorial team explores the breakthroughs defining this moment and the ones on the horizon that demand our attention.
From quantum computing to humanoid robotics, AI agents to climate tech, we'll explore the innovations that excite us, the challenges they may bring, and why they're worth watching now. It kicks off at 12:30 p.m. ET tomorrow—register here to join us.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 The US is abandoning its international push against disinformation
The State Department will no longer collaborate with Europe to combat malicious information spread by foreign governments. (FT $)
+ It comes as Russia is increasing its efforts to interfere overseas. (NYT $)

2 The judge overseeing Anthropic's copyright case isn't happy
Judge William Alsup says a $1.5 billion out-of-court settlement may not be in the authors' best interests. (Bloomberg $)

3 WhatsApp's former head of security is suing Meta
Attaullah Baig is accusing the company of failing to protect user data. (WP $)
+ He claims he uncovered systemic security failures, but was ignored. (Bloomberg $)
+ Meta maintains that Baig was dismissed for poor performance, not whistleblowing. (NYT $)

4 DOGE's acting head is urging the US government to start hiring again
Following months of widespread firings and resignations. (Fast Company $)
+ How DOGE wreaked havoc in Social Security. (ProPublica)
+ DOGE's tech takeover threatens the safety and stability of our critical data. (MIT Technology Review)

5 OpenAI is weighing up leaving California
It's worried that state regulators could derail its efforts to convert to a for-profit entity. (WSJ $)
+ Rival Anthropic is backing California governor Gavin Newsom's AI bill. (Politico)

6 ICE spends millions on facial recognition tech
In an effort to pinpoint people it suspects have assaulted officers. (404 Media)
+ The Supreme Court has given ICE the go-ahead to target people based on race. (Vox)
+ ICE directors were told to triple their daily arrests for undocumented immigrants. (NY Mag $)

7 AI researchers are training AI to replace them
They're recording every detail of their working days to help AI grasp their jobs. (The Information $)
+ People are worried that AI will take everyone's jobs. We've been here before. (MIT Technology Review)

8 What comes after the smartphone?
The rise of AI agents means we may not be staring at glass slabs forever. (NYT $)
+ What's next for smart glasses. (MIT Technology Review)

9 Social media's obsession with 'locking in' needs to die
Hustle culture and maximizing productivity at all costs are the aims of the game. (Insider $)

10 What it's like to receive a massage from a robot
While it may not be quite as relaxing, it's relatively cheap. (The Guardian)
+ Will we ever trust robots? (MIT Technology Review)
Quote of the day
“It was hell on Earth.”
—Duncan Okindo, who was enslaved in a Myanmar cyberscam compound and beaten for missing his targets, tells the Guardian about his harrowing experience.
One more thing
AI means the end of internet search as we’ve known it
We all know what it means, colloquially, to google something. You pop a few words in a search box and in return get a list of blue links to the most relevant results. Fundamentally, it’s just fetching information that’s already out there on the internet and showing it to you, in a structured way.
But all that is up for grabs. We are at a new inflection point. The biggest change to the way search engines deliver information to us since the 1990s is happening right now, thanks to generative AI.
Not everyone is excited for the change. Publishers are completely freaked out. And people are also worried about what these new LLM-powered results will mean for our fundamental shared reality. Read the full story.
—Mat Honan
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ Stephen King's list of favorite movies doesn't feature a whole lot of horror.
+ Tune into a breathtaking livestream of Earth, beamed live from the International Space Station.
+ Rodent thumbnails are way more important than I gave them credit for.
+ Mark our words, actor Wagner Moura is going to be the next big thing.