New Data Finds Gap Between Google Rankings And LLM Citations via @sejournal, @MattGSouthern

Large language models cite sources differently than Google ranks them.

Search Atlas, an SEO software company, compared citations from OpenAI’s GPT, Google’s Gemini, and Perplexity against Google search results.

The analysis of 18,377 matched queries finds a gap between traditional search visibility and AI platform citations.

Here’s an overview of the key differences Search Atlas found.

Perplexity Is Closest To Search

Perplexity performs live web retrieval, so you would expect its citations to look more like search results. The study supports that.

Across the dataset, Perplexity showed a median domain overlap of around 25–30% with Google results. Median URL overlap was close to 20%. In total, Perplexity shared 18,549 domains with Google, representing about 43% of the domains it cited.

ChatGPT And Gemini Are More Selective

ChatGPT showed much lower overlap with Google. Its median domain overlap stayed around 10–15%. The model shared 1,503 domains with Google, accounting for about 21% of its cited domains. URL matches typically remained below 10%.

Gemini behaved less consistently. Some responses had almost no overlap with search results. Others lined up more closely. Overall, Gemini shared just 160 domains with Google, representing about 4% of the domains that appeared in Google’s results, even though those domains made up 28% of Gemini’s citations.

What The Numbers Mean For Visibility

Ranking in Google doesn’t guarantee LLM citations. This report suggests the systems draw from the web in different ways.

Perplexity’s architecture actively searches the web, and its citation patterns more closely track traditional search rankings. If your site already ranks well in Google, you are more likely to see similar visibility in Perplexity answers.

ChatGPT and Gemini rely more on pre-trained knowledge and selective retrieval. They cite a narrower set of sources and are less tied to current rankings. URL-level matches with Google are low for both.

Study Limitations

The dataset heavily favored Perplexity. It accounted for 89% of matched queries, with OpenAI at 8% and Gemini at 3%.

Researchers matched queries using semantic similarity scoring. Paired queries expressed similar information needs but were not identical user searches. The threshold was 82% similarity using OpenAI’s embedding model.
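The pairing step can be sketched as a cosine-similarity comparison over query embeddings, keeping only pairs that clear the 0.82 cutoff. This is a minimal illustration, not the study’s actual code; the toy two-dimensional vectors stand in for real embedding-model output, which would have thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_queries(pairs, threshold=0.82):
    """Keep only query pairs whose embeddings clear the similarity threshold.

    `pairs` is a list of (query_a, vec_a, query_b, vec_b) tuples; in the
    study the vectors would come from an embedding model such as OpenAI's.
    """
    return [(qa, qb) for qa, va, qb, vb in pairs
            if cosine_similarity(va, vb) >= threshold]

# Toy two-dimensional vectors standing in for real embeddings:
pairs = [
    ("best crm software", [0.9, 0.1], "top crm tools", [0.88, 0.15]),
    ("best crm software", [0.9, 0.1], "weather today", [0.1, 0.95]),
]
print(match_queries(pairs))  # only the near-duplicate pair survives
```

Paired queries like these express the same information need without being identical strings, which is exactly what the similarity threshold is meant to capture.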

The two-month window provides a recent snapshot only. Longer timeframes would be needed to see whether the same overlap patterns hold over time.

Looking Ahead

For retrieval-based systems like Perplexity, traditional SEO signals and overall domain strength are likely to matter more for visibility.

For reasoning-focused models like ChatGPT and Gemini, those signals may have less direct influence on which sources appear in answers.


Featured Image: Ascannio/Shutterstock

Google’s Old Search Era Is Over – Here’s What 2026 SEO Will Really Look Like

For years, Google’s predictable, and at times too easily gamed, ecosystem created an illusion that SEO success came from creating any and all content and checking boxes rather than understanding users.

During the era of massive top‑of‑funnel traffic and generously ranked low‑quality content, many marketers didn’t realize it, but they mistook timing and loopholes for talent. Google unintentionally fueled this overconfidence by rewarding keyword stuffing, shallow articles, and formulaic playbooks that had little to do with real expertise.

Those days are gone. Today, AI-slop in the SERPs, fragmented discovery across social and generative AI chatbots, and the rise of agentic systems have exposed just how fragile those old SEO tactics really were.

SEO isn’t dying; it’s finally maturing.

And the marketers who win from this point forward are the ones who:

  • Understand audience behavior.
  • Build trust.
  • Earn authoritative attention across platforms, formats, and AI-powered environments.

That’s why we created SEO Trends 2026, our most comprehensive annual analysis yet.

It captures where discovery is shifting, how search behavior is changing, and what’s actually working for top SEOs right now.

And, it’s based on first-hand insights from some of the most respected operators in the industry.

Inside this year’s edition, you’ll learn:

  • How to protect your visibility in an AI-first discovery landscape.
  • Which platforms and content types are emerging as new engines of trust.
  • Why brand experience now influences rankings as much as on-page content.
  • The single most important strategic shift SEOs must make for 2026.

Key Finding #1: SEO Is Splintering Into New Discovery Paths

Discovery has fractured far beyond the ten blue links. Users now bounce between TikTok, Reddit, YouTube, ChatGPT, Gemini, and AI assistants before ever reaching a website.

Gen Z alone starts 1 in 10 searches with Google Lens, and 20% of those carry commercial intent. 

Traditional TOFU content has lost ground as AI systems increasingly summarize it.

Why it matters for SEO: Visibility now requires showing up consistently across multiple platforms, not just search.

Learn how to start reallocating your content and platform strategy to match this shift. Download the SEO Trends 2026 ebook for the tactical playbook.

Key Finding #2: Content AI Can’t Replicate Is Driving Results

Top SEOs reported that the content performing best in 2026 is the kind AI can’t easily imitate: opinionated commentary, first-hand experience, data-rich insights, and multimedia storytelling.

Shelley Walsh highlights that video interviews and experience-based formats “gain visibility across social, SERPs, and LLMs” precisely because they contain a human perspective.

SEO Opportunity: SEOs must invest in formats that feel unmistakably human. It’s not enough to publish “helpful content.” You need content that’s un-cannibalizable.

Download the ebook to explore SEO-first content trends that are gaining visibility in 2026.

Key Finding #3: AI Is Now A Competitive Necessity And A Threat

AI assistants and chatbots are quickly becoming the default discovery channel for millions of users.

LLMs now absorb the informational queries that once fueled website traffic, and they evaluate brands based on third-party mentions, sentiment, and authority signals across platforms.

Yet at the same time, these systems introduce new risk:

  • Truncated SERPs.
  • Hallucinations.
  • Opaque ranking logic.

As Katie Morton notes, Google is incentivized to keep users on its properties, often at the expense of search quality.

Why it matters for SEO in 2026: If you aren’t shaping how AI systems interpret your brand, they’ll pull from someone else’s narrative.

Get direction from the industry’s top SEO experts in SEO Trends 2026.

Key Finding #4 & SEO Predictions For 2026

Download the full ebook to access the complete set of 2026 predictions.

Search is changing faster than ever, but the through-line is clear: SEO is becoming a holistic, multi-platform marketing discipline.

User journeys now weave through AI agents, social feeds, community forums, image results, chat interfaces, and, only sometimes, traditional SERPs. Brands need to meet users wherever they seek information, and ensure that every touchpoint reinforces clarity, authority, and trust.

The most successful teams in 2026 will:

  • Invest deeply in audience understanding.
  • Create content that satisfies human expectations, not algorithmic myths.
  • Build owned communities to reduce platform dependence.
  • Monitor how AI systems surface, summarize, and cite their content.
  • Prioritize conversion and loyalty over traffic alone.

If you want to future-proof your search strategy and strengthen your brand’s presence across every discovery engine, download SEO Trends 2026 today. It’s the clearest roadmap we’ve ever published for navigating the AI search era with confidence.

Get the full ebook now and start building your 2026 strategy with data, not guesswork.

SEO Trends 2026


Featured Image: CHIEW/Shutterstock

Adobe To Acquire Semrush In $1.9 Billion Cash Deal via @sejournal, @MattGSouthern

Adobe and Semrush announced today that they have entered into a definitive agreement for Adobe to acquire Semrush in an all-cash transaction valued at approximately $1.9 billion. Adobe will pay $12.00 per share, describing Semrush as a “leading brand visibility platform.”

The acquisition brings a widely used SEO platform under Adobe’s Digital Experience umbrella.

The deal is expected to close in the first half of 2026, subject to regulatory approvals and the approval of Semrush stockholders.

What Adobe Is Buying

Semrush is a Boston-based SaaS platform best known in search marketing for keyword research, site audits, competitive intelligence, and online visibility tracking.

Over the past two years, Semrush has added enterprise products focused on AI-driven visibility, including tools that monitor how brands are referenced in responses from large language models such as ChatGPT and Gemini, alongside traditional search results.

Semrush has also been an active acquirer. Recent deals have included SEO education and community assets like Backlinko and Traffic Think Tank, as well as technology and media acquisitions such as Third Door Media, the publisher of Search Engine Land.

For Adobe, this gives the Experience Cloud portfolio a direct line into the SEO workflow that many in-house teams and agencies already use daily.

How Semrush Fits Adobe’s AI Marketing Stack

Adobe positions the deal as part of a broader strategy to support “brand visibility” in what it describes as an agentic AI era.

In the announcement, Anil Chakravarthy, president of Adobe’s Digital Experience business, says:

“Brand visibility is being reshaped by generative AI, and brands that don’t embrace this new opportunity risk losing relevance and revenue.”

Semrush’s “generative engine optimization” positioning aligns with that narrative. The company has been pitching GEO as a counterpart to traditional SEO, focused on keeping brands discoverable inside AI-generated answers, not just organic listings.

Adobe plans to integrate Semrush with products like Adobe Experience Manager, Adobe Analytics, and its newer Brand Concierge offering.

Deal Terms And Timeline

Under the terms of the agreement, Adobe will acquire Semrush for $12.00 per share in cash, representing a total equity value of roughly $1.9 billion.

Coverage from financial outlets notes that the price reflects a premium of around 77 percent over Semrush’s prior closing share price and that Semrush stock jumped more than 70 percent in early trading following the announcement.

According to the companies, the transaction has already been approved by both boards. An associated SEC filing shows the merger agreement was signed on November 18.

Closing is targeted for the first half of 2026, pending customary regulatory reviews and the approval of Semrush shareholders. Until then, Adobe and Semrush say they will continue to operate as separate companies.

Why This Matters

This deal continues a broader trend: core search and visibility tools are moving deeper into large enterprise suites.

If you already rely on Semrush, you can expect tighter integration with Adobe’s analytics and customer experience products over time.

It also raises practical questions:

  • How will Semrush be packaged and priced once it sits inside Adobe’s enterprise stack?
  • Can agencies and smaller teams keep using Semrush as a relatively independent tool?
  • How will Adobe choose to handle Semrush’s media holdings, including Search Engine Land and related properties?

For now, both companies are presenting the acquisition as a way to give marketers a more complete view of brand visibility across search results and AI-generated answers, rather than as a change to Semrush’s current product line.

Looking Ahead

In the near term, there are two things to watch.

First, regulators will review the transaction, particularly given Adobe’s history with large acquisitions in the digital experience space. That process will shape the closing timeline.

Second, Adobe will need to decide how quickly to integrate Semrush into Experience Cloud and how much to preserve the existing product and brand. Those choices will influence how disruptive this feels for your current workflows.

Watch for changes to Semrush’s API access, plan structure, and reporting integrations once the deal moves closer to completion.


Featured Image: IB Photography/Shutterstock

Digital Equity Is Brand Equity: Don’t Lose Search Visibility In a Merger via @sejournal, @billhunt

Most mergers and acquisitions (M&A) fail to account for the digital infrastructure and visibility of the acquired brands. While executives obsess over legal, financial, and branding integration, they overlook the most visible and valuable touchpoint: the website. This digital neglect often leads to steep drops in search visibility, broken customer journeys, and millions in lost revenue.

This article breaks down the Digital Dilution Effect, a compounding loss of equity, visibility, and performance when digital is mismanaged during M&A, and offers a recovery playbook for executives looking to preserve and grow digital value.

I’ve seen the negative impact firsthand, working with multinationals that acquire dozens of companies each year. It’s the same drill over and over. I remember being in a meeting where the SVP was screaming at the former CEO of an acquired company for not delivering.

The CEO shot back:

“You destroyed everything. We used to get 90% of our leads from organic search. Now our 1,000-page site is gone, replaced by six fluff pages buried in your corporate site with no marketing or ad support.”

That moment became the catalyst for a project I’d been lobbying for: integrating digital migration planning into the M&A process to prevent what I now call the Digital Dilution Effect, the systematic erosion of online visibility and value post-acquisition.

What Is The Digital Dilution Effect?

Digital Dilution is the measurable loss of traffic, brand equity, and revenue that occurs when websites are merged, redirected, or rebranded without a coordinated SEO, content, and infrastructure strategy.

It’s the digital version of goodwill impairment, but worse:

  • The audience knows something’s broken.
  • The platforms (Google, Bing, ChatGPT) lose trust in your content.
  • Your visibility gets reassigned to a competitor or the generative AI black hole.

Why it matters:

In a world where discovery and decision-making are increasingly digital, failing to maintain your brand’s digital presence during an M&A can wipe out the very value you paid for.

The Most Common Causes

  1. Visibility Loss From Domain Consolidation. Rebranding a target company without preserving its search footprint is the fastest way to disappear from customer queries. Redirects are often misconfigured, delayed, or deprioritized.
  2. Visibility Loss From Content Consolidation. As in the experience above, the acquired companies’ digital assets are consolidated from hundreds or thousands into a few “product pages” on the acquirer’s website, losing all the equity they had gained.
  3. Mismatched Infrastructure & CMS Conflicts. Many acquired sites run on different platforms. Migrating to a “standard” content management system (CMS) without considering indexation, internal linking, and site structure almost always leads to crawl chaos.
  4. Conflicting Geo Targeting & Hreflang Implementation. For global firms, improper hreflang consolidation or mismatched country/language logic can result in pages being served to the wrong markets or not at all.
  5. Content Cannibalization. When duplicate or overlapping content isn’t rationalized, search engines are forced to choose which version to index, often selecting neither.
  6. Analytics & Conversion Tracking Breakage. If tracking is not unified across merged properties, you’re flying blind – unable to measure loss, retention, or recovery efforts.
  7. Delay Between Brand Announcement And Web Update. There’s often a months-long gap between press releases and full web updates. During this window, confused users and crawlers both disengage.
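To make cause #4 concrete, here is a minimal sketch of generating a consistent hreflang tag set for one page’s locale variants, the part that most often breaks during consolidation: every variant should carry the same full set of tags, including a self-reference. The URLs and locale codes are hypothetical.

```python
def hreflang_tags(variants, default_url=None):
    """Build <link rel="alternate" hreflang=...> tags for the locale
    variants of a single page.

    `variants` maps hreflang codes (e.g. "en-gb", "de") to URLs. The same
    full tag set should appear on every variant page; dropping the
    self-reference is a common migration failure.
    """
    tags = [
        f'<link rel="alternate" hreflang="{code}" href="{url}" />'
        for code, url in sorted(variants.items())
    ]
    if default_url:
        tags.append(
            f'<link rel="alternate" hreflang="x-default" href="{default_url}" />'
        )
    return tags

# Hypothetical variant set for one product page:
variants = {
    "en-gb": "https://example.com/uk/widgets/",
    "de": "https://example.com/de/widgets/",
}
for tag in hreflang_tags(variants, default_url="https://example.com/widgets/"):
    print(tag)
```

Generating the full set from one source of truth, rather than hand-editing each regional template, is what keeps country/language logic from drifting apart after the merge.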

Case In Point: A Costly Oversight

A global manufacturing firm acquired a smaller European competitor in a $200 million deal. The acquired brand had strong organic rankings across multiple languages and had become the default source in Google’s AI snippets for specific technical questions.

However:

  • The SEO team wasn’t consulted until eight weeks after the post-acquisition rebrand launched.
  • All top-performing content was redirected to a single press release page.
  • Traffic dropped 94% within 30 days.
  • The AI systems removed the content from summaries, and competitors replaced it.

The cost?

Over $4.5 million in lost monthly inbound lead value, plus the erosion of the technical authority they had spent years building.

The Real Cost Of Misalignment

During M&A, you’ll hear executives ask:

“How quickly can we realize synergies?”
“What’s the roadmap for operational integration?”

But rarely:

“What’s our plan for preserving digital visibility and brand equity?”

That absence is costly.

  • Marketing loses traction with no ability to retarget or convert.
  • Sales loses via the inbound pipeline that powered growth.
  • Product teams struggle to communicate value.
  • Investors see a drop in performance that contradicts synergy projections.

And because SEO and digital visibility aren’t line items in the M&A model, the root cause is often missed.

Why It Keeps Happening

M&A teams are built for compliance and speed.

  • Legal teams want minimal liability.
  • IT wants platform standardization.
  • Marketing wants the new brand live, fast.

But no one is assigned to protect digital equity. The SEO team, if they’re even consulted, often gets overruled or brought in too late.

And in global M&As, the fragmentation is even worse:

  • Regionally controlled sites follow different standards.
  • Language variants conflict with the new global strategy.
  • Schema and structured data get stripped in the migration.

All of this results in a loss of discoverability – and with it, business momentum.

A Digital Recovery Playbook

To avoid – or reverse – digital dilution, here’s what leaders must do:

1. Audit Digital Visibility Before The Deal Closes

Understand which pages drive traffic, leads, and brand authority. This becomes your digital equity ledger.

2. Create A Visibility Preservation Plan

Build a redirect map, structured data strategy, and hreflang alignment plan before you migrate anything.
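A redirect map can be sanity-checked programmatically before anything goes live. The sketch below (hypothetical URLs) flags two failure modes described earlier: redirect chains, and many-to-one collapses like the press-release example.

```python
def validate_redirect_map(redirects):
    """Sanity-check an old-URL -> new-URL redirect map before a migration.

    Flags two classic failure modes: redirect chains (a target that is
    itself redirected) and many-to-one collapses (multiple ranking pages
    funneled into a single destination).
    """
    issues = []
    targets = {}
    for old, new in redirects.items():
        if new in redirects:
            issues.append(f"chain: {old} -> {new} -> {redirects[new]}")
        targets.setdefault(new, []).append(old)
    for new, olds in targets.items():
        if len(olds) > 1:
            issues.append(f"collapse: {len(olds)} pages -> {new}")
    return issues

# Hypothetical map with both problems planted:
redirects = {
    "/old/spec-sheet": "/products/spec-sheet",
    "/old/faq": "/news/acquisition",       # collapse
    "/old/manuals": "/news/acquisition",   # collapse
    "/products/spec-sheet": "/products/",  # creates a chain
}
for issue in validate_redirect_map(redirects):
    print(issue)
```

Running a check like this against the digital equity ledger from step 1 turns "preserve visibility" from a slogan into a testable pre-launch gate.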

3. Assign A Digital Integration Lead

Give them real authority – someone who understands SEO, analytics, infrastructure, and cross-functional coordination.

4. Involve SEO In The Deal Room

Just as you review legal liabilities and brand risks, assess the visibility and platform risks with equal rigor.

5. Use The New Brand Launch As A Visibility Catalyst

Turn your rebrand into a content and media boost, not a silent flicker. Leverage schema, press coverage, and AI-optimized structured content.

6. Monitor And Course Correct

Expect a short-term dip, but monitor indexed pages, impressions, and citations weekly. Course correct aggressively.
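That weekly loop can be as simple as a week-over-week delta check with an alert threshold. A minimal sketch, with hypothetical figures and an arbitrary 20% drop threshold:

```python
def weekly_deltas(metrics, alert_drop=0.20):
    """Compare this week's visibility metrics to last week's and flag
    any metric that fell more than `alert_drop` (20% by default).

    `metrics` maps a metric name to (last_week, this_week) values.
    """
    alerts = []
    for name, (prev, curr) in metrics.items():
        change = (curr - prev) / prev
        if change <= -alert_drop:
            alerts.append(f"{name}: {change:+.0%} week over week")
    return alerts

# Hypothetical post-migration numbers:
metrics = {
    "indexed_pages": (1000, 940),    # -6%: inside the expected dip
    "impressions": (50000, 31000),   # -38%: course correct now
    "ai_citations": (120, 118),
}
print(weekly_deltas(metrics))
```

The point is not the tooling but the cadence: a dip you catch in week two is a redirect fix; one you catch in quarter two is goodwill impairment.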

Final Thought: Treat Digital Equity Like Brand Equity

In the analog world, a brand’s equity resides in customer trust, product perception, and reputation. In the digital world, that equity is increasingly stored in search visibility, content authority, and structured presence across AI and web ecosystems.

You wouldn’t toss out brand recognition in a logo redesign. Don’t toss out digital visibility in an M&A.

If the acquired company’s website is responsible for 60% of inbound leads, killing it without a plan is self-sabotage. If their blog is quoted in Google SGE or ChatGPT, removing it erases your relevance in future answers.

The CMO, CTO, and CSO must work together – from day zero of due diligence – not just to integrate operations but to preserve digital dominance.

Because if your brand can’t be found, it can’t be chosen. And if your new site becomes invisible, that “strategic acquisition” just became a liability.

M&A success isn’t just about alignment on paper; it’s about continuity in search, AI, and user experience. Protect that, and you protect your investment.


Featured Image: Anton Vierietin/Shutterstock

Should Advertisers Be Worried About AI In PPC?

One scroll through LinkedIn and you’d struggle not to see a post, video, or ad about AI, whatever industry you work in.

For digital marketing, it’s completely taken over, and it has woven itself into nearly every aspect of day-to-day life, especially within PPC advertising.

From automated bidding to AI-generated ad creative, platforms like Google Ads and Microsoft Advertising have been doubling down on this for years.

Naturally, this shift raises questions and concerns among advertisers, with one side claiming it’s out of control and taking over, the other side boasting about time saved and game-changing results, and then you’ve got the middle ground trying to figure out exactly what the impact is and where it is going.

It’s a difficult question to answer with a simple yes or no. With so many opinions and so many platforms for sharing them, the debate is everywhere, and although AI in PPC is certainly not in its infancy, it can feel that way in 2025.

In this article, we’ll explore how AI is used in PPC today, the benefits it offers, the concerns it brings, and how advertisers can best adapt.

What Role Does AI Play In PPC Today?

The majority of advertisers are already using some form of AI-driven tool in their workflow, with 74% of marketers reporting that they used AI tools last year, up from just 21% in 2022.

Then, within the platforms themselves, PPC campaigns are heavily invested in artificial intelligence, both above and below the hood. The key areas are:

Bid Automation

Gone are the days of manual bidding on hundreds of keywords or product groups (in most cases).

Google’s and Microsoft’s Automated Bidding use machine learning to set optimal bids for each auction based on the likelihood to convert.

These algorithms analyze countless signals (device, location, time of day, user behavior patterns, etc.) in real-time to adjust bids far more precisely than a human could.

In this scenario, the role of the advertiser is to feed these bidding strategies the best possible data on which to base their decisions.
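The core idea behind conversion-likelihood bidding can be shown with a toy calculation: a click is worth roughly your target CPA times the predicted probability of conversion. This is a simplification for intuition only, not the platforms’ actual auction logic; the numbers are hypothetical.

```python
def value_based_bid(target_cpa, p_convert, max_bid=None):
    """Toy illustration of conversion-based bidding: bid what a click is
    worth, i.e. target CPA times the predicted conversion probability.
    This is NOT Google's or Microsoft's actual algorithm, just the
    underlying economics the signals feed into.
    """
    bid = target_cpa * p_convert
    return min(bid, max_bid) if max_bid is not None else bid

# A $50 target CPA with a 4% predicted conversion rate justifies a $2 click:
print(value_based_bid(50.0, 0.04))  # 2.0
```

The real systems re-estimate `p_convert` per auction from signals like device, location, and time of day, which is why they can price each impression differently while a manual bid treats them all the same.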

Then at a strategic level, advertisers still need to determine the structure, targeting, goals, etc., and this is where Google has further pushed AI into the hands of PPC teams.

From Google’s side, it’s an invitation to trust that the AI will find relevant matches and handle bids accordingly. I have seen this work incredibly well, but I’ve also seen it work terribly; it’s all context-dependent.

Dynamic Creative & Assets

Responsive Search Ads (RSAs) allow advertisers to input multiple headlines and descriptions, which Google’s AI then mixes and matches to serve the best-performing combinations for each query.

Over time, the algorithm learns which messages resonate most.

Google has even introduced generative AI tools to create ad assets (headlines, images, etc.) automatically based on your website content and campaign goals.

Similarly, Microsoft’s platform now offers a Copilot feature that can generate ad copy variations, images, and suggest keywords using AI.

Of all the AI-related changes in Google Ads, in my experience, this is the one advertisers welcomed most, as it saves time and creates a convenient way to test different messaging, calls to action, etc.

Keyword Match Types

The recipe Google gives advertisers for Google Ads in 2025 is to blend broad match with automated bidding.

Why is this? According to Google, machine learning attempts to understand user intent and match ads to queries that aren’t exact matches but are deemed relevant.

Think about it this way: You’ve done your research for your new search campaign, built out your ad groups, and are confident that you have covered all bases.

How will this change over time, and how can you guarantee you’re not missing relevant auctions? This is the rhetoric Google runs with for broad match, leaning on stats like billions of searches per day, ~15% of which are brand-new queries, to push advertisers to loosen targeting and let machine learning operate without constraints.

There is certainly value in this, and it’s reported that 62% of advertisers using Google’s Smart Bidding have made broad match their primary keyword match type, a strategy that was very much a no-go for years. However, handing all control over to AI doesn’t fully align with what matters most (profitability, LTV, margins, etc.), and there has to be a middle ground.

Audience Targeting And Optimization

Both Google and Microsoft leverage AI to build and target audiences.

Campaign types like Performance Max are almost entirely AI-driven; they automatically allocate your budget across search, display, YouTube, Gmail, etc., to find conversions wherever they occur.

Advertisers simply provide creative assets, search themes, conversion goals, etc., and the AI does the rest.

To a large degree, the better the quality of the data fed in, the better the performance.

Of all the AI topics for Google Ads, PMax is very much debated within the industry, but it’s telling that 63% of PPC experts plan to increase spend on Google’s feed-based Performance Max campaigns this year.

Recommendations, Auto Applies, And Budget Optimization

If you work within/around PPC, you’ll have seen, closed, shouted at, and maybe on a rare occasion, taken action off the back of these.

The platforms continuously analyze account performance and suggest optimizations.

Some are basic, but others (like budget reallocation or shifting to different bid strategies) are powered by machine learning insights across thousands of accounts.

As good as these may sound, they are only as good as the data being fed into the account, and they lack context; applied blindly, some can be detrimental to account performance.

In summary, advertisers have had to embrace AI to a large extent in their day-to-day campaign management.

But with this embrace comes a natural question: Is all this AI making things better or worse for advertisers, or is it just a way for ad platforms to grow their market share?

What Are The Benefits Of AI In PPC?

AI offers some clear advantages for paid search marketers.

When used properly, AI can make campaigns more efficient and effective, and it can save a great deal of time once spent on monotonous tasks.

Here are some key benefits:

Efficiency And Time Savings

One of the biggest wins is automation of labor-intensive tasks.

AI can analyze massive data sets and adjust bids or ads 24/7, far faster than any human.

This frees up marketers to focus on strategy instead of repetitive tasks.

Mundane tasks such as bid adjustments, budget pacing, and creative rotation can be picked up by AI, allowing PPC teams to focus on high-level strategy and analysis and to look at the bigger picture.

It’s certainly not a case of set-and-forget, but the balance has shifted.

AI can now take care of the executional heavy lifting, while humans guide the strategy, interpret the nuance, and make the judgment calls that machines can’t.

Structural Management

A clear benefit of AI in many facets of paid search is the consolidation of account structures.

Large advertisers might have millions of keywords or hundreds of ads, which at one time were manually mapped out and managed group by group.

With automated bidding strategies adjusting bids in real time, serving the best possible creative and doubling down on the keywords, product groups, and SKUs that work, PPC teams are able to whittle down overly complex account structures into consolidated themes where they can feed their data.

Campaigns like Performance Max scale across channels automatically, finding additional inventory (like YouTube or Display) without the advertiser manually creating separate campaigns, further making life easier for advertisers who choose to use them.

Optimization Of Ad Creative And Testing

Rather than running a handful of ad variations, responsive ads powered by AI can test dozens of combinations of headlines and descriptions instantly.

The algorithm learns which messages work best for each search term or audience segment.

Additionally, new generative AI features can create ad copy or image variations you hadn’t considered, expanding creative possibilities. But check these before launch, and if they are set to auto-apply, consider removing and reviewing them first, as the outputs can be unpredictable.

The overarching goal for the ad platforms is to solve the problem many teams face of getting creatives produced quickly, which they do to an extent, but there’s still a way to go.

Audience Targeting And Personalization

AI can identify user patterns to target more precisely than manual bidding.

Google’s algorithms might learn that certain search queries or user demographics are more likely to convert and automatically adjust bids or show specific ad assets to those segments, and as these change over time, so do the bidding strategies.

This kind of micro-optimization of who sees which ad was very hard to do manually and came with significant limitations.

In essence, the machine finds your potential customers using complex signals and adjusts bids in real time for each user, versus setting one bid for a term or product group across every ad set, essentially treating each auction the same.

What Are The Concerns Of AI In PPC?

Despite all the promise, it’s natural for advertisers to have some worries about the march of AI in paid search.

Handing over control to algorithms and black box systems comes with its challenges.

In practice, there have been hiccups and valid concerns that explain why some in the industry are cautious.

Loss Of Control And Transparency

A common gripe is that as AI takes over, advertisers lose visibility into the “why” behind performance changes.

Take PMax, for example. These fully automated campaigns provide only limited data compared to a segmented structure, making it hard to understand what’s driving conversions. That puts advertisers in a difficult position when reporting performance to stakeholders who once had a wealth of data to dig through.

Nearly half of PPC specialists said that managing campaigns has become harder in the last two years because of the loss of insights and data from automated campaign types like PMax. One industry survey found that trust in major ad platforms has declined over the past year, with Google experiencing a 54% net decline in trust sentiment.

Respondents cited the platforms’ prioritization of black box automation over user control as a key issue, with many feeling like they are flying partially blind, a serious worry given the budgets involved and the importance of Google Ads as an advertising channel for millions of brands worldwide.

Performance And Efficiency Trade-Offs

I’ve mentioned this a couple of times already: as with most AI in the context of Google Ads, the data fed into the platform determines how well the AI performs, and adopting AI in PPC does not bring immediate performance improvements for every account, however hard Google pushes that narrative.

Algorithms optimize for the goal you set (e.g., achieve this ROAS), sometimes at the expense of other metrics like cost per conversion or return on investment (ROI).

Take broad match keywords combined with Smart Bidding; this might bring in more traffic, but some of that traffic could be low quality or not truly incremental, impacting the bottom line and how you manage your budgets.

Context matters, so take this with a pinch of salt: an analysis of over 2,600 Google Ads accounts found that 72% of advertisers saw better return on ad spend (ROAS) with traditional exact match keyword targeting, while only around 26% of accounts achieved better ROAS using broad match automation.

Advertisers are rightly concerned that blindly following AI recommendations could lead to wasted spend on irrelevant clicks or diminishing returns.

Then there’s the learning period for automated strategies, which can be costly (but necessary): the algorithm may spend heavily while it figures out what works, something not every business can afford.

Mistakes, Quality, And Brand Safety

AI isn’t infallible.

There have been instances of AI-generated ad copy that misses the mark or even violates brand guidelines.

For example, if you let generative AI create search ads, it might produce statements that are factually incorrect or not in the desired tone.

Having worked extensively in paid search for luxury fashion brands, I’ve seen the risk of AI producing off-brand creative and messaging become a roadblock to getting on board with new campaign types.

In a Salesforce survey, 31% of marketing professionals cited accuracy and quality concerns with AI outputs as a barrier.

To add further complexity, many of these features, such as auto-apply recommendations in Google Ads, are not easy to spot within accounts, and catching them depends on the level of expertise in the team managing PPC. AI-generated assets or enhancements could be live without teams knowing, which can lead to friction within businesses with strict brand guidelines.

Over-Reliance And Skills Erosion

Another subtle worry is that marketers relying heavily on AI could see their own skills become redundant.

PPC professionals used to pride themselves on granular account optimization, but if the machine is doing everything, how will their jobs change?

A study by HubSpot found that over 57% of U.S. marketers feel pressure to learn AI tools or risk becoming irrelevant in their careers.

With PPC, all this means is that less and less time is spent within the accounts undertaking repetitive tasks, something that I’ve championed for years.

Every paid search team is different and built from different levels of expertise. However, the true value PPC teams bring shouldn’t be the intricacies of campaign management; it’s the understanding of the value their channel is driving and everything around it that influences performance.

So, Should Advertisers Be Worried About AI In PPC?

As with most topics in PPC (and most articles I write), there isn’t a simple yes or no answer, and it’s very much context dependent.

PPC advertisers shouldn’t panic; they should be aware, informed, and prepared, and this doesn’t mean knowing the exact ins and outs of AI models, far from it.

Rather than asking if you trust it or not, or if you really should give up the reins of manual campaign management, ask yourself how you can use AI to make your job easier and to drive better results for your business/clients.

Over my last decade and a half in performance marketing, working in-house, within independents, networks, and from running my own paid media agency, I’ve seen many trends come and go, each one shifting the role of the PPC team ever so slightly.

AI is certainly not a trend; it’s fundamentally changing the world we live in. Within the PPC world, it’s changing the way we work, pushing advertisers to spend less time in the accounts than they once did and freeing up time for what really moves the needle when managing paid media.

In my opinion, this is a good thing, but there is definitely a balance that needs to be struck, and what this balance looks like is up to you and your teams.

Featured Image: Roman Samborskyi/Shutterstock

The Download: AI-powered warfare, and how embryo care is changing

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The State of AI: How war will be changed forever

—Helen Warrell & James O’Donnell

It is July 2027, and China is on the brink of invading Taiwan. Autonomous drones with AI targeting capabilities are primed to overpower the island’s air defenses as a series of crippling AI-generated cyberattacks cut off energy supplies and key communications. In the meantime, a vast disinformation campaign enacted by an AI-powered pro-Chinese meme farm spreads across global social media, deadening the outcry at Beijing’s act of aggression.

Scenarios such as this have brought dystopian horror to the debate about the use of AI in warfare. Military commanders hope for a digitally enhanced force that is faster and more accurate than human-directed combat. 

But there are fears that as AI assumes an increasingly central role, these same commanders will lose control of a conflict that escalates too quickly and lacks ethical or legal oversight. Read the full story.

This is the third edition of The State of AI, a subscriber-only collaboration between the Financial Times and MIT Technology Review examining the ways in which AI is reshaping global power.

Every Monday, writers from both publications will debate one aspect of the generative AI revolution reshaping global power. While subscribers to The Algorithm, our weekly AI newsletter, get access to an extended excerpt, subscribers to the MIT Technology Review are able to read the whole thing. Sign up here to receive future editions every Monday.

Job titles of the future: AI embryologist

Embryologists are the scientists behind the scenes of in vitro fertilization who oversee the development and selection of embryos, prepare them for transfer, and maintain the lab environment. They’ve been a critical part of IVF for decades, but their job has gotten a whole lot busier in recent years as demand for the fertility treatment skyrockets and clinics struggle to keep up.

Klaus Wiemer, a veteran embryologist and IVF lab director, believes artificial intelligence might help by predicting embryo health in real time and unlocking new avenues for productivity in the lab. Read the full story.

—Amanda Smith

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Big Tech’s job cuts are a warning sign
They’re a canary in the coal mine for other industries. (WP $)
+ Americans appear to feel increasingly unsettled by AI. (WSJ $)
+ Global fund managers worry companies are overinvesting in the technology. (FT $)

2 Iran is attempting to stimulate rain to end its deadly drought
But critics warn that cloud seeding is a challenging process. (New Scientist $)
+ Parts of western Iran are now experiencing flooding. (Reuters)
+ Why it’s so hard to bust the weather control conspiracy theory. (MIT Technology Review)

3 Air taxi startups may produce new aircraft for war zones
The US Army has announced its intentions to acquire most of its weapons from startups, not major contractors. (The Information $)
+ US firm Joby Aviation is launching flying taxis in Dubai. (NBC News)
+ This giant microwave may change the future of war. (MIT Technology Review)

4 Weight-loss drug maker Eli Lilly is likely to cross a trillion-dollar valuation
As it prepares to launch a pill alternative to its injections. (WSJ $)
+ Arch rival Novo Nordisk A/S is undercutting the company to compete. (Bloomberg $)
+ We’re learning more about what weight-loss drugs do to the body. (MIT Technology Review)

5 What’s going on with the US TikTok ban?
Even the lawmakers in charge don’t seem to know. (The Verge)

6 It’s getting harder to grow cocoa
Mass tree felling and lower rainfall in the Congo Basin are to blame. (FT $)
+ Industrial agriculture activists are everywhere at COP30. (The Guardian)
+ Africa fights rising hunger by looking to foods of the past. (MIT Technology Review)

7 Russia is cracking down on its critical military bloggers
Armchair critics are facing jail time if they refuse to apologize. (Economist $)

8 Why the auto industry is so obsessed with humanoid robots
It’s not just Tesla—plenty of others want to get in on the act. (The Atlantic $)
+ China’s EV giants are betting big on humanoid robots. (MIT Technology Review)

9 Indian startups are challenging ChatGPT’s AI dominance
They support a far wider range of languages than the large AI firms’ models. (Rest of World)
+ OpenAI is huge in India. Its models are steeped in caste bias. (MIT Technology Review)

10 These tiny sensors track butterflies on their journey to Mexico 🦋
Scientists hope it’ll shed some light on their mysterious life cycles. (NYT $)

Quote of the day

“I think no company is going to be immune, including us.” 

—Sundar Pichai, CEO of Google, warns the BBC about the precarious nature of the AI bubble.

One more thing

How a 1980s toy robot arm inspired modern robotics

—Jon Keegan

As a child of an electronic engineer, I spent a lot of time in our local Radio Shack as a kid. While my dad was locating capacitors and resistors, I was in the toy section. It was there, in 1984, that I discovered the best toy of my childhood: the Armatron robotic arm.

Described as a “robot-like arm to aid young masterminds in scientific and laboratory experiments,” it was a legit robotic arm. And the bold look and function of Armatron made quite an impression on many young kids who would one day have a career in robotics. Read the full story.

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The US Library of Congress has acquired some handwritten drafts of iconic songs from The Wizard of Oz.
+ This interesting dashboard tracks the top 500 musical artists in the world right now—some of the listings may surprise you (or just make you feel really old).
+ Cult author Chris Kraus shares what’s floating her boat right now.
+ The first images of the forthcoming Legend of Zelda film are here!

Google’s new Gemini 3 “vibe-codes” responses and comes with its own agent

In brief:

  • Generative interfaces: Gemini 3 ditches plain-text defaults, instead choosing optimal formats autonomously—spinning up website-like interfaces, sketching diagrams, or generating animations based on what it deems most effective for each prompt.
  • Gemini Agent: An experimental feature now handles complex tasks across Google Calendar, Gmail, and Reminders, breaking work into steps and pausing for user approval.
  • Integrated with other Google products: Gemini 3 Pro now powers enhanced Search summaries, generates Wirecutter-style shopping guides from 50 billion product listings, and enables better vibe-coding through Google Antigravity.

Google today unveiled Gemini 3, a major upgrade to its flagship multimodal model. The firm says the new model is better at reasoning, has more fluid multimodal capabilities (the ability to work across voice, text or images), and will work like an agent. 

The previous model, Gemini 2.5, supports multimodal input. Users can feed it images, handwriting, or voice. But it usually requires explicit instructions about the format the user wants back, and it defaults to plain text regardless. 

But Gemini 3 introduces what Google calls “generative interfaces,” which allow the model to make its own choices about what kind of output fits the prompt best, assembling visual layouts and dynamic views on its own instead of returning a block of text. 

Ask for travel recommendations and it may spin up a website-like interface inside the app, complete with modules, images, and follow-up prompts such as “How many days are you traveling?” or “What kinds of activities do you enjoy?” It also presents clickable options based on what you might want next.

When asked to explain a concept, Gemini 3 may sketch a diagram or generate a simple animation on its own if it believes a visual is more effective. 

“Visual layout generates an immersive, magazine-style view complete with photos and modules,” says Josh Woodward, VP of Google Labs, Gemini, and AI Studio. “These elements don’t just look good but invite your input to further tailor the results.” 

With Gemini 3, Google is also introducing Gemini Agent, an experimental feature designed to handle multi-step tasks directly inside the app. The agent can connect to services such as Google Calendar, Gmail, and Reminders. Once granted access, it can execute tasks like organizing an inbox or managing schedules. 

Similar to other agents, it breaks tasks into discrete steps, displays its progress in real time, and pauses for approval from the user before continuing. Google describes the feature as a step toward “a true generalist agent.” It will be available on the web for Google AI Ultra subscribers in the US starting November 18.

The overall approach can seem a lot like “vibe coding,” where users describe an end goal in plain language and let the model assemble the interface or code needed to get there.

The update also ties Gemini more deeply into Google’s existing products. In Search, a limited group of Google AI Pro and Ultra subscribers can now switch to Gemini 3 Pro, the reasoning variation of the new model, to receive deeper, more thorough AI-generated summaries that rely on the model’s reasoning rather than the existing AI Mode.

For shopping, Gemini will now pull from Google’s Shopping Graph—which the company says contains more than 50 billion product listings—to generate its own recommendation guides. Users just need to ask a shopping-related question or search a shopping-related phrase, and the model assembles an interactive, Wirecutter-style product recommendation piece, complete with prices and product details, without redirecting to an external site.

For developers, Google is also pushing single-prompt software generation further. The company introduced Google Antigravity, a development platform that acts as an all-in-one space where code, tools, and workflows can be created and managed from a single prompt.

Derek Nee, CEO of Flowith, an agentic AI application, told MIT Technology Review that Gemini 3 Pro addresses several gaps in earlier models. Improvements include stronger visual understanding, better code generation, and better performance on long tasks—features he sees as essential for developers of AI apps and agents. 

“Given its speed and cost advantages, we’re integrating the new model into our product,” he says. “We’re optimistic about its potential, but we need deeper testing to understand how far it can go.” 

Realizing value with AI inference at scale and in production

Training an AI model to predict equipment failures is an engineering achievement. But it’s not until prediction meets action—the moment that model successfully flags a malfunctioning machine—that true business transformation occurs. One technical milestone lives in a proof-of-concept deck; the other meaningfully contributes to the bottom line.

Craig Partridge, senior director worldwide of Digital Next Advisory at HPE, believes “the true value of AI lies in inference.” Inference is where AI earns its keep: it’s the operational layer that puts all that training to use in real-world workflows. “The phrase we use for this is ‘trusted AI inferencing at scale and in production,’” he says. “That’s where we think the biggest return on AI investments will come from.”

Getting to that point is difficult. Christian Reichenbach, worldwide digital advisor at HPE, points to findings from the company’s recent survey of 1,775 IT leaders: While nearly a quarter (22%) of organizations have now operationalized AI—up from 15% the previous year—the majority remain stuck in experimentation.

Reaching the next stage requires a three-part approach: establishing trust as an operating principle, ensuring data-centric execution, and cultivating IT leadership capable of scaling AI successfully.

Trust as a prerequisite for scalable, high-stakes AI

Trusted inference means users can actually rely on the answers they’re getting from AI systems. This is important for applications like generating marketing copy and deploying customer service chatbots, but it’s absolutely critical for higher-stakes scenarios—say, a robot assisting during surgeries or an autonomous vehicle navigating crowded streets.

Whatever the use case, establishing trust will require doubling down on data quality; first and foremost, inferencing outcomes must be built on reliable foundations. This reality informs one of Partridge’s go-to mantras: “Bad data in equals bad inferencing out.”

Reichenbach cites a real-world example of what happens when data quality falls short—the rise of unreliable AI-generated content, including hallucinations, that clogs workflows and forces employees to spend significant time fact-checking. “When things go wrong, trust goes down, productivity gains are not reached, and the outcome we’re looking for is not achieved,” he says.

On the other hand, when trust is properly engineered into inference systems, efficiency and productivity gains can increase. Take a network operations team tasked with troubleshooting configurations. With a trusted inferencing engine, that unit gains a reliable copilot that can deliver faster, more accurate, custom-tailored recommendations—”a 24/7 member of the team they didn’t have before,” says Partridge.

The shift to data-centric thinking and rise of the AI factory

In the first AI wave, companies rushed to hire data scientists and many viewed sophisticated, trillion-parameter models as the primary goal. But today, as organizations move to turn early pilots into real, measurable outcomes, the focus has shifted toward data engineering and architecture.

“Over the past five years, what’s become more meaningful is breaking down data silos, accessing data streams, and quickly unlocking value,” says Reichenbach. It’s an evolution happening alongside the rise of the AI factory—the always-on production line where data moves through pipelines and feedback loops to generate continuous intelligence.

This shift reflects an evolution from model-centric to data-centric thinking, and with it comes a new set of strategic considerations. “It comes down to two things: How much of the intelligence–the model itself–is truly yours? And how much of the input–the data–is uniquely yours, from your customers, operations, or market?” says Reichenbach.

These two central questions inform everything from platform direction and operating models to engineering roles and trust and security considerations. To help clients map their answers—and translate them into actionable strategies—Partridge breaks down HPE’s four-quadrant AI factory implication matrix (see figure):

Source: HPE, 2025

  • Run: Accessing an external, pretrained model via an interface or API; organizations don’t own the model or the data. Implementation requires strong security and governance. It also requires establishing a center of excellence that makes and communicates decisions about AI usage.
  • RAG (retrieval augmented generation): Using external, pre-trained models combined with a company’s proprietary data to create unique insights. Implementation focuses on connecting data streams to inferencing capabilities that provide rapid, integrated access to full-stack AI platforms.
  • Riches: Training custom models on data that resides in the enterprise for unique differentiation opportunities and insights. Implementation requires scalable, energy-efficient environments, and often high-performance systems.
  • Regulate: Leveraging custom models trained on external data, requiring the same scalable setup as Riches, but with added focus on legal and regulatory compliance for handling sensitive, non-owned data with extreme caution.
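The quadrants above differ mainly in who owns the model and who owns the data. As a concrete illustration of the RAG quadrant, here is a minimal sketch of the pattern: an external, pretrained model is combined with proprietary documents. The keyword-overlap retriever, the sample documents, and the `build_prompt` helper are all illustrative assumptions; a production system would use embeddings, a vector store, and a real model API in place of the final prompt string.

```python
# Toy sketch of retrieval-augmented generation (RAG): proprietary documents
# are retrieved and prepended to the user's question before it is sent to an
# external pretrained model. The retriever below scores documents by simple
# word overlap; real systems use embeddings and a vector database.

def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query; return the top k."""
    q = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    """Augment the question with retrieved company context."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical proprietary data an external model was never trained on.
docs = [
    "Refund requests are processed within 14 days of purchase.",
    "Our warehouse ships orders Monday through Friday.",
    "Support is available by email around the clock.",
]

prompt = build_prompt("how long do refund requests take", docs)
print(prompt)  # this string would be sent to the external model
```

The design point is the one the quadrant captures: the model stays external, while the differentiation comes entirely from the data streams wired into the prompt.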

Importantly, these quadrants are not mutually exclusive. Partridge notes that most organizations—including HPE itself—operate across many of the quadrants. “We build our own models to help understand how networks operate,” he says. “We then deploy that intelligence into our products, so that our end customer gets the chance to deliver in what we call the ‘Run’ quadrant. So for them, it’s not their data; it’s not their model. They’re just adding that capability inside their organization.”

IT’s moment to scale—and lead

The second part of Partridge’s catchphrase about inferencing—“at scale”—speaks to a primary tension in enterprise AI: what works for a handful of use cases often breaks when applied across an entire organization.

“There’s value in experimentation and kicking ideas around,” he says. “But if you want to really see the benefits of AI, it needs to be something that everybody can engage in and that solves for many different use cases.”

In Partridge’s view, the challenge of turning boutique pilots into organization-wide systems is uniquely suited to the IT function’s core competencies—and it’s a leadership opportunity the function can’t afford to sit out. “IT takes things that are small-scale and implements the discipline required to run them at scale,” he says. “So, IT organizations really need to lean into this debate.”

For IT teams content to linger on the sidelines, history offers a cautionary tale from the last major infrastructure shift: enterprise migration to the cloud. Many IT departments sat out decision-making during the early cloud adoption wave a decade ago, while business units independently deployed cloud services. This led to fragmented systems, redundant spending, and security gaps that took years to untangle.

The same dynamic threatens to repeat with AI, as different teams experiment with tools and models outside IT’s purview. This phenomenon—sometimes called shadow AI—describes environments where pilots proliferate without oversight or governance. Partridge believes that most organizations are already operating in the “Run” quadrant in some capacity, as employees will use AI tools whether or not they’re officially authorized to.

Rather than shut down experimentation, IT’s mandate is now to bring structure to it. Enterprises must architect a data platform strategy that brings enterprise data together with guardrails, a governance framework, and the accessibility needed to feed AI. It’s also critical to keep standardizing infrastructure (such as private cloud AI platforms), protecting data integrity, and safeguarding brand trust, all while enabling the speed and flexibility that AI applications demand. These are the requirements for reaching the final milestone: AI that’s truly in production.

For teams on the path to that goal, Reichenbach distills what success requires. “It comes down to knowing where you play: When to Run external models smarter, when to apply RAG to make them more informed, where to invest to unlock Riches from your own data and models, and when to Regulate what you don’t control,” says Reichenbach. “The winners will be those who bring clarity to all quadrants and align technology ambition with governance and value creation.”

For more, register to watch MIT Technology Review’s EmTech AI Salon, featuring HPE.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Networking for AI: Building the foundation for real-time intelligence

The Ryder Cup is an almost-century-old tournament pitting Europe against the United States in an elite showcase of golf skill and strategy. At the 2025 event, nearly a quarter of a million spectators gathered to watch three days of fierce competition on the fairways.

From a technology and logistics perspective, pulling off an event of this scale is no easy feat. The Ryder Cup’s infrastructure must accommodate the tens of thousands of network users who flood the venue (this year, at Bethpage Black in Farmingdale, New York) every day.

To manage this IT complexity, Ryder Cup engaged technology partner HPE to create a central hub for its operations. The solution centered around a platform where tournament staff could access data visualization supporting operational decision-making. This dashboard, which leveraged a high-performance network and private-cloud environment, aggregated and distilled insights from diverse real-time data feeds.

It was a glimpse into what AI-ready networking looks like at scale—a real-world stress test with implications for everything from event management to enterprise operations. While models and data readiness get the lion’s share of boardroom attention and media hype, networking is a critical third leg of successful AI implementation, explains Jon Green, CTO of HPE Networking. “Disconnected AI doesn’t get you very much; you need a way to get data into it and out of it for both training and inference,” he says.

As businesses move toward distributed, real-time AI applications, tomorrow’s networks will need to parse even more massive volumes of information at ever more lightning-fast speeds. What played out on the greens at Bethpage Black represents a lesson being learned across industries: Inference-ready networks are a make-or-break factor for turning AI’s promise into real-world performance.

Making a network AI inference-ready

More than half of organizations are still struggling to operationalize their data pipelines. In a recent HPE cross-industry survey of 1,775 IT leaders, 45% said they could run real-time data pushes and pulls for innovation. That’s a noticeable change from last year’s numbers (just 7% reported having such capabilities in 2024), but there’s still work to be done to connect data collection with real-time decision-making.

The network may hold the key to further narrowing that gap. Part of the solution will likely come down to infrastructure design. While traditional enterprise networks are engineered to handle the predictable flow of business applications—email, browsers, file sharing, etc.—they’re not designed to field the dynamic, high-volume data movement required by AI workloads. Inferencing in particular depends on shuttling vast datasets between multiple GPUs with supercomputer-like precision.

“There’s an ability to play fast and loose with a standard, off-the-shelf enterprise network,” says Green. “Few will notice if an email platform is half a second slower than it might’ve been. But with AI transaction processing, the entire job is gated by the last calculation taking place. So it becomes really noticeable if you’ve got any loss or congestion.”
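Green’s point, that a synchronous AI job is gated by its slowest participant, can be made concrete with a toy simulation. The worker counts and latency figures below are invented for illustration, not measurements: each step waits for the maximum of N per-worker latencies, so average step time creeps toward the worst case as N grows.

```python
import random

# Toy model: a synchronous training/inference step completes only when the
# SLOWEST of n parallel workers finishes, so tail latency gates the whole job.
# Latencies are invented for illustration (10 ms base + up to 5 ms jitter).
random.seed(0)

def step_time_ms(n_workers, base_ms=10.0, jitter_ms=5.0):
    """One synchronous step = the maximum latency across all workers."""
    return max(base_ms + random.random() * jitter_ms for _ in range(n_workers))

def avg_step_ms(n_workers, trials=10_000):
    """Average step time over many simulated steps."""
    return sum(step_time_ms(n_workers) for _ in range(trials)) / trials

avg_8 = avg_step_ms(8)
avg_512 = avg_step_ms(512)
print(f"  8 workers: {avg_8:.2f} ms per step")
print(f"512 workers: {avg_512:.2f} ms per step")
```

With uniform jitter, eight workers average noticeably under the 15 ms worst case, while 512 workers sit almost exactly at it, which is why congestion or loss that would go unnoticed on an email network becomes glaring on an AI fabric.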

Networks built for AI, therefore, must operate with a different set of performance characteristics, including ultra-low latency, lossless throughput, specialized equipment, and adaptability at scale. One of these differences is AI’s distributed nature, which affects the seamless flow of data.

The Ryder Cup was a vivid demonstration of this new class of networking in action. During the event, a Connected Intelligence Center was put in place to ingest data from ticket scans, weather reports, GPS-tracked golf carts, concession and merchandise sales, spectator and consumer queues, and network performance. Additionally, 67 AI-enabled cameras were positioned throughout the course. Inputs were analyzed through an operational intelligence dashboard and provided staff with an instantaneous view of activity across the grounds.

“The tournament is really complex from a networking perspective, because you have many big open areas that aren’t uniformly packed with people,” explains Green. “People tend to follow the action. So in certain areas, it’s really dense with lots of people and devices, while other areas are completely empty.”

To handle that variability, engineers built out a two-tiered architecture. Across the sprawling venue, more than 650 WiFi 6E access points, 170 network switches, and 25 user experience sensors worked together to maintain continuous connectivity and feed a private cloud AI cluster for live analytics. The front-end layer connected cameras, sensors, and access points to capture live video and movement data, while a back-end layer—located within a temporary on-site data center—linked GPUs and servers in a high-speed, low-latency configuration that effectively served as the system’s brain. Together, the setup enabled both rapid on-the-ground responses and data collection that could inform future operational planning. “AI models also were available to the team which could process video of the shots taken and help determine, from the footage, which ones were the most interesting,” says Green.

Physical AI and the return of on-prem intelligence

If time is of the essence for event management, it’s even more critical in contexts where safety is on the line—for instance a self-driving car making a split-second decision to accelerate or brake.

In planning for the rise of physical AI, where applications move off screens and onto factory floors and city streets, a growing number of enterprises are rethinking their architectures. Instead of sending the data to centralized clouds for inference, some are deploying edge-based AI clusters that process information closer to where it is generated. Data-intensive training may still occur in the cloud, but inferencing happens on-site.

This hybrid approach is fueling a wave of operational repatriation, as workloads once relegated to the cloud return to on-premises infrastructure for enhanced speed, security, sovereignty, and cost reasons. “We’ve had an out-migration of IT into the cloud in recent years, but physical AI is one of the use cases that we believe will bring a lot of that back on-prem,” predicts Green, giving the example of an AI-infused factory floor, where a round-trip of sensor data to the cloud would be too slow to safely control automated machinery. “By the time processing happens in the cloud, the machine has already moved,” he explains.

There’s data to back up Green’s projection: research from Enterprise Research Group shows that 84% of respondents are reevaluating application deployment strategies due to the growth of AI. Market forecasts also reflect this shift. According to IDC, the AI market for infrastructure is expected to reach $758 billion by 2029.

AI for networking and the future of self-driving infrastructure

The relationship between networking and AI is circular: Modern networks make AI at scale possible, but AI is also helping make networks smarter and more capable.

“Networks are some of the most data-rich systems in any organization,” says Green. “That makes them a perfect use case for AI. We can analyze millions of configuration states across thousands of customer environments and learn what actually improves performance or stability.”

At HPE for example, which has one of the largest network telemetry repositories in the world, AI models analyze anonymized data collected from billions of connected devices to identify trends and refine behavior over time. The platform processes more than a trillion telemetry points each day, which means it can continuously learn from real-world conditions.

The concept broadly known as AIOps (or AI-driven IT operations) is changing how enterprise networks are managed across industries. Today, AI surfaces insights as recommendations that administrators can choose to apply with a single click. Tomorrow, those same systems might automatically test and deploy low-risk changes themselves.
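The one-click recommendation flow can be sketched in a few lines. Everything here is a hypothetical illustration rather than any vendor’s product: a baseline is computed from historical telemetry, and a reading far outside it surfaces a suggested action that an administrator can accept or ignore.

```python
import statistics

# Hypothetical AIOps-style check: flag a telemetry reading that sits far from
# its historical baseline and attach a suggested remediation. The metric,
# threshold, and suggested action are all invented for illustration.

def recommend(history, latest, threshold=3.0):
    """Return a recommendation string if `latest` is more than `threshold`
    standard deviations from the historical mean, else None."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev and abs(latest - mean) / stdev > threshold:
        return (f"Anomaly: reading {latest} vs baseline {mean:.1f}; "
                f"suggest resetting the affected port")
    return None  # in-range readings (or a flat history) produce no action

errors_per_min = [2, 3, 2, 4, 3, 2, 3, 3]  # invented history for one port
print(recommend(errors_per_min, 3))   # normal reading: no recommendation
print(recommend(errors_per_min, 40))  # spike: a recommendation is surfaced
```

In the "today" mode the article describes, the returned string would be shown for one-click approval; the "tomorrow" mode would apply low-risk actions automatically and report afterwards.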

That long-term vision, Green notes, is referred to as a “self-driving network”—one that handles the repetitive, error-prone tasks that have historically plagued IT teams. “AI isn’t coming for the network engineer’s job, but it will eliminate the tedious stuff that slows them down,” he says. “You’ll be able to say, ‘Please go configure 130 switches to solve this issue,’ and the system will handle it. When a port gets stuck or someone plugs a connector in the wrong direction, AI can detect it—and in many cases, fix it automatically.”

Digital initiatives now depend on how effectively information moves. Whether coordinating a live event or streamlining a supply chain, the performance of the network increasingly defines the performance of the business. Building that foundation today will separate the organizations that merely pilot AI from those that scale it.

For more, register to watch MIT Technology Review’s EmTech AI Salon, featuring HPE.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

New Books: Wikipedia, Ring, Vibe Code, More

Innovation and leadership are often synonymous. What follows are 10 new titles from entrepreneurs, practitioners, and academics on innovation, leadership, and lessons learned.

Vibe Coding: Building Production-Grade Software with GenAI, Chat, Agents, and Beyond


by Gene Kim, Steve Yegge

“Vibe coding” is the practice of describing a software tool to a generative AI platform, which then writes the code. The authors, veterans of leading tech companies including Tripwire, Google, and Amazon, cut through controversy to offer a groundbreaking look at “the good, the bad, and the ugly” of this transformational programming practice and how to unlock its potential.

Ding Dong: How Ring Went from Shark Tank Reject to Everyone’s Front Door


by Jamie Siminoff and Andrew Postman

The honest, humorous story of how Siminoff, the founder of Ring home security, took his product from a humiliating Shark Tank rejection to a billion-dollar business juggernaut with celebrity investors.

Leadership Unblocked: Break through the Beliefs That Limit Your Potential


by Muriel M. Wilkins

Wilkins, a corporate coach, consultant, author, and podcaster, shares her experience and research to show readers how to overcome the “hidden blockers” — unconscious, limiting beliefs — that all too often get in the way of effective leadership.

The Seven Rules of Trust: A Blueprint for Building Things That Last


by Jimmy Wales with Dan Gardner

Wikipedia has grown from an unorthodox experiment into an indispensable global encyclopedia. Founder Jimmy Wales shares the lessons he learned about building and maintaining trust, accountability, and creativity in an era when public confidence in almost everything else has plummeted.

Natural-Born Entrepreneurs: Breaking into Business Ownership


by Lisa Piercey

Debuting at number 1 in Amazon’s “Starting a Business” category, “Natural-Born Entrepreneurs” offers a roadmap for transitioning from employee to employer by acquiring businesses — addressing deal structures, operations, and governance. Piercey is a physician, executive, investor, and former Tennessee state health commissioner.

The Age of Extraction: How Tech Platforms Conquered the Economy and Threaten Our Future Prosperity


by Tim Wu

Wu is a bestselling author, professor of law, and former White House advisor on tech policy. In this new book, he explores the power of tech platforms to shape the digital economy. Reviewers call it “astute and timely,” “a must-read,” and “an urgent wake-up call.”

Think Bigger, Lead Better: Eight to Great Principles for Organizational Success


by Rick Tollakson

Tollakson presents the “eight to great” principles distilled from growing his business tenfold across decades. The book asks readers, “Are you ready to think bigger?”

The Winner’s Curse: Behavioral Economics Anomalies, Then and Now


by Richard H. Thaler and Alex Imas

Thaler, a Nobel laureate, and Imas, an up-and-coming economist, join forces to revisit concepts that challenged the idea of rational decision-making and gave rise to the field of behavioral economics. They show that these behavioral concepts show up in everything from professional golf to retirement planning.

How They Get You: Sneaky Everyday Economics and Smart Ways to Hold on to Your Money


by Chris Kohler

This entertaining guide to making better money-management choices comes from a top Australia-based financial journalist. It covers how to outsmart loyalty programs, gift cards, sneaky subscriptions, and late fees — all designed to get you spending more without realizing it.

Seven Tenths of a Second: Life, Leadership and Formula 1


by Zak Brown

Brown went from professional race car driver to a global leader in motorsport marketing to CEO of McLaren Racing. His book gives readers behind-the-scenes insights into a sport and business that demand continuous innovation.