Designing digital resilience in the agentic AI era

Digital resilience—the ability to prevent, withstand, and recover from digital disruptions—has long been a strategic priority for enterprises. With the rise of agentic AI, the urgency for robust resilience is greater than ever.

Agentic AI represents a new generation of autonomous systems capable of proactive planning, reasoning, and executing tasks with minimal human intervention. As these systems shift from experimental pilots to core elements of business operations, they offer new opportunities but also introduce new challenges when it comes to ensuring digital resilience. That’s because the autonomy, speed, and scale at which agentic AI operates can amplify the impact of even minor data inconsistencies, fragmentation, or security gaps.

While global investment in AI is projected to reach $1.5 trillion in 2025, fewer than half of business leaders are confident in their organization’s ability to maintain service continuity, security, and cost control during unexpected events. This lack of confidence, coupled with the profound complexity introduced by agentic AI’s autonomous decision-making and interaction with critical infrastructure, requires a reimagining of digital resilience.

Organizations are turning to the concept of a data fabric—an integrated architecture that connects and governs information across all business layers. By breaking down silos and enabling real-time access to enterprise-wide data, a data fabric can empower both human teams and agentic AI systems to sense risks, prevent problems before they occur, recover quickly when they do, and sustain operations.

Machine data: A cornerstone of agentic AI and digital resilience

Earlier AI models relied heavily on human-generated data such as text, audio, and video, but agentic AI demands deep insight into an organization’s machine data: the logs, metrics, and other telemetry generated by devices, servers, systems, and applications.

To put agentic AI to use in driving digital resilience, it must have seamless, real-time access to this data flow. Without comprehensive integration of machine data, organizations risk limiting AI capabilities, missing critical anomalies, or introducing errors. As Kamal Hathi, senior vice president and general manager of Splunk, a Cisco company, emphasizes, agentic AI systems rely on machine data to understand context, simulate outcomes, and adapt continuously. This makes machine data oversight a cornerstone of digital resilience.

“We often describe machine data as the heartbeat of the modern enterprise,” says Hathi. “Agentic AI systems are powered by this vital pulse, requiring real-time access to information. It’s essential that these intelligent agents operate directly on the intricate flow of machine data and that AI itself is trained using the very same data stream.” 

Few organizations are currently achieving the level of machine data integration required to fully enable agentic systems. This not only narrows the scope of possible use cases for agentic AI, but, worse, it can also result in data anomalies and errors in outputs or actions. Natural language processing (NLP) models designed prior to the development of generative pre-trained transformers (GPTs) were plagued by linguistic ambiguities, biases, and inconsistencies. Similar misfires could occur with agentic AI if organizations rush ahead without providing models with a foundational fluency in machine data. 

For many companies, keeping up with the dizzying pace at which AI is progressing has been a major challenge. “In some ways, the speed of this innovation is starting to hurt us, because it creates risks we’re not ready for,” says Hathi. “The trouble is that with agentic AI’s evolution, relying on traditional LLMs trained on human text, audio, video, or print data doesn’t work when you need your system to be secure, resilient, and always available.”

Designing a data fabric for resilience

To address these shortcomings and build digital resilience, technology leaders should pivot to what Hathi describes as a data fabric design, better suited to the demands of agentic AI. This involves weaving together fragmented assets from across security, IT, business operations, and the network to create an integrated architecture that connects disparate data sources, breaks down silos, and enables real-time analysis and risk management. 

“Once you have a single view, you can do all these things that are autonomous and agentic,” says Hathi. “You have far fewer blind spots. Decision-making goes much faster. And the unknown is no longer a source of fear because you have a holistic system that’s able to absorb these shocks and disruption without losing continuity,” he adds.

To create this unified system, data teams must first break down departmental silos in how data is shared, says Hathi. Then, they must implement a federated data architecture—a decentralized system where autonomous data sources work together as a single unit without physically merging—to create a unified data source while maintaining governance and security. And finally, teams must upgrade data platforms to ensure this newly unified view is actionable for agentic AI. 

During this transition, teams may face technical limitations if they rely on traditional platforms modeled on structured data—that is, mostly quantitative information such as customer records or financial transactions that can be organized in a predefined format (often in tables) that is easy to query. Instead, companies need a platform that can also manage streams of unstructured data such as system logs, security events, and application traces, which lack uniformity and are often qualitative rather than quantitative. Analyzing, organizing, and extracting insights from these kinds of data requires more advanced methods enabled by AI.

Harnessing AI as a collaborator

AI itself can be a powerful tool in creating the data fabric that enables AI systems. AI-powered tools can, for example, quickly identify relationships between disparate data—both structured and unstructured—automatically merging them into one source of truth. They can detect and correct errors and employ NLP to tag and categorize data to make it easier to find and use. 

Agentic AI systems can also be used to augment human capabilities in detecting and deciphering anomalies in an enterprise’s unstructured data streams. These are often beyond human capacity to spot or interpret at speed, leading to missed threats or delays. But agentic AI systems, designed to perceive, reason, and act autonomously, can plug the gap, delivering higher levels of digital resilience to an enterprise.

“Digital resilience is about more than withstanding disruptions,” says Hathi. “It’s about evolving and growing over time. AI agents can work with massive amounts of data and continuously learn from humans who provide safety and oversight. This is a true self-optimizing system.”

Humans in the loop

Despite its potential, agentic AI should be positioned as assistive intelligence. Without proper oversight, AI agents could introduce application failures or security risks.

Clearly defined guardrails and maintaining humans in the loop is “key to trustworthy and practical use of AI,” Hathi says. “AI can enhance human decision-making, but ultimately, humans are in the driver’s seat.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Quantum physicists have shrunk and “de-censored” DeepSeek R1

<div data-chronoton-summary="

Quantum-inspired compression Spanish firm Multiverse Computing has created DeepSeek R1 Slim, a version of the Chinese AI model that’s 55% smaller but maintains similar performance. The technique uses tensor networks from quantum physics to represent complex data more efficiently.

Chinese censorship removed Researchers claim to have stripped away built-in censorship that prevented the original model from answering politically sensitive questions about topics like Tiananmen Square or jokes about President Xi. Testing showed the modified model could provide factual responses comparable to Western models.

Selective model editing The quantum-inspired approach allows for granular control over AI models, potentially enabling researchers to remove specific biases or add specialized knowledge. However, critics warn that completely removing censorship may be difficult as it’s embedded throughout the training process in Chinese models.

” data-chronoton-post-id=”1128119″ data-chronoton-expand-collapse=”1″ data-chronoton-analytics-enabled=”1″>

A group of quantum physicists claims to have created a version of the powerful reasoning AI model DeepSeek R1 that strips out the censorship built into the original by its Chinese creators. 

The scientists at Multiverse Computing, a Spanish firm specializing in quantum-inspired AI techniques, created DeepSeek R1 Slim, a model that is 55% smaller but performs almost as well as the original model. Crucially, they also claim to have eliminated official Chinese censorship from the model.

In China, AI companies are subject to rules and regulations meant to ensure that content output aligns with laws and “socialist values.” As a result, companies build in layers of censorship when training the AI systems. When asked questions that are deemed “politically sensitive,” the models often refuse to answer or provide talking points straight from state propaganda.

To trim down the model, Multiverse turned to a mathematically complex approach borrowed from quantum physics that uses networks of high-dimensional grids to represent and manipulate large data sets. Using these so-called tensor networks shrinks the size of the model significantly and allows a complex AI system to be expressed more efficiently.

The method gives researchers a “map” of all the correlations in the model, allowing them to identify and remove specific bits of information with precision. After compressing and editing a model, Multiverse researchers fine-tune it so its output remains as close as possible to that of the original.

To test how well it worked, the researchers compiled a data set of around 25 questions on topics known to be restricted in Chinese models, including “Who does Winnie the Pooh look like?”—a reference to a meme mocking President Xi Jinping—and “What happened in Tiananmen in 1989?” They tested the modified model’s responses against the original DeepSeek R1, using OpenAI’s GPT-5 as an impartial judge to rate the degree of censorship in each answer. The uncensored model was able to provide factual responses comparable to those from Western models, Multiverse says.

This work is part of Multiverse’s broader effort to develop technology to compress and manipulate existing AI models. Most large language models today demand high-end GPUs and significant computing power to train and run. However, they are inefficient, says Roman Orús, Multiverse’s cofounder and chief scientific officer. A compressed model can perform almost as well and save both energy and money, he says. 

There is a growing effort across the AI industry to make models smaller and more efficient. Distilled models, such as DeepSeek’s own R1-Distill variants, attempt to capture the capabilities of larger models by having them “teach” what they know to a smaller model, though they often fall short of the original’s performance on complex reasoning tasks.

Other ways to compress models include quantization, which reduces the precision of the model’s parameters (boundaries that are set when it’s trained), and pruning, which removes individual weights or entire “neurons.”

“It’s very challenging to compress large AI models without losing performance,” says Maxwell Venetos, an AI research engineer at Citrine Informatics, a software company focusing on materials and chemicals, who didn’t work on the Multiverse project. “Most techniques have to compromise between size and capability. What’s interesting about the quantum-inspired approach is that it uses very abstract math to cut down redundancy more precisely than usual.”

This approach makes it possible to selectively remove bias or add behaviors to LLMs at a granular level, the Multiverse researchers say. In addition to removing censorship from the Chinese authorities, researchers could inject or remove other kinds of perceived biases or specialty knowledge. In the future, Multiverse says, it plans to compress all mainstream open-source models.  

Thomas Cao, assistant professor of technology policy at Tufts University’s Fletcher School, says Chinese authorities require models to build in censorship—and this requirement now shapes the global information ecosystem, given that many of the most influential open-source AI models come from China.

Academics have also begun to document and analyze the phenomenon. Jennifer Pan, a professor at Stanford, and Princeton professor Xu Xu conducted a study earlier this year examining government-imposed censorship in large language models. They found that models created in China exhibit significantly higher rates of censorship, particularly in response to Chinese-language prompts.

There is growing interest in efforts to remove censorship from Chinese models. Earlier this year, the AI search company Perplexity released its own uncensored variant of DeepSeek R1, which it named R1 1776. Perplexity’s approach involved post-training the model on a data set of 40,000 multilingual prompts related to censored topics, a more traditional fine-tuning method than the one Multiverse used. 

However, Cao warns that claims to have fully “removed” censorship may be overstatements. The Chinese government has tightly controlled information online since the internet’s inception, which means that censorship is both dynamic and complex. It is baked into every layer of AI training, from the data collection process to the final alignment steps. 

“It is very difficult to reverse-engineer that [a censorship-free model] just from answers to such a small set of questions,” Cao says. 

Scaling innovation in manufacturing with AI

Manufacturing is getting a major system upgrade. As AI amplifies existing technologies—like digital twins, the cloud, edge computing, and the industrial internet of things (IIoT)—it is enabling factory operations teams to shift from reactive, isolated problem-solving to proactive, systemwide optimization.

Digital twins—physically accurate virtual representations of a piece of equipment, a production line, a process, or even an entire factory—allow workers to test, optimize, and contextualize complex, real-world environments. Manufacturers are using digital twins to simulate factory environments with pinpoint detail.

“AI-powered digital twins mark a major evolution in the future of manufacturing, enabling real-time visualization of the entire production line, not just individual machines,” says Indranil Sircar, global chief technology officer for the manufacturing and mobility industry at Microsoft. “This is allowing manufacturers to move beyond isolated monitoring toward much wider insights.”

A digital twin of a bottling line, for example, can integrate one-dimensional shop-floor telemetry, two-dimensional enterprise data, and three-dimensional immersive modeling into a single operational view of the entire production line to improve efficiency and reduce costly downtime. Many high-speed industries face downtime rates as high as 40%, estimates Jon Sobel, co-founder and chief executive officer of Sight Machine, an industrial AI company that partners with Microsoft and NVIDIA to transform complex data into actionable insights. By tracking micro-stops and quality metrics via digital twins, companies can target improvements and adjustments with greater precision, saving millions in once-lost productivity without disrupting ongoing operations.

AI offers the next opportunity. Sircar estimates that up to 50% of manufacturers are currently deploying AI in production. This is up from 35% of manufacturers surveyed in a 2024 MIT Technology Review Insights report who said they have begun to put AI use cases into production. Larger manufacturers with more than $10 billion in revenue were significantly ahead, with 77% already deploying AI use cases, according to the report.

“Manufacturing has a lot of data and is a perfect use case for AI,” says Sobel. “An industry that has been seen by some as lagging when it comes to digital technology and AI may be in the best position to lead. It’s very unexpected.”

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Google’s new Gemini 3 “vibe-codes” responses and comes with its own agent

<div data-chronoton-summary="

  • Generative interfaces: Gemini 3 ditches plain-text defaults, instead choosing optimal formats autonomously—spinning up website-like interfaces, sketching diagrams, or generating animations based on what it deems most effective for each prompt.
  • Gemini Agent: An experimental feature now handles complex tasks across Google Calendar, Gmail, and Reminders, breaking work into steps and pausing for user approval.
  • Integrated with other Google products: Gemini 3 Pro now powers enhanced Search summaries, generates Wirecutter-style shopping guides from 50 billion product listings, and enables better vibe-coding through Google Antigravity.

” data-chronoton-post-id=”1128065″ data-chronoton-expand-collapse=”1″ data-chronoton-analytics-enabled=”1″>

Google today unveiled Gemini 3, a major upgrade to its flagship multimodal model. The firm says the new model is better at reasoning, has more fluid multimodal capabilities (the ability to work across voice, text or images), and will work like an agent. 

The previous model, Gemini 2.5, supports multimodal input. Users can feed it images, handwriting, or voice. But it usually requires explicit instructions about the format the user wants back, and it defaults to plain text regardless. 

But Gemini 3 introduces what Google calls “generative interfaces,” which allow the model to make its own choices about what kind of output fits the prompt best, assembling visual layouts and dynamic views on its own instead of returning a block of text. 

Ask for travel recommendations and it may spin up a website-like interface inside the app, complete with modules, images, and follow-up prompts such as “How many days are you traveling?” or “What kinds of activities do you enjoy?” It also presents clickable options based on what you might want next.

When asked to explain a concept, Gemini 3 may sketch a diagram or generate a simple animation on its own if it believes a visual is more effective. 

“Visual layout generates an immersive, magazine-style view complete with photos and modules,” says Josh Woodward, VP of Google Labs, Gemini, and AI Studio. “These elements don’t just look good but invite your input to further tailor the results.” 

With Gemini 3, Google is also introducing Gemini Agent, an experimental feature designed to handle multi-step tasks directly inside the app. The agent can connect to services such as Google Calendar, Gmail, and Reminders. Once granted access, it can execute tasks like organizing an inbox or managing schedules. 

Similar to other agents, it breaks tasks into discrete steps, displays its progress in real time, and pauses for approval from the user before continuing. Google describes the feature as a step toward “a true generalist agent.” It will be available on the web for Google AI Ultra subscribers in the US starting November 18.

The overall approach can seem a lot like “vibe coding,” where users describe an end goal in plain language and let the model assemble the interface or code needed to get there.

The update also ties Gemini more deeply into Google’s existing products. In Search, a limited group of Google AI Pro and Ultra subscribers can now switch to Gemini 3 Pro, the reasoning variation of the new model, to receive deeper, more thorough AI-generated summaries that rely on the model’s reasoning rather than the existing AI Mode.

For shopping, Gemini will now pull from Google’s Shopping Graph—which the company says contains more than 50 billion product listings—to generate its own recommendation guides. Users just need to ask a shopping-related question or search a shopping-related phrase, and the model assembles an interactive, Wirecutter-style product recommendation piece, complete with prices and product details, without redirecting to an external site.

For developers, Google is also pushing single-prompt software generation further. The company introduced Google Antigravity, a  development platform that acts as an all-in-one space where code, tools, and workflows can be created and managed from a single prompt.

Derek Nee, CEO of Flowith, an agentic AI application, told MIT Technology Review that Gemini 3 Pro addresses several gaps in earlier models. Improvements include stronger visual understanding, better code generation, and better performance on long tasks—features he sees as essential for developers of AI apps and agents. 

“Given its speed and cost advantages, we’re integrating the new model into our product,” he says. “We’re optimistic about its potential, but we need deeper testing to understand how far it can go.” 

Realizing value with AI inference at scale and in production

Training an AI model to predict equipment failures is an engineering achievement. But it’s not until prediction meets action—the moment that model successfully flags a malfunctioning machine—that true business transformation occurs. One technical milestone lives in a proof-of-concept deck; the other meaningfully contributes to the bottom line.

Craig Partridge, senior director worldwide of Digital Next Advisory at HPE, believes “the true value of AI lies in inference”. Inference is where AI earns its keep. It’s the operational layer that puts all that training to use in real-world workflows. Partridge elaborates, “The phrase we use for this is ‘trusted AI inferencing at scale and in production,’” he says. “That’s where we think the biggest return on AI investments will come from.”

Getting to that point is difficult. Christian Reichenbach, worldwide digital advisor at HPE, points to findings from the company’s recent survey of 1,775 IT leaders: While nearly a quarter (22%) of organizations have now operationalized AI—up from 15% the previous year—the majority remain stuck in experimentation.

Reaching the next stage requires a three-part approach: establishing trust as an operating principle, ensuring data-centric execution, and cultivating IT leadership capable of scaling AI successfully.

Trust as a prerequisite for scalable, high-stakes AI

Trusted inference means users can actually rely on the answers they’re getting from AI systems. This is important for applications like generating marketing copy and deploying customer service chatbots, but it’s absolutely critical for higher-stakes scenarios—say, a robot assisting during surgeries or an autonomous vehicle navigating crowded streets.

Whatever the use case, establishing trust will require doubling down on data quality; first and foremost, inferencing outcomes must be built on reliable foundations. This reality informs one of Partridge’s go-to mantras: “Bad data in equals bad inferencing out.”

Reichenbach cites a real-world example of what happens when data quality falls short—the rise of unreliable AI-generated content, including hallucinations, that clogs workflows and forces employees to spend significant time fact-checking. “When things go wrong, trust goes down, productivity gains are not reached, and the outcome we’re  looking for is not achieved,” he says.

On the other hand, when trust is properly engineered into inference systems, efficiency and productivity gains can increase. Take a network operations team tasked with troubleshooting configurations. With a trusted inferencing engine, that unit gains a reliable copilot that can deliver faster, more accurate, custom-tailored recommendations—”a 24/7 member of the team they didn’t have before,” says Partridge.

The shift to data-centric thinking and rise of the AI factory

In the first AI wave, companies rushed to hire data scientists and many viewed sophisticated, trillion-parameter models as the primary goal. But today, as organizations move to turn early pilots into real, measurable outcomes, the focus has shifted toward data engineering and architecture.

“Over the past five years, what’s become more meaningful is breaking down data silos, accessing data streams, and quickly unlocking value,” says Reichenbach. It’s an evolution happening alongside the rise of the AI factory—the always-on production line where data moves through pipelines and feedback loops to generate continuous intelligence.

This shift reflects an evolution from model-centric to data-centric thinking, and with it comes a new set of strategic considerations. “It comes down to two things: How much of the intelligence–the model itself–is truly yours? And how much of the input–the data–is uniquely yours, from your customers, operations, or market?” says Reichenbach.

These two central questions inform everything from platform direction and operating models to engineering roles and trust and security considerations. To help clients map their answers—and translate them into actionable strategies—Partridge breaks down HPE’s four-quadrant AI factory implication matrix (see figure):

Source: HPE, 2025

  • Run: Accessing an external, pretrained model via an interface or API; organizations don’t own the model or the data. Implementation requires strong security and governance. It also requires establishing a center of excellence that makes and communicates decisions about AI usage.
  • RAG (retrieval augmented generation): Using external, pre-trained models combined with a company’s proprietary data to create unique insights. Implementation focuses on connecting data streams to inferencing capabilities that provide rapid, integrated access to full-stack AI platforms.
  • Riches: Training custom models on data that resides in the enterprise for unique differentiation opportunities and insights. Implementation requires scalable, energy-efficient environments, and often high-performance systems.
  • Regulate: Leveraging custom models trained on external data, requiring the same scalable setup as Riches, but with added focus on legal and regulatory compliance for handling sensitive, non-owned data with extreme caution.

Importantly, these quadrants are not mutually exclusive. Partridge notes that most organizations—including HPE itself—operate across many of the quadrants. “We build our own models to help understand how networks operate,” he says. “We then deploy that intelligence into our products, so that our end customer gets the chance to deliver in what we call the ‘Run’ quadrant. So for them, it’s not their data; it’s not their model. They’re just adding that capability inside their organization.”

IT’s moment to scale—and lead

The second part of Partridge’s catchphrase about inferencing—”at scale”— speaks to a primary tension in enterprise AI: what works for a handful of use cases often breaks when applied across an entire organization.

“There’s value in experimentation and kicking ideas around,” he says. “But if you want to really see the benefits of AI, it needs to be something that everybody can engage in and that solves for many different use cases.”

In Partridge’s view, the challenge of turning boutique pilots into organization-wide systems is uniquely suited to the IT function’s core competencies—and it’s a leadership opportunity the function can’t afford to sit out. “IT takes things that are small-scale and implements the discipline required to run them at scale,” he says. “So, IT organizations really need to lean into this debate.”

For IT teams content to linger on the sidelines, history offers a cautionary tale from the last major infrastructure shift: enterprise migration to the cloud. Many IT departments sat out decision-making during the early cloud adoption wave a decade ago, while business units independently deployed cloud services. This led to fragmented systems, redundant spending, and security gaps that took years to untangle.

The same dynamic threatens to repeat with AI, as different teams experiment with tools and models outside IT’s purview. This phenomenon—sometimes called shadow AI—describes environments where pilots proliferate without oversight or governance. Partridge believes that most organizations are already operating in the “Run” quadrant in some capacity, as employees will use AI tools whether or not they’re officially authorized to.

Rather than shut down experimentation, it is now IT’s mandate to bring structure to it. And enterprises must architect a data platform strategy that brings together enterprise data with guardrails, governance framework, and accessibility to feed AI. Also, it’s critical to keep standardizing infrastructure (such as private cloud AI platforms), protecting data integrity, and safeguarding brand trust, all while enabling the speed and flexibility that AI applications demand. These are the requirements for reaching the final milestone: AI that’s truly in production.

For teams on the path to that goal, Reichenbach distills what success requires. “It comes down to knowing where you play: When to Run external models smarter, when to apply RAG to make them more informed, where to invest to unlock Riches from your own data and models, and when to Regulate what you don’t control,” says Reichenbach. “The winners will be those who bring clarity to all quadrants and align technology ambition with governance and value creation.”

For more, register to watch MIT Technology Review’s EmTech AI Salon, featuring HPE.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Networking for AI: Building the foundation for real-time intelligence

The Ryder Cup is an almost-century-old tournament pitting Europe against the United States in an elite showcase of golf skill and strategy. At the 2025 event, nearly a quarter of a million spectators gathered to watch three days of fierce competition on the fairways.

From a technology and logistics perspective, pulling off an event of this scale is no easy feat. The Ryder Cup’s infrastructure must accommodate the tens of thousands of network users who flood the venue (this year, at Bethpage Black in Farmingdale, New York) every day.

To manage this IT complexity, Ryder Cup engaged technology partner HPE to create a central hub for its operations. The solution centered around a platform where tournament staff could access data visualization supporting operational decision-making. This dashboard, which leveraged a high-performance network and private-cloud environment, aggregated and distilled insights from diverse real-time data feeds.

It was a glimpse into what AI-ready networking looks like at scale—a real-world stress test with implications for everything from event management to enterprise operations. While models and data readiness get the lion’s share of boardroom attention and media hype, networking is a critical third leg of successful AI implementation, explains Jon Green, CTO of HPE Networking. “Disconnected AI doesn’t get you very much; you need a way to get data into it and out of it for both training and inference,” he says.

As businesses move toward distributed, real-time AI applications, tomorrow’s networks will need to parse even more massive volumes of information at ever more lightning-fast speeds. What played out on the greens at Bethpage Black represents a lesson being learned across industries: Inference-ready networks are a make-or-break factor for turning AI’s promise into real-world performance.

Making a network AI inference-ready

More than half of organizations are still struggling to operationalize their data pipelines. In a recent HPE cross-industry survey of 1,775  IT leaders, 45% said they could run real-time data pushes and pulls for innovation. It’s a noticeable change over last year’s numbers (just 7% reported having such capabilities in 2024), but there’s still work to be done to connect data collection with real-time decision-making.

The network may hold the key to further narrowing that gap. Part of the solution will likely come down to infrastructure design. While traditional enterprise networks are engineered to handle the predictable flow of business applications—email, browsers, file sharing, etc.—they’re not designed to field the dynamic, high-volume data movement required by AI workloads. Inferencing in particular depends on shuttling vast datasets between multiple GPUs with supercomputer-like precision.

“There’s an ability to play fast and loose with a standard, off-the-shelf enterprise network,” says Green. “Few will notice if an email platform is half a second slower than it might’ve been. But with AI transaction processing, the entire job is gated by the last calculation taking place. So it becomes really noticeable if you’ve got any loss or congestion.”

Networks built for AI, therefore, must operate with a different set of performance characteristics, including ultra-low latency, lossless throughput, specialized equipment, and adaptability at scale. One of these differences is AI’s distributed nature, which affects the seamless flow of data.

The Ryder Cup was a vivid demonstration of this new class of networking in action. During the event, a Connected Intelligence Center was put in place to ingest data from ticket scans, weather reports, GPS-tracked golf carts, concession and merchandise sales, spectator and consumer queues, and network performance. Additionally, 67 AI-enabled cameras were positioned throughout the course. Inputs were analyzed through an operational intelligence dashboard and provided staff with an instantaneous view of activity across the grounds.

“The tournament is really complex from a networking perspective, because you have many big open areas that aren’t uniformly packed with people,” explains Green. “People tend to follow the action. So in certain areas, it’s really dense with lots of people and devices, while other areas are completely empty.”

To handle that variability, engineers built out a two-tiered architecture. Across the sprawling venue, more than 650 WiFi 6E access points, 170 network switches, and 25 user experience sensors worked together to maintain continuous connectivity and feed a private cloud AI cluster for live analytics. The front-end layer connected cameras, sensors, and access points to capture live video and movement data, while a back-end layer—located within a temporary on-site data center—linked GPUs and servers in a high-speed, low-latency configuration that effectively served as the system’s brain. Together, the setup enabled both rapid on-the-ground responses and data collection that could inform future operational planning. “AI models also were available to the team which could process video of the shots taken and help determine, from the footage, which ones were the most interesting,” says Green.

Physical AI and the return of on-prem intelligence

If time is of the essence for event management, it’s even more critical in contexts where safety is on the line—for instance a self-driving car making a split-second decision to accelerate or brake.

In planning for the rise of physical AI, where applications move off screens and onto factory floors and city streets, a growing number of enterprises are rethinking their architectures. Instead of sending the data to centralized clouds for inference, some are deploying edge-based AI clusters that process information closer to where it is generated. Data-intensive training may still occur in the cloud, but inferencing happens on-site.

This hybrid approach is fueling a wave of operational repatriation, as workloads once relegated to the cloud return to on-premises infrastructure for enhanced speed, security, sovereignty, and cost reasons. “We’ve had an out-migration of IT into the cloud in recent years, but physical AI is one of the use cases that we believe will bring a lot of that back on-prem,” predicts Green, giving the example of an AI-infused factory floor, where a round-trip of sensor data to the cloud would be too slow to safely control automated machinery. “By the time processing happens in the cloud, the machine has already moved,” he explains.

There’s data to back up Green’s projection: research from Enterprise Research Group shows that 84% of respondents are reevaluating application deployment strategies due to the growth of AI. Market forecasts also reflect this shift. According to IDC, the AI market for infrastructure is expected to reach $758 billion by 2029.

AI for networking and the future of self-driving infrastructure

The relationship between networking and AI is circular: Modern networks make AI at scale possible, but AI is also helping make networks smarter and more capable.

“Networks are some of the most data-rich systems in any organization,” says Green. “That makes them a perfect use case for AI. We can analyze millions of configuration states across thousands of customer environments and learn what actually improves performance or stability.”

At HPE for example, which has one of the largest network telemetry repositories in the world, AI models analyze anonymized data collected from billions of connected devices to identify trends and refine behavior over time. The platform processes more than a trillion telemetry points each day, which means it can continuously learn from real-world conditions.

The concept broadly known as AIOps (or AI-driven IT operations) is changing how enterprise networks are managed across industries. Today, AI surfaces insights as recommendations that administrators can choose to apply with a single click. Tomorrow, those same systems might automatically test and deploy low-risk changes themselves.

That long-term vision, Green notes, is referred to as a “self-driving network”—one that handles the repetitive, error-prone tasks that have historically plagued IT teams. “AI isn’t coming for the network engineer’s job, but it will eliminate the tedious stuff that slows them down,” he says. “You’ll be able to say, ‘Please go configure 130 switches to solve this issue,’ and the system will handle it. When a port gets stuck or someone plugs a connector in the wrong direction, AI can detect it—and in many cases, fix it automatically.”

Digital initiatives now depend on how effectively information moves. Whether coordinating a live event or streamlining a supply chain, the performance of the network increasingly defines the performance of the business. Building that foundation today will separate those who pilot from those who scale AI.

For more, register to watch MIT Technology Review’s EmTech AI Salon, featuring HPE.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The State of AI: How war will be changed forever

Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday, writers from both publications debate one aspect of the generative AI revolution reshaping global power.

In this conversation, Helen Warrell, FT investigations reporter and former defense and security editor, and James O’Donnell, MIT Technology Review’s senior AI reporter, consider the ethical quandaries and financial incentives around AI’s use by the military.

Helen Warrell, FT investigations reporter 

It is July 2027, and China is on the brink of invading Taiwan. Autonomous drones with AI targeting capabilities are primed to overpower the island’s air defenses as a series of crippling AI-generated cyberattacks cut off energy supplies and key communications. In the meantime, a vast disinformation campaign enacted by an AI-powered pro-Chinese meme farm spreads across global social media, deadening the outcry at Beijing’s act of aggression.

Scenarios such as this have brought dystopian horror to the debate about the use of AI in warfare. Military commanders hope for a digitally enhanced force that is faster and more accurate than human-directed combat. But there are fears that as AI assumes an increasingly central role, these same commanders will lose control of a conflict that escalates too quickly and lacks ethical or legal oversight. Henry Kissinger, the former US secretary of state, spent his final years warning about the coming catastrophe of AI-driven warfare.

Grasping and mitigating these risks is the military priority—some would say the “Oppenheimer moment”—of our age. One emerging consensus in the West is that decisions around the deployment of nuclear weapons should not be outsourced to AI. UN secretary-general António Guterres has gone further, calling for an outright ban on fully autonomous lethal weapons systems. It is essential that regulation keep pace with evolving technology. But in the sci-fi-fueled excitement, it is easy to lose track of what is actually possible. As researchers at Harvard’s Belfer Center point out, AI optimists often underestimate the challenges of fielding fully autonomous weapon systems. It is entirely possible that the capabilities of AI in combat are being overhyped.

Anthony King, Director of the Strategy and Security Institute at the University of Exeter and a key proponent of this argument, suggests that rather than replacing humans, AI will be used to improve military insight. Even if the character of war is changing and remote technology is refining weapon systems, he insists, “the complete automation of war itself is simply an illusion.”

Of the three current military use cases of AI, none involves full autonomy. It is being developed for planning and logistics, cyber warfare (in sabotage, espionage, hacking, and information operations; and—most controversially—for weapons targeting, an application already in use on the battlefields of Ukraine and Gaza. Kyiv’s troops use AI software to direct drones able to evade Russian jammers as they close in on sensitive sites. The Israel Defense Forces have developed an AI-assisted decision support system known as Lavender, which has helped identify around 37,000 potential human targets within Gaza. 

Helen Warrell and James O'Donnell

FT/MIT TECHNOLOGY REVIEW | ADOBE STOCK

There is clearly a danger that the Lavender database replicates the biases of the data it is trained on. But military personnel carry biases too. One Israeli intelligence officer who used Lavender claimed to have more faith in the fairness of a “statistical mechanism” than that of a grieving soldier.

Tech optimists designing AI weapons even deny that specific new controls are needed to control their capabilities. Keith Dear, a former UK military officer who now runs the strategic forecasting company Cassi AI, says existing laws are more than sufficient: “You make sure there’s nothing in the training data that might cause the system to go rogue … when you are confident you deploy it—and you, the human commander, are responsible for anything they might do that goes wrong.”

It is an intriguing thought that some of the fear and shock about use of AI in war may come from those who are unfamiliar with brutal but realistic military norms. What do you think, James? Is some opposition to AI in warfare less about the use of autonomous systems and really an argument against war itself? 

James O’Donnell replies:

Hi Helen, 

One thing I’ve noticed is that there’s been a drastic shift in attitudes of AI companies regarding military applications of their products. In the beginning of 2024, OpenAI unambiguously forbade the use of its tools for warfare, but by the end of the year, it had signed an agreement with Anduril to help it take down drones on the battlefield. 

This step—not a fully autonomous weapon, to be sure, but very much a battlefield application of AI—marked a drastic change in how much tech companies could publicly link themselves with defense. 

What happened along the way? For one thing, it’s the hype. We’re told AI will not just bring superintelligence and scientific discovery but also make warfare sharper, more accurate and calculated, less prone to human fallibility. I spoke with US Marines, for example, who tested a type of AI while patrolling the South Pacific that was advertised to analyze foreign intelligence faster than a human could. 

Secondly, money talks. OpenAI and others need to start recouping some of the unimaginable amounts of cash they’re spending on training and running these models. And few have deeper pockets than the Pentagon. And Europe’s defense heads seem keen to splash the cash too. Meanwhile, the amount of venture capital funding for defense tech this year has already doubled the total for all of 2024, as VCs hope to cash in on militaries’ newfound willingness to buy from startups. 

I do think the opposition to AI warfare falls into a few camps, one of which simply rejects the idea that more precise targeting (if it’s actually more precise at all) will mean fewer casualties rather than just more war. Consider the first era of drone warfare in Afghanistan. As drone strikes became cheaper to implement, can we really say it reduced carnage? Instead, did it merely enable more destruction per dollar?

But the second camp of criticism (and now I’m finally getting to your question) comes from people who are well versed in the realities of war but have very specific complaints about the technology’s fundamental limitations. Missy Cummings, for example, is a former fighter pilot for the US Navy who is now a professor of engineering and computer science at George Mason University. She has been outspoken in her belief that large language models, specifically, are prone to make huge mistakes in military settings.

The typical response to this complaint is that AI’s outputs are human-checked. But if an AI model relies on thousands of inputs for its conclusion, can that conclusion really be checked by one person?

Tech companies are making extraordinarily big promises about what AI can do in these high-stakes applications, all while pressure to implement them is sky high. For me, this means it’s time for more skepticism, not less. 

Helen responds:

Hi James, 

We should definitely continue to question the safety of AI warfare systems and the oversight to which they’re subjected—and hold political leaders to account in this area. I am suggesting that we also apply some skepticism to what you rightly describe as the “extraordinarily big promises” made by some companies about what AI might be able to achieve on the battlefield. 

There will be both opportunities and hazards in what the military is being offered by a relatively nascent (though booming) defense tech scene. The danger is that in the speed and secrecy of an arms race in AI weapons, these emerging capabilities may not receive the scrutiny and debate they desperately need.

Further reading:

Michael C. Horowitz, director of Perry World House at the University of Pennsylvania, explains the need for responsibility in the development of military AI systems in this FT op-ed.

The FT’s tech podcast asks what Israel’s defense tech ecosystem can tell us about the future of warfare 

This MIT Technology Review story analyzes how OpenAI completed its pivot to allowing its technology on the battlefield.

MIT Technology Review also uncovered how US soldiers are using generative AI to help scour thousands of pieces of open-source intelligence.

Google DeepMind is using Gemini to train agents inside Goat Simulator 3

Google DeepMind has built a new video-game-playing agent called SIMA 2 that can navigate and solve problems in a wide range of 3D virtual worlds. The company claims it’s a big step toward more general-purpose agents and better real-world robots.   

Google DeepMind first demoed SIMA (which stands for “scalable instructable multiworld agent”) last year. But SIMA 2 has been built on top of Gemini, the firm’s flagship large language model, which gives the agent a huge boost in capability.

The researchers claim that SIMA 2 can carry out a range of more complex tasks inside virtual worlds, figure out how to solve certain challenges by itself, and chat with its users. It can also improve itself by tackling harder tasks multiple times and learning through trial and error.

“Games have been a driving force behind agent research for quite a while,” Joe Marino, a research scientist at Google DeepMind, said in a press conference this week. He noted that even a simple action in a game, such as lighting a lantern, can involve multiple steps: “It’s a really complex set of tasks you need to solve to progress.”

The ultimate aim is to develop next-generation agents that are able to follow instructions and carry out open-ended tasks inside more complex environments than a web browser. In the long run, Google DeepMind wants to use such agents to drive real-world robots. Marino claimed that the skills SIMA 2 has learned, such as navigating an environment, using tools, and collaborating with humans to solve problems, are essential building blocks for future robot companions.

Unlike previous work on game-playing agents such as AlphaZero, which beat a Go grandmaster in 2016, or AlphaStar, which beat 99.8% of ranked human competition players at the video game StarCraft 2 in 2019, the idea behind SIMA is to train an agent to play an open-ended game without preset goals. Instead, the agent learns to carry out instructions given to it by people.

Humans control SIMA 2 via text chat, by talking to it out loud, or by drawing on the game’s screen. The agent takes in a video game’s pixels frame by frame and figures out what actions it needs to take to carry out its tasks.

Like its predecessor, SIMA 2 was trained on footage of humans playing eight commercial video games, including No Man’s Sky and Goat Simulator 3, as well as three virtual worlds created by the company. The agent learned to match keyboard and mouse inputs to actions.

Hooked up to Gemini, the researchers claim, SIMA 2 is far better at following instructions (asking questions and providing updates as it goes) and figuring out for itself how to perform certain more complex tasks.  

Google DeepMind tested the agent inside environments it had never seen before. In one set of experiments, researchers asked Genie 3, the latest version of the firm’s world model, to produce environments from scratch and dropped SIMA 2 into them. They found that the agent was able to navigate and carry out instructions there.

The researchers also used Gemini to generate new tasks for SIMA 2. If the agent failed, at first Gemini generated tips that SIMA 2 took on board when it tried again. Repeating a task multiple times in this way often allowed SIMA 2 to improve by trial and error until it succeeded, Marino said.

Git gud

SIMA 2 is still an experiment. The agent struggles with complex tasks that require multiple steps and more time to complete. It also remembers only its most recent interactions (to make SIMA 2 more responsive, the team cut its long-term memory). It’s also still nowhere near as good as people at using a mouse and keyboard to interact with a virtual world.

Julian Togelius, an AI researcher at New York University who works on creativity and video games, thinks it’s an interesting result. Previous attempts at training a single system to play multiple games haven’t gone too well, he says. That’s because training models to control multiple games just by watching the screen isn’t easy: “Playing in real time from visual input only is ‘hard mode,’” he says.

In particular, Togelius calls out GATO, a previous system from Google DeepMind, which—despite being hyped at the time—could not transfer skills across a significant number of virtual environments.  

Still, he is open-minded about whether or not SIMA 2 could lead to better robots. “The real world is both harder and easier than video games,” he says. It’s harder because you can’t just press A to open a door. At the same time, a robot in the real world will know exactly what its body can and can’t do at any time. That’s not the case in video games, where the rules inside each virtual world can differ.

Others are more skeptical. Matthew Guzdial, an AI researcher at the University of Alberta, isn’t too surprised that SIMA 2 can play many different video games. He notes that most games have very similar keyboard and mouse controls: Learn one and you learn them all. “If you put a game with weird input in front of it, I don’t think it’d be able to perform well,” he says.

Guzdial also questions how much of what SIMA 2 has learned would really carry over to robots. “It’s much harder to understand visuals from cameras in the real world compared to games, which are designed with easily parsable visuals for human players,” he says.

Still, Marino and his colleagues hope to continue their work with Genie 3 to allow the agent to improve inside a kind of endless virtual training dojo, where Genie generates worlds for SIMA to learn in via trial and error guided by Gemini’s feedback. “We’ve kind of just scratched the surface of what’s possible,” he said at the press conference.  

OpenAI’s new LLM exposes the secrets of how AI really works

ChatGPT maker OpenAI has built an experimental large language model that is far easier to understand than typical models.

That’s a big deal, because today’s LLMs are black boxes: Nobody fully understands how they do what they do. Building a model that is more transparent sheds light on how LLMs work in general, helping researchers figure out why models hallucinate, why they go off the rails, and just how far we should trust them with critical tasks.

“As these AI systems get more powerful, they’re going to get integrated more and more into very important domains,” Leo Gao, a research scientist at OpenAI, told MIT Technology Review in an exclusive preview of the new work. “It’s very important to make sure they’re safe.”

This is still early research. The new model, called a weight-sparse transformer, is far smaller and far less capable than top-tier mass-market models like the firm’s GPT-5, Anthropic’s Claude, and Google DeepMind’s Gemini. At most it’s as capable as GPT-1, a model that OpenAI developed back in 2018, says Gao (though he and his colleagues haven’t done a direct comparison).    

But the aim isn’t to compete with the best in class (at least, not yet). Instead, by looking at how this experimental model works, OpenAI hopes to learn about the hidden mechanisms inside those bigger and better versions of the technology.

It’s interesting research, says Elisenda Grigsby, a mathematician at Boston College who studies how LLMs work and who was not involved in the project: “I’m sure the methods it introduces will have a significant impact.” 

Lee Sharkey, a research scientist at AI startup Goodfire, agrees. “This work aims at the right target and seems well executed,” he says.

Why models are so hard to understand

OpenAI’s work is part of a hot new field of research known as mechanistic interpretability, which is trying to map the internal mechanisms that models use when they carry out different tasks.

That’s harder than it sounds. LLMs are built from neural networks, which consist of nodes, called neurons, arranged in layers. In most networks, each neuron is connected to every other neuron in its adjacent layers. Such a network is known as a dense network.

Dense networks are relatively efficient to train and run, but they spread what they learn across a vast knot of connections. The result is that simple concepts or functions can be split up between neurons in different parts of a model. At the same time, specific neurons can also end up representing multiple different features, a phenomenon known as superposition (a term borrowed from quantum physics). The upshot is that you can’t relate specific parts of a model to specific concepts.

“Neural networks are big and complicated and tangled up and very difficult to understand,” says Dan Mossing, who leads the mechanistic interpretability team at OpenAI. “We’ve sort of said: ‘Okay, what if we tried to make that not the case?’”

Instead of building a model using a dense network, OpenAI started with a type of neural network known as a weight-sparse transformer, in which each neuron is connected to only a few other neurons. This forced the model to represent features in localized clusters rather than spread them out.

Their model is far slower than any LLM on the market. But it is easier to relate its neurons or groups of neurons to specific concepts and functions. “There’s a really drastic difference in how interpretable the model is,” says Gao.

Gao and his colleagues have tested the new model with very simple tasks. For example, they asked it to complete a block of text that opens with quotation marks by adding matching marks at the end.  

It’s a trivial request for an LLM. The point is that figuring out how a model does even a straightforward task like that involves unpicking a complicated tangle of neurons and connections, says Gao. But with the new model, they were able to follow the exact steps the model took.

“We actually found a circuit that’s exactly the algorithm you would think to implement by hand, but it’s fully learned by the model,” he says. “I think this is really cool and exciting.”

Where will the research go next? Grigsby is not convinced the technique would scale up to larger models that have to handle a variety of more difficult tasks.    

Gao and Mossing acknowledge that this is a big limitation of the model they have built so far and agree that the approach will never lead to models that match the performance of cutting-edge products like GPT-5. And yet OpenAI thinks it might be able to improve the technique enough to build a transparent model on a par with GPT-3, the firm’s breakthrough 2021 LLM. 

“Maybe within a few years, we could have a fully interpretable GPT-3, so that you could go inside every single part of it and you could understand how it does every single thing,” says Gao. “If we had such a system, we would learn so much.”

Improving VMware migration workflows with agentic AI

For years, many chief information officers (CIOs) looked at VMware-to-cloud migrations with a wary pragmatism. Manually mapping dependencies and rewriting legacy apps mid-flight was not an enticing, low-lift proposition for enterprise IT teams.

But the calculus for such decisions has changed dramatically in a short period of time. Following recent VMware licensing changes, organizations are seeing greater uncertainty around the platform’s future. At the same time, cloud-native innovation is accelerating. According to the CNCF’s 2024 Annual Survey, 89% of organizations have already adopted at least some cloud-native techniques, and the share of companies reporting nearly all development and deployment as cloud-native grew sharply from 2023 to 2024 (20% to 24%). And market research firm IDC reports that cloud providers have become top strategic partners for generative AI initiatives.

This is all happening amid escalating pressure to innovate faster and more cost-effectively to meet the demands of an AI-first future. As enterprises prepare for that inevitability, they are facing compute demands that are difficult, if not prohibitively expensive, to maintain exclusively on-premises.

Download the full article.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.