Imagining the future of banking with agentic AI

Agentic AI is coming of age. And with it comes new opportunities in the financial services sector. Banks are increasingly employing agentic AI to optimize processes, navigate complex systems, and sift through vast quantities of unstructured data to make decisions and take actions—with or without human involvement. “With the maturing of agentic AI, it is becoming a lot more technologically possible for large-scale process automation that was not possible with rules-based approaches like robotic process automation before,” says Sameer Gupta, Americas financial services AI leader at EY. “That moves the needle in terms of cost, efficiency, and customer experience impact.”

From responding to customer service requests and automating loan approvals to adjusting bill payments to align with regular paychecks and extracting key terms and conditions from financial agreements, agentic AI has the potential to transform the customer experience—and how financial institutions operate, too.

Adapting to new and emerging technologies like agentic AI is essential for an organization’s survival, says Murli Buluswar, head of US personal banking analytics at Citi. “A company’s ability to adopt new technical capabilities and rearchitect how their firm operates is going to make the difference between the firms that succeed and those that get left behind,” says Buluswar. “Your people and your firm must recognize that how they go about their work is going to be meaningfully different.”

The emerging landscape

Agentic AI is already being rapidly adopted in the banking sector. A 2025 survey of 250 banking executives by MIT Technology Review Insights found that 70% of leaders say their firm uses agentic AI to some degree, either through existing deployments (16%) or pilot projects (52%). And it is already proving effective in a range of different functions. More than half of executives say agentic AI systems are highly capable of improving fraud detection (56%) and security (51%). Other strong use cases include reducing cost and increasing efficiency (41%) and improving the customer experience (41%).

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Building the AI-enabled enterprise of the future

Artificial intelligence is fundamentally reshaping how the world operates. With its potential to automate repetitive tasks, analyze vast datasets, and augment human capabilities, the use of AI technologies is already driving changes across industries.

In health care and pharmaceuticals, machine learning and AI-powered tools are advancing disease diagnosis, reducing drug discovery timelines by as much as 50%, and heralding a new era of personalized medicine. In supply chain and logistics, AI models can help prevent or mitigate disruptions, allowing businesses to make informed decisions and enhance resilience amid geopolitical uncertainty. Across sectors, AI in research and development cycles may reduce time-to-market by 50% and lower costs in industries like automotive and aerospace by as much as 30%.

“This is one of those inflection points where I don’t think anybody really has a full view of the significance of the change this is going to have on not just companies but society as a whole,” says Patrick Milligan, chief information security officer at Ford, which is making AI an important part of its transformation efforts and expanding its use across company operations.

Given its game-changing potential—and the breakneck speed with which it is evolving—it is perhaps not surprising that companies are feeling the pressure to deploy AI as soon as possible: 98% say they have felt an increased sense of urgency in the last year, and 85% believe they have less than 18 months to deploy an AI strategy or they will see negative business effects.

Companies that take a “wait and see” approach will fall behind, says Jeetu Patel, president and chief product officer at Cisco. “If you wait for too long, you risk becoming irrelevant,” he says. “I don’t worry about AI taking my job, but I definitely worry about another person that uses AI better than me or another company that uses AI better taking my job or making my company irrelevant.”

But despite the urgency, just 13% of companies globally say they are ready to leverage AI to its full potential. IT infrastructure is an increasing challenge as workloads grow ever larger. Two-thirds (68%) of organizations say their infrastructure is moderately ready at best to adopt and scale AI technologies.

Essential capabilities include adequate compute power to process complex AI models, optimized network performance across the organization and in data centers, and enhanced cybersecurity capabilities to detect and prevent sophisticated attacks. These must be combined with observability, which provides continuous monitoring and analysis of infrastructure, models, and the overall AI system to ensure reliable, optimized performance. High-quality, well-managed, enterprise-wide data is also essential—after all, AI is only as good as the data it draws on. All of this must be supported by an AI-focused company culture and talent development.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The connected customer

As brands compete for increasingly price-conscious consumers, customer experience (CX) has become a decisive differentiator. Yet many struggle to deliver, constrained by outdated systems, fragmented data, and organizational silos that limit both agility and consistency.

The current wave of artificial intelligence, particularly agentic AI that can reason and act across workflows, offers a powerful opportunity to reshape service delivery. Organizations can now provide fast, personalized support at scale while improving workforce productivity and satisfaction. But realizing that potential requires more than isolated tools; it calls for a unified platform that connects people, data, and decisions across the service lifecycle. This report explores how leading organizations are navigating that shift, and what it takes to move from AI potential to CX impact.

Key findings include:

  • AI is transforming customer experience (CX). Customer service has evolved from the era of voice-based support through digital commerce and cloud to today’s AI revolution. Powered by large language models (LLMs) and a growing pool of data, AI can handle more diverse customer queries, produce highly personalized communication at scale, and help staff and senior management with decision support. Customers are also warming to AI-powered platforms as performance and reliability improve. Early adopters report improvements including more satisfied customers, more productive staff, and richer performance insights.
  • Legacy infrastructure and data fragmentation are hindering organizations from maximizing the value of AI. While customer service and IT departments are early adopters of AI, the broader organization, across industries, is often riddled with outdated infrastructure. This impedes the ability of autonomous AI tools to move freely across workflows and data repositories to complete goal-based tasks. Creating a unified platform and orchestration architecture will be key to unlocking AI’s potential. The transition can be a catalyst for streamlining and rationalizing the business as a whole.
  • High-performing organizations use AI without losing the human touch. While consumers are warming to AI, rollout calls for some discretion. Excessive personalization could make customers uncomfortable about how their personal data is used, while engineered “empathy” from bots may come across as insincere. Organizations should not underestimate the unique value their workforce offers. Sophisticated adopters strike the right balance between human and machine capabilities. Their leaders are proactive in addressing job displacement worries through transparent communication, comprehensive training, and clear delineation between AI and human roles. The most effective organizations treat AI as a collaborative tool that enhances rather than replaces human connection and expertise.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

What health care providers actually want from AI

In a market flooded with AI promises, health care decision-makers are no longer dazzled by flashy demos or abstract potential. Today, they want pragmatic and pressure-tested products. They want solutions that work for their clinicians, staff, patients, and their bottom line.

To gain traction in 2025 and beyond, AI developers must deliver the real-world solutions health care providers are looking for right now.

Solutions that fix real problems

Hospitals and health systems are looking at AI-enabled solutions that target their most urgent pain points: staffing shortages, clinician burnout, rising costs, and patient bottlenecks. These operational realities keep leadership up at night, and AI solutions must directly address them.

For instance, hospitals and health systems are eager for AI tools that can reduce documentation burden for physicians and nurses. Natural language processing (NLP) solutions that auto-generate clinical notes or streamline coding to free up time for direct patient care are far more compelling pitches than generic efficiency gains. Similarly, predictive analytics that help optimize staffing levels or manage patient flows can directly address operational workflow and improve throughput.

Ultimately, if an AI solution doesn’t target these critical issues and deliver tangible benefits, it’s unlikely to capture serious buyer interest.

Demonstrate real-world results

AI solutions need validation in environments that mirror actual care settings. The first step toward that is to leverage high-quality, well-curated real-world data to drive reliable insights and avoid misleading results when building and refining AI models. 

Then, hospitals and health systems need evidence that the solution does what it claims to do, for instance through independent third-party validation, pilot projects, peer-reviewed publications, or documented case studies.

Mayo Clinic Platform offers a rigorous independent process where clinical, data science, and regulatory experts evaluate a solution for intended use, proposed value, and clinical and algorithmic performance, which gives innovators the credibility their solutions need to win the confidence of health-care leaders.    

Integration with existing systems

With so many demands, health-care IT leaders have little patience for standalone AI tools that create additional complexity. They want solutions that integrate seamlessly into existing systems and workflows. Compatibility with major electronic health record (EHR) platforms, robust APIs, and smooth data ingestion processes are now baseline requirements.

Custom integrations that require significant IT resources—or worse, create duplicative work—are deal breakers for many organizations already stretched thin. The less disruption an AI solution introduces, the more likely it is to gain traction. This is the reason solution developers are turning to platforms like Mayo Clinic Platform Solutions Studio, a program that provides seamless integration, single implementation, expert guidance to reduce risk, and a simplified process to accelerate solution adoption among health care providers.

Explainability and transparency

The importance of trust cannot be overstated when it comes to health care, and transparency and explainability are critical to establishing trust in AI. As AI models grow more complex, health-care providers recognize that simply knowing what an algorithm predicts isn’t enough. They also need to understand how it arrived at that insight.

Health-care organizations are increasingly wary of black-box AI systems whose logic remains opaque. Instead, they’re demanding solutions that offer clear, understandable explanations clinicians can relay confidently to peers, patients, and regulators.

As McKinsey research shows, organizations that embed explainability into their AI strategy not only reduce risk but also see higher adoption, better performance outcomes, and stronger financial returns. Solution developers that can demystify their models, provide transparent performance metrics, and build trust at every level will have a significant edge in today’s health-care market.

Clear ROI and low implementation burden

Hospitals and health systems want to know precisely how quickly an AI solution will pay for itself, how much staff time it will save, and what costs it will help offset. The more specific and evidence-backed the answers, the better the rate of adoption.

Solution developers that offer comprehensive training and responsive support are far more likely to win deals and keep customers satisfied over the long term.

Alignment with regulatory and compliance needs

As AI adoption grows, so does regulatory scrutiny. Health-care providers are increasingly focused on ensuring that any new solution complies with HIPAA, data privacy laws, and emerging guidelines around AI governance and bias mitigation.

Solution developers that can proactively demonstrate compliance provide significant peace of mind. Transparent data handling practices, rigorous security measures, and alignment with ethical AI principles are all becoming essential selling points as well.

A solution developer that understands health care

Finally, it’s not just about the technology. Health-care providers want partners that genuinely understand the complexities of clinical care and hospital operations. They’re looking for partners that speak the language of health care, grasp the nuances of change management, and appreciate the realities of delivering patient care under tight margins and high stakes.

Successful AI vendors recognize that even the best technology must fit into a highly human-centered and often unpredictable environment. Long-term partnerships, not short-term sales, are the goal.

Delivering true value with AI

To earn their trust and investment, AI developers must focus relentlessly on solving real problems, demonstrating proven results, integrating without friction, and maintaining transparency and compliance.

Those that deliver on these expectations will have the chance to help shape the future of health care.

This content was produced by Mayo Clinic Platform. It was not written by MIT Technology Review’s editorial staff.

How healthcare accelerator programs are changing care

As healthcare faces mounting pressures, from rising costs and an aging population to widening disparities, forward-thinking innovations are more essential than ever.

Accelerator programs have proven to be powerful launchpads for health tech companies, often combining resources, mentorship, and technology that startups otherwise would not have access to. By joining these fast-moving platforms, startups are better able to rapidly innovate, enhance, and scale their healthcare solutions, bringing transformative approaches to hospitals and patients faster.

So, why are healthcare accelerators becoming essential to the evolution of the industry? Several key reasons explain why these programs are reshaping health innovation and how they are helping to make care more personalized, proactive, and accessible.

Empowering growth and scaling impact       

Healthcare accelerator programs offer a powerful combination of guidance, resources, and connections to help early-stage startups grow, scale, and succeed in a complex industry. 

Participants typically benefit from: 

  • Expert mentorship from seasoned healthcare professionals, entrepreneurs, and industry leaders to navigate clinical, regulatory, and business challenges
  • Access to valuable resources such as clinical data, testing environments, and technical infrastructure to refine and validate health tech solutions
  • Strategic support for growth including investor introductions, partnership opportunities, and go-to-market guidance to expand reach and impact 

Speeding up innovation 

Accelerators help startups and early-stage companies bring their solutions to market faster by streamlining the path through one of the most complex industries: healthcare. Traditionally, innovation in this space is slowed by regulatory hurdles, extended sales cycles, clinical validation requirements, and fragmented data systems.  

Through structured support, accelerators help companies refine their product market fit, navigate compliance and regulatory landscapes, integrate with healthcare systems, and gather the clinical evidence needed to build trust and credibility. They also open doors to early pilot opportunities, customer feedback, and strategic partnerships, compressing what could take years into just a few months. 

By removing barriers and accelerating critical early steps, these programs enable digital health innovators to reach the market more efficiently, with stronger solutions and a clearer path to impact. 

Connecting startups with key stakeholders 

Today, many accelerator programs are developed by large healthcare organizations that are driving change from within. These accelerator programs are especially beneficial to startups since they have strong partnerships with hospitals, pharma companies, insurance providers, and regulators. This gives startups a chance to validate their ideas in real-world settings, gather clinical feedback early, and scale more effectively.  

Many accelerators also bring together people from different fields, including doctors, engineers, data scientists, and designers, encouraging fresh perspectives on persistent problems like chronic disease management, preventative care, data interoperability, and patient engagement.

Breaking barriers to global expansion 

Healthcare accelerator programs act as gateways for international digital health companies looking to enter the U.S. market, often considered one of the most complex and highly regulated healthcare landscapes in the world. These programs provide tailored support to navigate U.S. compliance standards, understand payer and provider dynamics, and adapt offerings to meet the needs of U.S. patients and care delivery models.

Through market-specific mentorship, strategic introductions, and access to a robust health innovation ecosystem, accelerators help international startups overcome geographic and regulatory barriers, enabling global ideas to scale and make an impact where they’re needed most. 

Building the future of healthcare

The role of healthcare accelerator programs extends far beyond startup support. They are helping to redefine how innovation happens, shifting it from isolated efforts to collaborative ecosystems of change. By bridging gaps between early-stage technology and real-world implementation, these programs play a critical role in making healthcare more personalized, preventative, and equitable.

As the digital transformation of healthcare continues, accelerator programs will remain indispensable in cultivating the next generation of breakthroughs, ensuring that bold ideas are not only born, but brought to life in meaningful, measurable ways.

Spotlight: Mayo Clinic Platform_Accelerate

One standout example of this innovation-forward approach is Mayo Clinic Platform_Accelerate, a 30-week accelerator program designed to help health tech startups reach market readiness. Participants gain access to de-identified clinical data, prototyping labs, and guidance from experts across clinical, regulatory, and business domains.

By combining Mayo Clinic’s legacy of clinical excellence with a forward-thinking innovation model, the Mayo Clinic Platform_Accelerate program helps promising startups refine their solutions and prepare for meaningful scale, transforming how care is delivered across the continuum.

Finding value in accelerator programs

In a time when healthcare must evolve faster than ever, accelerator programs have become vital to the industry’s future. By supporting early-stage innovators with the tools, mentorship, and networks they need to succeed, these programs are paving the way for smarter, safer, and more connected care.

Whether tackling chronic disease, reimagining patient engagement, or unlocking the power of data, the startups nurtured in accelerator programs are helping to shape a more resilient and responsive health system, one innovation at a time.

This content was produced by Mayo Clinic Platform. It was not written by MIT Technology Review’s editorial staff.

From pilot to scale: Making agentic AI work in health care

Over the past 20 years building advanced AI systems—from academic labs to enterprise deployments—I’ve witnessed AI’s waves of success rise and fall. My journey began during the “AI Winter,” when billions were invested in expert systems that ultimately underdelivered. Flash forward to today: large language models (LLMs) represent a quantum leap forward, but their prompt-based adoption is similarly overhyped, as it’s essentially a rule-based approach disguised in natural language.

At Ensemble, the leading revenue cycle management (RCM) company for hospitals, we focus on overcoming model limitations by investing in what we believe is the next step in AI evolution: grounding LLMs in facts and logic through neuro-symbolic AI. Our in-house AI incubator pairs elite AI researchers with health-care experts to develop agentic systems powered by a neuro-symbolic AI framework. This bridges LLMs’ intuitive power with the precision of symbolic representation and reasoning.

Overcoming LLM limitations

LLMs excel at understanding nuanced context, performing instinctive reasoning, and generating human-like interactions, making them ideal building blocks for agentic tools that interpret intricate data and communicate effectively. Yet in a domain like health care, where compliance, accuracy, and adherence to regulatory standards are non-negotiable—and where a wealth of structured resources like taxonomies, rules, and clinical guidelines define the landscape—symbolic AI is indispensable.

By fusing LLMs and reinforcement learning with structured knowledge bases and clinical logic, our hybrid architecture delivers more than just intelligent automation—it minimizes hallucinations, expands reasoning capabilities, and ensures every decision is grounded in established guidelines and enforceable guardrails.

Creating a successful agentic AI strategy

Ensemble’s agentic AI approach includes three core pillars:

1. High-fidelity data sets: By managing revenue operations for hundreds of hospitals nationwide, Ensemble has unparalleled access to one of the most robust administrative datasets in health care. The team has spent decades on data aggregation, cleansing, and harmonization, providing an exceptional environment for developing advanced applications.

To power our agentic systems, we’ve harmonized more than 2 petabytes of longitudinal claims data, 80,000 denial audit letters, and 80 million annual transactions mapped to industry-leading outcomes. This data fuels our end-to-end intelligence engine, EIQ, providing structured, context-rich data pipelines spanning the 600-plus steps of revenue operations.

2. Collaborative domain expertise: Partnering with revenue cycle domain experts at each step of innovation, our AI scientists benefit from direct collaboration with in-house RCM experts, clinical ontologists, and clinical data labeling teams. Together, they architect nuanced use cases that account for regulatory constraints, evolving payer-specific logic, and the complexity of revenue cycle processes. Embedded end users provide post-deployment feedback for continuous improvement cycles, flagging friction points early and enabling rapid iteration.

This trilateral collaboration—AI scientists, health-care experts, and end users—creates unmatched contextual awareness and escalates to human judgment when appropriate, resulting in a system that mirrors the decision-making of experienced operators with the speed, scale, and consistency of AI, all under human oversight.

3. Elite AI scientists drive differentiation: Ensemble’s incubator model for research and development comprises AI talent typically found only in big tech. Our scientists hold PhD and MS degrees from top AI/NLP institutions like Columbia University and Carnegie Mellon University, and bring decades of experience from FAANG companies [Facebook/Meta, Amazon, Apple, Netflix, Google/Alphabet] and AI startups. At Ensemble, they’re able to pursue cutting-edge research in areas like LLMs, reinforcement learning, and neuro-symbolic AI within a mission-driven environment.

They also have unparalleled access to vast amounts of private and sensitive health-care data they wouldn’t see at tech giants, paired with compute and infrastructure that startups simply can’t afford. This unique environment equips our scientists with everything they need to test novel ideas and push the frontiers of AI research—while driving meaningful, real-world impact in health care and improving lives.

Strategy in action: Health-care use cases in production and pilot

By pairing the brightest AI minds with the most powerful health-care resources, we’re successfully building, deploying, and scaling AI models that are delivering tangible results across hundreds of health systems. Here’s how we put it into action:

Supporting clinical reasoning: Ensemble deployed neuro-symbolic AI with fine-tuned LLMs to support clinical reasoning. Clinical guidelines are rewritten into proprietary symbolic language and reviewed by humans for accuracy. When a hospital is denied payment for appropriate clinical care, an LLM-based system parses the patient record to produce the same symbolic language describing the patient’s clinical journey, which is matched deterministically against the guidelines to find the right justification and the proper evidence from the patient’s record. An LLM then generates a denial appeal letter with clinical justification grounded in evidence. AI-enabled clinical appeal letters have already improved denial overturn rates by 15% or more across Ensemble’s clients.

Building on this success, Ensemble is piloting similar clinical reasoning capabilities for utilization management and clinical documentation improvement, by analyzing real-time records, flagging documentation gaps, and suggesting compliance enhancements to reduce denial or downgrade risks.
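To make the deterministic matching step concrete, the following minimal Python sketch pairs symbolic facts extracted from a patient record with guideline criteria. The predicate names, criteria, citations, and data structures are illustrative assumptions for this article, not Ensemble’s proprietary symbolic language, clinical content, or production code.

```python
# Minimal, hypothetical sketch of deterministic guideline matching.
# Predicate names, criteria, and citations are illustrative only.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Fact:
    predicate: str   # symbolic predicate extracted from the patient record
    value: float     # e.g. an oxygen saturation reading

@dataclass
class Criterion:
    predicate: str                   # predicate this guideline rule applies to
    test: Callable[[float], bool]    # satisfied when this returns True
    citation: str                    # where in the guideline the rule comes from

# Facts an LLM-based parser might produce from the record (human-reviewed upstream).
patient_facts = [
    Fact("oxygen_saturation_below", 88),
    Fact("respiratory_rate_above", 24),
]

# Guideline criteria authored in symbolic form and reviewed by clinicians.
guideline = [
    Criterion("oxygen_saturation_below", lambda v: v < 90, "Hypoxemia guideline, sec. 2.1"),
    Criterion("respiratory_rate_above", lambda v: v > 22, "Respiratory distress guideline, sec. 1.3"),
]

def match(facts: list[Fact], criteria: list[Criterion]) -> list[tuple[str, Fact]]:
    """Deterministically pair each satisfied criterion with its supporting fact."""
    evidence = []
    for criterion in criteria:
        for fact in facts:
            if fact.predicate == criterion.predicate and criterion.test(fact.value):
                evidence.append((criterion.citation, fact))
    return evidence

if __name__ == "__main__":
    # The matched citations and facts become the grounded evidence an LLM can
    # cite when drafting the appeal letter.
    for citation, fact in match(patient_facts, guideline):
        print(citation, "->", fact)
```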

Accelerating accurate reimbursement: Ensemble is piloting a multi-agent reasoning model to manage the complex process of collecting accurate reimbursement from health insurers. With this approach, a complex, coordinated system of autonomous agents works together to interpret account details, retrieve required data from various systems, decide account-specific next actions, automate resolution, and escalate complex cases to humans.

This will help reduce payment delays and minimize administrative burden for hospitals and ultimately improve the financial experience for patients.
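As a rough illustration of how such an orchestration might be structured, the sketch below chains a few simplified agents over a single account record. The agent roles, account fields, and escalation rule are assumptions made for illustration; they do not describe Ensemble’s production system.

```python
# Illustrative sketch of a sequential multi-agent resolution loop.
# Agent roles, account fields, and the escalation rule are assumptions.

class InterpretAgent:
    """Classify why the account remains unresolved."""
    def run(self, account: dict) -> dict:
        account["issue"] = "missing_authorization" if not account.get("auth_id") else "unknown"
        return account

class RetrievalAgent:
    """Pull supporting data from payer or internal systems (stubbed here)."""
    def run(self, account: dict) -> dict:
        account["payer_policy"] = {"requires_auth": True}
        return account

class DecisionAgent:
    """Choose the next action; anything ambiguous goes to a person."""
    def run(self, account: dict) -> dict:
        if account["issue"] == "missing_authorization" and account["payer_policy"]["requires_auth"]:
            account["next_action"] = "request_retro_authorization"
        else:
            account["next_action"] = "escalate_to_human"
        return account

def resolve(account: dict) -> dict:
    """Run the agents in sequence over a single account record."""
    for agent in (InterpretAgent(), RetrievalAgent(), DecisionAgent()):
        account = agent.run(account)
    return account

if __name__ == "__main__":
    print(resolve({"account_id": "A-1001", "auth_id": None}))
```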

Improving patient engagement: Ensemble’s conversational AI agents handle inbound patient calls naturally, routing to human operators as required. Operator assistant agents deliver call transcriptions, surface relevant data, suggest next-best actions, and streamline follow-up routines. According to Ensemble client performance metrics, the combination of these AI capabilities has reduced patient call duration by 35%, increasing one-call resolution rates and improving patient satisfaction by 15%.

The AI path forward in health care demands rigor, responsibility, and real-world impact. By grounding LLMs in symbolic logic and pairing AI scientists with domain experts, Ensemble is successfully deploying scalable AI to improve the experience for health-care providers and the people they serve.

This content was produced by Ensemble. It was not written by MIT Technology Review’s editorial staff.

Unlocking enterprise agility in the API economy

Across industries, enterprises are increasingly adopting an on-demand approach to compute, storage, and applications. They are favoring digital services that are faster to deploy, easier to scale, and better integrated with partner ecosystems. Yet, one critical pillar has lagged: the network. While software-defined networking has made inroads, many organizations still operate rigid, pre-provisioned networks. As applications become increasingly distributed and dynamic—including hybrid cloud and edge deployments—a programmable, on-demand network infrastructure can enhance and enable this new era.

From CapEx to OpEx: The new connectivity mindset

Another practical concern is also driving this shift: the need for IT models that align cost with usage. Uncertainty about inflation, consumer spending, business investment, and global supply chains is weighing on company decision-making, and chief information officers (CIOs) are scrutinizing capital-expenditure-heavy infrastructure more closely, increasingly adopting operating-expense-based subscription models.

Instead of long-term circuit contracts and static provisioning, companies are looking for cloud-ready, on-demand network services that can scale, adapt, and integrate across hybrid environments. This trend is fueling demand for API-first network infrastructure: connectivity that behaves like software, dynamically orchestrated and integrated into enterprise IT ecosystems. Interest has grown so rapidly that the global network API market is projected to surge from $1.53 billion in 2024 to more than $72 billion by 2034.

In fact, McKinsey estimates the network API market could unlock between $100 billion and $300 billion in connectivity- and edge-computing-related revenue for telecom operators over the next five to seven years, with an additional $10 billion to $30 billion generated directly from APIs themselves.

“When the cloud came in, first there was a trickle of adoptions. And then there was a deluge,” says Rajarshi Purkayastha, VP of solutions at Tata Communications. “We’re seeing the same trend with programmable networks. What was once a niche industry is now becoming mainstream as CIOs prioritize agility and time-to-value.”

Programmable networks as a catalyst for innovation

Programmable, subscription-based networks are not just about efficiency; they are about enabling faster innovation, better user experiences, and global scalability. Organizations increasingly prefer API-first systems to avoid vendor lock-in, enable multi-vendor integration, and foster innovation. API-first approaches allow seamless integration across different hardware and software stacks, reducing operational complexity and costs.

With APIs, enterprises can provision bandwidth, configure services, and connect to clouds and edge locations in real time, all through automation layers embedded in their DevOps and application platforms. This makes the network an active enabler of digital transformation rather than a lagging dependency.
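As a concrete illustration, a temporary bandwidth increase requested through such an API might look like the minimal Python sketch below. The base URL, endpoint path, payload fields, and token handling are hypothetical placeholders rather than any provider’s actual interface; a real integration would follow the provider’s API documentation.

```python
# Hypothetical sketch of requesting a temporary bandwidth increase through a
# network API. The base URL, endpoint, payload fields, and token handling are
# placeholders, not a real provider's interface.
import os
import requests

API_BASE = "https://api.network-provider.example/v1"   # placeholder URL
TOKEN = os.environ["NETWORK_API_TOKEN"]                 # credential supplied by the provider

def request_bandwidth(circuit_id: str, mbps: int, hours: int) -> dict:
    """Ask the provider to raise bandwidth on a circuit for a limited window."""
    response = requests.post(
        f"{API_BASE}/circuits/{circuit_id}/bandwidth-requests",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"bandwidth_mbps": mbps, "duration_hours": hours},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # A CI/CD pipeline or event hook could call this ahead of a planned traffic
    # surge, then issue a corresponding scale-down request once demand normalizes.
    print(request_bandwidth("circuit-123", mbps=500, hours=12))
```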

For example, Netflix—one of the earliest adopters of microservices—handles billions of API requests daily through more than 500 microservices and gateways, supporting global scalability and rapid innovation. Over a two-year transition, it redesigned its IT architecture around microservices.

Elsewhere, Coca-Cola integrated its global systems using APIs, enabling faster, lower-cost delivery and improved cross-functional collaboration. And Uber moved to microservices with API gateways, allowing independent scaling and rapid deployment across markets.

In each case, the network had to evolve from being static and hardware-bound to dynamic, programmable, and consumption-based. “API-first infrastructure fits naturally into how today’s IT teams work,” says Purkayastha. “It aligns with continuous integration and continuous delivery/deployment (CI/CD) pipelines and service orchestration tools. That reduces friction and accelerates how fast enterprises can launch new services.”

Powering on-demand connectivity

Tata Communications deployed Network Fabric—its programmable platform that uses APIs to allow enterprise systems to request and adjust network resources dynamically—to help a global software-as-a-service (SaaS) company modernize how it manages network capacity in response to real-time business needs. As the company scaled its digital services worldwide, it needed a more agile, cost-efficient way to align network performance with unpredictable traffic surges and fast-changing user demands. With Tata’s platform, the company’s operations teams were able to automatically scale bandwidth in key regions for peak performance during high-impact events like global software releases, and just as quickly scale it back down once demand normalized, avoiding unnecessary costs.

In another scenario, when the SaaS provider needed to run large-scale data operations between its US and Asia hubs, the network was programmatically reconfigured in under an hour, a process that previously required weeks of planning and provisioning. “What we delivered wasn’t just bandwidth, it was the ability for their teams to take control,” says Purkayastha. “By integrating our Network Fabric APIs into their automation workflows, we gave them a network that responds at the pace of their business.”

Barriers to transformation—and how to overcome them

Transforming network infrastructure is no small task. Many enterprises still rely on legacy multiprotocol label switching (MPLS) and hardware-defined wide-area network (WAN) architectures. These environments are rigid, manually managed, and often incompatible with modern APIs or automation frameworks. Barriers can be both technical and organizational: legacy devices may not support programmable interfaces, and organizations are often siloed, meaning networks are managed separately from application and DevOps workflows.

Furthermore, CIOs face pressure for quick returns and may not even remain in the company long enough to oversee the process and results, making it harder to push for long-term network modernization strategies. “Often, it’s easier to address the low-hanging fruit rather than go after the transformation because decision-makers may not be around to see the transformation come to life,” says Purkayastha.

But quick fixes or workarounds may not yield the desired results; transformation is needed instead. “Enterprises have historically built their networks for stability, not agility,” says Purkayastha. “But now, that same rigidity becomes a bottleneck when applications, users, and workloads are distributed across the cloud, edge, and remote locations.”

Despite the challenges, there is a clear path forward, starting with overlay orchestration, well-defined API contracts, and security-first design. Instead of completely removing and replacing an existing system, many enterprises are layering APIs over existing infrastructure, enabling controlled migrations and real-time service automation.

“We don’t just help customers adopt APIs, we guide them through the operational shift it requires,” says Purkayastha. “We have blueprints for what to automate first, how to manage hybrid environments, and how to design for resilience.”

For some organizations, there will be resistance to the change initially. Fears of extra workloads, or misalignment with teams’ existing goals and objectives, are common, as is the deeply human distrust of change. These can be overcome, however. “There are playbooks on what we’ve done earlier—learnings from transformation—which we share with clients,” says Purkayastha. “We also plan for the unknowns. We usually reserve 10% of time and resources just to manage unforeseen risks, and the result is an empowered organization to scale innovation and reduce operational complexity.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The road to artificial general intelligence

Artificial intelligence models that can discover drugs and write code still fail at puzzles a layperson can master in minutes. This phenomenon sits at the heart of the challenge of artificial general intelligence (AGI). Can today’s AI revolution produce models that rival or surpass human intelligence across all domains? If so, what underlying enablers—whether hardware, software, or the orchestration of both—would be needed to power them?

Dario Amodei, co-founder of Anthropic, predicts some form of “powerful AI” could come as early as 2026, with properties that include Nobel Prize-level domain intelligence; the ability to switch between interfaces like text, audio, and the physical world; and the autonomy to reason toward goals, rather than responding to questions and prompts as they do now. Sam Altman, chief executive of OpenAI, believes AGI-like properties are already “coming into view,” unlocking a societal transformation on par with electricity and the internet. He credits progress to continuous gains in training, data, and compute, along with falling costs, and a socioeconomic value that is super-exponential.

Optimism is not confined to founders. Aggregate forecasts give at least a 50% chance of AI systems achieving several AGI milestones by 2028. The chance of unaided machines outperforming humans in every possible task is estimated at 10% by 2027 and 50% by 2047, according to one expert survey. Forecast time horizons have shortened with each breakthrough, from 50 years at the time of GPT-3’s launch to five years by the end of 2024. “Large language and reasoning models are transforming nearly every industry,” says Ian Bratt, vice president of machine learning technology and fellow at Arm.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.