Powering HPC with next-generation CPUs
For all the excitement around GPUs—the workhorses of today’s AI revolution—the central processing unit (CPU) remains the backbone of high-performance computing (HPC). CPUs still handle 80% to 90% of HPC workloads globally, powering everything from climate modeling to semiconductor design. Far from being eclipsed, they’re evolving in ways that make them more competitive, flexible, and indispensable than ever.
The competitive landscape around CPUs has intensified. Once dominated almost exclusively by Intel’s x86 chips, the market now includes powerful alternatives based on ARM and even emerging architectures like RISC-V. Flagship examples like Japan’s Fugaku supercomputer demonstrate how CPU innovation is pushing performance to new frontiers. Meanwhile, cloud providers like Microsoft and AWS are developing their own silicon, adding even more diversity to the ecosystem.
What makes CPUs so enduring? Flexibility, compatibility, and cost efficiency are key. As Evan Burness of Microsoft Azure points out, CPUs remain the “it-just-works” technology. Moving complex, proprietary code to GPUs can be an expensive and time-consuming effort, while CPUs typically support software continuity across generations with minimal friction. That reliability matters for businesses and researchers who need results, not just raw power.
Innovation is also reshaping what a CPU can be. Advances in chiplet design, on-package memory, and hybrid CPU-GPU architectures are extending the performance curve well beyond the limits of Moore’s Law. For many organizations, the CPU is the strategic choice that balances speed, efficiency, and cost.
Looking ahead, the relationship between CPUs, GPUs, and specialized processors like NPUs will define the future of HPC. Rather than a zero-sum contest, it’s increasingly a question of fit-for-purpose design. As Addison Snell, co-founder and chief executive officer of Intersect360 Research, notes, science and industry never run out of harder problems to solve.
That means CPUs, far from fading, will remain at the center of the computing ecosystem.
To learn more, read the new report “Designing CPUs for next-generation supercomputing.”
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
Designing CPUs for next-generation supercomputing
In Seattle, a meteorologist analyzes dynamic atmospheric models to predict the next major storm system. In Stuttgart, an automotive engineer examines crash-test simulations for vehicle safety certification. And in Singapore, a financial analyst simulates portfolio stress tests to hedge against global economic shocks.
Each of these professionals—and the consumers, commuters, and investors who depend on their insights—relies on a time-tested pillar of high-performance computing: the humble CPU.

With GPU-powered AI breakthroughs getting the lion’s share of press (and investment) in 2025, it is tempting to assume that CPUs are yesterday’s news. Recent predictions anticipate that GPU and accelerator installations will increase by 17% year over year through 2030. But, in reality, CPUs are still responsible for the vast majority of today’s most cutting-edge scientific, engineering, and research workloads. Evan Burness, who leads Microsoft Azure’s HPC and AI product teams, estimates that CPUs still support 80% to 90% of HPC simulation jobs today.
In 2025, not only are these systems far from obsolete, they are experiencing a technological renaissance. A new wave of CPU innovation, including high-bandwidth memory (HBM), is delivering major performance gains—without requiring costly architectural resets.
To learn more, watch the new webcast “Powering HPC with next-generation CPUs.”
De-risking investment in AI agents
Automation has become a defining force in the customer experience. Between the chatbots that answer our questions and the recommendation systems that shape our choices, AI-driven tools are now embedded in nearly every interaction. But the latest wave of so-called “agentic AI”—systems that can plan, act, and adapt toward a defined goal—promises to push automation even further.
“Every single person that I’ve spoken to has at least spoken to some sort of GenAI bot on their phones. They expect experiences to be not scripted. It’s almost like we’re not improving customer experience, we’re getting to the point of what customers expect customer experience to be,” says Neeraj Verma, vice president of product management at NICE.
For businesses, the potential is transformative: AI agents that can handle complex service interactions, support employees in real time, and scale seamlessly as customer demands shift. But the move from scripted, deterministic flows to non-deterministic, generative systems brings new challenges. How can you test something that doesn’t always respond the same way twice? How can you balance safety and flexibility when giving an AI system access to core infrastructure? And how can you manage cost, transparency, and ethical risk while still pursuing meaningful returns?
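One common answer to the testing question is property-based checking: instead of asserting an exact output, sample the non-deterministic system many times and assert invariants that every response must satisfy. The sketch below is purely illustrative; the `agent` function is a hypothetical stand-in, not any vendor's actual API.

```python
import random

def agent(query):
    """Hypothetical stand-in for a non-deterministic AI agent.

    A real system would call an LLM; here we vary the wording randomly
    to simulate non-determinism while keeping the structure fixed.
    """
    phrasings = [
        "Your refund of $40.00 has been approved.",
        "We've approved your $40.00 refund.",
    ]
    return {"intent": "refund", "amount": 40.0, "reply": random.choice(phrasings)}

def test_agent_invariants(samples=20):
    """Exact wording may differ between runs, but every sampled
    response must satisfy the same invariants."""
    for _ in range(samples):
        out = agent("I was double-charged, please refund me")
        assert out["intent"] == "refund"          # correct classification
        assert 0 < out["amount"] <= 100           # policy bound on refunds
        assert "refund" in out["reply"].lower()   # reply stays on topic
    return True
```

The design choice here is to test the contract (intent, bounds, topicality) rather than the transcript, which is what makes a system that "doesn't respond the same way twice" testable at all.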
The answers to these questions will determine how, and how quickly, companies embrace the next era of customer experience technology.
Verma argues that the story of customer experience automation over the past decade has been one of shifting expectations—from rigid, deterministic flows to flexible, generative systems. Along the way, businesses have had to rethink how they mitigate risk, implement guardrails, and measure success. The future, Verma suggests, belongs to organizations that focus on outcome-oriented design: tools that work transparently, safely, and at scale.
“I believe that the big winners are going to be the use case companies, the applied AI companies,” says Verma.
Partnering with generative AI in the finance function
Generative AI has the potential to transform the finance function. By taking on some of the more mundane tasks that can occupy a lot of time, generative AI tools can help free up capacity for more high-value strategic work. For chief financial officers, this could mean spending more time and energy on proactively advising the business on financial strategy as organizations around the world continue to weather ongoing geopolitical and financial uncertainty.
CFOs can use large language models (LLMs) and generative AI tools to support everyday tasks like generating quarterly reports, communicating with investors, and formulating strategic summaries, says Andrew W. Lo, Charles E. and Susan T. Harris professor and director of the Laboratory for Financial Engineering at the MIT Sloan School of Management. “LLMs can’t replace the CFO by any means, but they can take a lot of the drudgery out of the role by providing first drafts of documents that summarize key issues and outline strategic priorities.”
Generative AI is also showing promise in functions like treasury, with use cases including cash, revenue, and liquidity forecasting and management, as well as automated contract and investment analysis. However, challenges remain in forecasting because of the mathematical limitations of LLMs. Even so, Deloitte’s analysis of its 2024 State of Generative AI in the Enterprise survey found that nearly one-fifth (19%) of finance organizations have already adopted generative AI in the finance function.
Despite return on generative AI investments in finance functions being 8 points below expectations so far for surveyed organizations (see Figure 1), some finance departments appear to be moving ahead with investments. Deloitte’s fourth-quarter 2024 North American CFO Signals survey found that 46% of CFOs who responded expect deployment or spend on generative AI in finance to increase in the next 12 months (see Figure 2). Respondents cite the technology’s potential to help control costs through self-service and automation and free up workers for higher-level, higher-productivity tasks as some of the top benefits of the technology.
“Companies have used AI on the customer-facing side of the house for a long time, but in finance, employees are still creating documents and presentations and emailing them around,” says Robyn Peters, principal in finance transformation at Deloitte Consulting LLP. “Largely, the human-centric experience that customers expect from brands in retail, transportation, and hospitality hasn’t been pulled through to the finance organization. And there’s no reason we cannot do that—and, in fact, AI makes it a lot easier to do.”
If CFOs think they can just sit by for the next five years and watch how AI evolves, they may lose out to more nimble competitors that are actively experimenting in the space. Future finance professionals are growing up using generative AI tools too. CFOs should consider reimagining what it looks like to be a successful finance professional, in collaboration with AI.
Adapting to new threats with proactive risk management
In July 2024, a botched update to the software defenses managed by cybersecurity firm CrowdStrike caused more than 8 million Windows systems to fail. From hospitals to manufacturers, stock markets to retail stores, the outage caused parts of the global economy to grind to a halt. Payment systems were disrupted, broadcasters went off the air, and flights were canceled. In all, the outage is estimated to have caused direct losses of more than $5 billion to Fortune 500 companies. For US air carrier Delta Air Lines, the error exposed the brittleness of its systems. The airline suffered weeks of disruptions, leading to $500 million in losses and 7,000 canceled flights.
The magnitude of the CrowdStrike incident revealed just how interconnected digital systems are, and the extensive vulnerabilities in some companies when confronted with an unexpected occurrence. “On any given day, there could be a major weather event or some event like what happened…with CrowdStrike,” said then-US secretary of transportation Pete Buttigieg on announcing an investigation into how Delta Air Lines handled the incident. “The question is, is your airline prepared to absorb something like that and get back on its feet and take care of customers?”
Unplanned downtime poses a major challenge for organizations and is estimated to cost Global 2000 companies an average of $200 million per year. Beyond the financial impact, it can also erode customer trust and loyalty, decrease productivity, and even result in legal or privacy issues.
A 2024 ransomware attack on Change Healthcare, the medical-billing subsidiary of industry giant UnitedHealth Group—the biggest health and medical data breach in US history—exposed the data of around 190 million people and led to weeks of outages for medical groups. Another ransomware attack in 2024, this time on CDK Global, a software firm that works with nearly 15,000 auto dealerships in North America, led to around $1 billion worth of losses for car dealers as a result of the three-week disruption.
Managing risk and mitigating downtime is a growing challenge for businesses. As organizations become ever more interconnected, the expanding surface of networks and the rapid adoption of technologies like AI are exposing new vulnerabilities—and more opportunities for threat actors. Cyberattacks are also becoming increasingly sophisticated and damaging as AI-driven malware and malware-as-a-service platforms turbocharge attacks.
To prepare for these challenges head on, companies must take a more proactive approach to security and resilience. “We’ve had a traditional way of doing things that’s actually worked pretty well for maybe 15 to 20 years, but it’s been based on detecting an incident after the event,” says Chris Millington, global cyber resilience technical expert at Hitachi Vantara. “Now, we’ve got to be more preventative and use intelligence to focus on making the systems and business more resilient.”
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
Transforming CX with embedded real-time analytics
During Black Friday in 2024, Stripe processed more than $31 billion in transactions, with processing rates peaking at 137,000 transactions per minute, the highest in the company’s history. The financial-services firm had to analyze every transaction in real time to prevent nearly 21 million fraud attempts that could have siphoned more than $910 million from its merchant customers.
Yet, fraud protection is only one reason that Stripe embraced real-time data analytics. Evaluating trends in massive data flows is essential for the company’s services, such as allowing businesses to bill based on usage and monitor orders and inventory. In fact, many of Stripe’s services would not be possible without real-time analytics, says Avinash Bhat, head of data infrastructure at Stripe. “We have certain products that require real-time analytics, like usage-based billing and fraud detection,” he says. “Without our real-time analytics, we would not have a few of our products and that’s why it’s super important.”
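The kind of per-event decisioning described above can be illustrated with a minimal sliding-window sketch. Every name and threshold here is hypothetical, chosen only to show the shape of the technique (scoring each transaction as it arrives rather than in a nightly batch); it does not describe Stripe's actual system.

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 60          # look-back window (hypothetical threshold)
MAX_TXNS_PER_WINDOW = 5      # flag cards with bursts of activity

recent = defaultdict(deque)  # card_id -> timestamps of recent transactions

def score_transaction(card_id, timestamp):
    """Flag a transaction as suspicious if its card exceeds the
    per-window rate. Runs per event, so the decision is immediate."""
    window = recent[card_id]
    # Evict events that have aged out of the look-back window.
    while window and timestamp - window[0] > WINDOW_SECONDS:
        window.popleft()
    window.append(timestamp)
    return len(window) > MAX_TXNS_PER_WINDOW  # True = suspicious

# Six rapid transactions on one card: the sixth trips the rate check.
flags = [score_transaction("card-123", t) for t in range(0, 12, 2)]
```

The contrast with batch analytics is the point: a nightly job would surface the same burst hours later, after the money has moved, whereas the per-event check answers while the transaction can still be blocked.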
Stripe is not alone. In today’s digital world, data analysis is increasingly delivered directly to business customers and individual users, allowing real-time, continuous insights to shape user experiences. Ride-hailing apps calculate prices and estimate times of arrival (ETAs) in near-real time. Financial platforms deliver real-time cash-flow analysis. Customers expect and reward data-driven services that reflect what is happening now.
In fact, the capability to collect and analyze data in real time correlates with companies’ ability to grow. Companies that business leaders scored in the top quartile for real-time operations saw 50% higher revenue growth and net margins than companies in the bottom quartile, according to a survey conducted by the MIT Center for Information Systems Research (CISR) and Insight Partners. The top companies focused on automated processes and fast decision-making at all levels, relying on easily accessible data services updated in real time.
Companies that wait on data are putting themselves in a bind, says Kishore Gopalakrishna, co-founder and CEO of StarTree, a real-time data-analytics technology provider. “The basis of real-time analytics is—when the value of the data is very high—we want to capitalize on it instead of waiting and doing batch analytics,” he says. “Getting access to the data a day, or even hours, later is sometimes actually too late.”
Imagining the future of banking with agentic AI
Agentic AI is coming of age. And with it comes new opportunities in the financial services sector. Banks are increasingly employing agentic AI to optimize processes, navigate complex systems, and sift through vast quantities of unstructured data to make decisions and take actions—with or without human involvement. “With the maturing of agentic AI, it is becoming a lot more technologically possible for large-scale process automation that was not possible with rules-based approaches like robotic process automation before,” says Sameer Gupta, Americas financial services AI leader at EY. “That moves the needle in terms of cost, efficiency, and customer experience impact.”
From responding to customer service requests to automating loan approvals, adjusting bill payments to align with regular paychecks, or extracting key terms and conditions from financial agreements, agentic AI has the potential to transform the customer experience—and how financial institutions operate.
Adapting to new and emerging technologies like agentic AI is essential for an organization’s survival, says Murli Buluswar, head of US personal banking analytics at Citi. “A company’s ability to adopt new technical capabilities and rearchitect how their firm operates is going to make the difference between the firms that succeed and those that get left behind,” says Buluswar. “Your people and your firm must recognize that how they go about their work is going to be meaningfully different.”
The emerging landscape
Agentic AI is already being rapidly adopted in the banking sector. A 2025 survey of 250 banking executives by MIT Technology Review Insights found that 70% of leaders say their firm uses agentic AI to some degree, either through existing deployments (16%) or pilot projects (52%). And it is already proving effective in a range of different functions. More than half of executives say agentic AI systems are highly capable of improving fraud detection (56%) and security (51%). Other strong use cases include reducing cost and increasing efficiency (41%) and improving the customer experience (41%).
Building the AI-enabled enterprise of the future
Artificial intelligence is fundamentally reshaping how the world operates. With its potential to automate repetitive tasks, analyze vast datasets, and augment human capabilities, the use of AI technologies is already driving changes across industries.
In health care and pharmaceuticals, machine learning and AI-powered tools are advancing disease diagnosis, reducing drug discovery timelines by as much as 50%, and heralding a new era of personalized medicine. In supply chain and logistics, AI models can help prevent or mitigate disruptions, allowing businesses to make informed decisions and enhance resilience amid geopolitical uncertainty. Across sectors, AI in research and development cycles may reduce time-to-market by 50% and lower costs in industries like automotive and aerospace by as much as 30%.
“This is one of those inflection points where I don’t think anybody really has a full view of the significance of the change this is going to have on not just companies but society as a whole,” says Patrick Milligan, chief information security officer at Ford, which is making AI an important part of its transformation efforts and expanding its use across company operations.
Given its game-changing potential—and the breakneck speed at which it is evolving—it is perhaps not surprising that companies feel pressure to deploy AI as soon as possible: 98% say their sense of urgency has increased in the last year. And 85% believe they have less than 18 months to deploy an AI strategy before negative business effects set in.
Companies that take a “wait and see” approach will fall behind, says Jeetu Patel, president and chief product officer at Cisco. “If you wait for too long, you risk becoming irrelevant,” he says. “I don’t worry about AI taking my job, but I definitely worry about another person that uses AI better than me or another company that uses AI better taking my job or making my company irrelevant.”
But despite the urgency, just 13% of companies globally say they are ready to leverage AI to its full potential. IT infrastructure is an increasing challenge as workloads grow ever larger. Two-thirds (68%) of organizations say their infrastructure is moderately ready at best to adopt and scale AI technologies.
Essential capabilities include adequate compute power to process complex AI models, optimized network performance across the organization and in data centers, and enhanced cybersecurity to detect and prevent sophisticated attacks. These must be combined with observability: continuous monitoring and analysis of the infrastructure, models, and overall AI system to keep performance reliable and optimized. High-quality, well-managed enterprise-wide data is also essential—after all, AI is only as good as the data it draws on. All of this must be supported by an AI-focused company culture and talent development.
The connected customer
As brands compete for increasingly price-conscious consumers, customer experience (CX) has become a decisive differentiator. Yet many struggle to deliver, constrained by outdated systems, fragmented data, and organizational silos that limit both agility and consistency.
The current wave of artificial intelligence, particularly agentic AI that can reason and act across workflows, offers a powerful opportunity to reshape service delivery. Organizations can now provide fast, personalized support at scale while improving workforce productivity and satisfaction. But realizing that potential requires more than isolated tools; it calls for a unified platform that connects people, data, and decisions across the service lifecycle. This report explores how leading organizations are navigating that shift, and what it takes to move from AI potential to CX impact.
Key findings include:
- AI is transforming customer experience (CX). Customer service has evolved from the era of voice-based support through digital commerce and cloud to today’s AI revolution. Powered by large language models (LLMs) and a growing pool of data, AI can handle more diverse customer queries, produce highly personalized communication at scale, and help staff and senior management with decision support. Customers are also warming to AI-powered platforms as performance and reliability improve. Early adopters report improvements including more satisfied customers, more productive staff, and richer performance insights.
- Legacy infrastructure and data fragmentation are hindering organizations from maximizing the value of AI. While customer service and IT departments are early adopters of AI, the broader organization, across industries, is often riddled with outdated infrastructure. This impedes the ability of autonomous AI tools to move freely across workflows and data repositories to carry out goal-based tasks. Creating a unified platform and orchestration architecture will be key to unlocking AI’s potential, and the transition can be a catalyst for streamlining and rationalizing the business as a whole.
- High-performing organizations use AI without losing the human touch. While consumers are warming to AI, rollout should include some discretion. Excessive personalization could make customers uncomfortable about their personal data, while engineered “empathy” from bots may be received as insincere. Organizations should not underestimate the unique value their workforce offers. Sophisticated adopters strike the right balance between human and machine capabilities. Their leaders are proactive in addressing job displacement worries through transparent communication, comprehensive training, and clear delineation between AI and human roles. The most effective organizations treat AI as a collaborative tool that enhances rather than replaces human connection and expertise.
