Turning migration into modernization

In late 2023, a long-trusted virtualization staple became the biggest open question on the enterprise IT roadmap.

Amid concerns over VMware licensing changes and steeper support costs, analysts detected an exodus mentality. Forrester predicted that one in five large VMware customers would begin moving away from the platform in 2024. A subsequent Gartner community poll found that 74% of respondents were rethinking their VMware relationship in light of recent changes. CIOs contending with pricing hikes and product roadmap opacity face a daunting choice: double down on a familiar but costlier stack, or use the disruption to rethink how—and where—critical workloads should run.

“There’s still a lot of uncertainty in the marketplace around VMware,” explains Matt Crognale, senior director, migrations and modernization at cloud modernization firm Effectual, adding that the VMware portfolio has been streamlined and refocused over the past couple of years. “The portfolio has been trimmed down to a core offering focused on the technology versus disparate systems.”

Download the full article.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Unlocking AI’s full potential requires operational excellence

Talk of AI is inescapable. It’s often the main topic of discussion at board and executive meetings, at corporate retreats, and in the media. A record 58% of S&P 500 companies mentioned AI in their second-quarter earnings calls, according to Goldman Sachs.

But it’s difficult to walk the talk. Just 5% of generative AI pilots are driving measurable profit-and-loss impact, according to a recent MIT study. That means 95% of generative AI pilots are realizing zero return, despite significant attention and investment.

Although we’re nearly three years past the watershed moment of ChatGPT’s public release, the vast majority of organizations are stalling out in AI. Something is broken. What is it?

Data from Lucid’s AI readiness survey sheds some light on the tripwires that are making organizations stumble. Fortunately, for most companies, solving these problems doesn’t require recruiting top AI talent worth hundreds of millions of dollars. Instead, as they race to implement AI quickly and successfully, leaders need to bring greater rigor and structure to their operational processes.

Operations are the gap between AI’s promise and practical adoption

I can’t fault any leader for moving as fast as possible with their implementation of AI. In many cases, the existential survival of their company—and their own employment—depends on it. The promised benefits of improved productivity, reduced costs, and enhanced communication are transformational, which is why speed is paramount.

But while moving quickly, leaders are skipping foundational steps required for any technology implementation to be successful. Our survey research found that more than 60% of knowledge workers believe their organization’s AI strategy is only somewhat aligned, or not aligned at all, with its operational capabilities.

AI can process unstructured data, but AI will only create more headaches for unstructured organizations. As Bill Gates said, “The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency.”

Where are the operations gaps in AI implementations? Our survey found that approximately half of respondents (49%) say undocumented or ad hoc processes sometimes hurt efficiency, and 22% say this happens often or always.

The primary challenge of AI transformation lies not in the technology itself, but in the final step of integrating it into daily workflows. We can compare this to the “last mile problem” in logistics: The most difficult part of a delivery is getting the product to the customer, no matter how efficient the rest of the process is.

In AI, the “last mile” is the crucial task of embedding AI into real-world business operations. Organizations have access to powerful models but struggle to connect them to the people who need to use them. The power of AI is wasted if it’s not effectively integrated into business operations, and that requires clear documentation of those operations.

Capturing, documenting, and distributing knowledge at scale is critical to organizational success with AI. Yet our survey showed only 16% of respondents say their workflows are extremely well-documented. The top barriers to proper documentation are a lack of time, cited by 40% of respondents, and a lack of tools, cited by 30%.

The challenge of integrating new technology with old processes was perfectly illustrated in a recent meeting I had with a Fortune 500 executive. The company is pushing for significant productivity gains with AI, but it still relies on an outdated collaboration tool that was never designed for teamwork. This situation highlights the very challenge our survey uncovered: Powerful AI initiatives can stall if teams lack modern collaboration and documentation tools.

This disconnect shows that AI adoption is about more than just the technology itself. For it to truly succeed enterprise-wide, companies need to provide a unified space for teams to brainstorm, plan, document, and make decisions. The fundamentals of successful technology adoption still hold true: You need the right tools to enable collaboration and documentation for AI to truly make an impact.

Collaboration and change management are hidden blockers to AI implementation

A company’s approach to AI is perceived very differently depending on an employee’s role. Our survey found that while 61% of C-suite executives believe their company’s strategy is well considered, that number drops to 49% among managers and just 36% among entry-level employees.

Just like with product development, building a successful AI strategy requires a structured approach. Leaders and teams need a collaborative space to come together, brainstorm, prioritize the most promising opportunities, and map out a clear path forward. As many companies have embraced hybrid or distributed work, supporting remote collaboration with digital tools becomes even more important.

We recently used AI to streamline a strategic challenge for our executive team. A product leader used it to generate a comprehensive preparatory memo in a fraction of the typical time, complete with summaries, benchmarks, and recommendations.

Despite this efficiency, the AI-generated document was merely the foundation. We still had to meet to debate the specifics, prioritize actions, assign ownership, and formally document our decisions and next steps.

According to our survey, 23% of respondents reported that collaboration is frequently a bottleneck in complex work. Employees are willing to embrace change, but friction from poor collaboration adds risk and reduces the potential impact of AI.

Operational readiness enhances your AI readiness

Operations lacking structure are preventing many organizations from implementing AI successfully. We asked teams about their top needs to help them adapt to AI. At the top of their lists were document collaboration (cited by 37% of respondents), process documentation (34%), and visual workflows (33%).

Notice that none of these requests are for more sophisticated AI. The technology is plenty capable already, and most organizations are still just scratching the surface of its full potential. Instead, what teams want most is to ensure that the fundamentals of processes, documentation, and collaboration are covered.

AI offers a significant opportunity for organizations to gain a competitive edge in productivity and efficiency. But moving fast isn’t a guarantee of success. The companies best positioned for successful AI adoption are those that invest in operational excellence, down to the last mile.

This content was produced by Lucid Software. It was not written by MIT Technology Review’s editorial staff.

Powering HPC with next-generation CPUs

For all the excitement around GPUs—the workhorses of today’s AI revolution—the central processing unit (CPU) remains the backbone of high-performance computing (HPC). CPUs still handle 80% to 90% of HPC workloads globally, powering everything from climate modeling to semiconductor design. Far from being eclipsed, they’re evolving in ways that make them more competitive, flexible, and indispensable than ever.

The competitive landscape around CPUs has intensified. Once dominated almost exclusively by Intel’s x86 chips, the market now includes powerful alternatives based on ARM and even emerging architectures like RISC-V. Flagship examples like Japan’s Fugaku supercomputer demonstrate how CPU innovation is pushing performance to new frontiers. Meanwhile, cloud providers like Microsoft and AWS are developing their own silicon, adding even more diversity to the ecosystem.

What makes CPUs so enduring? Flexibility, compatibility, and cost efficiency are key. As Evan Burness of Microsoft Azure points out, CPUs remain the “it-just-works” technology. Moving complex, proprietary code to GPUs can be an expensive and time-consuming effort, while CPUs typically support software continuity across generations with minimal friction. That reliability matters for businesses and researchers who need results, not just raw power.

Innovation is also reshaping what a CPU can be. Advances in chiplet design, on-package memory, and hybrid CPU-GPU architectures are extending the performance curve well beyond the limits of Moore’s Law. For many organizations, the CPU is the strategic choice that balances speed, efficiency, and cost.

Looking ahead, the relationship between CPUs, GPUs, and specialized processors like NPUs will define the future of HPC. Rather than a zero-sum contest, it’s increasingly a question of fit-for-purpose design. As Addison Snell, co-founder and chief executive officer of Intersect360 Research, notes, science and industry never run out of harder problems to solve.

That means CPUs, far from fading, will remain at the center of the computing ecosystem.

To learn more, read the new report “Designing CPUs for next-generation supercomputing.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Designing CPUs for next-generation supercomputing

In Seattle, a meteorologist analyzes dynamic atmospheric models to predict the next major storm system. In Stuttgart, an automotive engineer examines crash-test simulations for vehicle safety certification. And in Singapore, a financial analyst simulates portfolio stress tests to hedge against global economic shocks. 

Each of these professionals—and the consumers, commuters, and investors who depend on their insights—relies on a time-tested pillar of high-performance computing: the humble CPU.

With GPU-powered AI breakthroughs getting the lion’s share of press (and investment) in 2025, it is tempting to assume that CPUs are yesterday’s news. Recent forecasts anticipate that GPU and accelerator installations will increase by 17% year over year through 2030. But, in reality, CPUs are still responsible for the vast majority of today’s most cutting-edge scientific, engineering, and research workloads. Evan Burness, who leads Microsoft Azure’s HPC and AI product teams, estimates that CPUs still support 80% to 90% of HPC simulation jobs today.

In 2025, not only are these systems far from obsolete, they are experiencing a technological renaissance. A new wave of CPU innovation, including high-bandwidth memory (HBM), is delivering major performance gains—without requiring costly architectural resets.

Download the report.

To learn more, watch the new webcast “Powering HPC with next-generation CPUs.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

De-risking investment in AI agents

Automation has become a defining force in the customer experience. Between the chatbots that answer our questions and the recommendation systems that shape our choices, AI-driven tools are now embedded in nearly every interaction. But the latest wave of so-called “agentic AI”—systems that can plan, act, and adapt toward a defined goal—promises to push automation even further.

“Every single person that I’ve spoken to has at least spoken to some sort of GenAI bot on their phones. They expect experiences to be not scripted. It’s almost like we’re not improving customer experience, we’re getting to the point of what customers expect customer experience to be,” says Neeraj Verma, vice president of product management at NICE.

For businesses, the potential is transformative: AI agents that can handle complex service interactions, support employees in real time, and scale seamlessly as customer demands shift. But the move from scripted, deterministic flows to non-deterministic, generative systems brings new challenges. How can you test something that doesn’t always respond the same way twice? How can you balance safety and flexibility when giving an AI system access to core infrastructure? And how can you manage cost, transparency, and ethical risk while still pursuing meaningful returns?
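On the testing question, one common approach is to assert properties of an agent’s behavior rather than exact outputs: sample the system many times and require every run to satisfy invariants such as valid structure, permitted actions, and no leaked sensitive data. The Python sketch below is purely illustrative; the stubbed agent and the specific checks are assumptions standing in for whatever a real deployment would require, not any vendor’s actual test suite.

```python
import json

def stubbed_support_agent(query: str) -> str:
    """Hypothetical stand-in for a non-deterministic AI agent.
    A real system would call a generative model here, and its
    wording would vary from run to run."""
    return json.dumps({"intent": "refund", "reply": f"Happy to help with: {query}"})

def check_invariants(raw_output: str) -> None:
    """Property-based checks: rather than exact-match assertions,
    verify invariants that must hold however the wording varies."""
    data = json.loads(raw_output)                               # output must be valid JSON
    assert data["intent"] in {"refund", "billing", "escalate"}  # allowed actions only
    assert len(data["reply"]) < 1000                            # bounded response length
    assert "password" not in data["reply"].lower()              # no sensitive-data leakage

# Sample the agent repeatedly; every run must satisfy the invariants.
for _ in range(50):
    check_invariants(stubbed_support_agent("Where is my refund?"))
print("all runs satisfied the invariants")
```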

The answers to these questions will determine how, and how quickly, companies embrace the next era of customer experience technology.

Verma argues that the story of customer experience automation over the past decade has been one of shifting expectations—from rigid, deterministic flows to flexible, generative systems. Along the way, businesses have had to rethink how they mitigate risk, implement guardrails, and measure success. The future, Verma suggests, belongs to organizations that focus on outcome-oriented design: tools that work transparently, safely, and at scale.

“I believe that the big winners are going to be the use case companies, the applied AI companies,” says Verma.

Watch the webcast now.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Partnering with generative AI in the finance function

Generative AI has the potential to transform the finance function. By taking on some of the more mundane tasks that can occupy a lot of time, generative AI tools can help free up capacity for more high-value strategic work. For chief financial officers, this could mean spending more time and energy on proactively advising the business on financial strategy as organizations around the world continue to weather ongoing geopolitical and financial uncertainty.

CFOs can use large language models (LLMs) and generative AI tools to support everyday tasks like generating quarterly reports, communicating with investors, and formulating strategic summaries, says Andrew W. Lo, Charles E. and Susan T. Harris professor and director of the Laboratory for Financial Engineering at the MIT Sloan School of Management. “LLMs can’t replace the CFO by any means, but they can take a lot of the drudgery out of the role by providing first drafts of documents that summarize key issues and outline strategic priorities.”
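As a concrete illustration of that drafting workflow, here is a minimal sketch using the OpenAI Python client (one provider among many; the model name, prompt, and figures are illustrative assumptions, not anything prescribed by Lo or Deloitte). The output is a first draft only; the finance team still verifies every figure.

```python
# Minimal sketch: drafting a quarterly summary with an LLM.
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the
# environment; any comparable provider would work similarly.
from openai import OpenAI

client = OpenAI()

figures = "Q3 revenue $412M (+9% YoY); opex $198M; free cash flow $61M."  # illustrative numbers

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You draft concise finance memos. Flag any figure you infer."},
        {"role": "user",
         "content": f"Draft a one-page quarterly summary from these figures: {figures}"},
    ],
)

print(response.choices[0].message.content)  # a first draft for human review, not a final document
```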

Generative AI is also showing promise in functions like treasury, with use cases including cash, revenue, and liquidity forecasting and management, as well as automating contracts and investment analysis. However, the mathematical limitations of LLMs mean challenges remain before generative AI can contribute reliably to forecasting. Regardless, Deloitte’s analysis of its 2024 State of Generative AI in the Enterprise survey found that one-fifth (19%) of finance organizations have already adopted generative AI in the finance function.

Despite return on generative AI investments in finance functions being 8 points below expectations so far for surveyed organizations (see Figure 1), some finance departments appear to be moving ahead with investments. Deloitte’s fourth-quarter 2024 North American CFO Signals survey found that 46% of CFOs who responded expect deployment or spend on generative AI in finance to increase in the next 12 months (see Figure 2). Respondents cite the technology’s potential to help control costs through self-service and automation and free up workers for higher-level, higher-productivity tasks as some of the top benefits of the technology.

“Companies have used AI on the customer-facing side of the house for a long time, but in finance, employees are still creating documents and presentations and emailing them around,” says Robyn Peters, principal in finance transformation at Deloitte Consulting LLP. “Largely, the human-centric experience that customers expect from brands in retail, transportation, and hospitality haven’t been pulled through to the finance organization. And there’s no reason we cannot do that—and, in fact, AI makes it a lot easier to do.”

If CFOs think they can just sit by for the next five years and watch how AI evolves, they may lose out to more nimble competitors that are actively experimenting in the space. Future finance professionals are growing up using generative AI tools too. CFOs should consider reimagining what it looks like to be a successful finance professional, in collaboration with AI.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Adapting to new threats with proactive risk management

In July 2024, a botched update to the software defenses managed by cybersecurity firm CrowdStrike caused more than 8 million Windows systems to fail. From hospitals to manufacturers, stock markets to retail stores, the outage caused parts of the global economy to grind to a halt. Payment systems were disrupted, broadcasters went off the air, and flights were canceled. In all, the outage is estimated to have caused direct losses of more than $5 billion to Fortune 500 companies. For US air carrier Delta Air Lines, the error exposed the brittleness of its systems. The airline suffered weeks of disruptions, leading to $500 million in losses and 7,000 canceled flights.

The magnitude of the CrowdStrike incident revealed just how interconnected digital systems are, and the extensive vulnerabilities in some companies when confronted with an unexpected occurrence. “On any given day, there could be a major weather event or some event like what happened…with CrowdStrike,” said then-US secretary of transportation Pete Buttigieg on announcing an investigation into how Delta Air Lines handled the incident. “The question is, is your airline prepared to absorb something like that and get back on its feet and take care of customers?”

Unplanned downtime poses a major challenge for organizations and is estimated to cost Global 2000 companies an average of $200 million per year. Beyond the financial impact, it can also erode customer trust and loyalty, decrease productivity, and even result in legal or privacy issues.

A 2024 ransomware attack on Change Healthcare, the medical-billing subsidiary of industry giant UnitedHealth Group—the biggest health and medical data breach in US history—exposed the data of around 190 million people and led to weeks of outages for medical groups. Another ransomware attack in 2024, this time on CDK Global, a software firm that works with nearly 15,000 auto dealerships in North America, led to around $1 billion worth of losses for car dealers as a result of the three-week disruption.

Managing risk and mitigating downtime is a growing challenge for businesses. As organizations become ever more interconnected, the expanding surface of networks and the rapid adoption of technologies like AI are exposing new vulnerabilities—and more opportunities for threat actors. Cyberattacks are also becoming increasingly sophisticated and damaging as AI-driven malware and malware-as-a-service platforms turbocharge attacks.

To meet these challenges head on, companies must take a more proactive approach to security and resilience. “We’ve had a traditional way of doing things that’s actually worked pretty well for maybe 15 to 20 years, but it’s been based on detecting an incident after the event,” says Chris Millington, global cyber resilience technical expert at Hitachi Vantara. “Now, we’ve got to be more preventative and use intelligence to focus on making the systems and business more resilient.”

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Transforming CX with embedded real-time analytics 

During Black Friday in 2024, Stripe processed more than $31 billion in transactions, with processing rates peaking at 137,000 transactions per minute, the highest in the company’s history. The financial-services firm had to analyze every transaction in real time to prevent nearly 21 million fraud attempts that could have siphoned more than $910 million from its merchant customers. 

Yet, fraud protection is only one reason that Stripe embraced real-time data analytics. Evaluating trends in massive data flows is essential for the company’s services, such as allowing businesses to bill based on usage and monitor orders and inventory. In fact, many of Stripe’s services would not be possible without real-time analytics, says Avinash Bhat, head of data infrastructure at Stripe. “We have certain products that require real-time analytics, like usage-based billing and fraud detection,” he says. “Without our real-time analytics, we would not have a few of our products and that’s why it’s super important.” 

Stripe is not alone. In today’s digital world, data analysis is increasingly delivered directly to business customers and individual users, allowing real-time, continuous insights to shape user experiences. Ride-hailing apps calculate prices and estimate times of arrival (ETAs) in near-real time. Financial platforms deliver real-time cash-flow analysis. Customers expect and reward data-driven services that reflect what is happening now. 

In fact, the capability to collect and analyze data in real time correlates with companies’ ability to grow. Companies that business leaders scored in the top quartile for real-time operations saw 50% higher revenue growth and net margins than companies in the bottom quartile, according to a survey conducted by the MIT Center for Information Systems Research (CISR) and Insight Partners. The top companies focused on automated processes and fast decision-making at all levels, relying on easily accessible data services updated in real time.

Companies that wait on data are putting themselves in a bind, says Kishore Gopalakrishna, co-founder and CEO of StarTree, a real-time data-analytics technology provider. “The basis of real-time analytics is—when the value of the data is very high—we want to capitalize on it instead of waiting and doing batch analytics,” he says. “Getting access to the data a day, or even hours, later is sometimes actually too late.” 
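To make the batch-versus-real-time distinction concrete, the self-contained Python sketch below (a toy, not Stripe’s or StarTree’s actual pipeline) maintains a sliding one-minute fraud rate per merchant the moment each transaction arrives; a batch job would compute the same ratio hours later, after the value of the data had decayed.

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 60  # sliding one-minute window; threshold is illustrative

class RealTimeFraudRate:
    """Toy streaming aggregate: per-merchant fraud rate over the last minute."""

    def __init__(self):
        self.events = defaultdict(deque)  # merchant -> deque of (timestamp, is_fraud)

    def observe(self, merchant: str, ts: float, is_fraud: bool) -> float:
        window = self.events[merchant]
        window.append((ts, is_fraud))
        # Evict events that have aged out of the window.
        while window and window[0][0] < ts - WINDOW_SECONDS:
            window.popleft()
        frauds = sum(1 for _, f in window if f)
        return frauds / len(window)  # up-to-the-moment fraud rate

# Hypothetical event stream for a single merchant.
stream = RealTimeFraudRate()
for ts, fraud in [(0, False), (10, True), (30, False), (95, False)]:
    print(f"t={ts:>3}s  fraud rate = {stream.observe('merchant_42', ts, fraud):.2f}")
```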

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.