Aligning VMware migration with business continuity

For decades, business continuity planning meant preparing for anomalous events like hurricanes, floods, tornadoes, or regional power outages. In anticipation of these rare disasters, IT teams built playbooks, ran annual tests, crossed their fingers, and hoped they’d never have to use them.

In recent years, an even more persistent threat has emerged. Cyber incidents, particularly ransomware, are now more common—and often, more damaging—than physical disasters. In a recent survey of more than 500 CISOs, almost three-quarters (72%) said their organization had dealt with ransomware in the previous year. Earlier in 2025, ransomware attack rates on enterprises reached record highs.

Mark Vaughn, senior director of the virtualization practice at Presidio, has witnessed the trend firsthand. “When I speak at conferences, I’ll ask the room, ‘How many people have been impacted?’ For disaster recovery, you usually get a few hands,” he says. “But a little over a year ago, I asked how many people in the room had been hit by ransomware, and easily two-thirds of the hands went up.”

Download the full article.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

From vibe coding to context engineering: 2025 in software development

This year, we’ve seen a real-time experiment playing out across the technology industry, one in which AI’s software engineering capabilities have been put to the test against human technologists. And although 2025 may have started with AI looking strong, the transition from vibe coding to what’s being termed context engineering shows that while the work of human developers is evolving, they nevertheless remain absolutely critical.

This is captured in the latest volume of the “Thoughtworks Technology Radar,” a report on the technologies used by our teams on projects with clients. In it, we see the emergence of techniques and tooling designed to help teams better tackle the problem of managing context when working with LLMs and AI agents. 

Taken together, there’s a clear signal of the direction of travel in software engineering and even AI more broadly. After years of the industry assuming progress in AI is all about scale and speed, we’re starting to see that what matters is the ability to handle context effectively.

Vibes, antipatterns, and new innovations 

In February 2025, Andrej Karpathy coined the term vibe coding. It took the industry by storm. It certainly sparked debate at Thoughtworks; many of us were skeptical. On an April episode of our technology podcast, we talked about our concerns and were cautious about how vibe coding might evolve.

Unsurprisingly, given the implied imprecision of vibe-based coding, antipatterns have proliferated. In the latest volume of the Technology Radar, for instance, we once again noted complacency with AI-generated code. It's also worth pointing out that early ventures into vibe coding exposed a degree of complacency about what AI models can actually handle: users demanded more, prompts grew larger, and model reliability started to falter.

Experimenting with generative AI 

This is one of the drivers behind the increasing interest in context engineering. Working with coding assistants like Claude Code and Augment Code, we're well aware of its importance. Providing necessary context (or knowledge priming) is crucial: it makes outputs more consistent and reliable, which ultimately leads to better software that needs less rework, reducing rewrites and potentially driving productivity.
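What knowledge priming looks like in practice varies by tool, but the core move is bundling curated project material into the prompt an assistant sees. The sketch below is a generic illustration of that idea; the function and file names are hypothetical assumptions, not the API of Claude Code, Augment Code, or any other product.

```python
from pathlib import Path

def build_primed_prompt(task: str, context_files: list[str]) -> str:
    """Assemble a prompt that leads with curated project context.

    This is a minimal sketch of "knowledge priming": the assistant sees
    team conventions, architecture notes, or reference code before the
    task itself, so its output is anchored to the project's reality.
    """
    sections = []
    for name in context_files:
        text = Path(name).read_text()
        sections.append(f"### {name}\n{text}")
    context = "\n\n".join(sections)
    return f"Project context:\n{context}\n\nTask:\n{task}"
```

In real tools this curation is often handled by convention files (such as the agents.md approach discussed below) rather than assembled by hand, but the principle is the same: the quality of what goes in front of the task shapes the quality of what comes out.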

When context is prepared effectively, we've seen good results using generative AI to understand legacy codebases. Indeed, done effectively with the appropriate context, it can even help when we don't have full access to the source code.

It’s important to remember that context isn’t just about more data and more detail. This is one of the lessons we’ve taken from using generative AI for forward engineering. It might sound counterintuitive, but in this scenario, we’ve found AI to be more effective when it’s further abstracted from the underlying system — or, in other words, further removed from the specifics of the legacy code. This is because the solution space becomes much wider, allowing us to better leverage the generative and creative capabilities of the AI models we use.

Context is critical in the agentic era

The backdrop of changes that have happened over recent months is the growth of agents and agentic systems — both as products organizations want to develop and as technology they want to leverage. This has forced the industry to properly reckon with context and move away from a purely vibes-based approach.

Indeed, far from simply getting on with tasks they’ve been programmed to do, agents require significant human intervention to ensure they are equipped to respond to complex and dynamic contexts. 

There are a number of context-related technologies aimed at tackling this challenge, including agents.md, Context7, and Mem0. But it’s also a question of approach. For instance, we’ve found success with anchoring coding agents to a reference application — essentially providing agents with a contextual ground truth. We’re also experimenting with using teams of coding agents; while this might sound like it increases complexity, it actually removes some of the burden of having to give a single agent all the dense layers of context it needs to do its job successfully.

Toward consensus

Hopefully the space will mature as practices and standards become embedded. It would be remiss not to mention the significance of the Model Context Protocol, which has emerged as the go-to protocol for connecting LLMs or agentic AI to sources of context. Relatedly, the agent2agent (A2A) protocol leads the way in standardizing how agents interact with one another.

It remains to be seen whether these standards win out. But in any case, it’s important to consider the day-to-day practices that allow us, as software engineers and technologists, to collaborate effectively even when dealing with highly complex and dynamic systems. Sure, AI needs context, but so do we. Techniques like curated shared instructions for software teams may not sound like the hottest innovation on the planet, but they can be remarkably powerful for helping teams work together.

There’s perhaps also a conversation to be had about what these changes mean for agile software development. Spec-driven development is one idea that appears to have some traction, but there are still questions about how we remain adaptable and flexible while also building robust contextual foundations and ground truths for AI systems.

Software engineers can solve the context challenge

Clearly, 2025 has been a huge year in the evolution of software engineering as a practice. There's a lot the industry needs to monitor closely, but it's also an exciting time. And while fears about AI job automation may remain, the fact that the conversation has moved from questions of speed and scale to context puts software engineers right at the heart of things.

Once again, it will be down to them to experiment, collaborate, and learn — the future depends on it.

This content was produced by Thoughtworks. It was not written by MIT Technology Review’s editorial staff.

A new ion-based quantum computer makes error correction simpler

The US- and UK-based company Quantinuum today unveiled Helios, its third-generation quantum computer, which includes expanded computing power and error correction capability. 

Like all other existing quantum computers, Helios is not powerful enough to execute the industry’s dream money-making algorithms, such as those that would be useful for materials discovery or financial modeling. But Quantinuum’s machines, which use individual ions as qubits, could be easier to scale up than quantum computers that use superconducting circuits as qubits, such as Google’s and IBM’s.

“Helios is an important proof point in our road map about how we’ll scale to larger physical systems,” says Jennifer Strabley, vice president at Quantinuum, which formed in 2021 from the merger of Honeywell Quantum Solutions and Cambridge Quantum. Honeywell remains Quantinuum’s majority owner.

Located at Quantinuum’s facility in Colorado, Helios comprises a myriad of components, including mirrors, lasers, and optical fiber. Its core is a thumbnail-size chip containing the barium ions that serve as the qubits, which perform the actual computing. Helios computes with 98 barium ions at a time; its predecessor, H2, used 56 ytterbium qubits. The barium ions are an upgrade, as they have proven easier to control than ytterbium. These components all sit within a chamber that is cooled to about 15 kelvin (-432.67 ℉), on top of an optical table. Users can access the computer by logging in remotely over the cloud.

Helios encodes information in the ions’ quantum states, which can represent not only 0s and 1s, like the bits in classical computing, but probabilistic combinations of both, known as superpositions. A hallmark of quantum computing, these superposition states are akin to the state of a coin flipping in the air—neither heads nor tails, but some probability of both. 
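The coin-flip analogy can be made concrete with a few lines of arithmetic. The sketch below is a generic illustration of how superposition amplitudes translate into measurement probabilities; it is textbook quantum mechanics, not Quantinuum's software.

```python
import math

# A single qubit state is a pair of amplitudes (a, b) for the outcomes
# 0 and 1. Measuring the qubit yields 0 with probability |a|^2 and
# 1 with probability |b|^2 -- the "coin in the air" resolves to one side.
def measurement_probabilities(a, b):
    p0 = abs(a) ** 2
    p1 = abs(b) ** 2
    assert math.isclose(p0 + p1, 1.0), "amplitudes must be normalized"
    return p0, p1

# An equal superposition: neither 0 nor 1, but an even chance of both.
equal = (1 / math.sqrt(2), 1 / math.sqrt(2))
print(measurement_probabilities(*equal))  # roughly (0.5, 0.5)
```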

Quantum computing exploits the unique mathematics of quantum-mechanical objects like ions to perform computations. Proponents of the technology believe this should enable commercially useful applications, such as highly accurate chemistry simulations for the development of batteries or better optimization algorithms for logistics and finance. 

In the last decade, researchers at companies and academic institutions worldwide have incrementally developed the technology with billions of dollars of private and public funding. Still, quantum computing is in an awkward teenage phase. It’s unclear when it will bring profitable applications. Of late, developers have focused on scaling up the machines. 

A key challenge to making a more powerful quantum computer is implementing error correction. Like all computers, quantum computers occasionally make mistakes. Classical computers correct these errors by storing information redundantly. Owing to quirks of quantum mechanics, quantum computers can’t do this and require special correction techniques. 

Quantum error correction involves storing a single unit of information in multiple qubits rather than in a single qubit. The exact methods vary depending on the specific hardware of the quantum computer, with some machines requiring more qubits per unit of information than others. The industry refers to an error-corrected unit of quantum information as a “logical qubit.” Helios needs two ions, or “physical qubits,” to create one logical qubit.

This is fewer physical qubits than needed in recent quantum computers made of superconducting circuits. In 2024, Google used 105 physical qubits to create a logical qubit. This year, IBM used 12 physical qubits per single logical qubit, and Amazon Web Services used nine physical qubits to produce a single logical qubit. All three companies use variations of superconducting circuits as qubits.
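The practical consequence of those ratios is simple arithmetic. As a rough sketch using only the figures quoted above (and ignoring the many engineering details behind real encodings):

```python
# Physical qubits needed per logical qubit, as reported in the article.
overhead = {
    "Quantinuum Helios (trapped ions)": 2,
    "Google (superconducting, 2024)": 105,
    "IBM (superconducting, 2025)": 12,
    "AWS (superconducting, 2025)": 9,
}

def logical_capacity(physical_qubits, physical_per_logical):
    """How many error-corrected logical qubits a machine could host."""
    return physical_qubits // physical_per_logical

# With its 98 barium ions, Helios's 2:1 encoding yields 49 logical qubits;
# at Google's 2024 ratio, 98 physical qubits would not even yield one.
print(logical_capacity(98, overhead["Quantinuum Helios (trapped ions)"]))  # 49
```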

Helios is noteworthy for its qubits’ precision, says Rajibul Islam, a physicist at the University of Waterloo in Canada, who is not affiliated with Quantinuum. The computer’s qubit error rates are low to begin with, which means it doesn’t need to devote as much of its hardware to error correction. Quantinuum had pairs of qubits interact in an operation known as entanglement and found that they behaved as expected 99.921% of the time. “To the best of my knowledge, no other platform is at this level,” says Islam.

This advantage comes from a design property of ions. Unlike superconducting circuits, which are affixed to the surface of a quantum computing chip, ions on Quantinuum’s Helios chip can be shuffled around. Because the ions can move, they can interact with every other ion in the computer, a capacity known as “all-to-all connectivity.” This connectivity allows for error correction approaches that use fewer physical qubits. In contrast, superconducting qubits can only interact with their direct neighbors, so a computation between two non-adjacent qubits requires several intermediate steps involving the qubits in between. “It’s becoming increasingly more apparent how important all-to-all-connectivity is for these high-performing systems,” says Strabley.
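A toy model shows why connectivity matters so much. Assuming a simplified one-dimensional nearest-neighbor layout (real superconducting chips are typically two-dimensional grids, but the principle is the same), the routing cost of a two-qubit operation grows with the distance between the qubits:

```python
def swap_overhead(i, j):
    """Extra SWAP steps needed to bring two qubits adjacent on a
    1-D nearest-neighbor chip, where only neighbors can interact.

    With all-to-all connectivity (shuttled ions), this cost is always
    zero: any pair can interact directly.
    """
    return max(abs(i - j) - 1, 0)

print(swap_overhead(0, 1))   # adjacent qubits: 0 extra steps
print(swap_overhead(0, 97))  # ends of a 98-qubit line: 96 extra steps
```

Every extra step is another chance for an error to creep in, which is one reason low-overhead error correction is easier on hardware where qubits can be moved.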

Still, it’s not clear what type of qubit will win in the long run. Each type has design benefits that could ultimately make it easier to scale. Ions (which are used by the US-based startup IonQ as well as Quantinuum) offer an advantage because they produce relatively few errors, says Islam: “Even with fewer physical qubits, you can do more.” However, it’s easier to manufacture superconducting qubits. And qubits made of neutral atoms, such as the quantum computers built by the Boston-based startup QuEra, are “easier to trap” than ions, he says. 

Besides increasing the number of qubits on its chip, another notable achievement for Quantinuum is that it demonstrated error correction “on the fly,” says David Hayes, the company’s director of computational theory and design. That’s a new capability for its machines. Nvidia GPUs were used to identify errors in the qubits in parallel. Hayes thinks that GPUs are more effective for error correction than chips known as FPGAs, which are also used in the industry.

Quantinuum has used its computers to investigate the basic physics of magnetism and superconductivity. Earlier this year, it reported simulating a magnet on H2, Helios’s predecessor, with the claim that it “rivals the best classical approaches in expanding our understanding of magnetism.” Along with announcing the introduction of Helios, the company has used the machine to simulate the behavior of electrons in a high-temperature superconductor.

“These aren’t contrived problems,” says Hayes. “These are problems that the Department of Energy, for example, is very interested in.”

Quantinuum plans to build another version of Helios in its facility in Minnesota. It has already begun to build a prototype for a fourth-generation computer, Sol, which it plans to deliver in 2027, with 192 physical qubits. Then, in 2029, the company hopes to release Apollo, which it says will have thousands of physical qubits and should be “fully fault tolerant,” or able to implement error correction at a large scale.

Turning migration into modernization

In late 2023, a long-trusted virtualization staple became the biggest open question on the enterprise IT roadmap.

Amid concerns over VMware licensing changes and steeper support costs, analysts noticed an exodus mentality. Forrester predicted that one in five large VMware customers would begin moving away from the platform in 2024. A subsequent Gartner community poll found that 74% of respondents were rethinking their VMware relationship in light of recent changes. CIOs contending with pricing hikes and product roadmap opacity face a daunting choice: double down on a familiar but costlier stack, or use the disruption to rethink how—and where—critical workloads should run.

“There’s still a lot of uncertainty in the marketplace around VMware,” explains Matt Crognale, senior director, migrations and modernization at cloud modernization firm Effectual, adding that the VMware portfolio has been streamlined and refocused over the past couple of years. “The portfolio has been trimmed down to a core offering focused on the technology versus disparate systems.”

Download the full article.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Powering HPC with next-generation CPUs

For all the excitement around GPUs—the workhorses of today’s AI revolution—the central processing unit (CPU) remains the backbone of high-performance computing (HPC). CPUs still handle 80% to 90% of HPC workloads globally, powering everything from climate modeling to semiconductor design. Far from being eclipsed, they’re evolving in ways that make them more competitive, flexible, and indispensable than ever.

The competitive landscape around CPUs has intensified. Once dominated almost exclusively by Intel’s x86 chips, the market now includes powerful alternatives based on ARM and even emerging architectures like RISC-V. Flagship examples like Japan’s Fugaku supercomputer demonstrate how CPU innovation is pushing performance to new frontiers. Meanwhile, cloud providers like Microsoft and AWS are developing their own silicon, adding even more diversity to the ecosystem.

What makes CPUs so enduring? Flexibility, compatibility, and cost efficiency are key. As Evan Burness of Microsoft Azure points out, CPUs remain the “it-just-works” technology. Moving complex, proprietary code to GPUs can be an expensive and time-consuming effort, while CPUs typically support software continuity across generations with minimal friction. That reliability matters for businesses and researchers who need results, not just raw power.

Innovation is also reshaping what a CPU can be. Advances in chiplet design, on-package memory, and hybrid CPU-GPU architectures are extending the performance curve well beyond the limits of Moore’s Law. For many organizations, the CPU is the strategic choice that balances speed, efficiency, and cost.

Looking ahead, the relationship between CPUs, GPUs, and specialized processors like NPUs will define the future of HPC. Rather than a zero-sum contest, it’s increasingly a question of fit-for-purpose design. As Addison Snell, co-founder and chief executive officer of Intersect360 Research, notes, science and industry never run out of harder problems to solve.

That means CPUs, far from fading, will remain at the center of the computing ecosystem.

To learn more, read the new report “Designing CPUs for next-generation supercomputing.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Designing CPUs for next-generation supercomputing

In Seattle, a meteorologist analyzes dynamic atmospheric models to predict the next major storm system. In Stuttgart, an automotive engineer examines crash-test simulations for vehicle safety certification. And in Singapore, a financial analyst simulates portfolio stress tests to hedge against global economic shocks. 

Each of these professionals—and the consumers, commuters, and investors who depend on their insights—relies on a time-tested pillar of high-performance computing: the humble CPU.

With GPU-powered AI breakthroughs getting the lion’s share of press (and investment) in 2025, it is tempting to assume that CPUs are yesterday’s news. Recent predictions anticipate that GPU and accelerator installations will increase by 17% year over year through 2030. But, in reality, CPUs are still responsible for the vast majority of today’s most cutting-edge scientific, engineering, and research workloads. Evan Burness, who leads Microsoft Azure’s HPC and AI product teams, estimates that CPUs still support 80% to 90% of HPC simulation jobs today.

In 2025, not only are these systems far from obsolete, they are experiencing a technological renaissance. A new wave of CPU innovation, including high-bandwidth memory (HBM), is delivering major performance gains—without requiring costly architectural resets.

Download the report.

To learn more, watch the new webcast “Powering HPC with next-generation CPUs.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Adapting to new threats with proactive risk management

In July 2024, a botched update to the software defenses managed by cybersecurity firm CrowdStrike caused more than 8 million Windows systems to fail. From hospitals to manufacturers, stock markets to retail stores, the outage caused parts of the global economy to grind to a halt. Payment systems were disrupted, broadcasters went off the air, and flights were canceled. In all, the outage is estimated to have caused direct losses of more than $5 billion to Fortune 500 companies. For US air carrier Delta Air Lines, the error exposed the brittleness of its systems. The airline suffered weeks of disruptions, leading to $500 million in losses and 7,000 canceled flights.

The magnitude of the CrowdStrike incident revealed just how interconnected digital systems are, and the extensive vulnerabilities in some companies when confronted with an unexpected occurrence. “On any given day, there could be a major weather event or some event like what happened…with CrowdStrike,” said then-US secretary of transportation Pete Buttigieg on announcing an investigation into how Delta Air Lines handled the incident. “The question is, is your airline prepared to absorb something like that and get back on its feet and take care of customers?”

Unplanned downtime poses a major challenge for organizations, and is estimated to cost Global 2000 companies on average $200 million per year. Beyond the financial impact, it can also erode customer trust and loyalty, decrease productivity, and even result in legal or privacy issues.

A 2024 ransomware attack on Change Healthcare, the medical-billing subsidiary of industry giant UnitedHealth Group—the biggest health and medical data breach in US history—exposed the data of around 190 million people and led to weeks of outages for medical groups. Another ransomware attack in 2024, this time on CDK Global, a software firm that works with nearly 15,000 auto dealerships in North America, led to around $1 billion worth of losses for car dealers as a result of the three-week disruption.

Managing risk and mitigating downtime is a growing challenge for businesses. As organizations become ever more interconnected, the expanding surface of networks and the rapid adoption of technologies like AI are exposing new vulnerabilities—and more opportunities for threat actors. Cyberattacks are also becoming increasingly sophisticated and damaging as AI-driven malware and malware-as-a-service platforms turbocharge attacks.

To prepare for these challenges head on, companies must take a more proactive approach to security and resilience. “We’ve had a traditional way of doing things that’s actually worked pretty well for maybe 15 to 20 years, but it’s been based on detecting an incident after the event,” says Chris Millington, global cyber resilience technical expert at Hitachi Vantara. “Now, we’ve got to be more preventative and use intelligence to focus on making the systems and business more resilient.”

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Why basic science deserves our boldest investment

In December 1947, three physicists at Bell Telephone Laboratories—John Bardeen, William Shockley, and Walter Brattain—built a compact electronic device using thin gold wires and a piece of germanium, a material known as a semiconductor. Their invention, later named the transistor (for which they were awarded the Nobel Prize in 1956), could amplify and switch electrical signals, marking a dramatic departure from the bulky and fragile vacuum tubes that had powered electronics until then.

Its inventors weren’t chasing a specific product. They were asking fundamental questions about how electrons behave in semiconductors, experimenting with surface states and electron mobility in germanium crystals. Over months of trial and refinement, they combined theoretical insights from quantum mechanics with hands-on experimentation in solid-state physics—work many might have dismissed as too basic, academic, or unprofitable.

Their efforts culminated in a moment that now marks the dawn of the information age. Transistors don’t usually get the credit they deserve, yet they are the bedrock of every smartphone, computer, satellite, MRI scanner, GPS system, and artificial-intelligence platform we use today. With their ability to modulate (and route) electrical current at astonishing speeds, transistors make modern and future computing and electronics possible.

This breakthrough did not emerge from a business plan or product pitch. It arose from open-ended, curiosity-driven research and enabling development, supported by an institution that saw value in exploring the unknown. It took years of trial and error, collaborations across disciplines, and a deep belief that understanding nature—even without a guaranteed payoff—was worth the effort.

After the first successful demonstration in late 1947, the invention of the transistor remained confidential while Bell Labs filed patent applications and continued development. It was publicly announced at a press conference on June 30, 1948, in New York City. The scientific explanation followed in a seminal paper published in the journal Physical Review.

How do they work? At their core, transistors are made of semiconductors—materials like germanium and, later, silicon—that can either conduct or resist electricity depending on subtle manipulations of their structure and charge. In a typical transistor, a small voltage applied to one part of the device (the gate) either allows or blocks the electric current flowing through another part (the channel). It’s this simple control mechanism, scaled up billions of times, that lets your phone run apps, your laptop render images, and your search engine return answers in milliseconds.
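That control mechanism can be caricatured in a few lines of code. The sketch below deliberately treats each transistor as an ideal Boolean switch (real devices are analog and far subtler), but it shows how a gate-controlled channel becomes logic:

```python
# Toy model: a transistor as a voltage-controlled switch.
def n_transistor(gate_high: bool) -> bool:
    """An n-type switch conducts when its gate voltage is high."""
    return gate_high

def nand(a: bool, b: bool) -> bool:
    # Two switches in series pull the output low only when BOTH gates
    # are high -- the classic NAND behavior.
    pulls_low = n_transistor(a) and n_transistor(b)
    return not pulls_low

# NAND is universal: with enough of these, every other logic function --
# and hence an entire processor -- can be built.
print(nand(True, True))   # False
print(nand(True, False))  # True
```

Scale this ideal switch to tens of billions of physical devices toggling billions of times per second, and you have a modern chip.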

Though early devices used germanium, researchers soon discovered that silicon—more thermally stable, moisture resistant, and far more abundant—was better suited for industrial production. By the late 1950s, the transition to silicon was underway, making possible the development of integrated circuits and, eventually, the microprocessors that power today’s digital world.

A modern chip the size of a human fingernail now contains tens of billions of silicon transistors, each measured in nanometers—smaller than many viruses. These tiny switches turn on and off billions of times per second, controlling the flow of electrical signals involved in computation, data storage, audio and visual processing, and artificial intelligence. They form the fundamental infrastructure behind nearly every digital device in use today. 

The global semiconductor industry is now worth over half a trillion dollars. Devices that began as experimental prototypes in a physics lab now underpin economies, national security, health care, education, and global communication. But the transistor’s origin story carries a deeper lesson—one we risk forgetting.

Much of the fundamental understanding that moved transistor technology forward came from federally funded university research. Nearly a quarter of transistor research at Bell Labs in the 1950s was supported by the federal government. Much of the rest was subsidized by revenue from AT&T’s monopoly on the US phone system, which flowed into industrial R&D.

Inspired by the 1945 report “Science: The Endless Frontier,” authored by Vannevar Bush at the request of President Truman, the US government began a long-standing tradition of investing in basic research. These investments have paid steady dividends across many scientific domains—from nuclear energy to lasers, and from medical technologies to artificial intelligence. Trained in fundamental research, generations of students have emerged from university labs with the knowledge and skills necessary to push existing technology beyond its known capabilities.

And yet, funding for basic science—and for the education of those who can pursue it—is under increasing pressure. The new White House’s proposed federal budget includes deep cuts to the Department of Energy and the National Science Foundation (though Congress may deviate from those recommendations). Already, the National Institutes of Health has canceled or paused more than $1.9 billion in grants, while NSF STEM education programs suffered more than $700 million in terminations.

These losses have forced some universities to freeze graduate student admissions, cancel internships, and scale back summer research opportunities—making it harder for young people to pursue scientific and engineering careers. In an age dominated by short-term metrics and rapid returns, it can be difficult to justify research whose applications may not materialize for decades. But those are precisely the kinds of efforts we must support if we want to secure our technological future.

Consider John McCarthy, the mathematician and computer scientist who coined the term “artificial intelligence.” In the late 1950s, while at MIT, he led one of the first AI groups and developed Lisp, a programming language still used today in scientific computing and AI applications. At the time, practical AI seemed far off. But that early foundational work laid the groundwork for today’s AI-driven world.

After the initial enthusiasm of the 1950s through the ’70s, interest in neural networks—a leading AI architecture today inspired by the human brain—declined during the so-called “AI winters” of the late 1990s and early 2000s. Limited data, inadequate computational power, and theoretical gaps made it hard for the field to progress. Still, researchers like Geoffrey Hinton and John Hopfield pressed on. Hopfield, now a 2024 Nobel laureate in physics, first introduced his groundbreaking neural network model in 1982, in a paper published in Proceedings of the National Academy of Sciences of the USA. His work revealed the deep connections between collective computation and the behavior of disordered magnetic systems. Together with the work of colleagues including Hinton, who was awarded the Nobel the same year, this foundational research seeded the explosion of deep-learning technologies we see today.

One reason neural networks now flourish is the graphics processing unit, or GPU—originally designed for gaming but now essential for the matrix-heavy operations of AI. These chips themselves rely on decades of fundamental research in materials science and solid-state physics: high-dielectric materials, strained silicon alloys, and other advances making it possible to produce the most efficient transistors possible. We are now entering another frontier, exploring memristors, phase-changing and 2D materials, and spintronic devices.

If you’re reading this on a phone or laptop, you’re holding the result of a gamble someone once made on curiosity. That same curiosity is still alive in university and research labs today—in often unglamorous, sometimes obscure work quietly laying the groundwork for revolutions that will infiltrate some of the most essential aspects of our lives 50 years from now. At the leading physics journal where I am editor, my collaborators and I see the painstaking work and dedication behind every paper we handle. Our modern economy—with giants like Nvidia, Microsoft, Apple, Amazon, and Alphabet—would be unimaginable without the humble transistor and the passion for knowledge fueling the relentless curiosity of scientists like those who made it possible.

The next transistor may not look like a switch at all. It might emerge from new kinds of materials (such as quantum, hybrid organic-inorganic, or hierarchical types) or from tools we haven’t yet imagined. But it will need the same ingredients: solid fundamental knowledge, resources, and freedom to pursue open questions driven by curiosity, collaboration—and most importantly, financial support from someone who believes it’s worth the risk.

Julia R. Greer is a materials scientist at the California Institute of Technology. She is a judge for MIT Technology Review’s Innovators Under 35 and a former honoree (in 2008).

Transforming CX with embedded real-time analytics 

During Black Friday in 2024, Stripe processed more than $31 billion in transactions, with processing rates peaking at 137,000 transactions per minute, the highest in the company’s history. The financial-services firm had to analyze every transaction in real time to prevent nearly 21 million fraud attempts that could have siphoned more than $910 million from its merchant customers. 

Yet, fraud protection is only one reason that Stripe embraced real-time data analytics. Evaluating trends in massive data flows is essential for the company’s services, such as allowing businesses to bill based on usage and monitor orders and inventory. In fact, many of Stripe’s services would not be possible without real-time analytics, says Avinash Bhat, head of data infrastructure at Stripe. “We have certain products that require real-time analytics, like usage-based billing and fraud detection,” he says. “Without our real-time analytics, we would not have a few of our products and that’s why it’s super important.” 

Stripe is not alone. In today’s digital world, data analysis is increasingly delivered directly to business customers and individual users, allowing real-time, continuous insights to shape user experiences. Ride-hailing apps calculate prices and estimated times of arrival (ETAs) in near real time. Financial platforms deliver real-time cash-flow analysis. Customers expect and reward data-driven services that reflect what is happening now.

In fact, the capability to collect and analyze data in real time correlates with a company’s ability to grow. Companies that scored in the top quartile for real-time operations saw 50% higher revenue growth and net margins than companies in the bottom quartile, according to a survey conducted by the MIT Center for Information Systems Research (CISR) and Insight Partners. The top companies focused on automated processes and fast decision-making at all levels, relying on easily accessible data services updated in real time.

Companies that wait on data are putting themselves in a bind, says Kishore Gopalakrishna, co-founder and CEO of StarTree, a real-time data-analytics technology provider. “The basis of real-time analytics is—when the value of the data is very high—we want to capitalize on it instead of waiting and doing batch analytics,” he says. “Getting access to the data a day, or even hours, later is sometimes actually too late.” 
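The contrast Gopalakrishna draws between streaming and batch analytics can be made concrete with a toy sketch. The example below is purely illustrative and has no connection to Stripe's actual systems: it flags a card that exceeds an invented per-window transaction threshold as each event arrives, rather than waiting for an end-of-day batch job—by which point, as he notes, the fraud has already happened.

```python
from collections import defaultdict, deque

# Toy stream of (card_id, timestamp_in_seconds) events.
# Card IDs, window, and threshold are all invented for illustration.
EVENTS = [
    ("card_a", 0), ("card_b", 1), ("card_a", 2), ("card_a", 3),
    ("card_a", 4), ("card_b", 30), ("card_a", 5),
]

WINDOW = 10     # sliding window, in seconds
THRESHOLD = 3   # flag a card exceeding this many events per window

def flag_bursts(events, window=WINDOW, threshold=THRESHOLD):
    """Process events one at a time, flagging bursts as they occur
    instead of discovering them in a later batch report."""
    recent = defaultdict(deque)  # card_id -> timestamps inside window
    flags = []
    for card, ts in events:
        q = recent[card]
        q.append(ts)
        # Evict timestamps that have aged out of the sliding window.
        while q and ts - q[0] > window:
            q.popleft()
        if len(q) > threshold:
            flags.append((card, ts))  # flagged the moment it happens
    return flags

print(flag_bursts(EVENTS))  # card_a is flagged at seconds 4 and 5
```

A batch version of the same logic would produce an identical list—but only after the stream closed, which is exactly the delay real-time architectures are built to eliminate.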

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.