Consolidating systems for AI with iPaaS

For decades, enterprises reacted to shifting business pressures with stopgap technology solutions. To rein in rising infrastructure costs, they adopted cloud services that could scale on demand. When customers shifted their lives onto smartphones, companies rolled out mobile apps to keep pace. And when businesses began needing real-time visibility into factories and stockrooms, they layered on IoT systems to supply those insights.

Each new plug-in or platform promised better, more efficient operations. And individually, many delivered. But as more and more solutions stacked up, IT teams had to string together a tangled web to connect them—less an IT ecosystem and more of a make-do collection of ad-hoc workarounds.

That reality has led to bottlenecks and maintenance burdens, and the impact is showing up in performance. Today, fewer than half of CIOs (48%) say their current digital initiatives are meeting or exceeding business outcome targets. Another 2025 survey found that operations leaders point to integration complexity and data quality issues as top culprits for why investments haven’t delivered as expected.

Achim Kraiss, chief product officer of SAP Integration Suite, elaborates on the wide-ranging problems inherent in patchwork IT: “A fragmented landscape makes it difficult to see and control end-to-end business processes,” he explains. “Monitoring, troubleshooting, and governance all suffer. Costs go up because of all the complex mappings and multi-application connectivity you have to maintain.”

These challenges take on new significance as enterprises look to adopt AI. As AI becomes embedded in everyday workflows, systems are suddenly expected to move far larger volumes of data, at higher speeds, and with tighter coordination than yesterday’s architectures were built to sustain.

As companies now prepare for an AI-powered future, whether that is generative AI, machine learning, or agentic AI, many are realizing that the way data moves through their business matters just as much as the insights it generates. As a result, organizations are moving away from scattered integration tools and toward consolidated, end-to-end platforms that restore order and streamline how systems interact.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Reimagining ERP for the agentic AI era

The story of enterprise resource planning (ERP) is really a story of businesses learning to organize themselves around the latest, greatest technology of the times. In the 1960s through the ’80s, mainframes, material requirements planning (MRP), and manufacturing resource planning (MRP II) brought core business data from file cabinets to centralized systems. Client-server architectures defined the ’80s and ’90s, taking digitization mainstream during the internet’s infancy. And in the 21st century, as work moved beyond the desktop, SaaS and cloud ushered in flexible access and elastic infrastructure.

The rise of composability and agentic AI marks yet another dawn—and an apt one for the nascent intelligence age. Composable architectures let organizations assemble capabilities from multiple systems in a mix-and-match fashion, so they can swap vendor gridlock for an à la carte portfolio of fit-for-purpose modules. On top of that architectural shift, agentic AI enables coordination across systems that weren’t originally designed to talk to one another.

Early indicators suggest that AI-enabled ERP will yield meaningful performance gains: One 2024 study found that organizations implementing AI-driven ERP solutions stand to gain around a 30% boost in user satisfaction and a 25% lift in productivity; another suggested that AI-driven ERP can lead to processing time savings of up to 45%, as well as improvements in decision accuracy to the tune of 60%.

These dual advancements address long-standing needs that previous ERP eras fell short of delivering: freedom to innovate outside of vendor roadmaps, capacity for rapid iteration, and true interoperability across all critical functions. This shift signals the end of monolithic dependency as well as a once-in-a-generation opportunity for early movers to gain a competitive edge.

Key takeaways include:

  • Enterprises are moving away from monolithic ERP vendor upgrades in favor of modular architectures that allow them to change or modernize components independently while keeping a stable core for essential transactions.
  • Agentic AI is a timely complement to composability, functioning as a UX and orchestration layer that can coordinate workflows across disparate systems and turn multi-step processes into automated, cross-platform operations.
  • These dual shifts are finally enabling technology architecture to organize around the business, instead of the business around the ERP. Companies can modernize by reconfiguring and extending what they already have, rather than relying on ERP-centric upgrades.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Securing digital assets as crypto crime surges

In February 2025, cyberattackers thought to be linked to North Korea executed a sophisticated supply chain attack on cryptocurrency exchange Bybit. By targeting its infrastructure and multi-signature security process, hackers managed to steal more than $1.5 billion worth of Ethereum in the largest known digital-asset theft to date.

The ripple effects were felt across the cryptocurrency market, with the price of Bitcoin dropping 20% from its record high in January. And the massive losses put 2025 on track to be the worst year in history for cryptocurrency theft.

Bitcoin, Ethereum, and stablecoins have established themselves as benchmark monetary vehicles, and, despite volatility, their values continue to rise. In October 2025, the value of cryptocurrency and other digital assets topped $4 trillion.

Yet, with this burgeoning value and liquidity comes more attention from cybercriminals and digital thieves. The Bybit attack demonstrates how focused sophisticated attackers are on finding ways to break the security measures that guard the crypto ecosystem, says Charles Guillemet, chief technology officer of Ledger, a provider of secure signer platforms.

“The attackers were very well organized, they have plenty of money, and they are spending a lot of time and resources trying to attack big stuff, because they can,” he says. “In terms of opportunity costs, it’s a big investment, but if at the end they earn $1.4 billion it makes sense to do this investment.”

But it also demonstrates how the crypto threat landscape has pitfalls not just for the unwary but for the tech savvy too. On the one hand, cybercriminals are using techniques like social engineering to target end users. On the other, they are increasingly looking for vulnerabilities to exploit at different points in the cryptocurrency infrastructure.

Historically, owners of digital assets have had to stand against these attackers alone. But now, cybersecurity firms and cryptocurrency-solution providers are stepping in with new defenses, powered by in-depth threat research.

A treasure trove for attackers

One of the advantages of cryptocurrency is self-custody. Users can store their private key—the critical piece of alphanumeric code that proves ownership and grants full control over digital assets—in either a software or hardware wallet to safeguard it.

But users must put their faith in the security of the wallet technology, and, because the data is the asset, if the keys are lost or forgotten, the value too can be lost.

“If I hack your credit card, what is the issue? You will call your bank, and they will manage to revert the operations,” says Vincent Bouzon, head of the Donjon research team at Ledger. “The problem with crypto is, if something happens, it’s too late. So we must eliminate the possibility of vulnerabilities and give users security.”

Increasingly, attackers are focusing on digital assets known as stablecoins, a form of cryptocurrency that is pegged to the value of a hard asset, such as gold, or a fiat currency, like the US dollar.

Stablecoins rely on smart contracts—digital contracts stored on blockchain that use pre-set code to manage issuance, maintain value, and enforce rules—that can be vulnerable to different classes of attacks, often taking advantage of users’ credulity or lack of awareness about the threats. Post-theft countermeasures, such as freezing coin transfers and blacklisting addresses, can lessen the risk of these kinds of attacks, however.
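
To make those countermeasures concrete, here is a minimal Python sketch—a hypothetical illustration, not any real stablecoin’s on-chain code—of how an issuer-controlled blacklist and freeze flag can gate transfers:

```python
# Hypothetical illustration of issuer-side countermeasures; real stablecoins
# implement equivalent checks in on-chain smart contract code.
class StablecoinLedger:
    def __init__(self) -> None:
        self.balances: dict[str, int] = {}
        self.blacklist: set[str] = set()   # addresses flagged after a theft
        self.frozen = False                # issuer can halt all transfers

    def transfer(self, sender: str, recipient: str, amount: int) -> None:
        if self.frozen:
            raise RuntimeError("transfers are globally frozen")
        if sender in self.blacklist or recipient in self.blacklist:
            raise RuntimeError("address is blacklisted")
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount
```

In practice, only the issuer or a designated administrator can update the blacklist or freeze flag, which is what allows stolen funds to be immobilized after the fact.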

Understanding vulnerabilities

Software-based wallets, also known as “hot wallets,” which are applications or programs that run on a user’s computer, phone, or web browser, are often a weak link. While their connection to the internet makes them convenient for users, it also makes them more readily accessible to hackers.

“If you are using a software wallet, by design it’s vulnerable because your keys are stored inside your computer or inside your phone. And unfortunately, a phone or a computer is not designed for security,” says Guillemet.

The rewards for exploiting this kind of vulnerability can be extensive. Hackers who stole credentials in a targeted attack on the encrypted password manager LastPass in 2022 managed to transfer millions of dollars’ worth of cryptocurrency away from victims over the two-plus years that followed.

Even hardware-based wallets, which often resemble USB drives or key fobs and are more secure than their software counterparts since they are completely offline, can have vulnerabilities that a diligent attacker might find and exploit.

Tactics include the use of side-channel attacks, for example, where a cybercriminal observes a system’s physical side effects—such as timing, power draw, or electromagnetic and acoustic emissions—to gain information about the implementation of an algorithm.
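
A timing side channel is easiest to see in code. The generic Python sketch below (not taken from any wallet implementation) contrasts an early-exit secret comparison, whose response time leaks how many leading bytes of a guess are correct, with a constant-time comparison that removes that signal:

```python
import hmac

def naive_compare(secret: bytes, guess: bytes) -> bool:
    # Leaks timing: returns as soon as a byte differs, so response time
    # correlates with how many leading bytes of the guess are correct.
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False
    return True

def constant_time_compare(secret: bytes, guess: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of mismatches,
    # removing the timing signal an attacker could measure.
    return hmac.compare_digest(secret, guess)
```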

Guillemet explains that cybersecurity providers building digital asset solutions, such as wallets, need to minimize the burden on users by building in security features and providing education about strengthening defenses.

For businesses protecting cryptocurrency, tokens, critical documents, or other digital assets, this could mean a platform that enables multi-stakeholder custody and governance, supports software and hardware protections, and provides visibility into assets and transactions through Web3 checks.

Developing proactive security measures

As the threat landscape evolves at breakneck speed, in-depth research conducted by attack labs like Ledger Donjon can help security firms keep pace. The Ledger Donjon team is working to understand how to proactively secure the digital asset ecosystem and to set global security standards.

Key projects include the team’s offensive security research, which uses ethical, white-hat hackers to simulate attacks and uncover weaknesses in hardware wallets, cryptographic systems, and infrastructure.

In November 2022, the Donjon team discovered a vulnerability in Web3 wallet platform Trust Wallet, which had been acquired by Binance. They found that the seed-phrase generation was not random enough, allowing the team to compute all possible private keys and putting as much as $30 million stored in Trust Wallet accounts at risk, says Bouzon. “The entropy was not high enough, the entropy was only 4 billion. It was huge, but not enough,” he says.
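
Some back-of-the-envelope arithmetic shows why roughly 4 billion possible seeds (about 2^32) is fatally weak compared with the 128 bits or more expected of a properly generated seed phrase. The Python sketch below uses an assumed guessing rate for illustration; the figures are not from Ledger’s audit:

```python
import secrets

WEAK_BITS = 32      # ~4 billion possible seeds, as in the flawed generator described above
STRONG_BITS = 128   # typical minimum entropy for a BIP-39-style seed phrase

guesses_per_second = 1e9  # assumed rate for a single well-resourced attacker

weak_seconds = 2**WEAK_BITS / guesses_per_second
strong_years = 2**STRONG_BITS / guesses_per_second / (3600 * 24 * 365)

print(f"32-bit seed space exhausted in ~{weak_seconds:.1f} seconds")
print(f"128-bit seed space would take ~{strong_years:.2e} years")

# A cryptographically secure 128-bit seed, by contrast, comes from the OS entropy pool:
strong_seed = secrets.token_bytes(16)
```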

To enhance overall safety, there are three key principles that digital-asset protection platforms should apply, says Bouzon. First, security providers should use secure algorithms to generate the seed phrases for private keys and conduct in-depth security audits of the software. Second, users should choose hardware wallets with a secure screen instead of software wallets. And finally, any smart contract transaction should include visibility into what is being signed, to avoid blind-signing attacks.

Ultimately, the responsibility for safeguarding these valuable assets lies with both digital asset solution providers and users themselves. As the value of cryptocurrencies continues to grow, so too will the threat landscape, as hackers keep attempting to circumvent new security measures. While digital asset providers, security firms, and wallet solutions must work to build strong, simple protections that support the cryptocurrency ecosystem, users must also seek out the information and education they need to proactively protect themselves and their wallets.

Learn more about how to secure digital assets in the Ledger Academy.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Deploying a hybrid approach to Web3 in the AI era

When the concept of “Web 3.0” first emerged about a decade ago, the idea was clear: Create a more user-controlled internet that lets you do everything you can now, but without servers or intermediaries managing the flow of information.

Where Web2, which emerged in the early 2000s, relies on centralized systems to store data and supply compute, all owned—and monetized—by a handful of global conglomerates, Web3 turns that structure on its head. Instead, data and compute are decentralized through technologies like blockchain and peer-to-peer networks.

What was once a futuristic concept is quickly becoming a more concrete reality, even at a time when Web2 still dominates. Six out of ten Fortune 500 companies are exploring blockchain-based solutions, most taking a hybrid approach that combines traditional Web2 business models and infrastructure with the decentralized technologies and principles of Web3.

Popular use cases include cloud services, supply chain management, and, most notably, financial services. In fact, at one point, the daily volume of transactions processed on decentralized finance exchanges exceeded $10 billion.

Gaining a Web3 edge

Among the advantages of Web3 for the enterprise are greater ownership and control of sensitive data, says Erman Tjiputra, founder and CEO of the AIOZ Network, which is building infrastructure for Web3, powered by decentralized physical infrastructure networks (DePIN), blockchain-based systems that govern physical infrastructure assets.

More cost-effective compute is another benefit, as is enhanced security and privacy as the cyberattack landscape grows more hostile, he adds. And it could even help protect companies from outages caused by a single point of failure, which can lead to downtime, data loss, and revenue deficits.

But perhaps the most exciting opportunity, says Tjiputra, is the ability to build and scale AI reliably and affordably. By leveraging a people-powered internet infrastructure, companies can far more easily access—and contribute to—shared resources like bandwidth, storage, and processing power to run AI inference, train models, and store data, all while using familiar developer tooling and open, usage-based incentives.

“We’re in a compute crunch where requirements are insatiable, and Web3 creates this ability to benefit while contributing,” explains Tjiputra.

In 2025, AIOZ Network launched a distributed compute platform and marketplace where developers and enterprises can access and monetize AI assets and run AI inference or training on AIOZ Network’s more than 300,000 contributing devices. The model allows companies to move away from opaque datasets and models and to scale flexibly, without centralized lock-in.

Overcoming Web3 deployment challenges

Despite the promise, it is still early days for Web3, and core systemic challenges are leaving senior leadership and developers hesitant about its applicability at scale.

One hurdle is a lack of interoperability. The current fragmentation of blockchain networks creates a segregated ecosystem that makes it challenging to transfer assets or data between platforms. This often complicates transactions and introduces new security risks due to the reliance on mechanisms such as cross-chain bridges. These are tools that allow asset transfers between platforms but which have been shown to be vulnerable to targeted attacks.

“We have countless blockchains running on different protocols and consensus models,” says Tjiputra. “These blockchains need to work with each other so applications can communicate regardless of which chain they are on. This makes interoperability fundamental.”

Regulatory uncertainty is also a challenge. Outdated legal frameworks can sit at odds with decentralized infrastructures, especially when it comes to compliance with data protection and anti-money laundering regulations.

“Enterprises care about verifiability and compliance as much as innovation, so we need frameworks where on-chain transparency strengthens accountability instead of adding friction,” Tjiputra says.

And this is compounded by user experience (UX) challenges, says Tjiputra. “The biggest setback in Web3 today is UX,” he says. “For example, in Web2, if I forget my bank username or password, I can still contact the bank, log in and access my assets. The trade-off in Web3 is that, should that key be compromised or lost, we lose access to those assets. So, key recovery is a real problem.”

Building a bridge to Web3

Although such systemic challenges won’t be solved overnight, by leveraging DePIN networks, enterprises can bridge the gap between Web2 and Web3, without making a wholesale switch. This can minimize risk while harnessing much of the potential.

AIOZ Network’s own ecosystem includes capacity for media streaming, AI compute, and distributed storage that can be plugged into an existing Web2 tech stack. “You don’t need to go full Web3,” says Tjiputra. “You can start by plugging distributed storage into your workflow, test it, measure it, and see the benefits firsthand.”

The AIOZ Storage solution, for example, offers scalable distributed object storage by leveraging the global network of contributor devices on AIOZ DePIN. It is also compatible with existing storage systems or commonly used web application programming interfaces (APIs).

“Say we have a programmer or developer who uses Amazon S3 Storage or REST APIs, then all they need to do is just repoint the endpoints,” explains Tjiputra. “That’s it. It’s the same tools, it’s really simple. Even with media, with a single one-stop shop, developers can do transcoding and streaming with a simple REST API.”
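
For a sense of what “repointing the endpoints” looks like in practice, here is a minimal sketch using boto3, the standard Python S3 client. The endpoint URL, bucket name, and credentials below are placeholders for illustration, not real AIOZ values:

```python
import boto3

# Point the standard S3 client at an S3-compatible distributed storage gateway.
# Endpoint and credentials are placeholders, not actual AIOZ Storage values.
s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.example-gateway.io",  # hypothetical S3-compatible endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Existing S3 code keeps working unchanged against the new endpoint.
with open("report.pdf", "rb") as f:
    s3.put_object(Bucket="my-bucket", Key="report.pdf", Body=f)

obj = s3.get_object(Bucket="my-bucket", Key="report.pdf")
print(obj["ContentLength"])
```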

Built on Cosmos, a network of hundreds of different blockchains that can communicate with each other, and a standardized framework enabled by Ethereum Virtual Machine (EVM), AIOZ Network has also prioritized interoperability. “Applications shouldn’t care which chain they’re on. Developers should target APIs without worrying about consensus mechanisms. That’s why we built on Cosmos and EVM—interoperability first.”

This hybrid model, which allows enterprises to use both Web2 and Web3 advantages in tandem, underpins what Tjiputra sees as the longer-term ambition for the much-hyped next iteration of the internet.

“Our vision is a truly peer-to-peer foundation for a people-powered internet, one that minimizes single points of failure through multi-region, multi-operator design,” says Tjiputra. “By distributing compute and storage across contributors, we gain both cost efficiency and end-to-end security by default.

“Ideally, we want to evolve the internet toward a more people-powered model, but we’re not there yet. We’re still at the starting point and growing.”

Indeed, Web3 isn’t quite snapping at the heels of the world’s Web2 giants, but its commercial advantages in an era of AI have become much harder to ignore. And with DePIN bridging the gap, enterprises and developers can step into that potential while keeping one foot on surer ground.

To learn more from AIOZ Network, you can read the AIOZ Network Vision Paper.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Meet the man hunting the spies in your smartphone

In April 2025, Ronald Deibert left all electronic devices at home in Toronto and boarded a plane. When he landed in Illinois, he took a taxi to a mall and headed directly to the Apple Store to purchase a new laptop and iPhone. He’d wanted to keep the risk of having his personal devices confiscated to a minimum, because he knew his work made him a prime target for surveillance. “I’m traveling under the assumption that I am being watched, right down to exactly where I am at any moment,” Deibert says.

Deibert directs the Citizen Lab, a research center he founded in 2001 to serve as “counterintelligence for civil society.” Housed at the University of Toronto, the lab operates independently of governments or corporate interests, relying instead on research grants and private philanthropy for financial support. It’s one of the few institutions that investigate cyberthreats exclusively in the public interest, and in doing so, it has exposed some of the most egregious digital abuses of the past two decades.

For many years, Deibert and his colleagues have held up the US as the standard for liberal democracy. But that’s changing, he says: “The pillars of democracy are under assault in the United States. For many decades, in spite of its flaws, it has upheld norms about what constitutional democracy looks like or should aspire to. [That] is now at risk.”

Even as some of his fellow Canadians avoided US travel after Donald Trump’s second election, Deibert relished the opportunity to visit. Alongside his meetings with human rights defenders, he also documented active surveillance at Columbia University during the height of its student protests. Deibert snapped photos of drones above campus and noted the exceptionally strict security protocols. “It was unorthodox to go to the United States,” he says. “But I really gravitate toward problems in the world.”


Deibert, 61, grew up in East Vancouver, British Columbia, a gritty area with a boisterous countercultural presence. In the ’70s, Vancouver brimmed with draft dodgers and hippies, but Deibert points to American investigative journalism—exposing the COINTELPRO surveillance program, the Pentagon Papers, Watergate—as the seed of his respect for antiestablishment sentiment. He didn’t imagine that this fascination would translate into a career, however.

“My horizons were pretty low because I came from a working-class family, and there weren’t many people in my family—in fact, none—who went on to university,” he says.

Deibert eventually entered a graduate program in international relations at the University of British Columbia. His doctoral research brought him to a field of inquiry that would soon explode: the geopolitical implications of the nascent internet.

“In my field, there were a handful of people beginning to talk about the internet, but it was very shallow, and that frustrated me,” he says. “And meanwhile, computer science was very technical, but not political—[politics] was almost like a dirty word.”

Deibert continued to explore these topics at the University of Toronto when he was appointed to a tenure-track professorship, but it wasn’t until after he founded the Citizen Lab in 2001 that his work rose to global prominence. 

What put the lab on the map, Deibert says, was its 2009 report “Tracking GhostNet,” which uncovered a digital espionage network in China that had breached offices of foreign embassies and diplomats in more than 100 countries, including the office of the Dalai Lama. The report and its follow-up in 2010 were among the first to publicly expose cybersurveillance in real time. In the years since, the lab has published over 180 such analyses, garnering praise from human rights advocates ranging from Margaret Atwood to Edward Snowden.

The lab has rigorously investigated authoritarian regimes around the world (Deibert says both Russia and China have his name on a “list” barring his entry). The group was the first to uncover the use of commercial spyware to surveil people close to the Saudi dissident and Washington Post journalist Jamal Khashoggi prior to his assassination, and its research has directly informed G7 and UN resolutions on digital repression and led to sanctions on spyware vendors. Even so, in 2025 US Immigration and Customs Enforcement reactivated a $2 million contract with the spyware vendor Paragon. The contract, which the Biden administration had previously placed under a stop-work order, resembles steps taken by governments in Europe and Israel that have also deployed domestic spyware to address security concerns. 

“It saves lives, quite literally,” Cindy Cohn, executive director of the Electronic Frontier Foundation, says of the lab’s work. “The Citizen Lab [researchers] were the first to really focus on technical attacks on human rights activists and democracy activists all around the world. And they’re still the best at it.”


When recruiting new Citizen Lab employees (or “Labbers,” as they refer to one another), Deibert forgoes stuffy, pencil-pushing academics in favor of brilliant, colorful personalities, many of whom personally experienced repression from some of the same regimes the lab now investigates.

Noura Aljizawi, a researcher on digital repression who survived torture at the hands of the al-Assad regime in Syria, researches the distinct threat that digital technologies pose to women and queer people, particularly when deployed against exiled nationals. She helped create Security Planner, a tool that gives personalized, expert-reviewed guidance to people looking to improve their digital hygiene, for which the University of Toronto awarded her an Excellence Through Innovation Award. 

Work for the lab is not without risk. Citizen Lab fellow Elies Campo, for example, was followed and photographed after the lab published a 2022 report that exposed the digital surveillance of dozens of Catalonian citizens and members of parliament, including four Catalonian presidents who were targeted during or after their terms.

Still, the lab’s reputation and mission make recruitment fairly easy, Deibert says. “This good work attracts a certain type of person,” he says. “But they’re usually also drawn to the sleuthing. It’s detective work, and that can be highly intoxicating—even addictive.”

Deibert frequently deflects the spotlight to his fellow Labbers. He rarely discusses the group’s accomplishments without referencing two senior researchers, Bill Marczak and John Scott-Railton, alongside other staffers. And on the occasion that someone decides to leave the Citizen Lab to pursue another position, this appreciation remains.

“We have a saying: Once a Labber, always a Labber,” Deibert says.


While in the US, Deibert taught a seminar on the Citizen Lab’s work to Northwestern University undergraduates and delivered talks on digital authoritarianism at the Columbia University Graduate School of Journalism. Universities in the US had been subjected to funding cuts and heightened scrutiny from the Trump administration, and Deibert wanted to be “in the mix” at such institutions to respond to what he sees as encroaching authoritarian practices by the US government. 

Since Deibert’s return to Canada, the lab has continued its work unearthing digital threats to civil society worldwide, but now Deibert must also contend with the US—a country that was once his benchmark for democracy but has become another subject of his scrutiny. “I do not believe that an institution like the Citizen Lab could exist right now in the United States,” he says. “The type of research that we pioneered is under threat like never before.”

He is particularly alarmed by the increasing pressures facing federal oversight bodies and academic institutions in the US. In September, for example, the Trump administration defunded the Council of the Inspectors General on Integrity and Efficiency, a government organization dedicated to preventing waste, fraud, and abuse within federal agencies, citing partisanship concerns. The White House has also threatened to freeze federal funding to universities that do not comply with administration directives related to gender, DEI, and campus speech. These sorts of actions, Deibert says, undermine the independence of watchdogs and research groups like the Citizen Lab. 

Cohn, the director of the EFF, says the lab’s location in Canada allows it to avoid many of these attacks on institutions that provide accountability. “Having the Citizen Lab based in Toronto and able to continue to do its work largely free of the things we’re seeing in the US,” she says, “could end up being tremendously important if we’re going to return to a place of the rule of law and protection of human rights and liberties.” 

Finian Hazen is a journalism and political science student at Northwestern University.

Securing VMware workloads in regulated industries

At a regional hospital, a cardiac patient’s lab results sit behind layers of encryption, accessible to his surgeon but shielded from anyone without a strict need to know. Across the street at a credit union, a small business owner anxiously awaits the all-clear for a wire transfer, unaware that fraud detection systems have flagged it for further review.

Such scenarios illustrate how companies in regulated industries juggle competing directives: Move data and process transactions quickly enough to save lives and support livelihoods, but carefully enough to maintain ironclad security and satisfy regulatory scrutiny.

Organizations subject to such oversight walk a fine line every day. And recently, a number of curveballs have thrown off that hard-won equilibrium. Agencies are ramping up oversight thanks to escalating data privacy concerns; insurers are tightening underwriting and requiring controls like MFA and privileged-access governance as a condition of coverage. Meanwhile, the shifting VMware landscape has introduced more complexity for IT teams tasked with planning long-term infrastructure strategies. 

Download the full article

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Accelerating VMware migrations with a factory model approach

In 1913, Henry Ford cut the time it took to build a Model T from 12 hours to just over 90 minutes. He accomplished this feat through a revolutionary breakthrough in process design: Instead of skilled craftsmen building a car from scratch by hand, Ford created an assembly line where standardized tasks happened in sequence, at scale.

The IT industry is having a similar moment of reinvention. Across operations from software development to cloud migration, organizations are adopting an AI-infused factory model that replaces manual, one-off projects with templated, scalable systems designed for speed and cost-efficiency.

Take VMware migrations as an example. For years, these projects resembled custom production jobs—bespoke efforts that often took many months or even years to complete. Fluctuating licensing costs added a layer of complexity, just as business leaders began pushing for faster modernization to make their organizations AI-ready. That urgency has become nearly universal: According to a recent IDC report, six in 10 organizations evaluating or using cloud services say their IT infrastructure requires major transformation, while 82% report their cloud environments need modernization.

Download the full article.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Moving toward LessOps with VMware-to-cloud migrations

Today’s IT leaders face competing mandates to do more (“make us an ‘AI-first’ enterprise—yesterday”) with less (“no new hires for at least the next six months”).

VMware has become a focal point of these dueling directives. It remains central to enterprise IT, with 80% of organizations using VMware infrastructure products. But shifting licensing models are prompting teams to reconsider how they manage and scale these workloads, often on tighter budgets.

For many organizations, the path forward involves adopting a LessOps model, an operational strategy that makes hybrid environments manageable without increasing headcount. This operational philosophy minimizes human intervention through extensive automation and self-service capabilities while maintaining governance and compliance.

In practice, VMware-to-cloud migrations create a “two birds, one stone” opportunity. They present a practical moment to codify the automation and governance practices LessOps depends on—laying the groundwork for a leaner, more resilient IT operating model.

Download the full article.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Aligning VMware migration with business continuity

For decades, business continuity planning meant preparing for anomalous events like hurricanes, floods, tornadoes, or regional power outages. In anticipation of these rare disasters, IT teams built playbooks, ran annual tests, crossed their fingers, and hoped they’d never have to use them.

In recent years, an even more persistent threat has emerged. Cyber incidents, particularly ransomware, are now more common—and often, more damaging—than physical disasters. In a recent survey of more than 500 CISOs, almost three-quarters (72%) said their organization had dealt with ransomware in the previous year. Earlier in 2025, ransomware attack rates on enterprises reached record highs.

Mark Vaughn, senior director of the virtualization practice at Presidio, has witnessed the trend firsthand. “When I speak at conferences, I’ll ask the room, ‘How many people have been impacted?’ For disaster recovery, you usually get a few hands,” he says. “But a little over a year ago, I asked how many people in the room had been hit by ransomware, and easily two-thirds of the hands went up.”

Download the full article.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

From vibe coding to context engineering: 2025 in software development

This year, we’ve seen a real-time experiment playing out across the technology industry, one in which AI’s software engineering capabilities have been put to the test against human technologists. And although 2025 may have started with AI looking strong, the transition from vibe coding to what’s being termed context engineering shows that while the work of human developers is evolving, they nevertheless remain absolutely critical.

This is captured in the latest volume of the “Thoughtworks Technology Radar,” a report on the technologies used by our teams on projects with clients. In it, we see the emergence of techniques and tooling designed to help teams better tackle the problem of managing context when working with LLMs and AI agents. 

Taken together, there’s a clear signal of the direction of travel in software engineering and even AI more broadly. After years of the industry assuming progress in AI is all about scale and speed, we’re starting to see that what matters is the ability to handle context effectively.

Vibes, antipatterns, and new innovations 

In February 2025, Andrej Karpathy coined the term vibe coding. It took the industry by storm. It certainly sparked debate at Thoughtworks; many of us were skeptical. On an April episode of our technology podcast, we talked about our concerns and were cautious about how vibe coding might evolve.

Unsurprisingly, given the implied imprecision of vibe-based coding, antipatterns have been proliferating. We’ve once again noted, for instance, complacency with AI-generated code in the latest volume of the Technology Radar. It’s also worth pointing out that early ventures into vibe coding exposed a degree of complacency about what AI models can actually handle — users demanded more and prompts grew larger, but model reliability started to falter.

Experimenting with generative AI 

This is one of the drivers behind the increasing interest in context engineering. We’re well aware of its importance from working with coding assistants like Claude Code and Augment Code. Providing necessary context—or knowledge priming—is crucial. It ensures outputs are more consistent and reliable, which ultimately leads to better software that needs less work — reducing rewrites and potentially driving productivity.
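
As a rough illustration of what knowledge priming can look like in practice, the sketch below assembles team conventions and the source files a task touches into the context sent to a coding assistant. The helper name, file paths, and conventions file are hypothetical, and the result is shown as a plain prompt string rather than any specific assistant’s SDK call:

```python
from pathlib import Path

def build_primed_prompt(task: str, repo_root: str, relevant_files: list[str]) -> str:
    """Assemble a primed prompt: team conventions first, then the files the task touches."""
    parts = []
    conventions = Path(repo_root) / "CONVENTIONS.md"  # hypothetical shared-instructions file
    if conventions.exists():
        parts.append("# Team conventions\n" + conventions.read_text())
    for rel in relevant_files:
        source = Path(repo_root) / rel
        parts.append(f"# File: {rel}\n" + source.read_text())
    parts.append("# Task\n" + task)
    return "\n\n".join(parts)

prompt = build_primed_prompt(
    task="Add retry logic to the payment client, following our error-handling conventions.",
    repo_root=".",
    relevant_files=["src/payments/client.py"],  # illustrative path
)
# `prompt` would then be passed to the coding assistant via its API or chat interface.
```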

When context is effectively prepared, we’ve seen good results using generative AI to understand legacy codebases. Indeed, done effectively with the appropriate context, it can even help when we don’t have full access to the source code.

It’s important to remember that context isn’t just about more data and more detail. This is one of the lessons we’ve taken from using generative AI for forward engineering. It might sound counterintuitive, but in this scenario, we’ve found AI to be more effective when it’s further abstracted from the underlying system — or, in other words, further removed from the specifics of the legacy code. This is because the solution space becomes much wider, allowing us to better leverage the generative and creative capabilities of the AI models we use.

Context is critical in the agentic era

The backdrop of changes that have happened over recent months is the growth of agents and agentic systems — both as products organizations want to develop and as technology they want to leverage. This has forced the industry to properly reckon with context and move away from a purely vibes-based approach.

Indeed, far from simply getting on with tasks they’ve been programmed to do, agents require significant human intervention to ensure they are equipped to respond to complex and dynamic contexts. 

There are a number of context-related technologies aimed at tackling this challenge, including agents.md, Context7, and Mem0. But it’s also a question of approach. For instance, we’ve found success with anchoring coding agents to a reference application — essentially providing agents with a contextual ground truth. We’re also experimenting with using teams of coding agents; while this might sound like it increases complexity, it actually removes some of the burden of having to give a single agent all the dense layers of context it needs to do its job successfully.

Toward consensus

Hopefully the space will mature as practices and standards take hold. It would be remiss not to mention the significance of the Model Context Protocol (MCP), which has emerged as the go-to protocol for connecting LLMs or agentic AI to sources of context. Relatedly, the agent2agent (A2A) protocol leads the way in standardizing how agents interact with one another.
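
To ground this, here is a minimal sketch of exposing a context source over the Model Context Protocol, assuming the official MCP Python SDK (FastMCP) is installed; the server name, tool, and docs path are illustrative rather than drawn from the Radar:

```python
from pathlib import Path
from mcp.server.fastmcp import FastMCP  # official MCP Python SDK (assumed available)

server = FastMCP("project-context")

@server.tool()
def read_design_doc(name: str) -> str:
    """Return a named design document so a connected agent can ground its work in it."""
    doc = Path("docs") / f"{name}.md"  # hypothetical docs directory
    return doc.read_text() if doc.exists() else f"No document named {name!r}"

if __name__ == "__main__":
    server.run()  # serves over stdio by default, ready for an MCP-aware client
```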

It remains to be seen whether these standards win out. But in any case, it’s important to consider the day-to-day practices that allow us, as software engineers and technologists, to collaborate effectively even when dealing with highly complex and dynamic systems. Sure, AI needs context, but so do we. Techniques like curated shared instructions for software teams may not sound like the hottest innovation on the planet, but they can be remarkably powerful for helping teams work together.

There’s perhaps also a conversation to be had about what these changes mean for agile software development. Spec-driven development is one idea that appears to have some traction, but there are still questions about how we remain adaptable and flexible while also building robust contextual foundations and ground truths for AI systems.

Software engineers can solve the context challenge

Clearly, 2025 has been a huge year in the evolution of software engineering as a practice. There’s a lot the industry needs to monitor closely, but it’s also an exciting time. And while fears about AI job automation may remain, the fact that the conversation has moved from questions of speed and scale to context puts software engineers right at the heart of things.

Once again, it will be down to them to experiment, collaborate, and learn — the future depends on it.

This content was produced by Thoughtworks. It was not written by MIT Technology Review’s editorial staff.