Reimagining cybersecurity in the era of AI and quantum

AI and quantum technologies are dramatically reconfiguring how cybersecurity functions, redefining the speed and scale with which digital defenders and their adversaries can operate.

The weaponization of AI tools for cyberattacks is already proving a worthy opponent to current defenses. From reconnaissance to ransomware, cybercriminals can automate attacks faster than ever before with AI. This includes using generative AI to create social engineering attacks at scale, churning out tens of thousands of tailored phishing emails in seconds, or accessing widely available voice cloning software capable of bypassing security defenses for as little as a few dollars. And now, agentic AI raises the stakes by introducing autonomous systems that can reason, act, and adapt like human adversaries.

But AI isn’t the only force shaping the threat landscape. Quantum computing has the potential to seriously undermine current encryption standards if developed unchecked. Quantum algorithms can solve the mathematical problems underlying most modern cryptography, particularly public-key systems like RSA and Elliptic Curve, widely used for secure online communication, digital signatures, and cryptocurrency.

“We know quantum is coming. Once it does, it will force a change in how we secure data across everything, including governments, telecoms, and financial systems,” says Peter Bailey, senior vice president and general manager of Cisco’s security business.

“Most organizations are understandably focused on the immediacy of AI threats,” says Bailey. “Quantum might sound like science fiction, but those scenarios are coming faster than many realize. It’s critical to start investing now in defenses that can withstand both AI and quantum attacks.”

Critical to this defense is a zero trust approach to cybersecurity, which assumes no user or device can be inherently trusted. By enforcing continuous verification, zero trust enables constant monitoring and ensures that any attempts to exploit vulnerabilities are quickly detected and addressed in real time. This approach is technology-agnostic and creates a resilient framework even in the face of an ever-changing threat landscape.

Putting up AI defenses 

AI is lowering the barrier to entry for cyberattacks, enabling hackers even with limited skills or resources to infiltrate, manipulate, and exploit the slightest digital vulnerability.

Nearly three-quarters (74%) of cybersecurity professionals say AI-enabled threats are already having a significant impact on their organization, and 90% anticipate such threats in the next one to two years. 

“AI-powered adversaries have advanced techniques and operate at machine speed,” says Bailey. “The only way to keep pace is to use AI to automate response and defend at machine speed.”

To do this, Bailey says, organizations must modernize systems, platforms, and security operations to automate threat detection and response—processes that have previously relied on human rule-writing and reaction times. These systems must adapt dynamically as environments evolve and criminal tactics change.

At the same time, companies must strengthen the security of their AI models and data to reduce exposure to manipulation from AI-enabled malware. Such risks could include, for instance, prompt injections, where a malicious user crafts a prompt to manipulate an AI model into performing unintended actions, bypassing its original instructions and safeguards.

Agentic AI further ups the ante, with hackers able to use AI agents to automate attacks and make tactical decisions without constant human oversight. “Agentic AI has the potential to collapse the cost of the kill chain,” says Bailey. “That means everyday cybercriminals could start executing campaigns that today only well-funded espionage operations can afford.”

Organizations, in turn, are exploring how AI agents can help them stay ahead. Nearly 40% of companies expect agentic AI to augment or assist teams over the next 12 months, especially in cybersecurity, according to Cisco’s 2025 AI Readiness Index. Use cases include AI agents trained on telemetry, which can identify anomalies or signals from machine data too disparate and unstructured to be deciphered by humans. 

Calculating the quantum threat

As many cybersecurity teams focus on the very real AI-driven threat, quantum is waiting on the sidelines. Almost three-quarters (73%) of US organizations surveyed by KPMG say they believe it is only a matter of time before cybercriminals are using quantum to decrypt and disrupt today’s cybersecurity protocols. And yet, the majority (81%) also admit they could do more to ensure that their data remains secure.

Companies are right to be concerned. Threat actors are already carrying out harvest now, decrypt later attacks, stockpiling sensitive encrypted data to crack once quantum technology matures. Examples include state-sponsored actors intercepting government communications and cybercriminal networks storing encrypted internet traffic or financial records. 

Large technology companies are among the first to roll out quantum defenses. For example, Apple is using cryptography protocol PQ3 to defend against harvest now, decrypt later attacks on its iMessage platform. Google is testing post-quantum cryptography (PQC)—which is resistant to attacks from both quantum and classical computers—in its Chrome browser. And Cisco “has made significant investments in quantum-proofing our software and infrastructure,” says Bailey. “You’ll see more enterprises and governments taking similar steps over the next 18 to 24 months,” he adds. 

As regulations like the US Quantum Computing Cybersecurity Preparedness Act lay out requirements for mitigating against quantum threats, including standardized PQC algorithms by the National Institute of Standards and Technology, a wider range of organizations will start preparing their own quantum defenses. 

For organizations beginning that journey, Bailey outlines two key actions. First, establish visibility. “Understand what data you have and where it lives,” he says. “Take inventory, assess sensitivity, and review your encryption keys, rotating out any that are weak or outdated.”

Second, plan for migration. “Next, assess what it will take to support post-quantum algorithms across your infrastructure. That means addressing not just the technology, but also the process and people implications,” Bailey says.

Adopting proactive defense 

Ultimately, the foundation for building resilience against both AI and quantum is a zero trust approach, says Bailey. By embedding zero trust access controls across users, devices, business applications, networks, and clouds, this approach grants only the minimum access required to complete a task and enables continuous monitoring. It can also minimize the attack surface by confining a potential threat to an isolated zone, preventing it from accessing other critical systems.

Into this zero trust architecture, organizations can integrate specific measures to defend against AI and quantum risks. For instance, quantum-immune cryptography and AI-powered analytics and security tools can be used to identify complex attack patterns and automate real-time responses. 

“Zero trust slows down attacks and builds resilience,” Bailey says. “It ensures that even if a breach occurs, the crown jewels stay protected and operations can recover quickly.”

Ultimately, companies should not wait for threats to emerge and evolve. They must get ahead now. “This isn’t a what-if scenario; it’s a when,” says Bailey. “Organizations that invest early will be the ones setting the pace, not scrambling to catch up.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

From vibe coding to context engineering: 2025 in software development

This year, we’ve seen a real-time experiment playing out across the technology industry, one in which AI’s software engineering capabilities have been put to the test against human technologists. And although 2025 may have started with AI looking strong, the transition from vibe coding to what’s being termed context engineering shows that while the work of human developers is evolving, they nevertheless remain absolutely critical.

This is captured in the latest volume of the “Thoughtworks Technology Radar,” a report on the technologies used by our teams on projects with clients. In it, we see the emergence of techniques and tooling designed to help teams better tackle the problem of managing context when working with LLMs and AI agents. 

Taken together, there’s a clear signal of the direction of travel in software engineering and even AI more broadly. After years of the industry assuming progress in AI is all about scale and speed, we’re starting to see that what matters is the ability to handle context effectively.

Vibes, antipatterns, and new innovations 

In February 2025, Andrej Karpathy coined the term vibe coding. It took the industry by storm. It certainly sparked debate at Thoughtworks; many of us were skeptical. On an April episode of our technology podcast, we talked about our concerns and were cautious about how vibe coding might evolve.

Unsurprisingly given the implied imprecision of vibe-based coding, antipatterns have been proliferating. We’ve once again noted, for instance, complacency with AI generated code on the latest volume of the Technology Radar, but it’s also worth pointing out that early ventures into vibe coding also exposed a degree of complacency about what AI models can actually handle — users demanded more and prompts grew larger, but model reliability started to falter.

Experimenting with generative AI 

This is one of the drivers behind increasing interest in engineering context. We’re well aware of its importance, working with coding assistants like Claude Code and Augment Code. Providing necessary context—or knowledge priming—is crucial. It ensures outputs are more consistent and reliable, which will ultimately lead to better software that needs less work — reducing rewrites and potentially driving productivity.

When effectively prepared, we’ve seen good results when using generative AI to understand legacy codebases. Indeed, done effectively with the appropriate context, it can even help when we don’t have full access to source code

It’s important to remember that context isn’t just about more data and more detail. This is one of the lessons we’ve taken from using generative AI for forward engineering. It might sound counterintuitive, but in this scenario, we’ve found AI to be more effective when it’s further abstracted from the underlying system — or, in other words, further removed from the specifics of the legacy code. This is because the solution space becomes much wider, allowing us to better leverage the generative and creative capabilities of the AI models we use.

Context is critical in the agentic era

The backdrop of changes that have happened over recent months is the growth of agents and agentic systems — both as products organizations want to develop and as technology they want to leverage. This has forced the industry to properly reckon with context and move away from a purely vibes-based approach.

Indeed, far from simply getting on with tasks they’ve been programmed to do, agents require significant human intervention to ensure they are equipped to respond to complex and dynamic contexts. 

There are a number of context-related technologies aimed at tackling this challenge, including agents.md, Context7, and Mem0. But it’s also a question of approach. For instance, we’ve found success with anchoring coding agents to a reference application — essentially providing agents with a contextual ground truth. We’re also experimenting with using teams of coding agents; while this might sound like it increases complexity, it actually removes some of the burden of having to give a single agent all the dense layers of context it needs to do its job successfully.

Toward consensus

Hopefully the space will mature as practices and standards embed. It would be remiss to not mention the significance of the Model Context Protocol, which has emerged as the go-to protocol for connecting LLMs or agentic AI to sources of context. Relatedly, the agent2agent (A2A) protocol leads the way with standardizing how agents interact with one another. 

It remains to be seen whether these standards win out. But in any case, it’s important to consider the day-to-day practices that allow us, as software engineers and technologists, to collaborate effectively even when dealing with highly complex and dynamic systems. Sure, AI needs context, but so do we. Techniques like curated shared instructions for software teams may not sound like the hottest innovation on the planet, but they can be remarkably powerful for helping teams work together.

There’s perhaps also a conversation to be had about what these changes mean for agile software development. Spec-driven development is one idea that appears to have some traction, but there are still questions about how we remain adaptable and flexible while also building robust contextual foundations and ground truths for AI systems.

Software engineers can solve the context challenge

Clearly, 2025 has been a huge year in the evolution of software engineering as a practice. There’s a lot the industry needs to monitor closely, but it’s also an exciting time. And while fears about AI job automation may remain, the fact the conversation has moved from questions of speed and scale to context puts software engineers right at the heart of things. 

Once again, it will be down to them to experiment, collaborate, and learn — the future depends on it.

This content was produced by Thoughtworks. It was not written by MIT Technology Review’s editorial staff.

Building a high performance data and AI organization (2nd edition)

Four years is a lifetime when it comes to artificial intelligence. Since the first edition of this study was published in 2021, AI’s capabilities have been advancing at speed, and the advances have not slowed since generative AI’s breakthrough. For example, multimodality— the ability to process information not only as text but also as audio, video, and other unstructured formats—is becoming a common feature of AI models. AI’s capacity to reason and act autonomously has also grown, and organizations are now starting to work with AI agents that can do just that.

Amid all the change, there remains a constant: the quality of an AI model’s outputs is only ever as good as the data
that feeds it. Data management technologies and practices have also been advancing, but the second edition of this study suggests that most organizations are not leveraging those fast enough to keep up with AI’s development. As a result of that and other hindrances, relatively few organizations are delivering the desired business results from their AI strategy. No more than 2% of senior executives we surveyed rate their organizations highly in terms of delivering results from AI.

To determine the extent to which organizational data performance has improved as generative AI and other AI advances have taken hold, MIT Technology Review Insights surveyed 800 senior data and technology executives. We also conducted in-depth interviews with 15 technology and business leaders.

Key findings from the report include the following:

Few data teams are keeping pace with AI. Organizations are doing no better today at delivering on data strategy than in pre-generative AI days. Among those surveyed in 2025, 12% are self-assessed data “high achievers” compared with 13% in 2021. Shortages of skilled talent remain a constraint, but teams also struggle with accessing fresh data, tracing lineage, and dealing with security complexity—important requirements for AI success.

Partly as a result, AI is not fully firing yet. There are even fewer “high achievers” when it comes to AI. Just 2% of respondents rate their organizations’ AI performance highly today in terms of delivering measurable business results. In fact, most are still struggling to scale generative AI. While two thirds have deployed it, only 7% have done so widely.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Finding return on AI investments across industries

The market is officially three years post ChatGPT and many of the pundit bylines have shifted to using terms like “bubble” to suggest reasons behind generative AI not realizing material returns outside a handful of technology suppliers. 

In September, the MIT NANDA report made waves because the soundbite every author and influencer picked up on was that 95% of all AI pilots failed to scale or deliver clear and measurable ROI. McKinsey earlier published a similar trend indicating that agentic AI would be the way forward to achieve huge operational benefits for enterprises. At The Wall Street Journal’s Technology Council Summit, AI technology leaders recommended CIOs stop worrying about AI’s return on investment because measuring gains is difficult and if they were to try, the measurements would be wrong. 

This places technology leaders in a precarious position–robust tech stacks already sustain their business operations, so what is the upside to introducing new technology? 

For decades, deployment strategies have followed a consistent cadence where tech operators avoid destabilizing business-critical workflows to swap out individual components in tech stacks. For example, a better or cheaper technology is not meaningful if it puts your disaster recovery at risk. 

While the price might increase when a new buyer takes over mature middleware, the cost of losing part of your enterprise data because you are mid-way through transitioning your enterprise to a new technology is way more severe than paying a higher price for a stable technology that you’ve run your business on for 20 years.

So, how do enterprises get a return on investing in the latest tech transformation?

First principle of AI: Your data is your value

Most of the articles about AI data relate to engineering tasks to ensure that an AI model infers against business data in repositories that represent past and present business realities. 

However, one of the most widely-deployed use cases in enterprise AI begins with prompting an AI model by uploading file attachments into the model. This step narrows an AI model’s range to the content of the uploaded files, accelerating accurate response times and reducing the number of prompts required to get the best answer. 

This tactic relies upon sending your proprietary business data into an AI model, so there are two important considerations to take in parallel with data preparation: first, governing your system for appropriate confidentiality; and second, developing a deliberate negotiation strategy with the model vendors, who cannot advance their frontier models without getting access to non-public data, like your business’ data. 

Recently, Anthropic and OpenAI completed massive deals with enterprise data platforms and owners because there is not enough high-value primary data publicly available on the internet. 

Most enterprises would automatically prioritize confidentiality of their data and design business workflows to maintain trade secrets. From an economic value point of view, especially considering how costly every model API call really is, exchanging selective access to your data for services or price offsets may be the right strategy. Rather than approaching model purchase/onboarding as a typical supplier/procurement exercise, think through the potential to realize mutual benefits in advancing your suppliers’ model and your business adoption of the model in tandem.

Second principle of AI: Boring by design

According to Information is Beautiful, in 2024 alone, 182 new generative AI models were introduced to the market. When GPT5 came into the market in 2025, many of the models from 12 to 24 months prior were rendered unavailable until subscription customers threatened to cancel. Their previously stable AI workflows were built on models that no longer worked. Their tech providers thought the customers would be excited about the newest models and did not realize the premium that business workflows place on stability. Video gamers are happy to upgrade their custom builds throughout the entire lifespan of the system components in their gaming rigs, and will upgrade the entire system just to play a newly released title. 

However, behavior does not translate to business run rate operations. While many employees may use the latest models for document processing or generating content, back-office operations can’t sustain swapping a tech stack three times a week to keep up with the latest model drops. The back-office work is boring by design.

The most successful AI deployments have focused on deploying AI on business problems unique to their business, often running in the background to accelerate or augment mundane but mandated tasks. Relieving legal or expense audits from having to manually cross check individual reports but putting the final decision in a humans’ responsibility zone combines the best of both. 

The important point is that none of these tasks require constant updates to the latest model to deliver that value. This is also an area where abstracting your business workflows from using direct model APIs can offer additional long-term stability while maintaining options to update or upgrade the underlying engines at the pace of your business.

Third principle of AI: Mini-van economics

The best way to avoid upside-down economics is to design systems to align to the users rather than vendor specs and benchmarks. 

Too many businesses continue to fall into the trap of buying new gear or new cloud service types based on new supplier-led benchmarks rather than starting their AI journey from what their business can consume, at what pace, on the capabilities they have deployed today. 

While Ferrari marketing is effective and those automobiles are truly magnificent, they drive the same speed through school zones and lack ample trunk space for groceries. Keep in mind that every remote server and model touched by a user layers on the costs and design for frugality by reconfiguring workflows to minimize spending on third-party services. 

Too many companies have found that their customer support AI workflows add millions of dollars of operational run rate costs and end up adding more development time and cost to update the implementation for OpEx predictability. Meanwhile, the companies that decided that a system running at the pace a human can read—less than 50 tokens per second—were able to successfully deploy scaled-out AI applications with minimal additional overhead.

There are so many aspects of this new automation technology to unpack—the best guidance is to start practical, design for independence in underlying technology components to keep from disrupting stable applications long term, and to leverage the fact that AI technology makes your business data valuable to the advancement of your tech suppliers’ goals.

This content was produced by Intel. It was not written by MIT Technology Review’s editorial staff.

Redefining data engineering in the age of AI

As organizations weave AI into more of their operations, senior executives are realizing data engineers hold a central role in bringing these initiatives to life. After all, AI only delivers when you have large amounts of reliable and well-managed, high-quality data. Indeed, this report finds that data engineers play a pivotal role in their organizations as enablers of AI. And in so doing, they are integral to the overall success of the business.

According to the results of a survey of 400 senior data and technology executives, conducted by MIT Technology Review Insights, data engineers have become influential in areas that extend well beyond their traditional remit as pipeline managers. The technology is also changing how data engineers work, with the balance of their time shifting from core data management tasks toward AI-specific activities.

As their influence grows, so do the challenges data engineers face. A major one is dealing with greater complexity, as more advanced AI models elevate the importance of managing unstructured data and real-time pipelines. Another challenge is managing expanding workloads; data engineers are being asked to do more today than ever before, and that’s not likely to change.

Key findings from the report include the following:

  • Data engineers are integral to the business. This is the view of 72% of the surveyed technology leaders—and 86% of those in the survey’s biggest organizations, where AI maturity is greatest. It is a view held especially strongly among executives in financial services and manufacturing companies.
  • AI is changing everything data engineers do. The share of time data engineers spend each day on AI projects has nearly doubled in the past two years, from an average of 19% in 2023 to 37% in 2025, according to our survey. Respondents expect this figure to continue rising to an average of 61% in two years’ time. This is also contributing to bigger data engineer workloads; most respondents (77%) see these growing increasingly heavy.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Unlocking the potential of SAF with book and claim in air freight

Used in aviation, book and claim offers companies the ability to financially support the use of SAF even when it is not physically available at their locations.

As companies that ship goods by air or provide air freight related services address a range of climate goals aiming to reduce emissions, the importance of sustainable aviation fuel (SAF) couldn’t be more pronounced. In its neat form, SAF has the potential to reduce life cycle GHG emissions by up to 80% compared to conventional jet fuel.

In this exclusive webcast, leaders discuss the urgency for reducing air freight emissions for freight forwarders and shippers, and reasons why companies should use SAF. They also explain how companies can best make use of the book and claim model to support their emissions reduction strategies.

Learn from the leaders

  • What book and claim is and how companies can use it
  • Why SAF use is so important
  • How freight-forwarders and shippers can both potentially utilise and contribute to the benefits of SAF

Featured speakers

Raman Ojha, President, Shell Aviation. Raman is responsible for Shell’s global aviation business, which supplies fuels, lubricants, and lower carbon solutions, and offers a range of technical services globally. During almost 20 years at Shell, Raman has held leadership positions across a variety of industry sectors, including energy, lubricants, construction, and fertilisers. He has broad experience across both matured markets in the Americas and Europe, as well as developing markets including China, India, and Southeast Asia.  

Bettina Paschke, VP ESG Accounting, Reporting & Controlling, DHL Express. Bettina Paschke leads ESG Accounting, Reporting & Controlling, at DHL Express a division of DHL Group. In her role, she is responsible for ESG, including, EU Taxonomy Reporting, and Carbon Accounting. She has more than 20 years’ experience in Finance. In her role she is driving the Sustainable Aviation Fuel agenda at DHL Express and is engaged in various industry initiatives to allow reliable book and claim transactions.

Christoph Wolff, Chief Executive Officer at Smart Freight Centre. Christoph Wolff is currently the Chief Executive Officer at Smart Freight Centre, leading programs focused on sustainability in freight transport. Prior to this role, Christoph served as the Senior Advisor and Director at ACME Group, a global leader in green energy solutions. With a background in various industries, Christoph has held positions such as Managing Director at European Climate Foundation and Senior Board Advisor at Ferrostaal GmbH. Christoph has also worked at Novatec, Solar Millennium AG, DB Schenker, McKinsey & Company, and served as an Assistant Professor at Northwestern University – Kellogg School of Management. Christoph holds multiple degrees from RWTH Aachen University and ETH Zürich, along with ongoing executive education at the University of Michigan.

Watch the webcast.

This discussion is presented by MIT Technology Review Insights in association with Avelia. Avelia is a Shell owned solution and brand that was developed with support from Amex GBT, Accenture and Energy Web Foundation. The views from individuals not affiliated with Shell are their own and not those of Shell PLC or its affiliates. Cautionary note | Shell Global

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Not all offerings are available in all jurisdictions. Depending on jurisdiction and local laws, Shell may offer the sale of Environmental Attributes (for which subject to applicable law and consultation with own advisors, buyers might be able to use such Environmental Attributes for their own emission reduction purposes) and/or Environmental Attribute Information (pursuant to which buyers are helping subsidize the use of SAF and lower overall aviation emissions at designated airports but no emission reduction claims may be made by buyers for their own emissions reduction purposes). Different offerings have different forms of contracts, and no assumptions should be made about a particular offering without reading the specific contractual language applicable to such offering.

Future-proofing business capabilities with AI technologies

Artificial intelligence has always promised speed, efficiency, and new ways of solving problems. But what’s changed in the past few years is how quickly those promises are becoming reality. From oil and gas to retail, logistics to law, AI is no longer confined to pilot projects or speculative labs. It is being deployed in critical workflows, reducing processes that once took hours to just minutes, and freeing up employees to focus on higher-value work.

“Business process automation has been around a long while. What GenAI and AI agents are allowing us to do is really give superpowers, so to speak, to business process automation.” says chief AI architect at Cloudera, Manasi Vartak.

Much of the momentum is being driven by two related forces: the rise of AI agents and the rapid democratization of AI tools. AI agents, whether designed for automation or assistance, are proving especially powerful at speeding up response times and removing friction from complex workflows. Instead of waiting on humans to interpret a claim form, read a contract, or process a delivery driver’s query, AI agents can now do it in seconds, and at scale. 

At the same time, advances in usability are putting AI into the hands of nontechnical staff, making it easier for employees across various functions to experiment, adopt and adapt these tools for their own needs.

That doesn’t mean the road is without obstacles. Concerns about privacy, security, and the accuracy of LLMs remain pressing. Enterprises are also grappling with the realities of cost management, data quality, and how to build AI systems that are sustainable over the long term. And as companies explore what comes next—including autonomous agents, domain-specific models, and even steps toward artificial general intelligence—questions about trust, governance, and responsible deployment loom large.

“Your leadership is especially critical in making sure that your business has an AI strategy that addresses both the opportunity and the risk while giving the workforce some ability to upskill such that there’s a path to become fluent with these AI tools,” says principal advisor of AI and modern data strategy at Amazon Web Services, Eddie Kim.

Still, the case studies are compelling. A global energy company cutting threat detection times from over an hour to just seven minutes. A Fortune 100 legal team saving millions by automating contract reviews. A humanitarian aid group harnessing AI to respond faster to crises. Long gone are the days of incremental steps forward. These examples illustrate that when data, infrastructure, and AI expertise come together, the impact is transformative. 

The future of enterprise AI will be defined by how effectively organizations can marry innovation with scale, security, and strategy. That’s where the real race is happening.

Watch the webcast now.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Transforming commercial pharma with agentic AI 

Amid the turbulence of the wider global economy in recent years, the pharmaceuticals industry is weathering its own storms. The rising cost of raw materials and supply chain disruptions are squeezing margins as pharma companies face intense pressure—including from countries like the US—to control drug costs. At the same time, a wave of expiring patents threatens around $300 billion in potential lost sales by 2030. As companies lose the exclusive right to sell the drugs they have developed, competitors can enter the market with generic and biosimilar lower-cost alternatives, leading to a sharp decline in branded drug sales—a “patent cliff.” Simultaneously, the cost of bringing new drugs to market is climbing. McKinsey estimates cost per launch is growing 8% each year, reaching $4 billion in 2022. 

In clinics and health-care facilities, norms and expectations are evolving, too. Patients and health-care providers are seeking more personalized services, leading to greater demand for precision drugs and targeted therapies. While proving effective for patients, the complexity of formulating and producing these drugs makes them expensive and restricts their sale to a smaller customer base.

The need for personalization extends to sales and marketing operations too as pharma companies are increasingly needing to compete for the attention of health-care professionals (HCPs). Estimates suggest that biopharmas were able to reach 45% of HCPs in 2024, down from 60% in 2022. Personalization, real-time communication channels, and relevant content offer a way of building trust and reaching HCPs in an increasingly competitive market. But with ever-growing volumes of content requiring medical, legal, and regulatory (MLR) review, companies are struggling to keep up, leading to potential delays and missed opportunities. 

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.