	Building a high-performance data and AI organization (2nd edition)
Four years is a lifetime when it comes to artificial intelligence. Since the first edition of this study was published in 2021, AI’s capabilities have been advancing at speed, and the advances have not slowed since generative AI’s breakthrough. For example, multimodality—the ability to process information not only as text but also as audio, video, and other unstructured formats—is becoming a common feature of AI models. AI’s capacity to reason and act autonomously has also grown, and organizations are now starting to work with AI agents that can do just that.
Amid all the change, there remains a constant: the quality of an AI model’s outputs is only ever as good as the data that feeds it. Data management technologies and practices have also been advancing, but the second edition of this study suggests that most organizations are not adopting them fast enough to keep up with AI’s development. As a result of that and other hindrances, relatively few organizations are delivering the desired business results from their AI strategy. No more than 2% of the senior executives we surveyed rate their organizations highly in terms of delivering results from AI.

To determine the extent to which organizational data performance has improved as generative AI and other AI advances have taken hold, MIT Technology Review Insights surveyed 800 senior data and technology executives. We also conducted in-depth interviews with 15 technology and business leaders.



Key findings from the report include the following:
• Few data teams are keeping pace with AI. Organizations are doing no better today at delivering on data strategy than in pre-generative AI days. Among those surveyed in 2025, 12% are self-assessed data “high achievers” compared with 13% in 2021. Shortages of skilled talent remain a constraint, but teams also struggle with accessing fresh data, tracing lineage, and dealing with security complexity—important requirements for AI success.
• Partly as a result, AI is not fully firing yet. There are even fewer “high achievers” when it comes to AI. Just 2% of respondents rate their organizations’ AI performance highly today in terms of delivering measurable business results. In fact, most are still struggling to scale generative AI. While two thirds have deployed it, only 7% have done so widely.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
 
		
		
	Finding return on AI investments across industries
The market is officially three years post-ChatGPT, and many pundit bylines have shifted to terms like “bubble” to explain why generative AI has not realized material returns outside a handful of technology suppliers.
In September, the MIT NANDA report made waves because the soundbite every author and influencer picked up on was that 95% of AI pilots failed to scale or deliver clear, measurable ROI. McKinsey had earlier published similar findings, suggesting that agentic AI would be the way forward to achieve major operational benefits for enterprises. At The Wall Street Journal’s Technology Council Summit, AI technology leaders recommended CIOs stop worrying about AI’s return on investment because measuring gains is difficult, and if they were to try, the measurements would be wrong.



This places technology leaders in a precarious position: robust tech stacks already sustain their business operations, so what is the upside of introducing new technology?
For decades, deployment strategies have followed a consistent cadence where tech operators avoid destabilizing business-critical workflows to swap out individual components in tech stacks. For example, a better or cheaper technology is not meaningful if it puts your disaster recovery at risk.
While the price might increase when a new buyer takes over mature middleware, losing part of your enterprise data because you are midway through transitioning to a new technology is far more severe than paying a higher price for a stable technology that you’ve run your business on for 20 years.
So, how do enterprises get a return on investing in the latest tech transformation?
First principle of AI: Your data is your value
Most of the articles about AI data relate to engineering tasks to ensure that an AI model infers against business data in repositories that represent past and present business realities.
However, one of the most widely deployed use cases in enterprise AI begins with prompting an AI model by uploading file attachments into the model. This step narrows the model’s range to the content of the uploaded files, speeding up accurate responses and reducing the number of prompts required to get the best answer.
This tactic relies upon sending your proprietary business data into an AI model, so there are two important considerations to take in parallel with data preparation: first, governing your system for appropriate confidentiality; and second, developing a deliberate negotiation strategy with the model vendors, who cannot advance their frontier models without getting access to non-public data, like your business’ data.
Recently, Anthropic and OpenAI completed massive deals with enterprise data platforms and owners because there is not enough high-value primary data publicly available on the internet.
Most enterprises would automatically prioritize confidentiality of their data and design business workflows to maintain trade secrets. From an economic value point of view, especially considering how costly every model API call really is, exchanging selective access to your data for services or price offsets may be the right strategy. Rather than approaching model purchase/onboarding as a typical supplier/procurement exercise, think through the potential to realize mutual benefits in advancing your suppliers’ model and your business adoption of the model in tandem.
Second principle of AI: Boring by design
According to Information is Beautiful, in 2024 alone, 182 new generative AI models were introduced to the market. When GPT-5 arrived in 2025, many of the models from 12 to 24 months prior were rendered unavailable until subscription customers threatened to cancel: their previously stable AI workflows were built on models that no longer worked. Their tech providers assumed customers would be excited about the newest models and did not realize the premium that business workflows place on stability. Video gamers, by contrast, happily upgrade their custom builds throughout the lifespan of the components in their gaming rigs, and will replace the entire system just to play a newly released title.
However, that behavior does not translate to business run-rate operations. While many employees may use the latest models for document processing or generating content, back-office operations can’t sustain swapping out a tech stack three times a week to keep up with the latest model drops. Back-office work is boring by design.
The most successful AI deployments have focused AI on problems unique to the business, often running in the background to accelerate or augment mundane but mandated tasks. Relieving legal or expense auditors of manually cross-checking individual reports, while keeping the final decision in a human’s hands, combines the best of both.
The important point is that none of these tasks require constant updates to the latest model to deliver that value. This is also an area where abstracting your business workflows from using direct model APIs can offer additional long-term stability while maintaining options to update or upgrade the underlying engines at the pace of your business.
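As a minimal sketch of what such an abstraction can look like, the interface and class names below are hypothetical, not any particular vendor’s SDK; the point is that business logic depends on a stable internal interface while the engine behind it can be swapped at the pace of the business:

```python
from abc import ABC, abstractmethod

class ModelBackend(ABC):
    """Minimal interface the workflow depends on; vendor adapters plug in behind it."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class StubBackend(ModelBackend):
    """Placeholder engine; a real adapter would call a vendor API here."""
    def complete(self, prompt: str) -> str:
        return f"[stub reply to: {prompt}]"

class Workflow:
    """Business logic written against the interface, not a direct model API."""
    def __init__(self, backend: ModelBackend):
        self.backend = backend

    def summarize_report(self, report_text: str) -> str:
        return self.backend.complete(f"Summarize: {report_text}")

# Swapping or upgrading the engine is a one-line change, not a workflow rewrite.
workflow = Workflow(StubBackend())
print(workflow.summarize_report("Q3 expense anomalies"))
```

In this design, a model deprecation means writing one new adapter rather than reworking every stable application built on the old one.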
Third principle of AI: Mini-van economics
The best way to avoid upside-down economics is to design systems to align to the users rather than vendor specs and benchmarks.
Too many businesses continue to fall into the trap of buying new gear or new cloud service types based on new supplier-led benchmarks rather than starting their AI journey from what their business can consume, at what pace, on the capabilities they have deployed today.
While Ferrari marketing is effective and those automobiles are truly magnificent, they drive the same speed through school zones and lack ample trunk space for groceries. Keep in mind that every remote server and model a user touches layers on cost, so design for frugality by reconfiguring workflows to minimize spending on third-party services.
Too many companies have found that their customer support AI workflows add millions of dollars in operational run-rate costs, then require more development time and cost to rework the implementation for OpEx predictability. Meanwhile, companies that decided a system running at the pace a human can read—less than 50 tokens per second—was sufficient were able to successfully deploy scaled-out AI applications with minimal additional overhead.
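To make the run-rate arithmetic concrete, here is a back-of-envelope sketch; every figure (request volume, tokens per conversation, price per token) is a hypothetical assumption for illustration, not a benchmark:

```python
# Back-of-envelope monthly run-rate cost of a customer-support AI workflow.
# All figures are hypothetical assumptions for illustration.
requests_per_day = 10_000          # support conversations handled daily
tokens_per_request = 1_500         # prompt + response tokens per conversation
price_per_million_tokens = 10.0    # assumed blended $ per 1M tokens

monthly_tokens = requests_per_day * tokens_per_request * 30
monthly_cost = monthly_tokens / 1_000_000 * price_per_million_tokens
print(f"~${monthly_cost:,.0f}/month")  # ~$4,500/month under these assumptions
```

Running the same exercise with your own volumes, before deployment, is what turns an unpredictable OpEx line into a designed one.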
There are many aspects of this new automation technology to unpack. The best guidance is to start practical, design for independence in underlying technology components to keep from disrupting stable applications long term, and leverage the fact that AI technology makes your business data valuable to the advancement of your tech suppliers’ goals.
This content was produced by Intel. It was not written by MIT Technology Review’s editorial staff.
 
		
		
	Redefining data engineering in the age of AI
As organizations weave AI into more of their operations, senior executives are realizing data engineers hold a central role in bringing these initiatives to life. After all, AI only delivers when you have large amounts of reliable and well-managed, high-quality data. Indeed, this report finds that data engineers play a pivotal role in their organizations as enablers of AI. And in so doing, they are integral to the overall success of the business.
According to the results of a survey of 400 senior data and technology executives, conducted by MIT Technology Review Insights, data engineers have become influential in areas that extend well beyond their traditional remit as pipeline managers. The technology is also changing how data engineers work, with the balance of their time shifting from core data management tasks toward AI-specific activities.



As their influence grows, so do the challenges data engineers face. A major one is dealing with greater complexity, as more advanced AI models elevate the importance of managing unstructured data and real-time pipelines. Another challenge is managing expanding workloads; data engineers are being asked to do more today than ever before, and that’s not likely to change.



Key findings from the report include the following:
- Data engineers are integral to the business. This is the view of 72% of the surveyed technology leaders—and 86% of those in the survey’s biggest organizations, where AI maturity is greatest. It is a view held especially strongly among executives in financial services and manufacturing companies.
- AI is changing everything data engineers do. The share of time data engineers spend each day on AI projects has nearly doubled in the past two years, from an average of 19% in 2023 to 37% in 2025, according to our survey. Respondents expect this figure to continue rising to an average of 61% in two years’ time. This is also contributing to bigger data engineer workloads; most respondents (77%) see these growing increasingly heavy.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.
This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
 
		
		
	Unlocking the potential of SAF with book and claim in air freight
Used in aviation, book and claim offers companies the ability to financially support the use of SAF even when it is not physically available at their locations.
As companies that ship goods by air or provide air freight-related services address a range of climate goals aimed at reducing emissions, the importance of sustainable aviation fuel (SAF) couldn’t be more pronounced. In its neat form, SAF has the potential to reduce life-cycle GHG emissions by up to 80% compared to conventional jet fuel.
In this exclusive webcast, leaders discuss the urgency for reducing air freight emissions for freight forwarders and shippers, and reasons why companies should use SAF. They also explain how companies can best make use of the book and claim model to support their emissions reduction strategies.



Learn from the leaders
- What book and claim is and how companies can use it
- Why SAF use is so important
- How freight forwarders and shippers can both potentially utilise and contribute to the benefits of SAF
Featured speakers
Raman Ojha, President, Shell Aviation. Raman is responsible for Shell’s global aviation business, which supplies fuels, lubricants, and lower carbon solutions, and offers a range of technical services globally. During almost 20 years at Shell, Raman has held leadership positions across a variety of industry sectors, including energy, lubricants, construction, and fertilisers. He has broad experience across both matured markets in the Americas and Europe, as well as developing markets including China, India, and Southeast Asia.
Bettina Paschke, VP ESG Accounting, Reporting & Controlling, DHL Express. Bettina Paschke leads ESG Accounting, Reporting & Controlling at DHL Express, a division of DHL Group. She is responsible for ESG reporting, including EU Taxonomy reporting and carbon accounting, and has more than 20 years’ experience in finance. In her role she drives the Sustainable Aviation Fuel agenda at DHL Express and is engaged in various industry initiatives to enable reliable book and claim transactions.
Christoph Wolff, Chief Executive Officer at Smart Freight Centre. Christoph Wolff is currently the Chief Executive Officer at Smart Freight Centre, leading programs focused on sustainability in freight transport. Prior to this role, Christoph served as the Senior Advisor and Director at ACME Group, a global leader in green energy solutions. With a background in various industries, Christoph has held positions such as Managing Director at European Climate Foundation and Senior Board Advisor at Ferrostaal GmbH. Christoph has also worked at Novatec, Solar Millennium AG, DB Schenker, McKinsey & Company, and served as an Assistant Professor at Northwestern University – Kellogg School of Management. Christoph holds multiple degrees from RWTH Aachen University and ETH Zürich, along with ongoing executive education at the University of Michigan.
This discussion is presented by MIT Technology Review Insights in association with Avelia. Avelia is a Shell-owned solution and brand that was developed with support from Amex GBT, Accenture, and Energy Web Foundation. The views of individuals not affiliated with Shell are their own and not those of Shell PLC or its affiliates. Cautionary note | Shell Global
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
Not all offerings are available in all jurisdictions. Depending on jurisdiction and local laws, Shell may offer the sale of Environmental Attributes (for which subject to applicable law and consultation with own advisors, buyers might be able to use such Environmental Attributes for their own emission reduction purposes) and/or Environmental Attribute Information (pursuant to which buyers are helping subsidize the use of SAF and lower overall aviation emissions at designated airports but no emission reduction claims may be made by buyers for their own emissions reduction purposes). Different offerings have different forms of contracts, and no assumptions should be made about a particular offering without reading the specific contractual language applicable to such offering.
 
		
		
	Future-proofing business capabilities with AI technologies
Artificial intelligence has always promised speed, efficiency, and new ways of solving problems. But what’s changed in the past few years is how quickly those promises are becoming reality. From oil and gas to retail, logistics to law, AI is no longer confined to pilot projects or speculative labs. It is being deployed in critical workflows, reducing processes that once took hours to just minutes, and freeing up employees to focus on higher-value work.



“Business process automation has been around a long while. What GenAI and AI agents are allowing us to do is really give superpowers, so to speak, to business process automation,” says Manasi Vartak, chief AI architect at Cloudera.
Much of the momentum is being driven by two related forces: the rise of AI agents and the rapid democratization of AI tools. AI agents, whether designed for automation or assistance, are proving especially powerful at speeding up response times and removing friction from complex workflows. Instead of waiting on humans to interpret a claim form, read a contract, or process a delivery driver’s query, AI agents can now do it in seconds, and at scale.
At the same time, advances in usability are putting AI into the hands of nontechnical staff, making it easier for employees across various functions to experiment, adopt, and adapt these tools for their own needs.
That doesn’t mean the road is without obstacles. Concerns about privacy, security, and the accuracy of LLMs remain pressing. Enterprises are also grappling with the realities of cost management, data quality, and how to build AI systems that are sustainable over the long term. And as companies explore what comes next—including autonomous agents, domain-specific models, and even steps toward artificial general intelligence—questions about trust, governance, and responsible deployment loom large.
“Your leadership is especially critical in making sure that your business has an AI strategy that addresses both the opportunity and the risk while giving the workforce some ability to upskill such that there’s a path to become fluent with these AI tools,” says Eddie Kim, principal advisor of AI and modern data strategy at Amazon Web Services.
Still, the case studies are compelling. A global energy company cutting threat detection times from over an hour to just seven minutes. A Fortune 100 legal team saving millions by automating contract reviews. A humanitarian aid group harnessing AI to respond faster to crises. Long gone are the days of incremental steps forward. These examples illustrate that when data, infrastructure, and AI expertise come together, the impact is transformative.
The future of enterprise AI will be defined by how effectively organizations can marry innovation with scale, security, and strategy. That’s where the real race is happening.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
 
		
		
	Transforming commercial pharma with agentic AI
Amid the turbulence of the wider global economy in recent years, the pharmaceuticals industry is weathering its own storms. The rising cost of raw materials and supply chain disruptions are squeezing margins as pharma companies face intense pressure—including from countries like the US—to control drug costs. At the same time, a wave of expiring patents threatens around $300 billion in potential lost sales by 2030. As companies lose the exclusive right to sell the drugs they have developed, competitors can enter the market with lower-cost generic and biosimilar alternatives, leading to a sharp decline in branded drug sales—a “patent cliff.” Simultaneously, the cost of bringing new drugs to market is climbing: McKinsey estimates that cost per launch has been growing 8% each year, reaching $4 billion in 2022.



In clinics and health-care facilities, norms and expectations are evolving, too. Patients and health-care providers are seeking more personalized services, leading to greater demand for precision drugs and targeted therapies. While proving effective for patients, the complexity of formulating and producing these drugs makes them expensive and restricts their sale to a smaller customer base.



The need for personalization extends to sales and marketing operations too, as pharma companies increasingly need to compete for the attention of health-care professionals (HCPs). Estimates suggest that biopharmas were able to reach 45% of HCPs in 2024, down from 60% in 2022. Personalization, real-time communication channels, and relevant content offer a way of building trust and reaching HCPs in an increasingly competitive market. But with ever-growing volumes of content requiring medical, legal, and regulatory (MLR) review, companies are struggling to keep up, leading to potential delays and missed opportunities.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
 
		
		
	 
		
		
	Turning migration into modernization
In late 2023, a long-trusted virtualization staple became the biggest open question on the enterprise IT roadmap.
Amid concerns over VMware licensing changes and steeper support costs, analysts noticed an exodus mentality. Forrester predicted that one in five large VMware customers would begin moving away from the platform in 2024. A subsequent Gartner community poll found that 74% of respondents were rethinking their VMware relationship in light of recent changes. CIOs contending with pricing hikes and product roadmap opacity face a daunting choice: double down on a familiar but costlier stack, or use the disruption to rethink how—and where—critical workloads should run.



“There’s still a lot of uncertainty in the marketplace around VMware,” explains Matt Crognale, senior director, migrations and modernization at cloud modernization firm Effectual, adding that the VMware portfolio has been streamlined and refocused over the past couple of years. “The portfolio has been trimmed down to a core offering focused on the technology versus disparate systems.”
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.
This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
 
		
		
	Unlocking AI’s full potential requires operational excellence
Talk of AI is inescapable. It’s often the main topic of discussion at board and executive meetings, at corporate retreats, and in the media. A record 58% of S&P 500 companies mentioned AI in their second-quarter earnings calls, according to Goldman Sachs.



But it’s difficult to walk the talk. Just 5% of generative AI pilots are driving measurable profit-and-loss impact, according to a recent MIT study. That means 95% of generative AI pilots are realizing zero return, despite significant attention and investment.
Although we’re nearly three years past the watershed moment of ChatGPT’s public release, the vast majority of organizations are stalling out in AI. Something is broken. What is it?
Data from Lucid’s AI readiness survey sheds some light on the tripwires that are making organizations stumble. Fortunately, solving these problems doesn’t require recruiting top AI talent worth hundreds of millions of dollars, at least for most companies. Instead, as they race to implement AI quickly and successfully, leaders need to bring greater rigor and structure to their operational processes.
Operations are the gap between AI’s promise and practical adoption
I can’t fault any leader for moving as fast as possible with their implementation of AI. In many cases, the existential survival of their company—and their own employment—depends on it. The promised benefits to improve productivity, reduce costs, and enhance communication are transformational, which is why speed is paramount.
But while moving quickly, leaders are skipping foundational steps required for any technology implementation to be successful. Our survey research found that more than 60% of knowledge workers believe their organization’s AI strategy is only somewhat aligned, or not at all aligned, with its operational capabilities.
AI can process unstructured data, but AI will only create more headaches for unstructured organizations. As Bill Gates said, “The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency.”
Where are the operations gaps in AI implementations? Our survey found that approximately half of respondents (49%) cite undocumented or ad-hoc processes impacting efficiency sometimes; 22% say this happens often or always.
The primary challenge of AI transformation lies not in the technology itself, but in the final step of integrating it into daily workflows. We can compare this to the “last mile problem” in logistics: The most difficult part of a delivery is getting the product to the customer, no matter how efficient the rest of the process is.
In AI, the “last mile” is the crucial task of embedding AI into real-world business operations. Organizations have access to powerful models but struggle to connect them to the people who need to use them. The power of AI is wasted if it’s not effectively integrated into business operations, and that requires clear documentation of those operations.
Capturing, documenting, and distributing knowledge at scale is critical to organizational success with AI. Yet our survey showed only 16% of respondents say their workflows are extremely well-documented. The top barriers to proper documentation are a lack of time, cited by 40% of respondents, and a lack of tools, cited by 30%.
The challenge of integrating new technology with old processes was perfectly illustrated in a recent meeting I had with a Fortune 500 executive. The company is pushing for significant productivity gains with AI, but it still relies on an outdated collaboration tool that was never designed for teamwork. This situation highlights the very challenge our survey uncovered: Powerful AI initiatives can stall if teams lack modern collaboration and documentation tools.
This disconnect shows that AI adoption is about more than just the technology itself. For it to truly succeed enterprise-wide, companies need to provide a unified space for teams to brainstorm, plan, document, and make decisions. The fundamentals of successful technology adoption still hold true: You need the right tools to enable collaboration and documentation for AI to truly make an impact.
Collaboration and change management are hidden blockers to AI implementation
A company’s approach to AI is perceived very differently depending on an employee’s role. While 61% of C-suite executives believe their company’s strategy is well-considered, that number drops to 49% for managers and just 36% for entry-level employees, as our survey found.
Just like with product development, building a successful AI strategy requires a structured approach. Leaders and teams need a collaborative space to come together, brainstorm, prioritize the most promising opportunities, and map out a clear path forward. As many companies have embraced hybrid or distributed work, supporting remote collaboration with digital tools becomes even more important.
We recently used AI to streamline a strategic challenge for our executive team. A product leader used it to generate a comprehensive preparatory memo in a fraction of the typical time, complete with summaries, benchmarks, and recommendations.
Despite this efficiency, the AI-generated document was merely the foundation. We still had to meet to debate the specifics, prioritize actions, assign ownership, and formally document our decisions and next steps.
According to our survey, 23% of respondents reported that collaboration is frequently a bottleneck in complex work. Employees are willing to embrace change, but friction from poor collaboration adds risk and reduces the potential impact of AI.
Operational readiness enhances your AI readiness
Operations lacking structure are preventing many organizations from implementing AI successfully. We asked teams about their top needs to help them adapt to AI. At the top of their lists were document collaboration (cited by 37% of respondents), process documentation (34%), and visual workflows (33%).
Notice that none of these requests are for more sophisticated AI. The technology is plenty capable already, and most organizations are still just scratching the surface of its full potential. Instead, what teams want most is ensuring the fundamentals around processes, documentation, and collaboration are covered.
AI offers a significant opportunity for organizations to gain a competitive edge in productivity and efficiency. But moving fast isn’t a guarantee of success. The companies best positioned for successful AI adoption are those that invest in operational excellence, down to the last mile.
This content was produced by Lucid Software. It was not written by MIT Technology Review’s editorial staff.
