Data strategies for AI leaders

Organizations are starting the heavy lifting to get real business value from generative AI. As Arnab Chakraborty, chief responsible AI officer at Accenture, puts it, “2023 was the year when clients were amazed with generative AI and the possibilities. In 2024, we are starting to see scaled implementations of responsible generative AI programs.”

Some generative AI efforts remain modest. As Neil Ward-Dutton, vice president for automation, analytics, and AI at IDC Europe, describes it, this is “a classic kind of automation: making teams or individuals more productive, getting rid of drudgery, and allowing people to deliver better results more quickly.” Most companies, though, have much greater ambitions for generative AI: they are looking to reshape how they operate and what they sell.

Great expectations for generative AI

The expectation that generative AI could fundamentally upend business models and product offerings is driven by the technology’s power to unlock vast amounts of data that were previously inaccessible. “Eighty to 90% of the world’s data is unstructured,” says Baris Gultekin, head of AI at AI data cloud company Snowflake. “But what’s exciting is that AI is opening the door for organizations to gain insights from this data that they simply couldn’t before.”

In a poll conducted by MIT Technology Review Insights, global executives were asked about the value they hoped to derive from generative AI. Many say they are prioritizing the technology’s ability to increase efficiency and productivity (72%), increase market competitiveness (55%), and drive better products and services (47%). Fewer see the technology primarily as a driver of increased revenue (30%) or reduced costs (24%), which suggests executives have loftier ambitions. Respondents’ top ambitions for generative AI seem to work hand in hand. More than half of companies say new routes toward market competitiveness are one of their top three goals, and the two likely paths to achieving this are increased efficiency and better products or services.

For companies rolling out generative AI, these are not necessarily distinct choices. Chakraborty sees a “thin line between efficiency and innovation” in current activity. “We are starting to notice companies applying generative AI agents for employees, and the use case is internal,” he says, but the time saved on mundane tasks allows personnel to focus on customer service or more creative activities. Gultekin agrees. “We’re seeing innovation with customers building internal generative AI products that unlock a lot of value,” he says. “They’re being built for productivity gains and efficiencies.”

Chakraborty cites marketing campaigns as an example: “The whole supply chain of creative input is getting re-imagined using the power of generative AI. That is obviously going to create new levels of efficiency, but at the same time probably create innovation in the way you bring new product ideas into the market.” Similarly, Gultekin reports that a global technology conglomerate and Snowflake customer has used AI to make “700,000 pages of research available to their team so that they can ask questions and then increase the pace of their own innovation.”

The impact of generative AI on chatbots—in Gultekin’s words, “the bread and butter of the recent AI cycle”—may be the best example. The rapid expansion of chatbot capabilities using AI sits on the border between improving an existing tool and creating a new one. It is unsurprising, then, that 44% of respondents see improved customer satisfaction as a way that generative AI will bring value.

A closer look at our survey results reflects this overlap between productivity enhancement and product or service innovation. Nearly one-third of respondents (30%) included both increased productivity and innovation in the top three types of value they hope to achieve with generative AI. The first, in many cases, will serve as the main route to the other.

But efficiency gains are not the only path to product or service innovation. Some companies, Chakraborty says, are “making big bets” on wholesale innovation with generative AI. He cites pharmaceutical companies as an example. They, he says, are asking fundamental questions about the technology’s power: “How can I use generative AI to create new treatment pathways or to reimagine my clinical trials process? Can I accelerate the drug discovery time frame from 10 years to five years to one?”

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Productivity Electrified: Tech That Is Supercharging Business

This sponsored session was presented by Ford Pro at MIT Technology Review’s 2024 EmTech MIT event.

A decarbonized transportation system is a necessary prerequisite for a sustainable economy. In the transportation industry, the road to electrification and greater technology adoption can also boost businesses’ bottom lines and reduce downstream costs to taxpayers. Focusing on early adopters such as first responders, local municipalities, and small business owners, we’ll discuss common misconceptions, barriers to adoption, implementation strategies, and how these insights carry over into widespread adoption of emerging technology and electric vehicles.


About the speaker

Wanda Young, Global Chief Marketing & Experience Officer, Ford Pro

Wanda Young is a visionary brand marketer and digital transformation expert who thrives at the intersection of brand, digital, technology, and data, paired with a deep understanding of the consumer mindset. She gained her experience working for the largest brands in retail, sports & entertainment, consumer products, and electronics. She is a successful brand marketer and change agent whom organizations seek out to drive digital and data transformation – a Chief Experience Officer years before the title was invented. In her roles managing multiple notable brands, including Samsung, Disney, ESPN, Walmart, Alltel, and Acxiom, she developed knowledge of the interconnectedness of brand, digital, and data; of the importance of customer experience across all touchpoints; of the power of data and localization; and of the in-the-trenches accountability required to drive outcomes. Now at Ford Pro, the Commercial Division of Ford Motor Company, she is focused on helping grow the newly launched division and brand, which offers commercial customers something only Ford can: an integrated lineup of vehicles and services designed to meet the needs of all businesses and keep their productivity on pace to drive growth.

Young enjoyed a series of firsts in her career, including launching ESPN+, developing Walmart’s first social media presence and building 5,000 of its local Facebook pages (which are still live today and continue to scale), developing the first weather-triggered ad product with The Weather Company, designing an ad product with Google called Local Inventory Ads, being part of the team that took Alltel Wireless private (which later sold to Verizon Wireless), and launching the Acxiom.com website on her first Mother’s Day with her daughter on her lap. She serves on the boards of, or is involved in, a number of industry organizations and has been the recipient of many prestigious awards. Young received a Bachelor of Arts in English with a minor in Advertising from the University of Arkansas.

Preventing Climate Change: A Team Sport

This sponsored session was presented by MEDC at MIT Technology Review’s 2024 EmTech MIT event.

Michigan is at the forefront of the clean energy transition, setting an example in mobility and automotive innovation. Other states and organizations can learn from Michigan’s approach to public-private partnerships, actionable climate plans, and business-government alignment. Progressive climate policies are not only crucial for sustainability but also for attracting talent in today’s competitive job market.

Read more from MIT Technology Review Insights & MEDC about addressing climate change impacts


About the speaker

Hilary Doe, Chief Growth & Marketing Officer, Michigan Economic Development Corporation

As Chief Growth & Marketing Officer, Hilary Doe leads the state’s efforts to grow Michigan’s population, economy, and reputation as the best place to live, work, raise a family, and start a business. Hilary works alongside the Growing Michigan Together Council on a once-in-a-generation effort to grow Michigan’s population, boost economic growth, and make Michigan the place everyone wants to call home.

Hilary is a dynamic leader in nonprofits, technology, strategy, and public policy. She served as the national director at the Roosevelt Network, where she built and led an organization engaging thousands of young people in civic engagement and social change programming at chapters nationwide, which ultimately earned the organization recognition as a recipient of the MacArthur Award for Creative and Effective Institutions. She also served as Vice President of the Roosevelt Institute, where she oversaw strategy and expanded the Institute’s Four Freedoms Center, with the goal of empowering communities and reducing inequality alongside the greatest economists of our generation. Most recently, she served as President and Chief Strategy Officer at Nationbuilder, working to equip the world’s leaders with software to grow their movements, businesses, and organizations, while spreading democracy.

Hilary is a graduate of the University of Michigan’s Honors College and Ford School of Public Policy, a Detroit resident, and proud Michigander.

Addressing climate change impacts

The reality of climate change has spurred enormous public and private investment worldwide, funding initiatives to mitigate its effects and to adapt to its impacts. That investment has spawned entire industries and countless new businesses, resulting in the creation of new green jobs and contributions to economic growth. In the United States, this includes the single largest climate-related investment in the country’s history, made in 2022 as part of the Inflation Reduction Act.

For most US businesses, however, the costs imposed by climate change and the future risks it poses will outweigh growth opportunities afforded by the green sector. In a survey of 300 senior US executives conducted by MIT Technology Review, every respondent agrees that climate change is either harming the economy today or will do so in the future. Most expect their organizations to contend with extreme weather, such as severe storms, flooding, and extreme heat, in the near term. Respondents also report their businesses are already incurring costs related to climate change.

This research examines how US businesses view their climate change risk and the steps they are taking to adapt to climate change’s impacts. The results make clear that climate considerations, such as frequency of extreme weather and access to natural resources, are now a prime factor in businesses’ site location decisions. As climate change accelerates, such considerations are certain to grow in importance.

Key findings include the following:

Businesses are weighing relocation due to climate risks. Most executives in the survey (62%) deem their physical infrastructure (some or all of it) exposed to the impacts of climate change, with 20% reporting it is “very exposed.” A full 75% of respondents report their organization has considered relocating due to climate risk, with 6% indicating they have concrete plans to relocate facilities within the next five years due to climate factors. And 24% report they have already relocated physical infrastructure to prepare for climate change impacts.

Companies are already bearing the costs of climate change. Nearly all US businesses have already suffered from the effects of climate change, judging by the survey. Weighing most heavily thus far, and likely in the future, are increases in operational costs (affecting 64%) and insurance premiums (63%), as well as disruption to operations (61%) and damage to infrastructure (55%).

Executives know climate change is here, and many are planning for it. Four-fifths (81%) of survey respondents deem climate planning and preparedness important to their business, and one-third describe it as very important. There is a seeming lag at some companies, however, in translating this perceived importance into actual planning: only 62% have developed a climate change adaptation plan, and 52% have conducted a climate risk assessment.

Climate-planning resources are a key criterion in site location. When judging a potential new business site on its climate mitigation features, 71% of executives highlight the availability of climate-planning resources as among their top criteria. Nearly two-thirds (64%) also cite the importance of a location’s access to critical natural resources.

Though climate change will affect everyone, its risks and impacts vary by region. No US region is immune to climate change: a majority of surveyed businesses in every region have experienced at least some negative climate change impacts. However, respondents believe the risks are lowest in the Midwest, with nearly half of respondents (47%) naming that region as least exposed to climate change risk.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Preparing for the unknown: A guide to future-proofing imaging IT

In an era of unprecedented technological advancement, the health-care industry stands at a crossroads. As health expenditure continues to outpace GDP growth in many countries, health-care executives grapple with crucial decisions on investment prioritization for digitization, innovation, and digital transformation. The imperative to provide high-quality, patient-centric care in an increasingly digital world has never been more pressing. At the forefront of this transformation is imaging IT—a critical component that’s evolving to meet the challenges of modern health care.

The future of imaging IT is characterized by interconnected systems, advanced analytics, robust data security, AI-driven enhancements, and agile infrastructure. Organizations that embrace these trends will be well-positioned to thrive in the changing health-care landscape. But what exactly does this future look like, and how can health-care providers prepare for it?

Networked care models: The new paradigm

The adoption of networked care models is set to revolutionize health-care delivery. These models foster collaboration among stakeholders, making patient information readily available and leading to more personalized and efficient care. As we move forward, expect to see health-care organizations increasingly investing in technologies that enable seamless data sharing and interoperability.

Imagine a scenario where a patient’s entire medical history, including imaging data from various specialists, is instantly accessible to any authorized health-care provider. This level of connectivity not only improves diagnosis and treatment but also enhances the overall patient experience.

Data integration and analytics: Unlocking insights

True data integration is becoming the norm in health care. Robust integrated image and data management solutions (IDM) are consolidating patient data from diverse sources. But the real game-changer lies in the application of advanced analytics and AI to this treasure trove of information.

By leveraging these technologies, medical professionals can extract meaningful insights from complex data sets, leading to quicker and more accurate diagnoses and treatment decisions. The potential for improving patient outcomes through data-driven decision-making is immense.

A case in point is the implementation of Syngo Carbon Image and Data Management (IDM) at Tirol Kliniken GmbH in Innsbruck, Austria. This solution consolidates all patient-centric data points in one place, including different image and photo formats, DICOM CDs, and digitalized video sources from endoscopy or microscopy. The system digitizes all documents in their raw formats, enabling the distribution of native, actionable data throughout the enterprise.
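To make the consolidation step concrete, here is a minimal sketch, not the Syngo Carbon implementation, of how metadata from a folder of DICOM files could be gathered into a single patient-centric index using the open-source pydicom library. The directory path and the fields selected are illustrative assumptions.

```python
# Illustrative only: index DICOM metadata by patient using pydicom.
# The folder name and chosen fields are hypothetical examples.
from collections import defaultdict
from pathlib import Path

import pydicom  # pip install pydicom


def index_dicom_studies(root: str) -> dict:
    """Group basic imaging metadata by patient ID from a folder of .dcm files."""
    index = defaultdict(list)
    for path in Path(root).rglob("*.dcm"):
        ds = pydicom.dcmread(path, stop_before_pixels=True)  # metadata only
        index[str(ds.PatientID)].append({
            "modality": str(ds.get("Modality", "")),
            "study_date": str(ds.get("StudyDate", "")),
            "description": str(ds.get("StudyDescription", "")),
            "source_file": str(path),
        })
    return dict(index)


if __name__ == "__main__":
    for patient, studies in index_dicom_studies("./imported_dicom_cds").items():
        print(patient, "->", len(studies), "studies")
```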

Data privacy and edge computing: Balancing innovation and security

As health care becomes increasingly data-driven, concerns about data privacy remain paramount. Enter edge computing—a solution that enables the processing of sensitive patient data locally, reducing the risk of data breaches during processing and transmission.

This approach is crucial for health-care facilities aiming to maintain patient trust while adopting advanced technologies. By keeping data processing close to the source, health-care providers can leverage cutting-edge analytics without compromising on security.

Workflow integration and AI: Enhancing efficiency and accuracy

The integration of AI into medical imaging workflows is set to dramatically improve efficiency, accuracy, and the overall quality of patient care. AI-powered solutions are becoming increasingly common, reducing the burden of repetitive tasks and speeding up diagnosis.

From automated image analysis to predictive modeling, AI is transforming every aspect of the imaging workflow. This not only improves operational efficiency but also allows health-care professionals to focus more on patient care and complex cases that require human expertise.

A quantitative analysis at the Medical University of South Carolina demonstrates the impact of AI integration. With the support of deep learning algorithms fully embedded in the clinical workflow, cardiothoracic radiologists exhibited a reduction in chest CT interpretation times of 22.1% compared to workflows without AI support.

Virtualization: The key to agility

To future-proof their IT infrastructure, health-care organizations are turning to virtualization. This approach allows for modularization and flexibility, making it easier to adapt to rapidly evolving technologies such as AI-driven diagnostics.

Container technology is playing a pivotal role in optimizing resource utilization and scalability. By embracing virtualization, health-care providers can ensure their IT systems remain agile and responsive to changing needs.

Standardization and compliance: Ensuring long-term compatibility

As imaging IT systems evolve, adherence to industry standards and compliance requirements remains crucial. These systems need to seamlessly interact with Electronic Health Records (EHRs), medical devices, and other critical systems.

This adherence ensures long-term compatibility and the ability to accommodate emerging technologies. It also facilitates smoother integration of new solutions into existing IT ecosystems, reducing implementation challenges and costs.

Real-world success stories

The benefits of these technologies are not theoretical—they are being realized in health-care organizations around the world. For instance, the virtualization strategy implemented at University Hospital Essen (UME), one of Germany’s largest university hospitals, has dramatically improved the hospital’s ability to manage increasing data volumes and applications. UME’s critical clinical information systems now run on modular and virtualized systems, allowing experts to design and use innovative solutions, including AI tools that automate tasks previously done manually by IT and medical staff.

Similarly, the PANCAIM project leverages edge computing for pancreatic cancer detection. This EU-funded initiative uses Siemens Healthineers’ edge computing approach to develop and validate AI algorithms. At Karolinska Institutet, Sweden, an algorithm was implemented for a real pancreatic cancer case, ensuring sensitive patient data remains within the hospital while advancing AI validation in clinical settings.

Another innovative approach is the concept of a Common Patient Data Model (CPDM). This standardized framework defines how patient data is organized, stored, and exchanged across different health-care systems and platforms, addressing interoperability challenges in the current health-care landscape.
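As a purely illustrative sketch, not a published CPDM specification, the snippet below shows one way such a standardized framework could normalize records from different systems into a shared shape; every field name here is a hypothetical assumption.

```python
# Hypothetical illustration of a common patient data model: shared record
# shapes plus a merge step that folds per-system records into one view.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ImagingStudy:
    modality: str      # e.g., "CT" or "MR"
    study_date: date
    report_uri: str    # pointer to the native report, kept in its raw format


@dataclass
class PatientRecord:
    patient_id: str                       # stable ID shared across systems
    demographics: dict = field(default_factory=dict)
    studies: list = field(default_factory=list)


def merge(records: list) -> PatientRecord:
    """Fold records for the same patient from different systems into one."""
    base = records[0]
    for other in records[1:]:
        base.demographics.update(other.demographics)
        base.studies.extend(other.studies)
    return base
```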

The road ahead: Continuous innovation

As we look to the future, it’s clear that technological advancements in radiology will continue at a rapid pace. To stay competitive and provide the best patient care, health-care organizations must prioritize ongoing innovation and the adoption of new technologies.

This includes not only IT systems but also medical devices and treatment methodologies. The health-care providers who embrace this ethos of continuous improvement will be best positioned to navigate the challenges and opportunities that lie ahead.

In conclusion, the future of imaging IT is bright, promising unprecedented levels of efficiency, accuracy, and patient-centricity. By embracing networked care models, leveraging advanced analytics and AI, prioritizing data security, and maintaining agile IT infrastructure, health-care organizations can ensure they’re prepared for whatever the future may hold.

The journey towards future-proof imaging IT may seem daunting, but it’s a necessary evolution in our quest to provide the best possible health care. As we stand on the brink of this new era, one thing is clear: the future of health care is digital, data-driven, and more connected than ever before.

If you want to learn more, you can find more information from Siemens Healthineers.

Syngo Carbon consists of several products which are (medical) devices in their own right. Some products are under development and not commercially available. Future availability cannot be ensured.

The results by Siemens Healthineers customers described herein are based on results that were achieved in the customer’s unique setting. Since there is no “typical” hospital and many variables exist (e.g., hospital size, case mix, level of IT adoption), it cannot be guaranteed that other customers will achieve the same results.

This content was produced by Siemens Healthineers. It was not written by MIT Technology Review’s editorial staff.

Integrating security from code to cloud

The Human Genome Project, SpaceX’s rocket technology, and Tesla’s Autopilot system may seem worlds apart in form and function, but they all share a common characteristic: the use of open-source software (OSS) to drive innovation.

Offering publicly accessible code that can be viewed, modified, and distributed freely, OSS expedites developer productivity and creates a collaborative space for groundbreaking advancements.

“Open source is critical,” says David Harmon, director of software engineering for AMD. “It provides an environment of collaboration and technical advancements. Savvy users can look at the code themselves; they can evaluate it; they can review it and know that the code that they’re getting is legit and functional for what they’re trying to do.”

But OSS can also compromise an organization’s security posture by introducing hidden vulnerabilities that fall under the radar of busy IT teams, especially as cyberattacks targeting open source are on the rise. OSS may contain weaknesses, for example, that can be exploited to gain unauthorized access to confidential systems or networks. Bad actors can even intentionally introduce into OSS a space for exploits—“backdoors”—that can compromise an organization’s security posture. 

“Open source is an enabler to productivity and collaboration, but it also presents security challenges,” says Vlad Korsunsky, corporate vice president of cloud and enterprise security for Microsoft. Part of the problem is that open source introduces into the organization code that can be hard to verify and difficult to trace. Organizations often don’t know who made changes to open-source code or the intent of those changes, factors that can increase a company’s attack surface.

Complicating matters is that OSS’s increasing popularity coincides with the rise of cloud and its own set of security challenges. Cloud-native applications that run on OSS, such as Linux, deliver significant benefits, including greater flexibility, faster release of new software features, effortless infrastructure management, and increased resiliency. But they also can create blind spots in an organization’s security posture, or worse, burden busy development and security teams with constant threat signals and never-ending to-do lists of security improvements.

“When you move into the cloud, a lot of the threat models completely change,” says Harmon. “The performance aspects of things are still relevant, but the security aspects are way more relevant. No CTO wants to be in the headlines associated with breaches.”

Staying out of the news, however, is becoming increasingly difficult: According to cloud company Flexera’s State of the Cloud 2024 survey, 89% of enterprises use multi-cloud environments, and cloud spend and security top respondents’ lists of cloud challenges. Security firm Tenable’s 2024 Cloud Security Outlook reported that 95% of its surveyed organizations suffered a cloud breach during the 18 months before the survey.

Code-to-cloud security

Until now, organizations have relied on security testing and analysis to examine an application’s output and identify security issues in need of repair. But these days, addressing a security threat requires more than simply seeing how it is configured in runtime. Rather, organizations must get to the root cause of the problem.

It’s a tall order that presents a balancing act for IT security teams, according to Korsunsky. “Even if you can establish that code-to-cloud connection, a security team may be reluctant to deploy a fix if they’re unsure of its potential impact on the business. For example, a fix could improve security but also derail some functionality of the application itself and negatively impact employee productivity,” he says.

Rather, to properly secure an application, says Korsunsky, IT security teams should collaborate with developers and application security teams to better understand the software they’re working with and to determine the impacts of applying security fixes.

Fortunately, a code-to-cloud security platform with comprehensive cloud-native security can help by identifying and stopping software vulnerabilities at the root. Code-to-cloud creates a pipeline between code repositories and cloud deployment, linking how the application was written to how it performs—“connecting the things that you see in runtime to where they’re developed and how they’re deployed,” says Korsunsky.

The result is a more collaborative and consolidated approach to security that enables security teams to identify a code’s owner and to work with that owner to make an application more secure. This ensures that security is not just an afterthought but a critical aspect of the entire software development lifecycle, from writing code to running it in the cloud.
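As a hedged sketch of that linkage, assuming a runtime finding that carries a pointer to the source file it came from, the snippet below uses git to look up the file's most recent committer so the security team knows whom to work with. Real code-to-cloud platforms resolve ownership from richer build and deployment metadata; the finding structure shown here is hypothetical.

```python
# Simplified illustration: route a runtime security finding to the person who
# last touched the affected source file. The finding dictionary is made up.
import subprocess


def last_committer(repo_path: str, file_path: str) -> str:
    """Return the email of the most recent committer of a file (via git log)."""
    result = subprocess.run(
        ["git", "-C", repo_path, "log", "-1", "--format=%ae", "--", file_path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()


finding = {  # hypothetical alert emitted by a cloud security tool at runtime
    "rule": "container runs as root",
    "source_file": "deploy/app/Dockerfile",
}

print("Route fix to:", last_committer(".", finding["source_file"]))
```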

Better yet, an IT security team can gain complete visibility into the security posture of preproduction application code across multi-pipeline and multi-cloud environments while, at the same time, preventing cloud misconfigurations from reaching production environments. Together, these proactive strategies not only keep risks from arising but also free IT security teams to focus on critical emerging threats.

The path to security success

Making the most of a code-to-cloud security platform requires more than innovative tools. Establishing best practices in your organization can ensure a stronger, long-term security posture.

Create a comprehensive view of assets: Today’s organizations rely on a wide array of security tools to safeguard their digital assets. But these solutions must be consolidated into a single pane of glass to manage exposure of the various applications and resources that operate across an entire enterprise, including the cloud. “Companies can’t have separate solutions for separate environments, separate cloud, separate platforms,” warns Korsunsky. “At the end of the day, attackers don’t think in silos. They’re after the crown jewels of an enterprise and they’ll do whatever it takes to get those. They’ll move laterally across environments and clouds—that’s why companies need a consolidated approach.”

Take advantage of artificial intelligence (AI): Many IT security teams are overwhelmed with incidents that require immediate attention. That’s all the more reason for organizations to outsource straightforward security tasks to AI. “AI can sift through the noise so that organizations don’t have to deploy their best experts,” says Korsunsky. For instance, by leveraging its capabilities for comparing and distinguishing written texts and images, AI can be used as a copilot to detect phishing emails. After all, adds Korsunsky, “There isn’t much of an advantage for a human being to read long emails and try to determine whether or not they’re credible.” By taking over routine security tasks, AI frees employees to focus on more critical activities.
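As a toy illustration of this kind of triage, and not how any production copilot is built, the sketch below trains a tiny text classifier on made-up example emails so that only messages flagged as likely phishing reach a human reviewer.

```python
# Toy example: a small text classifier that pre-sorts likely phishing emails.
# The messages and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is locked, verify your password at this link now",
    "Quarterly report attached for review before Friday's meeting",
    "You won a prize, send your bank details to claim it today",
    "Lunch and learn on cloud security scheduled for Thursday",
]
labels = ["phishing", "legitimate", "phishing", "legitimate"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(emails, labels)

# New message: only surface it to a human if the model flags it as phishing.
print(model.predict(["Please confirm your password to avoid account suspension"]))
```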

Find the start line: Every organization has a long list of assets to secure and vulnerabilities to fix. So where should they begin? “Protect your most critical assets by knowing where your most critical data is and what’s effectively exploitable,” recommends Korsunsky. This involves conducting a comprehensive inventory of a company’s assets and determining how their data interconnects and what dependencies they require.

Protect data in use: The Confidential Computing Consortium is a community, part of the Linux Foundation, focused on accelerating the adoption of confidential computing through open collaboration. Confidential computing can protect an organization’s most sensitive data during processing by performing computations in a hardware-based Trusted Execution Environment (TEE), such as Azure confidential virtual machines based on AMD EPYC CPUs. By encrypting data in memory in a TEE, organizations can ensure that their most sensitive data is only processed after a cloud environment has been verified, helping prevent data access by cloud providers, administrators, or unauthorized users.
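The control flow can be sketched conceptually as follows. The helper functions are hypothetical placeholders rather than real Azure or AMD APIs; the point is only that sensitive data is decrypted and processed after the environment's attestation has been verified.

```python
# Conceptual sketch of confidential computing's gating logic. All functions
# below are hypothetical placeholders, not real attestation or TEE APIs.


def verify_attestation(report: bytes, expected_measurement: bytes) -> bool:
    """Placeholder: a real check validates a signed hardware attestation report."""
    return report == expected_measurement  # stand-in comparison only


def decrypt_inside_tee(ciphertext: bytes, key: bytes) -> bytes:
    """Placeholder for decryption that would happen inside the enclave/TEE."""
    return bytes(c ^ key[i % len(key)] for i, c in enumerate(ciphertext))


def process_sensitive(ciphertext: bytes, report: bytes, expected: bytes) -> bytes:
    if not verify_attestation(report, expected):
        raise RuntimeError("Attestation failed; key is not released to this environment")
    key = b"key-released-only-after-attestation"  # hypothetical key release
    return decrypt_inside_tee(ciphertext, key)
```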

A solution for the future

As Linux, OSS, and cloud-native applications continue to increase in popularity, so will the pressure on organizations to prioritize security. The good news is that a code-to-cloud approach to cloud security can empower organizations to get a head start on security—during the software development process—while providing valuable insight into an organization’s security posture and freeing security teams to focus on business-critical tasks.

Secure your Linux and open source workloads from code to cloud with Microsoft Azure and AMD. Learn more about Linux on Azure and Microsoft Security.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Readying business for the age of AI

Rapid advancements in AI technology offer unprecedented opportunities to enhance business operations, customer and employee engagement, and decision-making. Executives are eager to see the potential of AI realized. Among 100 C-suite respondents polled in WNS Analytics’ “The Future of Enterprise Data & AI” report, 76% say they are already implementing or planning to implement generative AI solutions. Among those same leaders, however, 67% report struggling with data migration, and others cite grappling with data quality, talent shortages, and data democratization issues.

MIT Technology Review Insights recently had a conversation with Alex Sidgreaves, chief data officer at Zurich Insurance; Bogdan Szostek, chief data officer at Animal Friends; Shan Lodh, director of data platforms at Shawbrook Bank; and Gautam Singh, head of data, analytics, and AI at WNS Analytics, to discuss how enterprises can navigate the burgeoning era of AI.

AI across industries

There is no shortage of AI use cases across sectors. Retailers are tailoring shopping experiences to individual preferences by leveraging customer behavior data and advanced machine learning models. Traditional AI models can deliver personalized offerings; with generative AI, those offerings are elevated by tailored communication that considers the customer’s persona, behavior, and past interactions. In insurance, generative AI can identify subrogation recovery opportunities that a manual handler might overlook, enhancing efficiency and maximizing recovery potential. Banking and financial services institutions are using AI to bolster customer due diligence and strengthen anti-money laundering efforts through AI-driven credit risk management practices. In health care, AI technologies are enhancing diagnostic accuracy through sophisticated image recognition in radiology, allowing for earlier and more precise detection of disease, while predictive analytics enable personalized treatment plans.

The core of successful AI implementation lies in understanding its business value, building a robust data foundation, aligning with the strategic goals of the organization, and infusing skilled expertise across every level of an enterprise.

  • “I think we should also be asking ourselves, if we do succeed, what are we going to stop doing? Because when we empower colleagues through AI, we are giving them new capabilities [and] faster, quicker, leaner ways of doing things. So we need to be true to even thinking about the org design. Oftentimes, an AI program doesn’t work, not because the technology doesn’t work, but because the downstream business processes or the organizational structures are still kept as before.” — Shan Lodh, director of data platforms, Shawbrook Bank

Whether automating routine tasks, enhancing customer experiences, or providing deeper insights through data analysis, it’s essential to define what AI can do for an enterprise in specific terms. AI’s popularity and broad promises are not good enough reasons to jump headfirst into enterprise-wide adoption. 

“AI projects should come from a value-led position rather than being led by technology,” says Sidgreaves. “The key is to always ensure you know what value you’re bringing to the business or to the customer with the AI. And actually always ask yourself the question, do we even need AI to solve that problem?”

Having a good technology partner is crucial to ensure that value is realized. Gautam Singh, head of data, analytics, and AI at WNS, says, “At WNS Analytics, we keep clients’ organizational goals at the center. We have focused and strengthened around core productized services that go deep in generating value for our clients.” Singh explains their approach, “We do this by leveraging our unique AI and human interaction approach to develop custom services and deliver differentiated outcomes.”

The foundation of any advanced technology adoption is data, and AI is no exception. Singh explains, “Advanced technologies like AI and generative AI may not always be the right choice, and hence we work with our clients to understand the need, to develop the right solution for each situation.” With increasingly large and complex data volumes, effectively managing and modernizing data infrastructure is essential to provide the basis for AI tools.

This means breaking down silos. Maximizing AI’s impact requires regular communication and collaboration across departments, from marketing teams working with data scientists to understand customer behavior patterns to IT teams ensuring their infrastructure supports AI initiatives.

  • “I would emphasize customers’ growing expectations in terms of what they expect our businesses to offer them and the quality and speed of service we provide. At Animal Friends, we see the biggest generative AI potential in sophisticated chatbots and voice bots that can serve our customers 24/7, deliver the right level of service, and be cost effective for our customers.” — Bogdan Szostek, chief data officer, Animal Friends

Investing in domain experts with insight into the regulations, operations, and industry practices is just as necessary in the success of deploying AI systems as the right data foundations and strategy. Continuous training and upskilling are essential to keep pace with evolving AI technologies.

Ensuring AI trust and transparency

Creating trust in generative AI implementation requires the same mechanisms employed for all emerging technologies: accountability, security, and ethical standards. Being transparent about how AI systems are used, the data they rely on, and the decision-making processes they employ can go a long way in forging trust among stakeholders. In fact, “The Future of Enterprise Data & AI” report finds that 55% of organizations identify “building trust in AI systems among stakeholders” as the biggest challenge when scaling AI initiatives.

“We need talent, we need communication, we need the ethical framework, we need very good data, and so on,” says Lodh. “Those things don’t really go away. In fact, they become even more necessary for generative AI, but of course the usages are more varied.” 

AI should augment human decision-making and business workflows. Guardrails with human oversight ensure that enterprise teams have access to AI tools but are in control of high-risk and high-value decisions.

“Bias in AI can creep in from almost anywhere and will do so unless you’re extremely careful. Challenges come into three buckets. You’ve got privacy challenges, data quality, completeness challenges, and then really training AI systems on data that’s biased, which is easily done,” says Sidgreaves. She emphasizes it is vital to ensure that data is up-to-date, accurate, and clean. High-quality data enhances the reliability and performance of AI models. Regular audits and data quality checks can help maintain the integrity of data.
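As a minimal sketch of such checks, assuming a hypothetical pandas table standing in for real training data, the snippet below reports completeness, duplication, and staleness, which are the kinds of signals a regular data audit might track.

```python
# Illustrative data quality audit over a toy table; the columns are made up.
import pandas as pd


def data_quality_report(df: pd.DataFrame, timestamp_col: str) -> dict:
    ts = pd.to_datetime(df[timestamp_col], errors="coerce")
    return {
        "rows": len(df),
        "null_rate_by_column": df.isna().mean().round(3).to_dict(),
        "duplicate_rows": int(df.duplicated().sum()),
        "days_since_last_update": (pd.Timestamp.now() - ts.max()).days,
    }


df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "claim_amount": [1200.0, None, 830.5, 99.0],
    "updated_at": ["2024-05-01", "2024-05-03", "2024-05-03", None],
})
print(data_quality_report(df, "updated_at"))
```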

An agile approach to AI implementation

ROI is always top of mind for business leaders looking to cash in on the promised potential of AI systems. As technology continues to evolve rapidly and the potential use cases of AI grow, starting small, creating measurable benchmarks, and adopting an agile approach can ensure success in scaling solutions. By starting with pilot projects and scaling successful initiatives, companies can manage risks and optimize resources. Sidgreaves, Szostek, and Lodh stress that while it may be tempting to throw everything at the wall and see what sticks, accessing the greatest returns from expanding AI tools means remaining flexible, strategic, and iterative. 

In insurance, two areas where AI has a significant ROI impact are risk and operational efficiency. Sidgreaves underscores that reducing manual processes is essential for large, heritage organizations, and generative AI and large language models (LLMs) are revolutionizing this aspect by significantly diminishing the need for manual activities.

To illustrate her point, she cites a specific example: “Consider the task of reviewing and drafting policy wording. Traditionally, this process would take an individual up to four weeks. However, with LLMs, this same task can now be completed in a matter of seconds.”  

Lodh adds that establishing ROI at the project’s onset and implementing cross-functional metrics are crucial for capturing a comprehensive view of a project’s impact. For instance, using LLMs for writing code is a great example of how IT and information security teams can collaborate. By assessing the quality of static code analysis generated by LLMs, these teams can ensure that the code meets security and performance standards.

“It’s very hard because technology is changing so quickly,” says Szostek. “We need to truly apply an agile approach, do not try to prescribe all the elements of the future deliveries in 12, 18, 24 months. We have to test and learn and iterate, and also fail fast if that’s needed.” 

Navigating the future of the AI era 

The rapid evolution of the digital age continues to bring immense opportunities for enterprises globally, from the C-suite to the factory floor. With no shortage of use cases and promises to boost efficiencies, drive innovation, and improve customer and employee experiences, few business leaders dismiss the proliferation of AI as mere hype. However, the successful and responsible implementation of AI requires a careful balance of strategy, transparency, and robust data privacy and security measures.

  • “It’s really easy as technology people to be driven by the next core thing, but we would have to be solving a business problem. So the key is to always ensure you know what value you’re bringing to the business or to the customer with the AI. And actually always ask yourself the question, do we even need AI to solve that problem?” — Alex Sidgreaves, chief data officer, Zurich Insurance

Fully harnessing the power of AI while maintaining trust means defining clear business values, ensuring accountability, managing data privacy, balancing innovation with ethical use, and staying ahead of future trends. Enterprises must remain vigilant and adaptable, committed to ethical practices and an agile approach to thrive in this rapidly changing business landscape.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

The rise of the data platform for hybrid cloud

Whether pursuing digital transformation, exploring the potential of AI, or simply looking to simplify and optimize existing IT infrastructure, today’s organizations must do this in the context of increasingly complex multi-cloud environments. These complicated architectures are here to stay—2023 research by Enterprise Strategy Group, for example, found that 87% of organizations expect their applications to be distributed across still more locations in the next two years.

Scott Sinclair, practice director at Enterprise Strategy Group, outlines the problem: “Data is becoming more distributed. Apps are becoming more distributed. The typical organization has multiple data centers, multiple cloud providers, and umpteen edge locations. Data is all over the place and continues to be created at a very rapid rate.”

Finding a way to unify this disparate data is essential. In doing so, organizations must balance the explosive growth of enterprise data; the need for an on-premises, cloud-like consumption model to mitigate cyberattack risks; and continual pressure to cut costs and improve performance.

Sinclair summarizes: “What you want is something that can sit on top of this distributed data ecosystem and present something that is intuitive and consistent that I can use to leverage the data in the most impactful way, the most beneficial way to my business.”

For many, the solution is an overarching software-defined, virtualized data platform that delivers a common data plane and control plane across hybrid cloud environments. Ian Clatworthy, head of data platform product marketing at Hitachi Vantara, describes a data platform as “an integrated set of technologies that meets an organization’s data needs, enabling storage and delivery of data, the governance of data, and the security of data for a business.”

Gartner projects that these consolidated data storage platforms will constitute 70% of file and object storage by 2028, doubling from 35% in 2023. The research firm underscores that “Infrastructure and operations leaders must prioritize storage platforms to stay ahead of business demands.”

A transitional moment for enterprise data

Historically, organizations have stored their various types of data—file, block, object—in separate silos. Why change now? Because two main drivers are rendering traditional data storage schemes inadequate for today’s business needs: digital transformation and AI.

As digital transformation initiatives accelerate, organizations are discovering that having distinct storage solutions for each workload is inadequate for their escalating data volumes and changing business landscapes. The complexity of the modern data estate hinders many efforts toward change.

Clatworthy says that when organizations move to hybrid cloud environments, they may find, for example, that they have mainframe or data center data stored in one silo, block storage running on an appliance, apps running file storage, another silo for public cloud, and a separate VMware stack. The result is increased complexity and cost in their IT infrastructure, as well as reduced flexibility and efficiency.

Then, Clatworthy adds, “When we get to the world of generative AI that’s bubbling around the edges, and we’re going to have this mass explosion of data, we need to simplify how that data is managed so that applications can consume it. That’s where a platform comes in.”

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Advancing to adaptive cloud

For many years now, cloud solutions have helped organizations streamline their operations, increase their scalability, and reduce costs. Yet, enterprise cloud investment has been fragmented, often lacking a coherent organization-wide approach. In fact, it’s not uncommon for various teams across an organization to have spun up their own cloud projects, adopting a wide variety of cloud strategies and providers, from public and hybrid to multi-cloud and edge computing.

The problem with this approach is that it often leads to “a sprawling set of systems and disparate teams working on these cloud systems, making it difficult to keep up with the pace of innovation,” says Bernardo Caldas, corporate vice president of Azure Edge product management at Microsoft. In addition to being an IT headache, a fragmented cloud environment leads to technological and organizational repercussions.

A complex multi-cloud deployment can make it difficult for IT teams to perform mission-critical tasks, such as applying security patches, meeting regulatory requirements, managing costs, and accessing data for data analytics. Configuring and securing these types of environments is a challenging and time-consuming task. And ad hoc cloud deployments often culminate in systems incompatibility when one-off pilots are ready to scale or be combined with existing products.

Without a common IT operations and application development platform, teams can’t share lessons learned or pool important resources, which tends to cause them to become increasingly siloed. “People want to do more with their data, but if their data is trapped and isolated in these different systems, it can make it really hard to tap into the data for insights and to accelerate progress,” says Caldas.

As the pace of change accelerates, however, many organizations are adopting a new adaptive cloud approach—one that will enable them to respond quickly to evolving consumer demands and market fluctuations while simplifying the management of their complex cloud environments.

An adaptive strategy for success

Heralding a departure from yesteryear’s fragmented cloud environments, an adaptive cloud approach unites sprawling systems, disparate silos, and distributed sites into a single operations, development, security, application, and data model. This unified approach empowers organizations to glean value from cloud-native technologies, open source software such as Linux, and AI across hybrid, multi-cloud, edge, and IoT.

“You’ve got a lot of legacy software out there, and for the most part, you don’t want to change production environments,” says David Harmon, director of software engineering at AMD. “Nobody wants to change code. So while CTOs and developers really want to take advantage of all the hardware changes, they want to do nothing to their code base if possible, because that change is very, very expensive.”

An adaptive cloud approach answers this challenge by taking an agnostic approach to the environments it brings together on a single control plane. By seamlessly connecting disparate computing environments, including those that run outside of hyperscale data centers, the control plane creates greater visibility across thousands of assets, simplifies security enforcement, and allows for easier management.

An adaptive cloud approach enables unified management of disparate systems and resources, leading to improved oversight and control. An adaptive approach also creates scalability, as it allows organizations to meet the fluctuating demands of a business without the risk of over-provisioning or under-provisioning resources.

There are also clear business advantages to embracing an adaptive cloud approach. Consider, for example, an operational technology team that deploys an automation system to accelerate a factory’s production capabilities. In a fragmented and distributed environment, systems often struggle to communicate. But in an adaptive cloud environment, a factory’s automation system can easily be connected to the organization’s customer relationship management system, providing sales teams with real-time insights into supply-demand fluctuations.

A united platform is not only capable of bringing together disparate systems but also of connecting employees from across functions, from sales to engineering. By sharing an interconnected web of cloud-native tools, a workforce’s collective skills and knowledge can be applied to initiatives across the organization—a valuable asset in today’s resource-strapped and talent-scarce business climate.

Using cloud-native technologies like Kubernetes and microservices can also expedite the development of applications across various environments, regardless of an application’s purpose. For example, IT teams can scale applications from massive cloud platforms to on-site production without complex rewrites. Together, these capabilities “propel innovation, simplify complexity, and enhance the ability to respond to business opportunities,” says Caldas.

The AI equation

From automating mundane processes to optimizing operations, AI is revolutionizing the way businesses work. In fact, the market for AI reached $184 billion in 2024, a staggering increase from nearly $50 billion in 2023, and it is expected to surpass $826 billion by 2030.

But AI applications and models require high-quality data to generate high-quality outputs. That’s a challenging feat when data sets are trapped in silos across distributed environments. Fortunately, an adaptive cloud approach can provide a unified data platform for AI initiatives.

“An adaptive cloud approach consolidates data from various locations in a way that’s more useful for companies and creates a robust foundation for AI applications,” says Caldas. “It creates a unified data platform that ensures that companies’ AI tools have access to high-quality data to make decisions.”

Another benefit of an adaptive cloud approach is the ability to tap into the capabilities of innovative tools such as Microsoft Copilot in Azure. Copilot in Azure is an AI companion that simplifies how IT teams operate and troubleshoot apps and infrastructure. By leveraging large language models to interact with an organization’s data, Copilot allows for deeper exploration and intelligent assessment of systems within a unified management framework.

Imagine, for example, the task of troubleshooting the root cause of a system anomaly. Typically, IT teams must sift through thousands of logs, exchange a series of emails with colleagues, and read documentation for answers. Copilot in Azure, however, can cut through this complexity by easing detection of unanticipated system changes while, at the same time, providing recommendations for speedy resolution.
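To illustrate only the underlying idea, not how Copilot in Azure is implemented, the sketch below flags unanticipated changes in a system metric with a simple z-score check instead of manual log review; the latency values are made up.

```python
# Minimal anomaly check over a toy latency series: flag points that sit far
# from the mean. Real assistants use far richer signals and models.
from statistics import mean, stdev


def anomalies(values, threshold=2.5):
    """Return indexes of points more than `threshold` std devs from the mean."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]


request_latency_ms = [102, 98, 105, 99, 101, 97, 103, 100, 420, 99]
print("Anomalous samples at indexes:", anomalies(request_latency_ms))
```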

“Organizations can now interact with systems using chat capabilities, ask questions about environments, and gain real insights into what’s happening across the heterogeneous environments,” says Caldas.

An adaptive approach for the technology future

Today’s technology environments are only increasing in complexity. More systems, more data, more applications—together, they form a massive sprawling infrastructure. But responding proactively to change, be it in market trends or customer needs, requires greater agility and integration across the organization. The answer: an adaptive approach. A unified platform for IT operations and management, applications, data, and security can consolidate the disparate parts of a fragmented environment in ways that not only ease IT management and application development but also deliver key business benefits, from faster time to market to AI efficiencies, at a time when organizations must move swiftly to succeed.

Microsoft Azure and AMD meet you where you are on your cloud journey. Learn more about an adaptive cloud approach with Azure.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

A playbook for crafting AI strategy

Giddy predictions about AI, from its contributions to economic growth to the onset of mass automation, are now as frequent as the release of powerful new generative AI models. The consultancy PwC, for example, predicts that AI could boost global gross domestic product (GDP) by 14% by 2030, generating US $15.7 trillion.

Forty percent of our mundane tasks could be automated by then, claim researchers at the University of Oxford, while Goldman Sachs forecasts US $200 billion in AI investment by 2025. “No job, no function will remain untouched by AI,” says SP Singh, senior vice president and global head, enterprise application integration and services, at technology company Infosys.

While these prognostications may prove true, today’s businesses are finding major hurdles when they seek to graduate from pilots and experiments to enterprise-wide AI deployment. Just 5.4% of US businesses, for example, were using AI to produce a product or service in 2024.

Moving from initial forays into AI use, such as code generation and customer service, to firm-wide integration depends on strategic and organizational transitions in infrastructure, data governance, and supplier ecosystems. Organizations must also weigh uncertainties about developments in AI performance and how to measure return on investment.

If organizations seek to scale AI across the business in coming years, however, now is the time to act. This report explores the current state of enterprise AI adoption and offers a playbook for crafting an AI strategy, helping business leaders bridge the chasm between ambition and execution. Key findings include the following:

AI ambitions are substantial, but few have scaled beyond pilots. Fully 95% of companies surveyed are already using AI, and 99% expect to in the future. But few organizations have graduated beyond pilot projects: 76% have deployed AI in just one to three use cases. Because half of companies expect to fully deploy AI across all business functions within two years, however, this year is key to establishing foundations for enterprise-wide AI.

AI readiness spending is slated to rise significantly. Overall, AI spending in 2022 and 2023 was modest or flat for most companies, with only one in four increasing their spending by more than a quarter. That is set to change in 2024, with nine in ten respondents expecting to increase AI spending on data readiness (including platform modernization, cloud migration, and data quality) and in adjacent areas like strategy, cultural change, and business models. Four in ten expect to increase spending by 10 to 24%, and one-third expect to increase spending by 25 to 49%.

Data liquidity is one of the most important attributes for AI deployment. The ability to seamlessly access, combine, and analyze data from various sources enables firms to extract relevant information and apply it effectively to specific business scenarios. It also eliminates the need to sift through vast data repositories, as the data is already curated and tailored to the task at hand.

Data quality is a major limitation for AI deployment. Half of respondents cite data quality as the most limiting data issue in deployment. This is especially true for larger firms with more data and substantial investments in legacy IT infrastructure. Companies with revenues of over US $10 billion are the most likely to cite both data quality and data infrastructure as limiters, suggesting that organizations presiding over larger data repositories find the problem substantially harder.

Companies are not rushing into AI. Nearly all organizations (98%) say they are willing to forgo being the first to use AI if that ensures they deliver it safely and securely. Governance, security, and privacy are the biggest brake on the speed of AI deployment, cited by 45% of respondents (and a full 65% of respondents from the largest companies).

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.