Transforming software with generative AI

Generative AI’s promises for the software development lifecycle (SDLC)—code that writes itself, fully automated test generation, and developers who spend more time innovating than debugging—are as alluring as they are ambitious. Some bullish industry forecasts project a 30% productivity boost from AI developer tools, which, if realized, could inject more than $1.5 trillion into global GDP.

But while there’s little doubt that software development is undergoing a profound transformation, separating the hype and speculation from the realities of implementation and ROI is no simple task. As with previous technological revolutions, the dividends won’t be instant. “There’s an equivalency between what’s going on with AI and when digital transformation first happened,” observes Carolina Dolan Chandler, chief digital officer at Globant. “AI is an integral shift. It’s going to affect every single job role in every single way. But it’s going to be a long-term process.”

Where exactly are we on this transformative journey? How are enterprises navigating this new terrain—and what’s still ahead? To investigate how generative AI is impacting the SDLC, MIT Technology Review Insights surveyed more than 300 business leaders about how they’re using the technology in their software and product lifecycles.

The findings reveal that generative AI has rich potential to revolutionize software development, but that many enterprises are still in the early stages of realizing its full impact. While adoption is widespread and accelerating, there are significant untapped opportunities. This report explores the projected course of these advancements, as well as how emerging innovations, including agentic AI, might bring about some of the technology’s loftier promises.

Key findings include the following:

Substantial gains from generative AI in the SDLC still lie ahead. Only 12% of surveyed business leaders say that the technology has “fundamentally” changed how they develop software today. Future gains, however, are widely anticipated: Thirty-eight percent of respondents believe generative AI will “substantially” change the SDLC across most organizations in one to three years, and another 31% say this will happen in four to 10 years.

Use of generative AI in the SDLC is nearly universal, but adoption is not comprehensive. A full 94% of respondents say they’re using generative AI for software development in some capacity. One-fifth (20%) describe generative AI as an “established, well-integrated part” of their SDLC, and one-third (33%) report it’s “widely used” in at least part of their SDLC. Nearly one-third (29%), however, are still “conducting small pilots” or adopting the technology on an individual-employee basis (rather than via a team-wide integration).

Generative AI is not just for code generation. Writing software may be the most obvious use case, but most respondents (82%) report using generative AI in at least two phases of the SDLC, and one-quarter (26%) say they are using it across four or more. The most common additional use cases include designing and prototyping new features, streamlining requirement development, fast-tracking testing, improving bug detection, and boosting overall code quality.

Generative AI is already meeting or exceeding expectations in the SDLC. Even with this room to grow in how fully they integrate generative AI into their software development workflows, 46% of survey respondents say generative AI is already meeting expectations, and 33% say it “exceeds” or “greatly exceeds” expectations.

AI agents represent the next frontier. Looking to the future, almost half (49%) of leaders believe advanced AI tools, such as assistants and agents, will lead to efficiency gains or cost savings. Another 20% believe such tools will lead to improved throughput or faster time to market.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Cloud transformation clears businesses for digital takeoff

In an age when customer experience can make or break a business, Cathay Pacific is embracing cloud transformation to enhance service delivery and revolutionize operations from the inside out. It’s not just technology companies that face pressure to deliver better customer service, do more with data, and improve agility. An almost 80-year-old airline, Cathay Pacific embarked on its digital transformation journey in 2014, after a critical IT disruption became the catalyst for revamping its technology.

By embracing the cloud, the airline has not only streamlined operations but also paved the way for innovative solutions like DevSecOps and AI integration. This shift has enabled Cathay to deliver faster, more reliable services to both passengers and staff, while maintaining a robust security framework in an increasingly digital world. 

According to Rajeev Nair, general manager of IT infrastructure and security at Cathay Pacific, becoming a digital-first airline was met with early resistance from both business and technical teams. The early stages required a lot of heavy lifting as the airline shifted legacy apps first from its server room to a dedicated data center and then to the cloud. From there began the process of modernization that Cathay Pacific, now in the final stages of its transformation, continues to fine-tune.

The cloud migration also helped Cathay align with its ESG goals. “Two years ago, if you asked me what IT could do for sustainability, we would’ve been clueless,” says Nair. However, through cloud-first strategies and green IT practices, the airline has made notable strides in reducing its carbon footprint. Currently, the business is moving to a smaller data center, a change that will significantly reduce its physical infrastructure and carbon emissions by 2025.

The broader benefits of this cloud transformation for Cathay Pacific go beyond sustainability. Agility, time-to-market, and operational efficiency have improved drastically. “If you ask many of the enterprises, they would probably say that shifting to the cloud is all about cost-saving,” says Nair. “But for me, those are secondary aspects and the key is about how to enable the business to be more agile and nimble so that the business capability could be delivered much faster by IT and the technology team.”

By 2025, Cathay Pacific aims to have 100% of its business applications running on the cloud, significantly enhancing its agility, customer service, and cost efficiency, says Nair.

As Cathay Pacific continues its digital evolution, Nair remains focused on future-proofing the airline through emerging technologies. Looking ahead, he is particularly excited about the potential of AI, generative AI, and virtual reality to further enhance both customer experience and internal operations. From more immersive VR-based training for cabin crew to enabling passengers to preview in-flight products before boarding, these innovations are set to redefine how the airline engages with its customers and staff. 

“We have been exploring that for quite some time, but we believe that it will continue to be a mainstream technology that can change the way we serve the customer,” says Nair.

This episode of Business Lab is produced in association with Infosys Cobalt.

Full Transcript 

Megan Tatum: From MIT Technology Review, I’m Megan Tatum, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. 

Our topic today is cloud transformation to meet business goals and customer needs. It’s not just tech companies that have to stay one step ahead. Airlines too are under pressure to deliver better customer service, do more with data, and improve agility. 

Two words for you: going further. 

My guest is Rajeev Nair, who is the general manager of IT infrastructure and security at Cathay Pacific. This podcast is produced in association with Infosys Cobalt. Welcome, Rajeev. 

Rajeev Nair: Thank you. Thank you, Megan. Thank you for having me. 

Megan: Thank you ever so much for joining us. Now to get some context for our conversation today, could you first describe how Cathay Pacific’s digital transformation journey began, and explain, I guess, what stage of this transformation this almost 80-year-old airline is currently in, too? 

Rajeev: Sure, definitely, Megan. So for Cathay, we started this transformation journey probably a decade back, way back in 2014. It all started when we faced some major service disruption within Cathay IT that had a massive impact on the business operation. That prompted us to initiate this transformation journey. So the first thing is we started looking at many of our legacy applications. Back in those days we still had mainframe systems that provided so many of our critical services. We started looking at migrating those legacy apps first, moving them off that legacy software and into a proper data center. Back in those days, our data center used to be our corporate headquarters. We didn’t have a dedicated data center and it used to be in a server room. So those were the initial stages of our transformation journey, just the basic building blocks. So we started moving into a proper data center so that resilience and availability could be improved. 

And as a second phase, we started looking at the cloud. Those days, cloud was just about to kick off in this part of the world. We started looking at migrating to the cloud, and there has been a huge challenge, and resistance even, from the business as well as from the technology team. Once we started moving, shifting apps to the cloud, we had multiple transformation programs to do those modernization activities. Once that was done, the third phase of the journey was more about your network. Once your applications are moved to the cloud, your network design needs to be completely changed. Then we started looking at how we could modernize our network, because Cathay operates in about 180 regions across the world. So our network is very crucial for us. We started looking at redesigning our network. 

And then, it comes to your security aspects. With things moving to the cloud and your network design getting changed, your cybersecurity needs heavy lifting to accommodate the modern world. We started focusing on cybersecurity initiatives, and our security posture has improved a lot over the last few years. And with those basic building blocks done on the hardware and on the technology side, then comes your IT operations. Because one is your hardware and software piece, but how do you sustain your processes to ensure that they can support those changing technology landscapes? We started investing a lot around the IT operations side, where things like ITIL processes have been revisited. We started adopting many of the DevOps and the DevSecOps practices. So a lot of emphasis around processes and practices to help the team move forward, right? 

And those operations initiatives are in place. As we stand today, we are at the final stage of our cloud journey where we are looking at how we can optimize it better. So we shifted things to the cloud, and that heavy lifting was done in the early phases. Now we are focusing on how we can rewrite or refactor your application so that it can better leverage your cloud technologies, where we could optimize the performance, thereby optimizing your usage and the cloud resources, wherein you could save on the cost as well as on the sustainability aspect. That is where we stand. By 2025, we are looking at moving 100% of our business applications to the cloud and also reducing our physical footprint in our data centers as well. 

Megan: Fantastic. And you mentioned sustainability there. I wonder, how does the focus on environmental, social, and governance goals, or ESG, tie into your wider technology strategy? 

Rajeev: Sure. And to be very honest, Megan, if you asked me this question two years back, we would’ve been clueless on what IT could do from a sustainability aspect. But over the last two years, there has been a lot of focus around ESG components within the technology space where we have done a lot of initiatives since last year to improve and be efficient on the sustainability front. So a couple of key areas that we have done. One is definitely the cloud-first strategy where adopting the cloud-first policy reduces your carbon footprint and it also helps us in migrating away from our data center. So as we speak, we are doing a major project to further reduce our data center size by relocating to a much smaller data center, which will be completed by the end of next year. That will definitely help us to reduce our footprint. 

The second is around adopting the various green IT practices, things like energy-efficient devices, be it your PCs or laptops or virtualization, and e-waste management policies and management aspects. Some of the things are very basic and fundamental in nature. Stuff like we moved away from a dual monitor to a single monitor, wherein we could reduce your energy consumption by half, or changing some of your software policies like screen timeouts and putting a monitor in standby. Those kinds of basic things really helped us to optimize and manage. And the last one is around FinOps. So FinOps is a practice that is being heavily adopted in the cloud organization, but it is just not about optimizing your cost, because by adopting the FinOps practices and tying in with the GreenOps processes, we are able to focus a lot on reducing our CO2 footprint and optimizing sustainability. Those are some of the practices that we have been doing at Cathay. 

Megan: Yeah, fantastic benefits from relatively small changes there. Other than ESG, what are the other benefits you’ve found for an enterprise like Cathay Pacific in shifting from those legacy systems to the cloud? 

Rajeev: For me, the key is about agility and time-to-market capability. If you ask many of the enterprises, they would probably say that shifting to the cloud is all about cost-saving. But for me, those are secondary aspects. The key is about how to enable the business to be more agile and nimble so that the business capability can be delivered much faster by IT and the technology team. So as an example, gone are the days when we took a few months to provision hardware and have the platform and the applications ready. Now the platforms are delivered to the developers within an hour’s time so that the developers can quickly build their development environment and be ready for development and testing activities. Right? So agility is key and the number one factor. 

The second is that by shifting to the cloud, you’re also leveraging many of the latest technologies that the cloud comes up with and the provider has to offer. Things like capacity and the ability to scale your resources and services up and down according to your business needs and fluctuations are a huge help from a technology aspect. That way you can deliver customer-centered solutions faster and more efficiently than many of our airline competitors. 

And the last one is, of course, your cost-saving aspect and the operational efficiency. By moving away from the legacy systems, we can reduce a lot of capex [capital expenditure]. Say, for example, I don’t need to spend money on investing in hardware and spend resources to manage that hardware and data center operations, especially in Hong Kong, where human resources are pretty expensive and scarce. It is very important that I rely on these sorts of technologies to manage those optimally. Those are some of the key aspects that we see from a cloud adoption perspective. 

Megan: Fantastic. And it sounds like it’s been a several-year process so far. So after what sounds like pretty heavy investment in moving legacy hardware and on-prem systems to the cloud, what’s your approach now to adapting your IT operations off the back of that? 

Rajeev: Exactly. That is sort of what I touched on earlier in our transformation journey, but yeah, absolutely. By moving to the cloud, it is just not about the hardware, but it’s also about how your operations and your processes align with this changing technology and new capabilities. And, for example, by adopting a more agile and scalable approach to managing IT infrastructure and applications as well. Also leveraging the data and insights that the cloud enables. To achieve this, the fundamental aspect is how you can revisit and fine-tune your IT service management processes, and that is where the core of IT operations has been built in the past. And to manage that properly, over the last three years we have been implementing a new IT service management solution, which is built on a product called ServiceNow. It is built on the core ITIL process framework to help us manage service management, operations management, and asset management. 

Those are some of the capabilities that we rolled out with the help of partners like Infosys, so that they could provide a framework to fine-tune and optimize IT processes. And we also adopted things like DevOps and DevSecOps, because what we have also noticed is that processes like ITIL, which were very heavy around support activities over the last few years, are also shifting. So we wanted to adopt some of these development practices into the support and operations functions to be more agile by shifting left some of these capabilities. And in this journey, Infosys has been our key partner, not only on the cloud transformation side but also on the implementation of ServiceNow, which is our key service management tool, where they provided us end-to-end support from the planning phase, or the initial conceptual phase, into design and development, and on to deployment and maintenance. We haven’t completed this journey, and it’s still a project that is currently ongoing; by 2025 we should be able to complete this successfully across the enterprise. 

Megan: Fascinating. It’s an awful lot of change going on. I mean, there must be an internal shift, therefore, that comes with cloud transformation too, I imagine. I wonder, what’s your approach been to upskilling your team to help it excel in this new way of working? 

Rajeev: Yeah, absolutely. And that is always the hardest part. You can change your technology and processes, but changing your people, that’s always the toughest and the hardest bit. And essentially this is all about change management, and that has been one of our struggles in the early part of the cloud transformation journey. What we did is we invested a lot in terms of uplifting our traditional infrastructure team. All the traditional technology teams had to go through that learning curve in adopting cloud technology early in our project. And we also provided a lot of training programs, including some from our cloud partners, who were able to upskill and train these resources. 

But the key thing that we are seeing is that even after providing all those training and upskilling programs, there was a lot of resistance and a lot of doubt in people’s minds about how cloud is going to help the organization. And the best part is what we did: we included these team members in our projects so that they got hands-on experience. And once they started seeing the benefits of these technologies, there was no looking back. And the team was able to completely embrace the cloud technologies, to the point that we still have a traditional technology team who’s supporting the remaining hardware and the servers of the world, but they’re also very keen to shift across the line and adopt and embrace the cloud technology. But it’s been quite a journey for us. 

Megan: That’s great to hear that you’ve managed to bring them along with you. And I suppose it’d be remiss of me if we’re talking about embracing new technologies not to talk about AI, although still in its early stages in most industries. I wonder how is Cathay Pacific approaching AI adoption as well? 

Rajeev: Sure. I think these days none of these conversations can be complete without talking about AI and gen AI. We started the exploratory phase early in the game, especially in this part of the world. But for us, the key is approaching this based on the customer’s pain points and business needs, and then we work backward to identify what type of AI is best suited or relevant to us. In Cathay, currently, we focus on three main types of AI. One is of course conversational AI. Essentially, it is a form of internal and external chatbot. Our chatbot, which we call Vera, serves customers directly and can handle about 50% of the inquiries successfully. And just about two weeks back, we upgraded the chatbot with a new LLM, which is able to be more efficient and much more responsive in terms of humanlike interaction. So that’s one part of the AI that we heavily invested in. 

Second is RPA, or robotic process automation. During the pandemic and post-Covid era especially, there were limited resources available, particularly in Hong Kong and across our supply chain. So RPA, or robotic processes, helps to automate mundane repetitive tasks, which not only fills the resource gap but also directly enhances the employee experience. And so far in Cathay, we have about a hundred bots in production serving various business units, saving approximately 30,000 hours of human activity every year. So that’s the second part. 

The third one is around ML and gen AI. Our digital team and the data science team have developed about 70-plus ML models in Cathay that turn the organization’s data into insights or actionable items. These models help us to make better decisions. For example, what meals to load onto the aircraft on specific routes and in what quantity, what kind of product offers we promote to customers, and the fare loading and pricing of our passenger space as well as cargo bay space. There is a lot of exploration being done in this space as well. And a couple of examples I could relate: if you ever happen to come to Hong Kong, next time at the airport, you could hear the public announcement system, and that is also AI-powered recently. In the past, our staff used to manually make those announcements, and now it has been moved to AI-powered voice technology so that we can be consistent in our announcements. 

Megan: Oh, fantastic. I’ll have to listen for it next time I’m at Hong Kong airport. And you’ve mentioned this topic a couple of times in the conversation. Look, when we’re talking about cloud modernization, cybersecurity can be a roadblock to agility, I guess, if it’s not managed effectively. So could you also tell us in a little more detail how Cathay Pacific has integrated security into its digital transformation journey, particularly with the adoption of development security operations practices that you’ve mentioned? 

Rajeev: Yeah, this is an interesting one. I look after cybersecurity as well as the infrastructure services. With both of these critical functions in my hands, I need to be mindful of both aspects, right? Yes, it’s an interesting one and it has changed over the period of time, and I fully understand why cybersecurity practices need to be rigid, because there is a lot of compliance and it is a highly regulated function, and if something goes wrong, as a CISO we are held accountable for those faults. I can understand why the team is so rigid in their practices. And I also understand from a business perspective it could be perceived as a roadblock to agility. 

One of the key things that we have done in Cathay: we have been following DevOps for quite a number of years, and recently, I think in the last two years, we started implementing DevSecOps into our STLC [software testing life cycle]. And what it essentially means is that rather than the core cybersecurity team being responsible for much of the security testing and those sorts of aspects, we want to shift left some of these capabilities to the developers, so that the people who develop the code are now held accountable for the testing and the quality of the output. And they’re also enabled in terms of the cybersecurity process. Right? 

Of course, when we started off this journey, there was huge resistance from the security team itself, because they don’t really trust the developers trying to do the testing or the testing outputs. But over a period of time, with the introduction of various tools and automation that is put in place, this is now getting into a mature stage, wherein it is now enabling the upstream teams to take care of all the aspects of security, like threat modeling, code scanning, and vulnerability testing. But at the end, the security teams would still be validating and act as a sort of gatekeeper, but in a very light and inbuilt process. And this way we can ensure that our cloud applications are secure by design and by default, and we can deliver them faster and more reliably to our customers. 

In the past, security has always been perceived as the accountability of the cybersecurity team. And by enabling the developers on the security aspects, you now have better ownership in the organization when it comes to cybersecurity, and it is building a better cybersecurity culture within the organization. And that, to me, is key, because from a security aspect, we always say that people are your first line of defense and often they’re also the last line of defense. I’m glad that through these processes we are able to improve that maturity in the organization. 

Megan: Absolutely. And you mentioned that obviously cybersecurity is something that’s really important to a lot of customers nowadays as well. I wondered if you could offer some other examples too of how your digital transformation has improved that customer experience in other ways? 

Rajeev: Yeah, definitely. Maybe I can quote a few examples, Megan. One is around our pilots. You would’ve seen when you travel through the airport or in the aircraft that pilots usually carry a briefcase when they board the flight, and you have probably wondered what exactly they carry. Basically, it contains a bunch of papers: your weather charts, your navigation routes, the flight plans, the crew details. It’s a whole stack of paper that they have to carry on each and every flight. And in Cathay, through digitization, we have automated those processes, where now they carry an iPad instead of a bunch of papers or a briefing pack. So that iPad includes all the software that is required for the captain to operate the flight in a legal and safe manner. 

Paperless cockpit operation is nothing new. Many airlines have attempted to do that, but I should say that Cathay has been at the forefront in truly establishing a paperless operation, where many of the other airlines have shown great interest in using our software. That is one aspect from a flight crew perspective. Second, from a customer perspective, we have an app called Customer 360, which is a completely in-house developed model that has all the customer’s direct transactions, surveys, and how they interact at the various checkpoints with our crew or at boarding. You have all this data feed for a particular customer, where our agents or the cabin crew can understand the customer’s sentiment and their reaction to service recovery actions. 

Say, for example, the customer calls up a call center and asks for a refund or miles compensation. Based on the historical usage, we can prioritize the best action to improve customer satisfaction. We are connected to all these models and enable the frontline teams so that they can use this when they engage with the customer. For example, at the airport, our agents will be able to see a lot of useful insights about the customers beyond the basic information like the flight itinerary or the online shopping history at the Cathay shop, et cetera, so that they can see the overall satisfaction level and get additional insights on recommended actions to restore or improve the customer satisfaction level. This is used by our frontline agents at the airport, our cabin crew, as well as the whole airport team and the customer team, so that they have great consistency in the service no matter what touchpoint the customers choose to contact us through. 

Megan: Fantastic. 

Rajeev: So those are a few examples, looking from a back-end as well as a frontline team perspective. 

Megan: Yeah, absolutely. I’m sure there’s a few people listening who were wondering what pilots carry in that suitcase. So thank you so much for clearing that up. And finally, Rajeev, I guess looking ahead, what emerging technologies are you excited to explore further going forward to enhance digital capabilities and customer experience in the years to come? 

Rajeev: Yeah, so we will continue to explore AI and gen AI capabilities, which have been the spotlight for the last 18 months or so, be it for the passenger or even for the staff internally. We will continue to explore that. But apart from AI, one other aspect I believe could go a great way is around AR and VR capabilities, basically virtual reality. We have been exploring that for quite some time, but we believe that it will continue to be a mainstream technology that can change the way we serve the customer. Say, for example, in Cathay, we already have a VR cave for our cabin crew training, with virtual reality capabilities, and in a few months’ time, we are actually launching a VR-based learning facility where we will be able to provide a more immersive learning experience for the cabin crew and later for the other employees. 

Basically, before cabin crew are able to operate a flight, they go through rigorous training at Cathay City, our headquarters, to learn how to serve our passengers, how to handle an emergency situation, those sorts of aspects. And in many cases, we fly the crew from various outports and various countries back into Hong Kong to train them and equip them through these training activities. You can imagine that costs us a lot of money and effort to bring all those people back to Hong Kong. And by having VR capabilities, we are able to do that anywhere in the world without needing that physical presence. That’s one area where it’ll go mainstream. 

The second is around other business units. Apart from the cabin crew, we are also experimenting with VR on the customer front. For example, we are launching a new business class seat product, which we call the Aria Suite, by next year. And VR technology will help customers visualize the seat details without having to get on board. So even before flying, they’re able to experience the product on the ground. At our physical shop in Hong Kong, customers can now use virtual reality technology to visualize how our designer furniture and lifestyle products fit in their sitting rooms. The list of VR capabilities goes on. And this is also a great and important way to engage with our customers in particular. 

Megan: Wow. Sounds like some exciting stuff on the way. Thank you ever so much, Rajeev, for talking us through that. That was Rajeev Nair, the general manager of IT infrastructure and security at Cathay Pacific, who I spoke with from an unexpectedly sunny Brighton, England.

That’s it for this episode of Business Lab. I’m your host, Megan Tatum. I’m a contributing editor and host for Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com. 

This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review; this episode was produced by Giro Studios. Thanks for listening. 

Data strategies for AI leaders

Organizations are starting the heavy lifting to get real business value from generative AI. As Arnab Chakraborty, chief responsible AI officer at Accenture, puts it, “2023 was the year when clients were amazed with generative AI and the possibilities. In 2024, we are starting to see scaled implementations of responsible generative AI programs.”

Some generative AI efforts remain modest. As Neil Ward-Dutton, vice president for automation, analytics, and AI at IDC Europe, describes it, this is “a classic kind of automation: making teams or individuals more productive, getting rid of drudgery, and allowing people to deliver better results more quickly.” Most companies, though, have much greater ambitions for generative AI: they are looking to reshape how they operate and what they sell.

Great expectations for generative AI

The expectation that generative AI could fundamentally upend business models and product offerings is driven by the technology’s power to unlock vast amounts of data that were previously inaccessible. “Eighty to 90% of the world’s data is unstructured,” says Baris Gultekin, head of AI at AI data cloud company Snowflake. “But what’s exciting is that AI is opening the door for organizations to gain insights from this data that they simply couldn’t before.”

In a poll conducted by MIT Technology Review Insights, global executives were asked about the value they hoped to derive from generative AI. Many say they are prioritizing the technology’s ability to increase efficiency and productivity (72%), increase market competitiveness (55%), and drive better products and services (47%). Fewer see the technology primarily as a driver of increased revenue (30%) or reduced costs (24%), which suggests executives hold loftier ambitions. Respondents’ top ambitions for generative AI seem to work hand in hand. More than half of companies say new routes toward market competitiveness are one of their top three goals, and the two likely paths toward achieving this are increased efficiency and better products or services.

For companies rolling out generative AI, these are not necessarily distinct choices. Chakraborty sees a “thin line between efficiency and innovation” in current activity. “We are starting to notice companies applying generative AI agents for employees, and the use case is internal,” he says, but the time saved on mundane tasks allows personnel to focus on customer service or more creative activities. Gultekin agrees. “We’re seeing innovation with customers building internal generative AI products that unlock a lot of value,” he says. “They’re being built for productivity gains and efficiencies.”

Chakraborty cites marketing campaigns as an example: “The whole supply chain of creative input is getting re-imagined using the power of generative AI. That is obviously going to create new levels of efficiency, but at the same time probably create innovation in the way you bring new product ideas into the market.” Similarly, Gultekin reports that a global technology conglomerate and Snowflake customer has used AI to make “700,000 pages of research available to their team so that they can ask questions and then increase the pace of their own innovation.”

The impact of generative AI on chatbots—in Gultekin’s words, “the bread and butter of the recent AI cycle”—may be the best example. The rapid expansion of chatbot capabilities using AI straddles the line between improving an existing tool and creating a new one. It is unsurprising, then, that 44% of respondents see improved customer satisfaction as a way that generative AI will bring value.

A closer look at our survey results reflects this overlap between productivity enhancement and product or service innovation. Nearly one-third of respondents (30%) included both increased productivity and innovation in the top three types of value they hope to achieve with generative AI. The first, in many cases, will serve as the main route to the other.

But efficiency gains are not the only path to product or service innovation. Some companies, Chakraborty says, are “making big bets” on wholesale innovation with generative AI. He cites pharmaceutical companies as an example. They, he says, are asking fundamental questions about the technology’s power: “How can I use generative AI to create new treatment pathways or to reimagine my clinical trials process? Can I accelerate the drug discovery time frame from 10 years to five years to one?”

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Productivity Electrified: Tech That Is Supercharging Business

This sponsored session was presented by Ford Pro at MIT Technology Review’s 2024 EmTech MIT event.

A decarbonized transportation system is a necessary prerequisite for a sustainable economy. In the transportation industry, the road to electrification and greater technology adoption can also increase business bottom lines and reduce downstream costs to taxpayers. Focusing on early adopters such as first responders, local municipalities, and small business owners, we’ll discuss common misconceptions, barriers to adoption, implementation strategies, and how these insights carry over into widespread adoption of emerging technology and electric vehicles.


About the speaker

Wanda Young, Global Chief Marketing & Experience Officer, Ford Pro

Wanda Young is a visionary brand marketer and digital transformation expert who thrives at the intersection of brand, digital, technology, and data, paired with a deep understanding of the consumer mindset. She gained her experience working for the largest brands in retail, sports & entertainment, consumer products, and electronics. She is a successful brand marketer and change agent whom organizations seek out to drive digital and data transformation – a chief experience officer years before the title was invented. In her roles managing multiple notable brands, including Samsung, Disney, ESPN, Walmart, Alltel, and Acxiom, she developed knowledge of the interconnectedness of brand, digital, and data; of the importance of customer experience across all touchpoints; of the power of data and localization; and of the in-the-trenches accountability needed to drive outcomes. Now at Ford Pro, the commercial division of Ford Motor Company, she is focused on helping grow the newly launched division and brand, which offers commercial customers what only Ford can: an integrated lineup of vehicles and services designed to meet the needs of all businesses and keep their productivity on pace to drive growth.

Young has enjoyed a series of firsts in her career, including launching ESPN+, developing Walmart’s first social media presence and building 5,000 of its local Facebook pages (which are still live today and continue to scale), developing the first weather-triggered ad product with The Weather Company, designing an ad product with Google called Local Inventory Ads, being part of the team that took Alltel Wireless private (the company later sold to Verizon Wireless), and launching the Acxiom.com website on her first Mother’s Day with her daughter on her lap. She serves on the board of, or is involved in, a number of industry organizations and has received many prestigious awards. Young received a Bachelor of Arts in English with a minor in advertising from the University of Arkansas.

Preventing Climate Change: A Team Sport

This sponsored session was presented by MEDC at MIT Technology Review’s 2024 EmTech MIT event.

Michigan is at the forefront of the clean energy transition, setting an example in mobility and automotive innovation. Other states and organizations can learn from Michigan’s approach to public-private partnerships, actionable climate plans, and business-government alignment. Progressive climate policies are not only crucial for sustainability but also for attracting talent in today’s competitive job market.

Read more from MIT Technology Review Insights & MEDC about addressing climate change impacts


About the speaker

Hilary Doe, Chief Growth & Marketing Officer, Michigan Economic Development Corporation

As Chief Growth & Marketing Officer, Hilary Doe leads the state’s efforts to grow Michigan’s population, economy, and reputation as the best place to live, work, raise a family, and start a business. Hilary works alongside the Growing Michigan Together Council on a once-in-a-generation effort to grow Michigan’s population, boost economic growth, and make Michigan the place everyone wants to call home.

Hilary is a dynamic leader in nonprofits, technology, strategy, and public policy. She served as the national director at the Roosevelt Network, where she built and led an organization engaging thousands of young people in civic engagement and social change programming at chapters nationwide, which ultimately earned the organization recognition as a recipient of the MacArthur Award for Creative and Effective Institutions. She also served as Vice President of the Roosevelt Institute, where she oversaw strategy and expanded the Institute’s Four Freedoms Center, with the goal of empowering communities and reducing inequality alongside the greatest economists of our generation. Most recently, she served as President and Chief Strategy Officer at Nationbuilder, working to equip the world’s leaders with software to grow their movements, businesses, and organizations, while spreading democracy.

Hilary is a graduate of the University of Michigan’s Honors College and Ford School of Public Policy, a Detroit resident, and proud Michigander.

Addressing climate change impacts

The reality of climate change has spurred enormous public and private investment worldwide, funding initiatives to mitigate its effects and to adapt to its impacts. That investment has spawned entire industries and countless new businesses, resulting in the creation of new green jobs and contributions to economic growth. In the United States, this includes the single largest climate-related investment in the country’s history, made in 2022 as part of the Inflation Reduction Act.

For most US businesses, however, the costs imposed by climate change and the future risks it poses will outweigh growth opportunities afforded by the green sector. In a survey of 300 senior US executives conducted by MIT Technology Review, every respondent agrees that climate change is either harming the economy today or will do so in the future. Most expect their organizations to contend with extreme weather, such as severe storms, flooding, and extreme heat, in the near term. Respondents also report their businesses are already incurring costs related to climate change.

This research examines how US businesses view their climate change risk and the steps they are taking to adapt to climate change’s impacts. The results make clear that climate considerations, such as frequency of extreme weather and access to natural resources, are now a prime factor in businesses’ site location decisions. As climate change accelerates, such considerations are certain to grow in importance.

Key findings include the following:

Businesses are weighing relocation due to climate risks. Most executives in the survey (62%) deem their physical infrastructure (some or all of it) exposed to the impacts of climate change, with 20% reporting it is “very exposed.” A full 75% of respondents report their organization has considered relocating due to climate risk, with 6% indicating they have concrete plans to relocate facilities within the next five years due to climate factors. And 24% report they have already relocated physical infrastructure to prepare for climate change impacts.

Companies must reckon with the costs of climate change. Nearly all US businesses have already suffered from the effects of climate change, judging by the survey. Weighing most heavily thus far, and likely in the future, are increases in operational costs (affecting 64%) and insurance premiums (63%), as well as disruption to operations (61%) and damage to infrastructure (55%).

Executives know climate change is here, and many are planning for it. Four-fifths (81%) of survey respondents deem climate planning and preparedness important to their business, and one-third describe it as very important. There is a seeming lag at some companies, however, in translating this perceived importance into actual planning: only 62% have developed a climate change adaptation plan, and 52% have conducted a climate risk assessment.

Climate-planning resources are a key criterion in site location. When judging a potential new business site on its climate mitigation features, 71% of executives highlight the availability of climate-planning resources as among their top criteria. Nearly two-thirds (64%) also cite the importance of a location’s access to critical natural resources.

Though climate change will affect everyone, its risks and impacts vary by region. No US region is immune to climate change: a majority of surveyed businesses in every region have experienced at least some negative climate change impacts. However, respondents believe the risks are lowest in the Midwest, with nearly half of respondents (47%) naming that region as least exposed to climate change risk.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Preparing for the unknown: A guide to future-proofing imaging IT

In an era of unprecedented technological advancement, the health-care industry stands at a crossroads. As health expenditure continues to outpace GDP growth in many countries, health-care executives grapple with crucial decisions on investment prioritization for digitization, innovation, and digital transformation. The imperative to provide high-quality, patient-centric care in an increasingly digital world has never been more pressing. At the forefront of this transformation is imaging IT—a critical component that’s evolving to meet the challenges of modern health care.

The future of imaging IT is characterized by interconnected systems, advanced analytics, robust data security, AI-driven enhancements, and agile infrastructure. Organizations that embrace these trends will be well-positioned to thrive in the changing health-care landscape. But what exactly does this future look like, and how can health-care providers prepare for it?

Networked care models: The new paradigm

The adoption of networked care models is set to revolutionize health-care delivery. These models foster collaboration among stakeholders, making patient information readily available and leading to more personalized and efficient care. As we move forward, expect to see health-care organizations increasingly investing in technologies that enable seamless data sharing and interoperability.

Imagine a scenario where a patient’s entire medical history, including imaging data from various specialists, is instantly accessible to any authorized health-care provider. This level of connectivity not only improves diagnosis and treatment but also enhances the overall patient experience.

Data integration and analytics: Unlocking insights

True data integration is becoming the norm in health care. Robust integrated image and data management solutions (IDM) are consolidating patient data from diverse sources. But the real game-changer lies in the application of advanced analytics and AI to this treasure trove of information.

By leveraging these technologies, medical professionals can extract meaningful insights from complex data sets, leading to quicker and more accurate diagnoses and treatment decisions. The potential for improving patient outcomes through data-driven decision-making is immense.

A case in point is the implementation of Syngo Carbon Image and Data Management (IDM) at Tirol Kliniken GmbH in Innsbruck, Austria. This solution consolidates all patient-centric data points in one place, including different image and photo formats, DICOM CDs, and digitalized video sources from endoscopy or microscopy. The system digitizes all documents in their raw formats, enabling the distribution of native, actionable data throughout the enterprise.

Data privacy and edge computing: Balancing innovation and security

As health care becomes increasingly data-driven, concerns about data privacy remain paramount. Enter edge computing—a solution that enables the processing of sensitive patient data locally, reducing the risk of data breaches during processing and transmission.

This approach is crucial for health-care facilities aiming to maintain patient trust while adopting advanced technologies. By keeping data processing close to the source, health-care providers can leverage cutting-edge analytics without compromising on security.
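To make the pattern concrete, here is a minimal sketch in Python of the principle described above. Everything here is illustrative: the names and the toy “model” are hypothetical, not part of any vendor’s API. The idea is simply that raw imaging data is analyzed on a machine inside the facility, and only a de-identified, derived result is ever transmitted outward.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Study:
    patient_id: str    # identifying data; must not leave the facility
    pixel_data: bytes  # raw image data; processed locally only

def run_local_inference(study: Study) -> str:
    """Stand-in for an on-premises model: raw pixels never leave this function."""
    return "flagged" if len(study.pixel_data) % 2 == 0 else "clear"

def result_for_external_sharing(study: Study) -> dict:
    """Only a de-identified, derived result is prepared for off-site transmission."""
    # Illustrative pseudonym only; a real deployment needs a keyed, salted scheme.
    pseudonym = hashlib.sha256(study.patient_id.encode()).hexdigest()[:12]
    return {"study_ref": pseudonym, "finding": run_local_inference(study)}

if __name__ == "__main__":
    study = Study(patient_id="P-1001", pixel_data=b"\x00\x01\x02\x03")
    print(result_for_external_sharing(study))  # raw pixels stay on the edge node
```

The design choice is the boundary itself: anything above the sharing function may touch raw patient data; anything below it sees only derived, pseudonymous output.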

Workflow integration and AI: Enhancing efficiency and accuracy

The integration of AI into medical imaging workflows is set to dramatically improve efficiency, accuracy, and the overall quality of patient care. AI-powered solutions are becoming increasingly common, reducing the burden of repetitive tasks and speeding up diagnosis.

From automated image analysis to predictive modeling, AI is transforming every aspect of the imaging workflow. This not only improves operational efficiency but also allows health-care professionals to focus more on patient care and complex cases that require human expertise.

A quantitative analysis at the Medical University of South Carolina demonstrates the impact of AI integration. With the support of deep learning algorithms fully embedded in the clinical workflow, cardiothoracic radiologists exhibited a reduction in chest CT interpretation times of 22.1% compared to workflows without AI support.

Virtualization: The key to agility

To future-proof their IT infrastructure, health-care organizations are turning to virtualization. This approach allows for modularization and flexibility, making it easier to adapt to rapidly evolving technologies such as AI-driven diagnostics.

Container technology is playing a pivotal role in optimizing resource utilization and scalability. By embracing virtualization, health-care providers can ensure their IT systems remain agile and responsive to changing needs.

Standardization and compliance: Ensuring long-term compatibility

As imaging IT systems evolve, adherence to industry standards and compliance requirements remains crucial. These systems need to seamlessly interact with Electronic Health Records (EHRs), medical devices, and other critical systems.

This adherence ensures long-term compatibility and the ability to accommodate emerging technologies. It also facilitates smoother integration of new solutions into existing IT ecosystems, reducing implementation challenges and costs.

Real-world success stories

The benefits of these technologies are not theoretical—they are being realized in health-care organizations around the world. For instance, the virtualization strategy implemented at University Hospital Essen (UME), one of Germany’s largest university hospitals, has dramatically improved the hospital’s ability to manage increasing data volumes and applications. UME’s critical clinical information systems now run on modular and virtualized systems, allowing experts to design and use innovative solutions, including AI tools that automate tasks previously done manually by IT and medical staff.

Similarly, the PANCAIM project leverages edge computing for pancreatic cancer detection. This EU-funded initiative uses Siemens Healthineers’ edge computing approach to develop and validate AI algorithms. At Karolinska Institutet, Sweden, an algorithm was implemented for a real pancreatic cancer case, ensuring sensitive patient data remains within the hospital while advancing AI validation in clinical settings.

Another innovative approach is the concept of a Common Patient Data Model (CPDM). This standardized framework defines how patient data is organized, stored, and exchanged across different health-care systems and platforms, addressing interoperability challenges in the current health-care landscape.
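The article does not define the CPDM’s actual schema, so purely as an illustration of the idea, here is a minimal Python sketch of a shared patient model with an adapter that normalizes records from a hypothetical legacy system. Every field name and the `from_legacy_ris` helper are invented for the example.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ImagingStudy:
    modality: str      # e.g., "CT", "MR", "US"
    performed_at: str  # ISO 8601 timestamp
    report_text: str = ""

@dataclass
class PatientRecord:
    patient_id: str    # one stable identifier shared across systems
    name: str
    studies: List[ImagingStudy] = field(default_factory=list)

def from_legacy_ris(row: dict) -> ImagingStudy:
    """Normalize one record from a hypothetical legacy RIS export into the common model."""
    return ImagingStudy(
        modality=row["mod"].upper(),
        performed_at=row["date"],  # assumes the export already uses ISO 8601
        report_text=row.get("report", ""),
    )

record = PatientRecord(patient_id="HOSP-A-42", name="Jane Doe")
record.studies.append(from_legacy_ris({"mod": "ct", "date": "2024-05-01T09:30:00Z"}))
print(record)
```

The point is that each source system maps into one agreed-upon shape, so downstream consumers program against the common model rather than against every vendor’s format.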

The road ahead: Continuous innovation

As we look to the future, it’s clear that technological advancements in radiology will continue at a rapid pace. To stay competitive and provide the best patient care, health-care organizations must prioritize ongoing innovation and the adoption of new technologies.

This includes not only IT systems but also medical devices and treatment methodologies. The health-care providers who embrace this ethos of continuous improvement will be best positioned to navigate the challenges and opportunities that lie ahead.

In conclusion, the future of imaging IT is bright, promising unprecedented levels of efficiency, accuracy, and patient-centricity. By embracing networked care models, leveraging advanced analytics and AI, prioritizing data security, and maintaining agile IT infrastructure, health-care organizations can ensure they’re prepared for whatever the future may hold.

The journey towards future-proof imaging IT may seem daunting, but it’s a necessary evolution in our quest to provide the best possible health care. As we stand on the brink of this new era, one thing is clear: the future of health care is digital, data-driven, and more connected than ever before.

If you want to learn more, you can find more information from Siemens Healthineers.

Syngo Carbon consists of several products which are (medical) devices in their own right. Some products are under development and not commercially available. Future availability cannot be ensured.

The results by Siemens Healthineers customers described herein are based on results that were achieved in the customer’s unique setting. Since there is no “typical” hospital and many variables exist (e.g., hospital size, case mix, level of IT adoption), it cannot be guaranteed that other customers will achieve the same results.

This content was produced by Siemens Healthineers. It was not written by MIT Technology Review’s editorial staff.

Integrating security from code to cloud

The Human Genome Project, SpaceX’s rocket technology, and Tesla’s Autopilot system may seem worlds apart in form and function, but they all share a common characteristic: the use of open-source software (OSS) to drive innovation.

Offering publicly accessible code that can be viewed, modified, and distributed freely, OSS expedites developer productivity and creates a collaborative space for groundbreaking advancements.

“Open source is critical,” says David Harmon, director of software engineering for AMD. “It provides an environment of collaboration and technical advancements. Savvy users can look at the code themselves; they can evaluate it; they can review it and know that the code that they’re getting is legit and functional for what they’re trying to do.”

But OSS can also compromise an organization’s security posture by introducing hidden vulnerabilities that fall under the radar of busy IT teams, especially as cyberattacks targeting open source are on the rise. OSS may contain weaknesses, for example, that can be exploited to gain unauthorized access to confidential systems or networks. Bad actors can even intentionally introduce exploitable openings, or “backdoors,” into OSS that can compromise an organization’s security posture. 

“Open source is an enabler to productivity and collaboration, but it also presents security challenges,” says Vlad Korsunsky, corporate vice president of cloud and enterprise security for Microsoft. Part of the problem is that open source introduces into the organization code that can be hard to verify and difficult to trace. Organizations often don’t know who made changes to open-source code or the intent of those changes, factors that can increase a company’s attack surface.

Complicating matters is that OSS’s increasing popularity coincides with the rise of cloud and its own set of security challenges. Cloud-native applications that run on OSS, such as Linux, deliver significant benefits, including greater flexibility, faster release of new software features, effortless infrastructure management, and increased resiliency. But they also can create blind spots in an organization’s security posture, or worse, burden busy development and security teams with constant threat signals and never-ending to-do lists of security improvements.

“When you move into the cloud, a lot of the threat models completely change,” says Harmon. “The performance aspects of things are still relevant, but the security aspects are way more relevant. No CTO wants to be in the headlines associated with breaches.”

Staying out of the news, however, is becoming increasingly difficult: According to cloud company Flexera’s State of the Cloud 2024 survey, 89% of enterprises use multi-cloud environments, and cloud spend and security top respondents’ lists of cloud challenges. Security firm Tenable’s 2024 Cloud Security Outlook reported that 95% of its surveyed organizations suffered a cloud breach during the 18 months before the survey.

Code-to-cloud security

Until now, organizations have relied on security testing and analysis to examine an application’s output and identify security issues in need of repair. But these days, addressing a security threat requires more than simply seeing how an application is configured at runtime. Rather, organizations must get to the root cause of the problem.

It’s a tall order that presents a balancing act for IT security teams, according to Korsunsky. “Even if you can establish that code-to-cloud connection, a security team may be reluctant to deploy a fix if they’re unsure of its potential impact on the business. For example, a fix could improve security but also derail some functionality of the application itself and negatively impact employee productivity,” he says.

Instead, to properly secure an application, says Korsunsky, IT security teams should collaborate with developers and application security teams to better understand the software they’re working with and to determine the impacts of applying security fixes.

Fortunately, a code-to-cloud security platform with comprehensive cloud-native security can help by identifying and stopping software vulnerabilities at the root. Code-to-cloud creates a pipeline between code repositories and cloud deployment, linking how the application was written to how it performs—“connecting the things that you see in runtime to where they’re developed and how they’re deployed,” says Korsunsky.
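In practice, that link is metadata: which build produced which artifact, and which commit and owner stand behind it. The minimal sketch below (all names and data structures are invented for illustration, not any vendor’s schema) shows how a runtime finding might be traced back to its source:

```python
"""Minimal sketch of code-to-cloud traceability (illustrative only).

A real platform would derive these links from CI/CD metadata,
image digests, and repository history.
"""
from dataclasses import dataclass

@dataclass
class RuntimeFinding:
    resource: str          # e.g., the pod or VM where the issue was observed
    image_digest: str      # container image running on that resource
    vulnerability: str     # e.g., a CVE identifier

@dataclass
class BuildRecord:
    image_digest: str      # image produced by the build
    repo: str              # source repository
    commit: str            # commit that produced the image
    owner: str             # team or individual who owns the code

def trace_to_source(finding: RuntimeFinding,
                    builds: list[BuildRecord]) -> BuildRecord | None:
    """Link a runtime finding back to the build (and owner) that produced it."""
    return next((b for b in builds if b.image_digest == finding.image_digest), None)

builds = [BuildRecord("sha256:abc123", "git@example.com:shop/api.git",
                      "9f1c2d", "payments-team")]
finding = RuntimeFinding("prod/pod-api-7f", "sha256:abc123", "CVE-2024-0001")

origin = trace_to_source(finding, builds)
if origin:
    print(f"{finding.vulnerability} on {finding.resource} "
          f"traces to commit {origin.commit} owned by {origin.owner}")
```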

The result is a more collaborative and consolidated approach to security that enables security teams to identify a code’s owner and to work with that owner to make an application more secure. This ensures that security is not just an afterthought but a critical aspect of the entire software development lifecycle, from writing code to running it in the cloud.

Better yet, an IT security team can gain complete visibility into the security posture of preproduction application code across multi-pipeline and multi-cloud environments while, at the same time, preventing cloud misconfigurations from reaching production environments. Together, these proactive strategies not only stop risks from arising but also free IT security teams to focus on critical emerging threats.

The path to security success

Making the most of a code-to-cloud security platform requires more than innovative tools. Establishing best practices in your organization can ensure a stronger, long-term security posture.

Create a comprehensive view of assets: Today’s organizations rely on a wide array of security tools to safeguard their digital assets. But these solutions must be consolidated into a single pane of glass to manage exposure of the various applications and resources that operate across an entire enterprise, including the cloud. “Companies can’t have separate solutions for separate environments, separate cloud, separate platforms,” warns Korsunsky. “At the end of the day, attackers don’t think in silos. They’re after the crown jewels of an enterprise and they’ll do whatever it takes to get those. They’ll move laterally across environments and clouds—that’s why companies need a consolidated approach.”

Take advantage of artificial intelligence (AI): Many IT security teams are overwhelmed with incidents that require immediate attention. That’s all the more reason for organizations to outsource straightforward security tasks to AI. “AI can sift through the noise so that organizations don’t have to deploy their best experts,” says Korsunsky. For instance, by leveraging its capabilities for comparing and distinguishing written texts and images, AI can be used as a copilot to detect phishing emails. After all, adds Korsunsky, “There isn’t much of an advantage for a human being to read long emails and try to determine whether or not they’re credible.” By taking over routine security tasks, AI frees employees to focus on more critical activities.
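As a toy illustration of the kind of routine triage AI can absorb, the sketch below trains a simple text classifier to flag likely phishing emails so that only suspicious messages reach a human reviewer. It assumes scikit-learn is installed; the tiny inline dataset is invented, and no vendor product is implied:

```python
# Toy phishing-triage sketch (illustrative, not a production detector).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is locked, verify your password at this link now",
    "Urgent: wire transfer needed today, reply with bank details",
    "Team lunch moved to Thursday at noon",
    "Attached are the Q3 roadmap slides we discussed",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

incoming = "Verify your password immediately or your account will be closed"
score = model.predict_proba([incoming])[0][1]
print(f"phishing probability: {score:.2f}")  # escalate to a human only if high
```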

Find the start line: Every organization has a long list of assets to secure and vulnerabilities to fix. So where should they begin? “Protect your most critical assets by knowing where your most critical data is and what’s effectively exploitable,” recommends Korsunsky. This involves conducting a comprehensive inventory of a company’s assets and determining how their data interconnects and what dependencies they require.
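One simple way to turn that inventory into a starting line is to rank assets by the product of their criticality and how effectively exploitable they currently are. The sketch below is illustrative only; the asset names and scores are invented, and real inputs would come from an asset inventory and exposure-management tooling:

```python
# Illustrative prioritization sketch: rank assets by criticality x exploitability.
assets = [
    {"name": "customer-db",   "criticality": 10, "exploitability": 0.7},
    {"name": "internal-wiki", "criticality": 3,  "exploitability": 0.9},
    {"name": "payments-api",  "criticality": 9,  "exploitability": 0.4},
]

for asset in assets:
    asset["risk"] = asset["criticality"] * asset["exploitability"]

# Fix the riskiest, effectively exploitable assets first.
for asset in sorted(assets, key=lambda a: a["risk"], reverse=True):
    print(f'{asset["name"]}: risk score {asset["risk"]:.1f}')
```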

Protect data in use: The Confidential Computing Consortium is a community, part of the Linux Foundation, focused on accelerating the adoption of confidential computing through open collaboration. Confidential computing can protect an organization’s most sensitive data during processing by performing computations in a hardware-based Trusted Execution Environment (TEE), such as Azure confidential virtual machines based on AMD EPYC CPUs. By encrypting data in memory in a TEE, organizations can ensure that their most sensitive data is only processed after a cloud environment has been verified, helping prevent data access by cloud providers, administrators, or unauthorized users.
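The control flow behind that guarantee can be sketched in a few lines. The snippet below is purely conceptual: the helper names and the “measurement” check are invented stand-ins for the hardware-based attestation that a real TEE performs before sensitive data is unsealed:

```python
# Conceptual sketch of a confidential-computing gate (all helpers invented).
# Real TEEs enforce this in hardware; this toy code only mirrors the
# control flow: no verified attestation, no key, no data access.
def verify_attestation(evidence: dict) -> bool:
    """Hypothetical check that the environment matches an expected TEE state."""
    return evidence.get("tee") == "expected-measurement"

def release_key(evidence: dict) -> bytes | None:
    # Release the decryption key only to a verified environment.
    return b"secret-key" if verify_attestation(evidence) else None

print(release_key({"tee": "expected-measurement"}))  # key released
print(release_key({"tee": "tampered"}))              # None: data stays sealed
```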

A solution for the future

As Linux, OSS, and cloud-native applications continue to increase in popularity, so will the pressure on organizations to prioritize security. The good news is that a code-to-cloud approach to cloud security can empower organizations to get a head start on security—during the software development process—while providing valuable insight into an organization’s security posture and freeing security teams to focus on business-critical tasks.

Secure your Linux and open source workloads from code to cloud with Microsoft Azure and AMD. Learn more about Linux on Azure and Microsoft Security.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Readying business for the age of AI

Rapid advancements in AI technology offer unprecedented opportunities to enhance business operations, customer and employee engagement, and decision-making. Executives are eager to see that potential realized: among 100 C-suite respondents polled for WNS Analytics’ “The Future of Enterprise Data & AI” report, 76% say they are already implementing or planning to implement generative AI solutions. Among those same leaders, however, 67% report struggling with data migration, and others cite grappling with data quality, talent shortages, and data democratization issues.

MIT Technology Review Insights recently had a conversation with Alex Sidgreaves, chief data officer at Zurich Insurance; Bogdan Szostek, chief data officer at Animal Friends; Shan Lodh, director of data platforms at Shawbrook Bank; and Gautam Singh, head of data, analytics, and AI at WNS Analytics, to discuss how enterprises can navigate the burgeoning era of AI.

AI across industries

There is no shortage of AI use cases across sectors. Retailers are tailoring shopping experiences to individual preferences by leveraging customer behavior data and advanced machine learning models; traditional AI models can deliver personalized offerings, but generative AI elevates them with tailored communication that accounts for the customer’s persona, behavior, and past interactions. In insurance, generative AI can identify subrogation recovery opportunities that a manual handler might overlook, enhancing efficiency and maximizing recovery potential. Banking and financial services institutions are using AI-driven credit risk management practices to bolster customer due diligence and strengthen anti-money laundering efforts. And in health care, AI is improving diagnostic accuracy through sophisticated image recognition in radiology, allowing for earlier and more precise detection of diseases, while predictive analytics enable personalized treatment plans.

The core of successful AI implementation lies in understanding its business value, building a robust data foundation, aligning with the strategic goals of the organization, and infusing skilled expertise across every level of an enterprise.

  • “I think we should also be asking ourselves, if we do succeed, what are we going to stop doing? Because when we empower colleagues through AI, we are giving them new capabilities [and] faster, quicker, leaner ways of doing things. So we need to be true to even thinking about the org design. Oftentimes, an AI program doesn’t work, not because the technology doesn’t work, but because the downstream business processes or the organizational structures are still kept as before.” — Shan Lodh, director of data platforms, Shawbrook Bank

Whether automating routine tasks, enhancing customer experiences, or providing deeper insights through data analysis, it’s essential to define what AI can do for an enterprise in specific terms. AI’s popularity and broad promises are not good enough reasons to jump headfirst into enterprise-wide adoption. 

“AI projects should come from a value-led position rather than being led by technology,” says Sidgreaves. “The key is to always ensure you know what value you’re bringing to the business or to the customer with the AI. And actually always ask yourself the question, do we even need AI to solve that problem?”

Having a good technology partner is crucial to ensure that value is realized. Gautam Singh, head of data, analytics, and AI at WNS, says, “At WNS Analytics, we keep clients’ organizational goals at the center. We have focused and strengthened around core productized services that go deep in generating value for our clients.” Singh explains their approach, “We do this by leveraging our unique AI and human interaction approach to develop custom services and deliver differentiated outcomes.”

The foundation of any advanced technology adoption is data, and AI is no exception. Singh explains, “Advanced technologies like AI and generative AI may not always be the right choice, and hence we work with our clients to understand the need, to develop the right solution for each situation.” With increasingly large and complex data volumes, effectively managing and modernizing data infrastructure is essential to provide the basis for AI tools.

Breaking down silos and maximizing AI’s impact also requires regular communication and collaboration across departments, from marketing teams working with data scientists to understand customer behavior patterns to IT teams ensuring their infrastructure supports AI initiatives.

  • “I would emphasize growing customer expectations in terms of what customers expect our businesses to offer them and the quality and speed of service we provide. At Animal Friends, we see the biggest generative AI potential in sophisticated chatbots and voice bots that can serve our customers 24/7, deliver the right level of service, and remain cost effective for our customers.” — Bogdan Szostek, chief data officer, Animal Friends

Investing in domain experts with insight into regulations, operations, and industry practices is just as necessary to successfully deploying AI systems as the right data foundation and strategy. Continuous training and upskilling are essential to keep pace with evolving AI technologies.

Ensuring AI trust and transparency

Creating trust in generative AI implementation requires the same mechanisms employed for all emerging technologies: accountability, security, and ethical standards. Being transparent about how AI systems are used, the data they rely on, and the decision-making processes they employ can go a long way toward forging trust among stakeholders. In fact, “The Future of Enterprise Data & AI” report finds that 55% of organizations identify “building trust in AI systems among stakeholders” as the biggest challenge when scaling AI initiatives.

“We need talent, we need communication, we need the ethical framework, we need very good data, and so on,” says Lodh. “Those things don’t really go away. In fact, they become even more necessary for generative AI, but of course the usages are more varied.” 

AI should augment human decision-making and business workflows. Guardrails with human oversight ensure that enterprise teams have access to AI tools but are in control of high-risk and high-value decisions.

“Bias in AI can creep in from almost anywhere and will do so unless you’re extremely careful. Challenges fall into three buckets: you’ve got privacy challenges; data quality and completeness challenges; and then training AI systems on data that’s biased, which is easily done,” says Sidgreaves. She emphasizes that it is vital to ensure data is up to date, accurate, and clean: high-quality data enhances the reliability and performance of AI models, and regular audits and data quality checks help maintain its integrity.
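As a concrete illustration of what such recurring checks might look like, here is a minimal sketch of an automated data-quality audit. It assumes pandas is available; the sample data, column names, and staleness cutoff are all invented:

```python
# Minimal data-quality audit sketch (sample data is invented).
# Checks like these, run regularly, help catch the gaps, duplicates,
# and staleness that quietly bias downstream AI models.
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "age": [34, None, 29, 29],
    "last_updated": pd.to_datetime(["2025-01-05", "2023-03-01",
                                    "2025-02-10", "2025-02-11"]),
})

report = {
    "rows": len(df),
    "duplicate_ids": int(df["customer_id"].duplicated().sum()),
    "missing_age_pct": float(df["age"].isna().mean() * 100),
    "stale_rows": int((df["last_updated"] < "2024-01-01").sum()),
}
print(report)  # feed into a recurring audit dashboard or alert
```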

An agile approach to AI implementation

ROI is always top of mind for business leaders looking to cash in on the promised potential of AI systems. As technology continues to evolve rapidly and the potential use cases of AI grow, starting small, creating measurable benchmarks, and adopting an agile approach can ensure success in scaling solutions. By starting with pilot projects and scaling successful initiatives, companies can manage risks and optimize resources. Sidgreaves, Szostek, and Lodh stress that while it may be tempting to throw everything at the wall and see what sticks, accessing the greatest returns from expanding AI tools means remaining flexible, strategic, and iterative. 

In insurance, two areas where AI has a significant ROI impact are risk and operational efficiency. Sidgreaves underscores that reducing manual processes is essential for large, heritage organizations, and generative AI and large language models (LLMs) are revolutionizing this aspect by significantly diminishing the need for manual activities.

To illustrate her point, she cites a specific example: “Consider the task of reviewing and drafting policy wording. Traditionally, this process would take an individual up to four weeks. However, with LLMs, this same task can now be completed in a matter of seconds.”  

Lodh adds that establishing ROI at the project’s onset and implementing cross-functional metrics are crucial for capturing a comprehensive view of a project’s impact. For instance, using LLMs for writing code is a great example of how IT and information security teams can collaborate. By assessing the quality of static code analysis generated by LLMs, these teams can ensure that the code meets security and performance standards.
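As a hedged illustration of that collaboration, the sketch below uses only Python’s standard library to gate hypothetical LLM-generated code behind a basic static check before it reaches human review. Real pipelines would layer on full linters, security scanners, and policy checks:

```python
# Sketch: gate LLM-generated code behind a basic static check (stdlib only).
import ast

llm_generated = """
def total(prices):
    return sum(prices)
"""

def passes_static_check(source: str) -> bool:
    """Reject code that does not parse; flag risky calls like eval/exec."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False
    risky = {"eval", "exec"}
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in risky:
                return False
    return True

print("accepted" if passes_static_check(llm_generated) else "rejected")
```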

“It’s very hard because technology is changing so quickly,” says Szostek. “We need to truly apply an agile approach, do not try to prescribe all the elements of the future deliveries in 12, 18, 24 months. We have to test and learn and iterate, and also fail fast if that’s needed.” 

Navigating the future of the AI era 

The rapid evolution of the digital age continues to bring immense opportunities for enterprises globally, from the C-suite to the factory floor. With no shortage of use cases and promises to boost efficiency, drive innovation, and improve customer and employee experiences, few business leaders dismiss the proliferation of AI as mere hype. However, successful and responsible implementation of AI requires a careful balance of strategy, transparency, and robust data privacy and security measures.

  • “It’s really easy as technology people to be driven by the next core thing, but we would have to be solving a business problem. So the key is to always ensure you know what value you’re bringing to the business or to the customer with the AI. And actually always ask yourself the question, do we even need AI to solve that problem?” — Alex Sidgreaves, chief data officer, Zurich Insurance

Fully harnessing the power of AI while maintaining trust means defining clear business values, ensuring accountability, managing data privacy, balancing innovation with ethical use, and staying ahead of future trends. Enterprises must remain vigilant and adaptable, committed to ethical practices and an agile approach to thrive in this rapidly changing business landscape.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

The rise of the data platform for hybrid cloud

Whether pursuing digital transformation, exploring the potential of AI, or simply looking to simplify and optimize existing IT infrastructure, today’s organizations must do this in the context of increasingly complex multi-cloud environments. These complicated architectures are here to stay—2023 research by Enterprise Strategy Group, for example, found that 87% of organizations expect their applications to be distributed across still more locations in the next two years.

Scott Sinclair, practice director at Enterprise Strategy Group, outlines the problem: “Data is becoming more distributed. Apps are becoming more distributed. The typical organization has multiple data centers, multiple cloud providers, and umpteen edge locations. Data is all over the place and continues to be created at a very rapid rate.”

Finding a way to unify this disparate data is essential. In doing so, organizations must balance the explosive growth of enterprise data; the need for an on-premises, cloud-like consumption model to mitigate cyberattack risks; and continual pressure to cut costs and improve performance.

Sinclair summarizes: “What you want is something that can sit on top of this distributed data ecosystem and present something that is intuitive and consistent that I can use to leverage the data in the most impactful way, the most beneficial way to my business.”

For many, the solution is an overarching software-defined, virtualized data platform that delivers a common data plane and control plane across hybrid cloud environments. Ian Clatworthy, head of data platform product marketing at Hitachi Vantara, describes a data platform as “an integrated set of technologies that meets an organization’s data needs, enabling storage and delivery of data, the governance of data, and the security of data for a business.”
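In code terms, the idea is one consistent interface in front of otherwise siloed backends. The sketch below is a deliberately simplified illustration with invented in-memory stand-ins for file and object silos, not a description of any vendor’s platform:

```python
# Illustrative sketch of a common data plane: one interface, many backends.
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    @abstractmethod
    def read(self, key: str) -> bytes: ...
    @abstractmethod
    def write(self, key: str, data: bytes) -> None: ...

class ObjectStore(StorageBackend):
    """Stand-in for a cloud object-storage silo."""
    def __init__(self): self._blobs = {}
    def read(self, key): return self._blobs[key]
    def write(self, key, data): self._blobs[key] = data

class FileStore(StorageBackend):
    """Stand-in for an on-premises file-storage silo."""
    def __init__(self): self._files = {}
    def read(self, key): return self._files[key]
    def write(self, key, data): self._files[key] = data

class DataPlatform:
    """Routes requests to the right silo behind one consistent interface."""
    def __init__(self, backends: dict[str, StorageBackend]):
        self._backends = backends
    def write(self, location: str, key: str, data: bytes) -> None:
        self._backends[location].write(key, data)
    def read(self, location: str, key: str) -> bytes:
        return self._backends[location].read(key)

platform = DataPlatform({"cloud": ObjectStore(), "datacenter": FileStore()})
platform.write("cloud", "report.csv", b"q1,q2\n1,2")
print(platform.read("cloud", "report.csv"))
```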

Gartner projects that these consolidated data storage platforms will constitute 70% of file and object storage by 2028, doubling from 35% in 2023. The research firm underscores that “Infrastructure and operations leaders must prioritize storage platforms to stay ahead of business demands.”

A transitional moment for enterprise data

Historically, organizations have stored their various types of data—file, block, object—in separate silos. Why change now? Because two main drivers are rendering traditional data storage schemes inadequate for today’s business needs: digital transformation and AI.

As digital transformation initiatives accelerate, organizations are discovering that having distinct storage solutions for each workload is inadequate for their escalating data volumes and changing business landscapes. The complexity of the modern data estate hinders many efforts toward change.

Clatworthy says that when organizations move to hybrid cloud environments, they may find, for example, that they have mainframe or data center data stored in one silo, block storage running on an appliance, apps running file storage, another silo for public cloud, and a separate VMware stack. The result is increased complexity and cost in their IT infrastructure, as well as reduced flexibility and efficiency.

Then, Clatworthy adds, “When we get to the world of generative AI that’s bubbling around the edges, and we’re going to have this mass explosion of data, we need to simplify how that data is managed so that applications can consume it. That’s where a platform comes in.”

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.