Actionable insights enable smarter business buying

For decades, procurement was seen as a back-office function focused on cost-cutting and supplier management. But that view is changing as supply chain disruptions and fluctuating consumer behavior ripple across the economy. Savvy leaders now understand procurement’s potential to deliver unprecedented levels of efficiency, insights, and strategic capability across the business.

However, tapping into procurement’s potential for generating value requires mastering the diverse needs of today’s global and hybrid businesses, navigating an increasingly complex supplier ecosystem, and wrangling the vast volumes of data generated by a rapidly digitalizing supply chain. Advanced procurement tools and technologies can support all three.

Purchasing the products and services a company needs to support its daily operations involves thousands of individual decisions, from a remote worker selecting a computer keyboard to a materials expert contracting with suppliers. Keeping the business running requires procurement processes and policies set by a chief procurement officer (CPO) and team who “align their decisions with company goals, react to changes with speed, and are agile enough to ensure a company has the right products at the right time,” says Rajiv Bhatnagar, director of product and technology at Amazon Business.

At the same time, he says, the digitalization of the supply chain has created “a jungle of data,” challenging procurement to “glean insights, identify trends, and detect anomalies” with record speed. The good news is that advanced analytics tools can tackle these obstacles and establish a data-driven, streamlined approach to procurement. Aggregating the copious data produced by enterprise procurement—and empowering procurement teams to recognize and act on patterns in that data—enables speed, agility, and smarter decision-making.

Today’s executives increasingly look to data and analytics to enable better decision-making in a challenging and fast-changing business climate. Procurement teams are no exception. In fact, 65% of procurement professionals report having an initiative aimed at improving data and analytics, according to The Hackett Group’s 2023 CPO Agenda report.

And for good reason—analytics can significantly enhance supply chain visibility, improve buying behavior, strengthen supply chain partnerships, and drive productivity and sustainability. Here’s how.

Gaining full visibility into purchasing activity

Just getting the full view of a large organization’s procurement is a challenge. “People involved in the procurement process at different levels with different goals need insight into the entire process,” says Bhatnagar. But that’s not easy given the layers upon layers of data being managed by procurement teams, from individual invoice details to fluctuating supplier pricing. Complicating matters further is the fact that this data exists both within and outside of the procurement organization.

Fortunately, analytics tools deliver greater visibility into procurement by consolidating data from myriad sources. This allows procurement teams to mine the most comprehensive set of procurement information for “opportunities for optimization,” says Bhatnagar. For instance, procurement teams with a clear view of their organization’s data may discover an opportunity to reduce complexity by consolidating suppliers or shifting from making repeated small orders to more cost-efficient bulk purchasing.

Identifying patterns—and responding quickly

When carefully integrated and analyzed over time, procurement data can reveal meaningful patterns—indications of evolving buying behaviors and emerging trends. These patterns can help to identify categories of products with higher-than-normal spending, missed targets for meeting supplier commitments, or a pattern of delays for an essential business supply. The result, says Bhatnagar, is information that can improve budget management by allowing procurement professionals to “control rogue spend” and modify a company’s buying behavior.

In addition to highlighting unwieldy spending, procurement data can provide a glimpse into the future. These days, the world moves at a rapid clip, requiring organizations to react quickly to changing business circumstances. Yet only 25% of firms say they are able to identify and predict supply disruptions in a timely manner “to a large extent,” according to Deloitte’s 2023 Global CPO survey.

“Machine learning-based analytics can look for patterns much faster,” says Bhatnagar. “Once you have detected a pattern, you can take action.” By detecting patterns in procurement data that could indicate supply chain interruptions, looming price increases, or new cost drivers, procurement teams can proactively account for market changes. For example, a team might enable automatic reordering of an essential product that is likely to be impacted by a supply chain bottleneck.
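
To make the idea concrete, here is a minimal, illustrative sketch of the kind of pattern detection described above: it compares each item’s latest delivery lead time against that item’s own history and flags outliers that might warrant proactive reordering. The field names, thresholds, and reorder rule are hypothetical, not a description of any particular procurement platform.

```python
# A minimal, illustrative sketch of lead-time anomaly detection on order data.
# Field names, thresholds, and the reorder rule are hypothetical.
import pandas as pd

orders = pd.DataFrame({
    "item":       ["keyboard", "keyboard", "keyboard", "solvent", "solvent", "solvent"],
    "order_date": pd.to_datetime(["2024-01-05", "2024-02-04", "2024-03-06",
                                  "2024-01-10", "2024-02-09", "2024-03-22"]),
    "lead_time_days": [4, 5, 4, 7, 8, 19],   # days from order to delivery
})

# Baseline lead time per item from history, then flag the latest order if it
# took, say, 50% longer than the item's historical median.
baseline = orders.groupby("item")["lead_time_days"].median().rename("median_lead")
latest = orders.sort_values("order_date").groupby("item").tail(1).set_index("item")
report = latest.join(baseline)
report["delayed"] = report["lead_time_days"] > 1.5 * report["median_lead"]

# Items flagged as delayed could be candidates for proactive reordering or
# safety-stock increases before a bottleneck bites.
print(report[report["delayed"]][["order_date", "lead_time_days", "median_lead"]])
```

In practice a team would run this kind of check continuously over live order data and feed the flags into its reordering or alerting workflows.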

Sharing across the partner ecosystem

Data analysis allows procurement teams to “see some of the challenges and react to them in real-time,” says Bhatnagar. But in an era of interconnectedness, no one organization acts alone. Instead, today’s supplier ecosystems are deeply interconnected networks of supply-chain partners with complex interdependencies.

For this reason, sharing data-driven insights with suppliers helps organizations better pinpoint causes for delays or inaccurate orders and work collaboratively to overcome obstacles. Such “discipline and control” over data, says Bhatnagar, not only creates a single source of truth for all supply-chain partners, but helps eliminate finger-pointing while also empowering procurement teams to negotiate mutually beneficial terms with suppliers.

Improving employee productivity and satisfaction

Searching for savings opportunities, negotiating with suppliers, and responding to supply-chain disruptions—these time-consuming activities can negatively impact a procurement team’s productivity. However, by relying on analytics to discover and share meaningful patterns in data, procurement teams can shift focus from low-value tasks to business-critical decision-making.

Shifting procurement teams to higher-impact work results in a better overall employee experience. “Using analytics, employees feel more productive and know that they’re bringing more value to their job,” says Bhatnagar.

Another upside of heightening employee morale is improved talent retention. After all, workers with a sense of value and purpose are likelier to stay with an employer. This is a huge benefit in a time when nearly half (46%) of CPOs cite the loss of critical talent as a high or moderate risk, according to Deloitte’s 2023 Global CPO survey.

Meeting compliance metrics and organizational goals

Procurement analytics can also deliver on a broader commitment to changing how products and services are purchased.

According to a McKinsey Global Survey on environmental, social, and governance (ESG) issues, more than nine in ten organizations say ESG is on their agenda. Yet 40% of CPOs in the Deloitte survey report that their procurement organizations still need to define or measure their own set of relevant ESG factors.

Procurement tools can bridge this gap by allowing procurement teams to search for vendor or product certifications and generate credentials reports to help them shape their organization’s purchases toward financial, policy, or ESG goals. They can develop flexible yet robust spending approval workflows, designate restricted and out-of-policy purchases, and encourage the selection of sustainable products or preference for local or minority-owned suppliers.

“A credentials report can really allow organizations to improve their visibility into sustainability [initiatives] when they’re looking for seller credentials or compliant credentials,” says Bhatnagar. “They can track all of their spending from diverse sellers or small sellers—whatever their goals are for the organization.”
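
As a rough illustration of how such policies and credential preferences can be encoded, the sketch below routes a purchase request through a simple rule set: restricted categories are blocked, large orders go to an approver, and sellers with preferred credentials are fast-tracked. The policy names, thresholds, and categories are hypothetical.

```python
# An illustrative, rule-based sketch of a spending-approval policy of the kind
# described above. Policy names, thresholds, and categories are hypothetical.
from dataclasses import dataclass

@dataclass
class PurchaseRequest:
    category: str
    amount: float
    seller_credentials: set  # e.g. {"small-business", "sustainability-certified"}

POLICY = {
    "approval_threshold": 5_000.00,          # orders above this need sign-off
    "restricted_categories": {"gift cards"}, # always out of policy
    "preferred_credentials": {"sustainability-certified", "minority-owned"},
}

def route_request(req: PurchaseRequest) -> str:
    if req.category in POLICY["restricted_categories"]:
        return "blocked: out-of-policy category"
    if req.amount > POLICY["approval_threshold"]:
        return "route to approver"
    if POLICY["preferred_credentials"] & req.seller_credentials:
        return "auto-approve (preferred seller)"
    return "auto-approve"

print(route_request(PurchaseRequest("office supplies", 120.0, {"minority-owned"})))
```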

Delivering the procurement of tomorrow

Advanced analytics can free procurement teams to glean meaningful insights from their data—information that can drive tangible business results, including a more robust supplier ecosystem, improved employee productivity, and a greener planet.

As supply chains become increasingly complex and the ecosystem increasingly digital, data-driven procurement will become critical. In the face of growing economic instability, talent shortages, and technological disruption, advanced analytics capabilities will enable the next generation of procurement.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Learn how Amazon Business is leveraging AI/ML to offer procurement professionals more efficient processes, a greater understanding of smart business buying habits and, ultimately, reduced prices.

Start with data to build a better supply chain

In business, the acceleration of change means enterprises have to live in the future, not the present. Having the tools and technologies to enable forward-thinking and underpin digital transformation is key to survival. Supply chain procurement leaders are tasked with improving operational efficiencies and keeping an eye on the bottom line. For Raimundo Martinez, global digital solutions manager of procurement and supply chain at bp, the journey toward building a better supply chain starts with data.

“So, today, everybody talks about AI, ML, and all these tools,” says Martinez. “But to be honest with you, I think your journey really starts a little bit earlier. I think when we go out and think about this advanced technology, which obviously has its place, I think in the beginning, what you really need to focus on is your foundational [layer], and that is your data.”

In that vein, all of bp’s data has been migrated to the cloud and its multiple procurement departments have been consolidated into a single global procurement organization. Having a centralized, single data source can reduce complexities and avoid data discrepancies. The biggest challenge to changes like data centralization and procurement reorganization is not technical, Martinez says, but human. Bringing another tool or new process into the fold can cause some to push back. Making sure that employees understand the value of these changes and the solutions they can offer is imperative for business leaders.

Supply chain visibility—where an enterprise keeps track of its logistics, inventory, and processes—can be a costly investment. But for a digital transformation journey of bp’s scale, it is also an investment in customer trust and business reputability, one that pays off when paired with honesty toward both employees and end users.

“They feel part of it. They’re more willing to give you feedback. They’re also willing to give you a little bit more leeway. If you say that the tool is going to be, or some feature is going to be delayed a month, for example, but you don’t give the reasons and they don’t have that transparency and visibility into what is driving that delay, people just lose faith in your tool,” says Martinez.

Looking to the future, Martinez stresses the importance of a strong data foundation as a precursor to taking advantage of emerging technologies like AI and machine learning that can work to create a semi-autonomous supply chain.

“Moving a supply chain from a transactional item to a much more strategic item with the leverage of this technology, I think, that, to me, is the ultimate vision for the supply chain,” says Martinez.

This episode of Business Lab is produced in partnership with Infosys Cobalt.

Full Transcript

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma. And this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

Our topic is building a better supply chain. AI can bring efficiencies to many aspects of an enterprise, including the supply chain. And where better to start than internal procurement processes? With better data, better decisions can be made more quickly, both internally and by customers and partners. And that is better for everyone.

Two words for you: automating transformation.

My guest is Raimundo Martinez, who is the global digital solutions manager of procurement and supply chain at bp.

This episode of Business Lab is produced in partnership with Infosys Cobalt.

Welcome, Raimundo.

Raimundo Martinez: Hi, Laurel. Thanks for having me today.

Laurel: So, let’s start with providing some context to our conversation. bp has been on a digital transformation journey. What spurred it, and how is it going?

Raimundo: I think there are many factors spurring digital transformation. But if I look at all of this, I think probably the key one is the rate of change in the world today compared with the past. Instead of slowing down, the rate of change is accelerating, and that makes quick access to data a matter of business survivability—to live almost not in today, but in the future, and to have tools and technologies that allow you to see what is coming up, what routes of action you can take, and then to enact those mitigation plans faster.

And I think that’s where digital transformation is the key enabler. And I would say that’s on the business side. I think the other one is the people mindset change, and that ties into how things are going. I think things are going pretty well. Technology-wise, I’ve seen a large number of tools and technologies adopted. But I think probably the most important thing is this mindset in the workforce and the adoption of agile. The rate of change that we just talked about in the first part can probably only be attained when the whole workforce has this agile mindset to react to it.

Laurel: Well, supply chain procurement leaders are under pressure to improve operational efficiencies while keeping a careful eye on the bottom line. What is bp’s procurement control tower, and how has it helped with bp’s digital transformation?

Raimundo: Yeah, sure. In a nutshell, think about all these myriad systems of record where you have your data, and users having to go to all of those. So, our control tower, what it does, is consolidate all the data in a single platform. And what we have done is not just present the data, but truly configure the data in the form of alerts. And the idea is to tell our user, “This is what’s important. These are the three things that you really need to take care of now.” And not stopping there, but then saying, “Look, in order to take that action, we’re giving you summary information so you don’t have to go to any other system to actually understand what is driving that alert.” But then on top of that, we’re integrating that platform with the systems of record so that requests can be completed in seconds instead of weeks.

So, that, in a nutshell, is the control tower platform. And the way it has helped… Again, we talk about tools and people. So, on the tool side, being able to demonstrate how this automation is done and the value of it, and enabling other units to actually recycle the work that you have done, accelerates and inspires other technical resources to take advantage of that. And then on the user side, one of the effects is, again, this idea of the agility mindset—everything that we’ve done in the tool development is agile. So, bringing the users into that journey has actually helped us to also accelerate that aspect of our digital transformation.
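
To picture the control-tower pattern Martinez describes—consolidating records from several systems into a short, prioritized list of alerts with enough context to act on—here is a minimal sketch. The source systems, fields, and scoring rule are hypothetical; this is not bp’s actual platform or data.

```python
# Illustrative sketch of a procurement "control tower": pull records from
# several stand-in source systems, turn them into alerts with summary context,
# and surface only the top few by priority. All names and rules are hypothetical.
from datetime import date

erp_orders = [{"id": "PO-9", "item": "valves", "days_late": 12, "value_usd": 250_000}]
contract_system = [{"id": "CT-4", "supplier": "Acme", "expires": date(2024, 4, 1)}]

def build_alerts(today: date) -> list[dict]:
    alerts = []
    for po in erp_orders:
        if po["days_late"] > 0:
            alerts.append({"source": "ERP", "ref": po["id"],
                           "summary": f"{po['item']} {po['days_late']} days late",
                           "priority": po["days_late"] * po["value_usd"]})
    for ct in contract_system:
        days_to_expiry = (ct["expires"] - today).days
        if days_to_expiry < 60:
            alerts.append({"source": "Contracts", "ref": ct["id"],
                           "summary": f"{ct['supplier']} contract expires in {days_to_expiry} days",
                           "priority": 1_000_000 // max(days_to_expiry, 1)})
    # Surface only the few things the user really needs to act on now.
    return sorted(alerts, key=lambda a: a["priority"], reverse=True)[:3]

for alert in build_alerts(date(2024, 2, 15)):
    print(alert["source"], alert["ref"], "-", alert["summary"])
```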

Laurel: On that topic of workplace agility. In 2020, bp began a reorganization that consolidated its procurement departments into that single global procurement organization. What were the challenges that resulted from this reorganization?

Raimundo: Yeah. To give you more context on that: bp is this really large global organization divided into business units, and before the reorganization, every one of these business units had its own procurement department, which handled literally billions of dollars—that’s how big they were. And in that business, they had their ERP systems, their contract repository, their processes and process deviations. But you only managed the portfolio for that business. Once you integrate all of those organizations into a single one, your responsibility now spans multiple business units, with your data in all of these business systems.

So, if you want to create a report, then it’s really complicated because you have to not only go to these different systems, but the taxonomy of the data is different. As an example, one business will call its territory North America, the other one will call it east and west coast. So, if you want a report for a new business owner, it becomes really, really hard, and the reports might not be as complete as they should be. So, that really calls for some tools that we need to put in place to support that. And on top of that, the volume of requests now is so much greater that just changing and adding steps to the process isn’t going to be enough. You really need to look into automation to satisfy this higher demand.

Laurel: Well, speaking of automation, it can leverage existing technology and build efficiencies. So, what is the role of advanced technologies, like AI, machine learning and advanced analytics in the approach to your ongoing transformation?

Raimundo: So, today, everybody talks about AI, ML, and all these tools. But to be honest with you, I think your journey really starts a little bit earlier. When we go out and think about this advanced technology, which obviously has its place, I think in the beginning, what you really need to focus on is your foundational layer, and that is your data. So, you ask about the role of the cloud. For bp, all of the data used to reside in multiple different sites out there. So, what we have done is migrate all of that data to the cloud. And what the cloud also allows is to do transformations in place that help us really homogenize what I just described before—North America, South America—then you can create another column and say, okay, now call it, whatever, United States, or however you want to call it.

So, all of this data transformation happens in a single spot. And what that does is also allow our users that need this data to go to a single source of truth and not be pulling data from multiple systems. An example of the chaos that that creates is somebody will be pulling invoice and spend data, somebody will be pulling pay data. So, then you already have data discrepancies in the reporting. And having a centralized tool where everybody goes for the data reduces so much complexity in the system.
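
As a concrete illustration of the in-place harmonization Martinez describes—mapping each business unit’s own regional labels onto one shared column that every report uses—here is a minimal sketch. The business units, labels, and mapping are hypothetical, not bp’s actual taxonomy.

```python
# Illustrative sketch of harmonizing business-unit taxonomies in a central
# data store. Labels and the mapping are hypothetical.
import pandas as pd

contracts = pd.DataFrame({
    "business_unit": ["BU-A", "BU-A", "BU-B", "BU-B"],
    "territory":     ["North America", "South America", "East Coast", "West Coast"],
    "spend_usd":     [1_200_000, 800_000, 450_000, 600_000],
})

# One shared mapping, applied in place, so every report reads the same column.
TERRITORY_MAP = {
    "North America": "US & Canada",
    "East Coast":    "US & Canada",
    "West Coast":    "US & Canada",
    "South America": "Latin America",
}
contracts["region_standard"] = contracts["territory"].map(TERRITORY_MAP)

# Downstream reports group on the harmonized column, not each unit's own labels.
print(contracts.groupby("region_standard")["spend_usd"].sum())
```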

Laurel: And speaking about that kind of complexity, it’s clear that multiple procurement systems made it difficult to maintain quality compliance, as well as production tracking, in bp’s supply chain. So, what are some of the most challenging aspects of realizing this new vision with a centralized, one-stop platform?

Raimundo: Yeah, we have a good list in there. So, let me break it into maybe technical and people, because I think people is something that we should talk about. So, technical. I think one of the biggest things on the technical side is working with your technical team to find the right architecture—how your vision fits into the architecture in a way that creates less, let’s say, confusion and complexity. And the other side of the technical challenge is finding the right tools. I’ll give you an example from our project. Initially, I thought, okay, RPA [robotic process automation] will be the technology to do this. So, we ran an RPA pilot. And obviously, RPA has incredible applications out there. But at this point, RPA really wasn’t the tool for us given the changes that could happen on the screens of the system that we’re using. So, then we decided instead of going with RPA, going with APIs.

So, that’s an example of the challenge of finding exactly the right tool. But to be honest with you, I think the biggest challenge is not technical, but human. Like I mentioned before, people are immersed in the sea of change that is going on, and here you come with yet another tool. So, even if the tool you’re giving them might be a lot more efficient, people still want to cling to what they know. They say, “Look, if I have to spend another two hours extracting data, putting it in Excel, collating and running a report…” Some people may rather do that than go to a platform where all of that is done for them. So, I think change management is key in these transformations—making sure that you’re able to sell, or make people understand, what the value of the tool is, and overcome that challenge, which is humans’ normal aversion to change. And especially when you’re immersed in this really, really big sea of change that was already going on as a result of the reorganization.

Laurel: Yeah. People are hard, and tech can be easy. So, just to clarify, RPA is the robotic process automation in this context, correct?

Raimundo: Yeah, absolutely. Yeah. Sorry about the pretty layered… Yeah.

Laurel: No, no. There’s lots of acronyms going around.

So, conversely—we were just discussing the challenges—what are the positive outcomes from making this transformation? And could you give us an example or a use case of how that updated platform boosted efficiency across existing processes?

Raimundo: Absolutely. Just a quick generic one first. The generic thing is you find yourself a lot in this cycle with the data. The users look at the applications and say that data’s not correct, and they lose the appetite for using them. The problem is they own the data, but the process to change the data is so cumbersome that people don’t really want to take ownership of it because they say, “Look, I have 20 things to do. The last thing on my list is updating that data.”

So, we’re in this cycle of trying to put tools out for the user, the data is not correct, but we’re not the ones who own the data. The specific example of how we broke that cycle is using automation. To give you an example, before we created the automation, if you needed to change any contract data, you had to find what the contract is, then you had to go to a tool like Salesforce and create a case. That case goes to our global business support team, and then they have to read the case, open the system of record, make the change. And that could take between days or weeks. Meanwhile, the user is like, “Well, I requested this thing, and it hasn’t even happened.”

So, what we did is leverage internal technology. We already had a large investment in Microsoft, as you can imagine. And we said, look, “From Power BI, you can look at your contract, you can click on the record you want to change. Power App comes up and asks what you want to do.” Say I want to change the contract owner, for example. It opens a window and asks, “Who’s the new person you want to put in?” And as soon as you submit it, literally within less than a second, the API goes to the system of record, changes the owner, creates an email that notifies everybody who is a stakeholder in that contract, which then increases visibility to changes across the organization.

And at the same time, it leaves you an audit trail. So, if somebody wants to challenge that, you know exactly what happened. That has been an incredible outcome—reducing cycle time from days and weeks to merely seconds while increasing communication and visibility into the data. It has proved to be one of the greatest achievements that we have.
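
A generic sketch of that click-to-change flow—update the system of record, keep an audit trail, notify stakeholders—might look like the following. The record store and helper names are hypothetical stand-ins; this is not bp’s or Microsoft’s actual implementation.

```python
# A generic sketch of an automated contract-owner change: update the system of
# record, keep an audit trail, and notify stakeholders. All names are
# hypothetical stand-ins for illustration only.
import datetime

SYSTEM_OF_RECORD = {"C-1001": {"owner": "alice"}}   # stand-in for the ERP/contract system
AUDIT_LOG = []

def notify_stakeholders(contract_id: str, message: str) -> None:
    print(f"[notify] {contract_id}: {message}")      # stand-in for the email step

def change_contract_owner(contract_id: str, new_owner: str, requested_by: str) -> None:
    previous = SYSTEM_OF_RECORD[contract_id]["owner"]

    # 1. Update the system of record (in production this would be an API call).
    SYSTEM_OF_RECORD[contract_id]["owner"] = new_owner

    # 2. Keep an audit trail so any change can be reviewed or challenged later.
    AUDIT_LOG.append({
        "contract": contract_id, "from": previous, "to": new_owner,
        "requested_by": requested_by,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })

    # 3. Notify every stakeholder on the contract, increasing visibility.
    notify_stakeholders(contract_id, f"Owner changed from {previous} to {new_owner}")

change_contract_owner("C-1001", "bruno", requested_by="contract-manager")
```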

Laurel: Well, I think you’ve really outlined this challenge. So, investing in supply chain visibility can be costly, but often bolsters trust and reputability among customers. What’s the role of transparency and visibility in a digital transformation journey of this size?

Raimundo: I keep talking about agile, and I think that’s one of the tenets. And to transparency and visibility, I would actually add honesty. I think it’s very, very easy to learn from success. Everybody wants to tout the great things that they have done, but people may be a little bit less inclined to speak out about their mistakes. I’ll just give you an example of our situation with RPA. We don’t feel bad about it. We feel that the more we share that knowledge with the technical teams, the much more value it has, because then people will learn from that and not commit the same mistake, obviously.

But I think what honesty also does for this visibility is that when you bring your users into the development team, they have that visibility. They feel part of it. They’re more willing to give you feedback. They’re also willing to give you a little bit more leeway. If you say that the tool is going to be, or some feature is going to be delayed a month, for example, but you don’t give the reasons and they don’t have that transparency and visibility into what is driving that delay, people just lose faith in your tool.

I think the more open and the more visible you are—but also, again, honest—the more you have a product that is so much better received, and everybody feels part of the tool. It’s something that in every training, at the end of the training, I just say, “By the way, this is not my tool. This is your tool. And the more engaged you are with us, the better outcome you’re going to have.” And that’s just achieved through transparency and visibility.

Laurel: So, for other large organizations looking to create a centralized platform to improve supply chain visibility, what are some of the key best practices that you’ve found that leadership can adopt to achieve the smoothest transition?

Raimundo: So, I would think about three things. The first thing leadership needs to really, really do is understand the project. And when I say understand the project, that means really understanding the technical complexity and the human aspect of it, because I think that’s where leadership has a big role to play. They’re able to influence their teams on this project that you’re trying to… And then they really need to understand what the risks associated with this project are, and also that this could be a very lengthy journey. Hopefully, obviously, there’ll be results and milestones along the way, but they need to feel comfortable with this agile mentality that we’re going to do features, fail, adapt, and they really need to be part of that journey.

The second, and I think most important, thing is having the right team. And in that, I think I’ve been super fortunate. We have a great partnership with Infosys. I’ve got one of the engineers named Sai. What the Infosys team and my technical team say is, “Look, do not shortchange yourself on the ideas that you bring from the business side.” A lot of times, we might think about something as impossible. They really encourage me to come up with almost crazy ideas—just come with everything that you can think about. And they’re really, really incredible at delivering all the resources to bring a solution to that. We almost end up using each other’s phrases. So, having a team that is really passionate about change, about being honest, about working together is the key to delivery. And finally, data foundation. I think that we get so stuck looking at the shiny tools out there that seem like science fiction—and they’ll be great—that we forget that the outcome of those technologies is only as good as the data that is supporting them.

And data, a lot of times, is seen as the—I don’t know, I don’t want to call it the ugly sister—the ugly person in the room. People are like, “Oh, I don’t want to deal with that. I just want to do AI.” Well, your AI is not going to give you what you want if it doesn’t understand where you’re at. So, data foundation is key. Having the perfect team and technology partners, understanding the project length and the risk, and being really engaged would be, for me, the key items there.

Laurel: That’s certainly helpful. So, looking ahead, what technologies or trends do you foresee will enable greater efficiencies across supply chain and operations?

Raimundo: Not to sound like a broken record, but I really think the key is technologies that look at our data and help us clean the data, foresee what issues we’re going to have with the data, and help us build a data set that is really, really powerful, that is easy to use, and that reflects exactly our situation. That is the foundation for the next step, which is all of these amazing technologies. If I think about our vision for the platform, it is to create a semi-autonomous supply chain. And the vision is, imagine having, again, first, the right data, and now what you have is AI/ML and all these models that look at that internal data and compare that with external factors.

And what it does is, instead of presenting us alerts, go to the next level and, basically, present scenarios. And say, “Look, based on the data that I see on the market, what you have had in your history, these are the three things that can happen, these are the plans that the tool recommends, and this is how you interact or affect that change.” So, moving a supply chain from a transactional item to a much more strategic item with the leverage of this technology, I think, that, to me, is the ultimate vision for the supply chain.

Laurel: Well, Raimundo, thank you so much for joining us today on the Business Lab. This has been very enlightening.

Raimundo: Thank you. It’s been a pleasure. And I wish everybody a great journey out there. It’s definitely a very exciting moment right now.

Laurel: Thank you.

That was Raimundo Martinez, global digital solutions manager of procurement and supply chain at bp, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review.

That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Giro Studios. Thanks for listening.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Outperforming competitors as a data-driven organization

In 2006, British mathematician Clive Humby said, “data is the new oil.” While the phrase is almost a cliché, the advent of generative AI is breathing new life into this idea. A global study on the Future of Enterprise Data & AI by WNS Triange and Corinium Intelligence shows 76% of C-suite leaders and decision-makers are planning or implementing generative AI projects. 

Harnessing the potential of data through AI is essential in today’s business environment. A McKinsey report says data-driven organizations demonstrate EBITDA increases of up to 25%. AI-driven data strategy can boost growth and realize untapped potential by increasing alignment with business objectives, breaking down silos, prioritizing data governance, democratizing data, and incorporating domain expertise.

“Companies need to have the necessary data foundations, data ecosystems, and data culture to embrace an AI-driven operating model,” says Akhilesh Ayer, executive vice president and global head of AI, analytics, data, and research practice at WNS Triange, a unit of business process management company WNS Global Services.

A unified data ecosystem

Embracing an AI-driven operating model requires companies to make data the foundation of their business. Business leaders need to ensure “every decision-making process is data-driven, so that individual judgment-based decisions are minimized,” says Ayer. This makes real-time data collection essential. “For example, if I’m doing fraud analytics for a bank, I need real-time data of a transaction,” explains Ayer. “Therefore, the technology team will have to enable real-time data collection for that to happen.” 

Real-time data is just one element of a unified data ecosystem. Ayer says an all-round approach is necessary. Companies need clear direction from senior management; well-defined control of data assets; cultural and behavioral changes; and the ability to identify the right business use cases and assess the impact they’ll create. 

Aligning business goals with data initiatives  

An AI-driven data strategy will only boost competitiveness if it underpins primary business goals. Ayer says companies must determine their business goals before deciding what to do with data. 

One way to start, Ayer explains, is a data-and-AI maturity audit or a planning exercise to determine whether an enterprise needs a data product roadmap. This can determine if a business needs to “re-architect the way data is organized or implement a data modernization initiative,” he says. 

The demand for personalization, convenience, and ease in the customer experience is a central and differentiating factor. How businesses use customer data is particularly important for maintaining a competitive advantage, and can fundamentally transform business operations. 

Ayer cites WNS Triange’s work with a retail client as an example of how evolving customer expectations drive businesses to make better use of data. The retailer wanted greater value from multiple data assets to improve customer experience. In a data triangulation exercise while modernizing the company’s data with cloud and AI, WNS Triange created a unified data store with personalization models to increase return on investment and reduce marketing spend. “Greater internal alignment of data is just one way companies can directly benefit and offer an improved customer experience,” says Ayer. 

Weeding out silos 

Regardless of an organization’s data ambitions, few manage to thrive without clear and effective communication. Modern data practices have process flows or application programming interfaces that enable reliable, consistent communication between departments to ensure secure and seamless data-sharing, says Ayer. 

This is essential to breaking down silos and maintaining buy-in. “When companies encourage business units to adopt better data practices through greater collaboration with other departments and data ecosystems, every decision-making process becomes automatically data-driven,” explains Ayer.  

WNS Triange helped a well-established insurer remove departmental silos and establish better communication channels. Silos were entrenched. The company had multiple business lines in different locations and legacy data ecosystems. WNS Triange brought them together and secured buy-in for a common data ecosystem. “The silos are gone and there’s the ability to cross leverage,” says Ayer. “As a group, they decide what prioritization they should take; which data program they need to pick first; and which businesses should be automated and modernized.”

Data ownership beyond IT

Removing silos is not always straightforward. In many organizations, data sits in different departments. To improve decision-making, Ayer says, businesses can unite underlying data from various departments and broaden data ownership. One way to do this is to integrate the underlying data and treat this data as a product. 

While IT can lay out the system architecture and design, primary data ownership shifts to business users. They understand what data is needed and how to use it, says Ayer. “This means you give the ownership and power of insight-generation to the users,” he says. 

This data democratization enables employees to adopt data processes and workflows that cultivate a healthy data culture. Ayer says companies are investing in training in this area. “We’ve even helped a few companies design the necessary training programs that they need to invest in,” he says.

Tools for data decentralization

Data mesh and data fabric, powered by AI, empower businesses to decentralize data ownership, nurture the data-as-a-product concept, and create a more agile business. 

For organizations adopting a data fabric model, it’s crucial to include a data ingestion framework to manage new data sources. “Dynamic data integration must be enabled because it’s new data with a new set of variables,” says Ayer. “How it integrates with an existing data lake or warehouse is something that companies should consider.” 
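
A toy sketch of that ingestion idea follows: a new source arrives with a variable the store has never seen, its schema is registered so governance can see what changed, and the data is integrated without breaking existing records. The source names, fields, and registry are hypothetical.

```python
# Illustrative sketch of a data ingestion framework for a data fabric: register
# each new source's schema, surface new variables, and integrate the data.
# Source names, fields, and the registry are hypothetical.
import pandas as pd

lake = pd.DataFrame({"order_id": [1, 2], "country": ["US", "DE"], "amount": [420.0, 310.0]})
SCHEMA_REGISTRY = {"orders_v1": set(lake.columns)}

def ingest(source_name: str, new_data: pd.DataFrame, lake: pd.DataFrame) -> pd.DataFrame:
    # Register the incoming schema so governance can see exactly what changed.
    SCHEMA_REGISTRY[source_name] = set(new_data.columns)
    new_columns = SCHEMA_REGISTRY[source_name] - set(lake.columns)
    if new_columns:
        print(f"[registry] {source_name} introduces new variables: {sorted(new_columns)}")
    # Integrate on the union of columns; older rows get nulls for the new fields.
    return pd.concat([lake, new_data], ignore_index=True, sort=False)

# A newly onboarded business unit starts sending a channel field the lake has never seen.
new_source = pd.DataFrame({"order_id": [3], "country": ["SG"], "amount": [95.0],
                           "channel": ["mobile"]})
lake = ingest("orders_v2", new_source, lake)
print(lake)
```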

Ayer cites WNS Triange’s collaboration with a travel client as an example of improving data control. The client had various business lines in different countries, meaning controlling data centrally was difficult and ineffective. WNS Triange deployed a data mesh and data fabric ecosystem that allowed for federated governance controls. This boosted data integration and automation, enabling the organization to become more data-centric and efficient. 

A governance structure for all

“Governance controls can be federated, which means that while central IT designs the overall governance protocols, you hand over some of the governance controls to different business units, such as data-sharing, security, and privacy, making data deployment more seamless and effective,” says Ayer. 

AI-powered data workflow automation can add precision and improve downstream analytics. For example, Ayer says, in screening insurance claims for fraud, when an insurer’s data ecosystem and workflows are fully automated, instantaneous AI-driven fraud assessments are possible. 

“The ability to process a fresh claim, bring it into a central data ecosystem, match the policyholder’s information with the claim’s data, and make sure that the claim-related information passes through a model to give a recommendation, and then push back that recommendation into the company’s workflow is the phenomenal experience of improving downstream analytics,” Ayer says. 
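
The workflow Ayer sketches—ingest a fresh claim, match it to policyholder data, score it with a model, and push the recommendation back into the workflow—can be pictured roughly like this. The data, the rule-based stand-in for a trained model, and the queue are hypothetical, not WNS Triange’s system.

```python
# A schematic sketch of an automated claims-fraud workflow: match a fresh claim
# to policyholder data, score it, and push a recommendation into the workflow.
# The data, the crude heuristic "model," and the queue are hypothetical.
POLICYHOLDERS = {"P-77": {"tenure_years": 6, "prior_claims": 0}}
WORKFLOW_QUEUE = []

def fraud_score(claim: dict, holder: dict) -> float:
    # Stand-in for a trained model: a simple heuristic for illustration only.
    score = 0.1
    if claim["amount"] > 20_000:
        score += 0.4
    if holder["prior_claims"] > 2:
        score += 0.3
    if holder["tenure_years"] < 1:
        score += 0.2
    return min(score, 1.0)

def process_claim(claim: dict) -> None:
    holder = POLICYHOLDERS[claim["policyholder_id"]]     # match claim to policy data
    score = fraud_score(claim, holder)                   # near-instant assessment
    recommendation = "investigate" if score >= 0.5 else "fast-track payment"
    WORKFLOW_QUEUE.append({**claim, "fraud_score": score,
                           "recommendation": recommendation})

process_claim({"claim_id": "C-123", "policyholder_id": "P-77", "amount": 32_000})
print(WORKFLOW_QUEUE)
```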

Data-driven organizations of the future

A well-crafted data strategy aligned with clear business objectives can seamlessly integrate AI tools and technologies into organizational infrastructure. This helps ensure competitive advantage in the digital age. 

To benefit from any data strategy, organizations must continuously overcome barriers such as legacy data platforms, slow adoption, and cultural resistance. “It’s extremely critical that employees embrace it for the betterment of themselves, customers, and other stakeholders,” Ayer points out. “Organizations can stay data-driven by aligning data strategy with business goals, ensuring stakeholders’ buy-in and employees’ empowerment for smoother adoption, and using the right technologies and frameworks.” 

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Deploying high-performance, energy-efficient AI

Although AI is by no means a new technology, there have been massive and rapid investments in it and in large language models. However, the high-performance computing that powers these rapidly growing AI tools—and enables record automation and operational efficiency—also consumes a staggering amount of energy. With the proliferation of AI comes the responsibility to deploy that AI responsibly and with an eye to sustainability during hardware and software R&D, as well as within data centers.

“Enterprises need to be very aware of the energy consumption of their digital technologies, how big it is, and how their decisions are affecting it,” says Zane Ball, corporate vice president and general manager of data center platform engineering and architecture at Intel.

One of the key drivers of more sustainable AI is modularity, says Ball. Modularity breaks down subsystems of a server into standard building blocks, defining interfaces between those blocks so they can work together. This approach can reduce the amount of embodied carbon in a server’s hardware components and allows components of the overall ecosystem to be reused, which in turn reduces R&D investment.

Downsizing infrastructure within data centers, hardware, and software can also help enterprises reach greater energy efficiency without compromising function or performance. While very large AI models require megawatts of super compute power, smaller, fine-tuned models that operate within a specific knowledge domain can maintain high performance with far lower energy consumption.

“You give up that kind of amazing general purpose use like when you’re using ChatGPT-4 and you can ask it everything from 17th century Italian poetry to quantum mechanics, if you narrow your range, these smaller models can give you equivalent or better kind of capability, but at a tiny fraction of the energy consumption,” says Ball.
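
As a rough illustration of the “smaller, domain-tuned model” approach Ball describes, the sketch below fine-tunes a compact open model on a handful of domain-specific examples and can run on commodity hardware. It assumes the Hugging Face transformers and datasets libraries are installed; the tiny public model (distilgpt2), toy corpus, and hyperparameters are stand-ins for illustration, not Intel’s or Meta’s recipe.

```python
# A minimal sketch of fine-tuning a small open model on narrow, domain-specific
# text. Assumes the Hugging Face "transformers" and "datasets" libraries; the
# model, toy corpus, and hyperparameters are illustrative only.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "distilgpt2"                      # small public stand-in for a Llama-class model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token      # distilgpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# A toy "domain" corpus; in practice this would be the enterprise's own data.
texts = ["Q: What is our refund window? A: 30 days from delivery.",
         "Q: How do I escalate a delayed order? A: Open a priority ticket."]
dataset = Dataset.from_dict({"text": texts}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model", num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()   # on a corpus this small, this runs quickly, with or without a GPU
```

A model tuned this way gives up ChatGPT-style breadth, but for a narrow task it can serve answers from general-purpose servers at a fraction of the energy cost.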

The opportunities for greater energy efficiency within AI deployment will only expand over the next three to five years. Ball forecasts significant hardware optimization strides, the rise of AI factories — facilities that train AI models on a large scale while modulating energy consumption based on its availability — as well as the continued growth of liquid cooling, driven by the need to cool the next generation of powerful AI innovations.

“I think making those solutions available to our customers is starting to open people’s eyes how energy efficient you can be while not really giving up a whole lot in terms of the AI use case that you’re looking for.”

This episode of Business Lab is produced in partnership with Intel.

Full Transcript

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

Our topic is building a better AI architecture. Going green isn’t for the faint of heart, but it’s also a pressing need for many, if not all enterprises. AI provides many opportunities for enterprises to make better decisions, so how can it also help them be greener?

Two words for you: sustainable AI.

My guest is Zane Ball, corporate vice president and general manager of data center platform engineering and architecture at Intel.

This podcast is produced in partnership with Intel.

Welcome Zane.

Zane Ball: Good morning.

Laurel: So to set the stage for our conversation, let’s start off with the big topic. As AI transforms businesses across industries, it brings the benefits of automation and operational efficiency, but that high-performance computing also consumes more energy. Could you give an overview of the current state of AI infrastructure and sustainability at the large enterprise level?

Zane: Absolutely. I think it helps to just kind of really zoom out to the big picture. If you look at the history of IT services over maybe the last 15 years or so, obviously computing has been expanding at a very fast pace. And the good news about that history of the last 15 years or so is that while computing has been expanding fast, we’ve been able to contain the growth in energy consumption overall. There was a great study a couple of years ago in Science magazine that talked about how compute had grown by maybe 550% over a decade, but that we had just increased electricity consumption by a few percent. So those kinds of efficiency gains were really profound. So I think the way to kind of think about it is computing’s been expanding rapidly, and that of course creates all kinds of benefits in society, many of which reduce carbon emissions elsewhere.

But we’ve been able to do that without growing electricity consumption all that much. And that’s kind of been possible because of things like Moore’s Law: silicon has been improving every couple of years, making devices smaller; they consume less power, things get more efficient. That’s part of the story. Another big part of this story is the advent of these hyperscale data centers. So really, really large-scale computing facilities, finding all kinds of economies of scale and efficiencies, high utilization of hardware, not a lot of idle hardware sitting around. That also was a very meaningful energy efficiency gain. And then finally this development of virtualization, which allowed even more efficient utilization of hardware. So those three things together allowed us to kind of accomplish something really remarkable. And during that time, AI also started to play a role—I think since about 2015, AI workloads started to play a pretty significant role in digital services of all kinds.

But then just about a year ago, ChatGPT happens and we have a non-linear shift in the environment, and suddenly large language models—probably not news to anyone listening to this podcast—have pivoted to the center, and there’s just breakneck investment across the industry to build very, very fast. And what is also driving that is that not only is everyone rushing to take advantage of this amazing large language model kind of technology, but that technology itself is evolving very quickly. And, as is also quite well known, these models are growing in size at a rate of about 10x per year. So the amount of compute required is really sort of staggering. And when you think of all the digital services in the world now being infused with AI use cases with very large models, and then those models themselves growing 10x per year, we’re looking at something that’s not very similar to that last decade, where our efficiency gains and our greater consumption were almost penciling out.

Now we’re looking at something I think that’s not going to pencil out. And we’re really facing a really significant growth in energy consumption in these digital services. And I think that’s concerning. And I think that means that we’ve got to take some strong actions across the industry to get on top of this. And I think just the very availability of electricity at this scale is going to be a key driver. But of course many companies have net-zero goals. And I think as we pivot into some of these AI use cases, we’ve got work to do to square all of that together.

Laurel: Yeah, as you mentioned, the challenges are trying to develop sustainable AI and making data centers more energy efficient. So could you describe what modularity is and how a modularity ecosystem can power a more sustainable AI?

Zane: Yes, I think over the last three or four years, there’ve been a number of initiatives—Intel’s played a big part in this as well—re-imagining how servers are engineered into modular components. And really, modularity for servers is just exactly as it sounds. We break different subsystems of the server down into some standard building blocks and define some interfaces between those standard building blocks so that they can work together. And that has a number of advantages. Number one, from a sustainability point of view, it lowers the embodied carbon of those hardware components. Some of these hardware components are quite complex and very energy intensive to manufacture. A 30-layer circuit board, for example, is a pretty carbon-intensive piece of hardware. I don’t want that for the entire system if only a small part of it needs that kind of complexity. I can just pay the price of the complexity where I need it.

And by being intelligent about how we break up the design in different pieces, we bring that embodied carbon footprint down. The reuse of pieces also becomes possible. So when we upgrade a system, maybe to a new telemetry approach or a new security technology, there’s just a small circuit board that has to be replaced versus replacing the whole system. Or maybe a new microprocessor comes out and the processor module can be replaced without investing in new power supplies, new chassis, new everything. And so that circularity and reuse becomes a significant opportunity. And so that embodied carbon aspect, which is about 10% of carbon footprint in these data centers can be significantly improved. And another benefit of the modularity, aside from the sustainability, is it just brings R&D investment down. So if I’m going to develop a hundred different kinds of servers, if I can build those servers based on the very same building blocks just configured differently, I’m going to have to invest less money, less time. And that is a real driver of the move towards modularity as well.

Laurel: So what are some of those techniques and technologies like liquid cooling and ultrahigh dense compute that large enterprises can use to compute more efficiently? And what are their effects on water consumption, energy use, and overall performance as you were outlining earlier as well?

Zane: Yeah, those are two, I think, very important opportunities. And let’s just take them one at a time. In the emerging AI world, I think liquid cooling is probably one of the most important low-hanging fruit opportunities. So in an air-cooled data center, a tremendous amount of energy goes into fans and chillers and evaporative cooling systems. And that is actually a significant part of the total. So if you move a data center to a fully liquid-cooled solution, this is an opportunity to save around 30% of energy consumption, which is sort of a wow number. I think people are often surprised just how much energy is burned. And if you walk into a data center, you almost need ear protection because it’s so loud, and the hotter the components get, the higher the fan speeds get, and the more energy is being burned on the cooling side—and liquid cooling takes a lot of that off the table.

What offsets that is that liquid cooling is a bit complex. Not everyone is fully able to utilize it. There are more upfront costs, but actually it saves money in the long run. So the total cost of ownership with liquid cooling is very favorable, and as we’re engineering new data centers from the ground up, liquid cooling is a really exciting opportunity. I think the faster we can move to liquid cooling, the more energy we can save. But it’s a complicated world out there. There’s a lot of different situations, a lot of different infrastructures to design around. So we shouldn’t trivialize how hard that is for an individual enterprise. One of the other benefits of liquid cooling is we get out of the business of evaporating water for cooling. A lot of North America data centers are in arid regions and use large quantities of water for evaporative cooling.

That is good from an energy consumption point of view, but the water consumption can be really extraordinary. I’ve seen numbers getting close to a trillion gallons of water per year in North America data centers alone. And then in humid climates like in Southeast Asia or eastern China for example, that evaporative cooling capability is not as effective and so much more energy is burned. And so if you really want to get to really aggressive energy efficiency numbers, you just can’t do it with evaporative cooling in those humid climates. And so those geographies are kind of the tip of the spear for moving into liquid cooling.

The other opportunity you mentioned was density, and bringing higher and higher density of computing has been the trend for decades. That is effectively how Moore’s Law has been pushing us forward. And I think it’s just important to realize that’s not done yet. As much as we think about racks of GPUs and accelerators, we can still significantly improve energy consumption with higher and higher density traditional servers that allow us to pack what might’ve been a whole row of racks into a single rack of computing in the future. And those are substantial savings. At Intel, we’ve announced an upcoming processor that has 288 CPU cores, and 288 cores in a single package enables us to build racks with as many as 11,000 CPU cores. So the energy savings there is substantial, not just because those chips are very, very efficient, but because the amount of networking equipment and ancillary things around those systems is a lot less, because you’re using those resources more efficiently with those very high-density components. So continuing, and perhaps even accelerating, our path to this ultra-high-dense kind of computing is going to help us get to the energy savings we need, maybe to accommodate some of those larger models that are coming.

Laurel: Yeah, that definitely makes sense. And this is a good segue into this other part of it, which is how data centers, hardware, and software can collaborate to create more energy-efficient technology without compromising function. So how can enterprises invest in more energy-efficient hardware, hardware-aware software, and, as you were mentioning earlier, large language models or LLMs with smaller, downsized infrastructure, but still reap the benefits of AI?

Zane: I think there are a lot of opportunities, and maybe the most exciting one that I see right now is that even as we’re pretty wowed and blown away by what these really large models are able to do, even though they require tens of megawatts of super compute power to do, you can actually get a lot of those benefits with far smaller models as long as you’re content to operate them within some specific knowledge domain. So we’ve often referred to these as expert models. So take for example an open source model like the Llama 2 that Meta produced. So there’s like a 7 billion parameter version of that model. There’s also, I think, a 13 and 70 billion parameter versions of that model compared to a GPT-4, maybe something like a trillion element model. So it’s far, far, far smaller, but when you fine tune that model with data to a specific use case, so if you’re an enterprise, you’re probably working on something fairly narrow and specific that you’re trying to do.

Maybe it’s a customer service application or it’s a financial services application, and you as an enterprise have a lot of data from your operations, that’s data that you own and you have the right to use to train the model. And so even though that’s a much smaller model, when you train it on that domain specific data, the domain specific results can be quite good in some cases even better than the large model. So you give up that kind of amazing general purpose use like when you’re using ChatGPT-4 and you can ask it everything from 17th century Italian poetry to quantum mechanics, if you narrow your range, these smaller models can give you equivalent or better kind of capability, but at a tiny fraction of the energy consumption.

And we’ve demonstrated a few times, even with just a standard Intel Xeon two socket server with some of the AI acceleration technologies we have in those systems, you can actually deliver quite a good experience. And that’s without even any GPUs involved in the system. So that’s just good old-fashioned servers and I think that’s pretty exciting.

That also means the technology’s quite accessible, right? So you may be an enterprise, you have a general-purpose infrastructure that you use for a lot of things, and you can use that for AI use cases as well, taking advantage of these smaller models that fit within infrastructure you already have or infrastructure that you can easily obtain. And so those smaller models are pretty exciting opportunities. And I think that’s probably one of the first things the industry will adopt to get energy consumption under control: just right-sizing the model to the activity, to the use case that we’re targeting. You also mentioned the concept of hardware-aware software. I think that the collaboration between hardware and software has always been an opportunity for significant efficiency gains.

I mentioned early on in this conversation how virtualization was one of the pillars that gave us that kind of fantastic result over the last 15 years. And that was very much exactly that. That’s bringing some deep collaboration between the operating system and the hardware to do something remarkable. And a lot of the acceleration that exists in AI today actually is a similar kind of thinking, but that’s not really the end of the hardware software collaboration. We can deliver quite stunning results in encryption and in memory utilization in a lot of areas. And I think that that’s got to be an area where the industry is ready to invest. It is very easy to have plug and play hardware where everyone programs at a super high level language, nobody thinks about the impact of their software application downstream. I think that’s going to have to change. We’re going to have to really understand how our application designs are impacting energy consumption going forward. And it isn’t purely a hardware problem. It’s got to be hardware and software working together.

Laurel: And you’ve outlined so many of these different kinds of technologies. So how can enterprise adoption of things like modularity, liquid cooling, and hardware-aware software be incentivized so companies actually make use of all these new technologies?

Zane: A year ago, I worried a lot about that question. How do we get people who are developing new applications to just be aware of the downstream implications? One of the benefits of this revolution in the last 12 months is I think just availability of electricity is going to be a big challenge for many enterprises as they seek to adopt some of these energy intensive applications. And I think the hard reality of energy availability is going to bring some very strong incentives very quickly to attack these kinds of problems.

But I do think that beyond that, like a lot of areas in sustainability, accounting is really important. There are a lot of good intentions. There are a lot of companies with net-zero goals that they’re serious about, and they’re willing to take strong actions against those goals. But it’s hard to act on what you can’t accurately measure, whether as an enterprise or as a software developer. I think you have to find where the point of action is, where the rubber meets the road, where a micro-decision is being made. And if the carbon impact of that is understood at that point, then I think you can see people take the actions to take advantage of the tools and capabilities that are there to get a better result. And so I know there are a number of initiatives in the industry to create that kind of accounting, and especially for software development, I think that’s going to be really important.

Laurel: Well, it’s also clear there’s an imperative for enterprises that are trying to take advantage of AI to curb that energy consumption as well as meet their environmental, social, and governance or ESG goals. So what are the major challenges that come with making more sustainable AI and computing transformations?

Zane: It’s a complex topic, and I think we’ve already touched on a couple of them. Just as I was mentioning, definitely getting software developers to understand their impact within the enterprise. And if I’m an enterprise that’s procuring my applications and software, maybe cloud services, I need to make sure that accounting is part of my procurement process. In some cases that’s gotten easier; in some cases, there’s still work to do. If I’m operating my own infrastructure, I really have to look at liquid cooling, for example, and at adopting some of these more modern technologies that let us get to significant gains in energy efficiency. And of course, really looking at the use cases and finding the most energy-efficient architecture for that use case, for example, using those smaller models that I was talking about. Enterprises need to be very aware of the energy consumption of their digital technologies, how big it is, and how their decisions are affecting it.

Laurel: So could you offer an example or use case of one of those energy efficient AI driven architectures and how AI was subsequently deployed for it?

Zane: Yes. I think some of the best examples I’ve seen in the last year were really around these smaller models. Intel published an example around financial services, and we found that something like three hours of fine-tuning on financial services data allowed us to create a chatbot solution that performed in an outstanding manner on a standard Xeon processor. And I think making those solutions available to our customers is starting to open people’s eyes to how energy efficient you can be while not really giving up a whole lot in terms of the AI use case that you’re looking for. And so I think we need to just continue to get those examples out there. We have a number of collaborations, such as with Hugging Face on open source models, enabling those solutions on our products. Our Gaudi2 accelerator has also performed very well from a performance-per-watt point of view, as has the Xeon processor itself. So those are great opportunities.

Laurel: And then how do you envision the future of AI and sustainability in the next three to five years? There seems like so much opportunity here.

Zane: I think there’s going to be so much change in the next three to five years. I hope no one holds me to what I’m about to say, but I think there are some pretty interesting trends out there. One thing to think about is the trend of AI factories. Training a model is an interesting activity that’s distinct from what we normally think of as real-time digital services. A real-time digital service, like an app on your iPhone that’s connected somewhere in the cloud, is all about 99.999% uptime and short latencies to deliver the user experience that people expect. But AI training is different. It’s a little bit more like a factory. We produce models as a product, and then the models are used to create the digital services. And that, I think, becomes an important distinction.

So I can actually build some giant gigawatt facility somewhere that does nothing but train models on a large scale. I can partner with the electricity providers and utilities, much like an aluminum plant would do today, and actually modulate my energy consumption with its availability. Or maybe I take advantage of solar or wind power’s availability and modulate when I’m consuming power and when I’m not. So I think we’re going to see some really large-scale efforts like that, and those AI factories could be very, very efficient. They can be liquid cooled, and they can be closely coupled to the utility infrastructure. I think that’s a pretty exciting opportunity, even while it’s an acknowledgement that there’s going to be gigawatts and gigawatts of AI training going on. The second opportunity in this three-to-five-year window: I do think liquid cooling will become far more pervasive.

I think that will be driven by the need to cool the next generation of accelerators and GPUs, which will make it a requirement, but then we’ll be able to build that technology out and scale it more ubiquitously for all kinds of infrastructure. That will let us shave gigawatts out of the infrastructure and save hundreds of billions of gallons of water annually. I think that’s incredibly exciting. And then there’s the innovation on the model side as well. So much has changed in just the last five years with large language models like ChatGPT; let’s not assume there’s not going to be even bigger change in the next three to five years. What are the new problems that are going to be solved, the new innovations? So I think as the costs and impact of AI are felt more substantively, there’ll be a lot of innovation on the model side, and people will come up with new ways of cracking some of these problems, and there’ll be new exciting use cases that come about.

Finally, I think on the hardware side, there will be new AI architectures. From an acceleration point of view today, a lot of AI performance is limited by memory bandwidth and by networking bandwidth between the various accelerator components. And I don’t think we’re anywhere close to having optimized AI training or AI inferencing systems. I think the discipline is moving faster than the hardware, and there’s a lot of opportunity for optimization. So I think we’ll see significant differences in networking and in memory solutions over the next three to five years, and certainly over the next 10 years, that can open up a substantial set of improvements.

And of course, Moore’s Law itself continues to advance: advanced packaging technologies and new transistor types allow us to build ever more ambitious pieces of silicon, which will have substantially higher energy efficiency. So all of those things, I think, will be important. Whether we can keep up with the explosion in AI functionality through energy efficiency gains, I think that’s the real question, and it’s just going to be a super interesting time. I think it’s going to be a very innovative time in the computing industry over the next few years.

Laurel: And we’ll have to see. Zane, thank you so much for joining us on the Business Lab.

Zane: Thank you.

Laurel: That was Zane Ball, corporate vice president and general manager of data center platform engineering and architecture at Intel, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review.

That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can also find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Giro Studios. Thanks for listening.


This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Bringing breakthrough data intelligence to industries

As organizations recognize the transformational opportunity presented by generative AI, they must consider how to deploy that technology across the enterprise in the context of their unique industry challenges, priorities, data types, applications, ecosystem partners, and governance requirements. Financial institutions, for example, need to ensure that data and AI governance has the built-in intelligence to fully align with strict compliance and regulatory requirements. Media and entertainment (M&E) companies seek to build AI models to drive deeper product personalization. And manufacturers want to use AI to make their internet of things (IoT) data insights readily accessible to everyone from the data scientist to the shop floor worker.

In any of these scenarios, the starting point is access to all relevant data—of any type, from any source, in real time—governed comprehensively and shared across an industry ecosystem. When organizations can achieve this with the right data and AI foundation, they have the beginnings of data intelligence: the ability to understand their data and break free from data silos that would block the most valuable AI outcomes.

But true data intelligence is about more than establishing the right data foundation. Organizations are also wrestling with how to overcome dependence on highly technical staff and create frameworks for data privacy and organizational control when using generative AI. Specifically, they are looking to enable all employees to use natural language to glean actionable insight from the company’s own data; to leverage that data at scale to train, build, deploy, and tune their own secure large language models (LLMs); and to infuse intelligence about the company’s data into every business process.

In this next frontier of data intelligence, organizations will maximize value by democratizing AI while differentiating through their people, processes, and technology within their industry context. Based on a global, cross-industry survey of 600 technology leaders as well as in-depth interviews with technology leaders, this report explores the foundations being built and leveraged across industries to democratize data and AI. Following are its key findings:

• Real-time access to data, streaming, and analytics are priorities in every industry. Because of the power of data-driven decision-making and its potential for game-changing innovation, CIOs require seamless access to all of their data and the ability to glean insights from it in real time. Seventy-two percent of survey respondents say the ability to stream data in real time for analysis and action is “very important” to their overall technology goals, while another 20% believe it is “somewhat important”—whether that means enabling real-time recommendations in retail or identifying a next best action in a critical health-care triage situation.

• All industries aim to unify their data and AI governance models. Aspirations for a single approach to governance of data and AI assets are strong: 60% of survey respondents say a single approach to built-in governance for data and AI is “very important,” and an additional 38% say it is “somewhat important,” suggesting that many organizations struggle with a fragmented or siloed data architecture. Every industry will have to achieve this unified governance in the context of its own unique systems of record, data pipelines, and requirements for security and compliance.

• Industry data ecosystems and sharing across platforms will provide a new foundation for AI-led growth. In every industry, technology leaders see promise in technology-agnostic data sharing across an industry ecosystem, in support of AI models and core operations that will drive more accurate, relevant, and profitable outcomes. Technology teams at insurers and retailers, for example, aim to ingest partner data to support real-time pricing and product offer decisions in online marketplaces, while manufacturers see data sharing as an important capability for continuous supply chain optimization. Sixty-four percent of survey respondents say the ability to share live data across platforms is “very important,” while an additional 31% say it is “somewhat important.” Furthermore, 84% believe a managed central marketplace for data sets, machine learning models, and notebooks is very or somewhat important.

• Preserving data and AI flexibility across clouds resonates with all verticals. Sixty-three percent of respondents across verticals believe that the ability to leverage multiple cloud providers is at least somewhat important, while 70% feel the same about open-source standards and technology. This is consistent with the finding that 56% of respondents see a single system to manage structured and unstructured data across business intelligence and AI as “very important,” while an additional 40% see this as “somewhat important.” Executives are prioritizing access to all of the organization’s data, of any type and from any source, securely and without compromise.

• Industry-specific requirements will drive the prioritization and pace by which generative AI use cases are adopted. Supply chain optimization is the highest-value generative AI use case for survey respondents in manufacturing, while it is real-time data analysis and insights for the public sector, personalization and customer experience for M&E, and quality control for telecommunications. Generative AI adoption will not be one-size-fits-all; each industry is taking its own strategy and approach. But in every case, value creation will depend on access to data and AI permeating the enterprise’s ecosystem and AI being embedded into its products and services.

Maximizing value and scaling the impact of AI across people, processes, and technology is a common goal across industries. But industry differences merit close attention for their implications on how intelligence is infused into the data and AI platforms. Whether it be for the retail associate driving omnichannel sales, the health-care practitioner pursuing real-world evidence, the actuary analyzing risk and uncertainty, the factory worker diagnosing equipment, or the telecom field agent assessing network health, the language and scenarios AI will support vary significantly when democratized to the front lines of every industry.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Navigating a shifting customer-engagement landscape with generative AI

One can’t step into the same river twice. This simple representation of change as the only constant was taught by the Greek philosopher Heraclitus more than 2000 years ago. Today, it rings truer than ever with the advent of generative AI. The emergence of generative AI is having a profound effect on today’s enterprises—business leaders face a rapidly changing technology that they need to grasp to meet evolving consumer expectations.

“Across all industries, customers are at the core, and tapping into their latent needs is one of the most important elements to sustain and grow a business,” says Akhilesh Ayer, executive vice president and global head of AI, analytics, data, and research practice at WNS Triange, a unit of WNS Global Services, a leading business process management company. “Generative AI is a new way for companies to realize this need.”

A strategic imperative

Generative AI’s ability to harness customer data in a highly sophisticated manner means enterprises are accelerating plans to invest in and leverage the technology’s capabilities. In a study titled “The Future of Enterprise Data & AI,” Corinium Intelligence and WNS Triange surveyed 100 global C-suite leaders and decision-makers specializing in AI, analytics, and data. Seventy-six percent of the respondents said that their organizations are already using or planning to use generative AI.

According to McKinsey, while generative AI will affect most business functions, “four of them will likely account for 75% of the total annual value it can deliver.” Among these are marketing and sales and customer operations. Yet, despite the technology’s benefits, many leaders are unsure about the right approach to take and mindful of the risks associated with large investments.

Mapping out a generative AI pathway

One of the first challenges organizations need to overcome is senior leadership alignment. “You need the necessary strategy; you need the ability to have the necessary buy-in of people,” says Ayer. “You need to make sure that you’ve got the right use case and business case for each one of them.” In other words, a clearly defined roadmap and precise business objectives are as crucial as understanding whether a process is amenable to the use of generative AI.

The implementation of a generative AI strategy can take time. According to Ayer, business leaders should maintain a realistic perspective on the time required to formulate a strategy, conduct the necessary training across teams and functions, and identify the areas where generative AI can add value. And for any generative AI deployment to work seamlessly, the right data ecosystems must be in place.

Ayer cites WNS Triange’s collaboration with an insurer to create a claims process by leveraging generative AI. Thanks to the new technology, the insurer can immediately assess the severity of a vehicle’s damage from an accident and make a claims recommendation based on the unstructured data provided by the client. “Because this can be immediately assessed by a surveyor and they can reach a recommendation quickly, this instantly improves the insurer’s ability to satisfy their policyholders and reduce the claims processing time,” Ayer explains.

All that, however, would not be possible without data on past claims history, repair costs, transaction data, and other necessary data sets to extract clear value from generative AI analysis. “Be very clear about data sufficiency. Don’t jump into a program where eventually you realize you don’t have the necessary data,” Ayer says.

The benefits of third-party experience

Enterprises are increasingly aware that they must embrace generative AI, but knowing where to begin is another thing. “You start off wanting to make sure you don’t repeat mistakes other people have made,” says Ayer. An external provider can help organizations avoid those mistakes and leverage best practices and frameworks for testing and defining explainability and benchmarks for return on investment (ROI).

Using pre-built solutions by external partners can expedite time to market and increase a generative AI program’s value. These solutions can harness pre-built industry-specific generative AI platforms to accelerate deployment. “Generative AI programs can be extremely complicated,” Ayer points out. “There are a lot of infrastructure requirements, touch points with customers, and internal regulations. Organizations will also have to consider using pre-built solutions to accelerate speed to value. Third-party service providers bring the expertise of having an integrated approach to all these elements.”

Ayer offers the example of WNS Triange helping a travel intermediary use generative AI to deal with customer inquiries about airline rescheduling, cancellations, and other itinerary complications. “Our solution is immediately able to go into a thousand policy documents, pick out the policy parameters relevant to the query… and then come back quickly not only with the response but with a nice, summarized, human-like response,” he says.
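The pattern Ayer describes, retrieving the policy passages relevant to a query and asking a model to answer from them, is commonly implemented as retrieval-augmented generation. The sketch below is a generic illustration of that pattern rather than WNS Triange’s actual solution; the embedding model, the sample passages, and the answer_with_llm helper are assumptions.

```python
# Generic retrieval-augmented generation sketch: find the policy passages most
# relevant to a traveler's query, then hand them to an LLM to answer.
# Not a description of WNS Triange's system; embedding model, passages, and
# the answer_with_llm() placeholder are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical corpus: one string per policy clause or paragraph.
policy_passages = [
    "Rebooking is free of charge if the airline cancels the flight.",
    "Refunds for voluntary cancellations incur a $75 processing fee.",
    # ...thousands more clauses in a real deployment
]
passage_vecs = embedder.encode(policy_passages, normalize_embeddings=True)

def retrieve(query: str, k: int = 3) -> list[str]:
    """Return the k passages most similar to the query (cosine similarity)."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = passage_vecs @ q
    top = np.argsort(scores)[::-1][:k]
    return [policy_passages[i] for i in top]

def answer_with_llm(query: str, context: list[str]) -> str:
    # Placeholder for a call to whatever LLM the deployment uses.
    prompt = ("Answer using only this context:\n"
              + "\n".join(context)
              + f"\n\nQuestion: {query}")
    return prompt  # in practice: return llm.generate(prompt)

query = "My flight was cancelled by the airline. Can I rebook for free?"
print(answer_with_llm(query, retrieve(query)))
```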

In another example, Ayer shares that his company helped a global retailer create generative AI–driven designs for personalized gift cards. “The customer experience goes up tremendously,” he says.

Hurdles in the generative AI journey

As with any emerging technology, however, there are organizational, technical, and implementation barriers to overcome when adopting generative AI.

Organizational:  One of the major hurdles businesses can face is people. “There is often immediate resistance to the adoption of generative AI because it affects the way people work on a daily basis,” says Ayer.

As a result, securing internal buy-in from all teams and being mindful of a skills gap is a must. Additionally, the ability to create a business case for investment—and getting buy-in from the C-suite—will help expedite the adoption of generative AI tools.

Technical: The second set of obstacles relates to large language models (LLMs) and mechanisms to safeguard against hallucinations and bias and ensure data quality. “Companies need to figure out if generative AI can solve the whole problem or if they still need human input to validate the outputs from LLM models,” Ayer explains. At the same time, organizations must ask whether the generative AI models being used have been appropriately trained within the customer context or with the enterprise’s own data and insights. If not, there is a high chance that the response will be incorrect. Another related challenge is bias: If the underlying data has certain biases, the modeling of the LLM could be unfair. “There have to be mechanisms to address that,” says Ayer. Other issues, such as data quality, output authenticity, and explainability, also must be addressed.

Implementation: The final set of challenges relates to actual implementation. The cost of implementation can be significant, especially if companies cannot orchestrate a viable solution, says Ayer. In addition, the right infrastructure and people must be in place to avoid resource constraints. And users must be convinced that the output will be relevant and of high quality, so as to gain their acceptance for the program’s implementation.

Lastly, privacy and ethical issues must be addressed. The Corinium Intelligence and WNS Triange survey showed that almost 72% of respondents were concerned about ethical AI decision-making.

The focus of future investment

The entire ecosystem of generative AI is moving quickly. Enterprises must be agile and adapt quickly to change to ensure customer expectations are met and maintain a competitive edge. While it is almost impossible to anticipate what’s next with such a new and fast-developing technology, Ayer says that organizations that want to harness the potential of generative AI are likely to increase investment in three areas:

  • Data modernization, data management, data quality, and governance: To ensure underlying data is correct and can be leveraged.
  • Talent and workforce: To meet demand, training, apprenticeships, and injection of fresh talent or leveraging market-ready talent from service providers will be required.
  • Data privacy solutions and mechanisms: To ensure privacy is maintained, C-suite leaders must also keep pace with relevant laws and regulations across relevant jurisdictions.

However, it shouldn’t be a case of throwing everything at the wall and seeing what sticks. Ayer advises that organizations examine ROI from the effectiveness of services or products provided to customers. Business leaders must clearly demonstrate and measure a marked improvement in customer satisfaction levels using generative AI–based interventions.

“Along with a defined generative AI strategy, companies need to understand how to apply and build use cases, how to execute them at scale and speed to market, and how to measure their success,” says Ayer. Leveraging generative AI for customer engagement is typically a multi-pronged approach, and a successful partnership can help with every stage.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Developing climate solutions with green software

After years of committing to sustainable practices in his personal life, from recycling to using cloth-based diapers, Asim Hussain, currently the director of green software and ecosystems at Intel, began to ask questions about the practices in his work: software development.

Developers often asked whether their software was secure enough, fast enough, or cost-effective enough, but, Hussain says, they rarely considered the environmental consequences of their applications. Hussain would go on to work at Intel and become the executive director and chairperson of the Green Software Foundation, a nonprofit aiming to create an ecosystem of people, tooling, and best practices around sustainable software development.

“What we need to do as software developers and software engineers is we need to make sure that it is emitting the least amount of carbon for the same amount of value and user functionality that we’re getting out of it,” says Hussain.

The three pillars of green software are energy efficiency, hardware efficiency, and carbon awareness. Making more efficient use of hardware and energy when developing and running applications can go a long way toward reducing emissions, Hussain says. And carbon-aware computing means shifting computing work toward times and places where electricity comes from renewable sources rather than fossil fuels, cutting emissions without compromising performance.

Often, when something is dubbed “green,” there is an assumption that the product, application, or practice functions worse than its less environmentally friendly version. With software, however, the opposite is true.

“Being green in the software space means being more efficient, which translates almost always to being faster,” says Hussain. “When you factor in the hardware efficiency component, oftentimes it translates to building software that is more resilient, more fault-tolerant. Oftentimes it also translates then into being cheaper.”

Instituting green software necessitates not just a shift in practices and tooling but also a culture change within an enterprise. While regulations and ESG targets help to create an imperative, says Hussain, a shift in mindset can enable some of the greatest strides forward.

“If there’s anything we really need to do, it is to drive that behavior change. We need to drive behavior change so people actually invest their time in making software more energy efficient, more hardware efficient, or more carbon aware.”

This episode of Business Lab is produced in partnership with Intel.

Full Transcript

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

Our topic is green software, from apps to devices to the cloud. Computing runs the world around us. However, there is a better way to do it with a focus on sustainability.

Two words for you: sustainable code.

My guest is Asim Hussain, who is the director of the Office of Green Software and Ecosystems at Intel, as well as the chairperson of the Green Software Foundation.

This podcast is produced in partnership with Intel.

Welcome, Asim.

Asim Hussain: Hi Laurel. Thank you very much for having me.

Laurel: Well, glad you’re here. So for a bit of background, you’ve been working in software development and sustainability advocacy from startups to global enterprises for the last two decades. What drew you into sustainability as a focus and what are you working on now?

Asim: I’ve been involved and interested in the sustainability space for quite a while on a very personal level. Then around the birth of my first son, about five years ago now, I started asking myself one question: how come I was willing to do all these things for sustainability, to recycle, to use cloth-based nappies, all sorts of these different things, yet I could not remember, in my entire career, one single moment in any technical discussion, in any architectural meeting, in any conversation about how we’re going to build a piece of software, where the environment came up? I mean, people oftentimes raise points like, is this secure enough? Is this fast enough? Does this cost too much? But at no point had I ever heard anybody ask the question: is this emitting too much carbon? This piece of software, this solution that we’re talking about right now, what kind of environmental impacts does it have? I’ve never, ever, ever heard anybody raise that question.

So I really started to ask that question myself. I found other people who are like me. Five years ago, there weren’t many of us, but were all asking the same questions. I joined and then I started to become a co-organizer of a community called ClimateAction.Tech. Then the community just grew. A lot of people were starting to ask themselves these questions and some answers were coming along. At the time, I used to work at Microsoft and I pitched and formed something called the green cloud advocacy team, where we talked about how to actually build applications in a greener way on the cloud.

We formed something called the Green Software Foundation, a consortium of now 60 member organizations, which I am a chairperson of. Over a year ago I joined Intel, because Intel has been heavily investing in the sustainable software space. If you think about what Intel does, pretty much everything Intel produces is used by developers, who write software and code on Intel’s products. So it makes sense for Intel to have a strong green software strategy. That’s why I was brought in, and I’ve since been working on Intel’s green software strategy internally.

Laurel: So a little bit more about that. How can organizations make their software greener? Then maybe we should take a step back and define what green software actually is.

Asim: Well, I think we have to define what green software actually is first. The way the conversation’s landed in recent years, and the Green Software Foundation has been a large part of this, is that we’ve coalesced around this idea of carbon efficiency. Everything we do emits carbon; this tool we’re using right now to record this session is emitting carbon right now. What we need to do as software developers and software engineers is make sure that our software is emitting the least amount of carbon for the same amount of value and user functionality that we’re getting out of it. That’s what we call carbon efficiency.

What we say is there are three pillars underneath that; there are really only three ways to make your software green. The first is to make it more energy efficient, to use less energy. Most electricity is still created through the burning of fossil fuels, so just using less electricity is going to emit fewer carbon emissions into the atmosphere. The second is hardware efficiency, because all software runs on hardware. If you’re talking about a mobile phone, typically people are forced to move on from their phones because newer software just doesn’t run on older models. In the cloud, it tends to be more of a conversation around utilization: making more use of the servers that you already have, making more efficient use of the hardware. The third one is a very interesting, very new space. It’s called carbon awareness, or carbon-aware computing.

That is: you are going to be using electricity anyway, so can you architect your software in such a way that it does more when the electricity is clean and does less when the electricity is dirty? So, for instance, it does more when there’s more renewable energy on the grid right now, and it does less when more coal or gas is getting burnt. There are some very interesting, very high-profile projects in this space, and carbon-aware computing is an area where there’s a lot of interest because it’s a stepping stone. It might not get you your 50, 60, 70% carbon reductions, but it will get you your 1, 2, 3, and 4% carbon reductions, and it’ll get you that with very minimal investment. So those are basically the three areas, what we call the three pillars of green software: energy efficiency, hardware efficiency, and carbon awareness.
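A minimal sketch of the carbon-aware pattern Hussain describes might look like the following: a flexible batch job is deferred until grid carbon intensity falls below a threshold. The get_grid_carbon_intensity function is a placeholder to be wired to a real signal (a grid operator or commercial carbon-intensity feed), and the threshold and polling interval are assumptions.

```python
# Minimal sketch of carbon-aware scheduling: run flexible work when the grid
# is cleaner, wait when it is dirtier. The carbon-intensity lookup is a
# placeholder to be wired to a real data source; the threshold and polling
# interval are illustrative assumptions.
import time

CARBON_THRESHOLD_G_PER_KWH = 200   # assumed cutoff for "clean enough"
POLL_INTERVAL_SECONDS = 15 * 60

def get_grid_carbon_intensity() -> float:
    """Placeholder: return current grid carbon intensity in gCO2/kWh.
    In practice, query a grid operator or carbon-intensity API here."""
    raise NotImplementedError

def run_when_clean(job) -> None:
    """Block until the grid is below the threshold, then run the job."""
    while True:
        intensity = get_grid_carbon_intensity()
        if intensity <= CARBON_THRESHOLD_G_PER_KWH:
            job()
            return
        print(f"Grid at {intensity:.0f} gCO2/kWh; deferring job.")
        time.sleep(POLL_INTERVAL_SECONDS)

# Example usage (once the lookup is wired up):
# run_when_clean(lambda: print("Running nightly model retraining..."))
```

This is the stepping stone Hussain mentions: a few percent of savings for very little engineering effort, since only the scheduling of the work changes, not the work itself.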

Laurel: So another reason we’re talking about all of this is that technology can contribute to the very environmental issues it is trying to help solve. For example, a lot of energy is needed to train AI models. Also, blockchain was key in the development of energy-efficient microgrids, but it’s also behind the development of cryptocurrency platforms, some of which consume more energy than a small country. So how can advanced technologies like AI, machine learning, and blockchain contribute positively to the development of green software?

Asim: That’s an interesting question, because the focus is often on how we actually make that technology greener. But I don’t believe that’s the whole story. The broader story is: how can we use that technology to make software greener? I think there are many ways you can tackle that question. One thing that’s been interesting for me in my journey as a software developer joining Intel is realizing how little I knew about hardware. There is so much there; I describe it as the gap between software and silicon. The gap is quite large right now. If you’re building software these days, you have very little understanding of the silicon that’s running that software. Through a greater understanding of exactly how your software is executed by the silicon to implement its functionality, we are seeing a lot of great opportunities to reduce emissions and to make that software more energy efficient and more hardware efficient.

I think that’s where AI can really help out. Developer productivity has been the buzzword in this space for a very long time. Developers are extremely expensive, and getting to market fast and beating your competition is the name of the game these days. So it’s always been about how we implement the functionality we need as fast as possible, make sure it’s secure, and get it out the door. But oftentimes the only way you can do that is to increase the gap between the software and the silicon and just make it a little bit more inefficient. I think AI can really help there. There are copilot solutions that, as you’re developing code, could actually suggest that if you were to write your code in a slightly different way, it could be more efficient. So that’s one way AI can help out.

Another way that I’m seeing AI utilized in this space is when you deploy software. Silicon and the products that we produce come out of the box configured in a certain way, but they can actually be tuned to execute a particular piece of software much more efficiently. So if you have a data center running just one type of software, you can actually tune the hardware so that software runs more efficiently on that hardware. We’re seeing AI solutions come on the market that can automatically figure out what type of application you are, how you run, how you work. We have a solution called Granulate, which does part of this as well. It can figure out how to tune the underlying hardware in such a way that it executes that software more efficiently. So those are a couple of ways this technology could actually be used to make software itself greener.

Laurel: To bridge that gap between software and silicon, you must be able to measure the progress and meet targets. So what parameters do you use to measure the energy efficiency of software? Could you talk us through the tenets of actually measuring?

Asim: So measuring is an extremely challenging problem. When we first launched the Green Software Foundation three years ago, I remember asking all the members, what is your biggest pain point? Almost all of them came back with measuring. Measuring is very, very challenging. It’s so nuanced; there are so many different levels to it. For instance, at Intel, we have technology in our chips to measure the energy of the whole chip; there are counters on the chip that measure it. Unfortunately, that only gives you the energy of the entire chip. So it does give you a measurement, but if you are a developer, there are maybe 10 processes running on that chip and only one of them is yours. You need to know how much energy your process is consuming, because that’s what you can optimize for, that’s what you can see. Currently, the best way to measure at that level is using models, models which are generated either through AI or through other processes where you effectively run large amounts of data and generate statistical models.

Oftentimes the model that’s used is one that takes CPU [central processing unit] utilization, how busy a CPU is, and translates that into energy. So you can see that my process is consuming 10% of the CPU, and there are models out there that can convert that into energy. But again, all models are wrong; some models are useful. So there’s always so much nuance to this whole space, because how you’ve tweaked your computer and what else is running on it can also affect how those numbers come out. So, unfortunately, this is a very, very challenging area.
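As a rough illustration of the model-based approach Hussain describes, the sketch below attributes chip-level energy, read from Linux RAPL counters, to a single process in proportion to its share of busy CPU time. The sysfs path and the proportional attribution are simplifying assumptions; as he notes, all such models are approximations.

```python
# Rough sketch: attribute package energy (Linux RAPL counter) to one process
# in proportion to its share of busy CPU time over a sample window.
# The sysfs path, the proportional model, and ignoring counter wraparound are
# simplifying assumptions; reading the counter may require elevated privileges.
import time
import psutil

RAPL_ENERGY_FILE = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_package_energy_joules() -> float:
    with open(RAPL_ENERGY_FILE) as f:
        return int(f.read()) / 1_000_000  # microjoules -> joules

def estimate_process_energy(pid: int, window_s: float = 10.0) -> float:
    proc = psutil.Process(pid)
    proc.cpu_percent(None)      # prime the per-process CPU counter
    psutil.cpu_percent(None)    # prime the system-wide CPU counter
    start_energy = read_package_energy_joules()
    time.sleep(window_s)
    package_joules = read_package_energy_joules() - start_energy
    proc_busy = proc.cpu_percent(None)                          # % of one core
    total_busy = psutil.cpu_percent(None) * psutil.cpu_count()  # % across all cores
    share = proc_busy / total_busy if total_busy > 0 else 0.0
    return package_joules * share

if __name__ == "__main__":
    import sys
    pid = int(sys.argv[1])
    print(f"~{estimate_process_energy(pid):.2f} J attributed to process {pid}")
```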

But this is really the really big area that a lot of people are trying to resolve right now. We are not at the perfect solution, but we are way, way better than we were three, four or five years ago. It’s actually a very exciting time for measurement in this space.

Laurel: Well, and I guess part of it is that green software seems to be developed with greater scrutiny and higher quality controls to ensure that the product actually meets these standards to reduce emissions. Measurement is part of that, right? So what are some of the rewards beyond emissions reduction or meeting green goals of developing software? You kind of touched on that earlier with the carbon efficiency as well as hardware efficiency.

Asim: Yeah, so this is something I used to think about a lot, because the term green has a lot associated with it. Oftentimes when people historically have used the word green, you can have the main product or the green version of the product, and there’s an idea in your mind that the green version is somehow less than, somehow not as good. But in the software space it’s so interesting because the exact opposite is true. Being green in the software space means being more efficient, which translates almost always to being faster. When you factor in the hardware efficiency component, oftentimes it translates to building software that is more resilient, more fault-tolerant. Oftentimes it also translates into being cheaper. So actually, green has a lot of positive associations with it already.

Laurel: So in that vein, how can external standards help provide guidance for building software and solutions? I mean, obviously, there’s a need to create something like the Green Software Foundation, and with the focus that most enterprises have now on environmental, social, and governance goals or ESG, companies are now looking more and more to build those ideas into their everyday workflow. So how do regulations help and not necessarily hinder this kind of progress?

Asim: So standards are very, very important in this space. When we looked at the ecosystem about three or four years ago, a lot of enterprises were very interested in green software, but the biggest problem they had was: what do I trust? Whose advice should I take? That’s where standards come in; that’s where standards are most important. Standards, at least the way we develop them inside the Green Software Foundation, are done via consensus. There are around 60 member organizations. So when you see a standard that’s been created by that many people, and that many people have been involved with it, it really builds up that trust. So now you know what to do. Those standards give you a compass direction you can trust, telling you which way to go.

There are several standards that we’ve been focusing on in the Green Software Foundation. One is called the SCI, the Software Carbon Intensity specification. To approve it as an ISO standard, you have to reach consensus across 196 countries, so you get even more trust in a standard and you can use it. Standards really help to build up that trust, which organizations can use to guide the directions they take. There are a couple of other standards coming up in the foundation that I think are quite interesting. One is called Real-Time Cloud. One of the challenges right now, and it always comes back to measurement, it always comes back to measurement, is that measurement today is very discrete: it often happens just a few times a year, and when you get measurement data, it is often very delayed. So one of the specs being worked on right now is called Real-Time Cloud.

It’s trying to ask the question: is it possible to get data that is real-time? Oftentimes when you want to react and change behaviors, you need real-time data, so that when somebody does something, they know the impact of that action instantly and can make adjustments instantly. If they have to wait three months, that behavior change might not happen. Real-time [data] is oftentimes at loggerheads with regulations, because you usually have to get your data audited, and auditing data that’s real-time is very, very challenging. So one of the questions we’re trying to ask is: is it possible to have data which is real-time, which then aggregates up over the course of a year? Can that aggregation then provide enough trust so that an auditor can say, actually, we now trust this information and we will allow it to be used in regulatory reporting?

That’s something that we’re very excited about, because you really need real-time data to drive behavior change. If there’s anything we really need to do, it is to drive that behavior change. We need to drive behavior change so people actually invest their time in making software more energy efficient, more hardware efficient, or more carbon aware. So that’s some of the ways standards are really helping in this space.
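For reference, the SCI specification Hussain mentions boils down to a simple formula: operational emissions (energy used multiplied by the carbon intensity of the electricity) plus embodied hardware emissions, normalized per functional unit such as a user, API call, or job. A tiny sketch, with illustrative numbers:

```python
def software_carbon_intensity(energy_kwh: float,
                              grid_intensity_gco2_per_kwh: float,
                              embodied_gco2: float,
                              functional_units: float) -> float:
    """SCI = ((E * I) + M) per R: operational plus embodied carbon,
    normalized per functional unit (e.g., per API call or per user)."""
    return (energy_kwh * grid_intensity_gco2_per_kwh + embodied_gco2) / functional_units

# Example: 12 kWh at 400 gCO2/kWh plus 800 g embodied, over 100,000 API calls.
print(software_carbon_intensity(12, 400, 800, 100_000))  # ~0.056 gCO2 per call
```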

Laurel: I think it’s really helpful to talk about standards and how they are so ingrained with software development in general, because there are so many misconceptions about sustainability. So what are some of the other misconceptions that people get stuck on, maybe even the very act of calling it green? Are there philosophies or strategies that you caution against or advocate for?

Asim: There are a couple of things I talk about. One of them is that it takes everybody. I remember very early on when I was talking in this space, oftentimes the conversation went: oh, don’t bother talking to that person, or don’t talk to this sector of developers, don’t talk to that type of developer, only talk to these people, the people who have the most influence to make the kinds of changes that make software greener. But it really takes a cultural change inside an organization. That’s what’s very important. It takes everybody. You can’t really talk to just one slice of the developer ecosystem; you need to talk to everybody. Every single developer or engineer inside an organization really needs to take this on board. So that’s one of the things I say: you have to speak to every single person. You cannot just speak to one set of people and exclude another.

Another misconception I often see is that people rank where effort should be spent in terms of the slice of the carbon pie each area is responsible for. But really, you should be focusing not on the size of the slice, but on your ability to decarbonize that slice of the pie. That’s why green software is so interesting and such a great place to spend effort and time. Depending on which academic paper you look at, software can be between 2 and 4% of global emissions, so some people might say, well, that’s not really worth spending the time on.

But my argument is that it’s actually far easier for us to decarbonize that 2 to 4% than to decarbonize other sectors like airlines or concrete. In the software space, we often know what we need to do; we know the choices. There doesn’t need to be new technology invented, there just need to be decisions made to prioritize this work. That’s something I think is very, very important. We should rank everything in terms of our ability to decarbonize, the ease of decarbonization, and then work down from the topmost item, rather than looking at things just in terms of tons of carbon, which I think leads to wrong decision-making.

Laurel: Well, I think you’re laying out a really good argument, because green initiatives can be daunting, especially for large enterprises looking to meet decarbonization thresholds within the next decade. For companies making this investment, how should they begin? What are the fundamental things to be aware of when starting this journey?

Asim: So the first step, I would say, is training. What we’re describing here, especially the green software space, is a very new movement, a very new field of computing. So a lot of the terms I use are just not well understood, and a lot of the reasons behind those terms are not well understood either. So the number one thing I always say is you need to focus on training. There’s loads of training out there. The Green Software Foundation has some at learn.GreenSoftware.Foundation; it’s just two hours, and it’s free. We send that over to anybody who’s starting in this space, just to understand the language and the terminology and to get everybody on the same page. That is usually a very good start. Now, in terms of how you motivate change inside an organization, I think about this a lot.

If you’re the lead of an organization and you want to make a change, how do you actually make that change? I’m a big, big believer in trusting your team, trusting your people. If you give engineers a problem, they will find a solution to that problem. But what they oftentimes need is permission, a thumbs up from leadership that this is a priority. So that’s why it’s very important for organizations to be very public about their commitments, make public commitments. Same way Intel has made public commitments. Be very vocal as a leader inside your organization and be very clear that this is a priority for you, that you will listen to people and to teams who bring you solutions in this space.

You will find that people within your organization are already thinking about this space, already have ideas, already probably have decks ready to present to you. Just create an environment where they feel capable of presenting it to you. I guarantee you, your solutions are already within your organization and already within the minds of your employees.

Laurel: Well, that is all very inspiring and interesting and so exciting. So when you think about the next three to five years in green software development and adoption, what are you looking forward to the most? What excites you?

Asim: I think I’m very excited right now, to be honest with you. I look back five years ago, to the very, very early days when I first looked at this, and I still remember that if there was one article mentioning green software, we would all lose our heads. We’d get so excited about it, we’d share it, we’d pore over it. Now I’m inundated with information. This movement has grown significantly. There are so many organizations that are deeply interested in this space. There’s so much research, so much academic research.

I have so many articles coming my way every single week that I do not have time to read them all. So that gives me a lot of hope for the future. That really excites me. It might just be because I’m at the cutting edge of this space, so I see a lot of this stuff before anybody else, but I see a huge amount of interest and a huge amount of activity as well. I see a lot of people working on solutions, not just talking about problems, but working on solutions to those problems. That honestly just excites me. I don’t know where we’re going to end up in five years’ time, but if this is our growth so far, I think we’re going to end up in a very good place.

Laurel: Oh, that’s excellent. Awesome. Thank you so much for joining us today on the Business Lab.

Asim: Thank you very much for having me.

Laurel: That was Asim Hussain, the director of the Office of Green Software and Ecosystems at Intel, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review.

That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can also find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Giro Studios. Thanks for listening.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Mapping the micro and macro of biology with spatial omics and AI

37 trillion. That is the number of cells that form a human being. How they all work together to sustain life is possibly the biggest unsolved puzzle in biology. A group of up-and-coming technologies for spatially resolved multi-omics, here collectively called “spatial omics,” may provide researchers with the solution.

Over the last 20 years, the omics revolution has enabled us to understand cell and tissue biology at ever increasing resolutions. Bulk sequencing techniques that emerged in the mid 2000s allowed the study of mixed populations of cells. A decade later, single-cell omics methods became commercially available, revolutionizing our understanding of cell physiology and pathology. These methods, however, required dissociating cells from their tissue of origin, making it impossible to study their spatial organization in tissue.

Spatial omics refers to the ability to measure the activity of biomolecules (RNA, DNA, proteins, and other omics) in situ—directly from tissue samples. This is important because many biological processes are controlled by highly localized interactions between cells that take place in spatially heterogeneous tissue environments. Spatial omics allows previously unobservable cellular organization and biological events to be viewed in unprecedented detail.

A few years ago, these technologies were just prototypes in a handful of labs around the world. They worked only on frozen tissue and they required impractically large amounts of precious tissue biopsies. But as these challenges have been overcome and the technologies commercialized by life science technology providers, these tools have become available to the wider scientific community. Spatial omics technologies are now improving at a rapid pace, increasing the number of biomolecules that can be profiled from hundreds to tens of thousands, while increasing resolution to single-cell and even subcellular scales.

Complementary advances in data and AI will expand the impact of spatial omics on life sciences and health care—while also raising new questions. How are we going to generate the large datasets that are necessary to make clinically relevant discoveries? What will data scientists see in spatial omics data through the lens of AI?

Discovery requires large-scale spatial omics datasets

Several areas of life science are already benefiting from discoveries made possible by spatial omics, with the biggest impacts in cancer and neurodegenerative disease research. However, spatial omics technologies are very new, and experiments are challenging and costly to execute. Most present studies are performed by single institutions and include only a few dozen patients. Complex cell interactions are highly patient-specific, and they cannot be fully understood from these small cohorts. Researchers need the data to enable hypothesis generation and discovery. 

This requires a shift in mentality toward collaborative projects, which can generate large-scale reference datasets both on healthy organs and human diseases. Initiatives such as The Cancer Genome Atlas (TCGA) have transformed our understanding of cancer. Similar large-scale spatial omics efforts are needed to systematically interrogate the role of spatial organization in healthy and diseased tissues; they will generate large datasets to fuel many discovery programs. In addition, collaborative initiatives steer further improvement of spatial omics technologies, generate data standards and infrastructures for data repositories, and drive the development and adoption of computational tools and algorithms.

At Owkin we are pioneering the generation of such datasets. In June 2023, we launched an initiative to create the world’s largest spatial omics dataset in cancer, with a vision to include data from 7,000 patients with seven difficult-to-treat cancers. The project, known as MOSAIC (Multi-Omics Spatial Atlas in Cancer), won’t stop at data generation: it will mine the data to learn disease biology and identify new molecular targets against which to design new drugs.

Owkin is well placed to drive this kind of initiative. We can tap a vast network of collaborating hospitals across the globe: to create the MOSAIC dataset, we are working with five world-class cancer research hospitals. And we have deep experience in AI: In the last five years, we have published 54 research papers generating AI methodological innovation and building predictive models in several disease areas, including many types of cancer.

AI’s transformative role in discovering new biology

Spatial omics was recognized as Method of the Year 2020 by Nature Methods, and in 2023 it was named one of the top 10 emerging technologies by the World Economic Forum, alongside generative AI.

With these two technologies developing in tandem, the opportunities for AI-driven biological discoveries from spatial omics are numerous. Looking at the fast-evolving landscape of spatial omics AI methods, we see two broad categories of new methods breaking through.

In the first category are AI methods that aim to improve the usability of spatial omics and enable richer downstream analyses for researchers. Such methods are designed specifically to deal with the high dimensionality and low signal-to-noise ratios characteristic of spatial omics data. Some are used to remove technical artifacts and batch effects from the data. Other methods, collectively known as “super-resolution methods,” use AI to increase the resolution of spatial omics assays to near single-cell levels. Another group of approaches looks to integrate dissociated single-cell omics with spatial omics. Collectively, these AI methods are bridging the gap between today’s assays and future spatial omics technologies.
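To make this first category more concrete, the minimal sketch below uses the open-source scanpy library to remove batch effects from a spatial transcriptomics dataset before clustering. It is a generic illustration rather than a description of Owkin’s own methods, and the file name and “batch” annotation are hypothetical placeholders.

    # Minimal sketch: batch-effect correction for spatial transcriptomics
    # using the open-source scanpy library. The file name and the "batch"
    # column are hypothetical placeholders, not part of any Owkin pipeline.
    import scanpy as sc

    # AnnData object holding a spots/cells x genes count matrix plus metadata
    adata = sc.read_h5ad("sections.h5ad")

    # Standard preprocessing: library-size normalization and log transform
    sc.pp.normalize_total(adata, target_sum=1e4)
    sc.pp.log1p(adata)

    # Remove technical batch effects across tissue sections with ComBat
    sc.pp.combat(adata, key="batch")

    # Downstream analysis now runs on batch-corrected expression values
    sc.pp.pca(adata)
    sc.pp.neighbors(adata)
    sc.tl.leiden(adata, key_added="cluster")  # requires the leidenalg package

ComBat is only one of several correction strategies; the same corrected object can then feed super-resolution or single-cell integration methods of the kind described above.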

In the second category, AI methods aim at discovering new biology from spatial omics. By exploiting the localization information in spatial omics data, these methods shed light on how groups of cells organize and communicate at unprecedented resolution, sharpening our understanding of how cells interact to form complex tissues.
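As a generic illustration of this second category (again, not Owkin’s proprietary methods), the sketch below uses the open-source squidpy library to test which annotated cell types sit next to each other more often than expected by chance; the file name and the “cell_type” label are hypothetical placeholders.

    # Minimal sketch: quantifying spatial cell-type organization with squidpy.
    # The file name and the "cell_type" annotation are hypothetical.
    import scanpy as sc
    import squidpy as sq

    # AnnData with coordinates in .obsm["spatial"] and per-cell type labels
    adata = sc.read_h5ad("annotated_section.h5ad")

    # Build a spatial neighborhood graph from the cell coordinates
    sq.gr.spatial_neighbors(adata)

    # Permutation test: which cell-type pairs co-localize more than chance?
    sq.gr.nhood_enrichment(adata, cluster_key="cell_type")

    # Positive z-scores flag cell-type pairs that cluster together in tissue
    sq.pl.nhood_enrichment(adata, cluster_key="cell_type")

Analyses like this neighborhood-enrichment test are a starting point; richer models of cell-to-cell communication build on the same spatial graph.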

At Owkin, we are developing methods to identify new therapeutic targets and patient subpopulations using spatial omics. We have pioneered methods allowing researchers to understand how cancer patient outcomes are linked to tumor heterogeneity, directly from tumor biopsy images. Building on this expertise and the MOSAIC consortium, we are developing the next generation of AI methods, which will link patient-level outcomes with an understanding of disease heterogeneity at the molecular level.

Looking ahead

Spatial biology has the potential to radically change our understanding of biology. It will change how we see a biomarker, going from the mere presence of a particular molecule in a sample to patterns of cells expressing a certain molecule in a tissue. Promising research on spatial biomarkers has been published for several diseases, including Alzheimer’s disease and ovarian cancer. Spatial omics has already been used in research associated with clinical trials to monitor tumor progression in patients.

Five years from now, spatial technologies will be capable of mapping every human protein, RNA, and metabolite at subcellular resolution. The computing infrastructure to store and analyze spatial omics data will be in place, as will the necessary standards for data and metadata and the analytical algorithms. The tumor microenvironment and cellular composition of difficult-to-treat cancers will be mapped through collaborative efforts such as MOSAIC.

Spatial omics datasets from patient biopsies will quickly become an essential part of pharmaceutical R&D, and through the lens of AI methods, they will be used to inform the design of new, more efficacious drugs and to drive faster and better-designed clinical trials to bring those drugs to patients. In the clinic, spatial omics data will routinely be collected from patients, and doctors will use purpose-built AI models to extract clinically relevant information about a patient’s tumor and what drugs it will best respond to.

Today we are witnessing the convergence of three forces: spatial omics technologies becoming increasingly high-throughput and high-resolution, large-scale datasets from patient biopsies being generated, and AI models becoming ever more sophisticated. Together, they will allow researchers to dissect the complex biology of health and disease, enabling ever more sophisticated therapeutic interventions.

Davide Mantiero, PhD, Joseph Lehár, PhD, and Darius Meadon also contributed to this piece.

This content was produced by Owkin. It was not written by MIT Technology Review’s editorial staff.

Capitalizing on machine learning with collaborative, structured enterprise tooling teams

Advances in machine learning (ML) and AI are emerging on a near-daily basis—meaning that industry, academia, government, and society writ large are evolving their understanding of the associated risks and capabilities in real time. As enterprises seek to capitalize on the potential of AI, it’s critical that they develop, maintain, and advance state-of-the-art ML practices and processes that offer both strong governance and the flexibility to adapt as technology requirements, capabilities, and business imperatives change.

That’s why it’s critical to have strong ML operations (MLOps) tooling, practices, and teams—those that build and deploy a set of software development practices that keep ML models running effectively and with agility. Capital One’s core ML engineering teams demonstrate firsthand the benefits that collaborative, well-managed, and adaptable MLOps teams can bring to enterprises in the rapidly evolving AI/ML space. Below are key insights and lessons learned during Capital One’s ongoing technology and AI journey.

Standardized, reusable components are critical

Most MLOps teams have people with extensive software development skills who love to build things. But the continuous build-out of new AI/ML tools must be balanced with efficiency, governance, and risk mitigation.

Many engineers today are experimenting with new generative AI capabilities. It’s exciting to think about the possibilities that something like code generation can unlock for efficiency and standardization, but auto-generated code also requires sophisticated risk management and governance processes before it can be accepted into any production environment. Furthermore, a one-size-fits-all approach to things like generating code won’t work for most companies, which have industry, business, and customer-specific circumstances to account for.

As enterprise platform teams continue to explore the evolution of ML tools and techniques while prioritizing reusable tools and components, they can look to build upon open-source capabilities. One example is Scikit-Learn, a Python library of numerous supervised and unsupervised learning algorithms with a strong user community, which can serve as a foundation to customize for specific, reusable enterprise needs.
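What might such a reusable component look like in practice? The sketch below is a hypothetical illustration, not Capital One’s internal tooling: a single factory function built on scikit-learn’s Pipeline and ColumnTransformer that standardizes preprocessing and modeling so teams can reuse one vetted component instead of rebuilding it. The function and column names are illustrative.

    # Hypothetical sketch of a reusable, standardized ML component built on
    # scikit-learn; names such as build_tabular_classifier are illustrative.
    from sklearn.compose import ColumnTransformer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder, StandardScaler


    def build_tabular_classifier(numeric_cols, categorical_cols):
        """Return a ready-to-train pipeline with standardized preprocessing."""
        preprocessing = ColumnTransformer([
            ("numeric", StandardScaler(), numeric_cols),
            ("categorical", OneHotEncoder(handle_unknown="ignore"), categorical_cols),
        ])
        return Pipeline([
            ("preprocess", preprocessing),
            ("model", LogisticRegression(max_iter=1000)),
        ])


    # Teams reuse the same vetted component rather than re-implementing it:
    # clf = build_tabular_classifier(["balance"], ["segment"]).fit(X_train, y_train)

Centralizing choices such as how unknown categories are handled also makes governance reviews simpler, because every team trains against the same well-understood code path.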

Cross-team communication is vital

Most large enterprises have data scientists and engineers working on projects across different parts of the company. This can make it difficult to know where new technologies and tools are being built, resulting in one-off, arbitrarily unique solutions.

This underscores the importance of creating a collaborative team culture where communication about the big picture, strategic goals, and initiatives is prioritized—including the ability to find out where tools are being built and evolved. What does this look like in practice?

Ensure your team knows which tools and processes it owns and contributes to. Make it clear how its work supports the broader company’s mission. Show teams they are empowered to reuse existing work rather than build everything from scratch. Incentivize reuse and standardization. It takes time and effort to create a culture of “innersourcing” innovation and to build communication mechanisms for clarity and context, but it’s well worth it to ensure long-term value creation, innovation, and efficiency.

Tools must map to business outcomes

Enterprise MLOps teams have a broader role than building tools for data scientists and engineers: they need to ensure those tools both mitigate risk and enable more streamlined, nimble technology capabilities for their business partners. Before setting off on building new AI/ML capabilities, engineers and their partners should ask themselves a few core questions. Does this tool actually help solve a core problem for the business? Will business partners be able to use it? Will it work with existing tools and processes? How quickly can we deliver it, and is there something similar that already exists that we should build upon first?

Having centralized enterprise MLOps and engineering teams ask these questions can free up the business to solve customer problems, and to consider how technology can continue to support the evolution of new solutions and experiences.

Don’t simply hire unicorns, build them

There’s no question that delivering for the needs of business partners in the modern enterprise takes significant amounts of MLOps expertise. It requires both software engineering and ML engineering experience, and—especially as AI/ML capabilities evolve—people with deeply specialized skill sets, such as those with deep graphics processing unit (GPU) expertise.

Instead of hiring a “unicorn” individual, companies should focus on building a unicorn team with the best of both worlds. This means having deep subject matter experts in science, engineering, statistics, product management, DevOps, and other disciplines: complementary skill sets that add up to a more powerful collective. Individuals who can work effectively as a team, show curiosity for learning, and empathize with the problems being solved are just as valuable for those qualities as for their unique domain skills.

Develop a product mindset to produce better tools

Last but not least, it’s important to take a product mindset when building new AI and ML tools for internal customers and business partners. That means not treating what you build as just another task or project to be checked off a list, but understanding the customer you’re building for and taking a holistic approach that works back from their needs.

Often, the products MLOps teams build—whether a new feature library or an explainability tool—look different from what traditional product managers deliver, but the process for creating great products should be the same. Focusing on customer needs and pain points helps everyone deliver better products; it’s a muscle many data science and engineering experts have to build, but one that ultimately helps create better tooling and deliver more value for the customer.

The bottom line is that today, the most effective MLOps strategies are not just about technical capabilities, but also involve intentional and thoughtful culture, collaboration, and communication strategies. In large enterprises, it’s important to be cognizant that no one operates in a vacuum. As hard as it may be to see in the day-to-day, everything within the enterprise is ultimately connected, and the capabilities that AI/ML tooling and engineering teams bring to bear have important implications for the entire organization.

This content was produced by Capital One. It was not written by MIT Technology Review’s editorial staff.

Sustainability starts with the data center

When asked why he targeted banks, notorious criminal Willie Sutton reportedly answered, “Because that’s where the money is.” Similarly, when thoughtful organizations target sustainability, they look to their data centers—because that’s where the carbon emissions are.

The International Energy Agency (IEA) attributes about 1.5% of total global electricity use to data centers and data transmission networks. This figure is much higher, however, in countries with booming data storage sectors: in Ireland, 18% of electricity consumption was attributable to data centers in 2022, and in Denmark, it is projected to reach 15% by 2030. And while there have been encouraging shifts toward green-energy sources and increased deployment of energy-efficient hardware and software, organizations need to accelerate their data center sustainability efforts to meet ambitious net-zero targets.

For data center operators, options for boosting sustainability include shifting energy sources, upgrading physical infrastructure and hardware, improving and automating workflows, and updating the software that manages data center storage. Hitachi Vantara estimates that emissions attributable to data storage infrastructure can be reduced as much as 96% by using a combination of these approaches.

Critics might counter that, though data center decarbonization is a worthy social goal, it also imposes expenses that a company focused on its bottom line can ill afford. This, however, is a shortsighted view.

Data center decarbonization initiatives can provide an impetus that enables organizations to modernize, optimize, and automate their data centers. This leads directly to improved performance of mission-critical applications, as well as a smaller, denser, more efficient data center footprint—which then creates savings via reduced energy costs. And modern data storage and management solutions, beyond supporting sustainability, also create a unified platform for innovation and new business models through advanced data analytics, machine learning, and AI.

Dave Pearson, research vice president at IDC, says, “Decarbonization and the more efficient energy utilization of the data center are supported by the same technologies that support data center modernization. Modernization has sustainability goals, but obviously it provides all kinds of business benefits, including enabling data analytics and better business processes.”

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.