Providing the right products at the right time with machine learning

Whether your favorite condiment is Heinz ketchup or your preferred spread for your bagel is Philadelphia cream cheese, ensuring that all customers have access to their preferred products at the right place, at the right price, and at the right time requires careful supply chain organization and distribution. Amid the proliferation of e-commerce and shifting demand within the consumer packaged goods (CPG) sector, AI and machine learning (ML) have become helpful tools for enabling efficiency and better business outcomes.

The journey toward successfully deployed machine learning operations (MLOps) starts with data, says Jorge Balestra, global head of machine learning operations and platforms at the Kraft Heinz Company. Curating well-organized and accessible data means enterprises can leverage their data volumes to train and develop AI and machine learning models. A strong data strategy lays the foundation for these AI and machine learning tools to use data to detect supply chain disruptions, identify and address cost inefficiencies, and predict demand for products.

“Never forget that data is the fuel, and data, it takes effort, it is a journey, it never ends, because that’s what is really what I would call what differentiates a lot of successful efforts compared to unsuccessful ones,” says Balestra.

This is especially crucial, but also challenging, in the CPG sector, where data is often incomplete given retailers’ inconsistent methods of tracking consumer habits.

He explains, “We don’t know exactly and we don’t even want to know exactly what people are doing in their daily lives. What we want is just to get enough of the data so we can provide the right product for our consumers.”

To deploy AI and machine learning tools at scale, Kraft Heinz has turned to the flexibility of the cloud. Using the cloud allows for much-needed data accessibility while removing constraints on compute power. “The agility of the whole thing increases exponentially because what used to take months, now can be done in a matter of seconds via code. So, definitely, I see how all of this explosion around analytics, around AI, is possible, because of cloud really powering all of these initiatives that are popping up left, right, and center,” says Balestra.

While it may be challenging to predict future trends in a sector so prone to change, Balestra says that preparing for the road ahead means focusing on adaptability and agility.

“Our mission is to delight people via food. And the technology, AI or what have you, is our tool to excel at our mission. Being able to learn how to leverage existing and future [technology] to get the right product at the right price, at the right location is what we are all about.”

This episode of Business Lab is produced in partnership with Infosys Topaz and Infosys Cobalt.

Full Transcript

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

Our topic is machine learning in the food and beverage industry. AI offers opportunities for innovation for customers and operational efficiencies for employees, but having a data strategy in place to capture these benefits is crucial.

Two words for you: global innovation.

My guest is Jorge Balestra, global head of machine learning operations and platforms at Kraft Heinz Company.

This episode of Business Lab is produced in partnership with Infosys Topaz and Infosys Cobalt.

Welcome, Jorge.

Jorge Balestra: Thank you very much. Glad to be here.

Laurel: Well, wonderful to have you. So people are likely familiar with Kraft Heinz since it is one of the world’s largest food and beverage companies. Could you talk to us about your role at Kraft Heinz and how machine learning can help consumers in the grocery aisle?

Jorge: Certainly. My role, I would say, has two major focuses in two areas. One of them is I lead the machine learning engineering and operations of the company globally. And on the other hand, I provide all of the analytical platforms that the company is using, also on a global basis. So in role number one, my machine learning engineering and operations role, what my team does is we grab all of these models that our community of data scientists working globally are coming up with, and we strengthen them. Our major mission here is, first, we need to make sure that we are applying engineering practices to make them production ready so they can scale and run in a cost-effective manner, and from there, with my operations hat on, we ensure that they are there when needed.

So a lot of these models, because they become part of our day-to-day operations, they’re going to come with certain specific service level commitments that we need to make, so my team makes sure that we are delivering on those with the right expectations. And on my other hand, which is the analytical platforms, is that we do a lot of descriptive, predictive, and prescriptive work in terms of analytics. The descriptive portion where you’re talking about just the regular dashboarding, summarization piece around our data and where the data lives, all of those analytical platforms that the company is using are also something that I take care of. And with that, you would think that I have a very broad base of customers in the company both in terms of geographies where they are from some of our businesses in Asia, all the way to North America, but also across the organization from marketing to HR and everything in between.

Going into your other question about how machine learning is helping our consumers in the grocery aisle, I’ll probably summarize that for a CPG it’s all about having the right product at the right price, at the right location for you. What that means is, on the right product, machine learning can help a lot of our marketing teams, for example, now that the latest generative AI capabilities are showing up, with brainstorming and creating new content, all the way to R&D, where we’re trying to figure out the best formulas for our products. ML is definitely making inroads in that space. The right price is all about cost efficiencies throughout, from our plants to our distribution centers, making sure that we are eliminating waste. Leveraging machine learning capabilities is something that we are doing across the board, including our revenue management, which is about the right price for people to buy our products.

And then last but not least is the right location. So we need to make sure that when our consumers are going into their stores or are buying our products online that the product is there for you and you’re going to find the product you like, the flavor you like immediately. And so there is a huge effort around predicting our demand, organizing our supply chain, our distribution, scheduling our plants to make sure that we are producing the right quantities and delivering them to the right places so our consumers can find our products.

Laurel: Well, that certainly makes sense since data does play such a crucial role in deploying advanced technologies, especially machine learning. So how does Kraft Heinz ensure the accessibility, quality and security of all of that data at the right place at the right time to drive effective machine learning operations or MLOps? Are there specific best practices that you’ve discovered?

Jorge: Well, the best practice that I can probably advise people on is definitely data is the fuel of machine learning. So without data, there is no modeling. And data, organizing your data, both the data that you have internally and externally takes time. Making sure that it’s not only accessible and you are organizing it in a way that you don’t have a gazillion technologies to deal with is important, but also I would say the curation of it. That is a long-term commitment. So I strongly advise anyone that is listening right now to understand that your data journey, as it is, is a journey, it doesn’t have an end destination, and also it’s going to take time.

And the more you are successful in terms of getting all the data that you need organized and making sure that is available, the more successful you’re going to be leveraging all of that with models in machine learning and great things that are there to actually then accomplish a specific business outcome. So a good metaphor that I like to say is there’s a lot of researchers, and MIT is known for its research, but the researchers cannot do anything without the librarians, with all the people that’s organizing the knowledge around so you can go and actually do what you need to do, which is in this case research. Never forget that data is the fuel, and data, it takes effort, it is a journey, it never ends, because that’s what is really what I would call what differentiates a lot of successful efforts compared to unsuccessful ones.

Laurel: Getting back to that right place at the right time mentality, within the last few years, the consumer packaged goods, or you mentioned earlier, the CPG sector, has seen such major shifts from changing customer demands to the proliferation of e-commerce channels. So how can AI and machine learning tools help influence business outcomes or improve operational efficiency?

Jorge: I’ve got two examples that I can share. One is, well, obviously we all want to forget about what happened during the pandemic, but for us it was a key, very challenging time, because out of nowhere all of our supply chains got disrupted, and our consumers needed our products more than ever because they were hunkered down at home. So one of the things that I can tell you, at least for us, that was key was that through our modeling, through the data that we had, we had some good early warning of certain disruptions in the supply chain and we were able to at least get… Especially when the outbreak started, a couple of weeks in advance, we were moving product, we were taking early actions in terms of ensuring that we were delivering an increased amount of product that was needed.

And that was because we had the data and we had some of those models that were alerting us about, “Hey, something is wrong here, something is happening with our supply chain, you need to take action.” And taking action at the right time, it’s key in terms of getting ahead of a lot of the things that can happen. And in our case, obviously we live in a competitive world, so taking actions before the competition is important, that timing component. Another example I can give you, and it is something that we’re doing more and more nowadays, is this piece that I was referring to about the right location. Product availability is key for CPG, and that is measured in something that is called the CFR, the customer fill rate, which means that when someone is ordering product from Kraft Heinz, we are able to fulfill that order to 100%, and we expect to be really high, in the high 90s, in terms of how efficiently we are filling those orders.

We have developed new technology that I think we are pretty proud of, because I think it is unique within CPG, that allows us to really predict what is going to happen with CFR in the future based on the specific actions we’re taking today, whether it’s changing our production lines, whether it’s changes in distribution, et cetera. We’re able to see not only the immediate effect, but what’s going to happen in the future with that CFR, so we can really act on it and deliver actions right now that benefit our distribution in the future. So those are, I would say, two examples in terms of how we’re leveraging AI and machine learning tools in our day-to-day operations.
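To make the CFR metric concrete, here is a minimal sketch of the calculation in Python. The order data, field names, and figures are hypothetical and purely illustrative; they are not drawn from Kraft Heinz’s systems.

```python
# Minimal sketch of a customer fill rate (CFR) calculation.
# The order lines below are invented for illustration.

def customer_fill_rate(order_lines):
    """CFR = units shipped / units ordered, across all order lines."""
    ordered = sum(line["qty_ordered"] for line in order_lines)
    shipped = sum(line["qty_shipped"] for line in order_lines)
    return shipped / ordered if ordered else 0.0

order_lines = [
    {"sku": "ketchup-20oz", "qty_ordered": 1200, "qty_shipped": 1180},
    {"sku": "cream-cheese-8oz", "qty_ordered": 800, "qty_shipped": 800},
]

print(f"CFR: {customer_fill_rate(order_lines):.1%}")  # -> CFR: 99.0%
```

A fill rate in the high 90s, as Balestra describes, means the gap between ordered and shipped units stays within a few percent across all order lines.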

Laurel: Are those examples, the CFR as well as the supply chain and making sure consumers had everything on demand almost, is this unique to the food and beverage industry? And what are perhaps some other unique challenges that the food and beverage industry faces when you’re implementing AI and machine learning innovations? And how do you navigate challenges like that?

Jorge: Yeah, I think something that is very unique for us is that we always have to deal with an incomplete picture in terms of the data that we have on our consumers. So if you think about it, when you go into a grocery store, a couple of things, well, you are buying from that store, the Krogers, the Walmarts, et cetera, and some of those will have you identified in terms of what your consumption patterns are, some will not. But also, in our case, if you are going to go buy a Philadelphia [cream cheese], for example, you may choose to buy your Philadelphia in multiple outlets. Sometimes you want more and you go to Costco, sometimes you need less, in my case, I live in the Chicagoland area, I go to a Jewel supermarket.

We always have to deal with incomplete data on our customers, and that is a challenge because what we are trying to figure out is how to better serve our consumers based on what product you like, where you’re buying them, what is the right price point for you, but we’re always dealing with data that is incomplete. So in this case, having a clear data strategy around what we have there and a clear understanding of the markets that we have out there so we can really grab that incomplete data that we have out there and still come up with the right actions in terms of what are the right products to put, just to give you an example, a clear example of it is… And I’m going back to Philadelphia because, by the way, that’s my favorite Kraft product ever…

Laurel: Philadelphia cream cheese, right?

Jorge: Yes, absolutely. It’s followed by a close second with our ketchup. I have a soft spot for Philadelphia, pun intended.

Laurel: – and the ketchup.

Jorge: Exactly. No, but you have different presentations. You have the spreadable, you have the brick of cream cheese, within the brick you have some flavors, and what we want to do is make sure that we are providing the flavors that people really want, not producing the ones that people don’t want, because that’s just waste, without knowing specifically who is buying on the other side and you want to buy it in a supermarket, one or two, or sometimes you are shifting. But those are the things that we are constantly on the lookout for, and obviously dealing with the reality about, hey, data is going to be incomplete. We don’t know exactly and we don’t even want to know exactly what people are doing in their daily lives. What we want is just to get enough of the data so we can provide the right product for our consumers.

Laurel: And an example like cream cheese and ketchup probably, especially if a kid is in the house, it’s one of those products that you use on a fairly daily basis. So knowing all of this, how does Kraft Heinz prepare data for AI projects, because that in itself is a project? So what are the first steps to get ready for AI?

Jorge: One thing that we have been pretty successful on is what I would call the potluck approach for data. Meaning that individual projects, individual groups are focused on delivering a very specific use case, and that is the right thing to do. When you are dealing with a project in supply chain and you’re trying just to, for example, say, “Hey, I want to optimize my CFR,” you are really not going to be caring that much about what sales wants to do. However, if you implement a potluck approach, meaning that, okay, you need data from somebody else, and it’s very likely that you have data to offer because that’s part of your use case. So the potluck approach means that if you want to try out the food of somebody else, you need to bring your own to the table. So if you do that, what starts happening is your data, your enterprise data, becomes little by little more accessible, and if you do it right eventually you pretty much have a lot and almost everything in there.

That is one thing that I will strongly advise people to do. Think big, think strategically, but act tactically, act knowing that individual projects, they’re going to have more limited scope, but if you establish certain practices around sharing around how data should be managed, then each individual projects are going to be contributing to the larger strategy without the largest strategy being a burden for the individual projects, if that makes sense.

Laurel: Sure.

Jorge: So at least for us that has been pretty successful over time. So we have data challenges absolutely as everybody else has, but at least from what I’ve been able to hear from other people, Kraft Heinz is in a good place in terms of that availability. Because once you reach a certain critical mass, what ends up happening is there’s no need to bring additional data, you are always now reusing it because data is large but it’s finite. So it’s not infinite. It’s not something that’s going to grow forever. If you do it right, you should see that eventually, you don’t need to bring in more and more data. You just need to fine-tune and really leverage the data that you have, probably be more granular, and probably get it faster. That’s a good signal. I have the data, but I need it faster because I need to act on it. Great, you’re on the right track. And also your associated cost around data should reflect that. It should not grow to infinity. Data is large but is finite.

Laurel: So speaking of getting data quickly and making use of it, how does Kraft Heinz use compute power and the cloud scaling ability for AI projects? How do you see these two strategies coming together?

Jorge: Definitely the technology has come a long way in the last few years, because what cloud is offering is more of that flexibility, and it’s removing a lot of the limitations, both in terms of the scale and performance we used to have. So to give you an example, a few years back I had to worry about “Do I have enough storage in my servers to host all the data that we are getting in?” And then if I didn’t, how long is it going to take for me to add another server? With the cloud as an enabler, that’s no longer an issue. It’s a few lines of code and you get what you need. Also, especially on the data side, some of the more modern technologies, talking about Snowflake or BigQuery, enable you to separate your compute from your storage. What it basically means in practical terms is you don’t have people fighting over limited compute power.

So data can be the same for everyone and everybody can be accessing the data without having to overlap each other and then fighting about, oh, if you run this, I cannot run that, and then we have all sorts of problems. So definitely what the cloud allowed us to do is get the technology out of the way as a limitation. And the great thing that has happened now with all the AI projects is you can focus on actually delivering on the use cases that you have without having to have limitations around “how am I going to scale?” That is no longer the case. You have to worry about costs, because it could cost you an arm and a leg, but not necessarily about how to scale and how long it’s going to take you to scale.

The agility of the whole thing increases exponentially because what used to take months, now can be done in a matter of seconds via code. So definitely I see how all of this explosion around analytics, around AI is possible, because of cloud really powering all of these initiatives that are popping up left, right, and center.
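As an illustration of what “a matter of seconds via code” can look like in practice, here is a minimal sketch using the BigQuery Python client, one of the technologies Balestra mentions. It assumes the google-cloud-bigquery package and credentials are already configured; the project and dataset names are hypothetical.

```python
# Hypothetical sketch: provisioning analytics storage with a few lines of
# code instead of procuring and racking a new server.
from google.cloud import bigquery

client = bigquery.Client(project="example-analytics-project")

# Create a dataset on demand. Because storage and compute are separated,
# analysts querying this data are not competing for a fixed server.
dataset = bigquery.Dataset("example-analytics-project.supply_chain")
dataset.location = "US"
client.create_dataset(dataset, exists_ok=True)
```

The point is not the specific service but the workflow: capacity that once required a purchase order and a data center visit becomes a short, repeatable script.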

Laurel: And speaking about this, you can’t really go it alone, so how do partners like Infosys help bring in those new skills and industry know-how to help build the overall digital strategy for data, AI, cloud, and whatever comes next?

Jorge: Much in the same way that I think cloud has been an enabler in terms of this, I think companies and partners like Infosys are also that kind of enablers, because, in a way, they are part of what I would call an expertise ecosystem. I don’t think any company nowadays can do any of this on its own. You need partners. You need partners that both are bringing in new ideas, new technology, but also they are bringing in the right level of expertise in terms of people that you need, and in a global sense, at least for us, having someone that has a global footprint is important because we are a global company. So I will say that it’s the same thing that we talked about earlier about cloud being an enabler: that expert ecosystem represented by companies like Infosys is just another key enabler without which you will really struggle to deliver. So that’s what I’ll probably say to anyone that is listening right now, make sure that your ecosystem, your expert ecosystem is good and is thriving and you have the right partners for the right job.

Laurel: When you think about the future and also all these tough problems that you’re tackling at Kraft Heinz, how important will something like synthetic data be to your data strategy and business strategy as well? What is synthetic data? And then what are some of those challenges associated with using it to fill in the gaps for real-world data?

Jorge: In our case, we don’t use a lot of synthetic data nowadays, because the areas where we have holes to fill in terms of data are something that we’ve been dealing with for a while. So we are, let’s put it this way, already familiar with how to produce and fill in the gaps using some of the synthetic data techniques, but not really to the same extent as other organizations are. So we are still looking for opportunities where that is the case, in terms of where we need to use and leverage synthetic data, but it’s not something that, at least for Kraft Heinz and CPG, we use extensively in multiple places as other organizations do.

Laurel: And so, lastly, when you think ahead to the future, what will the digital operating model for an AI-first firm that’s focused on data look like? What do you see for the future?

Jorge: What I see for the future is, well, first of all, uncertainty, meaning that I don’t think we can predict exactly what’s going to happen, because the area in particular is growing and evolving at a speed that I think is honestly dazzling. I think at least what I would say is the real muscle that we need to be exercising and be ready for is adaptability. Meaning that we can learn, we can react, and apply all of the new things that are coming in, hopefully at the same speed that they’re occurring, and really leverage new opportunities when they present themselves in an agile way. But in terms of how to prepare for it, I think it’s more about preparing the organization, your team, to be ready for that, to really act on it, and to be ready also to understand the specific business challenges that are there, and to look for opportunities where any of the new things, or maybe existing ones, can be applied to solve a specific problem.

We are a CPG company, and that means the right product, right price, right location, so everything boils down to how can I be better in those three dimensions leveraging whatever is available today, whatever’s going to be available tomorrow. But keep focusing on, at least for us, we are a CPG company, we manufacture Philadelphia [cream cheese], we manufacture ketchup, we feed people. Our mission is to delight people via food. And the technology, AI or what have you, is our tool to excel at our mission. Being able to learn how to leverage existing and future [technology] to get the right product at the right price at the right location is what we are all about.

Laurel: That’s fantastic. Thank you so much, Jorge. I appreciate you being with us today on the Business Lab.

Jorge: Thank you very much. Thank you for inviting me.

Laurel: That was Jorge Balestra, global head of machine learning operations and platforms at Kraft Heinz Company, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review.

That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can also find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

 This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Giro Studios. Thanks for listening.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Unlocking the power of sustainability

According to UN climate experts, 2023 was the warmest year on record. This puts the heat squarely on companies to accelerate their sustainability efforts. “It’s quite clear that the sense of urgency is increasing,” says Jonas Bohlin, chief product officer for environmental, social, and governance (ESG) platform provider Position Green.

That pressure is coming from all directions. New regulations, such as the Corporate Sustainability Reporting Directive (CSRD) in the EU, require that companies publicly report on their sustainability efforts. Investors want to channel their money into green opportunities. Customers want to do business with environmentally responsible companies. And organizations’ reputations for sustainability are playing a bigger role in attracting and retaining employees.

On top of all these external pressures, there is also a significant business case for sustainability efforts. When companies conduct climate risk audits, for example, they are confronted with escalating threats to business continuity from extreme weather events such as floods, wildfires, and hurricanes, which are occurring with increasing frequency and severity.

Mitigating the risks associated with direct damage to facilities and assets, supply chain disruptions, and service outages very quickly becomes a high-priority issue of business resiliency and competitive advantage. A related concern is the impact of climate change on the availability of natural resources, such as water in drought-prone regions like the American Southwest.

Much more than carbon

“The biggest misconception that people have is that sustainability is about carbon emissions,” says Pablo Orvananos, global sustainability consulting lead at Hitachi Digital Services. “That’s what we call carbon tunnel vision. Sustainability is much more than carbon. It’s a plethora of environmental issues and social issues, and companies need to focus on all of it.”

Companies looking to act will find a great deal of complexity surrounding corporate sustainability efforts. Companies are responsible not only for their own direct emissions and fossil fuel usage (Scope 1), but also for the emissions of the energy they purchase (Scope 2) and of their supply chain partners (Scope 3). New regulations require organizations to look beyond just emissions. Companies must ask questions about a broad range of environmental and societal issues: Are supply chain partners sourcing raw materials in an environmentally conscious manner? Are they treating workers fairly?

Sustainability can’t be siloed into one specific task, such as decarbonizing the data center. The only way to achieve sustainability is with a comprehensive, holistic approach, says Daniel Versace, an ESG research analyst at IDC. “A siloed approach to ESG is an approach that’s bound to fail,” he adds.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Building innovation with blockchain

In 2015, JPMorgan Chase embarked on a journey to build a more secure and open approach to wholesale banking. For Suresh Shetty, chief technology officer at Onyx by J.P.Morgan, investing in blockchain, a distributed ledger technology then in its early days, was about ubiquity.

“We actually weighted ubiquity in terms of who can use the technology, who was trying to use the technology over technology superiority,” says Shetty. “Because eventually, our feeling was that the network effect, the community effect of ubiquity, actually overcomes any technology challenges that a person or a firm might have.”

Years later, JPMorgan Chase has Onyx, a blockchain-based platform for leveraging innovations at scale and solving real-world banking challenges. Chief among them are global wholesale payment transactions. Far more complicated than simply moving money from point A to point B, Shetty says, wholesale transactions involve multiple hops and regulatory obligations at each step.

Transferring money around the world requires several steps, including a credit check, sanctions check, and account validation. The process can lead to errors and hiccups. This is where blockchain comes in.

“Now, as you can imagine, because of the friction in this process and the multiple hops, it is a process that’s very prone to error. So this is one of the ideal use cases for a blockchain, where we try to take out that operational friction from any process.”

Although blockchain has the potential to cause major waves in financial services from securing transactions to ensuring smooth operations, sustainability remains a major consideration with any technology deployed at this scale. The shift from proof-of-work to proof-of-stake systems, says Shetty, reduces emissions and computing energy.

“The amount of energy that’s being used in a proof-of-stake system goes down to 1% of the overall carbon impact of a proof-of-work system, so that shift alone was very important from a carbon emission perspective.”

This episode of Business Lab is produced in association with JPMorgan Chase.

Full Transcript 

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

Our topic today is blockchain. Technology has changed how money moves around the world, but the opportunity and value from distributed ledger technology is still in its early days. However, deploying on a large scale openly and securely should move it along quickly.

Two words for you: building innovation.

My guest is Suresh Shetty, who is the chief technology officer at Onyx by J.P.Morgan at JPMorgan Chase.

This podcast is produced in association with JPMorgan Chase.

Welcome, Suresh.

Suresh Shetty: Thank you so much, Laurel. Looking forward to the conversation.

Laurel: So to set the context of this conversation, JPMorgan Chase began investing in blockchain in 2015, which as we all know, in technology years is forever ago. Could you describe the current capabilities of blockchain and how it’s evolved over time at JPMorgan Chase?

Suresh: Absolutely. So when we began this journey, as you mentioned, in 2015, 2016, as any strategy and exploration of new technologies, we had to choose a path. And one of the interesting things is that when you’re looking at strategic views of five, 10 years into the future, inevitably, there needs to be some course correction. So what we did in JPMorgan Chase was we looked at a number of different lines of inquiry, and in each of these lines of inquiries, our focus was trying to be as inclusive as possible. So what we mean by that is that we actually weighted ubiquity in terms of who can use the technology, who was trying to use the technology over technology superiority. Because eventually, our feeling was that the network effect, the community effect of ubiquity, actually overcomes any technology challenges that a person or a firm might have.

Now, I think that a very relevant example is the Betamax-VHS example. It’s a bit dated but I think it really is important in this type of use case. So as many of you know, Betamax was a superior technology at the time and VHS was much more ubiquitous in the marketplace. And over time, what happened was that people gravitated, firms gravitated towards that ubiquity over the superiority of the technology that was in Betamax. And similarly, that was our feeling too in terms of blockchain in general and specifically the path that we took, which was in and around the Ethereum ecosystem. We felt that the Ethereum ecosystem had the largest developer community, and we thought over time, that was where we needed to focus in on.

So I think that that was our journey to date in terms of looking, and we continue to make those decisions in terms of collaboration, inclusiveness, as opposed to just purely looking at technology itself.

Laurel: And let’s really focus on those efforts. In 2020, the firm debuted Onyx by J.P.Morgan, which is a blockchain-based platform for wholesale payment transactions. Could you explain what wholesale payment transactions are and why they’re the basis of Onyx’s mission?

Suresh: Absolutely. Now, it was interesting. My background is that I came from the markets world and markets is really involved in front office trading, investment banking and so forth, and eventually, went over to the payments world. And if you juxtapose the two, it’s actually very interesting because initially, people feel that the market space is much more complicated, much more exciting than payments, and they feel that payments is a relatively straightforward exercise. You’re moving money from point A to point B.

What actually happens is actually, payments is much more complicated, especially from a transactional perspective. So what I mean by that is that if you look at markets, what happens is if you do a transaction, it flows through. If there’s an error, what you do is that you correct the initial transaction, cancel it, and put in a new transaction. So all you do is that there’s a series of cancel corrects, all of which are linked together by the previous transaction, so there’s a daisy chain of transactions which are relatively straightforward and easy to migrate upon.

But if you look at the payments world, what happens is that you have a transaction, it flows through. If there’s an error, you hold the transaction, you correct it, and then keep going. Now, if you think about it from a technology perspective, this is a lot more complicated because what you have to do is you have to keep in mind the state engine of the transactional flow, and you have to store it somewhere, and then you have to constantly make sure that as it flows to the next unit of work, it actually is not only referenced but it actually has the data and transactionality from the previous unit of work. So a lot more complicated.
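A rough sketch of the contrast Shetty describes, using hypothetical transaction objects rather than anything from JPMorgan Chase’s systems: a markets-style correction cancels and books a new trade that references the old one, while a payments-style correction holds the same transaction, fixes it, and carries its accumulated state to the next unit of work.

```python
# Hypothetical sketch of the two correction models described above.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MarketsTrade:
    trade_id: str
    replaces: Optional[str] = None  # cancel/corrects form a daisy chain of trades

def correct_markets_trade(bad_trade: MarketsTrade, new_id: str) -> MarketsTrade:
    # Markets style: cancel the old trade and book a new one that links back to it.
    return MarketsTrade(trade_id=new_id, replaces=bad_trade.trade_id)

@dataclass
class Payment:
    payment_id: str
    state: str = "IN_FLIGHT"                      # the state engine that must be stored
    history: List[str] = field(default_factory=list)

def hold_and_correct(payment: Payment, reason: str) -> Payment:
    # Payments style: the same transaction is held, corrected, and resumed,
    # carrying its state and history to the next unit of work.
    payment.history += [f"HELD: {reason}", "CORRECTED"]
    payment.state = "IN_FLIGHT"
    return payment
```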

Now, from a business perspective, what cross-border payments or wholesale payments involve is that, as I mentioned, you’re moving money from point A to point B. In an ideal fashion, and I’ll give you an example. Since I’m in India, in an ideal example, we would move money from JPMorgan Chase to State Bank of India, and the transaction is complete, and everybody is happy. And in between that transaction, we do things like a credit check to make sure that, for the money that is being sent, there’s money in the account of the sender. We need to make sure that the receiver has a valid bank account, so you need to do that validation, so there’s a credit check. Then on top of that, you do a sanctions check. A sanctions check means that we are evaluating whether the money is being moved to a bad actor, and if it is, we stop the transaction and we inform the relevant parties. So it looks relatively straightforward in an idealized version.

Unfortunately, what happens is because of the fractured nature of banking across the world as well as regulatory obligations, what happens is that it’s never a single point to point movement. It involves multiple hops. So in that same example where I’m moving money from JPMorgan Chase to India, what usually happens is JPMorgan Chase sends it to, let’s say Standard Chartered in England. Standard Chartered then sends it to State Bank of India. State Bank of India then sends it to Bank of Baroda, and then Bank of Baroda eventually sends it to my bank, which is Vijaya Bank in India.

In each of those steps or hops, a credit check happens, a sanctions check happens, and the money moves forward. Also, there’s an account validation step that also happens to make sure that the payment transactional flow is correct, as well as the detail in the payment messages are correct as well. Now, as you can imagine, because of the friction in this process and the multiple hops, it is a process that’s very prone to error. So this is one of the ideal use cases for a blockchain, where we try to take out that operational friction from any process.
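The hop-by-hop flow can be sketched as a simple pipeline. The bank names come from Shetty’s example, but the check functions below are hypothetical placeholders for what each institution actually runs; the point is that every hop repeats the same checks.

```python
# Hypothetical sketch of the multi-hop cross-border flow described above.
# Real credit, sanctions, and account-validation checks are far more involved.

HOPS = [
    "JPMorgan Chase",
    "Standard Chartered (UK)",
    "State Bank of India",
    "Bank of Baroda",
    "Vijaya Bank",
]

def credit_check(payment):
    return payment["amount"] <= payment["sender_balance"]

def sanctions_check(payment):
    return payment["receiver"] not in {"sanctioned-entity"}

def account_validation(payment):
    return payment["receiver_account_valid"]

def settle(payment):
    for hop in HOPS:
        # Each hop repeats the same checks, which is where friction and
        # errors accumulate; a shared ledger aims to remove that repetition.
        for check in (credit_check, sanctions_check, account_validation):
            if not check(payment):
                return f"Held at {hop}: {check.__name__} failed"
    return "Settled"

payment = {
    "amount": 10_000,
    "sender_balance": 50_000,
    "receiver": "example-beneficiary",
    "receiver_account_valid": True,
}
print(settle(payment))  # -> Settled
```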

Laurel: That’s a really good illustrative example since one of the benefits of being a global firm is that JPMorgan Chase can operate at this massive scale. So what are some of the benefits and challenges that blockchain technology presents to the firm? I think you kind of alluded to some there.

Suresh: Absolutely, and it’s interesting, people sometimes conflate the technology innovation in the blockchain with a moonshot. Now, what’s interesting is that blockchain itself is based on very sound computing principles that have been around for a very long time, which are actually based on distributed computing. So at the heart of blockchain, it is a distributed computing system or platform. Now, all the challenges that you would have in a distributed computing platform, you would have within blockchain. Now this is further heightened by the fact that you have multiple nodes on the network. Each of those nodes has a copy of the data as well as the business logic. So one of the real challenges that we have is around latency in the network. So the number of nodes is directly correlated to the amount of latency that you have in the network, and that’s something that in a financial transaction, we have to be very cognizant about.

Secondarily is that there is an enormous amount of existing assets that are already in place from a code perspective within the enterprise. So the question is do we need to rewrite the entire code base in the languages that are supported by the various blockchains? So in Ethereum, do we need to rewrite all of this in Solidity, or can we somehow leverage the language or the code base that’s already been created? Now in our experience, we’ve had to actually do quite an extensive analysis on what needs to be on chain as opposed to what needs to be off chain. The off chain code base is something that we need to be able to leverage as we go forward because the business feels comfortable about that, and the question is why would we need to rewrite that? And the stuff that’s on the chain itself, that needs to be something that we really feel is important to be able to be distributed to the various nodes in the network itself. So I think that that’s some of the challenges that we’ve had in the blockchain space.

Now, in terms of benefits, I think that at the end of the day, we want to be able to have a cryptographically secure, auditable transactional record. And I think that there are many use cases within banking, especially those that are really within the sweet spot of the blockchain, such as those that require a lot of reconciliation. There are multiple actors, and in a distributed platform, regardless of whether it’s in blockchain or not.

Laurel: And cybersecurity is definitely one of those areas where blockchain can help, for example, transactions, improve transparency, et cetera. But how can organizations ensure safe and robust blockchain transactions and networks?

Suresh: Fantastic question. It’s interesting that what JPMorgan Chase runs is a private permissioned network. Now what does that mean? That means that every actor within our blockchain network is actually known to us. Now, it’s also interesting that hand in hand with that security aspect are the operational considerations of actually running a network. So we would need to be able to not only ensure security across the network, but we need to also ensure that we have transactional flow that meets the service level agreements between the various actors. Now, in a centralized private permissioned network, which is what Onyx is, JPMorgan Chase has taken the onus in terms of running the network itself.

Now, people want to be able to say that they want to run their own nodes and they want to be able to ensure their own security, which is great if it’s unique and singular to themselves, but when you’re participating in a network, the weakest link in the chain actually becomes your greatest challenge. So all of the actors or all the nodes that are participating in our network would have to meet the same security and operational considerations that everyone else has. So when we pose that question to the participants in our network and say, “Listen, you have an opportunity to run your own node or you can have us do it for you,” most of them, 95% of them, want us to run their nodes for them. So I think that that’s one of the big challenges or one of the big aspects of a private permissioned network as opposed to a public network.

Now, we’re increasingly seeing that there needs to be some integration across private permissioned networks and public networks. And again, when we have to integrate between these, we again run into classical problems or classical challenges, I should say, with the interconnected distributed platforms. But I think that this goes directly to the level of scale that we need to be at, especially within JPMorgan Chase, in order to be successful.

Laurel: So there’s also the challenge of keeping up with emerging technologies. What are some of the emerging or advanced technologies that have enabled blockchain innovations? And this is clearly important to Onyx since it created a blockchain launch to focus on developing and commercializing those new applications and networks.

Suresh: Absolutely. So within Onyx, we have three main lines of businesses and then we have Blockchain Launch, which looks at the hill beyond the hill in terms of evaluating new technologies. And we’ve done everything from looking at zero-knowledge proofs to trying to beam payments through a satellite back down to Earth and all of those types of things to create business value for our clients.

So I would say that the two most exciting things, or there’s a third one which I think there’s a topic that we’ll broach later, but the two most exciting topics that we’ve talked about so far and we’re very excited about is around zero-knowledge proofs as well as artificial intelligence and machine learning. If you think about the network that we have right now within JPMorgan Chase for Onyx, the various participants within the network will eventually start to create enough data through the network effect that it might be very interesting to see what other data enrichment, data mining type use cases can come out of that, and we are only going to see an uptick in that as you start to expand the network and we start to get more scale as we add more use cases onto the Onyx network.

Laurel: And so while we’re on that topic of emerging technologies, how does quantum computing and blockchain, how do those two technologies work together?

Suresh: So the quantum computing piece and the blockchain piece are very collaborative and very symbiotic in nature. Now, if you think about the idea of utilizing quantum mechanics, it’s been around since the mid-1970s, when it was first proposed that there was an algorithm by which very large numbers could be factored using a theoretical quantum computer. It was pretty much in the background, and then suddenly in October 2019, Google announced that it had achieved quantum supremacy by solving a problem in 200 seconds that would’ve taken thousands of years to solve.

And although that specific use case was sort of not specific to a business use case, the impact of that is very far-reaching because all of a sudden, it demonstrated that you could use quantum computing to actually create a mechanism that would impact pretty much every cryptographically secure transactional flow.

So as we are looking through this, some of the things that we looked at in quantum computing was around looking at the quantum key distribution, looking at cryptographically secure vaulting, distributed identity. All of these we believe are key to the future of blockchain and actually impact even things as mundane as the topic that we spoke about before, which is around the cross-border transactional flow as well.

Laurel: So while blockchain certainly seems to have the potential to shift the financial services industry, the need to focus on sustainability goals and follow regulations are also a major consideration. So how can innovations in blockchain be balanced with mitigating its emissions and environmental impact?

Suresh: This is a question that we’ve been asked many times by our businesses in terms of how environmentally conscious we are. I would say that one of the big advances recently, especially within the Ethereum space, was the shift from proof of work to proof of stake.

Now in proof-of-work systems, miners compete with one another to see who can problem-solve the fastest in exchange for crypto rewards, and because of this, proof-of-work systems take up a large amount of energy. Juxtaposed with this are proof-of-stake systems, which rely on market incentives and validators; in exchange for the right to add blocks, they remove the competition from the system. Now, because of this, the amount of energy that’s being used in a proof-of-stake system goes down to 1% of the overall carbon impact of a proof-of-work system, so that shift alone was very important from a carbon emission perspective.

Secondarily, within the Onyx system itself, we’ve shifted to a situation where we have set our gas fees to zero and the only compute is minimal, just computing the business logic itself. And also, we’re using the BFT class of algorithms as well as Raft. Neither is compute intensive, aside from the business logic itself.

Laurel: Thank you, Suresh. You’ve certainly given us a lot to think about. So looking to the future, what are some trends in technology that you’re excited about in the next three to five years?

Suresh: So I think that we mentioned some of the topics before around quantum computing, artificial intelligence, machine learning. All of those I think are very important to us. Now, I would also say that the three to five year time horizon is probably too long. I think that when we speak in investment banking, we speak about an 18- to 24-month time horizon. We think that that’s probably a similar time horizon that we’re seeing in the blockchain space itself. So as we evolve, I think that the really interesting aspect of this is going to be where social networks and business networks overlap and how they organically evolve to support each other as we go forward, and how the payment space itself evolves in order to take advantage of this.

Laurel: Excellent. Thank you so much for joining us today on the Business Lab, Suresh.

Suresh: Fantastic. Thank you so much, Laurel.

Laurel: That was Suresh Shetty, who is the chief technology officer at Onyx by J.P.Morgan, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review.

That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the global director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Giro Studios. Thanks for listening.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This podcast is for informational purposes only and it is not intended as legal, tax, financial, investment, accounting or regulatory advice. Opinions expressed herein are the personal views of the individual(s) and do not represent the views of JPMorgan Chase & Co. The accuracy of any statements, linked resources, reported findings or quotations are not the responsibility of JPMorgan Chase & Co.

Actionable insights enable smarter business buying

For decades, procurement was seen as a back-office function focused on cost-cutting and supplier management. But that view is changing as supply chain disruptions and fluctuating consumer behavior ripple across the economy. Savvy leaders now understand procurement’s potential to deliver unprecedented levels of efficiency, insights, and strategic capability across the business.

However, tapping into procurement’s potential for generating value requires mastering the diverse needs of today’s global and hybrid businesses, navigating an increasingly complex supplier ecosystem, and wrangling the vast volumes of data generated by a rapidly digitalizing supply chain. Advanced procurement tools and technologies can support all three.

Purchasing the products and services a company needs to support its daily operations aggregates thousands of individual decisions, from a remote worker selecting a computer keyboard to a materials expert contracting with suppliers. Keeping the business running requires procurement processes and policies set by a chief procurement officer (CPO) and team who “align their decisions with company goals, react to changes with speed, and are agile enough to ensure a company has the right products at the right time,” says Rajiv Bhatnagar, director of product and technology at Amazon Business.

At the same time, he says, the digitalization of the supply chain has created “a jungle of data,” challenging procurement to “glean insights, identify trends, and detect anomalies” with record speed. The good news is advanced analytics tools can tackle these obstacles, and establish a data-driven, streamlined approach to procurement. Aggregating the copious data produced by enterprise procurement—and empowering procurement teams to recognize and act on patterns in that data—enables speed, agility, and smarter decision-making.

Today’s executives increasingly look to data and analytics to enable better decision-making in a challenging and fast-changing business climate. Procurement teams are no exception. In fact, 65% of procurement professionals report having an initiative aimed at improving data and analytics, according to The Hackett Group’s 2023 CPO Agenda report.

And for good reason—analytics can significantly enhance supply chain visibility, improve buying behavior, strengthen supply chain partnerships, and drive productivity and sustainability. Here’s how.

Gaining full visibility into purchasing activity

Just getting the full view of a large organization’s procurement is a challenge. “People involved in the procurement process at different levels with different goals need insight into the entire process,” says Bhatnagar. But that’s not easy given the layers upon layers of data being managed by procurement teams, from individual invoice details to fluctuating supplier pricing. Complicating matters further is the fact that this data exists both within and outside of the procurement organization.

Fortunately, analytics tools deliver greater visibility into procurement by consolidating data from myriad sources. This allows procurement teams to mine the most comprehensive set of procurement information for “opportunities for optimization,” says Bhatnagar. For instance, procurement teams with a clear view of their organization’s data may discover an opportunity to reduce complexity by consolidating suppliers or shifting from making repeated small orders to more cost-efficient bulk purchasing.

Identifying patterns—and responding quickly

When carefully integrated and analyzed over time, procurement data can reveal meaningful patterns—indications of evolving buying behaviors and emerging trends. These patterns can help to identify categories of products with higher-than-normal spending, missed targets for meeting supplier commitments, or a pattern of delays for an essential business supply. The result, says Bhatnagar, is information that can improve budget management by allowing procurement professionals to “control rogue spend” and modify a company’s buying behavior.

In addition to highlighting unwieldy spending, procurement data can provide a glimpse into the future. These days, the world moves at a rapid clip, requiring organizations to react quickly to changing business circumstances. Yet only 25% of firms say they are able to identify and predict supply disruptions in a timely manner “to a large extent,” according to Deloitte’s 2023 Global CPO survey.

“Machine learning-based analytics can look for patterns much faster,” says Bhatnagar. “Once you have detected a pattern, you can take action.” By detecting patterns in procurement data that could indicate supply chain interruptions, looming price increases, or new cost drivers, procurement teams can proactively account for market changes. For example, a team might enable automatic reordering of an essential product that is likely to be impacted by a supply chain bottleneck.
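As a simple illustration of this kind of pattern detection, and not a description of Amazon Business’s actual models, the sketch below flags a category whose latest monthly spend sits well above its own history. The data and the two-standard-deviation threshold are invented.

```python
# Hypothetical sketch: flag categories with higher-than-normal spend.
import pandas as pd

spend = pd.DataFrame({
    "category": ["IT", "IT", "IT", "Office", "Office", "Office"],
    "month":    ["Jan", "Feb", "Mar", "Jan", "Feb", "Mar"],
    "amount":   [10_000, 11_000, 19_500, 4_000, 4_200, 4_100],
})

history = spend[spend["month"] != "Mar"]
stats = history.groupby("category")["amount"].agg(["mean", "std"])
latest = spend[spend["month"] == "Mar"].set_index("category")["amount"]

# A category is flagged when its latest spend exceeds its historical
# mean by more than two standard deviations.
anomalies = latest[latest > stats["mean"] + 2 * stats["std"]]
print(anomalies)  # here, only the IT category is flagged
```

In practice, the flagged categories would feed an action, such as a budget review or the kind of automatic reordering described above.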

Sharing across the partner ecosystem

Data analysis allows procurement teams to “see some of the challenges and react to them in real-time,” says Bhatnagar. But in an era of interconnectedness, no one organization acts alone. Instead, today’s supplier ecosystems are deeply interconnected networks of supply-chain partners with complex interdependencies.

For this reason, sharing data-driven insights with suppliers helps organizations better pinpoint causes for delays or inaccurate orders and work collaboratively to overcome obstacles. Such “discipline and control” over data, says Bhatnagar, not only creates a single source of truth for all supply-chain partners, but helps eliminate finger-pointing while also empowering procurement teams to negotiate mutually beneficial terms with suppliers.

Improving employee productivity and satisfaction

Searching for savings opportunities, negotiating with suppliers, and responding to supply-chain disruptions—these time-consuming activities can negatively impact a procurement team’s productivity. However, by relying on analytics to discover and share meaningful patterns in data, procurement teams can shift focus from low-value tasks to business-critical decision-making.

Shifting procurement teams to higher-impact work results in a better overall employee experience. “Using analytics, employees feel more productive and know that they’re bringing more value to their job,” says Bhatnagar.

Another upside of heightening employee morale is improved talent retention. After all, workers with a sense of value and purpose are likelier to stay with an employer. This is a huge benefit in a time when nearly half (46%) of CPOs cite the loss of critical talent as a high or moderate risk, according to Deloitte’s 2023 Global CPO survey.

Meeting compliance metrics and organizational goals

Procurement analytics can also deliver on a broader commitment to changing how products and services are purchased.

According to a McKinsey Global Survey on environmental, social, and governance (ESG) issues, more than nine in ten organizations say ESG is on their agenda. Yet 40% of CPOs in the Deloitte survey report their procurement organizations need to define or measure their own set of relevant ESG factors.

Procurement tools can bridge this gap by allowing procurement teams to search for vendor or product certifications and generate credentials reports to help them shape their organization’s purchases toward financial, policy, or ESG goals. They can develop flexible yet robust spending approval workflows, designate restricted and out-of-policy purchases, and encourage the selection of sustainable products or preference for local or minority-owned suppliers.

“A credentials report can really allow organizations to improve their visibility into sustainability [initiatives] when they’re looking for seller credentials or compliant credentials,” says Bhatnagar. “They can track all of their spending from diverse sellers or small sellers—whatever their goals are for the organization.”
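A minimal sketch of such a rule-based approval workflow appears below; the categories, spend threshold, and credential labels are hypothetical, and a real version would live inside the procurement platform rather than in standalone code.

```python
# Hypothetical sketch of a spending-approval workflow with restricted
# categories, an approval threshold, and preferred seller credentials.
RESTRICTED_CATEGORIES = {"gift cards"}
APPROVAL_THRESHOLD = 1_000  # orders above this amount route to a manager
PREFERRED_CREDENTIALS = {"small business", "minority-owned", "sustainability-certified"}

def route_purchase(order):
    if order["category"] in RESTRICTED_CATEGORIES:
        return "blocked: out-of-policy category"
    if order["amount"] > APPROVAL_THRESHOLD:
        return "routed: manager approval required"
    if PREFERRED_CREDENTIALS & set(order.get("seller_credentials", [])):
        return "approved: preferred seller"
    return "approved"

print(route_purchase({
    "category": "office supplies",
    "amount": 250,
    "seller_credentials": ["small business"],
}))  # -> approved: preferred seller
```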

Delivering the procurement of tomorrow

Advanced analytics can free procurement teams to glean meaningful insights from their data—information that can drive tangible business results, including a more robust supplier ecosystem, improved employee productivity, and a greener planet.

As supply chains become increasingly complex and the ecosystem increasingly digital, data-driven procurement will become critical. In the face of growing economic instability, talent shortages, and technological disruption, advanced analytics capabilities will enable the next generation of procurement.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Learn how Amazon Business is leveraging AI/ML to offer procurement professionals more efficient processes, a greater understanding of smart business buying habits and, ultimately, reduced prices.

Start with data to build a better supply chain

In business, the acceleration of change means enterprises have to live in the future, not the present. Having the tools and technologies to enable forward-thinking and underpin digital transformation is key to survival. Supply chain procurement leaders are tasked with improving operational efficiencies and keeping an eye on the bottom line. For Raimundo Martinez, global digital solutions manager of procurement and supply chain at bp, the journey toward building a better supply chain starts with data.

“So, today, everybody talks about AI, ML, and all these tools,” says Martinez. “But to be honest with you, I think your journey really starts a little bit earlier. I think when we go out and think about this advanced technology, which obviously, have their place, I think in the beginning, what you really need to focus is in your foundational [layer], and that is your data.”

In that vein, all of bp’s data has been migrated to the cloud and its multiple procurement departments have been consolidated into a single global procurement organization. Having a centralized, single data source can reduce complexities and avoid data discrepancies. The biggest challenge to changes like data centralization and procurement reorganization is not technical, Martinez says, but human. Bringing another tool or new process into the fold can cause some to push back. Making sure that employees understand the value of these changes and the solutions they can offer is imperative for business leaders.

Honesty toward both employees and end users requires visibility into an enterprise’s logistics, inventory, and processes, and that visibility can be a costly investment. For a digital transformation journey of bp’s scale, however, an investment in supply chain visibility is an investment in customer trust and business reputability.

“They feel part of it. They’re more willing to give you feedback. They’re also willing to give you a little bit more leeway. If you say that the tool is going to be, or some feature is going to be delayed a month, for example, but you don’t give the reasons and they don’t have that transparency and visibility into what is driving that delay, people just lose faith in your tool,” says Martinez.

Looking to the future, Martinez stresses the importance of a strong data foundation as a precursor to taking advantage of emerging technologies like AI and machine learning that can work to create a semi-autonomous supply chain.

“Moving a supply chain from a transactional item to a much more strategic item with the leverage of this technology, I think, that, to me, is the ultimate vision for the supply chain,” says Martinez.

This episode of Business Lab is produced in partnership with Infosys Cobalt.

Full Transcript

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma. And this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

Our topic is building a better supply chain. AI can bring efficiencies to many aspects of an enterprise, including the supply chain. And where better to start than internal procurement processes? With better data, better decisions can be made more quickly, both internally and by customers and partners. And that is better for everyone.

Two words for you: automating transformation.

My guest is Raimundo Martinez, who is the global digital solutions manager of procurement and supply chain at bp.

This episode of Business Lab is produced in partnership with Infosys Cobalt.

Welcome, Raimundo.

Raimundo Martinez: Hi, Laurel. Thanks for having me today.

Laurel: So, let’s start with providing some context to our conversation. bp has been on a digital transformation journey. What spurred it, and how is it going?

Raimundo: I think there are many factors spurring digital transformation. But if I look at all of this, I think probably the key one is the rate of change in the world today compared to the past. Instead of slowing down, the rate of change is accelerating, and that makes quick access to data a matter of business survivability: you almost have to live not in today, but in the future, and have tools and technologies that allow you to see what is coming up, what routes of action you can take, and then to enact those mitigation plans faster.

And I think that’s where digital transformation is the key enabler. I would say that’s on the business side. I think the other one is the people mindset change, and that ties into how things are going. I think things are going pretty well. Technology-wise, I’ve seen a large number of tools and technologies adopted. But probably the most important thing is this mindset in the workforce and the adoption of agile. The rate of change that we just talked about in the first part can probably only be achieved in time when the whole workforce has this agile mindset to react to it.

Laurel: Well, supply chain procurement leaders are under pressure to improve operational efficiencies while keeping a careful eye on the bottom line. What is bp’s procurement control tower, and how has it helped with bp’s digital transformation?

Raimundo: Yeah, sure. In a nutshell, think about the old world as a myriad of systems of record where you have your data, and users having to go to all of those. So, our control tower, what it does, is consolidate all the data in a single platform. And what we have done is not just present the data, but truly configure the data in the form of alerts. The idea is to tell our user, “This is what’s important. These are the three things that you really need to take care of now.” And not stopping there, but then saying, “Look, in order to take that action, we’re giving you summary information so you don’t have to go to any other system to actually understand what is driving that alert.” Then, on top of that, we’re integrating that platform with the systems of record so that requests can be completed in seconds instead of weeks.

So, that, in a nutshell, is the control tower platform. And the way it has helped… Again, we talk about tools and people. On the tool side, being able to demonstrate how this automation is done and the value of it, and enabling other units to actually recycle the work that you have done, accelerates and inspires other technical resources to take advantage of it. And then on the user side, one of the effects ties back, again, to this idea of the agility mindset: everything that we’ve done in the tool development is agile. So, bringing the users into that journey has actually helped us to also accelerate that aspect of our digital transformation.

Laurel: On that topic of workplace agility. In 2020, bp began a reorganization that consolidated its procurement departments into that single global procurement organization. What were the challenges that resulted from this reorganization?

Raimundo: Yeah. To give you more context on that: if you think about bp being this really large global organization divided into business units, before the reorganization every one of these business units had their own procurement department, which handled literally billions of dollars; that’s how big they were. And in that business, they had their ERP systems, their contract repository, their processes and process deviations. But you only managed the portfolio for that. Once you integrate all of those organizations into a single one, your responsibility now spans multiple business units, and your data sits in all of these business systems.

So, if you want to create a report, it’s really complicated because not only do you have to go to these different systems, but the taxonomy of the data is different. As an example, some businesses will call their territory North America; another one will call it east and west coast. So, if you want a report for a new business owner, it becomes really, really hard, and the reports might not be as complete as they should be. That really calls for some tools that we need to put in place to support it. And on top of that, the volume of requests is now so much greater that just changing and adding steps to a process isn’t going to be enough. You really need to look into automation to satisfy this higher demand.

Laurel: Well, speaking of automation, it can leverage existing technology and build efficiencies. So, what is the role of advanced technologies, like AI, machine learning and advanced analytics in the approach to your ongoing transformation?

Raimundo: So, today, everybody talks about AI, ML, and all these tools. But to be honest with you, I think your journey really starts a little bit earlier. When we go out and think about these advanced technologies, which obviously have their place, I think in the beginning what you really need to focus on is your foundational [layer], and that is your data. You asked about the role of the cloud. For bp, all of the data used to reside in multiple different sites out there, and what we have done is migrate all of that data to the cloud. What the cloud also allows is to do transformations in place that help us really homogenize what I just described before (North America, South America), then you can create another column and say, okay, now call it whatever, United States, or however you want to label it.

So, all of this data transformation happens in a single spot. And what that does is also allow our users that need this data to go to a single source of truth and not be pulling data from multiple systems. An example of the chaos that creates: somebody will be pulling invoice and spend data, somebody will pull in pay data. So, then you already have data discrepancies in the reporting. And having a centralized tool where everybody goes for the data reduces so much complexity in the system.
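To make that concrete, here is a minimal sketch, in Python with pandas, of the kind of in-place homogenization Martinez describes; the territory labels, mapping, and spend figures are invented for illustration.

```python
# Illustrative sketch: map each business unit's regional labels onto one shared
# taxonomy so reports roll up consistently from a single source of truth.
import pandas as pd

REGION_MAP = {
    "North America": "North America",
    "East Coast": "North America",
    "West Coast": "North America",
}

spend = pd.DataFrame({
    "business_unit": ["BU1", "BU2", "BU2"],
    "territory": ["North America", "East Coast", "West Coast"],
    "spend_usd": [1_200_000, 400_000, 350_000],
})
spend["region_std"] = spend["territory"].map(REGION_MAP)

# One report can now aggregate spend across business units without re-mapping.
print(spend.groupby("region_std")["spend_usd"].sum())
```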

Laurel: And speaking about that kind of complexity, it’s clear that multiple procurement systems made it difficult to maintain quality compliance as well as production tracking in bp’s supply chain. So, what are some of the most challenging aspects of realizing this new vision with a centralized one-stop platform?

Raimundo: Yeah, we have a good list there. So, let me break it into maybe technical and people, because I think people are something that we should talk about. On the technical side, I think one of the biggest things is working with your technical team to find the right architecture: how your vision fits into the architecture, which will create less, let’s say, confusion and complexity in your architecture. The other side of the technical challenge is finding the right tools. I’ll give you an example from our project. Initially, I thought, okay, RPA [robotic process automation] will be the technology to do this. So, we ran an RPA pilot. And obviously, RPA has incredible applications out there. But at this point, RPA really wasn’t the tool for us, given the changes that could happen on the screens of the system that we’re using. So, then we decided, instead of going with RPA, to go with an API.

So, that’s an example of the challenge of finding exactly the right tool. But to be honest with you, I think the biggest challenge is not technical, but human. Like I mentioned before, people are immersed in the sea of change that is going on, and here you come with yet another tool. So, even if the tool you’re giving them might be a lot more efficient, people still want to cling to what they know. If they say, “Look, I have to spend another two hours extracting data, putting it in Excel, collating, and running a report…” some people would rather do that than go to a platform where all of that is done for them. So, I think change management is key in these transformations, to make sure that you’re able to sell, or make people understand, what the value of the tool is and overcome that challenge, which is humans’ normal aversion to change. And especially when you’re immersed in this really, really big sea of change that was already going on as a result of the reorganization.

Laurel: Yeah. People are hard, and tech can be easy. So, just to clarify, RPA is the robotic process automation in this context, correct?

Raimundo: Yeah, absolutely. Yeah. Sorry about the pretty layered… Yeah.

Laurel: No, no. There’s lots of acronyms going around.

So, inversely, we’re just discussing the challenges, what are the positive outcomes from making this transformation? And could you give us an example or a use case of how that updated platform boosted efficiency across existing processes?

Raimundo: Absolutely. Just a quick generic point first. The generic thing is that you find yourself a lot in this cycle with the data. The users look at the applications and say the data’s not correct, and they lose the appetite for using them. The problem is they own the data, but the process to change the data is so cumbersome that people don’t really want to take ownership of it, because they say, “Look, I have 20 things to do. The last on my list is updating that data.”

So, we’re in this cycle of trying to put tools out for the user, the data is not correct, but we’re not the ones who own the data. So, the specific example of how we broke that cycle is using automation. So, to give you an example, before we create automation, if you needed to change any contract data, you have to find what the contract is, then you have to go to a tool like Salesforce and create a case. That case goes to our global business support team, and then they have to read the case, open the system of record, make the change. And that could take between days or weeks. Meantime, the user is like, “Well, I requested this thing, and it hasn’t even happened.”

So, what we did is leverage internal technology. We already had a large investment in Microsoft, as you can imagine. And we said, look, “From Power BI, you can look at your contract, you can click on the record you want to change. A Power App comes up and asks you what you want to do.” Say I want to change the contract owner, for example. It opens a window and says, “Who’s the new person you want to put in?” And as soon as you submit it, literally within less than a second, the API goes to the system of record, changes the owner, and creates an email that notifies everybody who is a stakeholder in that contract, which then increases visibility into changes across the organization.

And at the same time, it leaves you an audit trail. So, if somebody wants to challenge that, you know exactly what happened. That has been an incredible outcome: reducing cycle time from days and weeks to merely seconds while, at the same time, increasing communication and visibility into the data. That has proved to be one of the greatest achievements that we have had.
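As an illustration of the flow Martinez describes, a hedged sketch follows: one call updates the system of record, stakeholders are notified, and an audit entry is written. The endpoint path, payload fields, and helper functions are hypothetical placeholders, not bp’s actual integration.

```python
# Hypothetical sketch only; endpoint, payload fields, and helpers are placeholders.
import datetime
import requests

audit_log: list[dict] = []

def notify(recipients: list[str], message: str) -> None:
    # Stand-in for an email or Teams notification service.
    for recipient in recipients:
        print(f"notify {recipient}: {message}")

def change_contract_owner(base_url: str, token: str, contract_id: str, new_owner: str) -> None:
    # One call updates the system of record instead of raising a manual case.
    response = requests.patch(
        f"{base_url}/contracts/{contract_id}",
        json={"owner": new_owner},
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    response.raise_for_status()

    # Notify every stakeholder on the contract and keep an audit trail.
    stakeholders = response.json().get("stakeholders", [])
    notify(stakeholders, f"Contract {contract_id} owner changed to {new_owner}")
    audit_log.append({
        "contract_id": contract_id,
        "new_owner": new_owner,
        "changed_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
```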

Laurel: Well, I think you’ve really outlined this challenge. So, investing in supply chain visibility can be costly, but often bolsters trust and reputability among customers. What’s the role of transparency and visibility in a digital transformation journey of this size?

Raimundo: I keep talking about agile, and I think that’s one of the tenets. And to transparency and visibility, I would actually add honesty. I think it’s very, very easy to learn from success. Everybody wants to tout the great things that they have done, but people may be a little bit less inclined to speak out about their mistakes. I’ll just give you the example of our situation with RPA. We don’t feel bad about it. We feel that the more we share that knowledge with the technical teams, the more value it has, because then people will learn from it and obviously not commit the same mistake.

But I think what honesty also does for this visibility is that when you bring your users into the development team, they have that visibility. They feel part of it. They’re more willing to give you feedback. And they’re also willing to give you a little bit more leeway. If you say that the tool, or some feature, is going to be delayed a month, for example, but you don’t give the reasons and they don’t have that transparency and visibility into what is driving that delay, people just lose faith in your tool.

Whereas I think the more open and the more visible you are, but also, again, honest, the more well received your product is, and everybody feels part of the tool. In every training, at the end, I just say, “By the way, this is not my tool. This is your tool. And the more engaged you are with us, the better the outcome you’re going to have.” And that’s just achieved through transparency and visibility.

Laurel: So, for other large organizations looking to create a centralized platform to improve supply chain visibility, what are some of the key best practices that you’ve found that leadership can adopt to achieve the smoothest transition?

Raimundo: So, I probably think about three things. One thing leadership really, really needs to do is understand the project. And when I say understand the project, I mean really understanding the technical complexity and the human aspect of it, because I think that’s where your leadership has a big role to play. They’re able to influence their teams on this project that you’re trying to… And then they really need to understand what the risks associated with this project are, and also that this could be a very lengthy journey. Hopefully, obviously, there’ll be results and milestones along the way, but they need to feel comfortable with this agile mentality that we’re going to do features, fail, adapt, and they really need to be part of that journey.

The second, I think, most important thing is having the right team. And in that, I think I’ve been super fortunate. We have a great partnership with Infosys. I’ve got one of the engineers named Sai. What the Infosys team and my technical team say is, “Look, do not shortchange yourself on the ideas that you bring from the business side.” A lot of times, we might think about something as impossible. They really encourage me to come up with almost crazy ideas. Just come with everything that you can think about. And they’re really, really incredible at delivering all the resources to bring a solution to that. We almost end up using each other’s phrases. So, having a team that is really passionate about change, about being honest, about working together is the key to delivery. And finally, the data foundation. I think that we get so stuck looking at the shiny tools out there that seem like science fiction, and they’ll be great, and we forget that the outcome of those technologies is only as good as the data supporting them.

And data, a lot of times, seems like the, I don’t know, I don’t want to call it the ugly sister, the ugly person in the room. But it’s really people… They’re like, “Oh, I don’t want to deal with that. I just want to do AI.” Well, your AI is not going to give you what you want if it doesn’t understand where you’re at. So, the data foundation is key. Having the perfect team and technology partners, understanding the project length and the risks, and being really engaged will be, for me, the key items there.

Laurel: That’s certainly helpful. So, looking ahead, what technologies or trends do you foresee will enable greater efficiencies across supply chain and operations?

Raimundo: Not to sound like a broken record, but I really think that technologies that look at our data and help us clean the data, foresee what issues we’re going to have with the data, and figure out how we can really have a data set that is really, really powerful, that is easy, and that reflects exactly our situation are the key to the next step, which is all of these amazing technologies. If I think about our vision for the platform, it is to create a semi-autonomous supply chain. And the vision is: imagine having, again, first, the right data, and now what you have is AI/ML and all these models that look at that internal data and compare it with external factors.

And what it does is instead of presenting us alerts, we’ll go to the next level, and it, basically, presents scenarios. And say, “Look, based on the data that I see on the market, what you have had in your history, these are the three things that can happen, these are the plans that the tool recommends, and this is how you interact or affect that change.” So, moving a supply chain from a transactional item to a much more strategic item with the leverage of this technology, I think, that, to me, is the ultimate vision for supply chain.

Laurel: Well, Raimundo, thank you so much for joining us today on the Business Lab. This has been very enlightening.

Raimundo: Thank you. It’s been a pleasure. And I wish everybody a great journey out there. It’s definitely a very exciting moment right now.

Laurel: Thank you.

That was Raimundo Martinez, global digital solutions manager of procurement and supply chain at bp, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review.

That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Giro Studios. Thanks for listening.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Outperforming competitors as a data-driven organization

In 2006, British mathematician Clive Humby said, “data is the new oil.” While the phrase is almost a cliché, the advent of generative AI is breathing new life into this idea. A global study on the Future of Enterprise Data & AI by WNS Triange and Corinium Intelligence shows 76% of C-suite leaders and decision-makers are planning or implementing generative AI projects. 

Harnessing the potential of data through AI is essential in today’s business environment. A McKinsey report says data-driven organizations demonstrate EBITDA increases of up to 25%. AI-driven data strategy can boost growth and realize untapped potential by increasing alignment with business objectives, breaking down silos, prioritizing data governance, democratizing data, and incorporating domain expertise.

“Companies need to have the necessary data foundations, data ecosystems, and data culture to embrace an AI-driven operating model,” says Akhilesh Ayer, executive vice president and global head of AI, analytics, data, and research practice at WNS Triange, a unit of business process management company WNS Global Services.

A unified data ecosystem

Embracing an AI-driven operating model requires companies to make data the foundation of their business. Business leaders need to ensure “every decision-making process is data-driven, so that individual judgment-based decisions are minimized,” says Ayer. This makes real-time data collection essential. “For example, if I’m doing fraud analytics for a bank, I need real-time data of a transaction,” explains Ayer. “Therefore, the technology team will have to enable real-time data collection for that to happen.” 

Real-time data is just one element of a unified data ecosystem. Ayer says an all-round approach is necessary. Companies need clear direction from senior management; well-defined control of data assets; cultural and behavioral changes; and the ability to identify the right business use cases and assess the impact they’ll create. 

Aligning business goals with data initiatives  

An AI-driven data strategy will only boost competitiveness if it underpins primary business goals. Ayer says companies must determine their business goals before deciding what to do with data. 

One way to start, Ayer explains, is a data-and-AI maturity audit or a planning exercise to determine whether an enterprise needs a data product roadmap. This can determine if a business needs to “re-architect the way data is organized or implement a data modernization initiative,” he says. 

The demand for personalization, convenience, and ease in the customer experience is a central and differentiating factor. How businesses use customer data is particularly important for maintaining a competitive advantage, and can fundamentally transform business operations. 

Ayer cites WNS Triange’s work with a retail client as an example of how evolving customer expectations drive businesses to make better use of data. The retailer wanted greater value from multiple data assets to improve customer experience. In a data triangulation exercise while modernizing the company’s data with cloud and AI, WNS Triange created a unified data store with personalization models to increase return on investment and reduce marketing spend. “Greater internal alignment of data is just one way companies can directly benefit and offer an improved customer experience,” says Ayer. 

Weeding out silos 

Regardless of an organization’s data ambitions, few manage to thrive without clear and effective communication. Modern data practices have process flows or application programming interfaces that enable reliable, consistent communication between departments to ensure secure and seamless data-sharing, says Ayer. 

This is essential to breaking down silos and maintaining buy-in. “When companies encourage business units to adopt better data practices through greater collaboration with other departments and data ecosystems, every decision-making process becomes automatically data-driven,” explains Ayer.  

WNS Triange helped a well-established insurer remove departmental silos and establish better communication channels. Silos were entrenched. The company had multiple business lines in different locations and legacy data ecosystems. WNS Triange brought them together and secured buy-in for a common data ecosystem. “The silos are gone and there’s the ability to cross leverage,” says Ayer. “As a group, they decide what prioritization they should take; which data program they need to pick first; and which businesses should be automated and modernized.”

Data ownership beyond IT

Removing silos is not always straightforward. In many organizations, data sits in different departments. To improve decision-making, Ayer says, businesses can unite underlying data from various departments and broaden data ownership. One way to do this is to integrate the underlying data and treat this data as a product. 

While IT can lay out the system architecture and design, primary data ownership shifts to business users. They understand what data is needed and how to use it, says Ayer. “This means you give the ownership and power of insight-generation to the users,” he says. 

This data democratization enables employees to adopt data processes and workflows that cultivate a healthy data culture. Ayer says companies are investing in training in this area. “We’ve even helped a few companies design the necessary training programs that they need to invest in,” he says. 

Tools for data decentralization

Data mesh and data fabric, powered by AI, empower businesses to decentralize data ownership, nurture the data-as-a-product concept, and create a more agile business. 

For organizations adopting a data fabric model, it’s crucial to include a data ingestion framework to manage new data sources. “Dynamic data integration must be enabled because it’s new data with a new set of variables,” says Ayer. “How it integrates with an existing data lake or warehouse is something that companies should consider.” 
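A minimal sketch of such an ingestion gate follows, assuming a simple contract of expected columns and types; the source name and fields are invented for illustration.

```python
# Sketch of a minimal ingestion gate: new sources declare their variables, and
# batches are validated before being allowed into the shared data lake.
from dataclasses import dataclass

@dataclass
class SourceContract:
    name: str
    schema: dict[str, type]  # column name -> expected Python type

def validate_batch(contract: SourceContract, rows: list[dict]) -> list[dict]:
    accepted = []
    for row in rows:
        if set(row) == set(contract.schema) and all(
            isinstance(row[col], expected) for col, expected in contract.schema.items()
        ):
            accepted.append(row)
    return accepted

bookings = SourceContract("partner_bookings", {"booking_id": str, "amount": float})
clean = validate_batch(bookings, [{"booking_id": "B1", "amount": 99.0},
                                  {"booking_id": "B2", "amount": "bad"}])
print(len(clean))  # -> 1; only schema-conformant rows are integrated
```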

Ayer cites WNS Triange’s collaboration with a travel client as an example of improving data control. The client had various business lines in different countries, meaning controlling data centrally was difficult and ineffective. WNS Triange deployed a data mesh and data fabric ecosystem that allowed for federated governance controls. This boosted data integration and automation, enabling the organization to become more data-centric and efficient. 

A governance structure for all

“Governance controls can be federated, which means that while central IT designs the overall governance protocols, you hand over some of the governance controls to different business units, such as data-sharing, security, and privacy, making data deployment more seamless and effective,” says Ayer. 

AI-powered data workflow automation can add precision and improve downstream analytics. For example, Ayer says, in screening insurance claims for fraud, when an insurer’s data ecosystem and workflows are fully automated, instantaneous AI-driven fraud assessments are possible. 

“The ability to process a fresh claim, bring it into a central data ecosystem, match the policyholder’s information with the claim’s data, and make sure that the claim-related information passes through a model to give a recommendation, and then push back that recommendation into the company’s workflow is the phenomenal experience of improving downstream analytics,” Ayer says. 
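In code, a stripped-down version of that claim flow might look like the sketch below; the feature names and the simple scoring rule stand in for a trained fraud model and are purely illustrative.

```python
# Hedged sketch of an automated claim flow: join a fresh claim with the
# policyholder record, score it, and push a recommendation back to the workflow.
def score_claim(claim: dict, policy: dict) -> dict:
    features = {
        "amount_ratio": claim["amount"] / max(policy["annual_premium"], 1.0),
        "days_since_policy_start": claim["days_since_policy_start"],
    }
    # Placeholder rule; in practice this would be model.predict_proba(features).
    risk = 0.9 if features["amount_ratio"] > 20 and features["days_since_policy_start"] < 30 else 0.1
    recommendation = "route to investigator" if risk > 0.5 else "auto-approve"
    return {"claim_id": claim["id"], "risk": risk, "recommendation": recommendation}

claim = {"id": "C-1001", "amount": 50_000.0, "days_since_policy_start": 12}
policy = {"policyholder_id": "P-77", "annual_premium": 1_800.0}
print(score_claim(claim, policy))
```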

Data-driven organizations of the future

A well-crafted data strategy aligned with clear business objectives can seamlessly integrate AI tools and technologies into organizational infrastructure. This helps ensure competitive advantage in the digital age. 

To benefit from any data strategy, organizations must continuously overcome barriers such as legacy data platforms, slow adoption, and cultural resistance. “It’s extremely critical that employees embrace it for the betterment of themselves, customers, and other stakeholders,” Ayer points out. “Organizations can stay data-driven by aligning data strategy with business goals, ensuring stakeholders’ buy-in and employees’ empowerment for smoother adoption, and using the right technologies and frameworks.” 

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Deploying high-performance, energy-efficient AI

Although AI is by no means a new technology, there have been massive and rapid investments in it and in large language models. However, the high-performance computing that powers these rapidly growing AI tools — and enables record automation and operational efficiency — also consumes a staggering amount of energy. With the proliferation of AI comes the responsibility to deploy it responsibly, with an eye to sustainability during hardware and software R&D as well as within data centers.

“Enterprises need to be very aware of the energy consumption of their digital technologies, how big it is, and how their decisions are affecting it,” says corporate vice president and general manager of data center platform engineering and architecture at Intel, Zane Ball.

One of the key drivers of more sustainable AI is modularity, says Ball. Modularity breaks down subsystems of a server into standard building blocks, defining interfaces between those blocks so they can work together. This approach can reduce the embodied carbon in a server’s hardware components and allows components of the overall ecosystem to be reused, subsequently reducing R&D investment.

Downsizing infrastructure across data centers, hardware, and software can also help enterprises reach greater energy efficiency without compromising function or performance. While very large AI models require megawatts of supercomputing power, smaller, fine-tuned models that operate within a specific knowledge domain can maintain high performance with far lower energy consumption.

“You give up that kind of amazing general purpose use like when you’re using ChatGPT-4 and you can ask it everything from 17th century Italian poetry to quantum mechanics, if you narrow your range, these smaller models can give you equivalent or better kind of capability, but at a tiny fraction of the energy consumption,” says Ball.
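As a rough illustration of what domain fine-tuning of a small open model can involve, the sketch below uses a generic Hugging Face-style workflow; the model name, data file, and hyperparameters are placeholders rather than Intel’s published recipe, and parameter-efficient methods such as LoRA are often added in practice to cut compute further.

```python
# Minimal, illustrative fine-tuning sketch; all names and settings are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder; requires license acceptance
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Assumes a JSON Lines file of {"text": ...} records drawn from the domain corpus.
data = load_dataset("json", data_files="domain_corpus.jsonl")
tokenized = data.map(lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
                     batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out", num_train_epochs=1,
                           per_device_train_batch_size=1, gradient_accumulation_steps=8),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```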

The opportunities for greater energy efficiency in AI deployment will only expand over the next three to five years. Ball forecasts significant strides in hardware optimization, the rise of AI factories — facilities that train AI models at large scale while modulating energy consumption based on the availability of electricity — as well as the continued growth of liquid cooling, driven by the need to cool the next generation of powerful AI hardware.

“I think making those solutions available to our customers is starting to open people’s eyes how energy efficient you can be while not really giving up a whole lot in terms of the AI use case that you’re looking for.”

This episode of Business Lab is produced in partnership with Intel.

Full Transcript

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

Our topic is building a better AI architecture. Going green isn’t for the faint of heart, but it’s also a pressing need for many, if not all enterprises. AI provides many opportunities for enterprises to make better decisions, so how can it also help them be greener?

Two words for you: sustainable AI.

My guest is Zane Ball, corporate vice president and general manager of data center platform engineering and architecture at Intel.

This podcast is produced in partnership with Intel.

Welcome, Zane.

Zane Ball: Good morning.

Laurel: So to set the stage for our conversation, let’s start off with the big topic. As AI transforms businesses across industries, it brings the benefits of automation and operational efficiency, but that high-performance computing also consumes more energy. Could you give an overview of the current state of AI infrastructure and sustainability at the large enterprise level?

Zane: Absolutely. I think it helps to just kind of really zoom out big picture, and if you look at the history of IT services maybe in the last 15 years or so, obviously computing has been expanding at a very fast pace. And the good news about that history of the last 15 years or so, is while computing has been expanding fast, we’ve been able to contain the growth in energy consumption overall. There was a great study a couple of years ago in Science Magazine that talked about how compute had grown by maybe 550% over a decade, but that we had just increased electricity consumption by a few percent. So those kind of efficiency gains were really profound. So I think the way to kind of think about it is computing’s been expanding rapidly, and that of course creates all kinds of benefits in society, many of which reduce carbon emissions elsewhere.

But we’ve been able to do that without growing electricity consumption all that much. And that’s kind of been possible because of things like Moore’s Law: silicon has been improving every couple of years, making devices smaller, so they consume less power and things get more efficient. That’s part of the story. Another big part of the story is the advent of these hyperscale data centers. So really, really large-scale computing facilities, finding all kinds of economies of scale and efficiencies, high utilization of hardware, not a lot of idle hardware sitting around. That also was a very meaningful energy efficiency gain. And then finally this development of virtualization, which allowed even more efficient utilization of hardware. So those three things together allowed us to kind of accomplish something really remarkable. And during that time, we also had AI starting to play a role; I think since about 2015, AI workloads started to play a pretty significant role in digital services of all kinds.

But then just about a year ago, ChatGPT happens and we have a non-linear shift in the environment, and suddenly large language models, probably not news to anyone listening to this podcast, have pivoted to the center, and there’s just breakneck investment across the industry to build very, very fast. And what is also driving that is that not only is everyone rushing to take advantage of this amazing large language model kind of technology, but that technology itself is evolving very quickly. And, in fact, as is also quite well known, these models are growing in size at a rate of about 10x per year. So the amount of compute required is really sort of staggering. And when you think of all the digital services in the world now being infused with AI use cases with very large models, and then those models themselves growing 10x per year, we’re looking at something that’s not very similar to that last decade, where our efficiency gains and our greater consumption were almost penciling out.

Now we’re looking at something I think that’s not going to pencil out. And we’re really facing a really significant growth in energy consumption in these digital services. And I think that’s concerning. And I think that means that we’ve got to take some strong actions across the industry to get on top of this. And I think just the very availability of electricity at this scale is going to be a key driver. But of course many companies have net-zero goals. And I think as we pivot into some of these AI use cases, we’ve got work to do to square all of that together.

Laurel: Yeah, as you mentioned, the challenges are trying to develop sustainable AI and making data centers more energy efficient. So could you describe what modularity is and how a modularity ecosystem can power a more sustainable AI?

Zane: Yes, I think over the last three or four years, there’ve been a number of initiatives. Intel’s played a big part of this as well of re-imagining how servers are engineered into modular components. And really modularity for servers is just exactly as it sounds. We break different subsystems of the server down into some standard building blocks, define some interfaces between those standard building blocks so that they can work together. And that has a number of advantages. Number one, from a sustainability point of view, it lowers the embodied carbon of those hardware components. Some of these hardware components are quite complex and very energy intensive to manufacture. So imagine a 30 layer circuit board, for example, is a pretty carbon intensive piece of hardware. I don’t want the entire system, if only a small part of it needs that kind of complexity. I can just pay the price of the complexity where I need it.

And by being intelligent about how we break up the design in different pieces, we bring that embodied carbon footprint down. The reuse of pieces also becomes possible. So when we upgrade a system, maybe to a new telemetry approach or a new security technology, there’s just a small circuit board that has to be replaced versus replacing the whole system. Or maybe a new microprocessor comes out and the processor module can be replaced without investing in new power supplies, new chassis, new everything. And so that circularity and reuse becomes a significant opportunity. And so that embodied carbon aspect, which is about 10% of carbon footprint in these data centers can be significantly improved. And another benefit of the modularity, aside from the sustainability, is it just brings R&D investment down. So if I’m going to develop a hundred different kinds of servers, if I can build those servers based on the very same building blocks just configured differently, I’m going to have to invest less money, less time. And that is a real driver of the move towards modularity as well.

Laurel: So what are some of those techniques and technologies like liquid cooling and ultrahigh dense compute that large enterprises can use to compute more efficiently? And what are their effects on water consumption, energy use, and overall performance as you were outlining earlier as well?

Zane: Yeah, those are two, I think, very important opportunities. And let’s just take them one at a time. In the emerging AI world, I think liquid cooling is probably one of the most important low-hanging-fruit opportunities. So in an air-cooled data center, a tremendous amount of energy goes into fans and chillers and evaporative cooling systems, and that is actually a significant part of total consumption. So if you move a data center to a fully liquid-cooled solution, this is an opportunity of around 30% of energy consumption, which is sort of a wow number. I think people are often surprised just how much energy is burned. And if you walk into a data center, you almost need ear protection because it’s so loud, and the hotter the components get, the higher the fan speeds get, and the more energy is being burned on the cooling side; liquid cooling takes a lot of that off the table.

What offsets that is that liquid cooling is a bit complex. Not everyone is fully able to utilize it. There are more upfront costs, but it actually saves money in the long run. So the total cost of ownership with liquid cooling is very favorable, and as we’re engineering new data centers from the ground up, liquid cooling is a really exciting opportunity. I think the faster we can move to liquid cooling, the more energy we can save. But it’s a complicated world out there. There are a lot of different situations, a lot of different infrastructures to design around. So we shouldn’t trivialize how hard that is for an individual enterprise. One of the other benefits of liquid cooling is we get out of the business of evaporating water for cooling. A lot of North American data centers are in arid regions and use large quantities of water for evaporative cooling.

That is good from an energy consumption point of view, but the water consumption can be really extraordinary. I’ve seen numbers getting close to a trillion gallons of water per year in North America data centers alone. And then in humid climates like in Southeast Asia or eastern China for example, that evaporative cooling capability is not as effective and so much more energy is burned. And so if you really want to get to really aggressive energy efficiency numbers, you just can’t do it with evaporative cooling in those humid climates. And so those geographies are kind of the tip of the spear for moving into liquid cooling.

The other opportunity you mentioned was density and bringing higher and higher density of computing has been the trend for decades. That is effectively what Moore’s Law has been pushing us forward. And I think it’s just important to realize that’s not done yet. As much as we think about racks of GPUs and accelerators, we can still significantly improve energy consumption with higher and higher density traditional servers that allows us to pack what might’ve been a whole row of racks into a single rack of computing in the future. And those are substantial savings. And at Intel, we’ve announced we have an upcoming processor that has 288 CPU cores and 288 cores in a single package enables us to build racks with as many as 11,000 CPU cores. So the energy savings there is substantial, not just because those chips are very, very efficient, but because the amount of networking equipment and ancillary things around those systems is a lot less because you’re using those resources more efficiently with those very high dense components. So continuing, if perhaps even accelerating our path to this ultra-high dense kind of computing is going to help us get to the energy savings we need maybe to accommodate some of those larger models that are coming.

Laurel: Yeah, that definitely makes sense. And this is a good segue into this other part of it, which is how data centers and hardware as well software can collaborate to create greater energy efficient technology without compromising function. So how can enterprises invest in more energy efficient hardware such as hardware-aware software, and as you were mentioning earlier, large language models or LLMs with smaller downsized infrastructure but still reap the benefits of AI?

Zane: I think there are a lot of opportunities, and maybe the most exciting one that I see right now is that even as we’re pretty wowed and blown away by what these really large models are able to do, even though they require tens of megawatts of super compute power to do, you can actually get a lot of those benefits with far smaller models as long as you’re content to operate them within some specific knowledge domain. So we’ve often referred to these as expert models. So take for example an open source model like the Llama 2 that Meta produced. So there’s like a 7 billion parameter version of that model. There’s also, I think, a 13 and 70 billion parameter versions of that model compared to a GPT-4, maybe something like a trillion element model. So it’s far, far, far smaller, but when you fine tune that model with data to a specific use case, so if you’re an enterprise, you’re probably working on something fairly narrow and specific that you’re trying to do.

Maybe it’s a customer service application or it’s a financial services application, and you as an enterprise have a lot of data from your operations, that’s data that you own and you have the right to use to train the model. And so even though that’s a much smaller model, when you train it on that domain specific data, the domain specific results can be quite good in some cases even better than the large model. So you give up that kind of amazing general purpose use like when you’re using ChatGPT-4 and you can ask it everything from 17th century Italian poetry to quantum mechanics, if you narrow your range, these smaller models can give you equivalent or better kind of capability, but at a tiny fraction of the energy consumption.

And we’ve demonstrated a few times, even with just a standard Intel Xeon two socket server with some of the AI acceleration technologies we have in those systems, you can actually deliver quite a good experience. And that’s without even any GPUs involved in the system. So that’s just good old-fashioned servers and I think that’s pretty exciting.

That also means the technology’s quite accessible, right? So you may be an enterprise, you have a general-purpose infrastructure that you use for a lot of things, and you can use that for AI use cases as well, if you’ve taken advantage of these smaller models that fit within infrastructure you already have or infrastructure that you can easily obtain. And so those smaller models are pretty exciting opportunities. And I think that’s probably one of the first things the industry will adopt to get energy consumption under control: just right-sizing the model to the activity, to the use case that we’re targeting. I think there’s also… you mentioned the concept of hardware-aware software. I think that the collaboration between hardware and software has always been an opportunity for significant efficiency gains.

I mentioned early on in this conversation how virtualization was one of the pillars that gave us that kind of fantastic result over the last 15 years. And that was very much exactly that. That’s bringing some deep collaboration between the operating system and the hardware to do something remarkable. And a lot of the acceleration that exists in AI today actually is a similar kind of thinking, but that’s not really the end of the hardware software collaboration. We can deliver quite stunning results in encryption and in memory utilization in a lot of areas. And I think that that’s got to be an area where the industry is ready to invest. It is very easy to have plug and play hardware where everyone programs at a super high level language, nobody thinks about the impact of their software application downstream. I think that’s going to have to change. We’re going to have to really understand how our application designs are impacting energy consumption going forward. And it isn’t purely a hardware problem. It’s got to be hardware and software working together.

Laurel: And you’ve outlined so many of these different kinds of technologies. So how can enterprises be incentivized to adopt things like modularity, liquid cooling, and hardware-aware software, and actually make use of all these new technologies?

Zane: A year ago, I worried a lot about that question. How do we get people who are developing new applications to just be aware of the downstream implications? One of the benefits of this revolution in the last 12 months is I think just availability of electricity is going to be a big challenge for many enterprises as they seek to adopt some of these energy intensive applications. And I think the hard reality of energy availability is going to bring some very strong incentives very quickly to attack these kinds of problems.

But I do think beyond that like a lot of areas in sustainability, accounting is really important. There’s a lot of good intentions. There’s a lot of companies with net-zero goals that they’re serious about. They’re willing to take strong actions against those goals. But if you can’t accurately measure what your impact is either as an enterprise or as a software developer, I think you have to kind of find where the point of action is, where does the rubber meet the road where a micro decision is being made. And if the carbon impact of that is understood at that point, then I think you can see people take the actions to take advantage of the tools and capabilities that are there to get a better result. And so I know there’s a number of initiatives in the industry to create that kind of accounting, and especially for software development, I think that’s going to be really important.
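One simple form that accounting can take is estimating a workload’s carbon footprint from measured power draw, runtime, facility overhead (PUE), and grid carbon intensity; the figures in the sketch below are assumptions for illustration.

```python
# Back-of-the-envelope carbon estimate for a compute job; all inputs are assumed.
def job_carbon_kg(avg_power_kw: float, hours: float, pue: float, grid_kg_co2_per_kwh: float) -> float:
    energy_kwh = avg_power_kw * hours * pue  # facility-level energy, including cooling overhead
    return energy_kwh * grid_kg_co2_per_kwh

# e.g., 8 accelerators drawing ~0.5 kW each for 72 hours in a PUE 1.2 facility
print(round(job_carbon_kg(avg_power_kw=8 * 0.5, hours=72, pue=1.2, grid_kg_co2_per_kwh=0.4), 1), "kg CO2e")
```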

Laurel: Well, it’s also clear there’s an imperative for enterprises that are trying to take advantage of AI to curb that energy consumption as well as meet their environmental, social, and governance or ESG goals. So what are the major challenges that come with making more sustainable AI and computing transformations?

Zane: It’s a complex topic, and I think we’ve already touched on a couple of them. Just as I was just mentioning, definitely getting software developers to understand their impact within the enterprise. And if I’m an enterprise that’s procuring my applications and software, maybe cloud services, I need to make sure that accounting is part of my procurement process, that in some cases that’s gotten easier. In some cases, there’s still work to do. If I’m operating my own infrastructure, I really have to look at liquid cooling, for example, an adoption of some of these more modern technologies that let us get to significant gains in energy efficiency. And of course, really looking at the use cases and finding the most energy efficient architecture for that use case. For example, like using those smaller models that I was talking about. Enterprises need to be very aware of the energy consumption of their digital technologies, how big it is and how their decisions are affecting it.

Laurel: So could you offer an example or use case of one of those energy efficient AI driven architectures and how AI was subsequently deployed for it?

Zane: Yes. I think that some of the best examples I’ve seen in the last year were really around these smaller models where Intel did an example that we published around financial services, and we found that something like three hours of fine-tuning training on financial services data allowed us to create a chatbot solution that performed in an outstanding manner on a standard Xeon processor. And I think making those solutions available to our customers is starting to open people’s eyes how energy efficient you can be while not really giving up a whole lot in terms of the AI use case that you’re looking for. And so I think we need to just continue to get those examples out there. We have a number of collaborations such as with Hugging Face with open source models, enabling those solutions on our products like our Gaudi2 accelerator has also performed very well from a performance per watt point of view, the Xeon processor itself. So those are great opportunities.

Laurel: And then how do you envision the future of AI and sustainability in the next three to five years? There seems like so much opportunity here.

Zane: I think there’s going to be so much change in the next three to five years. I hope no one holds me to what I’m about to say, but I think there are some pretty interesting trends out there. One thing, I think, to think about is the trend of AI factories. So training a model is a little bit of an interesting activity that’s distinct from what we normally think of as real time digital services. You have real time digital service like Vinnie, the app on your iPhone that’s connected somewhere in the cloud, and that’s a real time experience. And it’s all about 99.999% uptime, short latencies to deliver that user experience that people expect. But AI training is different. It’s a little bit more like a factory. We produce models as a product and then the models are used to create the digital services. And that I think becomes an important distinction.

So I can actually build some giant gigawatt facility somewhere that does nothing but train models on a large scale. I can partner with the infrastructure of the electricity providers and utilities, much like an aluminum plant or something would do today, where I actually modulate my energy consumption with its availability. Or maybe I take advantage of solar or wind power’s availability; I can modulate when I’m consuming power and not consuming power. And so I think we’re going to see some really large-scale kinds of efforts like that, and those AI factories could be very, very efficient; they can be liquid cooled and they can be closely coupled to the utility infrastructure. I think that’s a pretty exciting opportunity, and it’s kind of an acknowledgement that there’s going to be gigawatts and gigawatts of AI training going on. The second opportunity in this three to five years: I do think liquid cooling will become far more pervasive.

I think that will be driven by the need to cool the next generation of accelerators; GPUs will make it a requirement, but then we will be able to build that technology out and scale it more ubiquitously for all kinds of infrastructure. And that will let us shave huge amounts of gigawatts out of the infrastructure and save hundreds of billions of gallons of water annually. I think that’s incredibly exciting. And then there’s the innovation on the model side as well: so much has changed in just the last five years with large language models like ChatGPT, so let’s not assume there’s not going to be even bigger change in the next three to five years. What are the new problems that are going to be solved, the new innovations? So I think as the costs and impact of AI are felt more substantively, there’ll be a lot of innovation on the model side, and people will come up with new ways of cracking some of these problems, and there’ll be new exciting use cases that come about.

Finally, I think on the hardware side, there will be new AI architectures. From an acceleration point of view today, a lot of AI performance is limited by memory bandwidth, memory bandwidth and networking bandwidth between the various accelerator components. And I don’t think we’re anywhere close to having an optimized AI training system or AI inferencing systems. I think the discipline is moving faster than the hardware and there’s a lot of opportunity for optimization. So I think we’ll see significant differences in networking, significant differences in memory solutions over the next three to five years, and certainly over the next 10 years that I think can open up a substantial set of improvements.

And of course, Moore's Law itself continues to advance, with advanced packaging technologies and new transistor types that allow us to build ever more ambitious pieces of silicon, which will have substantially higher energy efficiency. So all of those things I think will be important. Whether our energy efficiency gains can keep up with the explosion in AI functionality, I think that's the real question, and it's just going to be a super interesting time. I think it's going to be a very innovative time in the computing industry over the next few years.

Laurel: And we’ll have to see. Zane, thank you so much for joining us on the Business Lab.

Zane: Thank you.

Laurel: That was Zane Ball, corporate vice president and general manager of data center platform engineering and architecture at Intel, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review.

That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can also find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Giro Studios. Thanks for listening.


This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Bringing breakthrough data intelligence to industries

As organizations recognize the transformational opportunity presented by generative AI, they must consider how to deploy that technology across the enterprise in the context of their unique industry challenges, priorities, data types, applications, ecosystem partners, and governance requirements. Financial institutions, for example, need to ensure that data and AI governance has the built-in intelligence to fully align with strict compliance and regulatory requirements. Media and entertainment (M&E) companies seek to build AI models to drive deeper product personalization. And manufacturers want to use AI to make their internet of things (IoT) data insights readily accessible to everyone from the data scientist to the shop floor worker.

In any of these scenarios, the starting point is access to all relevant data—of any type, from any source, in real time—governed comprehensively and shared across an industry ecosystem. When organizations can achieve this with the right data and AI foundation, they have the beginnings of data intelligence: the ability to understand their data and break free from data silos that would block the most valuable AI outcomes.

But true data intelligence is about more than establishing the right data foundation. Organizations are also wrestling with how to overcome dependence on highly technical staff and create frameworks for data privacy and organizational control when using generative AI. Specifically, they are looking to enable all employees to use natural language to glean actionable insight from the company’s own data; to leverage that data at scale to train, build, deploy, and tune their own secure large language models (LLMs); and to infuse intelligence about the company’s data into every business process.

In this next frontier of data intelligence, organizations will maximize value by democratizing AI while differentiating through their people, processes, and technology within their industry context. Based on a global, cross-industry survey of 600 technology leaders as well as in-depth interviews with technology leaders, this report explores the foundations being built and leveraged across industries to democratize data and AI. Following are its key findings:

• Real-time access to data, streaming, and analytics are priorities in every industry. Because of the power of data-driven decision-making and its potential for game-changing innovation, CIOs require seamless access to all of their data and the ability to glean insights from it in real time. Seventy-two percent of survey respondents say the ability to stream data in real time for analysis and action is “very important” to their overall technology goals, while another 20% believe it is “somewhat important”—whether that means enabling real-time recommendations in retail or identifying a next best action in a critical health-care triage situation.

• All industries aim to unify their data and AI governance models. Aspirations for a single approach to governance of data and AI assets are strong: 60% of survey respondents say a single approach to built-in governance for data and AI is “very important,” and an additional 38% say it is “somewhat important,” suggesting that many organizations struggle with a fragmented or siloed data architecture. Every industry will have to achieve this unified governance in the context of its own unique systems of record, data pipelines, and requirements for security and compliance.

• Industry data ecosystems and sharing across platforms will provide a new foundation for AI-led growth. In every industry, technology leaders see promise in technology-agnostic data sharing across an industry ecosystem, in support of AI models and core operations that will drive more accurate, relevant, and profitable outcomes. Technology teams at insurers and retailers, for example, aim to ingest partner data to support real-time pricing and product offer decisions in online marketplaces, while manufacturers see data sharing as an important capability for continuous supply chain optimization. Sixty-four percent of survey respondents say the ability to share live data across platforms is “very important,” while an additional 31% say it is “somewhat important.” Furthermore, 84% believe a managed central marketplace for data sets, machine learning models, and notebooks is very or somewhat important.

• Preserving data and AI flexibility across clouds resonates with all verticals. Sixty-three percent of respondents across verticals believe that the ability to leverage multiple cloud providers is at least somewhat important, while 70% feel the same about open-source standards and technology. This is consistent with the finding that 56% of respondents see a single system to manage structured and unstructured data across business intelligence and AI as “very important,” while an additional 40% see this as “somewhat important.” Executives are prioritizing access to all of the organization’s data, of any type and from any source, securely and without compromise.

• Industry-specific requirements will drive the prioritization and pace by which generative AI use cases are adopted. Supply chain optimization is the highest-value generative AI use case for survey respondents in manufacturing, while it is real-time data analysis and insights for the public sector, personalization and customer experience for M&E, and quality control for telecommunications. Generative AI adoption will not be one-size-fits-all; each industry is taking its own strategy and approach. But in every case, value creation will depend on access to data and AI permeating the enterprise’s ecosystem and AI being embedded into its products and services.

Maximizing value and scaling the impact of AI across people, processes, and technology is a common goal across industries. But industry differences merit close attention for their implications on how intelligence is infused into the data and AI platforms. Whether it be for the retail associate driving omnichannel sales, the health-care practitioner pursuing real-world evidence, the actuary analyzing risk and uncertainty, the factory worker diagnosing equipment, or the telecom field agent assessing network health, the language and scenarios AI will support vary significantly when democratized to the front lines of every industry.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Navigating a shifting customer-engagement landscape with generative AI

One can’t step into the same river twice. This simple representation of change as the only constant was taught by the Greek philosopher Heraclitus more than 2000 years ago. Today, it rings truer than ever with the advent of generative AI. The emergence of generative AI is having a profound effect on today’s enterprises—business leaders face a rapidly changing technology that they need to grasp to meet evolving consumer expectations.

“Across all industries, customers are at the core, and tapping into their latent needs is one of the most important elements to sustain and grow a business,” says Akhilesh Ayer, executive vice president and global head of AI, analytics, data, and research practice at WNS Triange, a unit of WNS Global Services, a leading business process management company. “Generative AI is a new way for companies to realize this need.”

A strategic imperative

Generative AI’s ability to harness customer data in a highly sophisticated manner means enterprises are accelerating plans to invest in and leverage the technology’s capabilities. In a study titled “The Future of Enterprise Data & AI,” Corinium Intelligence and WNS Triange surveyed 100 global C-suite leaders and decision-makers specializing in AI, analytics, and data. Seventy-six percent of the respondents said that their organizations are already using or planning to use generative AI.

According to McKinsey, while generative AI will affect most business functions, “four of them will likely account for 75% of the total annual value it can deliver.” Among these are marketing and sales and customer operations. Yet, despite the technology’s benefits, many leaders are unsure about the right approach to take and mindful of the risks associated with large investments.

Mapping out a generative AI pathway

One of the first challenges organizations need to overcome is senior leadership alignment. “You need the necessary strategy; you need the ability to have the necessary buy-in of people,” says Ayer. “You need to make sure that you’ve got the right use case and business case for each one of them.” In other words, a clearly defined roadmap and precise business objectives are as crucial as understanding whether a process is amenable to the use of generative AI.

The implementation of a generative AI strategy can take time. According to Ayer, business leaders should maintain a realistic perspective on the duration required for formulating a strategy, conduct necessary training across various teams and functions, and identify the areas of value addition. And for any generative AI deployment to work seamlessly, the right data ecosystems must be in place.

Ayer cites WNS Triange's collaboration with an insurer to streamline its claims process by leveraging generative AI. Thanks to the new technology, the insurer can immediately assess the severity of a vehicle's damage from an accident and make a claims recommendation based on the unstructured data provided by the client. "Because this can be immediately assessed by a surveyor and they can reach a recommendation quickly, this instantly improves the insurer's ability to satisfy their policyholders and reduce the claims processing time," Ayer explains.

All that, however, would not be possible without data on past claims history, repair costs, transaction data, and other necessary data sets to extract clear value from generative AI analysis. “Be very clear about data sufficiency. Don’t jump into a program where eventually you realize you don’t have the necessary data,” Ayer says.

The benefits of third-party experience

Enterprises are increasingly aware that they must embrace generative AI, but knowing where to begin is another thing. “You start off wanting to make sure you don’t repeat mistakes other people have made,” says Ayer. An external provider can help organizations avoid those mistakes and leverage best practices and frameworks for testing and defining explainability and benchmarks for return on investment (ROI).

Using pre-built solutions by external partners can expedite time to market and increase a generative AI program’s value. These solutions can harness pre-built industry-specific generative AI platforms to accelerate deployment. “Generative AI programs can be extremely complicated,” Ayer points out. “There are a lot of infrastructure requirements, touch points with customers, and internal regulations. Organizations will also have to consider using pre-built solutions to accelerate speed to value. Third-party service providers bring the expertise of having an integrated approach to all these elements.”

Ayer offers the example of WNS Triange helping a travel intermediary use generative AI to deal with customer inquiries about airline rescheduling, cancellations, and other itinerary complications. “Our solution is immediately able to go into a thousand policy documents, pick out the policy parameters relevant to the query… and then come back quickly not only with the response but with a nice, summarized, human-like response,” he says.
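The workflow Ayer describes, searching a large set of policy documents, pulling out the parameters relevant to a query, and returning a summarized response, follows the general shape of retrieval-augmented generation. Here is a minimal sketch of that pattern, with made-up policy snippets and a stubbed model call standing in for whatever document stores and LLMs a real deployment would use; it is not WNS Triange's implementation.

# Minimal retrieval-augmented generation sketch (illustrative only).
# Retrieval here is simple word overlap; a production system would use
# embeddings, a vector store, and a real LLM client.
from collections import Counter

POLICY_DOCS = [
    "Cancellations made within 24 hours of booking are fully refundable.",
    "Itinerary changes made more than 7 days before departure incur no fee.",
]

def overlap_score(query: str, text: str) -> int:
    q, t = Counter(query.lower().split()), Counter(text.lower().split())
    return sum(min(q[w], t[w]) for w in q)

def retrieve(query: str, k: int = 1) -> list[str]:
    ranked = sorted(POLICY_DOCS, key=lambda d: overlap_score(query, d), reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    # Stub so the sketch runs end to end; swap in a real model call here.
    return "Summary based on retrieved policy text:\n" + prompt

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    prompt = f"Using only this policy text:\n{context}\n\nAnswer the question: {query}"
    return call_llm(prompt)

print(answer("Is there a fee to change my flight two weeks before departure?"))

The grounding step matters: the model is asked to answer only from the retrieved policy text, which keeps the summarized response tied to the intermediary's actual policies rather than to whatever the model might invent.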

In another example, Ayer shares that his company helped a global retailer create generative AI–driven designs for personalized gift cards. “The customer experience goes up tremendously,” he says.

Hurdles in the generative AI journey

As with any emerging technology, however, there are organizational, technical, and implementation barriers to overcome when adopting generative AI.

Organizational: One of the major hurdles businesses can face is people. "There is often immediate resistance to the adoption of generative AI because it affects the way people work on a daily basis," says Ayer.

As a result, securing internal buy-in from all teams and being mindful of a skills gap is a must. Additionally, the ability to create a business case for investment—and getting buy-in from the C-suite—will help expedite the adoption of generative AI tools.

Technical: The second set of obstacles relates to large language models (LLMs) and mechanisms to safeguard against hallucinations and bias and ensure data quality. “Companies need to figure out if generative AI can solve the whole problem or if they still need human input to validate the outputs from LLM models,” Ayer explains. At the same time, organizations must ask whether the generative AI models being used have been appropriately trained within the customer context or with the enterprise’s own data and insights. If not, there is a high chance that the response will be incorrect. Another related challenge is bias: If the underlying data has certain biases, the modeling of the LLM could be unfair. “There have to be mechanisms to address that,” says Ayer. Other issues, such as data quality, output authenticity, and explainability, also must be addressed.

Implementation: The final set of challenges relates to actual implementation. The cost of implementation can be significant, especially if companies cannot orchestrate a viable solution, says Ayer. In addition, the right infrastructure and people must be in place to avoid resource constraints. And users must be convinced that the output will be relevant and of high quality, so as to gain their acceptance for the program’s implementation.

Lastly, privacy and ethical issues must be addressed. The Corinium Intelligence and WNS Triange survey showed that almost 72% of respondents were concerned about ethical AI decision-making.

The focus of future investment

The entire ecosystem of generative AI is moving quickly. Enterprises must be agile and adapt quickly to change to ensure customer expectations are met and maintain a competitive edge. While it is almost impossible to anticipate what’s next with such a new and fast-developing technology, Ayer says that organizations that want to harness the potential of generative AI are likely to increase investment in three areas:

  • Data modernization, data management, data quality, and governance: To ensure underlying data is correct and can be leveraged.
  • Talent and workforce: To meet demand, training, apprenticeships, and injection of fresh talent or leveraging market-ready talent from service providers will be required.
  • Data privacy solutions and mechanisms: To ensure privacy is maintained, C-suite leaders must also keep pace with relevant laws and regulations across relevant jurisdictions.

However, it shouldn’t be a case of throwing everything at the wall and seeing what sticks. Ayer advises that organizations examine ROI from the effectiveness of services or products provided to customers. Business leaders must clearly demonstrate and measure a marked improvement in customer satisfaction levels using generative AI–based interventions.

“Along with a defined generative AI strategy, companies need to understand how to apply and build use cases, how to execute them at scale and speed to market, and how to measure their success,” says Ayer. Leveraging generative AI for customer engagement is typically a multi-pronged approach, and a successful partnership can help with every stage.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Developing climate solutions with green software

After years of committing to sustainable practices in his personal life, from recycling to using cloth-based diapers, Asim Hussain, currently the director of green software and ecosystems at Intel, began to ask questions about the practices in his work: software development.

Developers often asked whether their software was secure enough, fast enough, or cost-effective enough, but, Hussain says, they rarely considered the environmental consequences of their applications. Hussain would go on to work at Intel and become the executive director and chairperson of the Green Software Foundation, a non-profit aiming to create an ecosystem of people, tooling, and best practices around sustainable software development.

“What we need to do as software developers and software engineers is we need to make sure that it is emitting the least amount of carbon for the same amount of value and user functionality that we’re getting out of it,” says Hussain.

The three pillars of green software are energy efficiency, hardware efficiency, and carbon awareness. Making more efficient use of hardware and energy when developing applications can go a long way toward reducing emissions, Hussain says. And carbon-aware computing means shifting work toward the times and places where electricity is cleaner, doing more when renewable energy is plentiful on the grid and less when fossil fuels are being burned.

Often, when something is dubbed “green,” there is an assumption that the product, application, or practice functions worse than its less environmentally friendly version. With software, however, the opposite is true.

“Being green in the software space means being more efficient, which translates almost always to being faster,” says Hussain. “When you factor in the hardware efficiency component, oftentimes it translates to building software that is more resilient, more fault-tolerant. Oftentimes it also translates then into being cheaper.”

Instituting green software necessitates not just a shift in practices and tooling but also a culture change within an enterprise. While regulations and ESG targets help to create an imperative, says Hussain, a shift in mindset can enable some of the greatest strides forward.

“If there’s anything we really need to do, it is to drive that behavior change. We need to drive behavior change so people actually invest their time in making software more energy efficient, more hardware efficient, or more carbon aware.”

This episode of Business Lab is produced in partnership with Intel.

Full Transcript

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

Our topic is green software, from apps to devices to the cloud. Computing runs the world around us. However, there is a better way to do it with a focus on sustainability.

Two words for you: sustainable code.

My guest is Asim Hussain, who is the director of the Office of Green Software and Ecosystems at Intel, as well as the chairperson of the Green Software Foundation.

This podcast is produced in partnership with Intel.

Welcome, Asim.

Asim Hussain: Hi Laurel. Thank you very much for having me.

Laurel: Well, glad you’re here. So for a bit of background, you’ve been working in software development and sustainability advocacy from startups to global enterprises for the last two decades. What drew you into sustainability as a focus and what are you working on now?

Asim: I've personally been involved and interested in the sustainability space for quite a while on a very personal level. Then around the birth of my first son, about five years ago now, I started asking myself this one question: how come I was willing to do all these things for sustainability, recycling, using cloth-based nappies, all sorts of these different things, yet I could not remember, in my entire career, one single moment in any technical discussion, in any architectural meeting, in any discussion about how we're going to build this piece of software, where the environmental impact came up? I mean, people oftentimes raise points around, is this secure enough? Is this fast enough? Does this cost too much? But at no point had I ever heard anybody ask the question, is this emitting too much carbon? This piece of software, this solution that we're talking about right now, what kind of environmental impacts does it have? I've never, ever, ever heard anybody raise that question.

So I really started to ask that question myself. I found other people who are like me. Five years ago, there weren't many of us, but we were all asking the same questions. I joined a community called ClimateAction.Tech and then became one of its co-organizers. Then the community just grew. A lot of people were starting to ask themselves these questions, and some answers were coming along. At the time, I used to work at Microsoft, and I pitched and formed something called the green cloud advocacy team, where we talked about how to actually build applications in a greener way on the cloud.

We formed something called the Green Software Foundation, a consortium of now 60 member organizations, which I am a chairperson of. Over a year ago I joined Intel, because Intel has been heavily investing in the sustainable software space. If you think about what Intel does, pretty much everything that Intel produces is used by developers, and developers write software and code on Intel's products. So it makes sense for Intel to have a strong green software strategy. That's why I was brought in, and I've since been working on Intel's green software strategy internally.

Laurel: So a little bit more about that. How can organizations make their software greener? Then maybe we should take a step back and define what green software actually is.

Asim: Well, I think we have to define what green software actually is first. The way the conversation has landed in recent years, and the Green Software Foundation has been a large part of this, is that we've coalesced around this idea of carbon efficiency, which is: if you are building a piece of software … everything we do emits carbon. This tool we're using right now to record this session is emitting carbon right now. What we need to do as software developers and software engineers is we need to make sure that it is emitting the least amount of carbon for the same amount of value and user functionality that we're getting out of it. That's what we call carbon efficiency.

What we say is there are three pillars underneath; there are really only three ways to make your software green. The first is to make it more energy efficient, to use less energy. Most electricity is still created through the burning of fossil fuels, so just using less electricity is going to emit fewer carbon emissions into the atmosphere. So the first is energy efficiency. The second is hardware efficiency, because all software runs on hardware. If you're talking about a mobile phone, typically people are forced to move on from their phones because the software just doesn't run on the older models. In the cloud it tends to be more of a conversation around utilization, making more use of the servers that you already have in the cloud, making just more efficient use of the hardware. The third one is a very interesting space. It's a very new space. It's called carbon awareness, or carbon-aware computing. That is, you are going to be using electricity anyway; can you architect your software in such a way that it does more when the electricity is clean and does less when the electricity is dirty?

So for instance, can you architect an application so it does more when there's more renewable energy on the grid right now, and less when more coal or gas is getting burnt? There are some very interesting, very high-profile projects that have been happening in this space, and carbon-aware computing is an area where there's a lot of interest because it's a stepping stone. It might not get you your 50, 60, 70% carbon reductions, but it will get you your 1, 2, 3, and 4% carbon reductions, and it'll get you that with very minimal investment. So there's a lot of interest in carbon-aware computing. But those are basically the three areas, what we call the three pillars of green software: energy efficiency, hardware efficiency, and carbon awareness.
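A minimal sketch of the carbon-aware idea Hussain describes: given a forecast of grid carbon intensity, a deferrable batch job is scheduled into the cleanest upcoming window. The forecast values and the scheduling helper below are illustrative assumptions, not a Green Software Foundation tool.

from datetime import datetime, timedelta

# Illustrative six-hour forecast of grid carbon intensity in gCO2/kWh;
# a real system would pull this from a grid-data provider.
FORECAST = [320, 290, 180, 150, 210, 400]

def cleanest_start(forecast: list[int], job_hours: int) -> int:
    # Return the start offset (in hours from now) whose window has the
    # lowest total carbon intensity for a job of the given duration.
    candidates = range(len(forecast) - job_hours + 1)
    return min(candidates, key=lambda s: sum(forecast[s:s + job_hours]))

start = cleanest_start(FORECAST, job_hours=2)
run_at = datetime.now() + timedelta(hours=start)
print(f"Run the deferrable job at hour +{start} (around {run_at:%H:%M})")

Shifting work in time like this is the "1, 2, 3, and 4%" kind of saving Hussain mentions: it requires no new hardware, only a scheduling decision.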

Laurel: So another reason we’re talking about all of this is that technology can contribute to the environmental issues that it is trying to actually help. So for example, a lot of energy is needed to train AI models. Also, blockchain was key in the development of energy-efficient microgrids, but it’s also behind the development of cryptocurrency platforms, some of which consume more energy than that of a small country. So how can advanced technologies like AI, machine learning, and blockchain contribute positively to the development of green software?

Asim: That's an interesting question, because the focus oftentimes is on how do we actually make that technology greener? But I don't believe that is necessarily the whole story. The broader story is: how can we use that technology to make software greener? I think there are many ways you can tackle that question. One thing that's been interesting for me, in my journey as a software developer joining Intel, is realizing how little I knew about hardware. I describe it as the gap between software and silicon, and the gap is quite large right now. If you're building software these days, you have very little understanding of the silicon that's running that software. Through a greater understanding of exactly how your software is executed by the silicon to implement its functionality, we are seeing a lot of great opportunities to reduce emissions and to make that software more energy efficient and more hardware efficient.

I think that's where places like AI can really help out. Developer productivity has been the buzzword in this space for a very long time. Developers are extremely expensive, and getting to market fast and beating your competition is the name of the game these days. So it's always been about how do we implement the functionality we need as fast as possible, make sure it's secure, and get it out the door. But oftentimes the only way you can do that is to increase the gap between the software and silicon and just make it a little bit more inefficient. I think AI can really help there. There are copilot solutions that can help as you're developing code and actually suggest that if you were to write your code in a slightly different way, it could be more efficient. So that's one way AI can help out.

Another way that I'm seeing AI utilized in this space is when you deploy … The silicon and the products that we produce come out of the box configured in a certain way, but they can actually be tuned to execute a particular piece of software much more efficiently. So if you have a data center running just one type of software, you can actually tune the hardware so that software runs more efficiently on that hardware. We're seeing AI solutions come on the market these days that can automatically figure out what type of application you are, how you run, how you work. We have a solution called Granulate, which does part of this as well. It can figure out how to tune the underlying hardware in such a way that it executes that software more efficiently. So those are a couple of ways this technology could actually be used to make software itself greener.

Laurel: To bridge that gap between software and silicon, you must be able to measure the progress and meet targets. So what parameters do you use to measure the energy efficiency of software? Could you talk us through the tenets of actually measuring?

Asim: So measuring is an extremely challenging problem. When we first launched the Green Software Foundation three years ago, I remember asking all the members, what is your biggest pain point? Almost all of them came back with measuring. Measuring is very, very challenging. It's so nuanced; there are so many different levels to it. For instance, at Intel, we have technology in our chips to actually measure the energy of the whole chip: there are counters on the chip that measure it. Unfortunately, that only gives you the energy of the entire chip itself. So it does give you a measurement, but if you are a developer, there are maybe 10 processes running on that chip and only one of them is yours. You need to know how much energy your process is consuming, because that's what you can optimize for; that's what you can see. Currently, the best way to measure at that level is using models, models that are either generated through AI or through other processes where you effectively run large amounts of data and generate statistical models.

Oftentimes the model that's used is one based on CPU [central processing unit] utilization, so how busy a CPU is, and it translates that into energy. So you can see my process is consuming 10% of the CPU, and there are models out there that can convert that into energy. But again, all models are wrong, some models are useful. So there's always so much nuance to this whole space as well, because how have you tweaked your computer? What else is running on your computer? That can also affect how those numbers are measured. So, unfortunately, this is a very, very challenging area.
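The utilization-based models Hussain mentions are often simple interpolations between a machine's idle and peak power draw. Here is a minimal sketch with made-up wattage figures, purely to show the shape of such a model; real models are calibrated per platform and, as he notes, are only approximations.

# Minimal sketch of a utilization-to-energy model (illustrative figures only).
IDLE_WATTS = 50.0   # assumed power draw at 0% CPU utilization
MAX_WATTS = 200.0   # assumed power draw at 100% CPU utilization

def estimated_power_watts(cpu_utilization: float) -> float:
    # Linear interpolation between idle and max power; utilization is 0.0-1.0.
    return IDLE_WATTS + (MAX_WATTS - IDLE_WATTS) * cpu_utilization

def estimated_energy_kwh(cpu_utilization: float, hours: float) -> float:
    # Energy = power x time, converted from watt-hours to kilowatt-hours.
    return estimated_power_watts(cpu_utilization) * hours / 1000.0

# A process that keeps the CPU 10% busy for one hour under this model:
print(f"{estimated_energy_kwh(0.10, 1.0):.3f} kWh")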

But this is really the really big area that a lot of people are trying to resolve right now. We are not at the perfect solution, but we are way, way better than we were three, four or five years ago. It’s actually a very exciting time for measurement in this space.

Laurel: Well, and I guess part of it is that green software seems to be developed with greater scrutiny and higher quality controls to ensure that the product actually meets these standards to reduce emissions. Measurement is part of that, right? So what are some of the rewards beyond emissions reduction or meeting green goals of developing software? You kind of touched on that earlier with the carbon efficiency as well as hardware efficiency.

Asim: Yeah, so this is something I used to think about a lot, because the term green has a lot associated with it. Oftentimes when people have used the word green historically, you can have the main product or the green version of the product, and there's an idea in your mind that the green version is somehow less than, somehow not as good. But actually in the software space it's so interesting because it's the exact opposite. Being green in the software space means being more efficient, which translates almost always to being faster. When you factor in the hardware efficiency component, oftentimes it translates to building software that is more resilient, more fault-tolerant. Oftentimes it also translates into being cheaper. So actually green has a lot of positive associations with it already.

Laurel: So in that vein, how can external standards help provide guidance for building software and solutions? I mean, obviously, there’s a need to create something like the Green Software Foundation, and with the focus that most enterprises have now on environmental, social, and governance goals or ESG, companies are now looking more and more to build those ideas into their everyday workflow. So how do regulations help and not necessarily hinder this kind of progress?

Asim: So standards are very, very important in this space. When we looked at the ecosystem about three or four years ago, a lot of enterprises were very interested in green software, but the biggest problem they had was: what do they trust? What can I trust? Whose advice should I take? That's where standards come in; that's where standards are most important. Standards, at least the way we develop them inside the Green Software Foundation, are done via consensus. There are 60 member organizations. So when you see a standard that's been created by that many people, and that many people have been involved with it, it really builds up that trust. So now you know what to do. Those standards give you that compass direction to tell you which direction to go in, and that you can trust.

There are several standards that we've been focusing on in the Green Software Foundation. One is called the SCI, which is the Software Carbon Intensity specification. To approve it as an ISO standard, you have to reach consensus across 196 countries. So then you get even more trust in the standard, and you can use it. So standards really help to build up that trust, which organizations can use to help guide them in the directions to take. There are a couple of other standards coming up in the foundation that I think are quite interesting. One is called Real-Time Cloud. One of the challenges right now, and it always comes back to measurement, it always, always comes back to measurement, is that measurement today is very discrete: it happens oftentimes just a few times a year. And oftentimes when you get measurement data, it is very delayed. So one of the specs that's being worked on right now is called Real-Time Cloud.

It's trying to ask the question: is it possible to get data that is real-time? Oftentimes when you want to react and change behaviors, you need real-time data, so that when somebody does something, they know instantly the impact of that action and can make adjustments instantly. If they're having to wait three months, that behavior change might not happen. Real-time [data] is oftentimes at loggerheads with regulations, because oftentimes you have to get your data audited, and auditing data that's real-time is very, very challenging. So one of the questions we're trying to ask is: is it possible to have data which is real-time and which then aggregates up over the course of a year? Can that aggregation then provide enough trust so that an auditor can say, actually, we now trust this information and we will allow it to be used in regulatory reporting?

That's something that we're very excited about, because you really need real-time data to drive behavior change. If there's anything we really need to do, it is to drive that behavior change; we need to drive behavior change so people actually invest their time in making software more energy efficient, more hardware efficient, or more carbon aware. So that's some of the ways standards are really helping in this space.

Laurel: I think it’s really helpful to talk about standards and how they are so ingrained with software development in general because there are so many misconceptions about sustainability. So what are some of the other misconceptions that people kind of get stuck on, maybe that even calling it green, right? Are there philosophies or strategies that you can caution against or you try to advocate for?

Asim: So there are a couple of things I talk about. One of them is that it takes everybody. I remember very early on when I was talking in this space, oftentimes the conversation went: oh, don't bother talking to that person, or don't talk to this sector of developers, don't talk to that type of developer; only talk to these people, the people who have the most influence to make the kind of changes that make software greener. But it really takes a cultural change, and this is what's very important: it really takes a cultural change inside an organization. It takes everybody. You can't really talk to one slice of the developer ecosystem; you need to talk to everybody. Every single developer or engineer inside an organization really needs to take this on board. So that's one of the things I say: you have to speak to every single person. You cannot just speak to one set of people and exclude another set of people.

Another misconception I often see is that people rank where effort should be spent in terms of the slice of the carbon pie a sector is responsible for. But really you should be focusing not on the size of the slice, but on the ability to decarbonize that slice of the pie. That's why green software is so interesting and why it's such a great place to spend effort and time. Depending on which academic paper you look at, software can be between 2% and 4% of global emissions, so some people might say, well, that's not really worth spending the time on.

But my argument is that our ability to decarbonize that 2% to 4% is far greater than our ability to decarbonize other sectors like airlines or concrete. In the software space we oftentimes know what we need to do; we know the choices. There doesn't need to be new technology invented, there just need to be decisions made to prioritize this work. That's something I think is very, very important. We should rank everything in terms of the ease of decarbonization and then work down from the topmost item, rather than just looking at things in terms of tons of carbon, which I think leads to wrong decision-making.

Laurel: Well, I think you’re laying out a really good argument because green initiatives, they can be daunting, especially for large enterprises looking to meet those decarbonization thresholds within the next decade. For those companies that are making the investment into this, how should they begin? Where are the fundamental things just to be aware of when you’re starting this journey?

Asim: So the first step, I would say, is training. What we're describing here, especially in the green software space, is a very new movement; it's a very new field of computing. So a lot of the terms that I talk about are just not well understood, and a lot of the reasons behind those terms are not well understood either. So the number one thing I always say is you need to focus on training. There's loads of training out there. The Green Software Foundation's got some training at learn.GreenSoftware.Foundation; it's just two hours, and it's free. We send that over to anybody who's starting in this space just to understand the language and the terminology, just to get everybody on the same page. That is usually a very good start. Now, in terms of how you motivate change inside an organization, I think about this a lot.

If you're the lead of an organization and you want to make a change, how do you actually make that change? I'm a big, big believer in trusting your team, trusting your people. If you give engineers a problem, they will find a solution to that problem. But what they oftentimes need is permission, a thumbs up from leadership that this is a priority. So that's why it's very important for organizations to be very public about their commitments, to make public commitments, the same way Intel has made public commitments. Be very vocal as a leader inside your organization and be very clear that this is a priority for you, and that you will listen to people and teams who bring you solutions in this space.

You will find that people within your organization are already thinking about this space, already have ideas, already probably have decks ready to present to you. Just create an environment where they feel capable of presenting it to you. I guarantee you, your solutions are already within your organization and already within the minds of your employees.

Laurel: Well, that is all very inspiring and interesting and so exciting. So when you think about the next three to five years in green software development and adoption, what are you looking forward to the most? What excites you?

Asim: I think I'm very excited right now, to be honest with you. I look back five years ago, the very, very early days when we first looked at this, and I still remember that if there was one article, one mention of green software, we would all lose our heads. We'd get so excited about it, we'd share it, we'd pore over it. Now I'm inundated with information. This movement has grown significantly. There are so many organizations that are deeply interested in this space, and there's so much research, so much academic research.

I have so many articles coming my way every single week that I do not have time to read them all. So that gives me a lot of hope for the future. That really excites me. It might just be because I'm at the cutting edge of this space, so I see a lot of this stuff before anybody else, but I see a huge amount of interest and a huge amount of activity as well. I see a lot of people working on solutions, not just talking about problems, but working on solutions to those problems. That honestly just excites me. I don't know where we're going to end up in five years' time, but if this is our growth so far, I think we're going to end up in a very good place.

Laurel: Oh, that’s excellent. Awesome. Thank you so much for joining us today on the Business Lab.

Asim: Thank you very much for having me.

Laurel: That was Asim Hussain, the director of the Office of Green Software and Ecosystems at Intel, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review.

That's it for this episode of Business Lab. I'm your host, Laurel Ruma. I'm the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can also find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Giro Studios. Thanks for listening.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.