Shoring up global supply chains with generative AI

The outbreak of covid-19 laid bare the vulnerabilities of global, interconnected supply chains. National lockdowns triggered months-long manufacturing shutdowns. Mass disruption across international trade routes sparked widespread supply shortages. Costs spiralled. And wild fluctuations in demand rendered tried-and-tested inventory planning and forecasting tools useless.

“It was the black swan event that nobody had accounted for, and it threw traditional measures for risk and resilience out the window,” says Matthias Winkenbach, director of research at the MIT Center for Transportation and Logistics. “Covid-19 showed that there were vulnerabilities in the way the supply chain industry had been running for years. Just-in-time inventory, a globally interconnected supply chain, a lean supply chain—all of this broke down.”

The pandemic is not the only catastrophic event to strike supply chains in the last five years, either. In 2021, for example, a six-day blockage of the Suez Canal—a narrow waterway through which 30% of global container traffic passes—added further upheaval, impacting an estimated $9.6 billion in goods each day that it remained impassable.

These shocks have been a sobering wake-up call. Now, 86% of CEOs cite resilience as a priority issue in their own supply chains. Amid ongoing efforts to better prepare for future disruptions, generative AI has emerged as a powerful tool, capable of surfacing risks and identifying solutions to circumvent threats.

Download the full article.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Fueling seamless AI at scale

From large language models (LLMs) to reasoning agents, today’s AI tools bring unprecedented computational demands. Trillion-parameter models, workloads running on-device, and swarms of agents collaborating to complete tasks all require a new paradigm of computing to become truly seamless and ubiquitous.

First, technical progress in hardware and silicon design is critical to pushing the boundaries of compute. Second, advances in machine learning (ML) allow AI systems to achieve increased efficiency with smaller computational demands. Finally, the integration, orchestration, and adoption of AI into applications, devices, and systems are crucial to delivering tangible impact and value.

Silicon’s mid-life crisis

AI has evolved from classical ML to deep learning to generative AI. The most recent chapter, which took AI mainstream, hinges on two phases—training and inference—that are data- and energy-intensive in terms of computation, data movement, and cooling. At the same time, Moore’s Law, the observation that the number of transistors on a chip doubles roughly every two years, is reaching a physical and economic plateau.

For the last 40 years, silicon chips and digital technology have nudged each other forward—every step ahead in processing capability frees the imagination of innovators to envision new products, which require yet more power to run. That is happening at light speed in the AI age.

As models become more readily available, deployment at scale puts the spotlight on inference and the application of trained models for everyday use cases. This transition requires the appropriate hardware to handle inference tasks efficiently. Central processing units (CPUs) have managed general computing tasks for decades, but the broad adoption of ML introduced computational demands that stretched the capabilities of traditional CPUs. This has led to the adoption of graphics processing units (GPUs) and other accelerator chips for training complex neural networks, due to their parallel execution capabilities and high memory bandwidth that allow large-scale mathematical operations to be processed efficiently.
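To see why parallel, vectorized hardware matters for these workloads, consider a toy benchmark, written here in Python with illustrative sizes, that computes the same matrix product two ways: as a scalar loop, the way a naive sequential program would run it, and as a single vectorized call that dispatches to an optimized parallel kernel. This is a sketch of the general principle, not a rigorous benchmark.

```python
# Toy comparison of scalar versus vectorized execution for the
# matrix multiplications at the heart of neural networks.
import time

import numpy as np

n = 128  # small size chosen so the scalar loop finishes quickly
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

# Scalar path: one multiply-accumulate at a time, as a naive
# sequential program would execute it.
start = time.perf_counter()
c_loop = np.zeros((n, n), dtype=np.float32)
for i in range(n):
    for j in range(n):
        acc = 0.0
        for k in range(n):
            acc += a[i, k] * b[k, j]
        c_loop[i, j] = acc
loop_s = time.perf_counter() - start

# Vectorized path: one call that dispatches to an optimized,
# parallel BLAS kernel.
start = time.perf_counter()
c_vec = a @ b
vec_s = time.perf_counter() - start

assert np.allclose(c_loop, c_vec, atol=1e-2)
print(f"loop: {loop_s:.2f}s  vectorized: {vec_s:.5f}s")
```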

But CPUs remain the most widely deployed processors and can serve as companions to accelerators like GPUs and tensor processing units (TPUs). AI developers are also hesitant to adapt software to fit specialized or bespoke hardware, favoring the consistency and ubiquity of CPUs. Chip designers are unlocking performance gains through optimized software tooling, adding novel processing features and data types specifically to serve ML workloads, integrating specialized units and accelerators, and advancing silicon chip innovations, including custom silicon. AI itself is a helpful aid for chip design, creating a positive feedback loop in which AI helps optimize the chips it needs to run. These enhancements, combined with strong software support, make modern CPUs a good choice for a range of inference tasks.

Beyond silicon-based processors, disruptive technologies are emerging to address growing AI compute and data demands. The unicorn start-up Lightmatter, for instance, introduced photonic computing solutions that use light for data transmission to generate significant improvements in speed and energy efficiency. Quantum computing represents another promising area in AI hardware. While still years or even decades away, the integration of quantum computing with AI could further transform fields like drug discovery and genomics.

Understanding models and paradigms

Developments in ML theory and network architectures have significantly enhanced the efficiency and capabilities of AI models. Today, the industry is moving from monolithic models to agent-based systems characterized by smaller, specialized models that work together to complete tasks more efficiently at the edge—on devices like smartphones or modern vehicles. This allows these systems to extract increased performance gains, like faster model response times, from the same or even less compute.

Researchers have developed techniques, including few-shot learning, to train AI models using smaller datasets and fewer training iterations. AI systems can learn new tasks from a limited number of examples to reduce dependency on large datasets and lower energy demands. Optimization techniques like quantization, which lower the memory requirements by selectively reducing precision, are helping reduce model sizes without sacrificing performance. 
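As an illustration of the quantization idea, here is a minimal sketch assuming simple symmetric int8 quantization of a single weight matrix with one global scale. Production toolchains add calibration data, per-channel scales, and quantized compute kernels; this only shows the core memory trade-off.

```python
# Minimal sketch of post-training symmetric int8 quantization.
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights onto int8 plus a single scale factor."""
    scale = np.abs(weights).max() / 127.0  # largest magnitude -> 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float32 weights."""
    return q.astype(np.float32) * scale

w = np.random.randn(1024, 1024).astype(np.float32)
q, scale = quantize_int8(w)

# int8 storage is 4x smaller than float32, at a small accuracy cost.
print(f"float32: {w.nbytes / 1e6:.1f} MB, int8: {q.nbytes / 1e6:.1f} MB")
print(f"max reconstruction error: {np.abs(w - dequantize(q, scale)).max():.4f}")
```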

New system architectures, like retrieval-augmented generation (RAG), have streamlined data access during both training and inference to reduce computational costs and overhead. DeepSeek R1, an open source LLM, is a compelling example of how more output can be extracted from the same hardware. By applying reinforcement learning techniques in novel ways, R1 has achieved advanced reasoning capabilities while using far fewer computational resources in some contexts.
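A minimal sketch of the retrieval step at the heart of RAG is shown below. The embed() function here is a hypothetical hash-based stand-in so the example runs without a model, and in a real system the final prompt would be passed to an LLM.

```python
# Minimal sketch of retrieval-augmented generation (RAG): retrieve the
# most relevant documents, then build a context-grounded prompt.
import numpy as np

documents = [
    "The warehouse ships orders within 24 hours.",
    "Returns are accepted within 30 days of purchase.",
    "Support is available by chat from 9am to 5pm.",
]

def embed(text: str) -> np.ndarray:
    # Hypothetical stand-in for a real embedding model: a hash-based
    # bag-of-words vector, normalized so dot product = cosine similarity.
    vec = np.zeros(64)
    for token in text.lower().split():
        vec[hash(token) % 64] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    scores = doc_vectors @ embed(query)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

query = "How long do I have to send something back?"
context = "\n".join(retrieve(query))
prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # a real system would pass this prompt to an LLM
```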

The integration of heterogeneous computing architectures, which combine various processing units like CPUs, GPUs, and specialized accelerators, has further optimized AI model performance. This approach allows for the efficient distribution of workloads across different hardware components to optimize computational throughput and energy efficiency based on the use case.
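A sketch of the dispatch decision in such a heterogeneous setup might look like the following, using PyTorch as an example framework. The batch-size threshold is an assumption for illustration; real orchestration layers also weigh memory, cost, and latency.

```python
# Minimal sketch of heterogeneous dispatch: route a workload to an
# accelerator when one is available and worthwhile, else use the CPU.
import torch

def pick_device(batch_size: int) -> torch.device:
    # Assumption for illustration: large batches amortize the cost of
    # moving data to a GPU; small ones often run fine on the CPU.
    if torch.cuda.is_available() and batch_size >= 32:
        return torch.device("cuda")
    return torch.device("cpu")

model = torch.nn.Linear(512, 512)
batch = torch.randn(64, 512)

device = pick_device(batch.shape[0])
model, batch = model.to(device), batch.to(device)
with torch.no_grad():
    out = model(batch)
print(f"ran inference on: {device}")
```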

Orchestrating AI

As AI becomes an ambient capability humming in the background of many tasks and workflows, agents are taking charge and making decisions in real-world scenarios. These range from customer support to edge use cases, where multiple agents coordinate and handle localized tasks across devices.

With AI increasingly used in daily life, the role of user experience becomes critical for mass adoption. Features like predictive text in touch keyboards and adaptive gearboxes in vehicles offer glimpses of AI as a vital enabler that improves how users interact with technology.

Edge processing is also accelerating the diffusion of AI into everyday applications, bringing computational capabilities closer to the source of data generation. Smart cameras, autonomous vehicles, and wearable technology now process information locally to reduce latency and improve efficiency. Advances in CPU design and energy-efficient chips have made it feasible to perform complex AI tasks on devices with limited power resources. This shift toward heterogeneous compute enhances the development of ambient intelligence, where interconnected devices create responsive environments that adapt to user needs.

Seamless AI naturally requires common standards, frameworks, and platforms to bring the industry together. Contemporary AI also brings new risks. By adding more complex software and personalized experiences to consumer devices, for instance, it expands the attack surface for hackers. This requires stronger security at both the software and silicon levels, including cryptographic safeguards, and a rethinking of the trust model of compute environments.

More than 70% of respondents to a 2024 Darktrace survey reported that AI-powered cyber threats significantly impact their organizations, while 60% said their organizations are not adequately prepared to defend against AI-powered attacks.

Collaboration is essential to forging common frameworks. Universities contribute foundational research, companies apply findings to develop practical solutions, and governments establish policies for ethical and responsible deployment. Organizations like Anthropic are setting industry standards by introducing frameworks, such as the Model Context Protocol, to unify the way developers connect AI systems with data. Arm is another leader in driving standards-based and open source initiatives, including ecosystem development to accelerate and harmonize the chiplet market, where chips are stacked together through common frameworks and standards. Arm also helps optimize open source AI frameworks and models for inference on the Arm compute platform, without needing customized tuning. 

How far AI goes to becoming a general-purpose technology, like electricity or semiconductors, is being shaped by technical decisions taken today. Hardware-agnostic platforms, standards-based approaches, and continued incremental improvements to critical workhorses like CPUs, all help deliver the promise of AI as a seamless and silent capability for individuals and businesses alike. Open source contributions are also helpful in allowing a broader range of stakeholders to participate in AI advances. By sharing tools and knowledge, the community can cultivate innovation and help ensure that the benefits of AI are accessible to everyone, everywhere.

Learn more about Arm’s approach to enabling AI everywhere.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Building customer-centric convenience

In the U.S., two-thirds of the country’s 150,000 convenience stores are run by independent operators. Mom-and-pop shops, powered by personal relationships and local knowledge, are the backbone of the convenience sector. These neighborhood operators have long lacked the resources needed to compete with larger chains when it comes to technology, operations, and customer loyalty programs. 

As consumer expectations evolve, many small business owners find themselves grappling with outdated systems, rising costs, and limited digital tools to keep up.

“What would happen if these small operations could combine their knowledge of their market, of their neighborhood, with state-of-the-art technology?” asks Tarang Sethia, GM of digital products, mobility, and convenience for the Americas at bp. That question is shaping a years-long, multi-pronged initiative to bring modern retail tools, like cloud-connected point-of-sale systems and personalized AI, into the hands of local convenience store operators, without stripping their independence.

Sethia’s mission is to close the digital gap. bp’s newly launched Earnify app centralizes loyalty rewards for convenience stores across the country, helping independent stores build repeat business with data-informed promotions. Behind the scenes, a cloud-based operating system can proactively monitor store operations and infrastructure to automate fixes to routine issues and reduce costly downtime. This is especially critical for businesses that double as their own IT departments. 

“We’ve aggregated all of that into one offering for our customers. We proactively monitor it. We fix it. We take ownership of making sure that these systems are up. We make sure that the systems are personalizing offers for the customers,” says Sethia. 

But the goal isn’t to corporatize corner stores. “We want them to stay local,” says Sethia. “We want them to stay the mom-and-pop store operator that their customers trust, but we are providing them the tools to run their stores more efficiently and to delight their guests.”

From personalizing promotions to proactively resolving technical issues to optimizing in-store inventory, the success of AI should be measured, says Sethia, by its ability to make frontline workers more effective and customers more loyal.

The future, Sethia believes, lies in thoughtful integration of technology that centers humans rather than replacing them. 

“AI and other technologies should help us create an ecosystem that does not replace humans, but actually augments their ability to serve consumers and to serve the consumers so well that the consumers don’t go back to their old ways.”

This episode of Business Lab is produced in association with Infosys Cobalt.

Full Transcript 

Megan Tatum: From MIT Technology Review, I’m Megan Tatum, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. 

This episode is produced in partnership with Infosys Cobalt. 

Our topic today is innovating with AI. As companies move along in their journey to digitalization and AI adoption, we’re starting to see real-world business models that demonstrate the innovation these emerging technologies enable. 

Two words for you: ecosystem innovation. 

My guest today is Tarang Sethia, the GM of digital products, mobility, and convenience for the Americas at bp.

Welcome, Tarang.

Tarang Sethia: Thank you.

Megan: Lovely to have you. Now, for a bit of context just to start with, could you give us some background about the current convenience store and gas station landscape in the United States and what the challenges are for owners and customers right now?

Tarang: Absolutely. What is important to understand is, what is the state of the market? If you look at the convenience and mobility market, it is a very fragmented market. The growth and profitability are driven by consumer loyalty, store experience, and also buying power of the products that they sell to the customers that come into their stores.

And from an operations perspective, there is a vast difference. If you look at the bucket of single-store, smaller operators, these stores are very well run; they are in the community, and they know their customers. Sometimes they even know the frequent buyers that are coming in, and they address them by name and keep their product ready. They know their communities and customers, and they have a personal affinity with them. They also know their likes and dislikes. But they also need to adapt rapidly to the changing needs of the customers. These mom-and-pop stores represent the core of the convenience market, and they constitute about 60% of the entire market.

Now, where the fragmentation lies is, there are also larger operations that are equally motivated to develop strong relationships with customers and they have the scale. They may not match the personal affinity of these mom-and-pop store operators, but they do have the capital to actually leverage data, technology, AI, to personalize and customize their stores for the consumers or the customers that come to their stores. 

And this is about 25% or 30% of the market. Just to put that number in perspective: of the 150,000 convenience stores in the US market, that 60% constitutes almost 100,000 stores, which are mom-and-pop operated. The rest are organized retail.

Now let me talk about the problems that they face. In today’s day and age, these mom-and-pop stores don’t have the capital to create a loyalty program and to create those offers that make customers choose to come to their store instead of going to somebody else. They also don’t have simple operations technology or a supporting operations ecosystem. What I mean is that they don’t have systems that stay up; legacy POS systems still run their stores. So they spend a lot of time making the transaction happen.

Finally, because of their lack of buying power, what they pay for, say, a bottle of soda, compared to a larger operation, also eats into their margin. So overall, the problems are that they’re not able to delight their guests with loyalty. Their operations are not simple, and so they do a lot of work to keep their operations up to date and pay a lot more for their operations, both technology and convenience operations. That’s kind of the summary.

Megan: Right, and I suppose there’s a way to help them address these challenges. I know bp has created this new way to reach convenience store owners to offer various new opportunities and products. Could you tell us a bit about what you’ve been working on? For example, I know there’s an app, point of sale and payment systems, and a snack brand, and also how these sort of benefit convenience store owners and their customers in this climate that we’re talking about.

Tarang: So bp is in pursuit of digital-first customer experiences that don’t replace the one-on-one human interactions of mom-and-pop store operators, but amplify them by providing an ecosystem that helps these operators delight their guests, run their stores simply and more efficiently, and also reduce their cost while doing so. And what we have done as bp is, we’ve launched a suite of customer solutions and an innovative retail operating system experience. We’ve branded it Crosscode so that it works from the forecourt to the backcourt, it works for the consumers, it works for the stores to run their stores more efficiently, and we can leverage all kinds of technologies like AI to personalize and customize for the customers and the stores.

The reason why we did this is, we asked ourselves, what would happen if these small operations could combine their knowledge of their market, of their neighborhood, with the state-of-the-art technology? That’s how we came up with a consumer app called Earnify. It is kind of the Uber of loyalty programs. We did not name it BPme. We did not name it BP Rewards or ampm or Thorntons. We created one standardized loyalty program that would work in the entire country to get more loyal consumers and drive their frequency, and we’ve scaled it to about 8,000 stores in the last year, and the results are amazing. There are 68% more active, loyal consumers that are coming through Earnify nationally. 

And the second piece, which is even more important is, which a lot of companies haven’t taken care of, is a simple to operate, cloud-based retail operating system, which is kind of the POS, point of sale, and the ecosystem of the products that they sell to customers and payment systems. We have applied AI to make a lot of tasks automated in this retail operating system.

What that has led to is a 20% reduction in the operating costs for these mom-and-pop store operators. That 20% reduction in operating costs goes directly to the bottom line of these stores. So now, the mom-and-pop store operators are going to be able to delight their guests, keeping their customers loyal. Number two, they’re able to spend less money on running their store operations. And number three, very, very, very important, they are able to spend more time serving the guests instead of running the store.

Megan: Yeah, absolutely. Really fantastic results that you’ve achieved there already. And you touched on a couple of the sort of technologies you’ve made use of there, but I wondered if you could share a bit more detail on what additional technologies, like cloud and AI, did you adopt and implement, and perhaps what were some of the barriers to adoption as well?

Tarang: Absolutely. I will first start with how we enabled these mom-and-pop store operators to delight their guests. The number one thing that we did was we started with a basic points-based loyalty program where their guests earn points and value for both fueling at the fuel pump and buying convenience store items inside the store. And when they have enough points to redeem, they can redeem them either way. So they have value going from the forecourt to the backcourt and backcourt to the forecourt. Number one thing, right? Then we leveraged data, machine learning, and artificial intelligence to personalize the offer for customers.

If you’re on Earnify and I am in New York, and if I were a bagel enthusiast, then it would send me offers of a bagel plus coffee. And say my wife likes to go to a convenience store to quickly pick up a salad and a diet soda. She would get offers for that, right? So personalization. 

What we also applied is, now these mom-and-pop store operators, depending on the changing seasons or the changing landscape, could create their own offers and they could be instantly available to their customers. That’s how they are able to delight their guests. Number two is, these mom-and-pop store operators, their biggest problem with technology is that it goes down, and when it goes down, they lose sales. They are on calls, they become the IT support help desk, right? They’re trying to call five different numbers.

So we first provided a proactively monitored help desk, where we leveraged AI technology to monitor what is working in their store and what is not, and to look at patterns to find out what may be going down, like a PIN pad. We would know hours in advance, from the patterns, that the PIN pad may have issues. We proactively call the customer or the store to say, “Hey, you may have some problems with the PIN pad. You need to replace it, or you need to restart it.”

What that does is, it takes away the six to eight hours of downtime and lost sales for these stores. That’s a proactively monitored solution. And also, if ever they have an issue, they need to call only one number, and we take ownership of solving the problems of the store for them. Now, it’s almost like they have an outsourced help desk, which is leveraging AI technology to proactively monitor and resolve issues, and also to fix them faster, because we now know that store X had the same issue and what it took to resolve it, instead of spending hours trying to resolve it from scratch.

The third thing that we’ve done is we have put in a cloud-based POS system so we can constantly monitor their POS. We’ve connected it to their back office pricing systems so they can change the prices of products faster, and [monitor] how they are performing. This actually helps the store to say, “Okay, what is working, what is not working? What do I need to change?” in almost near real-time, instead of waiting hours or days or weeks to react to the changing customer needs. And now they don’t need to make a decision. Do I have the capital to invest in this technology? The scale of bp allows them to get in, to leverage technology that is 20% cheaper and is working so much better for them.

Megan: Fantastic. Some really impactful examples of how you’ve used technology there. Thank you for that. And how has bp also been agile or quick to respond to the data it has received during this campaign?

Tarang: Agility is a mindset. What we’ve done is bring in a customer-obsessed mindset. Like our leader Greg Franks talks about, we have put the customer at the heart of everything that we do. For us, customers are people who come to our stores and the people on the frontline who serve them. Their needs are of the utmost importance. What we did was, we changed how we went about the business. Instead of going to vendors and putting vendors in charge of the store technology and consumer technology, we took ownership. We built out a technology team that was trained in the latest tools and technologies like AI, like POS, like APIs.

Then we changed the processes of how quickly we go to market. Instead of waiting two years on an enterprise project and then delivering it three years later, what we said was, “Let’s look at an MVP experience, most valuable experience delivered through a product for the customers.” And we started putting it in the stores so that the store owners could start delighting their guests and learning. Some things worked, some didn’t, but we learned much faster and were able to react almost on a weekly basis. Our store owners now get these updates on a biweekly basis instead of waiting two years or three years.

Third, we’ve applied an ecosystem mindset. Companies like Airbnb and Uber are known for their aggregator business models. They don’t do everything themselves, and we don’t do everything ourselves. But what we have done is, we’ve become an aggregator of all the capabilities, like consumer app, like POS, like back office or convenience value chain, like pricing, like customer support. We’ve aggregated all of that into one offering for our customers. We proactively monitor it. We fix it. We take ownership of making sure that these systems are up. We make sure that the systems are personalizing offers for the customers. So the store owner can just focus on delighting their guests.

We have branded this as Crosscode Retail Operating System, and we are providing it as a SaaS service. You can see in the name, there’s no bp in the name because, unlike the very big convenience players, we are not trying to make them into a particular brand that we want. We want them to stay local. We want them to stay the mom-and-pop store operator that their customers trust, but we are providing them the tools to run their stores more efficiently and to delight their guests.

Megan: Really fantastic. And you mentioned that this was a very customer-centric approach that you took. So, how important was it to focus on that customer experience, in addition to the technology and all that it can provide?

Tarang: The customer experience was the most important thing. We could have started with a project and determined, “Hey, this is how it makes money for bp first.” But we said, “Okay, let’s look at solving the core problems of the customer.” Our customer told us, “Hey, I want to pay frictionlessly at the pump, when I come to the pump.” So what did we do? We launched a pay-for-fuel feature, where they can come to the pump and they don’t need to take their wallet out. They just take their app out and choose what pump and what payment method.

Then they said, “Hey, I don’t get any value from buying fuel every week and going inside. These are two different stores for me.” So what did we do? We launched a unified loyalty program. Then the store owner said, “Hey, my customers don’t like the same offers that you do nationally.” So what did we do? We created both personalized offers and build-your-own offers for the store owner. 

Finally, to be even more customer-obsessed, we said that being customer-obsessed doesn’t just happen. We have to measure it. We are constantly measuring how the consumers are rating the offers in our app and how the consumers are rating that experience. And we made a dramatic shift. The consumers, if you go to the Earnify app in the app store, they’re rating it as 4.9. 

We have 68% more loyal consumers. We are also measuring these loyal consumers, how often they are coming and what they are buying. Then we said, “Okay, from a store owner perspective, their satisfaction is important.” We are constantly measuring the satisfaction of these store operators and the frontline employees who are operating the systems. Customer satisfaction used to be three out of 10 when we first started, and now it has reached an 8.7 out of 10, and we are constantly monitoring. Some stores’ scores go down because we haven’t paid enough attention; we learn from it and we apply those lessons.

Finally, with this Earnify app, instead of a local store operator having their own loyalty program with a few hundred customers (how many people are going to download that app?), we’ve given them a network of millions of consumers nationwide that can be part of the ecosystem. The technologies that we are using are helping the stores delight the consumers, helping the stores provide the value and experience that the consumers expect, and also helping bp provide a seamless experience to the frontline employees.

Megan: Fantastic. There are some incredible results there in terms of customer satisfaction. Are there any other metrics of success that you’re tracking along the way? Any other kind of wins that you can share so far in the implementation of all of this?

Tarang: We are tracking very important, deeper metrics so that we can hold ourselves accountable: the uptime of the store, the mean time to resolve issues, the sales uplift of the stores, the transaction uplift of the stores. Are the consumers buying more? Are the consumers rating their consumer experience higher? Are they engaging with different offers? Because we may do hundreds of offers. If consumers don’t like them, then they are just offers.

On this journey, we are measuring every metric, and we are making it transparent. Our entire team is on the same scorecard of metrics that the customers or the store owners have for the performance of their business. Their performance and the consumer delight are embedded into the metrics on which all of us digital employees are measured.

Megan: Yes, absolutely. It sounds like you’re measuring success through several different lenses, so it’s really interesting to hear about that approach. Given where you are in your journey, as many companies struggle to adopt and implement AI and other emerging technologies, is there any advice that you’d offer, given the lessons you’ve learned so far?

Tarang: On AI, we have to keep it very, very simple. Instead of saying, “Hey, we are going to use AI technology for the sake of it,” we have to tie the usage of AI technology to the impact it has on the customers. I’ll give a few examples of how we are doing that.

When we say we are leveraging AI to personalize the offers, leveraging data for consumers, what are we measuring, and what are we applying? We are looking at the data of consumer behavior and applying AI models to see, based on the current transactions, how would they react, what would they buy? People living in Frisco, Texas, of whatever age: what do they buy, when do they come, and what are they buying in other places?

So let’s personalize offers so that they make that left turn. And we are measuring whether personalization is driving the delight enough that the consumers come back to the store and don’t go back to their old ways, number one. Number two, what we are also doing is, like I mentioned earlier, we are leveraging data and AI technologies to constantly monitor the trends in the marketplace, and we’ve created some automation to leverage those trends and act quickly, which also leads to some level of personalization. It’s more regionalization.

Now, as we do that, we also look at the patterns of what equipment or what transactions are slowing down, and we proactively monitor and resolve them. So if the store has issues, whether payment, loyalty, POS, or the back office has an issue, we proactively work to resolve it.

Number three, we are looking at the convenience market, at what is selling and what is in stock, so we are optimizing our supply chain, pricing, and inventory so that we can enable the store owners to cater to the consumers who come to their stores. This is actually really helping us have the product in the store that the customer actually came for.

Megan: Absolutely. And looking ahead, when you think about the path for generative AI and other emerging technologies, is there something that excites you the most in the years to come?

Tarang: That’s a great question, Megan. I’m going to answer that question a little bit philosophically because as technologists, our tendency is, whenever there is a new technology like generative AI, to create a lot of toys with it, right? But I’ve learned through this experience that whatever technology we use, like generative AI, we need to tie it to the objectives and key results for the consumer and the store. 

As an example, if we are going to leverage generative AI to do personalized offers, to do personalized creative, then we need to be able to create frameworks to measure the impact on the store, to measure the impact on the consumer, and tie that directly to the use of the technology. Are we making the consumers more loyal? Are they coming more often? Are they buying more? Because only then will we have adopters of that technology, both the stores and the stores driving the consumers to adopt.

Number two, AI and other technologies should help us create an ecosystem that does not replace humans, but actually augments their ability to serve consumers and to serve the consumers so well that the consumers don’t go back to their old ways. That’s where we have to stay very, very customer-obsessed instead of just business-obsessed.

When I say ecosystem, what excites me the most is, think about it. These small mom-and-pop store operators, these generational businesses, which are the core of the American dream or entrepreneurialism, we are going to enable them with an ecosystem like an Airbnb of mobility and convenience, where they get a loyalty program with personalization, where they can delight their guests. They get technology to run their stores very, very efficiently and reduce their cost by 20%.

Number three, and very important, their frontline employees look like heroes to the guests that are walking into the store. If we achieve these three things and create an ecosystem, then that will drive prosperity through technology. And bp, as a company, we would love to be part of that.

Megan: I think that’s fantastic advice. Thank you so much, Tarang, for that.

Tarang: Thank you.

Megan: That was Tarang Sethia, the GM of digital products, mobility and convenience for the Americas at bp, whom I spoke with from Brighton, England. 

That’s it for this episode of Business Lab. I’m your host, Megan Tatum. I’m a contributing editor and host for Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts, and if you enjoy this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Giro Studios. Thanks ever so much for listening.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

How cloud and AI transform and improve customer experiences

As AI technologies become increasingly mainstream, there’s mounting competitive pressure to transform traditional infrastructures and technology stacks. Traditional brick-and-mortar companies are finding cloud and data to be the foundational keys to unlocking their paths to digital transformation, and to competing in modern, AI-forward industry landscapes. 

In this exclusive webcast, experts discuss the building blocks for digital transformation, approaches for upskilling employees and putting digital processes in place, and data management best practices. The discussion also looks at what the near future holds and emphasizes the urgency for companies to transform now to stay relevant. 

Learn from the experts

  • Digital transformation, from the ground up, starts by moving infrastructure and data to the cloud
  • AI implementation requires a talent transformation at scale, across the organization
  • AI is a company-wide initiative—everyone in the company will become either an AI creator or consumer

Featured speakers

Mohammed Rafee Tarafdar, Chief Technology Officer, Infosys

Rafee is Infosys’s Chief Technology Officer. He is responsible for the technology vision and strategy; sensing and scaling emerging technologies; advising and partnering with clients to help them succeed in their AI transformation journeys; and building high technology talent density. He is leading the AI-first transformation journey for Infosys and has implemented population- and enterprise-scale platforms. He is the co-author of The Live Enterprise and was recognized as a top 50 global technology leader by Forbes in 2023 and a Top 25 Tech Wavemaker by Entrepreneur India magazine in 2024.

Sam Jaddi, Chief Information Officer, ADT

Sam Jaddi is the Chief Information Officer for ADT. With more than 26 years of experience in technology innovation, Sam has deep knowledge of the security and smart home industry. His team helps drive ADT’s business platforms and processes to improve both customer and employee experiences. Sam has helped set the technology strategy, vision, and direction for the company’s digital transformation. Prior to his role at ADT, Sam served as Chief Technology Officer at Stanley, overseeing the company’s new security division and leading global integration initiatives, IT strategy, transformation, and international operations.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

The business of the future is adaptive

Manufacturing is in a state of flux. From supply chain disruptions to rising costs, tougher environmental regulations, and a changing consumer market, the sector faces a series of competing challenges.

But a new way of operating offers a way to tackle complexities head-on: adaptive production hardwires flexibility and resilience into the enterprise, drawing on powerful tools like artificial intelligence, digital twins, and robotics. Taking automation a step further, adaptive production allows manufacturers to respond in real time to demand fluctuations, adapt to supply chain disruptions, and autonomously optimize operations. It also facilitates an unprecedented level of personalization and customization for regional markets.

Time to adapt

The journey to adaptive production is not just about addressing today’s pressures, like rising costs and supply chain disruptions—it’s about positioning businesses for long-term success in a world of constant change. “In the coming years,” says Jana Kirchheim, director of manufacturing for Microsoft Germany, “I expect that new key technologies like copilots, small language models, high-performance computing, or the adaptive cloud approach will revolutionize the shop floor and accelerate industrial automation by enabling faster adjustments and re-programming for specific tasks.” These capabilities make adaptive production a transformative force, enhancing responsiveness and opening doors to systems with increasing autonomy—designed to complement human ingenuity rather than replace it.

These advances enable more than technical upgrades—they drive fundamental shifts in how manufacturers operate. John Hart, professor of mechanical engineering and director of MIT’s Center for Advanced Production Technologies, explains that automation is “going from a rigid high-volume, low-mix focus”—where factories make large quantities of very few products—“to more flexible high-volume, high-mix, and low-volume, high-mix scenarios”—where many product types can be made in custom quantities. These new capabilities demand a fundamental shift in how value is created and captured.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Driving business value by optimizing the cloud

Organizations are deepening their cloud investments at an unprecedented pace, recognizing its fundamental role in driving business agility and innovation. Synergy Research Group reports that companies spent $84 billion worldwide on cloud infrastructure services in the third quarter of 2024, a 23% rise over the third quarter of 2023 and the fourth consecutive quarter in which the year-on-year growth rate has increased.

By allowing users to access IT systems from anywhere in the world, cloud services also help ensure solutions remain highly configurable and automated.

At the same time, hosted services like generative AI and tailored industry solutions can help companies quickly launch applications and grow the business. To get the most out of these services, companies are turning to cloud optimization—the process of selecting and allocating cloud resources to reduce costs while maximizing performance.
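One small facet of that optimization process is rightsizing: choosing the cheapest resources that still meet a workload’s requirements. The sketch below illustrates the idea in Python; the instance catalog and prices are hypothetical, not real cloud SKUs.

```python
# Minimal rightsizing sketch: pick the cheapest instance type that
# satisfies a workload's CPU and memory needs. Catalog is invented.
CATALOG = [
    {"name": "small",  "vcpus": 2,  "mem_gb": 4,  "usd_per_hour": 0.05},
    {"name": "medium", "vcpus": 4,  "mem_gb": 16, "usd_per_hour": 0.17},
    {"name": "large",  "vcpus": 16, "mem_gb": 64, "usd_per_hour": 0.68},
]

def rightsize(needed_vcpus: int, needed_mem_gb: int) -> dict:
    """Return the cheapest instance meeting both requirements."""
    candidates = [
        i for i in CATALOG
        if i["vcpus"] >= needed_vcpus and i["mem_gb"] >= needed_mem_gb
    ]
    if not candidates:
        raise ValueError("no instance type fits this workload")
    return min(candidates, key=lambda i: i["usd_per_hour"])

print(rightsize(needed_vcpus=3, needed_mem_gb=8)["name"])  # medium
```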

But despite all the interest in the cloud, many workloads remain stranded on-premises, and many more are not optimized for efficiency and growth, greatly limiting the forward momentum. Companies are missing out on a virtuous cycle of mutually reinforcing results that comes from even more efficient use of the cloud.

Organizations can enhance security, make critical workloads more resilient, protect the customer experience, boost revenues, and generate cost savings. These benefits can fuel growth and avert expenses, generating capital that can be invested in innovation.

“Cloud optimization involves making sure that your cloud spending is efficient so you’re not spending wastefully,” says André Dufour, Director and General Manager for AWS Cloud Optimization at Amazon Web Services. “But you can’t think of it only as cost savings at the expense of other things. Dollars freed up through optimization can be redirected to fund net new innovations, like generative AI.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Adapting for AI’s reasoning era

Anyone who crammed for exams in college knows that an impressive ability to regurgitate information is not synonymous with critical thinking.

The large language models (LLMs) first publicly released in 2022 were impressive but limited—like talented students who excel at multiple-choice exams but stumble when asked to defend their logic. Today’s advanced reasoning models are more akin to seasoned graduate students who can navigate ambiguity and backtrack when necessary, carefully working through problems with a methodical approach.

As AI systems that learn by mimicking the mechanisms of the human brain continue to advance, we’re witnessing an evolution in models from rote regurgitation to genuine reasoning. This capability marks a new chapter in the evolution of AI—and what enterprises can gain from it. But in order to tap into this enormous potential, organizations will need to ensure they have the right infrastructure and computational resources to support the advancing technology.

The reasoning revolution

“Reasoning models are qualitatively different than earlier LLMs,” says Prabhat Ram, partner AI/HPC architect at Microsoft, noting that these models can explore different hypotheses, assess if answers are consistently correct, and adjust their approach accordingly. “They essentially create an internal representation of a decision tree based on the training data they’ve been exposed to, and explore which solution might be the best.”

This adaptive approach to problem-solving isn’t without trade-offs. Earlier LLMs delivered outputs in milliseconds based on statistical pattern-matching and probabilistic analysis. This was—and still is—efficient for many applications, but it doesn’t allow the AI sufficient time to thoroughly evaluate multiple solution paths.

In newer models, extended computation time during inference—seconds, minutes, or even longer—allows the AI to employ more sophisticated internal reinforcement learning. This opens the door for multi-step problem-solving and more nuanced decision-making.
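One simple, widely used way to see this compute-for-quality trade-off is self-consistency sampling: ask the model the same question several times and keep the majority answer. In the sketch below, sample_answer() is a hypothetical stand-in for a call to a reasoning model at nonzero temperature; the noisy oracle only exists so the example runs.

```python
# Minimal sketch of trading inference-time compute for accuracy via
# self-consistency: sample several candidates, keep the majority vote.
import random
from collections import Counter

def sample_answer(question: str) -> str:
    # Hypothetical stand-in for an LLM call: a noisy oracle that
    # answers correctly 60% of the time.
    return "42" if random.random() < 0.6 else random.choice(["41", "43"])

def self_consistent_answer(question: str, n_samples: int = 15) -> str:
    """Spend n_samples model calls, return the most common answer."""
    votes = Counter(sample_answer(question) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

print(self_consistent_answer("What is six times seven?"))
```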

To illustrate future use cases for reasoning-capable AI, Ram offers the example of a NASA rover sent to explore the surface of Mars. “Decisions need to be made at every moment around which path to take, what to explore, and there has to be a risk-reward trade-off. The AI has to be able to assess, ‘Am I about to jump off a cliff? Or, if I study this rock and I have a limited amount of time and budget, is this really the one that’s scientifically more worthwhile?’” Making these assessments successfully could result in groundbreaking scientific discoveries at previously unthinkable speed and scale.

Reasoning capabilities are also a milestone in the proliferation of agentic AI systems: autonomous applications that perform tasks on behalf of users, such as scheduling appointments or booking travel itineraries. “Whether you’re asking AI to make a reservation, provide a literature summary, fold a towel, or pick up a piece of rock, it needs to first be able to understand the environment—what we call perception—comprehend the instructions and then move into a planning and decision-making phase,” Ram explains.

Enterprise applications of reasoning-capable AI systems

The enterprise applications for reasoning-capable AI are far-reaching. In health care, reasoning AI systems could analyze patient data, medical literature, and treatment protocols to support diagnostic or treatment decisions. In scientific research, reasoning models could formulate hypotheses, design experimental protocols, and interpret complex results—potentially accelerating discoveries across fields from materials science to pharmaceuticals. In financial analysis, reasoning AI could help evaluate investment opportunities or market expansion strategies, as well as develop risk profiles or economic forecasts.

Armed with these insights, their own experience, and emotional intelligence, human doctors, researchers, and financial analysts could make more informed decisions, faster. But before setting these systems loose in the wild, safeguards and governance frameworks will need to be ironclad, particularly in high-stakes contexts like health care or autonomous vehicles.

“For a self-driving car, there are real-time decisions that need to be made vis-a-vis whether it turns the steering wheel to the left or the right, whether it hits the gas pedal or the brake—you absolutely do not want to hit a pedestrian or get into an accident,” says Ram. “Being able to reason through situations and make an ‘optimal’ decision is something that reasoning models will have to do going forward.”

The infrastructure underpinning AI reasoning

To operate optimally, reasoning models require significantly more computational resources for inference. This creates distinct scaling challenges. Specifically, because the inference durations of reasoning models can vary widely—from just a few seconds to many minutes—load balancing across these diverse tasks can be challenging.
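To make the load-balancing challenge concrete, here is a minimal least-loaded scheduling sketch in Python. The estimated durations are assumptions for illustration; production systems must also cope with estimates that are wrong or unavailable.

```python
# Minimal least-loaded scheduling sketch for inference requests whose
# durations vary widely. A heap tracks each worker's queued seconds
# and always hands the next request to the least-busy worker.
import heapq

def assign_requests(durations: list[float], n_workers: int) -> list[int]:
    """Return the worker index chosen for each request, in order."""
    heap = [(0.0, w) for w in range(n_workers)]  # (queued seconds, id)
    heapq.heapify(heap)
    assignments = []
    for d in durations:
        load, worker = heapq.heappop(heap)
        assignments.append(worker)
        heapq.heappush(heap, (load + d, worker))
    return assignments

# A mix of quick lookups and minutes-long reasoning traces.
estimated = [0.5, 120.0, 2.0, 45.0, 0.8, 300.0, 5.0]
print(assign_requests(estimated, n_workers=3))
```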

Overcoming these hurdles requires tight collaboration between infrastructure providers and hardware manufacturers, says Ram, speaking of Microsoft’s collaboration with NVIDIA, which brings its accelerated computing platform to Microsoft products, including Azure AI.

“When we think about Azure, and when we think about deploying systems for AI training and inference, we really have to think about the entire system as a whole,” Ram explains. “What are you going to do differently in the data center? What are you going to do about multiple data centers? How are you going to connect them?” These considerations extend into reliability challenges at all scales: from memory errors at the silicon level, to transmission errors within and across servers, thermal anomalies, and even data center-level issues like power fluctuations—all of which require sophisticated monitoring and rapid response systems.

By creating a holistic system architecture designed to handle fluctuating AI demands, Microsoft and NVIDIA’s collaboration allows companies to harness the power of reasoning models without needing to manage the underlying complexity. In addition to performance benefits, these types of collaborations allow companies to keep pace with a tech landscape evolving at breakneck speed. “Velocity is a unique challenge in this space,” says Ram. “Every three months, there is a new foundation model. The hardware is also evolving very fast—in the last four years, we’ve deployed each generation of NVIDIA GPUs and now the NVIDIA GB200 NVL72. Leading the field really does require a very close collaboration between Microsoft and NVIDIA to share roadmaps, timelines, and designs on the hardware engineering side, qualifications and validation suites, issues that arise in production, and so on.”

Advancements in AI infrastructure designed specifically for reasoning and agentic models are critical for bringing reasoning-capable AI to a broader range of organizations. Without robust, accessible infrastructure, the benefits of reasoning models will remain relegated to companies with massive computing resources.

Looking ahead, the evolution of reasoning-capable AI systems and the infrastructure that supports them promises even greater gains. For Ram, the frontier extends beyond enterprise applications to scientific discovery and breakthroughs that propel humanity forward: “The day when these agentic systems can power scientific research and propose new hypotheses that can lead to a Nobel Prize, I think that’s the day when we can say that this evolution is complete.”

To learn more, please read Microsoft and NVIDIA accelerate AI development and performance, watch the NVIDIA GTC AI Conference sessions on demand, and explore the topic areas of Azure AI solutions and Azure AI infrastructure.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

A vision for the future of automation

The manufacturing industry is at a crossroads: Geopolitical instability is fracturing supply chains from the Suez to Shenzhen, impacting the flow of materials. Businesses are battling rising costs and inflation, coupled with a shrinking labor force, with more than half a million unfilled manufacturing jobs in the U.S. alone. And climate change is further intensifying the pressure, with more frequent extreme weather events and tightening environmental regulations forcing companies to rethink how they operate. New solutions are imperative.

Meanwhile, advanced automation, powered by the convergence of emerging and established technologies, including industrial AI, digital twins, the internet of things (IoT), and advanced robotics, promises greater resilience, flexibility, sustainability, and efficiency for industry. Individual success stories have demonstrated the transformative power of these technologies, with AI-driven predictive maintenance, for example, reducing downtime by up to 50%. Digital twin simulations can significantly reduce time to market, and bring environmental dividends, too: One survey found 77% of leaders expect digital twins to reduce carbon emissions by 15% on average.

Yet, broad adoption of this advanced automation has lagged. “That’s not necessarily or just a technology gap,” says John Hart, professor of mechanical engineering and director of the Center for Advanced Production Technologies at MIT. “It relates to workforce capabilities and financial commitments and risk required.” For small and medium enterprises, and those with brownfield sites—older facilities with legacy systems—the barriers to implementation are significant.

In recent years, governments have stepped in to accelerate industrial progress. Through a revival of industrial policies, governments are incentivizing high-tech manufacturing, re-localizing critical production processes, and reducing reliance on fragile global supply chains.

All these developments converge in a key moment for manufacturing. The external pressures on the industry—met with technological progress and these new political incentives—may finally enable the shift toward advanced automation.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The machines are rising — but developers still hold the keys

Rumors of the ongoing death of software development — that it’s being slain by AI — are greatly exaggerated. In reality, software development is at a fork in the road: embracing the (currently) far-off notion of fully automated software development, or acknowledging that the work of a software developer is much more than just writing lines of code.

The decision the industry makes could have significant long-term consequences. Increasing complacency around AI-generated code and a shift to what has been termed “vibe coding” — where code is generated through natural language prompts until the results seem to work — will lead to code that’s more error-strewn, more expensive to run and harder to change in the future. And, if the devaluation of software development skills continues, we may even lack a workforce with the skills and knowledge to fix things down the line. 

This means software developers are going to become more important to how the world builds and maintains software. Yes, there are many ways their practices will evolve thanks to AI coding assistance, but in a world of proliferating machine-generated code, developer judgment and experience will be vital.

The dangers of AI-generated code are already here

The risks of AI-generated code aren’t science fiction: they’re with us today. Research done by GitClear earlier this year indicates that with AI coding assistants (like GitHub Copilot) going mainstream, code churn — which GitClear defines as “changes that were either incomplete or erroneous when the author initially wrote, committed, and pushed them to the company’s git repo” — has significantly increased. GitClear also found there was a marked decrease in the number of lines of code that have been moved, a signal for refactored code (essentially the care and feeding to make it more effective).

In other words, from the time coding assistants were introduced there’s been a pronounced increase in lines of code without a commensurate increase in lines deleted, updated, or replaced. Simultaneously, there’s been a decrease in lines moved — indicating a lot of code has been written but not refactored. More code isn’t necessarily a good thing (sometimes quite the opposite); GitClear’s findings ultimately point to complacency and a lack of rigor about code quality.
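
GitClear’s precise methodology is its own, but the basic imbalance it describes is easy to eyeball in any repository. Below is a minimal, hypothetical sketch (the helper name and the crude added-to-deleted ratio are ours, not GitClear’s) that tallies lines added versus lines deleted across a git history:

```python
# A rough, illustrative approximation -- not GitClear's methodology.
# It totals lines added and deleted across a repository's history using
# `git log --numstat`.
import subprocess

def add_delete_totals(repo_path: str = ".") -> tuple[int, int]:
    """Sum lines added and deleted across a repo's full git history."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--numstat", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    added = deleted = 0
    for line in log.splitlines():
        parts = line.split("\t")
        # numstat lines look like "<added>\t<deleted>\t<path>";
        # binary files report "-" for the counts and are skipped here.
        if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
            added += int(parts[0])
            deleted += int(parts[1])
    return added, deleted

if __name__ == "__main__":
    added, deleted = add_delete_totals()
    print(f"added: {added}, deleted: {deleted}, "
          f"ratio: {added / max(deleted, 1):.2f}")
```

A ratio that climbs steadily over time is consistent with the pattern GitClear describes: plenty of new code arriving, little of the old code being revised or removed.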

Can AI be removed from software development?

AI doesn’t need to be removed from software development and delivery, however. On the contrary, there’s plenty to be excited about. As noted in the latest volume of the Technology Radar — Thoughtworks’ report on technologies and practices drawn from our work with hundreds of clients all over the world — the coding assistance space is full of opportunities.

Specifically, the report noted that tools like Cursor, Cline and Windsurf can enable software engineering agents. What this looks like in practice is an agent-like feature inside the development environment that developers can direct, through natural language prompts, to perform specific coding tasks. This enables a true human/machine partnership.

That being said, to focus only on code generation is to miss the variety of ways AI can help software developers. For example, Thoughtworks has been interested in how generative AI can be used to understand legacy codebases, and we see a lot of promise in tools like Unblocked, an AI team assistant that helps teams do just that. In fact, Anthropic’s Claude Code helped us add support for new languages in CodeConcise, an internal tool we use to understand legacy systems. While our success was mixed, we do think there’s real promise here.

Tightening practices to better leverage AI

It’s important to remember that much of the work developers do isn’t developing something new from scratch. A large proportion of their work is evolving and adapting existing (and sometimes legacy) software. Sprawling, janky codebases that have accumulated technical debt are, unfortunately, the norm. Simply applying AI will likely make things worse, not better, especially with approaches like vibe coding.

This is why developer judgment will become more critical than ever. In the latest edition of the Technology Radar report, AI-friendly code design is highlighted, based on our experience that AI coding assistants perform best with well-structured codebases. 

In practice, this requires many different things, including expressive naming that clearly communicates context (essential for code maintenance), reducing duplicate code, and ensuring modularity and effective abstractions. Done together, these will all help make code more legible to AI systems.
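
As a hypothetical illustration (the functions and names below are invented for this example, not taken from the Technology Radar), a small refactor shows what those principles look like in practice:

```python
# Before: terse names and inlined logic leave an AI assistant (or a
# human reader) guessing at intent.
def calc(d, t):
    return sum(x["amount"] for x in d) * (1 + t)

# After: expressive names, a docstring, and a small reusable helper make
# the intent explicit -- the well-structured shape coding assistants
# handle best.
def total_order_value(line_items: list[dict], tax_rate: float) -> float:
    """Sum line-item amounts and apply tax to the subtotal."""
    subtotal = sum(item["amount"] for item in line_items)
    return apply_tax(subtotal, tax_rate)

def apply_tax(amount: float, tax_rate: float) -> float:
    """Apply a fractional tax rate (e.g. 0.2 for 20%) to an amount."""
    return amount * (1 + tax_rate)
```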

Good coding practices are all too easy to overlook when productivity and effectiveness are measured purely in terms of output. That was true before AI tooling existed, but it is exactly why software development now needs to put good coding first.

AI assistance demands greater human responsibility

Instagram co-founder Mike Krieger recently claimed that in three years software engineers won’t write any code: they will only review AI-created code. This might sound like a huge claim, but it’s important to remember that reviewing code has always been a major part of software development work. With this in mind, perhaps the evolution of software development won’t be as dramatic as some fear.

But there’s another argument: as AI becomes embedded in how we build software, software developers will take on more responsibility, not less. This is something we’ve discussed a lot at Thoughtworks: the job of verifying that an AI-built system is correct will fall to humans. Yes, verification itself might be AI-assisted, but it will be the role of the software developer to ensure confidence. 

In a world where trust is becoming highly valuable — as evidenced by the emergence of the chief trust officer — the work of software developers is even more critical to the infrastructure of global industry. It’s vital software development is valued: the impact of thoughtless automation and pure vibes could prove incredibly problematic (and costly) in the years to come.

This content was produced by Thoughtworks. It was not written by MIT Technology Review’s editorial staff.

Powering the food industry with AI

There has never been a more pressing time for food producers to harness technology to tackle the sector’s tough mission: to produce ever more healthy and appealing food for a growing global population in a way that is resilient and affordable, all while minimizing waste and reducing the sector’s environmental impact. From farm to factory, artificial intelligence and machine learning can support these goals by increasing efficiency, optimizing supply chains, and accelerating the research and development of new types of healthy products.

In agriculture, AI is already helping farmers to monitor crop health, tailor the delivery of inputs, and make harvesting more accurate and efficient. In labs, AI is powering experiments in gene editing to improve crop resilience and enhance the nutritional value of raw ingredients. For processed foods, AI is optimizing production economics, improving the texture and flavor of products like alternative proteins and healthier snacks, and strengthening food safety processes too. 

But despite this promise, industry adoption still lags. Data-sharing remains limited and companies across the value chain have vastly different needs and capabilities. There are also few standards and data governance protocols in place, and more talent and skills are needed to keep pace with the technological wave. 

All the same, progress is being made and the potential for AI in the food sector is huge. Key findings from the report are as follows: 

Predictive analytics are accelerating R&D cycles in crop and food science. AI reduces the time and resources needed to experiment with new food products and turns traditional trial-and-error cycles into more efficient, data-driven discoveries. Advanced models and simulations enable scientists to explore natural ingredients and processes by simulating thousands of conditions, configurations, and genetic variations until they crack the right combination (a toy sketch of this screening approach follows these findings).

AI is bringing data-driven insights to a fragmented supply chain. AI can revolutionize the food industry’s complex value chain by breaking operational silos and translating vast streams of data into actionable intelligence. Notably, large language models (LLMs) and chatbots can serve as digital interpreters, democratizing access to data analysis for farmers and growers, and enabling more informed, strategic decisions by food companies. 

Partnerships are crucial for maximizing respective strengths. While large agricultural companies lead in AI implementation, promising breakthroughs often emerge from strategic collaborations that leverage complementary strengths with academic institutions and startups. Large companies contribute extensive datasets and industry experience, while startups bring innovation, creativity, and a clean data slate. Combining expertise in a collaborative approach can increase the uptake of AI. 
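
To make that screening idea concrete, here is a toy sketch in which an entirely invented scoring function stands in for a trained predictive model; the parameter names and ranges are illustrative assumptions, not drawn from the report:

```python
# Toy in-silico screening sketch: the quality function below is invented
# for illustration -- in practice it would be a model trained on
# experimental data. The point is the shape of the workflow: score
# thousands of candidate formulations computationally instead of running
# each one as a physical trial.
from itertools import product

def predicted_quality(protein_pct: int, moisture_pct: int, temp_c: int) -> float:
    """Hypothetical stand-in for a trained texture/flavor quality model."""
    return (
        -((protein_pct - 22) ** 2)
        - ((moisture_pct - 55) ** 2) / 4
        - ((temp_c - 140) ** 2) / 100
    )

# About 3,400 combinations of protein content, moisture, and process
# temperature, scored in milliseconds.
candidates = product(range(10, 35), range(40, 70, 2), range(100, 190, 10))
best = max(candidates, key=lambda c: predicted_quality(*c))
print(f"Best candidate (protein %, moisture %, temp °C): {best}")
```

The workflow, not the made-up numbers, is the point: a predictive model lets thousands of candidate formulations be ranked computationally before a single physical trial is run.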

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.