Designing the future of entertainment

An entertainment revolution, powered by AI and other emerging technologies, is fundamentally changing how content is created and consumed today. Media and entertainment (M&E) brands are faced with unprecedented opportunities—to reimagine costly and complex production workloads, to predict the success of new scripts or outlines, and to deliver immersive entertainment in novel formats like virtual reality (VR) and the metaverse. Meanwhile, the boundaries between entertainment formats—from gaming to movies and back—are blurring, as new alliances form across industries, and hardware innovations like smart glasses and autonomous vehicles make media as ubiquitous as air.

At the same time, media and entertainment brands are facing competitive threats. They must reinvent their business models and identify new revenue streams in a more fragmented and complex consumer landscape. They must keep up with advances in hardware and networking, while building an IT infrastructure to support AI and related technologies. Digital media standards will need to evolve to ensure interoperability and seamless experiences, while companies search for the right balance between human and machine, and protect their intellectual property and data.

This report examines the key technology shifts transforming today’s media and entertainment industry and explores their business implications. Based on in-depth interviews with media and entertainment executives, startup founders, industry analysts, and experts, the report outlines the challenges and opportunities that tech-savvy business leaders will find ahead.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Harnessing cloud and AI to power a sustainable future 

Organizations working toward ambitious sustainability targets are finding an ally in emerging technologies. In agriculture, for instance, AI can use satellite imagery and real-time weather data to optimize irrigation and reduce water usage. In urban areas, cloud-enabled AI can power intelligent traffic systems, rerouting vehicles to cut commute times and emissions. At an industrial level, advanced algorithms can predict equipment failures days or even weeks in advance. 

But AI needs a robust foundation to deliver on its lofty promises—and cloud computing provides that bedrock. As AI and cloud continue to converge and mature, organizations are discovering new ways to be more environmentally conscious while driving operational efficiencies. 

Data from a poll conducted by MIT Technology Review Insights in 2024 suggests growing momentum for this dynamic duo: 38% of executives polled say that cloud and AI are key components of their company’s sustainability initiatives, and another 35% say the combination is making a meaningful contribution to sustainability goals (see Figure 1). 

This enthusiasm isn’t just theoretical, either. Consider that 45% of respondents identified energy consumption optimization as their most relevant use case for AI and cloud in sustainability initiatives. And organizations are backing these priorities with investment—more than 50% of companies represented in the poll plan to increase their spending on cloud and AI-focused sustainability initiatives by 25% or more over the next two years. 

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Reframing digital transformation through the lens of generative AI

Enterprise adoption of generative AI technologies has undergone explosive growth in the last two years and counting. Powerful solutions underpinned by this new generation of large language models (LLMs) have been used to accelerate research, automate content creation, and replace clunky chatbots with AI assistants and more sophisticated AI agents that closely mimic human interaction.

“In 2023 and the first part of 2024, we saw enterprises experimenting, trying out new use cases to see, ‘What can this new technology do for me?’” explains Arthy Krishnamurthy, senior director for business transformation at Dataiku. But while many organizations were eager to adopt and exploit these exciting new capabilities, some may have underestimated the need to thoroughly scrutinize AI-related risks and recalibrate existing frameworks and forecasts for digital transformation.

“Now, the question is more around how fundamentally can this technology reshape our competitive landscape?” says Krishnamurthy. “We are no longer just talking about technological implementation but about organizational transformation. Expansion is not a linear progression but a strategic recalibration that demands deep systems thinking.”

Key to this strategic recalibration will be a refined approach to ROI, delivery, and governance in the context of generative AI-led digital transformation. “This really has to start in the C-suite and at the board level,” says Kevin Powers, director of Boston College Law School’s Master of Legal Studies program in cybersecurity, risk, and governance. “Focus on AI as something that is core to your business. Have a plan of action.”

Download the full article.

Implementing responsible AI in the generative age

Many organizations have experimented with AI, but they haven’t always gotten the full value from their investments. A host of issues standing in the way center on the accuracy, fairness, and security of AI systems. In response, organizations are actively exploring the principles of responsible AI: the idea that AI systems must be fair, transparent, and beneficial to society if the technology is to be widely adopted. 

When responsible AI is done right, it unlocks trust and, with it, customer adoption of enterprise AI. According to the US National Institute of Standards and Technology, the essential building blocks of AI trustworthiness include: 

  • Validity and reliability 
  • Safety
  • Security and resiliency 
  • Accountability and transparency 
  • Explainability and interpretability 
  • Privacy
  • Fairness with mitigation of harmful bias 

To investigate the current landscape of responsible AI across the enterprise, MIT Technology Review Insights surveyed 250 business leaders about how they’re implementing principles that ensure AI trustworthiness. The poll found that responsible AI is important to executives, with 87% of respondents rating it a high or medium priority for their organization.

A majority of respondents (76%) also say that responsible AI is a high or medium priority specifically for creating a competitive advantage. But relatively few have figured out how to turn these ideas into reality. We found that only 15% of those surveyed felt highly prepared to adopt effective responsible AI practices, despite the importance they placed on them. 

Putting responsible AI into practice in the age of generative AI requires a series of best practices that leading companies are adopting. These practices can include cataloging AI models and data and implementing governance controls. Companies may benefit from conducting rigorous assessments, testing, and audits for risk, security, and regulatory compliance. At the same time, they should also empower employees with training at scale and ultimately make responsible AI a leadership priority to ensure their change efforts stick. 
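
As a concrete illustration of the first of those practices, here is a minimal sketch of what an internal AI model catalog with basic governance metadata might look like. The field names, risk tiers, and audit cadence are illustrative assumptions, not a prescribed standard.

```python
# Illustrative sketch of an internal AI model catalog with governance metadata.
# Field names, risk tiers, and the audit cadence are assumptions for demonstration.
from dataclasses import dataclass, field
from datetime import date, timedelta


@dataclass
class ModelRecord:
    name: str
    owner: str
    use_case: str
    risk_tier: str                          # e.g., "low", "medium", "high"
    data_sources: list[str] = field(default_factory=list)
    last_audit: date | None = None          # None means never audited
    approved_for_production: bool = False


AUDIT_INTERVAL = timedelta(days=180)        # assumed review cadence


def overdue_for_audit(record: ModelRecord, today: date) -> bool:
    """Flag models that have never been audited or whose last audit has lapsed."""
    if record.last_audit is None:
        return True
    return today - record.last_audit > AUDIT_INTERVAL


catalog = [
    ModelRecord("support-chat-llm", "cx-team", "customer support assistant",
                "high", ["support tickets", "product docs"], date(2024, 11, 1), True),
    ModelRecord("churn-predictor", "analytics", "retention scoring",
                "medium", ["billing history"]),
]

for record in catalog:
    if overdue_for_audit(record, date.today()):
        print(f"{record.name}: audit required before continued use")
```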

“We all know AI is the most influential change in technology that we’ve seen, but there’s a huge disconnect,” says Steven Hall, chief AI officer and president of EMEA at ISG, a global technology research and IT advisory firm. “Everybody understands how transformative AI is going to be and wants strong governance, but the operating model and the funding allocated to responsible AI are well below where they need to be given its criticality to the organization.” 

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Fueling the future of digital transformation

In the rapidly evolving landscape of digital innovation, staying adaptable isn’t just a strategy—it’s a survival skill. “Everybody has a plan until they get punched in the face,” says Luis Niño, digital manager for technology ventures and innovation at Chevron, quoting Mike Tyson.

Drawing from a career that spans IT, HR, and infrastructure operations across the globe, Niño offers a unique perspective on innovation and how organizational microcultures within Chevron shape how digital transformation evolves. 

Centralized functions prioritize efficiency, relying on tools like AI, data analytics, and scalable system architectures. Meanwhile, business units focus on simplicity and effectiveness, deploying robotics and edge computing to meet site-specific needs and ensure safety.

“From a digital transformation standpoint, what I have learned is that you have to tie your technology to what outcomes drive results for both areas, but you have to allow yourself to be flexible, to be nimble, and to understand that change is constant,” he says.

Central to this transformation is the rise of industrial AI. Unlike consumer applications, industrial AI operates in high-stakes environments where the cost of errors can be severe. 

“The wealth of potential information needs to be contextualized, modeled, and governed because of the safety of those underlying processes,” says Niño. “If a machine reacts in ways you don’t expect, people could get hurt, and so there’s an extra level of care that needs to happen and that we need to think about as we deploy these technologies.”

Niño highlights Chevron’s efforts to use AI for predictive maintenance, subsurface analytics, and process automation, noting that “AI sits on top of that foundation of strong data management and robust telecommunications capabilities.” As such, AI is not just a tool but a transformation catalyst redefining how talent is managed, procurement is optimized, and safety is ensured.

Looking ahead, Niño emphasizes the importance of adaptability and collaboration: “Transformation is as much about technology as it is about people.” With initiatives like the Citizen Developer Program and Learn Digital, Chevron is empowering its workforce to bridge the gap between emerging technologies and everyday operations using an iterative mindset. 

Niño is also keeping watch over the convergence of technologies like AI, quantum computing, Internet of Things, and robotics, which hold the potential to transform how we produce and manage energy.

“My job is to keep an eye on those developments,” says Niño, “to make sure that we’re managing these things responsibly and the things that we test and trial and the things that we deploy, that we maintain a strict sense of responsibility to make sure that we keep everyone safe, our employees, our customers, and also our stakeholders from a broader perspective.”

This episode of Business Lab is produced in association with Infosys Cobalt.

Full Transcript 

Megan Tatum: From MIT Technology Review, I’m Megan Tatum and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. 

Our topic today is digital transformation. From back-office operations to infrastructure in the field, like oil rigs, companies continue to look for ways to increase profit, meet sustainability goals, and invest in the latest and greatest technology. 

Two words for you: enabling innovation. 

My guest is Luis Niño, who is the digital manager of technology ventures and innovation at Chevron. This podcast is produced in association with Infosys Cobalt. 

Welcome, Luis. 

Luis Niño: Thank you, Megan. Thank you for having me. 

Megan: Thank you so much for joining us. Just to set some context, Luis, you’ve had a really diverse career at Chevron, spanning IT, HR, and infrastructure operations. I wonder, how have those different roles shaped your approach to innovation and digital strategy? 

Luis: Thank you for the question. And you’re right, my career has spanned many different areas and geographies in the company. It really feels like I’ve worked for different companies every time I change roles. Like I said, different functions, organizations, and locations: I’ve had stints here in Houston, in Bakersfield, California, and in Buenos Aires, Argentina. From an organizational standpoint, I’ve seen central teams, international service centers, as you mentioned, field infrastructure and operations organizations in our business units, and I’ve also had corporate function roles. 

And the reason why I mentioned that diversity is that each one of those looks at digital transformation and innovation through its own lens. From the priority to scale and streamline in central organizations to the need to optimize and simplify out in business units and what I like to call the periphery, you really learn, first off, about the concept of microcultures and how different these organizations can be even within our own walls, but also how those come together in organizations like Chevron. 

Over time, I would highlight two things. In central organizations, whether that’s functions like IT, HR, or our technical center (we have a central technical center), we continuously look for efficiencies in scaling, for system architectures that allow for economies of scale. As you can imagine, the name of the game is efficiency. We have also looked to improve employee experience. We want to orchestrate ecosystems of large technology vendors that give us an edge and move the massive organization forward. In central areas like this, I would say that data analytics, data science, and artificial intelligence have become the fundamental tools to achieve those objectives. 

Now, if you allow that pendulum to swing out to the business units and to the periphery, the name of the game is effectiveness and simplicity. The priority for the business units is to find and execute technologies that help us achieve the local objectives and keep our people safe. Especially when we are talking about our manufacturing environments where there’s risk for our folks. In these areas, technologies like robotics, the Internet of Things, and obviously edge computing are currently the enablers of information. 

I wouldn’t want to miss the opportunity to say that both of those, let’s call it, areas of the company, rely on the same foundation and that is a foundation of strong data management, of strong network and telecommunications capabilities because those are the veins through which the data flows and everything relies on data. 

In my experience, this pendulum also drives our technology priorities and our technology strategy. From a digital transformation standpoint, what I have learned is that you have to tie your technology to what outcomes drive results for both areas, but you have to allow yourself to be flexible, to be nimble, and to understand that change is constant. If you are deploying something in the center and you suddenly realize that some business unit already has a solution, you cannot just say, let’s shut it down and go with what I said. You have to adapt, you have to understand behavioral change management and you really have to make sure that change and adjustments are your bread and butter. 

I don’t know if you know this, Megan, but there’s a popular fight happening this weekend with Mike Tyson and he has a saying, and that is everybody has a plan until they get punched in the face. And what he’s trying to say is you have to be adaptable. The plan is good, but you have to make sure that you remain agile. 

Megan: Yeah, absolutely. 

Luis: And then I guess the last lesson really quick is about risk management or maybe risk appetite. Each group has its own risk appetite depending on the lens or where they’re sitting, and this may create some conflict between organizations that want to move really, really fast and have urgency and others that want to take a step back and make sure that we’re doing things right. At the end, I think striking that balance is a question for leadership, to make sure that they have a pulse on our ability to change. 

Megan: Absolutely, and you’ve mentioned a few different elements and technologies I’d love to dig into a bit more detail on. One of which is artificial intelligence because I know Chevron has been exploring AI for several years now. I wonder if you could tell us about some of the AI use cases it’s working on and what frameworks you’ve developed for effective adoption as well. 

Luis: Yeah, absolutely. This is the big one, isn’t it? Everybody’s talking about AI. As you can imagine, the focus in our company is what is now being branded as industrial AI. That’s really a simple term to explain that AI is being applied to industrial and manufacturing settings. And like other AI, and as I mentioned before, the foundation remains data. I want to stress the importance of data here. 

One of the differences however is that in the case of industrial AI, data comes from a variety of sources. Some of them are very critical. Some of them are non-critical. Sources like operating technologies, process control networks, and SCADA, all the way to Internet of Things sensors or industrial Internet of Things sensors, and unstructured data like engineering documentation and IT data. These are massive amounts of information coming from different places and also from different security structures. The complexity of industrial AI is considerably higher than what I would call consumer or productivity AI. 

Megan: Right. 

Luis: The wealth of potential information needs to be contextualized, modeled, and governed because of the safety of those underlying processes. When you’re in an industrial setting, if a machine reacts in ways you don’t expect, people could get hurt, and so there’s an extra level of care that needs to happen and that we need to think about as we deploy these technologies. 

AI sits on top of that foundation and it takes different shapes. It can show up as a copilot like the ones that have been popularized recently, or it can show up as agentic AI, which is something that we’re looking at closely now. And agentic AI is just a term to mean that AI can operate autonomously and can use complex reasoning to solve multistep problems in an industrial setting. 

So with that in mind, going back to your question, we use both kinds of AI for multiple use cases, including predictive maintenance, subsurface analytics, process automation, and workflow optimization, and also end-user productivity. Each one of those use cases obviously needs specific objectives that the business is looking at in each area of the value chain. 

In predictive maintenance, for example, we monitor and we analyze equipment health, we prevent failures, and we allow for preventive maintenance and reduced downtime. The AI helps us understand when machinery needs to be maintained in order to prevent failure instead of just waiting for it to happen. In subsurface analysis, we’re exploring AI to develop better models of hydrocarbon reservoirs. We are exploring AI to forecast geomechanical models and to capture and understand data from fiber optic sensing. Fiber optic sensing is a capability that has proven very valuable to us, and AI is helping us make sense of the wealth of information that comes out of the hole, as we like to say. Of course, we don’t do this alone. We partner with many third-party organizations, with vendors, and with subject matter experts inside of Chevron to move the projects forward. 

There are several other areas beyond industrial AI that we are looking at. AI really is a transformation catalyst, and so areas like finance and law and procurement and HR, we’re also doing testing in those corporate areas. I can tell you that I’ve been part of projects in procurement, in HR. When I was in HR we ran a pretty amazing effort in partnership with a third-party company, and what they do is they seek to transform the way we understand talent, and the way they do that is they are trying to provide data-driven frameworks to make talent decisions. 

And so they redefine talent by framing data in the form of skills, and as they do this, they help de-bias processes that can be prone to unconscious biases and perspectives. It really is fascinating to think of talent in terms of skills and to start decoupling it from what we’ve known since the industrial era began, which is that people fit in jobs. Now the question is more the other way around: How can jobs adapt to people’s skills? And then in procurement, AI is basically helping us open the aperture to a wider array of vendors in an automated fashion that makes us better partners. It’s more cost-effective. It’s really helpful. 

Before I close here, you did reference frameworks, so the framework of industrial AI versus what I call productivity AI, the understanding of the use cases. All of this sits on top of our responsible AI frameworks. We have set up a central enterprise AI organization and they have really done a great job in developing key areas of responsible AI as well as training and adoption frameworks. This includes how to use AI, how not to use AI, what data we can share with the different GPTs that are available to us. 

We are now members of organizations like the Responsible AI Institute. This is an organization that fosters the safe use of AI and trustworthy AI. But our own responsible AI framework involves four pillars. The first one is the principles, and this is how we make sure we continue to stay aligned with the values that drive this company, which we call The Chevron Way. It includes assessment, making sure that we evaluate these solutions in proportion to impact and risk. As I mentioned, when you’re talking about industrial processes, people’s lives are at stake. And so we take a very close look at what we are putting out there and how we ensure that it keeps our people safe. It includes education: as I mentioned, training our people to augment their capabilities and reinforcing responsible principles. And the last of the four is governance: oversight and accountability through control structures that we are putting in place. 

Megan: Fantastic. Thank you so much for those really fascinating specific examples as well. It’s great to hear about. And digital transformation, which you did touch on briefly, has become critical of course to enable business growth and innovation. I wonder what has Chevron’s digital transformation looked like and how has the shift affected overall operations and the way employees engage with technology as well? 

Luis: Yeah, yeah. That’s a really good question. The term digital transformation is interpreted in many different ways. For me, it really is about leveraging technology to drive business results and to drive business transformation. We usually tend to specify emerging technology as the catalyst for transformation. I think that is okay, but I also think that there are ways that you can drive digital transformation with technology that’s not necessarily emerging but is being optimized, and so under this umbrella, we include everything from our Citizen Developer Program to complex industry partnerships that help us maximize the value of data. 

The Citizen Developer Program has been very successful in helping bridge the gap between our technical software engineering and software development practices and the people who are out there doing the work, getting them familiar with the way to build solutions and demystifying it. 

I do believe that transformation is as much about technology as it is about people. And so to go back to the responsible AI framework, we are actively training and upskilling the workforce. We created a program called Learn Digital that helps employees embrace the technologies. I mentioned the concept of demystifying. It’s really important that people don’t fall into the trap of getting scared by the potential of the technology or the fact that it is new and we help them and we give them the tools to bridge the change management gap so they can get to use them and get the most out of them. 

At a high level, our transformation has followed the cyclical nature that pretty much any transformation does. We have identified the data foundations that we need to have. We have understood the impact of the processes that we are trying to digitize. We organize that information, then we streamline and automate processes, we learn, and now machines learn and then we do it all over again. And so this cyclical mindset, this iterative mindset has really taken hold in our culture and it has made us a little bit better at accepting the technologies that are driving the change. 

Megan: And to look at one of those technologies in a bit more detail, cloud computing has revolutionized infrastructure across industries. But there’s also a pendulum shift now toward hybrid and edge computing models. How is Chevron balancing cloud, hybrid, and edge strategies for optimal performance as well? 

Luis: Yeah, that’s a great question and I think you could argue that was the genesis of the digital transformation effort. It’s been a journey for us, and I think we’re not the only ones that started it as a cost savings and storage play, but then we got to this ever-increasing need for multiple things, like scaling compute power to support large language models and maximize how we run complex models. There’s an increasing need to store vast amounts of data for training and inference models while we improve data management and predict future needs. 

There’s also the opportunity to eliminate hardware constraints. One of the promises of cloud was that you would be able to ramp up and down depending on your compute needs as projects demanded. And that hasn’t stopped; that has only increased. And then there’s a need to be able to do this at a global level. For a company like ours that is distributed across the globe, we want to do this everywhere while actively managing those resources without the weight of the infrastructure that we used to carry on our books. Cloud has really helped us change the way we think about the digital assets that we have. 

It’s important also that it has created this symbiotic need to grow between AI and the cloud. So you don’t have the AI without the cloud, but now you don’t have the cloud without AI. In reality, we work on balancing the benefits of cloud and hybrid and edge computing, and we keep operational efficiency as our North Star. We have key partnerships in cloud; that’s something that I want to make sure I talk about. Microsoft is probably the most strategic of our partnerships because they’ve helped us set our foundation for cloud. But we also think of hybrid through the lens of leveraging a convenient, scalable public cloud and a very secure private cloud that helps us meet our operational and safety needs. 

Edge computing fills the gap or the need for low latency and real-time data processing, which are critical constraints for decision-making in most of the locations where we operate. You can think of an offshore rig, a refinery, an oil rig out in the field, and maybe even not-so-remote areas like here in our corporate offices. Putting that compute power close to the data source is critical. So we work and we partner with vendors to enable lighter compute that we can set at the edge and, I mentioned the foundation earlier, faster communication protocols at the edge that also solve the need for speed. 

But it is important to remember that you don’t want to think about edge computing and cloud as separate things. Cloud supports edge by providing centralized management and advanced analytics, among other things. You can train models in the cloud and then deploy them to edge devices, keeping real-time priorities in mind. I would say that edge computing also supports our cybersecurity strategy because it allows us to control and secure sensitive environments and information while we embed machine learning and AI capabilities out there. 

So I have mentioned use cases like predictive maintenance and safety, those are good examples of areas where we want to make sure our cybersecurity strategy is front and center. When I was talking about my experience I talked about the center and the edge. Our strategy to balance that pendulum relies on flexibility and on effective asset management. And so making sure that our cloud reflects those strategic realities gives us a good footing to achieve our corporate objectives. 

Megan: As you say, safety is a top priority. How do technologies like the Internet of Things and AI help enhance safety protocols, especially in the context of emissions tracking and leak detection? 

Luis: Yeah, thank you for the question. Safety is the most important thing that we think and talk about here at Chevron. There is nothing more important than ensuring that our people are safe and healthy, so I would break safety down into two. Before I jump to emissions tracking and leak detection, I just want to make a quick point on personal safety and how we leverage IoT and AI to that end. 

We use sensing capabilities that help us keep workers out of harm’s way, and so things like computer vision to identify and alert people who are coming into safety areas. We also use computer vision, for example, to identify PPE requirements—personal protective equipment requirements—and so if there are areas that require a certain type of clothing, a certain type of identification, or a hard hat, we are using technologies that can help us make sure people have that before they go into a particular area. 

We’re also using wearables. In one of the use cases, wearables help us track exhaustion and dehydration in locations where that creates inherent risk, so locations that are very hot, whether because of the weather or because they are enclosed. We can use wearables that tell us how fast a person is getting dehydrated, what levels of liquid or sodium they need to make sure they’re safe, or whether they need to take a break. We have those capabilities now. 

Going back to emissions tracking and leak detection, I think it’s actually the combination of IoT and AI that can transform how we prevent and react to those. In this case, we also deploy sensing capabilities. We use things like computer vision, like infrared capabilities, and we use others that deliver data to the AI models, which then alert and enable rapid response. 

The way I would explain how we use IoT and AI for safety, whether it’s personnel safety or emissions tracking and leak detection, is to think about sensors as the extension of human ability to sense. In some cases, you could argue it’s super abilities. And so if you think of sight normally you would’ve had supervisors or people out there that would be looking at the field and identifying issues. Well, now we can use computer vision with traditional RGB vision, we can use them with infrared, we can use multi-angle to identify patterns, and have AI tell us what’s going on. 

If you keep thinking about the human senses, that’s sight, but you can also use sound through ultrasonic sensors or microphone sensors. You can use touch through vibration recognition and heat recognition. And even more recently, this is something that we are testing more recently, you can use smell. There are companies that are starting to digitize smell. Pretty exciting, also a little bit crazy. But it is happening. And so these are all tools that any human would use to identify risk. Well, so now we can do it as an extension of our human abilities to do so. This way we can react much faster and better to the anomalies. 

A specific example with methane. We have a simple goal with methane, we want to keep methane in the pipe. Once it’s out, it’s really hard or almost impossible to take it back. Over the last six to seven years, we have reduced our methane intensity by over 60% and we’re leveraging technology to achieve that. We have deployed a methane detection program. We have trialed over 10 to 15 advanced methane detection technologies. 

A technology that I have been looking at recently is called Aquanta Vision. This is a company supported by an incubator program we have called Chevron Studio. We did this in partnership with the National Renewable Energy Laboratory, and what they do is they leverage optical gas imaging to detect methane effectively and to allow us to prevent it from escaping the pipe. So that’s just an example of the technologies that we’re leveraging in this space. 

Megan: Wow, that’s fascinating stuff. And on emissions as well, Chevron has made significant investments in new energy technologies like hydrogen, carbon capture, and renewables. How do these technologies fit into Chevron’s broader goal of reducing its carbon footprint? 

Luis: This is obviously a fascinating space for us, one that is ever-changing. It is honestly not my area of expertise. But what I can say is we truly believe we can achieve high returns and lower carbon, and that’s something that we communicate broadly. A few years ago, I believe it was 2021, we established our Chevron New Energies company and they actively explore lower carbon alternatives including hydrogen, renewables, and carbon capture offsets. 

My area, the digital area, and the convergence between digital technologies and the technical sciences will enable the techno-commercial viability of those business lines. Thinking about carbon capture, that is something we’ve done for a long time. We have decades of experience in carbon capture technologies across the world. 

One of our larger projects, the Gorgon Project in Australia, I think they’ve captured something between 5 and 10 million tons of CO2 emissions in the past few years, and so we have good expertise in that space. But we also actively partner in carbon capture. We have joined carbon capture hubs here in Houston, for example, where we are investing in companies like Carbon Clean, Carbon Engineering, and Svante. I’m familiar with these names because the corporate VC team is close to me. These companies provide technologies for direct air capture. They provide solutions for hard-to-abate industries. And so we want to keep an eye on these emerging capabilities and make use of them to continuously lower our carbon footprint. 

There are two areas here that I would like to talk about. Hydrogen first. This is another area that we’re familiar with. Our plan is to build on our existing assets and capabilities to deliver a large-scale hydrogen business. Since 2005, I think we’ve been doing retail hydrogen, and we also have several partnerships there. In renewables, we are creating a range of fuels for different transportation types. We use diesel, bio-based diesel, we use renewable natural gas, we use sustainable aviation fuel. Yeah, so these are all areas of importance to us. They’re emerging business lines that are young in comparison to the rest of our company. We’ve been a company for 140 years plus, and this started in 2021, so you can imagine how steep that learning curve is. 

I mentioned how we leverage our corporate venture capital team to learn and to keep an eye out for the emerging trends and technologies that we want to learn about. They leverage two things. They leverage a core fund, which is focused on areas that can drive innovation for our core business. And we have a separate future energy fund that explores areas that are emerging. Not only do they invest in places like hydrogen, carbon capture, and renewables, but they also may invest in other areas like wind and geothermal and nuclear capability. So we constantly keep our eyes open for these emerging technologies. 

Megan: I see. And I wonder if you could share a bit more actually about Chevron’s role in driving sustainable business innovation. I’m thinking of initiatives like converting used cooking oil into biodiesel, for example. I wonder how those contribute to that overall goal of creating a circular economy. 

Luis: Yeah, this is fascinating and I was so happy to learn a little bit more about this year when I had the chance to visit our offices in Iowa. I’ll get into that in a second. But happy to talk about this, again with the caveat that it’s not my area of expertise. 

Megan: Of course. 

Luis: In the case of biodiesel, we acquired a company called REG in 2022. They were one of the founders of the renewable fuels industry, and they honestly do incredible work to create energy through a process, I forget the name of the process to be honest. But at the most basic level, what they do is prepare feedstocks that come from different types of biomass, you mentioned cooking oils, there’s also soybeans, there’s animal fats. And through various chemical reactions, they convert components of the feedstock into biodiesel and glycerin. After that process, they separate unreacted methanol, which is recovered and recycled into the process, and the biodiesel goes through a final processing step to make sure that it meets the standards necessary to be commercialized. 

What REG has done is it has boosted our knowledge as a broader organization on how to do this better. They continuously look for bio-feedstocks that can help us deliver new types of energy. I had mentioned bio-based diesel. One of the areas that we’re very focused on right now is sustainable aviation fuel. I find that fascinating. The reason why this is working and the reason why this is exciting is because they brought this great expertise and capability into Chevron. And in turn, as a larger organization, we’re able to leverage our manufacturing and distribution capabilities to continue to provide that value to our customers. 

I mentioned that I learned a little bit more about this this year. I was lucky earlier in the year I was able to visit our REG offices in Ames, Iowa. That’s where they’re located. And I will tell you that the passion and commitment that those people have for the work that they do was incredibly energizing. These are folks who have helped us believe, really, that our promise of lower carbon is attainable. 

Megan: Wow. It sounds like there’s some fascinating work going on, which brings me to my final question: looking ahead, what emerging technologies are you most excited about, and how do you see them impacting both Chevron’s core business and the energy sector as a whole? 

Luis: Yeah, that’s a great question. I have no doubt that the energy business is changing and will continue to change only faster, both our core business as well as future energy, or the way it’s going to look in the future. Honestly, in my line of work, I come across exciting technology every day. The obvious answers are AI and industrial AI. These are things that are already changing the way we live without a doubt. You can see it in people’s productivity. You can see it in how we optimize and transform workflows. AI is changing everything. I am actually very, very interested in IoT, the Internet of Things, and robotics. The ability to protect humans in high-risk environments, like I mentioned, is critical to us, as is the opportunity to prevent high-risk events and predict when they’re likely to happen. 

This is pretty massive, both for our productivity objectives as well as for our lower carbon objectives. If we can predict when we are at risk of particular events, we could avoid them altogether. As I mentioned before, this ubiquitous ability to sense our surroundings is a capability that our industry, and I’m going to say humankind, is only beginning to explore. 

There’s another area that I didn’t talk too much about, which I think is coming, and that is quantum computing. Quantum computing promises to change the way we think of compute power and it will unlock our ability to simulate chemistry, to simulate molecular dynamics in ways we have not been able to do before. We’re working really hard in this space. When I say molecular dynamics, think of the way that we produce energy today. It is all about the molecule and understanding the interactions between hydrocarbon molecules and the environment. The ability to do that in multi-variable systems is something that quantum, we believe, can provide an edge on, and so we’re working really hard in this space. 

Yeah, there are so many, and having talked about all of them, AI, IoT, robotics, quantum, the most interesting thing to me is the convergence of all of them. If you think about the opportunity to leverage robotics, and to do it as the machines continue to control limited processes and understand what it is they need to do in a preventive and predictive way, there is such incredible potential to transform our lives, to make an impact in the world for the better. We see that potential. 

My job is to keep an eye on those developments, to make sure that we’re managing these things responsibly and the things that we test and trial and the things that we deploy, that we maintain a strict sense of responsibility to make sure that we keep everyone safe, our employees, our customers, and also our stakeholders from a broader perspective. 

Megan: Absolutely. Such an important point to finish on. And unfortunately, that is all the time we have for today, but what a fascinating conversation. Thank you so much for joining us on the Business Lab, Luis. 

Luis: Great to talk to you. 

Megan:  Thank you so much. That was Luis Niño, who is the digital manager of technology ventures and innovation at Chevron, who I spoke with today from Brighton, England. 

That’s it for this episode of Business Lab. I’m Megan Tatum, your host and a contributing editor at Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com. 

This show is available wherever you get your podcasts, and if you enjoyed this episode, we really hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review, and this episode was produced by Giro Studios. Thank you so much for listening. 

Training robots in the AI-powered industrial metaverse

Imagine the bustling floors of tomorrow’s manufacturing plant: Robots, well-versed in multiple disciplines through adaptive AI education, work seamlessly and safely alongside human counterparts. These robots can transition effortlessly between tasks—from assembling intricate electronic components to handling complex machinery assembly. Each robot’s unique education enables it to predict maintenance needs, optimize energy consumption, and innovate processes on the fly, dictated by real-time data analyses and learned experiences in their digital worlds.

Training for robots like this will happen in a “virtual school,” a meticulously simulated environment within the industrial metaverse. Here, robots learn complex skills on accelerated timeframes, acquiring in hours what might take humans months or even years.

Beyond traditional programming

Training for industrial robots was once like a traditional school: rigid, predictable, and limited to practicing the same tasks over and over. But now we’re at the threshold of the next era. Robots can learn in “virtual classrooms”—immersive environments in the industrial metaverse that use simulation, digital twins, and AI to mimic real-world conditions in detail. This digital world can provide an almost limitless training ground that mirrors real factories, warehouses, and production lines, allowing robots to practice tasks, encounter challenges, and develop problem-solving skills. 

What once took days or even weeks of real-world programming, with engineers painstakingly adjusting commands to get the robot to perform one simple task, can now be learned in hours in virtual spaces. This approach, known as simulation to reality (Sim2Real), blends virtual training with real-world application, bridging the gap between simulated learning and actual performance.

Although the industrial metaverse is still in its early stages, its potential to reshape robotic training is clear, and these new ways of upskilling robots can enable unprecedented flexibility.

Italian automation provider EPF found that AI shifted the company’s entire approach to developing robots. “We changed our development strategy from designing entire solutions from scratch to developing modular, flexible components that could be combined to create complete solutions, allowing for greater coherence and adaptability across different sectors,” says EPF’s chairman and CEO Franco Filippi.

Learning by doing

AI models gain power when trained on vast amounts of data, such as large sets of labeled examples, learning categories or classes by trial and error. In robotics, however, this approach would require hundreds of hours of robot time and human oversight to train a single task. Even the simplest of instructions, like “grab a bottle,” could result in many varied outcomes depending on the bottle’s shape, color, and environment. Training then becomes a monotonous loop that yields little significant progress for the time invested.

Building AI models that can generalize and then successfully complete a task regardless of the environment is key for advancing robotics. Researchers from New York University, Meta, and Hello Robot have introduced robot utility models that achieve a 90% success rate in performing basic tasks across unfamiliar environments without additional training. Large language models are used in combination with computer vision to provide continuous feedback to the robot on whether it has successfully completed the task. This feedback loop accelerates the learning process by combining multiple AI techniques—and avoids repetitive training cycles.
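
As a schematic illustration of that kind of feedback loop (not the researchers’ actual code), the sketch below shows a robot attempting a task, a vision-language model judging success from a camera frame, and failed attempts triggering a retry. Every function body is a hypothetical placeholder standing in for real robot and model APIs.

```python
# Schematic sketch of a check-and-retry feedback loop: a robot attempts a task,
# a vision-language model judges completion from a camera image, and failures
# trigger another attempt. All functions are hypothetical placeholders.

MAX_ATTEMPTS = 3


def execute_policy(task: str) -> None:
    """Placeholder: run the robot's utility-model policy for the given task."""


def capture_image() -> bytes:
    """Placeholder: grab a frame from the robot's camera."""
    return b""


def vision_language_check(image: bytes, task: str) -> bool:
    """Placeholder: ask a vision-language model whether the task looks complete."""
    return False


def run_task_with_feedback(task: str) -> bool:
    for attempt in range(1, MAX_ATTEMPTS + 1):
        execute_policy(task)                              # try the manipulation
        if vision_language_check(capture_image(), task):  # model confirms success
            return True
        print(f"Attempt {attempt} did not succeed; retrying with fresh feedback.")
    return False


run_task_with_feedback("pick up the bottle and place it in the bin")
```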

Robotics companies are now implementing advanced perception systems capable of training and generalizing across tasks and domains. For example, EPF worked with Siemens to integrate visual AI and object recognition into its robotics to create solutions that can adapt to varying product geometries and environmental conditions without mechanical reconfiguration.

Learning by imagining

Scarcity of training data is a constraint for AI, especially in robotics. However, innovations that use digital twins and synthetic data to train robots have significantly improved on previously costly approaches.

For example, Siemens’ SIMATIC Robot Pick AI expands on this vision of adaptability, transforming standard industrial robots—once limited to rigid, repetitive tasks—into complex machines. Trained on synthetic data—virtual simulations of shapes, materials, and environments—the AI prepares robots to handle unpredictable tasks, like picking unknown items from chaotic bins, with over 98% accuracy. When mistakes happen, the system learns, improving through real-world feedback. Crucially, this isn’t just a one-robot fix. Software updates scale across entire fleets, upgrading robots to work more flexibly and meet the rising demand for adaptive production.

Another example is the robotics firm ANYbotics, which generates 3D models of industrial environments that function as digital twins of real environments. Operational data, such as temperature, pressure, and flow rates, are integrated to create virtual replicas of physical facilities where robots can train. An energy plant, for example, can use its site plans to generate simulations of inspection tasks it needs robots to perform in its facilities. This speeds the robots’ training and deployment, allowing them to perform successfully with minimal on-site setup.

Simulation also allows for the near-costless multiplication of robots for training. “In simulation, we can create thousands of virtual robots to practice tasks and optimize their behavior. This allows us to accelerate training time and share knowledge between robots,” says Péter Fankhauser, CEO and co-founder of ANYbotics.
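
A toy sketch of that idea follows: many virtual robots practice the same task and pool their outcomes into a single statistic that a learning algorithm could use to update one shared policy. The random “simulation” is a stand-in for a physics engine, not ANYbotics’ actual tooling.

```python
# Toy illustration of many simulated robots practicing in parallel and pooling
# their experience into one shared policy update. The random "episode" below is
# a stand-in for a real physics simulation.
import random

NUM_VIRTUAL_ROBOTS = 1000
shared_experience = []


def simulate_episode(robot_id: int) -> dict:
    """Placeholder rollout: each virtual robot reports an outcome and a reward."""
    success = random.random() < 0.6
    return {"robot": robot_id, "success": success, "reward": 1.0 if success else 0.0}


# Collect one practice episode from every virtual robot.
for robot_id in range(NUM_VIRTUAL_ROBOTS):
    shared_experience.append(simulate_episode(robot_id))

# Knowledge sharing: aggregate the fleet's outcomes into a single statistic
# that a learning algorithm would use to update the common policy.
success_rate = sum(e["reward"] for e in shared_experience) / NUM_VIRTUAL_ROBOTS
print(f"Pooled success rate across the fleet: {success_rate:.1%}")
```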

Because robots need to understand their environment regardless of orientation or lighting, ANYbotics and partner Digica created a method of generating thousands of synthetic images for robot training. By removing the painstaking work of collecting huge numbers of real images from the shop floor, the time needed to teach robots what they need to know is drastically reduced.

Similarly, Siemens leverages synthetic data to generate simulated environments to train and validate AI models digitally before deployment into physical products. “By using synthetic data, we create variations in object orientation, lighting, and other factors to ensure the AI adapts well across different conditions,” says Vincenzo De Paola, project lead at Siemens. “We simulate everything from how the pieces are oriented to lighting conditions and shadows. This allows the model to train under diverse scenarios, improving its ability to adapt and respond accurately in the real world.”
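
In practice, that kind of variation is often produced by randomly sampling scene parameters for every synthetic example, as in the simplified sketch below. The parameter ranges and the render call are illustrative placeholders, not Siemens’ actual pipeline.

```python
# Simplified sketch of domain randomization: each synthetic training sample is
# rendered with randomly sampled orientation, lighting, and shadow settings so a
# model learns to cope with varied real-world conditions. `render_scene` is a
# hypothetical placeholder for a simulation or rendering backend.
import random


def sample_scene_parameters() -> dict:
    return {
        "object_yaw_deg": random.uniform(0, 360),     # object orientation
        "object_tilt_deg": random.uniform(-15, 15),
        "light_intensity": random.uniform(0.3, 1.5),  # dim to harsh lighting
        "light_angle_deg": random.uniform(0, 180),
        "shadow_softness": random.uniform(0.0, 1.0),
    }


def render_scene(params: dict) -> bytes:
    """Placeholder: return a rendered image for the sampled parameters."""
    return b""


# Generate a batch of labeled synthetic images under diverse conditions.
dataset = []
for _ in range(10_000):
    params = sample_scene_parameters()
    dataset.append((render_scene(params), params))

print(f"Generated {len(dataset)} synthetic samples for training.")
```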

Digital twins and synthetic data have proven powerful antidotes to data scarcity and costly robot training. Robots that train in artificial environments can be prepared quickly and inexpensively for wide varieties of visual possibilities and scenarios they may encounter in the real world. “We validate our models in this simulated environment before deploying them physically,” says De Paola. “This approach allows us to identify any potential issues early and refine the model with minimal cost and time.”

This technology’s impact can extend beyond initial robot training. If the robot’s real-world performance data is used to update its digital twin and analyze potential optimizations, it can create a dynamic cycle of improvement to systematically enhance the robot’s learning, capabilities, and performance over time.

The well-educated robot at work

With AI and simulation powering a new era in robot training, organizations will reap the benefits. Digital twins allow companies to deploy advanced robotics with dramatically reduced setup times, and the enhanced adaptability of AI-powered vision systems makes it easier for companies to alter product lines in response to changing market demands.

The new ways of schooling robots are transforming investment in the field by also reducing risk. “It’s a game-changer,” says De Paola. “Our clients can now offer AI-powered robotics solutions as services, backed by data and validated models. This gives them confidence when presenting their solutions to customers, knowing that the AI has been tested extensively in simulated environments before going live.”

Filippi envisions this flexibility enabling today’s robots to make tomorrow’s products. “The need in one or two years’ time will be for processing new products that are not known today. With digital twins and this new data environment, it is possible to design today a machine for products that are not known yet,” says Filippi.

Fankhauser takes this idea a step further. “I expect our robots to become so intelligent that they can independently generate their own missions based on the knowledge accumulated from digital twins,” he says. “Today, a human still guides the robot initially, but in the future, they’ll have the autonomy to identify tasks themselves.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Enabling human-centric support with generative AI

It’s a stormy holiday weekend, and you’ve just received the last notification you want in the busiest travel week of the year: The first leg of your flight is significantly delayed.

You might expect this means you’ll be sitting on hold with airline customer service for half an hour. But this time, the process looks a little different: You have a brief text exchange with the airline’s AI chatbot, which quickly assesses your situation and places you in a priority queue. Shortly after, a human agent takes over, confirms the details, and gets you rebooked on an earlier flight so you can make your connection. You’ll be home in time to enjoy mom’s pot roast.

Generative AI is becoming a key component of business operations and customer service interactions today. According to Salesforce research, three out of five workers (61%) either currently use or plan to use generative AI in their roles. A full 68% of these employees are confident that the technology—which can churn out text, video, image, and audio content almost instantaneously—will enable them to provide more enriching customer experiences.

But the technology isn’t a complete solution—or a replacement for human workers. Sixty percent of the surveyed employees believe that human oversight is indispensable for effective and trustworthy generative AI.

Generative AI enables people and increases efficiencies in business operations, but using it to empower employees will make all the difference. Its full business value will only be achieved when it is used thoughtfully to blend with human empathy, ingenuity, and emotional intelligence.

Generative AI pilots across industries

Though the technology is still nascent, many generative AI use cases are starting to emerge.

In sales and marketing, generative AI can assist with creating targeted ad content, identifying leads, upselling, cross-selling, and providing real-time sales analytics. When used for internal functions like IT, HR, and finance, generative AI can improve help-desk services, simplify recruitment processes, generate job descriptions, assist with onboarding and exit processes, and even write code.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Pairing live support with accurate AI outputs

A live agent spends hours each week manually documenting routine interactions. Another combs through multiple knowledge bases to find the right solution, scrambling to piece it together while the customer waits on hold. A third types out the same response they’ve written dozens of times before.

These repetitive tasks can be draining, leaving less time for meaningful customer interactions—but generative AI is changing this reality. By automating routine workflows, AI augments the efforts of live agents, freeing them to do what they do best: solving complex problems and applying human understanding and empathy to help customers during critical situations.

“Enterprises are trying to rush to figure out how to implement or incorporate generative AI into their business to gain efficiencies,” says Will Fritcher, deputy chief client officer at TP. “But instead of viewing AI as a way to reduce expenses, they should really be looking at it through the lens of enhancing the customer experience and driving value.”

Doing this requires solving two intertwined challenges: empowering live agents by automating routine tasks and ensuring AI outputs remain accurate, reliable, and precise. And the key to both these goals? Striking the right balance between technological innovation and human judgment.

A key role in customer support

Generative AI’s potential impact on customer support is twofold: Customers stand to benefit from faster, more consistent service for simple requests, while also receiving undivided human attention for complex, emotionally charged situations. For employees, eliminating repetitive tasks boosts job satisfaction and reduces burnout. The tech can also be used to streamline customer support workflows and enhance service quality in various ways, including:

Automated routine inquiries: AI systems handle straightforward customer requests, like resetting passwords or checking account balances (see the sketch after this list).

Real-time assistance: During interactions, AI pulls up contextually relevant resources, suggests responses, and guides live agents to solutions faster.

Fritcher notes that TP is relying on many of these capabilities in its customer support solutions. For instance, AI-powered coaching marries AI-driven metrics with human expertise to provide feedback on 100% of customer interactions, rather than the traditional 2% to 4% that was monitored pre-generative AI.

Call summaries: By automatically documenting customer interactions, AI saves live agents valuable time that can be reinvested in customer care.
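To make the call-summary step concrete, here is a minimal sketch of how an after-call summary might be drafted, assuming access to an OpenAI-compatible chat completions endpoint. The model name, prompt, and transcript are illustrative placeholders, not a description of TP’s production setup.

```python
# A minimal sketch of automated call summarization; the model name and prompt
# are placeholder assumptions, not a specific vendor configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_call(transcript: str) -> str:
    """Draft a short after-call summary for the live agent to review and correct."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {
                "role": "system",
                "content": "Summarize this support call in three bullet points: "
                           "the issue, the resolution, and any follow-up owed to the customer.",
            },
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

# The draft is reviewed by the live agent before it is saved, preserving the
# human oversight that surveyed employees consider indispensable.
print(summarize_call("Customer reported a duplicate charge; agent issued a credit and confirmed by email."))
```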

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Accelerating AI innovation through application modernization

Business applications powered by AI are revolutionizing customer experiences, accelerating the speed of business, and driving employee productivity. In fact, according to research firm Frost & Sullivan’s 2024 Global State of AI report, 89% of organizations believe AI and machine learning will help them grow revenue, boost operational efficiency, and improve customer experience.

Take, for example, Vodafone. The telecommunications company is using a suite of Azure AI services, such as Azure OpenAI Service, to deliver real-time, hyper-personalized experiences across all of its customer touchpoints, including its digital chatbot TOBi. Naga Surendran, senior director of product marketing for Azure Application Services at Microsoft, says that by leveraging AI to increase customer satisfaction, Vodafone has managed to resolve 70% of its first-stage inquiries through AI-powered digital channels. It has also boosted the productivity of support agents by providing them with access to AI capabilities that mirror those of Microsoft Copilot, an AI-powered productivity tool.

“The result is a 20-point increase in net promoter score,” he says. “These benefits are what’s driving AI infusion into every business process and application.”

Yet realizing measurable business value from AI-powered applications requires a new game plan. Legacy application architectures simply aren’t capable of meeting the high demands of AI-enhanced applications. To stay competitive, organizations must now modernize their infrastructure, processes, and application architectures with cloud native technologies.

The time is now for modernization

Today’s organizations exist in an era of geopolitical shifts, growing competition, supply chain disruptions, and evolving consumer preferences. AI applications can help by supporting innovation, but only if they have the flexibility to scale when needed. Fortunately, by modernizing applications, organizations can achieve the agile development, scalability, and fast compute performance needed to support rapid innovation and accelerate the delivery of AI applications. David Harmon, director of software development for AMD, says companies “really want to make sure that they can migrate their current [environment] and take advantage of all the hardware changes as much as possible.” The result is not only a shorter development lifecycle for new applications but also a faster response to changing world circumstances.

Beyond building and deploying intelligent apps quickly, modernizing applications, data, and infrastructure can significantly improve customer experience. Consider, for example, Coles, an Australian supermarket that invested in modernization and is using data and AI to deliver dynamic e-commerce experiences to its customers both online and in-store. With Azure DevOps, Coles has shifted from monthly to weekly deployments of applications while, at the same time, reducing build times by hours. What’s more, by aggregating views of customers across multiple channels, Coles has been able to deliver more personalized customer experiences. In fact, according to a 2024 CMSWire Insights report, there is a significant rise in the use of AI across the digital customer experience toolset, with 55% of organizations now using it to some degree, and more beginning their journey.

But even the most carefully designed applications are vulnerable to cybersecurity attacks. Given the opportunity, bad actors can extract sensitive information from machine learning models or maliciously infuse AI systems with corrupt data. “AI applications are now interacting with your core organizational data,” says Surendran. “Having the right guardrails is important to make sure the data is secure and built on a platform that enables you to do that.” The good news is that modern cloud-based architectures can deliver robust security, data governance, and AI guardrails like content safety to protect AI applications from security threats and ensure compliance with industry standards.
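As a generic illustration of what one such guardrail can look like, the sketch below screens a model response for patterns that resemble sensitive data before it reaches the user. The patterns and redaction policy are assumptions made for the example, not a description of any specific Azure service.

```python
import re

# A minimal output guardrail: redact values that look like sensitive data before
# a model response is returned. The patterns below are illustrative assumptions.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def apply_guardrail(model_output: str) -> str:
    """Redact likely sensitive values; a production system would also log and alert."""
    redacted = model_output
    for label, pattern in SENSITIVE_PATTERNS.items():
        redacted = pattern.sub(f"[REDACTED {label.upper()}]", redacted)
    return redacted

print(apply_guardrail("Card 4111 1111 1111 1111 is registered to jane@example.com."))
```

Checks like this sit alongside, rather than replace, platform-level controls such as identity, encryption, and data governance.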

The answer to AI innovation

New challenges, from demanding customers to ill-intentioned hackers, call for a new approach to modernizing applications. “You have to have the right underlying application architecture to be able to keep up with the market and bring applications faster to market,” says Surendran. “Not having that foundation can slow you down.”

Enter cloud native architecture. As organizations increasingly adopt AI to accelerate innovation and stay competitive, there is a growing urgency to rethink how applications are built and deployed in the cloud. By adopting cloud native architectures, Linux, and open source software, organizations can better facilitate AI adoption and create a flexible platform purpose-built for AI and optimized for the cloud. Harmon explains that open source software creates options: “And the overall open source ecosystem just thrives on that. It allows new technologies to come into play.”

Application modernization also ensures optimal performance, scale, and security for AI applications. That’s because modernization goes beyond just lifting and shifting application workloads to cloud virtual machines. Rather, a cloud native architecture is inherently designed to provide developers with the following features, illustrated in the brief sketch after this list:

  • The flexibility to scale to meet evolving needs
  • Better access to the data needed to drive intelligent apps
  • Access to the right tools and services to build and deploy intelligent applications easily
  • Security embedded into an application to protect sensitive data
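As a rough illustration of how these features surface in practice, the sketch below uses the Kubernetes Python client to declare a small, independently scalable service. The image name, replica count, resource limits, and secret are assumptions made for the example rather than a recommended configuration.

```python
# A sketch of a cloud native deployment declared in code; names and values are
# illustrative assumptions.
from kubernetes import client

container = client.V1Container(
    name="intelligent-app",
    image="registry.example.com/intelligent-app:1.0",  # hypothetical image
    ports=[client.V1ContainerPort(container_port=8080)],
    # Flexibility to scale: explicit requests/limits let the platform schedule
    # and autoscale the workload.
    resources=client.V1ResourceRequirements(
        requests={"cpu": "500m", "memory": "1Gi"},
        limits={"cpu": "2", "memory": "4Gi"},
    ),
    # Security embedded in the application: credentials come from a managed
    # secret instead of being baked into code or the image.
    env=[
        client.V1EnvVar(
            name="MODEL_API_KEY",
            value_from=client.V1EnvVarSource(
                secret_key_ref=client.V1SecretKeySelector(name="model-secrets", key="api-key")
            ),
        )
    ],
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="intelligent-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # scaled out horizontally; an autoscaler could adjust this
        selector=client.V1LabelSelector(match_labels={"app": "intelligent-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "intelligent-app"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

# With cluster credentials loaded (kubernetes.config.load_kube_config()), this
# object could be applied via client.AppsV1Api().create_namespaced_deployment().
print(f"Declared '{deployment.metadata.name}' with {deployment.spec.replicas} replicas")
```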

Together, these cloud capabilities ensure organizations derive the greatest value from their AI applications. “At the end of the day, everything is about performance and security,” says Harmon. Cloud is no exception.

What’s more, Surendran notes that “when you leverage a cloud platform for modernization, organizations can gain access to AI models faster and get to market faster with building AI-powered applications. These are the factors driving the modernization journey.”

Best practices in play

For all the benefits of application modernization, there are steps organizations must take to ensure both technological and operational success. They are:

Train employees for speed. As modern infrastructure accelerates the development and deployment of AI-powered applications, developers must be prepared to work faster and smarter than ever. For this reason, Surendran warns, “Employees must be skilled in modern application development practices to support the digital business needs.” This includes developing expertise in working with loosely coupled microservices to build scalable, flexible applications with integrated AI (see the sketch below).
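As a small example of the loosely coupled style mentioned above, the sketch below defines a single-purpose microservice with FastAPI. The service name, route, and placeholder rules are assumptions for illustration only; in a real system the classification step would typically delegate to a model endpoint owned by another team.

```python
# A minimal, single-purpose microservice; names, routes, and rules are
# illustrative assumptions.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="intent-service")

class IntentRequest(BaseModel):
    message: str

class IntentResponse(BaseModel):
    intent: str

@app.get("/healthz")
def health() -> dict:
    # Liveness endpoint so the platform can restart or scale this service on its own.
    return {"status": "ok"}

@app.post("/intents", response_model=IntentResponse)
def classify(req: IntentRequest) -> IntentResponse:
    # Placeholder rules standing in for a call to a separately owned model service.
    text = req.message.lower()
    if "password" in text:
        return IntentResponse(intent="password_reset")
    if "refund" in text or "charge" in text:
        return IntentResponse(intent="billing")
    return IntentResponse(intent="general_inquiry")
```

Because the service owns one narrow contract, it can be versioned, deployed, and scaled independently of the applications that call it.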

Start with an assessment. Large enterprises are likely to have “hundreds of applications, if not thousands,” says Surendran. As a result, organizations must take the time to evaluate their application landscape before embarking on a modernization journey. “Starting with an assessment is super important,” continues Surendran. “Understanding, taking inventory of the different applications, which team is using what, and what this application is driving from a business process perspective is critical.”

Focus on quick wins. Modernization is a huge, long-term transformation in how companies build, deliver, and support applications. Most businesses are still learning and developing the right strategy to support innovation. For this reason, Surendran recommends focusing on quick wins while also working on a larger application estate transformation. “You have to show a return on investment for your organization and business leaders,” he says. For example, modernize some apps quickly with re-platforming and then infuse them with AI capabilities.

Partner up. “Modernization can be daunting,” says Surendran. Selecting the right strategy, process, and platform to support innovation is only the first step. Organizations must also “bring on the right set of partners to help them go through change management and the execution of this complex project.”

Address all layers of security. Organizations must be unrelenting when it comes to protecting their data. According to Surendran, this means adopting a multi-layer approach to security that includes: security by design, in which products and services are developed from the get-go with security in mind; security by default, in which protections exist at every layer and interaction where data exists; and security by ongoing operations, which means using the right tools and dashboards to govern applications throughout their lifecycle.

A look to the future

Most organizations are already aware of the need for application modernization. But the arrival of AI adds new urgency: modernization efforts must be done right, and AI applications must be built and deployed for greater business impact. Adopting a cloud native architecture can help by serving as a platform for enhanced performance, scalability, security, and ongoing innovation. “As soon as you modernize your infrastructure with a cloud platform, you have access to these rapid innovations in AI models,” says Surendran. “It’s about being able to continuously innovate with AI.”

Read more about how to accelerate app and data estate readiness for AI innovation with Microsoft Azure and AMD. Explore Linux on Azure.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Why materials science is key to unlocking the next frontier of AI development

The Intel 4004, the first commercial microprocessor, was released in 1971. With 2,300 transistors packed into 12 mm², it heralded a revolution in computing. A little over 50 years later, Apple’s M2 Ultra contains 134 billion transistors.

The scale of progress is difficult to comprehend, but the evolution of semiconductors, driven for decades by Moore’s Law, has paved a path from the emergence of personal computing and the internet to today’s AI revolution.

But this pace of innovation is not guaranteed, and the next frontier of technological advances—from the future of AI to new computing paradigms—will only happen if we think differently.

Atomic challenges

The modern microchip stretches both the limits of physics and credulity. Such is the atomic precision that a few atoms can decide the function of an entire chip. This marvel of engineering is the result of more than 50 years of exponential scaling that has created ever faster, smaller transistors.

But we are reaching the physical limits of how small we can go, costs are increasing exponentially with complexity, and keeping power consumption in check is becoming increasingly difficult. In parallel, AI is demanding ever more computing power. Data from Epoch AI indicates that the amount of computing needed to develop AI is quickly outstripping Moore’s Law, doubling every six months in the “deep learning era” since 2010.
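To put those two growth rates side by side, the short calculation below compares a 24-month doubling period (a common reading of Moore’s Law) with the six-month doubling Epoch AI reports for AI training compute; the six-year horizon is an assumption chosen only for illustration.

```python
# Back-of-the-envelope comparison of the two growth rates cited above.
# Assumptions: Moore's Law approximated as doubling every 24 months; AI training
# compute doubling every 6 months; both treated as constant exponential rates.

def growth_factor(months: float, doubling_period_months: float) -> float:
    """Multiplicative growth after `months` given a doubling period."""
    return 2 ** (months / doubling_period_months)

horizon_months = 72  # six years, for illustration only

transistors = growth_factor(horizon_months, 24)  # ~8x
ai_compute = growth_factor(horizon_months, 6)    # ~4,096x

print(f"Transistor budget (Moore's Law): ~{transistors:.0f}x")
print(f"AI training compute (6-month doubling): ~{ai_compute:,.0f}x")
print(f"Gap after six years: ~{ai_compute / transistors:,.0f}x")
```

Even over this short horizon, demand for compute grows hundreds of times faster than the transistor budget, and that is the gap the rest of this section is concerned with.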

These interlinked trends present challenges not just for the industry, but society as a whole. Without new semiconductor innovation, today’s AI models and research will be starved of computational resources and struggle to scale and evolve. Key sectors like AI, autonomous vehicles, and advanced robotics will hit bottlenecks, and energy use from high-performance computing and AI will continue to soar.

Materials intelligence

At this inflection point, a complex, global ecosystem—from foundries and designers to highly specialized equipment manufacturers and materials solutions providers like Merck—is working together more closely than ever before to find the answers. All have a role to play, and the role of materials extends far, far beyond the silicon that makes up the wafer.

Instead, materials intelligence is present in almost every stage of the chip production process, whether in the chemical reactions that carve circuits at the molecular scale (etching) or in the addition of incredibly thin layers to a wafer with atomic precision (deposition); a human hair is 25,000 times thicker than the layers in leading-edge nodes.

Yes, materials provide a chip’s physical foundation and the substance of more powerful and compact components. But they are also integral to the advanced fabrication methods and novel chip designs that underpin the industry’s rapid progress in recent decades.

For this reason, materials science is taking on heightened importance as we grapple with the limits of miniaturization. Advanced materials are needed more than ever for the industry to unlock the new designs and technologies capable of increasing chip efficiency, speed, and power. We are seeing novel chip architectures that embrace the third dimension, stacking layers to make better use of surface area while lowering energy consumption. The industry is also harnessing advanced packaging techniques in which separate “chiplets” with different functions are fused into a single, more efficient and powerful chip, an approach known as heterogeneous integration.

Materials are also allowing the industry to look beyond traditional compositions. Photonic chips, for example, harness light rather than electricity to transmit data. In all cases, our partners rely on us to discover materials never previously used in chips and guide their use at the atomic level. This, in turn, is fostering the necessary conditions for AI to flourish in the immediate future.

New frontiers

The next big leap will involve thinking differently. The future of technological progress will be defined by our ability to look beyond traditional computing.

Answers to mounting concerns over energy efficiency, costs, and scalability will be found in ambitious new approaches inspired by biological processes or grounded in the principles of quantum mechanics.

While still in its infancy, quantum computing promises processing power and efficiencies well beyond the capabilities of classical computers. Practical, scalable quantum systems remain a long way off, but their development depends on the discovery and application of state-of-the-art materials.

Similarly, emerging paradigms like neuromorphic computing, modeled on the human brain with architectures that mimic our own neural networks, could provide the firepower and energy efficiency to unlock the next phase of AI development. Composed of a deeply complex web of artificial synapses and neurons, these chips would avoid traditional scalability roadblocks and the limitations of today’s von Neumann computers, which separate memory and processing.

Our biology consists of highly complex, intertwined systems that have evolved by natural selection, but it can be inefficient; the human brain is capable of extraordinary feats of computational power, yet it also requires sleep and careful upkeep. The most exciting step will be using advanced compute, both AI and quantum, to finally understand and design systems inspired by biology. This combination will drive the power and ubiquity of next-generation computing and associated advances in human well-being.

Until then, the insatiable demand for more computing power to drive AI’s development poses difficult questions for an industry grappling with the fading of Moore’s Law and the constraints of physics. The race is on to produce more powerful, more efficient, and faster chips to progress AI’s transformative potential in every area of our lives.

Materials are playing a hidden but increasingly crucial role in keeping pace: producing next-generation semiconductors and enabling the new computing paradigms that will deliver tomorrow’s technology.

But materials science’s most important role is yet to come. Its true potential will be to take us—and AI—beyond silicon into new frontiers and the realms of science fiction by harnessing the building blocks of biology.

This content was produced by EMD Electronics. It was not written by MIT Technology Review’s editorial staff.