Five benefits of a health tech accelerator program

In the ever-evolving world of health care, the role of technology is becoming increasingly crucial. From improving patient outcomes to streamlining administrative processes, digital technologies are changing the face of the industry. However, for startups developing health tech solutions, breaking into the market and scaling their products can be a challenging journey, requiring access to resources, expertise, and a network they might not have. This is where health tech accelerator programs come in.

Health tech accelerator programs are designed to support early-stage startups in the health technology space, providing them with the resources, mentorship, and funding they need to grow and succeed. These programs are often highly competitive, and startups that are selected gain access to a wealth of opportunities that can significantly accelerate their development. In this article, we’ll explore five key benefits of participating in a health tech accelerator program.

1. Access to mentorship and expertise

One of the most valuable aspects of health tech accelerator programs is the access they provide to experienced mentors and industry experts. Health tech startups often face unique challenges, such as navigating complex health-care regulations, developing scalable technologies, and understanding the intricacies of health systems. Having mentors who have firsthand experience in these areas can provide critical guidance.

These mentors often include clinicians, informaticists, investors, health-care professionals, and thought leaders. Their insights can help startups refine their business strategies, optimize their digital health solutions, and navigate the health-care landscape. With this guidance, startups are better positioned to make informed decisions, avoid common pitfalls, and accelerate their growth.

2. Funding and investment opportunities

For many startups, securing funding is one of the biggest hurdles they face. Health tech innovation can be expensive, especially in the early stages when startups are working on solution development, regulatory approvals, and pilot testing. Accelerator programs often provide startups with seed funding, as well as the opportunity to connect with venture capitalists, angel investors, and other potential backers.

Many accelerator programs culminate in a “demo day,” where startups pitch their solutions to a room full of investors and other key decision-makers. These events can be crucial in securing the funding necessary to scale a digital health solution or product. Beyond initial funding, the exposure gained from being part of a well-known accelerator program can lead to additional investment opportunities down the road.

3. Networking and industry connections

The health-care industry is notoriously complex and fragmented, making it difficult for new players to break in without the right connections. Health tech accelerator programs offer startups the opportunity to network with key leaders in the health-care and technology ecosystems, including clinicians, payers, pharmaceutical companies, government agencies, and potential customers.

Through structured networking events, mentorship sessions, and partnerships with established organizations, startups gain access to a wide range of stakeholders who can help validate their products, open doors to new markets, and provide feedback that can be used to refine their offerings. In the health tech space, strong industry connections are often critical to gaining traction and scaling successfully.

4. Market validation and credibility

The health tech industry is highly regulated and risk-averse, meaning that customers and investors are often wary of new technologies. Participating in an accelerator program can serve as a form of market validation, signaling that a startup’s offering has been vetted by experts and has the potential for success.

The credibility gained from being accepted into a prestigious accelerator program can be a game-changer. It provides startups with a level of legitimacy that can help them stand out in a crowded and competitive market. Whether it’s attracting investors, forging partnerships, or securing early customers, the reputation of the accelerator can give a startup a significant boost.

Additionally, accelerator programs often have ties to major health-care institutions and organizations. This can provide startups with opportunities to pilot their products in real-world health-care settings, which can serve as both a test of the product’s viability and a powerful proof of concept for future customers and investors.

5. Access to resources and infrastructure

Another significant benefit of accelerators is the access to resources and infrastructure that startups might not obtain otherwise. These resources can range from clinical data for model building and testing to legal and regulatory support and the technology infrastructure needed to deploy and scale. For early-stage health tech companies, such resources can be transformative.

Conclusion

Health tech startups are at the forefront of transforming health care, but navigating the challenges of innovation, regulation, and market entry can be daunting. Health tech accelerator programs offer invaluable support by providing startups with the mentorship, funding, networking opportunities, credibility, and resources they need to succeed.

Mayo Clinic Platform_Accelerate is a 30-week accelerator program from Mayo Clinic Platform focused on helping startups with digital technologies advance their solution development and get to market faster. Learn more about the program and the access it provides to clinical data, Mayo Clinic experts, technical resources, investors, and more at https://www.mayoclinicplatform.org/accelerate/.

This content was produced by Mayo Clinic Platform. It was not written by MIT Technology Review’s editorial staff.

Customizing generative AI for unique value

Since the emergence of enterprise-grade generative AI, organizations have tapped into the rich capabilities of foundation models developed by the likes of OpenAI, Google DeepMind, Mistral, and others. Over time, however, businesses often found these models limiting because they were trained on vast troves of public data. Enter customization—the practice of adapting large language models (LLMs) to better suit a business’s specific needs by incorporating its own data and expertise, teaching a model new skills or tasks, or optimizing prompts and data retrieval.

Customization is not new, but the early tools were fairly rudimentary, and technology and development teams were often unsure how to do it. That’s changing, and the customization methods and tools available today are giving businesses greater opportunities to create unique value from their AI models.

We surveyed 300 technology leaders in mostly large organizations in different industries to learn how they are seeking to leverage these opportunities. We also spoke in-depth with a handful of such leaders. They are all customizing generative AI models and applications, and they shared with us their motivations for doing so, the methods and tools they’re using, the difficulties they’re encountering, and the actions they’re taking to surmount them.

Our analysis finds that companies are moving ahead ambitiously with customization. They are cognizant of its risks, particularly those revolving around data security, but are employing advanced methods and tools, such as retrieval-augmented generation (RAG), to realize their desired customization gains.
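
To make the RAG approach mentioned above concrete, here is a minimal sketch of the retrieve-then-generate flow. The documents, the bag-of-words stand-in for embeddings, and the prompt format are all illustrative placeholders; a production system would use a real embedding model, a vector store, and an LLM.

    # Minimal retrieval-augmented generation (RAG) flow, for illustration only.
    # The "embedding" is a toy bag-of-words vector; a real system would use an
    # embedding model, a vector database, and an LLM call in answer().
    from collections import Counter
    import math

    def embed(text: str) -> Counter:
        # Toy stand-in for an embedding model: term-frequency counts.
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    # Hypothetical internal documents a business might index.
    documents = [
        "Our returns policy allows refunds within 30 days of purchase.",
        "Enterprise support tickets are answered within four business hours.",
        "The Q3 maintenance window is scheduled for the first weekend of October.",
    ]
    index = [(doc, embed(doc)) for doc in documents]

    def retrieve(query: str, k: int = 2) -> list[str]:
        q = embed(query)
        ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
        return [doc for doc, _ in ranked[:k]]

    def answer(query: str) -> str:
        # Build the augmented prompt; in production this is sent to an LLM.
        context = "\n".join(retrieve(query))
        return f"Context:\n{context}\n\nQuestion: {query}"

    print(answer("How long do customers have to request a refund?"))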

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Architecting tomorrow’s network

Technological advances continue to move at breakneck speeds. While companies struggle through their digital transformation journeys, even more new technologies emerge, with promises of opportunity, cost savings—and added complexity. Many companies have yet to fully adopt AI and ML technologies, let alone figure out how newer technologies like generative AI might fit into their programs.

A 2024 IDC survey found that 22% of tech leaders said their organizations haven’t yet reached full digital maturity, and 41% of respondents said the complexity of integrating new technologies and approaches with existing tech stacks is the biggest challenge for tech adoption.

To fuel successful technology adoption and maximize outcomes, companies need to focus on simplifying infrastructure architecture rather than how to make new technologies fit into existing stacks. “When it comes to digital transformation, choosing an architectural approach over a purely technology-driven one is about seeing the bigger picture,” says Rajarshi Purkayastha, the VP of solutions at Tata Communications. “Instead of focusing on isolated tools or systems, an architectural approach connects the dots—linking silos rather than simply trying to eliminate them.”

Establishing the robust global network most companies need to connect these dots and link their silos requires more capability and bandwidth than traditional networks like multiprotocol label switching (MPLS) circuits can typically provide in a cost-effective way. To keep pace with innovation, consumer demands, and market competition, today’s wide area networks (WANs) need to support flexible, anywhere connectivity for multi-cloud services, remote locations and users, and edge data centers.

Understanding hybrid WAN

Traditional MPLS became the gold standard for most WAN architectures in the early 2000s, addressing the mounting challenges brought by the rapid growth of the internet and the subsequent rapid expansion of enterprise networks. As technological advances continue to accelerate, however, the limitations of MPLS are becoming apparent: MPLS networking is expensive; hard-wired connectivity is difficult to scale; and on its own, it doesn’t fit well with cloud computing adoption strategies.

In 2014, Gartner predicted hybrid WANs would be the future of networking. Hybrid WANs differ from traditional WANs in that the hybrid architecture facilitates multiple connection types: private network connections for mission-critical traffic, usually via legacy MPLS circuits; public network connections, typically internet links such as 5G, LTE, or VPN, for less critical data traffic; and dedicated internet access (DIA) for moderately critical traffic.

In 2025, we are seeing signs Gartner’s hybrid WAN prediction might be coming to fruition. At Tata Communications, for example, hybrid WAN is a key component of its network fabric—one facet of its digital fabric architecture, which weaves together networking, interaction, cloud, and IoT technologies.

“Our digital fabric simplifies the complexity of managing diverse technologies, breaks down silos, and provides a secure, unified platform for hyper-connected ecosystems,” explains Purkayastha. “By doing so, it ensures businesses have the agility, visibility, and scalability to succeed in their digital transformation journey—turning challenges into opportunities for innovation and growth.”

Hybrid WAN provides the flexible, real-time traffic steering an architectural approach requires to create a programmable, performant, and secure network that can reduce complexity and ease adoption of emerging technologies. “It’s not just about solving today’s challenges—it lays the groundwork for a resilient, scalable future,” says Purkayastha.

Benefits of hybrid WAN

Hybrid networking architectures support digital transformation journeys and emerging tech adoption in several ways.

More efficient, even intelligent, traffic management. A hybrid architecture brings together multiple avenues of data flow from MPLS and internet connectivity, providing a highly flexible, resilient architecture along with increased bandwidth to reduce network congestion. It also allows companies to prioritize critical data traffic. Hybrid WANs can also combine the highly secure connectivity of MPLS with software-defined WAN (SD-WAN) technology, which allows for intelligent switching across a company’s information highways. If, for instance, one route encounters latency or malfunctions, that traffic is automatically re-routed, helping to maintain continuous connectivity and reduce downtime (a simplified sketch of this path-selection logic follows this list).

Increased scalability. The agility and flexibility of a hybrid WAN allows companies to dynamically scale bandwidth up or down as application needs change. An agile WAN architecture also paves the way for scaling business operations.

Less complex cloud migration and easier adoption of new technologies. Adding internet connectivity to MPLS circuits allows data to flow seamlessly to the cloud, providing a more direct path for companies transitioning to cloud-first strategies. Easing cloud migration also opens doors for emerging technologies like AI, generative AI, and machine learning, enabling companies to innovate to remain relevant in their markets.

Improved productivity. The internet speed and connectivity of a hybrid WAN keeps geographically separated company locations and remote workers connected, increasing efficiency and collaboration.

Easier integration with legacy systems. A hybrid approach allows legacy MPLS connections to remain, while offloading less sensitive data traffic to the internet. The ability to incorporate legacy applications and processes into a hybrid architecture not only eases integration and adoption, but helps to maximize returns on network investments.

Network cost savings. Many of the benefits on this list translate into cost savings, as internet bandwidth is considerably cheaper than MPLS networking. A reduction in downtime reduces expenses companywide, and the ability to customize bandwidth usage at scale gives companies more control over network expenses while maximizing connectivity.
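
The path-selection behavior described in the list above can be pictured with a small sketch. The link names, latency threshold, and criticality tiers are illustrative assumptions rather than a description of any vendor’s SD-WAN implementation.

    # Illustrative hybrid WAN path selection: prefer private MPLS for critical
    # traffic and fail over to internet links when a path degrades. Link names,
    # thresholds, and tiers are invented for the example.
    from dataclasses import dataclass

    @dataclass
    class Link:
        name: str
        kind: str          # "mpls", "dia", or "internet"
        latency_ms: float
        healthy: bool

    PREFERENCE = {
        "critical": ["mpls", "dia", "internet"],
        "moderate": ["dia", "internet", "mpls"],
        "bulk":     ["internet", "dia", "mpls"],
    }

    def pick_path(links: list[Link], criticality: str) -> Link:
        order = PREFERENCE[criticality]
        candidates = [l for l in links if l.healthy and l.latency_ms < 150]
        if not candidates:
            raise RuntimeError("no healthy path available")
        # Rank by preferred link type first, then by measured latency.
        candidates.sort(key=lambda l: (order.index(l.kind), l.latency_ms))
        return candidates[0]

    links = [
        Link("mpls-hq", "mpls", 35.0, healthy=True),
        Link("dia-hq", "dia", 48.0, healthy=True),
        Link("lte-backup", "internet", 90.0, healthy=True),
    ]
    print(pick_path(links, "critical").name)   # mpls-hq
    links[0].healthy = False                   # simulate an MPLS outage
    print(pick_path(links, "critical").name)   # dia-hq (automatic failover)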

Deploying a hybrid WAN

A recent collaboration between Air France-KLM Group and Tata Communications highlights the benefits a hybrid WAN can bring to a global enterprise.

Air France looked to increase its network and application performance threefold without incurring additional costs—and while ensuring the security and integrity of its network. A hybrid WAN solution—specifically, using MPLS and internet services from Tata Communications and other third-party providers—afforded the flexibility, resilience, and continuous connectivity it needed.

According to Tata Communications, the hybrid architecture increased Air France’s network availability to more than 99.94%, supporting its global office locations as well as its customer-facing applications, including passenger and cargo bookings and operating service centers.

“However, which connectivity to choose based on location type and application is complex, given the fact that networks vary by region, and one has to also take into account regulations, for example, in China,” says Purkayastha. “This is what Tata Communications helps customers with—choosing the right type of network, resulting in both cost savings and a better user experience.”

Enabling business for success

Innovating and expanding enterprise operations in today’s era of increasingly complex technology evolutions requires businesses to find agile and cost-effective avenues to stay connected and competitive.

Because emerging machine learning and AI technologies show no signs of slowing, hybrid network architectures are likely to become necessary infrastructure components for companies of all sizes. The flexibility, resiliency, and configurability of a hybrid WAN provide a relatively straightforward, lightweight network upgrade that allows companies to focus on business objectives while spending less time and expense worrying about network reliability and reach. “At the end of the day, it isn’t just about technology—it’s about enabling your business to stay agile, competitive, and ready to innovate, no matter how the landscape shifts,” says Purkayastha.

The evolution of AI: From AlphaGo to AI agents, physical AI, and beyond

In March 2016, the world witnessed a unique moment in the evolution of artificial intelligence (AI) when AlphaGo, an AI developed by DeepMind, played against Lee Sedol, one of the greatest Go players of the modern era. The match reached a critical juncture in Game 2 with Move 37, where AlphaGo made a move so unconventional and creative that it stunned both the audience and Lee Sedol himself.

This moment has since been recognized as a pivotal point in the evolution of AI. It was not merely a demonstration of AI’s proficiency at playing Go but a revelation that machines could think outside the box and exhibit creativity. It fundamentally altered the perception of AI, transforming it from a tool that follows predefined rules into an entity capable of innovation. Since that fateful match, AI has continued to drive profound changes across industries, from content recommendations to fraud detection. But the game-changing power of AI became broadly evident when ChatGPT brought generative AI to the masses.


The critical moment of ChatGPT

The release of ChatGPT by OpenAI in November 2022 marked another significant milestone in the evolution of AI. ChatGPT, a large language model capable of generating human-like text, demonstrated the potential of AI to understand and generate natural language. This capability opened up new possibilities for AI applications, from customer service to content creation. The world responded to ChatGPT with a mix of awe and excitement, recognizing the potential of AI to transform how humans communicate and interact with technology to enhance our lives.

The rise of agentic AI

Today, the rise of agentic AI — systems capable of advanced reasoning and task execution — is revolutionizing the way organizations operate. Agentic AI systems are designed to pursue complex goals with autonomy and predictability. They are productivity enablers that keep humans in the loop through multimodal interaction. These systems can take goal-directed actions with minimal human oversight, make contextual decisions, and dynamically adjust plans based on changing conditions.
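
To make this goal-directed, self-adjusting behavior more concrete, the sketch below walks through a deliberately simplified agent loop. The planner, tools, and goal condition are all invented for illustration and do not represent any particular Microsoft or NVIDIA product.

    # Conceptual agentic loop: plan, act, observe, and replan until a goal
    # condition is met. Every function is a stand-in; a real system would call
    # an LLM for planning and real tools or APIs for the actions.
    def plan(goal: str, observations: list[str]) -> list[str]:
        # Stand-in planner: inspect what we know so far and choose the next step.
        return ["check_inventory"] if not observations else ["reorder_stock"]

    def act(step: str) -> str:
        # Stand-in tools returning canned observations.
        return {
            "check_inventory": "stock below threshold",
            "reorder_stock": "purchase order created",
        }[step]

    def run_agent(goal: str, max_iterations: int = 5) -> list[str]:
        observations: list[str] = []
        for _ in range(max_iterations):
            for step in plan(goal, observations):
                observations.append(act(step))
            if "purchase order created" in observations:   # goal condition
                break
        return observations

    print(run_agent("keep warehouse stock above the reorder threshold"))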

Deploy agentic AI today    

Microsoft and NVIDIA are at the forefront of developing and deploying agentic AI systems, providing the necessary infrastructure and tools to enable advanced capabilities such as:

Azure AI services: Microsoft Azure AI services have been instrumental in creating agentic AI systems. For instance, the Azure AI Foundry and Azure OpenAI Service provide the foundational tools for building AI agents that can autonomously perceive, decide, and act in pursuit of specific goals. These services enable the development of AI systems that go beyond simple task execution to more complex, multi-step processes. (A brief illustrative example of calling an Azure OpenAI deployment appears after this list.)

AI agents and agentic AI systems: Microsoft has developed various AI agents that automate and execute business processes, working alongside or on behalf of individuals and organizations. These agents, accessible via Microsoft Copilot Studio, Azure AI, or GitHub, are designed to autonomously perceive, decide, and act, adapting to new circumstances and conditions. For example, the mobile data recorder (MDR) copilot at BMW, powered by Azure AI, allows engineers to chat with the interface using natural language, converting conversations into technical insights.

Multi-agent systems: Microsoft’s research and development in multi-agent AI systems have led to the creation of modular, collaborative agents that can dynamically adapt to different tasks. These systems are designed to work together seamlessly, enhancing overall performance and efficiency. For example, Magentic-One, a high-performance generalist agentic system, is designed to solve open-ended tasks across various domains, representing a significant advancement in agent technology.

Collaboration with NVIDIA: Microsoft and NVIDIA have collaborated deeply across the entire technology stack, including Azure accelerated instances equipped with NVIDIA GPUs. This enables users to develop agentic AI applications by leveraging NVIDIA GPUs alongside NVIDIA NIM models and NeMo microservices across their selected Azure services, such as Azure Machine Learning, Azure Kubernetes Service, or Azure Virtual Machines. Furthermore, NVIDIA NeMo microservices offer capabilities to support the creation and ongoing enhancement of agentic AI applications.
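
As a simple illustration of the building blocks above, the sketch below makes a single chat completion call to a model deployed through Azure OpenAI Service using the openai Python SDK. The endpoint, API version string, deployment name, and prompt are placeholder assumptions; agent frameworks like those described above layer planning and tool use on top of calls like this.

    # Hedged sketch: one chat completion call to a model deployed through
    # Azure OpenAI Service, using the openai Python SDK. The endpoint, API
    # version, and deployment name are placeholders for your own resource.
    import os
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",  # assumed version string; use your resource's supported version
    )

    response = client.chat.completions.create(
        model="my-gpt-4o-deployment",  # your deployment name, not a model family name
        messages=[
            {"role": "system", "content": "You summarize industrial maintenance logs."},
            {"role": "user", "content": "Summarize today's turbine sensor anomalies."},
        ],
    )
    print(response.choices[0].message.content)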

Physical AI and beyond

Looking ahead, the next wave in AI development is physical AI, powered by AI models that can understand and engage with our world and generate their actions based on advanced sensory input. Physical AI will enable a new frontier of digitalization for heavy industries, delivering more intelligence and autonomy to the world’s warehouses and factories, and driving major advancements in autonomous transportation. The NVIDIA Omniverse development platform is available on Microsoft Azure to enable developers to build advanced physical AI, simulation, and digital twin applications that accelerate industrial digitalization.

As AI continues to evolve, it promises to bring even more profound changes to our world. The journey from a single move on a Go board to the emergence of agentic and physical AI underscores the incredible potential of AI to innovate, transform industries, and elevate our daily lives.

Experience the latest in AI innovation at NVIDIA GTC

Discover cutting-edge AI solutions from Microsoft and NVIDIA that push the boundaries of innovation. Join Microsoft at the NVIDIA GTC AI Conference from March 17 to 21, 2025, in San Jose, California, or virtually.

Visit Microsoft at booth #514 to connect with Azure and NVIDIA AI experts and explore the latest AI technology and hardware. Attend Microsoft’s sessions to learn about Azure’s comprehensive AI platform and accelerate your innovation journey.

Learn more and register today.

This content was produced by Microsoft and NVIDIA. It was not written by MIT Technology Review’s editorial staff.

Designing the future of entertainment

An entertainment revolution, powered by AI and other emerging technologies, is fundamentally changing how content is created and consumed today. Media and entertainment (M&E) brands are faced with unprecedented opportunities—to reimagine costly and complex production workloads, to predict the success of new scripts or outlines, and to deliver immersive entertainment in novel formats like virtual reality (VR) and the metaverse. Meanwhile, the boundaries between entertainment formats—from gaming to movies and back—are blurring, as new alliances form across industries, and hardware innovations like smart glasses and autonomous vehicles make media as ubiquitous as air.

At the same time, media and entertainment brands are facing competitive threats. They must reinvent their business models and identify new revenue streams in a more fragmented and complex consumer landscape. They must keep up with advances in hardware and networking, while building an IT infrastructure to support AI and related technologies. Digital media standards will need to evolve to ensure interoperability and seamless experiences, while companies search for the right balance between human and machine, and protect their intellectual property and data.

This report examines the key technology shifts transforming today’s media and entertainment industry and explores their business implications. Based on in-depth interviews with media and entertainment executives, startup founders, industry analysts, and experts, the report outlines the challenges and opportunities that tech-savvy business leaders will find ahead.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Harnessing cloud and AI to power a sustainable future 

Organizations working toward ambitious sustainability targets are finding an ally in emerging technologies. In agriculture, for instance, AI can use satellite imagery and real-time weather data to optimize irrigation and reduce water usage. In urban areas, cloud-enabled AI can power intelligent traffic systems, rerouting vehicles to cut commute times and emissions. At an industrial level, advanced algorithms can predict equipment failures days or even weeks in advance. 

But AI needs a robust foundation to deliver on its lofty promises—and cloud computing provides that bedrock. As AI and cloud continue to converge and mature, organizations are discovering new ways to be more environmentally conscious while driving operational efficiencies. 

Data from a poll conducted by MIT Technology Review Insights in 2024 suggests growing momentum for this dynamic duo: 38% of executives polled say that cloud and AI are key components of their company’s sustainability initiatives, and another 35% say the combination is making a meaningful contribution to sustainability goals (see Figure 1). 

This enthusiasm isn’t just theoretical, either. Consider that 45% of respondents identified energy consumption optimization as their most relevant use case for AI and cloud in sustainability initiatives. And organizations are backing these priorities with investment—more than 50% of companies represented in the poll plan to increase their spending on cloud and AI-focused sustainability initiatives by 25% or more over the next two years. 

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Reframing digital transformation through the lens of generative AI

Enterprise adoption of generative AI technologies has undergone explosive growth over the past two years. Powerful solutions underpinned by this new generation of large language models (LLMs) have been used to accelerate research, automate content creation, and replace clunky chatbots with AI assistants and more sophisticated AI agents that closely mimic human interaction.

“In 2023 and the first part of 2024, we saw enterprises experimenting, trying out new use cases to see, ‘What can this new technology do for me?’” explains Arthy Krishnamurthy, senior director for business transformation at Dataiku. But while many organizations were eager to adopt and exploit these exciting new capabilities, some may have underestimated the need to thoroughly scrutinize AI-related risks and recalibrate existing frameworks and forecasts for digital transformation.

“Now, the question is more around how fundamentally can this technology reshape our competitive landscape?” says Krishnamurthy. “We are no longer just talking about technological implementation but about organizational transformation. Expansion is not a linear progression but a strategic recalibration that demands deep systems thinking.”

Key to this strategic recalibration will be a refined approach to ROI, delivery, and governance in the context of generative AI-led digital transformation. “This really has to start in the C-suite and at the board level,” says Kevin Powers, director of Boston College Law School’s Master of Legal Studies program in cybersecurity, risk, and governance. “Focus on AI as something that is core to your business. Have a plan of action.”

Download the full article.

Implementing responsible AI in the generative age

Many organizations have experimented with AI, but they haven’t always gotten the full value from their investments. A host of issues standing in the way center on the accuracy, fairness, and security of AI systems. In response, organizations are actively exploring the principles of responsible AI: the idea that AI systems must be fair, transparent, and beneficial to society if the technology is to be widely adopted. 

When responsible AI is done right, it unlocks trust and therefore customer adoption of enterprise AI. According to the US National Institute of Standards and Technology, the essential building blocks of AI trustworthiness include: 

  • Validity and reliability 
  • Safety
  • Security and resiliency 
  • Accountability and transparency 
  • Explainability and interpretability 
  • Privacy
  • Fairness with mitigation of harmful bias 

To investigate the current landscape of responsible AI across the enterprise, MIT Technology Review Insights surveyed 250 business leaders about how they’re implementing principles that ensure AI trustworthiness. The poll found that responsible AI is important to executives, with 87% of respondents rating it a high or medium priority for their organization.

A majority of respondents (76%) also say that responsible AI is a high or medium priority specifically for creating a competitive advantage. But relatively few have figured out how to turn these ideas into reality. We found that only 15% of those surveyed felt highly prepared to adopt effective responsible AI practices, despite the importance they placed on them. 

Putting responsible AI into practice in the age of generative AI requires a series of best practices that leading companies are adopting. These practices can include cataloging AI models and data and implementing governance controls. Companies may benefit from conducting rigorous assessments, testing, and audits for risk, security, and regulatory compliance. At the same time, they should also empower employees with training at scale and ultimately make responsible AI a leadership priority to ensure their change efforts stick. 
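
One way to picture the cataloging and governance controls described above is as a simple model registry with a pre-deployment check. The fields, risk tiers, and sign-off rule in this sketch are illustrative assumptions, not a reference to any specific standard.

    # Illustrative model catalog entry with a pre-deployment governance check.
    # Field names, risk tiers, and the sign-off rule are invented for the
    # example; real programs map these to their own frameworks.
    from dataclasses import dataclass, field

    @dataclass
    class ModelRecord:
        name: str
        owner: str
        intended_use: str
        training_data_sources: list[str]
        risk_tier: str                      # "low", "medium", or "high"
        bias_audit_passed: bool = False
        security_review_passed: bool = False
        approvals: list[str] = field(default_factory=list)

    def ready_to_deploy(m: ModelRecord) -> tuple[bool, list[str]]:
        issues = []
        if not m.bias_audit_passed:
            issues.append("fairness/bias audit missing")
        if not m.security_review_passed:
            issues.append("security review missing")
        if m.risk_tier == "high" and "chief_ai_officer" not in m.approvals:
            issues.append("high-risk models require executive sign-off")
        return (not issues, issues)

    record = ModelRecord(
        name="claims-triage-llm",
        owner="claims-analytics-team",
        intended_use="Prioritize incoming insurance claims for human review.",
        training_data_sources=["internal claims history (de-identified)"],
        risk_tier="high",
        bias_audit_passed=True,
        security_review_passed=True,
    )
    print(ready_to_deploy(record))   # (False, ['high-risk models require executive sign-off'])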

“We all know AI is the most influential change in technology that we’ve seen, but there’s a huge disconnect,” says Steven Hall, chief AI officer and president of EMEA at ISG, a global technology research and IT advisory firm. “Everybody understands how transformative AI is going to be and wants strong governance, but the operating model and the funding allocated to responsible AI are well below where they need to be given its criticality to the organization.” 

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Fueling the future of digital transformation

In the rapidly evolving landscape of digital innovation, staying adaptable isn’t just a strategy—it’s a survival skill. “Everybody has a plan until they get punched in the face,” says Luis Niño, digital manager for technology ventures and innovation at Chevron, quoting Mike Tyson.

Drawing from a career that spans IT, HR, and infrastructure operations across the globe, Niño offers a unique perspective on innovation and how organizational microcultures within Chevron shape how digital transformation evolves. 

Centralized functions prioritize efficiency, relying on tools like AI, data analytics, and scalable system architectures. Meanwhile, business units focus on simplicity and effectiveness, deploying robotics and edge computing to meet site-specific needs and ensure safety.

“From a digital transformation standpoint, what I have learned is that you have to tie your technology to what outcomes drive results for both areas, but you have to allow yourself to be flexible, to be nimble, and to understand that change is constant,” he says.

Central to this transformation is the rise of industrial AI. Unlike consumer applications, industrial AI operates in high-stakes environments where the cost of errors can be severe. 

“The wealth of potential information needs to be contextualized, modeled, and governed because of the safety of those underlying processes,” says Niño. “If a machine reacts in ways you don’t expect, people could get hurt, and so there’s an extra level of care that needs to happen and that we need to think about as we deploy these technologies.”

Niño highlights Chevron’s efforts to use AI for predictive maintenance, subsurface analytics, and process automation, noting that “AI sits on top of that foundation of strong data management and robust telecommunications capabilities.” As such, AI is not just a tool but a transformation catalyst redefining how talent is managed, procurement is optimized, and safety is ensured.

Looking ahead, Niño emphasizes the importance of adaptability and collaboration: “Transformation is as much about technology as it is about people.” With initiatives like the Citizen Developer Program and Learn Digital, Chevron is empowering its workforce to bridge the gap between emerging technologies and everyday operations using an iterative mindset. 

Niño is also keeping watch over the convergence of technologies like AI, quantum computing, the Internet of Things, and robotics, which hold the potential to transform how we produce and manage energy.

“My job is to keep an eye on those developments,” says Niño, “to make sure that we’re managing these things responsibly and the things that we test and trial and the things that we deploy, that we maintain a strict sense of responsibility to make sure that we keep everyone safe, our employees, our customers, and also our stakeholders from a broader perspective.”

This episode of Business Lab is produced in association with Infosys Cobalt.

Full Transcript 

Megan Tatum: From MIT Technology Review, I’m Megan Tatum and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. 

Our topic today is digital transformation. From back-office operations to infrastructure in the field like oil rigs, companies continue to look for ways to increase profit, meet sustainability goals, and invest in the latest and greatest technology. 

Two words for you: enabling innovation. 

My guest is Luis Niño, who is the digital manager of technology ventures and innovation at Chevron. This podcast is produced in association with Infosys Cobalt. 

Welcome, Luis. 

Luis Niño: Thank you, Megan. Thank you for having me. 

Megan: Thank you so much for joining us. Just to set some context, Luis, you’ve had a really diverse career at Chevron, spanning IT, HR, and infrastructure operations. I wonder, how have those different roles shaped your approach to innovation and digital strategy? 

Luis: Thank you for the question. And you’re right, my career has spanned many different areas and geographies in the company. It really feels like I’ve worked for different companies every time I change roles. Like I said, different functions, organizations, and locations: I’ve been based here in Houston, in Bakersfield, California, and in Buenos Aires, Argentina. From an organizational standpoint, I’ve seen central teams, international service centers, as you mentioned, field infrastructure and operations organizations in our business units, and I’ve also had corporate function roles. 

And the reason why I mentioned that diversity is that each one of those looks at digital transformation and innovation through its own lens. From the priority to scale and streamline in central organizations to the need to optimize and simplify out in business units and what I like to call the periphery, you really learn about the concept first off of microcultures and how different these organizations can be even within our own walls, but also how those come together in organizations like Chevron. 

Over time, I would highlight two things. In central organizations, whether that’s functions like IT and HR or our central technical center, we continuously look for efficiencies in scaling and for system architectures that allow for economies of scale. As you can imagine, the name of the game is efficiency. We have also looked to improve the employee experience. We want to orchestrate ecosystems of large technology vendors that give us an edge and move this massive organization forward. In central areas like this, I would say that data analytics, data science, and artificial intelligence have become the fundamental tools to achieve those objectives. 

Now, if you allow that pendulum to swing out to the business units and to the periphery, the name of the game is effectiveness and simplicity. The priority for the business units is to find and execute technologies that help us achieve the local objectives and keep our people safe. Especially when we are talking about our manufacturing environments where there’s risk for our folks. In these areas, technologies like robotics, the Internet of Things, and obviously edge computing are currently the enablers of information. 

I wouldn’t want to miss the opportunity to say that both of those, let’s call it, areas of the company, rely on the same foundation and that is a foundation of strong data management, of strong network and telecommunications capabilities because those are the veins through which the data flows and everything relies on data. 

In my experience, this pendulum also drives our technology priorities and our technology strategy. From a digital transformation standpoint, what I have learned is that you have to tie your technology to what outcomes drive results for both areas, but you have to allow yourself to be flexible, to be nimble, and to understand that change is constant. If you are deploying something in the center and you suddenly realize that some business unit already has a solution, you cannot just say, let’s shut it down and go with what I said. You have to adapt, you have to understand behavioral change management and you really have to make sure that change and adjustments are your bread and butter. 

I don’t know if you know this, Megan, but there’s a popular fight happening this weekend with Mike Tyson and he has a saying, and that is everybody has a plan until they get punched in the face. And what he’s trying to say is you have to be adaptable. The plan is good, but you have to make sure that you remain agile. 

Megan: Yeah, absolutely. 

Luis: And then I guess the last lesson, really quick, is about risk management or maybe risk appetite. Each group has its own risk appetite depending on the lens or where they’re sitting, and this may create some conflict between organizations that want to move really, really fast with urgency and others that want to take a step back and make sure that we’re doing things right. Striking that balance, I think, is in the end a question for leadership, to make sure they have a pulse on our ability to change. 

Megan: Absolutely, and you’ve mentioned a few different elements and technologies I’d love to dig into a bit more detail on. One of which is artificial intelligence because I know Chevron has been exploring AI for several years now. I wonder if you could tell us about some of the AI use cases it’s working on and what frameworks you’ve developed for effective adoption as well. 

Luis: Yeah, absolutely. This is the big one, isn’t it? Everybody’s talking about AI. As you can imagine, the focus in our company is what is now being branded as industrial AI. That’s really a simple term to explain that AI is being applied to industrial and manufacturing settings. And like other AI, and as I mentioned before, the foundation remains data. I want to stress the importance of data here. 

One of the differences however is that in the case of industrial AI, data comes from a variety of sources. Some of them are very critical. Some of them are non-critical. Sources like operating technologies, process control networks, and SCADA, all the way to Internet of Things sensors or industrial Internet of Things sensors, and unstructured data like engineering documentation and IT data. These are massive amounts of information coming from different places and also from different security structures. The complexity of industrial AI is considerably higher than what I would call consumer or productivity AI. 

Megan: Right. 

Luis: The wealth of potential information needs to be contextualized, modeled, and governed because of the safety of those underlying processes. When you’re in an industrial setting, if a machine reacts in ways you don’t expect, people could get hurt, and so there’s an extra level of care that needs to happen and that we need to think about as we deploy these technologies. 

AI sits on top of that foundation and it takes different shapes. It can show up as a copilot like the ones that have been popularized recently, or it can show up as agentic AI, which is something that we’re looking at closely now. And agentic AI is just a term to mean that AI can operate autonomously and can use complex reasoning to solve multistep problems in an industrial setting. 

So with that in mind, going back to your question, we use both kinds of AI for multiple use cases, including predictive maintenance, subsurface analytics, process automation, and workflow optimization, and also end-user productivity. Each one of those use cases obviously needs specific objectives that the business is looking at in each area of the value chain. 

In predictive maintenance, for example, we monitor and analyze equipment health, we prevent failures, and we allow for preventive maintenance and reduced downtime. The AI helps us understand when machinery needs to be maintained in order to prevent failure instead of just waiting for it to happen. In subsurface analysis, we’re exploring AI to develop better models of hydrocarbon reservoirs. We are exploring AI to forecast geomechanical models and to capture and understand data from fiber optic sensing. Fiber optic sensing is a capability that has proven very valuable to us, and AI is helping us make sense of the wealth of information that comes out of the hole, as we like to say. Of course, we don’t do this alone. We partner with many third-party organizations, with vendors, and with subject matter experts inside of Chevron to move the projects forward. 

There are several other areas beyond industrial AI that we are looking at. AI really is a transformation catalyst, and so areas like finance and law and procurement and HR, we’re also doing testing in those corporate areas. I can tell you that I’ve been part of projects in procurement, in HR. When I was in HR we ran a pretty amazing effort in partnership with a third-party company, and what they do is they seek to transform the way we understand talent, and the way they do that is they are trying to provide data-driven frameworks to make talent decisions. 

And so they redefine talent by framing data in the form of skills, and as they do this, they help de-bias processes that are usually or can be usually prone to unconscious biases and perspectives. It really is fascinating to think of your talent-based skills and to start decoupling them from what we know since the industrial era began, which is people fit in jobs. Now the question is more the other way around. How can jobs adapt to people’s skills? And then in procurement, AI is basically helping us open the aperture to a wider array of vendors in an automated fashion that makes us better partners. It’s more cost-effective. It’s really helpful. 

Before I close here, you did reference frameworks, so the framework of industrial AI versus what I call productivity AI, the understanding of the use cases. All of this sits on top of our responsible AI frameworks. We have set up a central enterprise AI organization and they have really done a great job in developing key areas of responsible AI as well as training and adoption frameworks. This includes how to use AI, how not to use AI, what data we can share with the different GPTs that are available to us. 

We are now members of organizations like the Responsible AI Institute, an organization that fosters the safe and trustworthy use of AI. Our own responsible AI framework involves four pillars. The first one is principles: this is how we make sure we continue to stay aligned with the values that drive this company, which we call The Chevron Way. It includes assessment, making sure that we evaluate these solutions in proportion to impact and risk. As I mentioned, when you’re talking about industrial processes, people’s lives are at stake, and so we take a very close look at what we are putting out there and how we ensure that it keeps our people safe. It includes education, which I mentioned: training our people to augment their capabilities and reinforcing responsible principles. And the last of the four is governance, oversight, and accountability through control structures that we are putting in place. 

Megan: Fantastic. Thank you so much for those really fascinating specific examples as well. It’s great to hear about. And digital transformation, which you did touch on briefly, has become critical of course to enable business growth and innovation. I wonder what has Chevron’s digital transformation looked like and how has the shift affected overall operations and the way employees engage with technology as well? 

Luis: Yeah, yeah. That’s a really good question. The term digital transformation is interpreted in many different ways. For me, it really is about leveraging technology to drive business results and to drive business transformation. We usually tend to specify emerging technology as the catalyst for transformation. I think that is okay, but I also think that there are ways that you can drive digital transformation with technology that’s not necessarily emerging but is being optimized, and so under this umbrella, we include everything from our Citizen Developer Program to complex industry partnerships that help us maximize the value of data. 

The Citizen Developer Program has been very successful in helping bridge the gap between our technical software engineer and software development practices and people who are out there doing the work, getting familiar, and demystifying the way to build solutions. 

I do believe that transformation is as much about technology as it is about people. And so to go back to the responsible AI framework, we are actively training and upskilling the workforce. We created a program called Learn Digital that helps employees embrace the technologies. I mentioned the concept of demystifying. It’s really important that people don’t fall into the trap of getting scared by the potential of the technology or the fact that it is new and we help them and we give them the tools to bridge the change management gap so they can get to use them and get the most out of them. 

At a high level, our transformation has followed the cyclical nature that pretty much any transformation does. We have identified the data foundations that we need to have. We have understood the impact of the processes that we are trying to digitize. We organize that information, then we streamline and automate processes, we learn, and now machines learn and then we do it all over again. And so this cyclical mindset, this iterative mindset has really taken hold in our culture and it has made us a little bit better at accepting the technologies that are driving the change. 

Megan: And to look at one of those technologies in a bit more detail, cloud computing has revolutionized infrastructure across industries. But there’s also a pendulum shift now toward hybrid and edge computing models. How is Chevron balancing cloud, hybrid, and edge strategies for optimal performance as well? 

Luis: Yeah, that’s a great question, and I think you could argue that was the genesis of the digital transformation effort. It’s been a journey for us, and I think we’re not the only ones who started it as a cost savings and storage play, but then we got to this ever-increasing need for multiple things, like scaling compute power to support large language models and maximize how we run complex models. There’s an increasing need to store vast amounts of data for training and inference models while we improve data management and predict future needs. 

There’s also the opportunity to eliminate hardware constraints. One of the promises of cloud was that you would be able to ramp up and down depending on your compute needs as projects demanded. And that hasn’t stopped; it has only increased. And then there’s a need to be able to do this at a global level. For a company like ours that is distributed across the globe, we want to do this everywhere while actively managing those resources without the weight of the infrastructure that we used to carry on our books. Cloud has really helped us change the way we think about the digital assets that we have. 

It’s important also that it has created this symbiotic need to grow between AI and the cloud. So you don’t have the AI without the cloud, but now you don’t have the cloud without AI. In reality, we work on balancing the benefits of cloud and hybrid and edge computing, and we keep operational efficiency as our North Star. We have key partnerships in cloud, that’s something that I want to make sure I talk about. Microsoft is probably the most strategic of our partnerships because they’ve helped us set our foundation for cloud. But we also think of the convenience of hybrid through the lens of leveraging a convenient, scalable public cloud and a very secure private cloud that helps us meet our operational and safety needs. 

Edge computing fills the gap or the need for low latency and real-time data processing, which are critical constraints for decision-making in most of the locations where we operate. You can think of an offshore rig, a refinery, an oil rig out in the field, and maybe even not-so-remote areas like here in our corporate offices. Putting that compute power close to the data source is critical. So we work and we partner with vendors to enable lighter compute that we can set at the edge and, I mentioned the foundation earlier, faster communication protocols at the edge that also solve the need for speed. 

But it is important to remember that you don’t want to think about edge computing and cloud as separate things. Cloud supports edge by providing centralized management by providing advanced analytics among others. You can train models in the cloud and then deploy them to edge devices, keeping real-time priorities in mind. I would say that edge computing also supports our cybersecurity strategy because it allows us to control and secure sensitive environments and information while we embed machine learning and AI capabilities out there. 

So I have mentioned use cases like predictive maintenance and safety, those are good examples of areas where we want to make sure our cybersecurity strategy is front and center. When I was talking about my experience I talked about the center and the edge. Our strategy to balance that pendulum relies on flexibility and on effective asset management. And so making sure that our cloud reflects those strategic realities gives us a good footing to achieve our corporate objectives. 

Megan: As you say, safety is a top priority. How do technologies like the Internet of Things and AI help enhance safety protocols, especially in the context of emissions tracking and leak detection? 

Luis: Yeah, thank you for the question. Safety is the most important thing that we think and talk about here at Chevron. There is nothing more important than ensuring that our people are safe and healthy, so I would break safety down into two. Before I jump to emissions tracking and leak detection, I just want to make a quick point on personal safety and how we leverage IoT and AI to that end. 

We use sensing capabilities that help us keep workers out of harm’s way, and so things like computer vision to identify and alert people who are coming into safety areas. We also use computer vision, for example, to identify PPE requirements—personal protective equipment requirements—and so if there are areas that require a certain type of clothing, a certain type of identification, or a hard hat, we are using technologies that can help us make sure people have that before they go into a particular area. 

We’re also using wearables. In one of the use cases, wearables help us track exhaustion and dehydration in locations where that creates inherent risk. In locations that are very hot, whether it’s because of the weather or because they are enclosed, we can use wearables that tell us how fast a person is getting dehydrated, what levels of liquid or sodium they need to make sure they’re safe, or if they need to take a break. We have those capabilities now. 

Going back to emissions tracking and leak detection, I think it’s actually the combination of IoT and AI that can transform how we prevent and react to those. In this case, we also deploy sensing capabilities. We use things like computer vision, like infrared capabilities, and we use others that deliver data to the AI models, which then alert and enable rapid response. 

The way I would explain how we use IoT and AI for safety, whether it’s personnel safety or emissions tracking and leak detection, is to think about sensors as the extension of human ability to sense. In some cases, you could argue it’s super abilities. And so if you think of sight normally you would’ve had supervisors or people out there that would be looking at the field and identifying issues. Well, now we can use computer vision with traditional RGB vision, we can use them with infrared, we can use multi-angle to identify patterns, and have AI tell us what’s going on. 

If you keep thinking about the human senses, that’s sight, but you can also use sound through ultrasonic sensors or microphone sensors. You can use touch through vibration recognition and heat recognition. And even more recently, this is something that we are testing more recently, you can use smell. There are companies that are starting to digitize smell. Pretty exciting, also a little bit crazy. But it is happening. And so these are all tools that any human would use to identify risk. Well, so now we can do it as an extension of our human abilities to do so. This way we can react much faster and better to the anomalies. 

A specific example with methane. We have a simple goal with methane, we want to keep methane in the pipe. Once it’s out, it’s really hard or almost impossible to take it back. Over the last six to seven years, we have reduced our methane intensity by over 60% and we’re leveraging technology to achieve that. We have deployed a methane detection program. We have trialed over 10 to 15 advanced methane detection technologies. 

A technology that I have been looking at recently is called Aquanta Vision. This is a company supported by an incubator program we have called Chevron Studio. We did this in partnership with the National Renewable Energy Laboratory, and what they do is they leverage optical gas imaging to detect methane effectively and to allow us to prevent it from escaping the pipe. So that’s just an example of the technologies that we’re leveraging in this space. 

Megan: Wow, that’s fascinating stuff. And on emissions as well, Chevron has made significant investments in new energy technologies like hydrogen, carbon capture, and renewables. How do these technologies fit into Chevron’s broader goal of reducing its carbon footprint? 

Luis: This is obviously a fascinating space for us, one that is ever-changing. It is honestly not my area of expertise. But what I can say is we truly believe we can achieve high returns and lower carbon, and that’s something that we communicate broadly. A few years ago, I believe it was 2021, we established our Chevron New Energies company and they actively explore lower carbon alternatives including hydrogen, renewables, and carbon capture offsets. 

My area, the digital area, and the convergence between digital technologies and the technical sciences will enable the techno-commercial viability of those business lines. Carbon capture is something that we’ve done for a long time. We have decades of experience in carbon capture technologies across the world. 

One of our larger projects, the Gorgon Project in Australia, has captured, I think, something between 5 and 10 million tons of CO2 emissions in the past few years, and so we have good expertise in that space. But we also actively partner in carbon capture. We have joined carbon capture hubs here in Houston, for example, and we’re investing in companies like Carbon Clean, Carbon Engineering, and Svante. I’m familiar with these names because the corporate VC team is close to me. These companies provide technologies for direct air capture. They provide solutions for hard-to-abate industries. And so we want to keep an eye on these emerging capabilities and make use of them to continuously lower our carbon footprint. 

There are two areas here that I would like to talk about. Hydrogen first. This is another area that we’re familiar with. Our plan is to build on our existing assets and capabilities to deliver a large-scale hydrogen business. We’ve been doing retail hydrogen since 2005, I think, and we also have several partnerships there. In renewables, we are creating a range of fuels for different transportation types: bio-based diesel, renewable natural gas, and sustainable aviation fuel. These are all areas of importance to us. They’re emerging business lines that are young in comparison to the rest of our company. We’ve been a company for more than 140 years, and this started in 2021, so you can imagine how steep that learning curve is. 

I mentioned how we leverage our corporate venture capital team to learn and to keep an eye out for the emerging trends and technologies we want to learn about. They leverage two funds: a core fund, which is focused on innovation for our core business, and a separate future energy fund that explores emerging areas. Not only do they invest in places like hydrogen, carbon capture, and renewables, but they may also invest in other areas like wind, geothermal, and nuclear. So we constantly keep our eyes open for these emerging technologies. 

Megan: I see. And I wonder if you could share a bit more actually about Chevron’s role in driving sustainable business innovation. I’m thinking of initiatives like converting used cooking oil into biodiesel, for example. I wonder how those contribute to that overall goal of creating a circular economy. 

Luis: Yeah, this is fascinating, and I was so happy to learn a little bit more about it this year when I had the chance to visit our offices in Iowa. I’ll get into that in a second. But I’m happy to talk about this, again with the caveat that it’s not my area of expertise. 

Megan: Of course. 

Luis: In the case of biodiesel, we acquired a company called REG in 2022. They were one of the pioneers of the renewable fuels industry, and they honestly do incredible work to create energy through a process, I forget the name of the process to be honest. But at the most basic level, what they do is prepare feedstocks that come from different types of biomass, you mentioned cooking oils, there’s also soybeans and animal fats. Through various chemical reactions, they convert components of the feedstock into biodiesel and glycerin. After that, they separate out the unreacted methanol, which is recovered and recycled into the process, and the biodiesel goes through final processing to make sure it meets the standards necessary to be commercialized. 

What REG has done is it has boosted our knowledge as a broader organization on how to do this better. They continuously look for bio-feedstocks that can help us deliver new types of energy. I had mentioned bio-based diesel. One of the areas that we’re very focused on right now is sustainable aviation fuel. I find that fascinating. The reason why this is working and the reason why this is exciting is because they brought this great expertise and capability into Chevron. And in turn, as a larger organization, we’re able to leverage our manufacturing and distribution capabilities to continue to provide that value to our customers. 

I mentioned that I learned a little bit more about this this year. Earlier in the year, I was lucky enough to visit our REG offices in Ames, Iowa, which is where they’re located. And I will tell you that the passion and commitment those people have for the work they do was incredibly energizing. These are folks who have helped us believe, really, that our promise of lower carbon is attainable. 

Megan: Wow. Sounds like there’s some fascinating work going on. Which brings me to my final question: looking ahead, what emerging technologies are you most excited about, and how do you see them impacting both Chevron’s core business and the energy sector as a whole? 

Luis: Yeah, that’s a great question. I have no doubt that the energy business is changing and will continue to change, only faster, both in our core business and in the way energy is going to look in the future. Honestly, in my line of work, I come across exciting technology every day. The obvious answers are AI and industrial AI. These are things that are already changing the way we live, without a doubt. You can see it in people’s productivity. You can see it in how we optimize and transform workflows. AI is changing everything. I am actually very, very interested in IoT, the Internet of Things, and in robotics. The ability to protect humans in high-risk environments, like I mentioned, is critical to us, as is the opportunity to prevent high-risk events and predict when they’re likely to happen. 

This is pretty massive, both for our productivity objectives and for our lower carbon objectives. If we can predict when we are at risk of particular events, we could avoid them altogether. As I mentioned before, this ubiquitous ability to sense our surroundings is a capability that our industry, and I’m going to say humankind, is only beginning to explore. 

There’s another area that I didn’t talk too much about, which I think is coming, and that is quantum computing. Quantum computing promises to change the way we think of compute power, and it will unlock our ability to simulate chemistry and molecular dynamics in ways we have not been able to do before. When I say molecular dynamics, think of the way that we produce energy today. It is all about the molecule and understanding the interactions between hydrocarbon molecules and the environment. The ability to do that in multi-variable systems is something that we believe quantum can provide an edge on, and so we’re working really hard in this space. 

Yeah, there are so many, and having talked about all of them, AI, IoT, robotics, quantum, the most interesting thing to me is the convergence of all of them. If you think about the opportunity to leverage robotics as the machines increasingly control their own processes and understand what they need to do in a preventive and predictive way, there is such incredible potential to transform our lives, to make an impact in the world for the better. We see that potential. 

My job is to keep an eye on those developments and to make sure that we’re managing these things responsibly, that with the things we test and trial and the things we deploy we maintain a strict sense of responsibility, and that we keep everyone safe: our employees, our customers, and our stakeholders from a broader perspective. 

Megan: Absolutely. Such an important point to finish on. And unfortunately, that is all the time we have for today, but what a fascinating conversation. Thank you so much for joining us on the Business Lab, Luis. 

Luis: Great to talk to you. 

Megan: Thank you so much. That was Luis Niño, digital manager of technology ventures and innovation at Chevron, whom I spoke with today from Brighton, England. 

That’s it for this episode of Business Lab. I’m your host, Megan Tatum, a contributing editor at Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com. 

This show is available wherever you get your podcasts, and if you enjoyed this episode, we really hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review, and this episode was produced by Giro Studios. Thank you so much for listening. 

Training robots in the AI-powered industrial metaverse

Imagine the bustling floors of tomorrow’s manufacturing plant: Robots, well-versed in multiple disciplines through adaptive AI education, work seamlessly and safely alongside human counterparts. These robots can transition effortlessly between tasks, from assembling intricate electronic components to handling complex machinery. Each robot’s unique education enables it to predict maintenance needs, optimize energy consumption, and innovate processes on the fly, guided by real-time data analyses and experiences learned in their digital worlds.

Training for robots like this will happen in a “virtual school,” a meticulously simulated environment within the industrial metaverse. Here, robots learn complex skills on accelerated timeframes, acquiring in hours what might take humans months or even years.

Beyond traditional programming

Training for industrial robots was once like a traditional school: rigid, predictable, and limited to practicing the same tasks over and over. But now we’re at the threshold of the next era. Robots can learn in “virtual classrooms”—immersive environments in the industrial metaverse that use simulation, digital twins, and AI to mimic real-world conditions in detail. This digital world can provide an almost limitless training ground that mirrors real factories, warehouses, and production lines, allowing robots to practice tasks, encounter challenges, and develop problem-solving skills. 

What once took days or even weeks of real-world programming, with engineers painstakingly adjusting commands to get the robot to perform one simple task, can now be learned in hours in virtual spaces. This approach, known as simulation to reality (Sim2Real), blends virtual training with real-world application, bridging the gap between simulated learning and actual performance.
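
To make the Sim2Real idea concrete, here is a minimal, self-contained Python sketch, not any vendor’s actual pipeline: a simple controller gain is tuned cheaply in a simulated environment and then evaluated in a “real” environment whose dynamics differ slightly. The class, parameters, and dynamics are invented for illustration; the gap between the two final scores is the sim-to-real gap that on-hardware fine-tuning is meant to close.

```python
import random

class CartEnv:
    """Toy 1-D 'move to the target' environment. Dynamics are invented for illustration."""
    def __init__(self, friction):
        self.friction = friction

    def rollout(self, gain, steps=200):
        pos, vel, target = 0.0, 0.0, 1.0
        for _ in range(steps):
            force = gain * (target - pos)               # proportional controller
            vel += 0.05 * (force - self.friction * vel)
            pos += 0.05 * vel
        return -abs(target - pos)                       # reward: closeness to target at the end

sim = CartEnv(friction=0.8)    # fast, cheap simulator
real = CartEnv(friction=1.1)   # the "real" robot behaves slightly differently

# Virtual training: random hill-climbing on the controller gain, hundreds of trials in seconds.
best_gain, best_reward = 1.0, sim.rollout(1.0)
for _ in range(500):
    candidate = best_gain + random.gauss(0, 0.2)
    reward = sim.rollout(candidate)
    if reward > best_reward:
        best_gain, best_reward = candidate, reward

print(f"gain learned in simulation: {best_gain:.2f}")
print(f"score in simulation:        {sim.rollout(best_gain):.4f}")
print(f"score on 'real' hardware:   {real.rollout(best_gain):.4f}")
```

In practice the simulated environment would be a physics-based digital twin and the policy a neural network, but the workflow has the same shape: train quickly and safely in the virtual environment, then transfer to hardware and fine-tune.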

Although the industrial metaverse is still in its early stages, its potential to reshape robotic training is clear, and these new ways of upskilling robots can enable unprecedented flexibility.

Italian automation provider EPF found that AI shifted the company’s entire approach to developing robots. “We changed our development strategy from designing entire solutions from scratch to developing modular, flexible components that could be combined to create complete solutions, allowing for greater coherence and adaptability across different sectors,” says EPF’s chairman and CEO Franco Filippi.

Learning by doing

AI models gain power when trained on vast amounts of data, such as large sets of labeled examples, learning categories or classes by trial and error. In robotics, however, this approach would require hundreds of hours of robot time and human oversight to train a single task. Even the simplest of instructions, like “grab a bottle,” could result in many varied outcomes depending on the bottle’s shape, color, and environment. Training then becomes a monotonous loop that yields little significant progress for the time invested.

Building AI models that can generalize and then successfully complete a task regardless of the environment is key for advancing robotics. Researchers from New York University, Meta, and Hello Robot have introduced robot utility models that achieve a 90% success rate in performing basic tasks across unfamiliar environments without additional training. Large language models are used in combination with computer vision to provide continuous feedback to the robot on whether it has successfully completed the task. This feedback loop accelerates the learning process by combining multiple AI techniques—and avoids repetitive training cycles.
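
Below is a hedged Python sketch of that feedback loop, with the two hard parts stubbed out: policy_attempt_task stands in for the robot’s manipulation policy and vlm_judges_success stands in for a vision-language model asked whether a camera frame shows the task completed. Both names are invented placeholders; the point is the retry-until-confirmed structure rather than any specific model.

```python
import random

def policy_attempt_task(task):
    """Placeholder for one attempt by the robot's policy; returns a fake camera frame."""
    return {"task": task, "frame_id": random.randint(0, 9999)}

def vlm_judges_success(frame, task):
    """Placeholder for a vision-language model answering:
    'Does this image show the task completed?' Succeeds 70% of the time in this sketch."""
    return random.random() < 0.7

def run_with_feedback(task, max_attempts=5):
    """Attempt the task, ask the judge, and retry until success is confirmed."""
    for attempt in range(1, max_attempts + 1):
        observation = policy_attempt_task(task)
        if vlm_judges_success(observation, task):
            print(f"'{task}' confirmed complete on attempt {attempt}")
            return True
        print(f"attempt {attempt} judged unsuccessful, retrying")
    return False

run_with_feedback("pick up the bottle and place it in the bin")
```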

Robotics companies are now implementing advanced perception systems capable of training and generalizing across tasks and domains. For example, EPF worked with Siemens to integrate visual AI and object recognition into its robotics to create solutions that can adapt to varying product geometries and environmental conditions without mechanical reconfiguration.

Learning by imagining

Scarcity of training data is a constraint for AI, especially in robotics. However, innovations that use digital twins and synthetic data to train robots have significantly improved on previously costly approaches.

For example, Siemens’ SIMATIC Robot Pick AI expands on this vision of adaptability, transforming standard industrial robots—once limited to rigid, repetitive tasks—into complex machines. Trained on synthetic data—virtual simulations of shapes, materials, and environments—the AI prepares robots to handle unpredictable tasks, like picking unknown items from chaotic bins, with over 98% accuracy. When mistakes happen, the system learns, improving through real-world feedback. Crucially, this isn’t just a one-robot fix. Software updates scale across entire fleets, upgrading robots to work more flexibly and meet the rising demand for adaptive production.

Another example is the robotics firm ANYbotics, which generates 3D models of industrial environments that function as digital twins of real environments. Operational data, such as temperature, pressure, and flow rates, are integrated to create virtual replicas of physical facilities where robots can train. An energy plant, for example, can use its site plans to generate simulations of inspection tasks it needs robots to perform in its facilities. This speeds the robots’ training and deployment, allowing them to perform successfully with minimal on-site setup.

Simulation also allows for the near-costless multiplication of robots for training. “In simulation, we can create thousands of virtual robots to practice tasks and optimize their behavior. This allows us to accelerate training time and share knowledge between robots,” says Péter Fankhauser, CEO and co-founder of ANYbotics.
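
As a rough, hypothetical illustration of what pooling experience from thousands of virtual robots can look like, the Python below spins up many copies of a toy inspection simulation (sequentially here, though in practice they would run in parallel on a cluster) and merges everything they observe into one shared training dataset. The environment, readings, and thresholds are invented for the sketch and are not ANYbotics’ software.

```python
import random

class InspectionSim:
    """Toy stand-in for a digital twin of an inspection route."""
    def __init__(self, seed):
        self.rng = random.Random(seed)

    def rollout(self):
        """One simulated inspection run: a (gauge reading, anomaly label) pair per checkpoint."""
        experience = []
        for _ in range(10):                        # 10 checkpoints per route
            reading = self.rng.gauss(50.0, 5.0)    # e.g. a simulated pressure gauge
            experience.append((reading, reading > 60.0))
        return experience

def pooled_experience(num_virtual_robots):
    """Run many virtual robots and merge their observations into one dataset."""
    dataset = []
    for robot_id in range(num_virtual_robots):
        dataset.extend(InspectionSim(seed=robot_id).rollout())
    return dataset

data = pooled_experience(num_virtual_robots=1000)
print(f"{len(data)} labeled observations collected from 1,000 virtual robots")
```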

Because robots need to understand their environment regardless of orientation or lighting, ANYbotics and partner Digica created a method of generating thousands of synthetic images for robot training. By removing the painstaking work of collecting huge numbers of real images from the shop floor, this approach drastically reduces the time needed to teach robots what they need to know.

Similarly, Siemens leverages synthetic data to generate simulated environments to train and validate AI models digitally before deployment into physical products. “By using synthetic data, we create variations in object orientation, lighting, and other factors to ensure the AI adapts well across different conditions,” says Vincenzo De Paola, project lead at Siemens. “We simulate everything from how the pieces are oriented to lighting conditions and shadows. This allows the model to train under diverse scenarios, improving its ability to adapt and respond accurately in the real world.”
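
The randomization De Paola describes can be pictured as sampling a fresh set of scene parameters for every synthetic image before it is rendered. The parameter names and ranges below are illustrative assumptions, not Siemens’ actual configuration; each dictionary would be handed to a renderer that produces the image together with its ground-truth label.

```python
import random

def randomized_scene():
    """Draw one set of scene parameters for a single synthetic training image."""
    return {
        "object_yaw_deg": random.uniform(0.0, 360.0),     # object orientation
        "object_tilt_deg": random.uniform(-15.0, 15.0),
        "light_azimuth_deg": random.uniform(0.0, 360.0),  # lighting direction
        "light_intensity": random.uniform(0.3, 1.0),
        "shadow_softness": random.uniform(0.0, 1.0),
        "background_id": random.randrange(20),            # one of 20 clutter backdrops
    }

def generate_dataset(n_images):
    """Parameter sets for a whole synthetic dataset; rendering happens downstream."""
    return [randomized_scene() for _ in range(n_images)]

dataset = generate_dataset(10_000)
print(dataset[0])
```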

Digital twins and synthetic data have proven powerful antidotes to data scarcity and costly robot training. Robots that train in artificial environments can be prepared quickly and inexpensively for wide varieties of visual possibilities and scenarios they may encounter in the real world. “We validate our models in this simulated environment before deploying them physically,” says De Paola. “This approach allows us to identify any potential issues early and refine the model with minimal cost and time.”

This technology’s impact can extend beyond initial robot training. If the robot’s real-world performance data is used to update its digital twin and analyze potential optimizations, the result is a dynamic cycle of improvement that systematically enhances the robot’s learning, capabilities, and performance over time.
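
One way to picture that cycle, as a hedged sketch rather than any vendor’s method, reusing the same toy 1-D dynamics as the earlier Sim2Real example: logged outcomes from the deployed robot are used to re-fit a single parameter of the digital twin (here, friction, by grid search), and the better-calibrated twin then hosts the next round of virtual training.

```python
import random

def simulate_final_position(friction, gain=1.0, steps=60):
    """Toy digital twin: predicted end position of a short 1-D positioning move."""
    pos, vel, target = 0.0, 0.0, 1.0
    for _ in range(steps):
        force = gain * (target - pos)
        vel += 0.05 * (force - friction * vel)
        pos += 0.05 * vel
    return pos

# Pretend these measurements came from the deployed robot's logs (true friction ~ 1.1).
real_logs = [simulate_final_position(1.1) + random.gauss(0, 0.002) for _ in range(20)]
real_avg = sum(real_logs) / len(real_logs)

# Re-calibrate the twin: grid-search the friction value that best reproduces the logs.
candidates = [0.6 + 0.05 * i for i in range(21)]   # 0.60 .. 1.60
best_friction = min(candidates, key=lambda f: abs(simulate_final_position(f) - real_avg))

print(f"average end position observed on hardware: {real_avg:.4f}")
print(f"friction adopted by the updated twin:      {best_friction:.2f}")
# The next round of virtual training then runs against the re-calibrated twin.
```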

The well-educated robot at work

With AI and simulation powering a new era in robot training, organizations will reap the benefits. Digital twins allow companies to deploy advanced robotics with dramatically reduced setup times, and the enhanced adaptability of AI-powered vision systems makes it easier for companies to alter product lines in response to changing market demands.

The new ways of schooling robots are transforming investment in the field by also reducing risk. “It’s a game-changer,” says De Paola. “Our clients can now offer AI-powered robotics solutions as services, backed by data and validated models. This gives them confidence when presenting their solutions to customers, knowing that the AI has been tested extensively in simulated environments before going live.”

Filippi envisions this flexibility enabling today’s robots to make tomorrow’s products. “The need in one or two years’ time will be for processing new products that are not known today. With digital twins and this new data environment, it is possible to design today a machine for products that are not known yet,” says Filippi.

Fankhauser takes this idea a step further. “I expect our robots to become so intelligent that they can independently generate their own missions based on the knowledge accumulated from digital twins,” he says. “Today, a human still guides the robot initially, but in the future, they’ll have the autonomy to identify tasks themselves.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.