Mapping the micro and macro of biology with spatial omics and AI

37 trillion. That is the number of cells that form a human being. How they all work together to sustain life is possibly the biggest unsolved puzzle in biology. A group of up-and-coming technologies for spatially resolved multi-omics, here collectively called “spatial omics,” may provide researchers with the solution.

Over the last 20 years, the omics revolution has enabled us to understand cell and tissue biology at ever-increasing resolution. Bulk sequencing techniques that emerged in the mid-2000s allowed the study of mixed populations of cells. A decade later, single-cell omics methods became commercially available, revolutionizing our understanding of cell physiology and pathology. These methods, however, required dissociating cells from their tissue of origin, making it impossible to study their spatial organization in tissue.

Spatial omics refers to the ability to measure the activity of biomolecules (RNA, DNA, proteins, and other molecular classes) in situ—directly from tissue samples. This is important because many biological processes are controlled by highly localized interactions between cells that take place in spatially heterogeneous tissue environments. Spatial omics allows previously unobservable cellular organization and biological events to be viewed in unprecedented detail.

A few years ago, these technologies were just prototypes in a handful of labs around the world. They worked only on frozen tissue and they required impractically large amounts of precious tissue biopsies. But as these challenges have been overcome and the technologies commercialized by life science technology providers, these tools have become available to the wider scientific community. Spatial omics technologies are now improving at a rapid pace, increasing the number of biomolecules that can be profiled from hundreds to tens of thousands, while increasing resolution to single-cell and even subcellular scales.

Complementary advances in data and AI will expand the impact of spatial omics on life sciences and health care—while also raising new questions. How are we going to generate the large datasets that are necessary to make clinically relevant discoveries? What will data scientists see in spatial omics data through the lens of AI?

Discovery requires large-scale spatial omics datasets

Several areas of life science are already benefiting from discoveries made possible by spatial omics, with the biggest impacts in cancer and neurodegenerative disease research. However, spatial omics technologies are very new, and experiments are challenging and costly to execute. Most current studies are performed by single institutions and include only a few dozen patients. Complex cell interactions are highly patient-specific and cannot be fully understood from such small cohorts. Researchers need much larger datasets to enable hypothesis generation and discovery.

This requires a shift in mentality toward collaborative projects, which can generate large-scale reference datasets on both healthy organs and human diseases. Initiatives such as The Cancer Genome Atlas (TCGA) have transformed our understanding of cancer. Similar large-scale spatial omics efforts are needed to systematically interrogate the role of spatial organization in healthy and diseased tissues; they will generate large datasets to fuel many discovery programs. In addition, collaborative initiatives steer further improvement of spatial omics technologies, generate data standards and infrastructures for data repositories, and drive the development and adoption of computational tools and algorithms.

At Owkin we are pioneering the generation of such datasets. In June 2023, we launched an initiative to create the world’s largest spatial omics dataset in cancer, with a vision to include data from 7,000 patients with seven difficult-to-treat cancers. The project, known as MOSAIC (Multi-Omics Spatial Atlas in Cancer), won’t stop at data generation, but will mine the data to learn disease biology and identify new molecular targets against which to design new drugs.

Owkin is well placed to drive this kind of initiative. We can tap a vast network of collaborating hospitals across the globe: to create the MOSAIC dataset, we are working with five world-class cancer research hospitals. And we have deep experience in AI: In the last five years, we have published 54 research papers generating AI methodological innovation and building predictive models in several disease areas, including many types of cancer.

AI’s transformative role in discovering new biology

Spatial omics was recognized as Method of the Year 2020 by Nature Methods, and it was named one of the top 10 emerging technologies by the World Economic Forum in 2023—alongside generative AI.

With these two technologies developing in tandem, the opportunities for AI-driven biological discoveries from spatial omics are numerous. Looking at the fast-evolving landscape of spatial omics AI methods, we see two broad categories of new methods breaking through.

In the first category are AI methods that aim to improve the usability of spatial omics and enable richer downstream analyses for researchers. Such methods are specially designed to deal with the high dimensionality and low signal-to-noise ratio that are specific to spatial omics. Some are used to remove technical artifacts and batch effects from the data. Other methods, collectively known as “super-resolution methods,” use AI to increase the resolution of spatial omics assays to near single-cell levels. Another group of approaches looks to integrate dissociated single-cell omics with spatial omics. Collectively, these AI methods are bridging the gap with future spatial omics technologies.
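As a toy illustration of this first category, the sketch below removes a simple additive batch effect by centering each gene’s expression within each batch. The batch names, genes, and values here are hypothetical, and real spatial omics pipelines use far more sophisticated statistical and deep-learning models; this only shows the core idea of per-batch correction:

```python
from collections import defaultdict
from statistics import mean

# Each entry is one tissue spot: (batch_id, {gene: expression value}).
# All names and numbers are illustrative, not real data.
expression = [
    ("batch_a", {"GENE1": 5.0, "GENE2": 2.0}),
    ("batch_a", {"GENE1": 7.0, "GENE2": 4.0}),
    ("batch_b", {"GENE1": 9.0, "GENE2": 1.0}),
    ("batch_b", {"GENE1": 11.0, "GENE2": 3.0}),
]

def center_by_batch(spots):
    """Subtract each batch's per-gene mean so batches share a common baseline."""
    values = defaultdict(list)
    for batch, genes in spots:
        for gene, value in genes.items():
            values[(batch, gene)].append(value)
    means = {key: mean(vals) for key, vals in values.items()}
    return [
        (batch, {g: v - means[(batch, g)] for g, v in genes.items()})
        for batch, genes in spots
    ]

corrected = center_by_batch(expression)
```

After centering, each gene’s mean within each batch is zero, so downstream analyses no longer mistake batch identity for biology.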

In the second category, AI methods aim to discover new biology from spatial omics. By exploiting the localization information of spatial omics, they shed light on how groups of cells organize and communicate with unprecedented resolution. Such methods are sharpening our understanding of how cells interact to form complex tissues.

At Owkin, we are developing methods to identify new therapeutic targets and patient subpopulations using spatial omics. We have pioneered methods allowing researchers to understand how cancer patient outcomes are linked to tumor heterogeneity, directly from tumor biopsy images. Building on this expertise and the MOSAIC consortium, we are developing the next generation of AI methods, which will link patient-level outcomes with an understanding of disease heterogeneity at the molecular level.

Looking ahead

Spatial biology has the potential to radically change our understanding of biology. It will change how we see a biomarker, going from the mere presence of a particular molecule in a sample to patterns of cells expressing a certain molecule in a tissue. Promising research on spatial biomarkers has been published for several diseases, including Alzheimer’s disease and ovarian cancer. Spatial omics has already been used in research associated with clinical trials to monitor tumor progression in patients.

Five years from now, spatial technologies will be capable of mapping every human protein, RNA, and metabolite at subcellular resolution. The computing infrastructure to store and analyze spatial omics data will be in place, as will the necessary standards for data and metadata and the analytical algorithms. The tumor microenvironment and cellular composition of difficult-to-treat cancers will be mapped through collaborative efforts such as MOSAIC.

Spatial omics datasets from patient biopsies will quickly become an essential part of pharmaceutical R&D, and through the lens of AI methods, they will be used to inform the design of new, more efficacious drugs and to drive faster and better-designed clinical trials to bring those drugs to patients. In the clinic, spatial omics data will routinely be collected from patients, and doctors will use purpose-built AI models to extract clinically relevant information about a patient’s tumor and what drugs it will best respond to.

Today we are witnessing the convergence of three forces: spatial omics technologies becoming increasingly high-throughput and high-resolution, large-scale datasets from patient biopsies being generated, and AI models becoming ever more sophisticated. Together, they will allow researchers to dissect the complex biology of health and diseases, enabling ever more sophisticated therapeutic interventions. 

Davide Mantiero, PhD, Joseph Lehár, PhD, and Darius Meadon also contributed to this piece.

This content was produced by Owkin. It was not written by MIT Technology Review’s editorial staff.

Capitalizing on machine learning with collaborative, structured enterprise tooling teams

Advances in machine learning (ML) and AI are emerging on a near-daily basis—meaning that industry, academia, government, and society writ large are evolving their understanding of the associated risks and capabilities in real time. As enterprises seek to capitalize on the potential of AI, it’s critical that they develop, maintain, and advance state-of-the-art ML practices and processes that offer both strong governance and the flexibility to adapt as technology requirements, capabilities, and business imperatives change.

That’s why it’s critical to have strong ML operations (MLOps) tooling, practices, and teams—those that build and deploy a set of software development practices that keep ML models running effectively and with agility. Capital One’s core ML engineering teams demonstrate firsthand the benefits collaborative, well-managed, and adaptable MLOps teams can bring to enterprises in the rapidly evolving AI/ML space. Below are key insights and lessons learned during Capital One’s ongoing technology and AI journey.

Standardized, reusable components are critical

Most MLOps teams have people with extensive software development skills who love to build things. But the continuous build of new AI/ML tools must also be balanced with efficiency, governance, and risk mitigation.

Many engineers today are experimenting with new generative AI capabilities. It’s exciting to think about the possibilities that something like code generation can unlock for efficiency and standardization, but auto-generated code also requires sophisticated risk management and governance processes before it can be accepted into any production environment. Furthermore, a one-size-fits-all approach to things like generating code won’t work for most companies, which have industry, business, and customer-specific circumstances to account for.

As enterprise platform teams continue to explore the evolution of ML tools and techniques while prioritizing reusable tools and components, they can look to build upon open-source capabilities. One example is scikit-learn, a Python library containing numerous supervised and unsupervised learning algorithms, which has a strong user community behind it and can be used as a foundation to customize further for specific, reusable enterprise needs.
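As a sketch of what such a standardized, reusable component might look like, the class below follows scikit-learn’s fit/transform convention in pure Python (it has no dependency on the library itself, and the class name is hypothetical). A real enterprise component would typically subclass scikit-learn’s BaseEstimator and TransformerMixin so it plugs directly into Pipeline objects:

```python
from statistics import mean, pstdev

class StandardScalerLike:
    """Reusable z-score scaler following scikit-learn's fit/transform convention."""

    def fit(self, rows):
        # Transpose rows into per-column value tuples, then learn statistics.
        cols = list(zip(*rows))
        self.means_ = [mean(c) for c in cols]
        self.stds_ = [pstdev(c) or 1.0 for c in cols]  # guard against zero variance
        return self  # returning self allows fit(...).transform(...) chaining

    def transform(self, rows):
        # Standardize each value using the statistics learned in fit().
        return [
            [(v - m) / s for v, m, s in zip(row, self.means_, self.stds_)]
            for row in rows
        ]

data = [[1.0, 10.0], [3.0, 30.0]]
scaled = StandardScalerLike().fit(data).transform(data)
```

Because every component exposes the same fit/transform interface, teams can swap implementations without rewriting the surrounding workflow—exactly the kind of reuse and standardization that balances innovation with governance.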

Cross-team communication is vital

Most large enterprises have data scientists and engineers working on projects in different parts of the company. This can make it difficult to know where new technologies and tools are being built, resulting in arbitrary uniqueness: teams unknowingly building one-off duplicates of existing tools.

This underscores the importance of creating a collaborative team culture where communication about the big picture, strategic goals, and initiatives is prioritized—including the ability to find out where tools are being built and evolved. What does this look like in practice?

Ensure your team knows what tools and processes it owns and contributes to. Make it clear how their work supports the broader company’s mission. Empower your team to reuse existing work rather than build everything from scratch. Incentivize reuse and standardization. It takes time and effort to create a culture of “innersourcing” innovation and to build communication mechanisms for clarity and context, but it’s well worth it to ensure long-term value creation, innovation, and efficiency.

Tools must map to business outcomes

Enterprise MLOps teams have a broader role than building tools for data scientists and engineers: they need to ensure those tools both mitigate risk and enable more streamlined, nimble technology capabilities for their business partners. Before setting off on building new AI/ML capabilities, engineers and their partners should ask themselves a few core questions. Does this tool actually help solve a core problem for the business? Will business partners be able to use it? Will it work with existing tools and processes? How quickly can we deliver it, and is there something similar that already exists that we should build upon first?

Having centralized enterprise MLOps and engineering teams ask these questions can free up the business to solve customer problems, and to consider how technology can continue to support the evolution of new solutions and experiences.

Don’t simply hire unicorns; build them

There’s no question that delivering for the needs of business partners in the modern enterprise takes significant amounts of MLOps expertise. It requires both software engineering and ML engineering experience, and—especially as AI/ML capabilities evolve—people with deeply specialized skill sets, such as graphics processing unit (GPU) expertise.

Instead of hiring a “unicorn” individual, companies should focus on building a unicorn team with the best of both worlds. This means having deep subject matter experts in science, engineering, statistics, product management, DevOps, and other disciplines. These complementary skill sets add up to a more powerful collective. Individuals’ ability to work effectively as a team, curiosity for learning, and empathy for the problems you’re solving are just as important as their unique domain skills.

Develop a product mindset to produce better tools

Last but not least, it’s important to take a product mindset when building new AI and ML tools for internal customers and business partners. This means not treating what you build as just a task or project to be checked off a list, but understanding the customer you’re building for and taking a holistic approach that works back from their needs.

Often, the products MLOps teams build—whether it’s a new feature library or an explainability tool—look different than what traditional product managers deliver, but the process for creating great products should be the same. Focusing on the customer needs and pain points helps everyone deliver better products; it’s a muscle that many data science and engineering experts have to build, but ultimately helps us all create better tooling and deliver more value for the customer.

The bottom line is that today, the most effective MLOps strategies are not just about technical capabilities, but also involve intentional and thoughtful culture, collaboration, and communication strategies. In large enterprises, it’s important to be cognizant that no one operates in a vacuum. As hard as it may be to see in the day-to-day, everything within the enterprise is ultimately connected, and the capabilities that AI/ML tooling and engineering teams bring to bear have important implications for the entire organization.

This content was produced by Capital One. It was not written by MIT Technology Review’s editorial staff.

Sustainability starts with the data center

When asked why he targeted banks, notorious criminal Willie Sutton reportedly answered, “Because that’s where the money is.” Similarly, when thoughtful organizations target sustainability, they look to their data centers—because that’s where the carbon emissions are.

The International Energy Agency (IEA) attributes about 1.5% of total global electricity use to data centers and data transmission networks. This figure is much higher, however, in countries with booming data storage sectors: in Ireland, 18% of electricity consumption was attributable to data centers in 2022, and in Denmark, it is projected to reach 15% by 2030. And while there have been encouraging shifts toward green-energy sources and increased deployment of energy-efficient hardware and software, organizations need to accelerate their data center sustainability efforts to meet ambitious net-zero targets.

For data center operators, options for boosting sustainability include shifting energy sources, upgrading physical infrastructure and hardware, improving and automating workflows, and updating the software that manages data center storage. Hitachi Vantara estimates that emissions attributable to data storage infrastructure can be reduced as much as 96% by using a combination of these approaches.

Critics might counter that, though data center decarbonization is a worthy social goal, it also imposes expenses that a company focused on its bottom line can ill afford. This, however, is a shortsighted view.

Data center decarbonization initiatives can provide an impetus that enables organizations to modernize, optimize, and automate their data centers. This leads directly to improved performance of mission-critical applications, as well as a smaller, denser, more efficient data center footprint—which then creates savings via reduced energy costs. And modern data storage and management solutions, beyond supporting sustainability, also create a unified platform for innovation and new business models through advanced data analytics, machine learning, and AI.

Dave Pearson, research vice president at IDC, says, “Decarbonization and the more efficient energy utilization of the data center are supported by the same technologies that support data center modernization. Modernization has sustainability goals, but obviously it provides all kinds of business benefits, including enabling data analytics and better business processes.”

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Augmenting the realities of work

Imagine an integrated workplace with 3D visualizations that augment presentations, interactive and accelerated onboarding, and controlled training simulations. This is the future of immersive technology that Blair MacIntyre, global head of Immersive Technology Research at JPMorgan Chase, is working to build. Augmented reality (AR) and virtual reality (VR) technologies can blend physical and digital dimensions and infuse new innovations and efficiencies into business and customer experiences.

“These technologies can offer newer ways of collaborating over distance both synchronously and asynchronously than we can get with the traditional work technologies that we use right now,” says MacIntyre. “It’s these new ways to collaborate, ways of using the environment and space in new and interesting ways that will hopefully offer new value and change the way we work.”

Many enterprises are integrating VR into business practices like video conference calls. But having some participants in a virtual world and some sidelined creates imbalances in the employee experience. MacIntyre’s team is looking for ways to use AR/VR technologies that can be additive, like 3D data visualizations that enhance financial forecasting within a bank, not ones that overhaul entire experiences.

Although the potential of AR/VR is quickly evolving, it’s unlikely that customers’ interactions or workplace environments will be entirely moved to the virtual world anytime soon. Rather, MacIntyre’s immersive technology research looks to infuse efficiencies into existing practices.

“It’s thinking about how the technologies integrate and how we can add value where there is value and not trying to replace everything we do with these technologies,” MacIntyre says.

AI can help remove some of the tedium that has made immersive technologies impractical for widespread enterprise use in the past. Working in VR can mean going without note-taking, traditional input devices, and easy access to files. AI tools can take and transcribe notes and fill in other gaps to remove that friction and eliminate redundancies.

Connected Internet of Things (IoT) devices are also key to enabling AR/VR technologies. To create a valuable immersive experience, MacIntyre says, it’s imperative to know as much as possible about the user’s surrounding world, as well as their needs, habits, and preferences.

“If we can figure out more ways of enabling people to work together in a distributed way, we can start enabling more people to participate meaningfully in a wider variety of jobs,” says MacIntyre.

This episode of Business Lab is produced in association with JPMorgan Chase.

Full transcript

Laurel: From MIT Technology Review, I’m Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

Our topic today is emerging technologies, specifically, immersive technologies like augmented and virtual reality. Keeping up with technology trends may be a challenge for most enterprises, but it’s a critical way to think about future possibilities from product to customer service to employee experience. Augmented and virtual realities aren’t necessarily new, but when it comes to applying them beyond gaming, it’s a brave new world.

Two words for you: emerging realities.

My guest is Blair MacIntyre, who is the global head of Immersive Technology Research at JPMorgan Chase.

This podcast is produced in association with JPMorgan Chase.

Welcome, Blair.

Blair MacIntyre: Thank you. It’s great to be here.

Laurel: Well, let’s do a little bit of context setting. Your career has been focused on researching and exploring immersive technology, including software and design tools, privacy and ethics, and game and experience design. So what brought you to JPMorgan Chase, and could you describe your current role?

Blair: So before joining the firm, I had spent the last 23 years as a professor at Georgia Tech and Northeastern University. During that time, as you say, I explored a lot of ways that we can both create things with these technologies, immersive technologies and also, what they might be useful for and what the impacts on people in society and how we experience life are. But as these technologies have become more real, moved out of the lab, starting to see real products from real companies, we have this opportunity to actually see how they might be useful in practice and to have, for me, an impact on how these technologies will be deployed and used that goes beyond the traditional impact that professors might have. So beyond writing papers, beyond teaching students. That’s what brought me to the firm, and so my current role is, really, to explore that, to understand all the different ways that immersive technology could impact the firm and its customers. Right? So we think about not just customer-facing and not just products, but also employees and their experience as well.

Laurel: That’s really interesting. So why does JPMorgan Chase have a dedicated immersive technology focus in its global technology applied research division, and what are the primary goals of your team’s research within finance and large enterprises as a whole?

Blair: That’s a great question. So JPMorgan Chase has a fairly wide variety of research going on within the company. There’s large efforts in AI/ML, in quantum computing, blockchain. So they’re interested in looking at all of the range of new technologies and how they might impact the firm and our customers, and immersive technologies represent one of those technologies that could over time have a relatively large impact, I think, especially on the employee experience and how we interact with our customers. So they really want to have a group of people focusing on, really, looking both in the near and long term, and thinking about how we can leverage the technology now and how we might be able to leverage it down the road, and not just how we can, but what we should not do. Right? So we’re interested in understanding of these applications that are being proposed or people are imagining could be used. Which ones actually have value to the company, and which ones may not actually have value in practice?

Laurel: So when people think of immersive technologies like augmented reality and virtual reality, AR and VR, many think of headsets or smartphone apps for gaming and retail shopping experiences. Could you give an overview of the state of immersive technology today and what use cases you find to be the most innovative and interesting in your research?

Blair: So, as you say, I think many people think about smartphones, and we’ve seen, at least in movies and TV shows, head mounts of various kinds. The market, I would divide it right now into the two parts, the handheld phone and tablet experience. So you can do augmented reality now, and that really translates to we take the camera feed, and we can overlay computer graphics on it to do things like see what something you might want to buy looks like in your living room or do, in an enterprise situation, remote maintenance assistance where I can take my phone, point it at a piece of technology, and a remote expert could draw on it or help me do something with it.

There’s the phone-based things, and we carry these things in our pockets all the time, and they’re relatively cheap. So there’s a lot of opportunities when it’s appropriate to use those, but the big downside of those devices is that you have to hold them in your hands, so if you wanted to try to put information all around you, you would have to hold the device up and look around, which is uncomfortable and awkward. So that is where the head mount displays come in.

So either virtual reality displays, which right now many of us associate with computer games and education in the consumer world, or augmented reality displays. These sorts of displays now let us do the same kind of things we might do with our phones, but we can do it without our hands having to hold something, so we can be doing whatever work it was we wanted to do, right? Repairing the equipment, taking notes, working with things in the world around us, and we can have information spread all around us, which I think is the big advantage of head mounts.

So many of the things people imagine when they think about augmented reality in particular involve this serendipitous access to information. I’m walking into a conference room, and I see sort of my notes and information about the people I’m meeting there and the materials from our last meeting, whatever it is, or I’m walking down the street, and I see advertising or other kinds of, say, tourism information, but those things only work if the device is out of mind. If I can put it on, and then go about my life, I’m not going to walk into a conference room, and hold up a phone, and look at everybody through it.

So that, I think, is the big difference. You could implement the same sorts of applications on both the handheld devices and the head-worn devices, but the two different form factors are going to make very different applications appropriate for those two sorts of technologies.

On the virtual reality side, we’re at the point now where the displays we can buy are light enough and comfortable enough that we could wear them for half an hour, a couple hours without discomfort. So a lot of the applications that people imagine there, I think the most popular things that people have done research on and that I see having a near-term impact in the enterprise are immersive training applications where you can get into a situation rather than, say, watching a video or a little click-through presentation as part of your annual training. You could really be in an experience and hopefully learn more from it. So I think those sorts of experiences where we’re totally immersed and focused is where virtual reality comes in.

The big thing that I think is most exciting about head-worn displays in particular where we can wear them while we’re doing work as opposed to just having these ephemeral experiences with a phone is the opportunity to do things together, to collaborate. So I might want to look at a map on a table and see a bunch of data floating above the map, but it would be better if you and our other colleagues were around the table with me, and we can all see the same things, or if we want to take a training experience, I could be in there getting my training experience, but maybe someone else is joining me and being able to both offer feedback or guidance and so on.

Essentially, when I think about these technologies, I think about the parallels to how we do work regularly, right? We generally collaborate with people. We might grab a colleague and have them look at our laptop to show them something. I might send someone something on my phone, and then we can talk about it. So much of what we do involves interactions with other people and with the data that we are doing our job with that anything we do with these immersive technologies is really going to have to mimic that and give us the ability to do our real work in these immersive spaces with the people that we normally work with.

Laurel: Well, speaking of working with people, how can the scale of an institution like JPMorgan Chase help propel this research forward in immersive technology, and what opportunities does it provide that are otherwise limited in a traditional university or startup research environment?

Blair: I think it comes down to a few different things. On one hand, we have the access to people who are really doing the things that we want to build technologies to help with. Right? So if I wanted to look at how I could use immersive visualization of data to help people in human resources do planning or help people who are doing financial modeling look at the data in new and interesting ways, now I could actually do the research in conjunction with the real people who do that work. Right? I’ve been at the firm for a little over a year, and in many conversations, either we’ve had an idea or somebody has come to us with an idea. Through the course of the conversations, relatively quickly, we home in on things that are much more sophisticated, much more powerful than what we might have thought of at a university where we didn’t have that sort of direct access to people doing the work.

On the other hand, if we actually build something, we can actually test it with the same people, which is an amazing opportunity. Right? At a university, when I go to a conference, we might put something in front of 20 people; here, we can test with people who actually represent the real users of those systems. So, for me, that’s where the big opportunity of doing research in an enterprise is: building solutions for the real people of that enterprise and being able to test it with those people.

Laurel: Recent years have actually changed what customers and employees expect from enterprises as well, like omnichannel retail experiences. So immersive technologies can be used to bridge gaps between physical and virtual environments as you were saying earlier. What are the different opportunities that AR and VR can offer enterprises, and how can these technologies be used to improve employee and customer experience?

Blair: So I alluded back to some of that in previous answers. I think the biggest opportunities have to do with how employees within the organization can do new things together, can interact, and also how companies can interact with customers. Now, we’re not going to move all of our interactions with our customers into the virtual world, or the metaverse, or whatever you want to call it nowadays anytime soon. Right? But I think there are opportunities for customers who are interested in those technologies, and comfortable with them, and excited by them to get new kinds of experiences and new ways of interacting with our firm or other firms than you could get with webpages and in-person meetings.

The other big opportunity, I think, comes as we move to a more hybrid, distributed work environment. A company like JPMorgan Chase is huge and spread around the world; we have over 300,000 employees in most countries around the world. There might be groups of people in different offices, but right now they're connected together through video. These technologies, I think, can offer new ways of collaborating over distance, both synchronously and asynchronously, beyond what we can get with the traditional work technologies we use right now. So it's those new ways to collaborate, ways of using the environment and space in new and interesting ways, that are going to, hopefully, offer new value and change the way we work.

Laurel: Yeah, and staying on that topic, we can’t really have a discussion about technology without talking about AI which is another evolving, increasingly popular technology. So that’s being used by many enterprises to reduce redundancies and automate repetitive tasks. In this way, how can immersive technology provide value to people in their everyday work with the help of AI?

Blair: So I think the big opportunity that AI brings to immersive technologies is helping ease a lot of the tedium and burden that may have prevented these technologies from being practical in the past, and this could happen in a variety of ways. When I’m in a virtual reality experience, I don’t have access to a keyboard, I don’t have access to traditional input devices, I don’t have necessarily the same sorts of access to my files, and so on. With a lot of the new AI technologies that are coming around, I can start relying on the computer to take notes. I can have new ways of pulling up information that I otherwise wouldn’t have access to. So, I think AI reducing the friction of using these technologies is a huge opportunity, and the research community is actively looking at that because friction has been one of the big problems with these technologies up till now.

Laurel: So, other than AI, what are other emerging technologies that can aid in immersive technology research and development?

Blair: So, aside from AI, if we step back and look at all of the emerging technologies as a whole and how they complement each other, I think we can see new opportunities. So, in our research, we work closely with people doing computer vision and other sort of sensing research to understand the world. We work closely with people looking at internet of things and connected devices because at a 10,000-foot level, all of these technologies are based on the idea of understanding, sensing the world, understanding what people are doing in it, understanding what people’s needs might be, and then somehow providing information to them or actuating things in the world, displaying stuff on walls or displays.

From that viewpoint, immersive technologies are primarily one way of displaying things in a new and interesting way and getting input from people: knowing what people want to do, allowing them to interact with data. But in order to do that, they need to know as much about the world around the user as possible: the structure of it, but also who's there, what we are doing, and so on. So all of these other technologies, especially the Internet of Things (IoT) and other ways of sensing what's happening in the world, are very complementary, and together they can create new sorts of experiences that neither could create alone.

Laurel: So what are some of the challenges, but also, possible opportunities in your research that contrast the future potential of AR and VR to where the technology is today?

Blair: So I think one of the big limitations of technology today is that most of the experiences are very siloed and disconnected from everything else we do. During the pandemic, many of us experimented with how we could have conferences online in various ways, right? A lot of companies, small companies and larger companies, started looking at how you could create immersive meetings and big group experiences using virtual reality technology, but all of those experiences that people created were these closed systems that you couldn’t bring things into. So one of the things we’re really interested in is how we stop thinking about creating new kinds of experiences and new ways of doing things, and instead think about how do we add these technologies to our existing work practices to enhance them in some way.

So, for example: right now, we do video meetings. It would be more interesting for some people to be able to join those meetings, say, in VR. Companies have experimented with that, but most of the experiments assume that everyone is going to move into virtual reality, or that we're going to bring the video participants in as a little video wall on the side of a big virtual reality room, making them second-class citizens.

I’m really interested and my team is interested in how we can start incorporating technologies like this while keeping everyone a first-class participant in these meetings. As one example, a lot of the systems that large enterprises build, and we’re no different, are web-based right now. So if, let’s say, I have a system to do financial forecasting, you could imagine there’s a bunch of those at a bank, and it’s a web-based system, I’m really interested in how do we add the ability for people to go into a virtual reality or augmented reality experience, say, a 3D visualization of some kind of data at the moment they want to do it, do the work that they want to do, invite colleagues in to discuss things, and then go back to the work as it was always done on a desktop web browser.

So that idea of thinking of these technologies as a capability, a feature, instead of a whole new application and way of doing things, permeates all the work we're doing. When I look down the road at where this can go, in, say, two to five years, I see people with displays sitting on their desks. They have their tablet and their phone, and they might also have another display or two sitting there. They're doing their work, and at different times they might be in a video chat, or they might pick up a head mount and put it on to do different things, but it's all integrated. I'm really interested in how we connect these together and reduce friction. Right? If it takes you four or five minutes to move your work into a VR experience, nobody is going to do it because it's just too problematic. So it's that: thinking about how the technologies integrate and how we can add value where there is value, and not trying to replace everything we do with these technologies.

Laurel: So to stay on that future focus, how do you foresee the immersive technology landscape entirely evolving over the next decade, and how will your research enable those changes?

Blair: So, at some level, it's really hard to answer that question. Right? If I think back 10 years to where immersive technologies were, it would have been inconceivable for us to imagine the devices that are coming out now. So, at some level, I can say, "Well, I have no idea where we're going to be in 10 years." On the other hand, it's pretty safe to imagine the kinds of technologies that we're experimenting with now just getting better, and more comfortable, and easier to integrate into work. So I think the landscape is going to evolve in the near term to be more amenable to work.

Especially for augmented reality, the threshold that these devices would have to get to such that a lot of people would be willing to wear them all the time while they’re walking down the street, playing sports, doing whatever, that’s a very high bar because it has to be small, it has to be light, it has to be cheap, it has to have a battery that lasts all day, etcetera, etcetera. On the other hand, in the enterprise, in any business situation, it’s easy to imagine the scenario I described. It’s sitting on my desk, I pick it up, I put it on, I take it off.

In the medium term after that, I think we will see more consumer applications as people solve more of the problems that are preventing people from wearing these devices for longer periods of time. Right? It's not just size, and battery power, and comfort; it's also things like optics. Let's say 10% to 15% of people might experience headaches, or nausea, or other kinds of discomfort when they wear a VR display as they're currently built. Without getting into the nitty-gritty details, a lot of that has to do with the fact that the optics you look through when you put on one of these displays make it hard to comfortably focus on objects at different distances from you. For many of us, that's fine; we can deal with the slight problems. But for some people, it's problematic.

So as we figure out how to solve problems like that, more people can wear them and use them. I think that's a really critical issue not just for consumers, but for the enterprise, because if we think about a future where more of our business applications and more of the way we work are built on technologies like this, these technologies have to be accessible to everybody. Right? If that 10% or 15% of people get headaches and feel nauseated wearing these devices, you've now disenfranchised a pretty significant portion of your workforce. But I think those problems can be solved, and so we need to be thinking about how we can enable everybody to use them.

On the other hand, technologies like this can enfranchise more people, where right now, working remotely, working in a distributed sense is hard. For many kinds of work, it’s difficult to do remotely. If we can figure out more ways of enabling people to work together in a distributed way, we can start enabling more people to participate meaningfully in a wider variety of jobs.

Laurel: Blair, that was fantastic. It’s so interesting. I really appreciate your perspective and sharing it here with us on the Business Lab.

Blair: It was great to be here. I enjoyed talking to you.

Laurel: That was Blair MacIntyre, the global head of Immersive Technology Research at JPMorgan Chase, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review.

That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the global director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Giro Studios. Thanks for listening.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.


This podcast is for informational purposes only and it is not intended as legal, tax, financial, investment, accounting or regulatory advice. Opinions expressed herein are the personal views of the individual(s) and do not represent the views of JPMorgan Chase & Co. The accuracy of any statements, linked resources, reported findings or quotations are not the responsibility of JPMorgan Chase & Co.

Procurement in the age of AI

Procurement professionals face challenges more daunting than ever. Recent years’ supply chain disruptions and rising costs, deeply familiar to consumers, have had an outsize impact on business buying. At the same time, procurement teams are under increasing pressure to supply their businesses while also contributing to business growth and profitability.

Deloitte’s 2023 Global Chief Procurement Officer Survey reveals that procurement teams are now being called upon to address a broader range of enterprise priorities. These range from driving operational efficiency (74% of respondents) and enhancing corporate social responsibility (72%) to improving margins via cost reduction (71%).

To meet these rising expectations, many procurement teams are turning to advanced analytics, AI, and machine learning (ML) to transform the way they make smart business buying decisions and create value for the organization.

New procurement capabilities unlocked by AI

AI and ML tools have long helped procurement teams automate mundane and manual procurement processes, allowing them to focus on more strategic initiatives. But recent advances in natural language processing (NLP), pattern recognition, cognitive analytics, and large language models (LLMs) are “opening up opportunities to make procurement more efficient and effective,” says Julie Scully, director of software development at Amazon Business.

The good news is procurement teams are already well-positioned to capitalize on these technological advances. Their access to rich data sources, ranging from contracts to invoices, enables AI/ML solutions that can illuminate the insights contained within this data. Acting on these insights unlocks new capabilities that can enhance decision-making and improve spending patterns across the organization.

Predicting supply chain disruptions. In an era of constant supply chain disruptions, procurement teams are often faced with inconsistent item availability, which can negatively impact employee and customer experience. Indeed, the Deloitte 2023 Global Chief Procurement Officer survey finds that only 25% of firms are able to identify supply disruptions promptly “to a large extent.”

AI tools can help address this issue by recognizing patterns that indicate an emerging supply shortage and automatically recommending two or three product alternatives to business buyers, thereby preventing supply disruptions. These predictive capabilities also empower procurement teams to establish buying policies that proactively account for items that are more likely to go out of stock.
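To make the idea concrete, here is a minimal, purely illustrative sketch of pattern-based shortage flagging and alternative recommendation. None of this reflects any vendor's actual system; the catalog fields, fulfillment history, 0.8 threshold, and rating-based ranking are all invented for the example:

```python
# Hypothetical sketch: flag items whose recent fulfillment rate is trending
# down, and suggest up to three in-stock alternatives from the same category.

def shortage_risk(fulfillment_history, window=5, threshold=0.8):
    """Return True if the mean fulfillment rate over the last `window`
    orders falls below `threshold` (an invented cutoff)."""
    recent = fulfillment_history[-window:]
    return sum(recent) / len(recent) < threshold

def recommend_alternatives(item, catalog, max_suggestions=3):
    """Suggest in-stock items from the same category, best-rated first."""
    candidates = [
        other for other in catalog
        if other["category"] == item["category"]
        and other["sku"] != item["sku"]
        and other["in_stock"]
    ]
    candidates.sort(key=lambda c: c["rating"], reverse=True)
    return [c["sku"] for c in candidates[:max_suggestions]]

catalog = [
    {"sku": "A1", "category": "paper", "in_stock": False, "rating": 4.5},
    {"sku": "A2", "category": "paper", "in_stock": True, "rating": 4.7},
    {"sku": "A3", "category": "paper", "in_stock": True, "rating": 4.2},
    {"sku": "B1", "category": "ink", "in_stock": True, "rating": 4.9},
]

item = catalog[0]
history = [1, 1, 1, 0, 1, 0, 0]  # 1 = order fulfilled, 0 = not fulfilled
if shortage_risk(history):
    print(recommend_alternatives(item, catalog))  # ['A2', 'A3']
```

A production system would of course learn the shortage signal from many more features (lead times, supplier signals, seasonality) rather than a fixed threshold, but the flag-then-recommend structure is the same.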

Answering pressing questions quickly. Sifting through data to understand the cause of a supply chain disruption, product defect, or other risk is time-consuming for a procurement professional. LLM-powered chatbots can streamline these processes by understanding complex queries about orders and "putting together a nuanced answer," says Scully. "AI can query a wide variety of sources to fully answer a question quickly and in a way that feels natural and understandable." In addition to providing fast and accurate answers to pressing questions, AI promises to eventually eliminate the need to explain procurement issues at all: it will proactively analyze orders, buying patterns, and the current situation to provide instant support.

Offering customized recommendations. As business buyers increasingly demand personalized experiences, procurement officers seek ways to customize their interactions with business procurement systems. Scully provides the example of an employee tasked with hosting a holiday party for 150 employees who needs help deciding what to order. An AI-based procurement tool presented with that scenario, she says, could generate a proposed shopping cart, sifting through "millions and billions of data points to recommend and suggest items that the employee may not have even thought of."

Better yet, she adds, “as we get into really large language models, AI/ML can help answer questions or help buy items you didn’t even know you needed by understanding your particular situation in a much more detailed way.”

Influencing spend compliance. Procurement professionals aim to balance employees' freedom to purchase the items they need with minimal intervention. However, self-sufficiency should not come at the cost of proper spend management, productivity, and policy compliance. "There's always a healthy tension between how a company ensures they have the right controls and oversight but also enables a federated spend model," says Scully. Fortunately, she says, "AI can offer huge value" in alerting procurement teams to any "outliers" before any damage is done.

AI can also help ensure compliance by enforcing spending policies and expectations so that employees “can still confidently buy the right items,” says Scully. This capability can minimize the risk of overspending and also help with companies’ contractual obligations, such as fulfilling a spending commitment to a particular supplier. In the future, an AI-powered anomaly detection trigger might even be used to examine large datasets to identify non-compliant purchases.
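The anomaly-detection idea mentioned above can be sketched very simply. This is an illustrative toy, not any real procurement product: the purchase records and the three-standard-deviation cutoff are invented for the example:

```python
# Hypothetical sketch: flag purchases whose amount is a statistical outlier
# relative to the rest of the spend data (a simple z-score rule).
import statistics

def flag_outliers(purchases, z_cutoff=3.0):
    """Return the IDs of purchases more than `z_cutoff` standard
    deviations from the mean purchase amount."""
    amounts = [p["amount"] for p in purchases]
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [p["id"] for p in purchases
            if stdev > 0 and abs(p["amount"] - mean) / stdev > z_cutoff]

purchases = [{"id": i, "amount": 100 + i} for i in range(20)]
purchases.append({"id": 99, "amount": 5000})  # an unusually large order
print(flag_outliers(purchases))  # [99]
```

In practice an AI-powered trigger would segment by category, supplier, and policy before scoring, but the principle of surfacing outliers for human review before damage is done is the same.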

Increasing spending visibility. AI and analytics tools can provide greater transparency into overall procurement spending by automatically analyzing data and unlocking timely analysis. These data-driven insights provide procurement officers a comprehensive view of where they’re allocating budget and areas where they might be able to cut costs.

But greater transparency into procurement spend can also empower organizations to respond to emerging business priorities, such as adopting more socially responsible purchasing practices. “Companies want to prioritize locally owned businesses or businesses that prioritize a lower carbon footprint,” says Scully. With greater visibility into their procurement patterns, organizations can direct business buyers to climate-friendly products or suppliers that help meet their environmental, social, and governance goals.

Driving procurement productivity. Monitoring supplier performance, ensuring spend compliance, and identifying supply chain disruptions—these are all time-consuming activities that distract procurement professionals from more business-critical objectives. “If the procurement team is bogged down in day-to-day processes, they can’t be thinking about their overall strategic goals for the company, if they’re able to deliver them, and where they might want to provide optimizations,” says Scully. By automating labor-intensive processes such as spend analysis, product selection, and tracking down orders, advanced procurement tools can free procurement teams to focus on value-added activities.

Best practices for AI-powered procurement

For all the advantages of advanced analytics and AI/ML solutions, procurement teams must take steps to ensure the best use of these innovative tools. AI models are only as relevant as the training data they ingest. For this reason, Scully says, organizations need “to be aware that a model may sometimes have blind spots or not immediately recognize if the business is beginning a change in strategic focus.” As an organization’s priorities evolve, the model training data must keep pace to reflect new business goals and circumstances.

To get the most from its advanced technology tools, procurement teams should ensure that they support the company’s overall procurement goals and business strategy. These goals may range from working with a more diverse supplier base to purchasing more sustainable goods. Whatever the desired end, the procurement function must link its use of new AI-powered tools to achieving its business goals and regularly evaluate the results.

The new procurement capabilities unlocked by advanced analytics and AI/ML can help businesses rethink how procurement gets done. As generative AI and related technologies advance, sophisticated procurement use cases are likely to multiply, offering substantial financial and operational gains to procurement teams.


Learn how Amazon Business is leveraging AI/ML to offer procurement professionals more efficient processes, a greater understanding of smart business buying habits and, ultimately, reduced prices.

Finding value in generative AI for financial services

With tools such as ChatGPT, DALL-E 2, and CodeStarter, generative AI has captured the public imagination in 2023. Unlike past technologies that have come and gone—think metaverse—this latest one looks set to stay. OpenAI’s chatbot, ChatGPT, is perhaps the best-known generative AI tool. It reached 100 million monthly active users just two months after launch, surpassing even TikTok and Instagram in adoption speed and becoming the fastest-growing consumer application in history.

According to a McKinsey report, generative AI could add $2.6 trillion to $4.4 trillion annually in value to the global economy. The banking industry was highlighted as among sectors that could see the biggest impact (as a percentage of their revenues) from generative AI. The technology “could deliver value equal to an additional $200 billion to $340 billion annually if the use cases were fully implemented,” says the report. 

For businesses from every sector, the current challenge is to separate the hype that accompanies any new technology from the real and lasting value it may bring. This is a pressing issue for firms in financial services. The industry’s already extensive—and growing—use of digital tools makes it particularly likely to be affected by technology advances. This MIT Technology Review Insights report examines the early impact of generative AI within the financial sector, where it is starting to be applied, and the barriers that need to be overcome in the long run for its successful deployment. 

The main findings of this report are as follows:

  • Corporate deployment of generative AI in financial services is still largely nascent. The most active use cases revolve around cutting costs by freeing employees from low-value, repetitive work. Companies have begun deploying generative AI tools to automate time-consuming, tedious jobs, which previously required humans to assess unstructured information.
  • There is extensive experimentation on potentially more disruptive tools, but signs of commercial deployment remain rare. Academics and banks are examining how generative AI could help in impactful areas including asset selection, improved simulations, and better understanding of asset correlation and tail risk—the probability that the asset performs far below or far above its average past performance. So far, however, a range of practical and regulatory challenges are impeding their commercial use.
  • Legacy technology and talent shortages may slow adoption of generative AI tools, but only temporarily. Many financial services companies, especially large banks and insurers, still have substantial, aging information technology and data structures, potentially unfit for the use of modern applications. In recent years, however, the problem has eased with widespread digitalization and may continue to do so. As is the case with any new technology, talent with expertise specifically in generative AI is in short supply across the economy. For now, financial services companies appear to be training staff rather than bidding to recruit from a sparse specialist pool. That said, the difficulty in finding AI talent is already starting to ebb, a process that would mirror those seen with the rise of cloud and other new technologies.
  • More difficult to overcome may be weaknesses in the technology itself and regulatory hurdles to its rollout for certain tasks. General, off-the-shelf tools are unlikely to adequately perform complex, specific tasks, such as portfolio analysis and selection. Companies will need to train their own models, a process that will require substantial time and investment. Once such software is complete, its output may be problematic. The risks of bias and lack of accountability in AI are well known. Finding ways to validate complex output from generative AI has yet to see success. Authorities acknowledge that they need to study the implications of generative AI more, and historically they have rarely approved tools before rollout.
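The tail-risk notion defined in the findings above can be made concrete with a small empirical calculation. The return series and the two-standard-deviation cutoff here are invented purely for illustration:

```python
# Hypothetical sketch: estimate an asset's empirical tail risk as the share
# of past returns lying more than k standard deviations from the mean.
import statistics

def empirical_tail_risk(returns, k=2.0):
    """Fraction of observations in the tails, i.e. far below or far
    above the average past performance."""
    mean = statistics.mean(returns)
    stdev = statistics.stdev(returns)
    tail = [r for r in returns if abs(r - mean) > k * stdev]
    return len(tail) / len(returns)

# 98 quiet days of 0.1% returns, plus two extreme moves
returns = [0.001] * 98 + [-0.15, 0.12]
print(empirical_tail_risk(returns))  # 0.02
```

Real tail-risk models fit heavy-tailed distributions rather than counting empirical exceedances, which is precisely the kind of simulation work that generative AI is being studied to improve.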

Download the full report.


2023 Global Cloud Ecosystem

The cloud, fundamentally a tool for cost and resource efficiency, has long enabled companies and countries to organize around digital-first principles. It is an established capability that improves the bottom line for enterprises. However, maturity lags, and global standards are sorely needed.

Cloud capabilities play a crucial role in accelerating the next stage of the global economy’s digital transformation. Results from our 2023 Global Cloud Ecosystem survey of executives indicate there are two stages of cloud maturity globally: one where firms adopt cloud to achieve essential opex and capex cost reductions, and a second where firms link cloud investments to positive business value. Respondents indicate the two are converging quickly.

The key findings are as follows:

  • Cloud helps the top and bottom line globally. Cloud computing infrastructure investment will account for more than 60% of all IT infrastructure spend worldwide in 2023, according to analyst firm IDC, as flexible cloud resources continue to define efficiency and productivity for technology decision-makers. More than eight out of 10 survey respondents report greater cost efficiency due to cloud deployments. While establishing a link between cloud capabilities and top-line profitability is challenging, 82% say they are currently tracking cloud ROI, and 66% report positive ROI from cloud investments.
  • Cloud-centric organizations expect strong data governance (but don’t always get it). Strong data privacy protection and governance is essential to accelerate cloud adoption. Perceptions of national data sovereignty and privacy frameworks vary, underscoring the lack of global standards. Most respondents decline to say their countries are leaders in the space, but more than two-thirds say they keep pace.
  • All in for zero-trust. Public and hybrid cloud assets raise cybersecurity concerns. But cloud is required to grow AI and automation, which help secure digital assets with data cataloging, access, and visibility. Because of the risk associated with AI, the broad surface of the data it draws on, and the way AI generates change, the zero-trust user paradigm has gained wide acceptance across industries. Some 86% of the survey respondents use zero-trust architecture. However, one-third do not routinely identify and classify cloud assets.
  • Sustainability in the cloud. The cloud’s primary function—scaling up computing resources—is a key enabler that mitigates compliance issues such as security; privacy; and environmental, social, and governance (ESG) concerns. More than half (54%) of respondents say they use cloud tools for ESG reporting and compliance, and a large number (51%) use cloud to enhance diversity, equity, and inclusion (DEI) compliance.

Download the full report.


Customer experience horizons

Customer experience (CX) is a leading driver of brand loyalty and organizational performance. According to NTT’s State of CX 2023 report, 92% of CEOs believe improvements in CX directly impact productivity and customer brand advocacy. They also recognize that the quality of their employee experience (EX) is critical to success. The real potential for transforming business, according to 95% of CEOs, is bringing customer and employee experience improvements together into one end-to-end strategy. This, they anticipate, will deliver revenue growth, business agility, and resilience.

To succeed, organizations need to reimagine what’s possible with customer and employee experience and understand horizon trends that will affect their business. This MIT Technology Review Insights report explores the strategies and technologies that will transform customer experience and contact center employee experience in the years ahead. It is based on nearly two dozen interviews with customer experience leaders, conducted between December 2022 and April 2023. The interviews explored the future of customer experience and employee experience and the role of the contact center as a strategic driver of business value.

The main findings of this report are as follows:

  • Richly contextualized experiences will create mutual value for customers and brands. Organizations will grow long-term loyalty by intelligently using customer data to contextualize every interaction. They’ll gather data that serves a meaningful purpose past the point of sale, and then use that information to deliver future experiences that are more personalized than any competitor could provide. The value of data sharing will be evident to the customer, building trust and securing the relationship between individual and brand.
  • Brands will view every touchpoint as a relationship-building opportunity. Rather than view customer interactions as queries to be resolved as quickly and efficiently as possible, brands will increasingly view every touchpoint as an opportunity to deepen the relationship and grow lifetime value. Organizations will proactively share knowledge and anticipate customer issues; they’ll become trusted advisors and advocate on behalf of the customer. Both digital and human engagement will be critical to building loyal ongoing relationships.
  • AI will create a predictive “world without questions.” In the future, brands will have to fulfill customer needs preemptively, using contextual and real-time data to reduce or eliminate the need to ask repetitive questions. Surveys will also become less relevant, as sentiment analysis and generative AI provide deep insights into the quality of customer experiences and areas for improvement. Leading organizations will develop robust AI roadmaps that include conversational, generative, and predictive AI across both the customer and employee experience.
  • Work becomes personalized. Brands will recognize that humans have the same needs, whether as a customer or an employee. Those include being known, understood, and helped—in other words, treated with empathy. One size does not fit all, and leading organizations will empower employees to work in a way that meets their personal and professional objectives. Employees will have control over their hours and schedule; be routed interactions where they are best able to succeed; and receive personalized training and coaching recommendations. Their knowledge, experiences, and interests will benefit customers as they resolve complex issues, influence purchase decisions, or discuss shared values such as sustainability. This will increase engagement, reduce attrition, and manage costs.
  • The contact center will be a hub for customer advocacy and engagement. Offering the richest sources of real-time customer data, the contact center becomes an organization’s eyes and ears to provide a single source of truth for customer insights. Having a complete perspective of experience across the entire journey, the contact center will increasingly advocate for the customer across the enterprise. For many organizations, the contact center is already an innovation test bed. This trend will accelerate, as technologies like generative AI rapidly find application across a variety of use cases to transform productivity and strategic decision-making.

Download the full report.


Bridging the expectation-reality gap in machine learning

Machine learning (ML) is now mission critical in every industry. Business leaders are urging their technical teams to accelerate ML adoption across the enterprise to fuel innovation and long-term growth. But there is a disconnect between business leaders’ expectations for wide-scale ML deployment and the reality of what engineers and data scientists can actually build and deliver on time and at scale.

In a Forrester study launched today and commissioned by Capital One, the majority of business leaders expressed excitement at deploying ML across the enterprise, but data scientist team members said they didn’t yet have all the necessary tools to develop ML solutions at scale. Business leaders would love to leverage ML as a plug-and-play opportunity: “just input data into a black box and valuable learnings emerge.” The engineers who wrangle company data to build ML models know it’s far more complex than that. Data may be unstructured or poor quality, and there are compliance, regulatory, and security parameters to meet.

There is no quick fix for closing this expectation-reality gap, but the first step is to foster honest dialogue between teams. Then, business leaders can begin to democratize ML across the organization. Democratization means both technical and non-technical teams have access to powerful ML tools and are supported with continuous learning and training. Non-technical teams get user-friendly data visualization tools to improve their business decision-making, while data scientists get access to the robust development platforms and cloud infrastructure they need to efficiently build ML applications. At Capital One, we’ve used these democratization strategies to scale ML across our entire company of more than 50,000 associates.

When everyone has a stake in using ML to help the company succeed, the disconnect between business and technical teams fades. So what can companies do to begin democratizing ML? Here are several best practices to bring the power of ML to everyone in the organization.

Enable your creators

The best engineers today aren’t just technical whizzes, but also creative thinkers and vital partners to product specialists and designers. To foster greater collaboration, companies should provide opportunities for tech, product, and design to work together toward shared goals. According to the Forrester study, because ML use can be siloed, focusing on collaboration can be a key cultural component of success. It will also ensure that products are built from a business, human, and technical perspective. 

Leaders should also ask engineers and data scientists what tools they need to succeed and to accelerate delivery of ML solutions to the business. According to Forrester, 67% of respondents agree that a lack of easy-to-use tools is slowing down cross-enterprise adoption of ML. These tools should be compatible with an underlying tech infrastructure that supports ML engineering. Don’t make your developers live in a “hurry up and wait” world where they develop an ML model in the sandbox staging area, but then must wait to deploy it because they don’t have the compute and infrastructure to put the model into production. A robust cloud-native multitenant infrastructure that supports ML training environments is critical.

Empower your employees

Putting the power of ML into the hands of every employee, whether they’re a marketing associate or business analyst, can turn any company into a data-driven organization. Companies can start by granting employees governed access to data. Then, offer teams no-code/low-code tools to analyze data for business decision-making. It goes without saying that these tools should be developed with human-centered design, so they are easy to use. Ideally, a business analyst could upload a data set, apply ML functionality through a clickable interface, and quickly generate actionable outputs.

Many employees are eager to learn more about technology. Leaders should provide teams across the enterprise with many ways to learn new skills. At Capital One, we have found success with multiple technical upskilling programs, including our Tech College that offers courses in seven technology disciplines that align to our business imperatives; our Machine Learning Engineering Program that teaches the skills necessary to jumpstart a career in ML and AI; and the Capital One Developer Academy for recent college graduates with non-computer science degrees preparing for careers in software engineering. In the Forrester study, 64% of respondents agreed that lack of training was slowing the adoption of ML in their organizations. Thankfully, upskilling is something every company can offer by encouraging seasoned associates to mentor younger talent.

Measure and celebrate success

Democratizing ML is a powerful way to spread data-driven decision-making throughout the organization. But don’t forget to measure the success of democratization initiatives and continually improve areas that need work. To quantify the success of ML democratization, leaders can analyze which data-driven decisions made through the platforms delivered measurable business results, such as new customers or additional revenue. For example, at Capital One, we have measured the amount of money customers have saved with card fraud defense enabled by our ML innovations around anomaly and change point detection.
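As a generic illustration of the change point detection mentioned above (a minimal sketch, not Capital One’s actual fraud system; the thresholds, drift parameter, and spend data are all hypothetical), a simple two-sided CUSUM can flag where a series of transaction totals shifts to a new level:

```python
def cusum_change_points(values, threshold=5.0, drift=0.5):
    """Flag level shifts in a series with a simple two-sided CUSUM.

    Illustrative only; production fraud models are far richer.
    """
    mean = values[0]          # running baseline for the current regime
    pos, neg = 0.0, 0.0       # cumulative sums for upward/downward shifts
    change_points = []
    for i, x in enumerate(values[1:], start=1):
        pos = max(0.0, pos + x - mean - drift)
        neg = max(0.0, neg - x + mean - drift)
        if pos > threshold or neg > threshold:
            change_points.append(i)  # shift detected at index i
            mean = x                 # re-anchor the baseline on the new level
            pos, neg = 0.0, 0.0
    return change_points

# Hypothetical daily card-spend totals that jump from ~10 to ~30 at index 5
spend = [10, 11, 9, 10, 10, 30, 31, 29, 30, 31]
print(cusum_change_points(spend))  # → [5]
```

The drift term makes the detector ignore small fluctuations around the baseline, so only a sustained shift accumulates enough evidence to cross the threshold.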

The success of any ML democratization program is built on collaborative teamwork and measurable accountability. Business users of ML tools can provide feedback to technical teams on what functionality would help them do their jobs better. Technical teams can share the challenges they face in building future product iterations and ask for training and tools to help them succeed.

When business leaders and technical teams coalesce around a unified, human-centered vision for ML, end customers ultimately benefit. A company can translate data-driven learnings into better products and services that delight its customers. Deploying a few best practices to democratize ML across the enterprise will go a long way toward building a future-forward organization that innovates with powerful data insights.

This content was produced by Capital One. It was not written by MIT Technology Review’s editorial staff.


AI gains momentum in core manufacturing services functions

When considering the potential for AI systems to change manufacturing, Ritu Jyoti, global AI research lead at market-intelligence firm IDC, points to windmill manufacturers.

Before AI, she says, the company improved its windmills by analyzing data from observing a functioning prototype, a process that took weeks. Now, the manufacturer has dramatically shortened the process with a digital twin, a digital model of the operational windmill, using machine learning (ML) and AI to create and simulate improvements.

“Sometimes it was impossible and physically challenging for them to even go and get all the measurements, so they used drones and AI technologies to generate a digital twin,” Jyoti says. This manufacturer now sees this AI/ML technology as essential. “Because if they’re not doing it, they’re not going to be relevant,” she says.

Disruption in manufacturing and the supply chain has pushed businesses toward digital transformation as they seek ways to stay competitive. For manufacturers, these disruptions—along with the advent of AI—present opportunities to make manufacturing more efficient, safer, and sustainable.

Companies can use AI to streamline processes and fight downtime, adopt robotics that promote safety and speed, allow AI to detect anomalies quickly through computer vision, and develop AI systems to process vast volumes of data to identify patterns and predict customer needs.

“In manufacturing, the biggest benefits come when people from the business are able to work together with data experts, using data and AI to get insights, ultimately taking actions to improve their processes,” says Pierre Goutorbe, AI solutions director for energy and manufacturing at Dataiku. “The more workers get familiar with AI and use it on a daily basis, the more we’ll see the benefit from it,” he says.

Speeding up the adoption of AI

Between supply-chain disruptions and worker shortages, the manufacturing sector has been innovating to stay ahead in the global marketplace. However, a June 2023 study by Dataiku and Databricks found manufacturing lags behind other industries: about a quarter (24%) of manufacturing companies are still at the exploration or experimentation stage of AI adoption, compared with only about one-fifth (19%) of companies across all other industries.


This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.