Customer experience horizons

Customer experience (CX) is a leading driver of brand loyalty and organizational performance. According to NTT’s State of CX 2023 report, 92% of CEOs believe that improvements in CX directly affect their productivity and customer brand advocacy. They also recognize that the quality of their employee experience (EX) is critical to success. The real potential for transforming business, according to 95% of CEOs, is bringing customer and employee experience improvements together into one end-to-end strategy. This, they anticipate, will deliver revenue growth, business agility, and resilience.

To succeed, organizations need to reimagine what’s possible with customer and employee experience and understand horizon trends that will affect their business. This MIT Technology Review Insights report explores the strategies and technologies that will transform customer experience and contact center employee experience in the years ahead. It is based on nearly two dozen interviews with customer experience leaders, conducted between December 2022 and April 2023. The interviews explored the future of customer experience and employee experience and the role of the contact center as a strategic driver of business value.

The main findings of this report are as follows:

  • Richly contextualized experiences will create mutual value for customers and brands. Organizations will grow long-term loyalty by intelligently using customer data to contextualize every interaction. They’ll gather data that serves a meaningful purpose past the point of sale, and then use that information to deliver future experiences that are more personalized than any competitor could provide. The value of data sharing will be evident to the customer, building trust and securing the relationship between individual and brand.
  • Brands will view every touchpoint as a relationship-building opportunity. Rather than view customer interactions as queries to be resolved as quickly and efficiently as possible, brands will increasingly view every touchpoint as an opportunity to deepen the relationship and grow lifetime value. Organizations will proactively share knowledge and anticipate customer issues; they’ll become trusted advisors and advocate on behalf of the customer. Both digital and human engagement will be critical to building loyal ongoing relationships.
  • AI will create a predictive “world without questions.” In the future, brands will have to fulfill customer needs preemptively, using contextual and real-time data to reduce or eliminate the need to ask repetitive questions. Surveys will also become less relevant, as sentiment analysis and generative AI provide deep insights into the quality of customer experiences and areas for improvement. Leading organizations will develop robust AI roadmaps that include conversational, generative, and predictive AI across both the customer and employee experience.
  • Work will become personalized. Brands will recognize that humans have the same needs, whether as customers or employees. Those include being known, understood, and helped: in other words, treated with empathy. One size does not fit all, and leading organizations will empower employees to work in a way that meets their personal and professional objectives. Employees will have control over their hours and schedule; be routed interactions where they are best able to succeed; and receive personalized training and coaching recommendations. Their knowledge, experiences, and interests will benefit customers as they resolve complex issues, influence purchase decisions, or discuss shared values such as sustainability. This will increase engagement, reduce attrition, and manage costs.
  • The contact center will be a hub for customer advocacy and engagement. Offering the richest sources of real-time customer data, the contact center becomes an organization’s eyes and ears to provide a single source of truth for customer insights. Having a complete perspective of experience across the entire journey, the contact center will increasingly advocate for the customer across the enterprise. For many organizations, the contact center is already an innovation test bed. This trend will accelerate, as technologies like generative AI rapidly find application across a variety of use cases to transform productivity and strategic decision-making.


This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Enabling enterprise growth with data intelligence

Data, and how it’s stored and managed, has become a key competitive differentiator. As global data continues to grow exponentially, organizations face many hurdles, from piling up historical data and real-time data streams from IoT sensors to building data-driven supply chains. Bharti Patel, senior vice president of product engineering at Hitachi Vantara, sees these challenges as an opportunity to create a better data strategy.

“Before enterprises can become data-driven, they must first become data intelligent,” says Patel. “That means knowing more about the data you have, whether you need to keep it or not, or where it should reside to derive the most value out of it.”

Patel stresses that the data journey begins with data planning that includes all stakeholders, from CIOs and CTOs to business users. Universal data intelligence, as Patel describes it, means enterprises can gain better insights from data streams and meet increasing demands for transparency by offering seamless access to data and insights no matter where the data resides.

Building this intelligence means building a data infrastructure that is scalable, secure, cost-effective, and socially responsible. The public cloud is often lauded as a way for enterprises to innovate with agility at scale, while on-premises infrastructures are viewed as less accessible and user-friendly. But while data streams continue to grow, IT budgets are not, and Patel notes that many organizations that use the cloud are facing cost challenges. Combating this, says Patel, means combining the best of on-prem and cloud environments in private data centers to keep costs low and insights flowing.

Looking ahead, Patel foresees a future of total automation. Today, data resides in many places, from the minds of experts to documentation to IT support tickets, making it impossible for one person to analyze all of it and glean meaningful insights.

“As we go into the future, we’ll see more manual operations converted into automated operations,” says Patel. “First, we’ll see humans in the loop, and eventually we’ll see a trend towards fully autonomous data centers.”

This episode of Business Lab is produced in partnership with Hitachi Vantara.

Full transcript

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

Our topic today is building better data infrastructures. Doing just the basics with data can be difficult, but when it comes to scaling and adopting emerging technologies, it’s crucial to organize data, tear down data silos, and focus on how data infrastructure, which is so often in the background, comes to the front of your data strategy.

Two words for you: data intelligence.

My guest is Bharti Patel. Bharti is a senior vice president of product engineering at Hitachi Vantara.

This episode of Business Lab is sponsored by Hitachi Vantara.

Welcome, Bharti.

Bharti Patel: Hey, thank you Laurel. Nice to be with you again.

Laurel: So let’s start off with kind of giving some context to this discussion. As global data continues to grow exponentially, according to IDC, it’s projected to double between 2022 and 2026. Enterprises face many hurdles to becoming data-driven. These hurdles include, but aren’t of course limited to, piles of historical data, new real-time data streams, and supply chains becoming more data-driven. How should enterprises be evaluating their data strategies? And what are the markers of a strong data infrastructure?

Bharti: Yeah, Laurel, I can’t agree more with you here. Data is growing exponentially, and as per one of the studies that we conducted recently, where we talked to about 1,200 CIOs and CTOs from about 12 countries, we have more proof that data is almost going to double every two to three years. And I think what’s more interesting here is that data is going to grow, but budgets are not going to grow in the same proportion. So instead of worrying about it, I want to tackle this problem differently. I want to look at how we convert this challenge into an opportunity by deriving value out of this data. So let’s talk a little more about this in the context of what’s happening in the industry today.

I’m sure everyone by now has heard about generative AI and why generative AI or gen AI is a buzzword. AI has been there in the industry forever. However, what has changed recently is ChatGPT has exposed the power of AI to common people right from school going kids to grandparents by providing a very simple natural language interface. And just to talk a little bit more about ChatGPT, it is the fastest growing app in the industry. It touched 100 million users in just about two months. And what has changed because of this very fast adoption is that this has got businesses interested in it. Everyone wants to see how to unleash the power of generative AI. In fact, according to McKinsey, they’re saying it’s like it’s going to add about $2.6 trillion to $4.4 trillion to the global economy. That means we are talking about big numbers here, but everyone’s talking about ChatGPT, but what is the science behind it? The science behind it is the large language models.

And if you think of these large language models, they are AI models with billions or even trillions of parameters, and they are the science behind ChatGPT. However, to get the most out of these large language models, or LLMs, they need to be fine-tuned, because if you’re just relying on public data, you’re not always getting the information you want, and it’s not always correct. And of course there is a risk of people feeding bad data associated with it. So how do you make the most of it? And here actually comes your private data sets. So your proprietary data sets are very, very important here. And if you use this private data to fine-tune your models, I have no doubt in mind that it will create differentiation for you in the long run to remain competitive.

So I think even with this, we’re just scratching the surface here when it comes to gen AI. And what more needs to be thought about for enterprise adoption is all the features that are needed like explainability, traceability, quality, trustworthiness, reliability. So if you again look at all these parameters, actually data is again the centerpiece of everything here. And you have to harness this private data, you have to curate it, and you have to create the data sets that will give you the maximum return on investment. Now, before enterprises can become data-driven, I think they must first become data intelligent.

And that means knowing more about the data you have, whether you need to keep it or not, or where it should reside to derive the most value out of it. And as I talk to more and more CIOs and CTOs, it is very evident that there’s a lot of data out there and we need to find a way to fix the problem. Because that data may or may not be useful, but you are storing it, you are keeping it, and you are spending money on it. So that is definitely a problem that needs to be solved. Then back to your question of, what is the right infrastructure, what are some of the parameters of it? So in my mind, it needs to be nimble, it needs to be scalable, trusted, secured, cost-effective, and finally socially responsible.
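As a rough illustration of the private-data fine-tuning Patel describes, the sketch below uses the open-source Hugging Face transformers, datasets, and peft libraries to adapt a small open model with LoRA adapters. The base model, the "private_docs.jsonl" file, and the hyperparameters are placeholder assumptions for the example, not Hitachi Vantara's actual setup.

```python
# A minimal sketch (not Hitachi Vantara's implementation) of fine-tuning a small
# open LLM on proprietary text with LoRA adapters, as described above.
# Model name, file path, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments,
                          DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

base = "EleutherAI/pythia-410m"            # small base model, chosen for illustration
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Wrap the base model with low-rank adapters so only a tiny fraction of weights train.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# "private_docs.jsonl" stands in for curated proprietary data ({"text": ...} records).
data = load_dataset("json", data_files="private_docs.jsonl")["train"]
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments("finetuned-llm", per_device_train_batch_size=2,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```

In practice, the curation, evaluation, explainability, and trustworthiness concerns Patel raises would sit around a loop like this one.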

Laurel: That certainly gives us a lot of perspective, Bharti. So customers are demanding more access to data and enterprises also need to get better insights from the streams of data that they’re accumulating. So could you describe what universal data intelligence is, and then how it relates to data infrastructure?

Bharti: Universal data intelligence is the ability for businesses to offer seamless access to data and insights irrespective of where they reside. So basically we are talking about getting full insights into your data in a hybrid environment. Also, on the same lines, we talk about our approach to infrastructure, which is a distributed approach. And what I mean by distributed is that you do as little data movement as possible, because moving data from one place to another is expensive. So what we are doing here at Hitachi Vantara is designing systems. Think of it as an elastic fabric that ties it all together, and we are able to get insights from the data no matter where it resides in a very, very timely manner. And this data could be in any format, from structured to unstructured, and from block to file to objects.

And just to give you an example of the same, recently we worked with the Arizona Department of Water Resources to simplify their data management strategy. They have data coming from more than 300,000 water resources, which means we are talking about huge data sets here. And what we did for them was design an intelligent data discovery and automation tool. In fact, we completed the data discovery, the metadata cataloging, and the platform migration in just two weeks with minimal downtime. And we are hearing all the time from them that they are really happy with it, and they’re now able to understand, integrate, and analyze the data sets to meet the needs of their water users, their planners, and their decision makers.

Laurel: So that’s a great example. So data and how it’s stored and managed is clearly a competitive differentiator as well. But although the amount of data is increasing, many budgets, as you mentioned, particularly IT budgets are not. So how can organizations navigate building a data infrastructure that’s effective and cost-efficient? And then do you have another example of how to do more with less?

Bharti: Yeah, I think that’s a great question. And this goes back to having data intelligence as the first step to becoming data-driven and reaping the full benefits of the data. So I think it goes back to you needing to know what exists and why it exists. And all of it should be available at the fingertips of the decision makers and the people who are working on the data. Just to give an example, suppose you have data that you’re retaining only for legal purposes, and the likelihood of it being used is extremely, extremely low. There’s no point in storing that data on an expensive storage device. It makes sense to transfer that data to low-cost object storage.

And at the same time, you might have data that you need to access all the time. Speed is important, low latency is important, and that kind of data needs to reside on fast NVMe storage. In fact, many of our customers do this all the time, across all sectors. What they do is constantly transfer data, through policies, from our highly efficient file systems to object storage, and they still retain pointers in the file system so they’re able to access the data back in case they need it.
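A minimal sketch of the policy-based tiering described here, assuming an S3-compatible object store accessed with boto3: cold files move to low-cost object storage and a small stub file is left behind as the pointer. The bucket, the 90-day threshold, and the stub format are illustrative assumptions, not a Hitachi Vantara API.

```python
# Illustrative sketch of policy-based tiering: files that haven't been read
# recently move to low-cost object storage, and a small pointer (stub) stays
# behind so they can be recalled on demand.
import json, os, time
import boto3

COLD_AFTER_DAYS = 90
s3 = boto3.client("s3")

def tier_to_object_storage(directory: str, bucket: str) -> None:
    cutoff = time.time() - COLD_AFTER_DAYS * 86400
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if not os.path.isfile(path) or path.endswith(".stub"):
            continue
        if os.path.getatime(path) > cutoff:
            continue                        # still "hot": leave it on fast storage
        key = f"archive/{name}"
        s3.upload_file(path, bucket, key)   # move cold data to cheap object storage
        with open(path + ".stub", "w") as stub:   # leave a pointer for recall
            json.dump({"bucket": bucket, "key": key}, stub)
        os.remove(path)

def recall(stub_path: str) -> str:
    """Bring a tiered file back when an application needs it again."""
    with open(stub_path) as f:
        ref = json.load(f)
    original = stub_path[:-len(".stub")]
    s3.download_file(ref["bucket"], ref["key"], original)
    return original
```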

Laurel: So the public cloud is often cited as a way for enterprises to scale, be more agile, and innovate while by contrast, legacy on-premises infrastructures are seen as less user-friendly and accessible. How accurate is this conception and how should enterprises approach data modernization and management of that data?

Bharti: Yeah, I’ve got to admit here that the public cloud and the hyperscalers have raised the bar in terms of what is possible when it comes to innovation. However, we are also seeing and hearing from our customers that the cost is a concern there. And in fact, many of our customers, they move to cloud very fast and now they’re facing the cost challenge. When their CIOs see the bills going exponentially up, they’re asking like, “Hey, well how could we keep it flat?” That’s where I think we see a big opportunity, how to provide the same experience that cloud provides in a private data center so that when customers are talking about partition of the data, we have something equivalent to offer.

And here again, I have got to say that we want to address it in a slightly different manner. We want to address it so that customers are able to take full advantage of the elasticity of the cloud, and also take full advantage of on-prem environments. And we want to do it in an almost seamless manner: they can manage the data from their private data centers to the cloud and get the best of both worlds.

Laurel: An interesting perspective there, but this also kind of requires different elements of the business to come in. So from a leadership perspective, what are some best practices that you’ve instituted or recommended to make that transition to better data management?

Bharti: Yeah, I would say the data journey starts with data planning, which should not be done in a siloed manner. And getting it right from the onset is extremely, extremely important. What you need to do here is, at the beginning of your data planning, get all the stakeholders together, whether it’s your CIO, your business users, or your CTOs. This strategy should never be done in a siloed manner. And in fact, I do want to highlight another aspect, which people probably don’t think about very much: how do you even bring your partners into the mix? In fact, I do have an example here. Prior to joining Hitachi Vantara, I was the CTO of an air purifier company. And as we were defining our data strategy, we were looking at our Salesforce data, we were looking at data in NetSuite, we were looking at the customer tickets, and we were doing all this to see how we could drive marketing campaigns.

And as I was looking at this data, I felt that something was totally missing. What was missing was the weather data, which is not our data; it was third-party data. For us to design effective marketing campaigns, it was very important to have insights into this weather data, for example, whether there are allergies in a particular region or wildfires in a particular region. That data was so important. So having a strategy where you are able to bring all stakeholders and all parts of the data together, and think about it from the beginning, is the right way to get started.

Laurel: And with big hairy problems and goals, there’s also this consideration that data centers contribute to an enterprise’s carbon emissions. Thinking about partnerships and modernizing data management and everything we’ve talked about so far, how can enterprises meet sustainability goals while also modernizing their data infrastructure to accommodate all of their historical and real-time data, especially when it comes from, as you mentioned, so many different sources?

Bharti: Yeah, I’m glad that you are bringing up this point because it’s very important not to ignore this. In fact, with all the gen AI work we are talking about, fine-tuning a single model can generate up to five times the lifetime carbon emissions of a passenger car. So we’re talking about a huge environmental effect here. And this particular topic is extremely important to Hitachi. In fact, our goal is to go carbon-neutral in our operations by 2030 and across our value chain by 2050. We are addressing this problem both on the hardware side and on the software side. Right from the onset, as we design our hardware, we look at the end-to-end components to see what kind of carbon footprint they create and how we could minimize it. And once our hardware is ready, it needs to pass through a very stringent set of energy certifications. So that’s on the hardware side.

Now, on the software side, I have just started an initiative where we are looking at how we can move to modern languages that are likely to create a smaller carbon footprint. This is where we are looking at how we can replace our existing Java [code base] with Rust, wherever it makes sense. And again, this is a big problem we all need to think about; it cannot be solved overnight, but we have to keep thinking about it.

Laurel: Well, those certainly are impressive goals. How can emerging technologies like generative AI, as you were saying before, help push an organization into a next generation of data infrastructure systems, and also help differentiate it from competitors?

Bharti: Yeah, I want to take a two-pronged approach here. First is what I call table stakes. If you don’t do it, you’ll be completely wiped out. These are simple things: how you automate certain things, how you create a better customer experience. But in my mind, that’s not enough. You’ve got to think about what kind of disruptions you will create for yourself and for your customers. So a couple of ideas that we are working on here are companions, or copilots. Think of them as AI agents in the data center, and these agents help the data center environment move from being reactive to being proactive.

So basically these agents are running in your data center all the time, and they’re watching whether there is a new patch available and whether you should update to it, or maybe there’s a new white paper that has better insights for managing some of your resources. These agents are constantly acting in your data center. They are aware of what’s going on on the internet, based on how you have designed them, and they’re able to provide you with creative solutions. And I think that’s going to be the disruption here, and that’s something we are working on.

Laurel: So looking to the future, what tools, technologies, or trends do you see emerging as more and more enterprises look to modernize their data infrastructure and really benefit from data intelligence?

Bharti: Again, I’ll go back to what I’m talking about, generative AI, and I’ll give an example. For one of our customers, we are managing their data center, and I’m also part of the channel where we see constant back and forth between support and engineering. Support is asking, “Hey, this is what is happening, what should we be doing?” So think of a different scenario where you are able to collect all this data and feed it into the LLMs. When you’re talking about this data, it resides in several places. It resides in the heads of our experts. It is there in the documentation, it’s there in the support tickets, it’s there in logs, live logs. It is there in the traces. So it’s almost impossible for a human being to analyze this data and get meaningful insights.

However, if we combine LLMs with the power of, say, knowledge graphs, vector databases, and other tools, it will be possible to analyze this data at the speed of light, and present the recommendation in front of the user through a very simple user interface. And in most cases, just via a very simple natural language interface. So I think that’s a kind of a complete paradigm shift where you have so many sources that you need to constantly analyze versus having the full automation. And that’s why I feel that these copilots will become an essential part of the data centers. In the beginning they’ll help with the automation to deal with the problems prevalent in any data center like resource management and optimization, proactive problem determination, and resolution of the same. As we go into the future, we’ll see more manual operations converted into automated operations. First, we’ll see humans in the loop, and eventually we’ll see a trend towards fully autonomous data centers.
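To make the copilot idea concrete, here is a minimal retrieval sketch: tickets, runbooks, and logs are embedded into a small vector index, the most relevant snippets are retrieved for a question, and the result is assembled into a prompt for an LLM. The corpus, the model choice, and the complete_with_llm() placeholder are assumptions for illustration; Patel's team would combine this with knowledge graphs and other tools.

```python
# A minimal retrieval sketch of the "copilot" pattern Patel outlines: support
# tickets, docs, and logs are embedded into a vector index, the most relevant
# snippets are retrieved for a question, and an LLM answers from that context.
import numpy as np
from sentence_transformers import SentenceTransformer

corpus = [
    "Ticket 4812: array rebuild slowed after firmware 3.2 patch.",
    "Runbook: apply storage firmware patches during low-traffic windows.",
    "Log 2024-03-02: latency spike on node7 after cache policy change.",
]
encoder = SentenceTransformer("all-MiniLM-L6-v2")
index = encoder.encode(corpus, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    q = encoder.encode([question], normalize_embeddings=True)[0]
    scores = index @ q                      # cosine similarity on normalized vectors
    return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

question = "Should we apply the new firmware patch now?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# complete_with_llm(prompt) would call whichever hosted or self-managed LLM the
# data center team has approved; it is a hypothetical placeholder, not a real API.
```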

Laurel: Well, that is quite a future. Thank you very much for joining us today on the Business Lab.

Bharti: Thank you, Laurel. Bye-bye.

Laurel: That was Bharti Patel, senior vice president of product engineering at Hitachi Vantara, whom I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review.

That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Giro Studios. Thanks for listening.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Turning medical data into actionable knowledge

Advances in imaging technologies are giving physicians unprecedented insights into disease states, but fragmented and siloed information technology systems make it difficult to provide the personalized, coordinated care that patients expect.

In the field of medical imaging, health care providers began replacing radiographic films with digital images stored in a picture archiving and communication system (PACS) in the 1980s. As this wave of digitization progressed, individual departments (ranging from cardiology to pathology to nuclear medicine, orthopedics, and beyond) began acquiring their own distinct IT solutions.

PACS remains an indispensable tool for viewing and interpreting imaging results, but leading health care providers are now beginning to move beyond PACS. The new paradigm brings data from multiple medical specialties together into a single platform, with a single user interface that strives to provide a holistic understanding of the patient and facilitate clinical reporting. By connecting data from multiple specialties and enabling secure and efficient access to relevant patient data, advanced information technology platforms can enhance patient care, simplify workflows for clinicians, and reduce costs for health care organizations. This organizes data around patients, rather than clinical departments.
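As a toy illustration of the shift from department-centric to patient-centric data (not the Syngo Carbon data model), the snippet below reorganizes the same departmental records into a single per-patient view while preserving each item's native format. The departments, record fields, and IDs are invented for the example.

```python
# A toy illustration of reorganizing departmental silos into a patient-centric view.
from collections import defaultdict

silos = {
    "radiology":  [{"patient_id": "P001", "item": "Chest CT", "format": "DICOM"}],
    "cardiology": [{"patient_id": "P001", "item": "Echo report", "format": "PDF"}],
    "pathology":  [{"patient_id": "P002", "item": "Biopsy slides", "format": "WSI"}],
}

def patient_centric_view(department_silos: dict) -> dict:
    """Reorganize the same records so each patient's history sits in one place."""
    by_patient = defaultdict(list)
    for department, records in department_silos.items():
        for record in records:
            by_patient[record["patient_id"]].append(
                {"department": department, "item": record["item"],
                 "format": record["format"]}          # native format is preserved
            )
    return dict(by_patient)

print(patient_centric_view(silos)["P001"])  # everything known about patient P001
```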

Meeting patient expectations

Health care providers generate an enormous volume of data. Today, nearly one-third of the world’s data volume is generated by the health care industry. The growth in health care data outpaces even media and entertainment, whose data is expanding at a 25% compound annual growth rate, compared with 36% for health care data. This makes the need for comprehensive health care data management systems increasingly urgent.

The volume of health care industry data is only part of the challenge. Different data types stored in different formats create an additional hurdle to the efficient storage, retrieval, and sharing of clinically important patient data.

PACS was designed to view and store data in the Digital Imaging and Communications in Medicine (DICOM) standard, and a process known as “DICOM-wrapping” is used for PACS to provide access to patient information stored in PDF, MP4, and other file formats. In addition to adding steps that impede efficient workflow, DICOM-wrapping makes it difficult for clinicians to work with a file in its native format. PACS users are given what is essentially a screenshot of an Excel file, for example, which makes it impossible to use the data analysis features in the Excel software.

With an open image and data management (IDM) system coupled with an intuitive reading and reporting workspace, patient data can be consolidated in one location instead of in multiple data silos, giving clinicians the information they need to provide the highest level of patient-centered care. In a 2017 survey by health insurance company Humana, patients said they aren’t interested in the details of health care IT but are nearly unanimous when it comes to their expectations: 97% said that their health care providers should have access to their complete medical history.

Adapting to clinical needs

To meet patient expectations and needs, health care IT must also serve health care providers and systems by offering flexibility, both in its initial setup and in its capacity to scale to meet evolving organizational demands.

A modular architecture enables health care providers and systems to tailor their system to their specific needs. Depending on clinical needs, health care providers can integrate specialist applications for reading and reporting, AI-powered functionalities, advanced visualization, and third-party tools. The best systems are scalable, so that they can grow as an organization grows, with the ability to flexibly scale hardware by expanding the number of servers and storage capacity.

A simple, unified UI enables a quick learning curve across the organization, while the adoption of a single enterprise system helps reduce IT costs by enabling the consolidation and integration of previously distinct systems. Through password-protected data transfers, these systems can also facilitate communication with patients.

Many into one

One solution to the challenges and opportunities created by the growing volume of medical data is Siemens Healthineers’ Syngo Carbon Core. It combines two elements: Syngo Carbon Space, a front-end workspace for reporting and routine reading, and Syngo Carbon IDM, a powerful and flexible IDM system on the back end. By combining these two elements, Syngo Carbon Core allows health care providers to manage data around patients, not departments.

Syngo Carbon Space brings imaging data, diagnostic software elements, and clinical tools together into a single, intuitive workspace for both routine and more complex cases. Customizable layouts allow clinicians to tailor their routine reader to their needs and preferences, with workflow optimization tools that maximize efficiency. In addition, organizations have the flexibility to use editable structured reporting templates or free-format reports. The translation of findings into coded and discrete data helps specialists generate patient-centered reports that guide clinical decision-making. Through the workspace, clinicians can also directly access Syngo Carbon’s Advanced Visualization, which incorporates additional tools and AI-powered applications, without having to switch to another application.

On the back end of Syngo Carbon Core, robust IDM consolidates patient data and seamlessly integrates systems across an enterprise. Its open design enables the integration of existing DICOM Long Term Archives (LTAs), including legacy PACS systems. All data is kept in its native format—meaning that a PDF remains a PDF, for example—to ensure interoperability.

The growing volume of data generated in modern health care environments creates challenges, but also presents tremendous opportunities for delivering high-quality, personalized medicine. With comprehensive health care data management systems, health care providers can turn data into a strategic asset for their organizations and their patients.

This content was produced by Siemens Healthineers. It was not written by MIT Technology Review’s editorial staff.

Why embracing complexity is the real challenge in software today

Technology Radar is a snapshot of the current technology landscape produced by Thoughtworks twice a year; it’s based on technologies we’ve been using as an organization and communicates our perspective on them. There is always a long list of candidates to be featured for us to work through and discuss, but with each edition that passes, the number of technologies the group discusses grows ever longer. It seems there are, increasingly, more and more ways to solve a problem. On the one hand this is a good thing—the marketplace is doing its job offering a wealth of options for technologists. Yet on the other it also adds to our cognitive load: there are more things to learn about and evaluate.

It’s no accident that many of the most widely discussed trends in technology—such as data mesh and, most recently, generative AI (GenAI)—are presented as solutions to this complexity. However, it’s important that we don’t ignore complexity or see it as something that can be fixed: we need to embrace it and use it to our advantage.

Redistributing complexity

The reason we can’t just wish away or “fix” complexity is that every solution—whether it’s a technology or methodology—redistributes complexity in some way. Solutions reorganize problems. When microservices emerged (a software architecture approach where an application or system is composed of many smaller parts), they seemingly solved many of the maintenance and development challenges posed by monolithic architectures (where the application is one single interlocking system). However, in doing so microservices placed new demands on engineering teams; they require greater maturity in terms of practices and processes. This is one of the reasons why we cautioned people against what we call “microservice envy” in a 2018 edition of the Technology Radar, with CTO Rebecca Parsons writing that microservices would never be recommended for adoption on Technology Radar because “not all organizations are microservices-ready.” We noticed there was a tendency to look to adopt microservices simply because it was fashionable.

This doesn’t mean the solution is poor or defective. It’s more that we need to recognize the solution is a tradeoff. At Thoughtworks, we’re fond of saying “it depends” when people ask questions about the value of a certain technology or approach. It’s about how it fits with your organization’s needs and, of course, your ability to manage its particular demands. This is an example of essential complexity in tech—it’s something that can’t be removed and which will persist however much you want to get to a level of simplicity you find comfortable.

In terms of microservices, we’ve noticed increasing caution about rushing to embrace this particular architectural approach. Some of our colleagues even suggested the term “monolith revivalists” to describe those turning away from microservices back to monolithic software architecture. While it’s unlikely that the software world is going to make a full return to monoliths, frameworks like Spring Modulith—a framework that helps developers structure code in such a way that it becomes easier to break apart a monolith into smaller microservices when needed—suggest that practitioners are becoming more keenly aware of managing the tradeoffs of different approaches to building and maintaining software.

Supporting practitioners with concepts and tools

Because technical solutions have a habit of reorganizing complexity, we need to carefully attend to how this complexity is managed. Failing to do so can have serious implications for the productivity and effectiveness of engineering teams. At Thoughtworks we have a number of concepts and approaches that we use to manage complexity. Sensible defaults, for instance, are starting points for a project or piece of work. They’re not things that we need to simply embrace as a rule, but instead practices and tools that we collectively recognize are effective for most projects. They give individuals and teams a baseline to make judgements about what might be done differently.

One of the benefits of sensible defaults is that they can guard you against the allure of novelty and hype. As interesting or exciting as a new technology might be, sensible defaults can anchor you in what matters to you. This isn’t to say that new technologies like generative AI shouldn’t be treated with enthusiasm and excitement—some of our teams have been experimenting with these tools and seen impressive results—but instead that adopting new tools needs to be done in a way that properly integrates with the way you work and what you want to achieve. Indeed, there are a wealth of approaches to GenAI, from high profile tools like ChatGPT to self-hosted LLMs. Using GenAI effectively is as much a question of knowing the right way to implement for you and your team as it is about technical expertise.

Interestingly, the tools that can help us manage complexity aren’t necessarily new. One thing that came up in the latest edition of Technology Radar was something called risk-based failure modeling, a process used to understand the impact, likelihood, and detectability of the various ways that a system can fail. This has origins in failure modes and effects analysis (FMEA), a practice that dates back to the period following World War II and was used in complex engineering projects in fields such as aerospace. This signals that some challenges endure; while new solutions will always emerge to combat them, we should also be comfortable looking to the past for tools and techniques.
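For readers unfamiliar with FMEA-style scoring, here is a small sketch of risk-based failure modeling: each failure mode gets a 1-10 score for severity, occurrence (likelihood), and detection difficulty, and modes are ranked by risk priority number (RPN = severity x occurrence x detection). The failure modes and scores below are invented for illustration.

```python
# A small sketch of risk-based failure modeling in the spirit of classic FMEA.
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int      # 1 (negligible)     .. 10 (catastrophic)
    occurrence: int    # 1 (rare)           .. 10 (almost certain)
    detection: int     # 1 (easily caught)  .. 10 (almost undetectable)

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("Payment API returns stale exchange rates", 8, 4, 6),
    FailureMode("Retry storm overloads downstream service", 7, 5, 3),
    FailureMode("Silent data loss in nightly batch export", 9, 2, 9),
]

for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.rpn:4d}  {m.name}")   # highest-risk failure modes first
```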

Learning to live with complexity

McKinsey’s argument that the productivity of development teams can be successfully measured caused a stir across the software engineering landscape. While having the right metrics in place is certainly important, prioritizing productivity in our thinking can cause more problems than it solves when it comes to complex systems and an ever-changing landscape of solutions. Technology Radar called this out with an edition themed “How productive is measuring productivity?”, which highlighted the importance of focusing on developer experience with the help of tools like DX DevEx 360.

Focusing on productivity in the way McKinsey suggests can cause us to mistakenly see coding as the “real” work of software engineering, overlooking things like architectural decisions, tests, security analysis, and performance monitoring. This is risky: organizations that adopt such a view will struggle to see tangible benefits from their digital projects. This is why the key challenge in software today is embracing complexity, not treating it as something to be minimized at all costs but as a challenge that requires thoughtfulness in processes, practices, and governance. The key question is whether the industry realizes this.

This content was produced by Thoughtworks. It was not written by MIT Technology Review’s editorial staff.

Optimizing platforms offers customers and stakeholders a better way to bank

When it comes to banking, whether it’s personal, business, or private, customer experience is everything. Building new technologies and platforms, employing them at scale, and optimizing workflows are especially critical for any large bank looking to meet evolving customer and internal stakeholder demands for faster and more personalized ways of doing business. Institutions like JPMorgan Chase are implementing best practices, cost-efficient cloud migration, and emerging AI and machine learning (ML) tools to build better ways to bank, says Vrinda Menon, Head of Managed Accounts, Client Onboarding and Client Services Technology at J.P. Morgan Private Bank.

Menon stresses that it is critical that technologists stay very focused on the business impact of the software and tools they develop.

“We coach our teams that success and innovation does not come from rebuilding something that somebody has already built, but instead from leveraging it and taking the next leap with additional features upon it to create high impact business outcomes,” says Menon.

At JPMorgan Chase, technologists are encouraged, where possible, to see the bigger picture and solve for the larger pattern rather than just the singular problem at hand. To reduce redundancies and automate tasks, Menon and her team focus on data and measurements that indicate where emerging technologies like AI and machine learning could enhance processes like onboarding or transaction processing at scale. 

AI/ML has become commonplace across many industries, with private banking being no exception, says Menon. At a base level, AI/ML can extract data from documents, classify information, analyze data, and detect issues and outliers across a wide range of use cases. But Menon is looking to the near future, when AI/ML can help proactively predict client needs based on various signals. For example, a private banking client who has recently been married may ask their bank for a title change. Using the client’s data in context along with this new request, AI/ML tools could proactively help bankers identify additional things to ask the client, such as whether they need to change beneficiaries or could optimize taxes by filing jointly.

“You have an opportunity to be more proactive and think about it holistically so you can address their needs before they even come to you to ask for that level of engagement and detail,” says Menon.

This episode of Business Lab is produced in association with JPMorgan Chase. 

Full transcript

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

Our topic today is investing in building great experiences. A number of people benefit from enterprise investment in emerging and new technologies, including customers who want better, faster, and newer ways of doing business. But internal stakeholders want the same investment in better tools and systems to build those fast and new ways of doing business. Balancing both needs is possible.

Two words for you: optimizing platforms.

Today we’re talking with Vrinda Menon, the chief technology officer of Managed Accounts, Client Onboarding and Client Services at JPMorgan Private Bank.

This podcast is produced in association with JPMorgan Chase.

Welcome, Vrinda.

Vrinda Menon: Thank you so much, Laurel. I’m looking forward to this discussion.

Laurel: Great. So, let’s start with how people often think of JPMorgan Chase. They likely associate the company with personal banking, ATMs, and credit cards, but could you describe what services the private bank provides and how operations and client services have evolved and transformed since you began your role at JPMorgan Chase?

Vrinda: Sure. JPMorgan Chase indeed does far more than personal banking, credit cards, and ATMs. The private bank of JPMorgan Chase is often referred to as the crown jewel of our franchise. We service our high-net-worth and ultra-high-net-worth clients across the globe. We provide them services like investment management, trust and estate planning, banking services, brokerage services, and customized lending, just to name a few. And in terms of what has transformed in the recent years since I joined, I would say that we’ve become far more tech-savvy as an organization, and this is thanks in no small measure to new leadership in operations and client services. I think three things have changed very dramatically since I’ve joined. The first is culture. In my first few months, I spent a week doing the job of an operations analyst. And in doing that I started to understand firsthand the painful manual work that people were subject to, and how they felt they did not have the permission to have things changed for them.

But working off that and connecting with a lot more people on the ground who are doing these types of activities, we worked with them to make those changes and helped them see light at the end of the tunnel. And then suddenly the demand for more change and more automation started building as a groundswell of energy, with support from our partners in operations and services. Now, routine, repetitive, mundane, mind-numbing work is not an option at the table. It’s become a thing of the past. Secondly, we’ve grown an army of citizen developers who have access to tools and technologies where they can do quick automation without having to depend on broader programs and broader pieces of technology. We’ve also done something super interesting, which is, over the past three years we’ve taken every new analyst in the private bank and trained them on Python.

And so, they’ve started to see the benefits of doing things themselves. So, culture change I think has been one of the biggest things that we’ve achieved in the past few years since I joined. Second, we built a whole set of capabilities, we call them common capabilities. Things like how do you configure new workflows? How do you make decisions using spreadsheets and decision models versus coding it into systems? So,  you can configure it, you can modify it, and you can do things more effectively. And then tools like checklists, which can be again put into systems and automated in a few minutes, in many cases. Today, we have millions of tasks and millions of decisions being executed through these capabilities, which has suddenly game-changed our ability to provide automation at scale.

And last but not least, AI and machine learning, it now plays an important role in the underpinnings of everything that we do in operations and client services. For example, we do a lot of process analytics. We do load balancing. So, when a client calls, which agent or which group of people do we direct that client call to so that they can actually service the client most effectively. In the space of payments, we do a lot with machine learning. Fraud detection is another, and I will say that I’m so glad we’ve had the time to invest and think through all of these foundational capabilities. So, we are now poised and ready to take on the next big leap of changes that are right now at our fingertips, especially in the evolving world of AI and machine learning and of course the public cloud.

Laurel: Excellent. Yeah, you’ve certainly outlined the diversity of the firm’s offerings. So, when building new technologies and platforms, what are some of the working methodologies and practices that you employ to build at scale and then optimize those workflows?

Vrinda: Yeah, as I said before, the private bank has a lot of offerings, but then amplify that with all the other offerings that JPMorgan Chase, the franchise has, a commercial bank, a corporate and investment bank, a consumer and community bank, and many of our clients cross all of these lines of business. It brings a lot of benefits, but it also has complexities. And one of the things that I obsess personally over is how do we simplify things, not add to the complexity? Second is a mantra of reuse. Don’t reinvent because it’s easy for technologists to look at a piece of software and say, “That’s great, but I can build something better.” Instead, the three things that I ask people to focus on and our organization collectively with our partners focus on is first of all, look at the business outcome. We coach our teams that success and innovation does not come from rebuilding something that somebody has already built, but instead from leveraging it and taking the next leap with additional features upon it to create high impact business outcomes.

So, focusing on outcome number one. Second, if you are given a problem, try and look at it from a bigger picture to see whether you can solve the pattern instead of that specific problem. So, I’ll give you an example. We built a chatbot called Casey. It’s one of the most loved products in our private bank right now. And Casey doesn’t do anything really complex, but what it does is solves a very common pattern, which is ask a few simple questions, get the inputs, join this with data services and join this with execution services and complete the task. And we have hundreds of thousands of tasks that Casey performs every single day. And one of them, especially a very simple functionality, the client wants a bank reference letter. Casey is called upon to do that thousands of times a month. And what used to take three or four hours to produce now takes like a few seconds.

So, it suddenly changes the outcome, changes productivity, and changes the happiness of people who are doing things that you know they themselves felt was mundane. So, solving the pattern, again, important. And last but not least, focusing on data is the other thing that’s helped us. Nothing can be improved if you don’t measure it. So, to give you an example of processes, the first thing we did was pick the most complex processes and mapped them out. We understood each step in the process, we understood the purpose of each step in the process, the time taken in each step, we started to question, do you really need this approval from this person? We observed that for the past six months, not one single thing has been rejected. So, is that even a meaningful approval to begin with?

Questioning if that process could be enhanced with AI: could AI automatically say, “Yes, please approve,” or “There’s a risk in this, do not approve,” or “It’s okay, but it needs a human review”? And then making those changes in our systems and flows, and then obsessively measuring the impact of those changes. All of these have given us a lot of benefits. And I would say we’ve made significant progress just with these three principles of focusing on outcome, focusing on solving the pattern, and focusing on data and measurements, in areas like client onboarding, maintaining client data, et cetera. So, this has been very helpful for us because in a bank like ours, scale is super important.
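As a rough sketch of the pattern Menon describes with Casey above (ask a few questions, join the answers with data services and execution services, complete the task), the snippet below wires one generic flow to a hypothetical bank-reference-letter task. The service functions are stand-ins, not JPMorgan Chase APIs.

```python
# A minimal sketch of the "solve the pattern" idea: collect a few inputs,
# call a data service, call an execution service, return the result.
def lookup_client(client_id: str) -> dict:
    # Placeholder "data service": would fetch client and account details.
    return {"client_id": client_id, "name": "A. Client", "accounts": ["...-1234"]}

def generate_reference_letter(client: dict, purpose: str) -> str:
    # Placeholder "execution service": would render and file the document.
    return f"Bank reference letter for {client['name']} ({purpose})."

def handle_task(task: str, answers: dict) -> str:
    """One generic flow reused for many simple tasks, rather than one app per task."""
    client = lookup_client(answers["client_id"])          # join with data services
    if task == "bank_reference_letter":
        return generate_reference_letter(client, answers["purpose"])  # execute
    raise ValueError(f"No workflow registered for task: {task}")

print(handle_task("bank_reference_letter",
                  {"client_id": "C-001", "purpose": "visa application"}))
```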

Laurel: Yeah, that’s a really great explanation. So, when new challenges do come along, like moving to the public cloud, how do you balance the opportunities of that scale, but also computing power and resources within the cost of the actual investment? How do you ensure that the shifts to the cloud are actually both financially and operationally efficient?

Vrinda: Great question. So obviously every technologist in the world is super excited with the advent of the public cloud. It gives us the powers of agility and economies of scale. We at JPMorgan Chase are able to leverage world-class evolving capabilities at our fingertips. We also have the ability to partner with talented technologists at the cloud providers and the many service providers we work with that have advanced solutions available first on the public cloud. We are eager to get our hands on those. But with that comes a lot of responsibility, because as a bank, we have to worry about security, client data, privacy, resilience, and how we are going to operate in a multi-cloud environment, because some data has to remain on-prem in our private cloud. So, there’s a lot of complexity, and we have engineers across the board who think a lot about this, and their day and night jobs are to try and figure this out.

As we think about moving to the public cloud in my area, I personally spend time thinking in depth about how we could build architectures that are financially efficient. And the reason I bring that up is because traditionally as we think about data centers where our hardware and software has been hosted, developers and architects haven’t had to worry about costs because you start with sizing the infrastructure, you order that infrastructure, it’s captive, it remains in the data center, and you can expand it, but it’s a one-time cost each time that you upgrade. With the cloud, that situation changes dramatically. It’s both an opportunity but also a risk. So, a financial lens then becomes super important right at the outset. Let me give you a couple of examples of what I mean. Developers in the public cloud have a lot of power, and with that power comes responsibility.

So, I’m a developer and my application is not working right now because there’s some issue. I have the ability to actually spin up additional processes. I have the ability to spin up additional environments, all of which attract costs, and if I don’t control and manage that, the cost could quickly pile up. Data storage, again, we had fixed storage, we could expand it periodically in the data centers, but in the public cloud, you have choices. You can say data that’s going to be slowly accessed versus data that’s going to be accessed frequently to be stored in different types of storage with different costs as a result. Now think about something like a financial ledger where you have retention requirements of let’s say 20 years. The cost could quickly pile up if you store it in the wrong type of storage. So, there’s an opportunity to optimize cost there, and if you ignore it and you’ve not kept an eye on it, you could actually have costs that are just not required.

To do this right, we have to ask developers, architects, and our engineers to not just think about the best performance, the most optimal resilience, but also think about cost as a fundamental aspect of how we look at architectures. So, this for me is a huge area of focus, starting with awareness for our people, training our people, thinking about architecture patterns, solution patterns, tooling, measurements, so that we completely stay on top of this and become more effective and more efficient in how we get to the public cloud. While the journey is exciting, I want to make sure that as we land there, we land safely and optimally from a cost standpoint.
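One way to picture the cost-aware architecture choices Menon describes is a simple storage-tier decision: pick the cheapest class that still fits the data's access frequency and retention requirement, such as a 20-year financial ledger. The tier names, thresholds, and per-gigabyte prices below are invented for illustration, not cloud list prices.

```python
# Illustrative sketch of the storage-tier decision: choose the cheapest class
# that still meets access-frequency and retention needs.
def choose_storage_tier(reads_per_month: float, retention_years: int) -> str:
    if reads_per_month >= 100:
        return "hot"             # frequently accessed working data
    if reads_per_month >= 1:
        return "infrequent"      # occasionally read, still online
    if retention_years >= 7:
        return "archive"         # e.g., a 20-year financial ledger
    return "infrequent"

PRICE_PER_GB_MONTH = {"hot": 0.023, "infrequent": 0.0125, "archive": 0.002}

def yearly_cost(gb: float, tier: str) -> float:
    return gb * PRICE_PER_GB_MONTH[tier] * 12

ledger_tier = choose_storage_tier(reads_per_month=0.1, retention_years=20)
print(ledger_tier, f"${yearly_cost(5000, ledger_tier):,.0f}/year for 5 TB")
```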

Laurel: And especially in your position, thinking about how technology will affect the firm in the years ahead is critical. Therefore, as emerging technologies like AI and machine learning become more commonplace across industries, could you offer an example of how you’re using them in the areas that you cover?

Vrinda: Yeah, certainly. And we use AI/ML at many levels of complexity. So let me start with the base case. AI/ML, especially in operations and client services, starts with can I get data from documents? Can I OCR those documents, which is optical character recognition? Can I get information out of it, can I classify it? Can I perform analytics on it? So that’s the base case. On top of that, as you look at data, for example, payments data or data of transactions, and let’s say human beings are scanning them for issues or outliers, outlier detection techniques with AI/ML, they are also table stakes now, and many of our systems do that. But as you move on to the next level of prediction, what we’ve been able to do is start to build up models where say the client is calling. The client has all these types of cases in progress right now. What could they be calling about in addition to this?

The client expressed sentiment about something that they were not happy with two weeks ago. Is it likely that they’re calling about this? Can I have that information at the fingertips of the client service agent so they can look at it and respond as soon as the client asks for something? And think about the next stage of evolution, which is, the client came to us and said, “Change my title because I just got married.” Typically, in a transactional kind of activity, you would respond to the client and fix the title from let’s say, Ms. to Mrs. if that’s what they asked you to do. But imagine if when they came to do that, we said to them, here’s 10 other things that you should possibly think of now that you said you’ve got married. Congratulations. Do you want to address your beneficiaries? Do you want to change something in tax planning? Do you want to change the type of tax calculations that you do because you want to optimize now that you’re married, and you and your spouse could be filing jointly? Again, not that the client would choose to change those things, but you have an opportunity to be more proactive and think about it holistically so you can address their needs before they even come to you asking for that level of engagement and detail.

We also exploit a lot of AI/ML capabilities in client onboarding to get better data, to start to predict what data is right, and to start to predict risk. And our next leap, I believe strongly, and I’m super excited about this, is the area of large language models, which I think are going to offer us exponential possibilities, not just in JPMorgan Chase, but as you can see in the world right now with technologies like ChatGPT, OpenAI’s technologies, as well as any of the other publicly available large language models that are being developed every single day.
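For the "table stakes" outlier detection on transaction data that Menon mentions earlier in this answer, a compact sketch with scikit-learn's isolation forest might look like the following; the synthetic payment amounts and the contamination setting are assumptions for illustration only.

```python
# A compact sketch of outlier detection on payment data using an isolation forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Features per payment: [amount in dollars, hour of day]
normal = np.column_stack([rng.normal(120, 30, 500), rng.integers(8, 18, 500)])
odd = np.array([[9_500, 3], [7_200, 2]])            # large payments at odd hours
payments = np.vstack([normal, odd])

model = IsolationForest(contamination=0.01, random_state=0).fit(payments)
flags = model.predict(payments)                      # -1 marks suspected outliers
print("flagged rows:", np.where(flags == -1)[0])
```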

Laurel: Well, it’s clear that AI offers great opportunities for optimizing platforms and transformations. Could you describe how JPMorgan Chase decided to create dedicated teams for AI and machine learning, and how you built out those teams?

Vrinda: Yeah, certainly. At JPMorgan Chase, we’ve been cultivating the mindset for some years now to think AI-first while hiring people. We also leverage the best talent in the industry, and we’ve hired a lot of people in our research divisions as well to work on AI/ML; we’ve got several thousand technologists focused on AI. For me personally, in 2020, during the first months of the pandemic, I decided that I needed to see more AI/ML activity across my areas. So, I did what I called the “Summer of AI/ML,” a fully immersive program that ran over 12 weeks with training for our people, though it was not full-time. They would dial in for a couple of hours, get trained on an AI/ML concept and some techniques, and then they would continue to practice that for the week.

Then we had ideation sessions with our users for a couple of weeks, and then a hackathon, and some brilliant ideas came out of it. But when I stepped back a few months later and looked at the whole thing and its results, I realized that many of the ideas had not reached their final destination in production. In thinking a little more deeply about that, I understood that we had a problem. The problem was this: while AI is a great thing and everybody appreciates it, until AI becomes ingrained in everybody’s brain as the first thing to think about, there’s always going to be a healthy tension between choosing the next best feature on a product, which is very deterministic (say, add this button here, or add these features using conventional technologies like Java), versus game-changing the product using AI, which is a little bit more of a risk: the results are not always predictable, and it requires experimentation and R&D.

And so, when you have a choice between incremental changes that are deterministic and changes that are more probabilistic, people tend to take the most certain answer. So, I decided that I needed to build out a focused, dedicated team of data scientists who were just going to obsess about solving problems in the space of data science, and embed them across the products that we were building. And now the results are starting to speak for themselves, because the work they’ve done is phenomenal, the demand on them is growing every single day to the point where I’ve grown the team, and the value that they’re providing is measured and visible to the broader organization.

Laurel: So, in JPMorgan Chase’s client services, customer experience is clearly a driving force. How do you ensure that your teams are providing clients, especially high-net-worth private clients who have high expectations of service, with services that meet their banking and account management needs?

Vrinda: So, we obsess over customer experience, starting from the CEO down to every single employee. I have three tenets for my team: number one is client experience, the second is user experience, and the third is engineering excellence. And they know that a lot of us are measured by how well we service our clients. So, in the private bank specifically, in addition to reviewing our core capabilities like our case management system, our voice recognition systems, and our fraud capture systems, we continuously analyze data received from client surveys and data received through every single interaction that we have with our clients across all channels, whether it be a voice channel, emails, or things that clients type on our websites and the places that they access. And our models do not just look at sentiment; they also look at client experience.

And as they look at experience, the things that we are trying to understand are, first of all, how is the client feeling in this interaction? But more important: are client one and client two and client three feeling the same way about a particular aspect of our process? Do we need to change that process as a result, or is there more training that needs to be provided to our agents because we are not able to fully satisfy this category of requests? By doing that continuously, analyzing it, and, back to the point that I made earlier, measuring it constantly, we are able to say, first of all, how was the experience to begin with? How is the experience now, after making these changes, whether training programs or fixes in our systems? And some of the other things we are able to do are look at experiences over a period of time.

So, for example, a client came to us last year, and their experience, based on the measurements that we did, was at a certain level. They continue to interact with us over a period of months. Has it gone up? Has it gone down? How is that needle trending? How do we take that to superb? We’ve been able to figure these things out in ways that have allowed us to prevent complaints, for example, and to get to a point where issues are escalated to the right people in the organization, especially in the servicing space, where we are able to triage and manage these things more effectively, because we are a high-touch client business and we need to make sure that our clients are extremely happy with us.
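
A toy version of this kind of cross-client analysis might aggregate per-interaction sentiment scores by process category to flag areas that need a process change or more agent training. The sketch below uses synthetic data and arbitrary thresholds, and assumes pandas; it is not a description of the private bank’s systems.

```python
# Illustrative sketch: roll up per-interaction sentiment by process category
# to spot issues shared across clients. Synthetic data; thresholds are arbitrary.
import pandas as pd

interactions = pd.DataFrame({
    "client_id": [1, 2, 3, 1, 2, 4, 5],
    "category": ["wire_transfer", "wire_transfer", "wire_transfer",
                 "statement", "statement", "title_change", "title_change"],
    "sentiment": [-0.6, -0.4, -0.7, 0.3, 0.5, -0.1, 0.2],  # -1 negative, +1 positive
})

summary = (interactions
           .groupby("category")["sentiment"]
           .agg(mean_sentiment="mean", interaction_count="count")
           .reset_index())

# Categories with poor average sentiment across several interactions are
# candidates for a process change or targeted agent training.
flagged = summary[(summary["mean_sentiment"] < -0.3) & (summary["interaction_count"] >= 3)]
print(flagged)
```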

Laurel: Oh yeah, absolutely. Another part of thinking about customer experience and customer service is building a workforce that can respond to it. So here we’re going to talk a bit about promoting diversity, which has been a tenet of your career: you currently sit on the board of the Transition Network, a nonprofit that empowers a diverse network of women throughout career transitions. At JPMorgan Chase, how do you grow talent and improve representation across the company? And how does that help build better customer experience?

Vrinda: Sure, that’s a great question. I certainly am very passionate about diversity, and during the past 15 years of my career, I’ve spent a lot of time supporting it. In my prior firm, I was co-head of the Asian Professional Network. Subsequently, for the past three years, I’ve been a board member at the Transition Network, which is all about women in transition: as they grow out of their careers into retirement and into other stages of life, how do we help them transition? And here at JPMorgan Chase, I’m the sponsor for what is called the Take It Forward initiative, which supports 15,000 women technologists. JPMorgan Chase, as you know, does a broad range of activities in the area of diversity across all kinds of business resource groups, and we invest a lot of time and energy.

But specifically, the Take It Forward initiative that I sponsor plays a key role in helping these 15,000 women technologists continuously enhance their leadership skills, grow their technical skills, build their confidence, develop networks, learn from senior sponsors and mentors, and grow their careers, all of which makes their work experience very enriching. When I hear things like “I’m motivated,” “I get new energy interacting with these senior women,” “I trust my personal power more,” “I’m confident enough to negotiate with my manager for a better role,” or “I feel confident that I can discuss my compensation,” it makes me really happy. And when they say “I stay at JPMorgan Chase because of Take It Forward,” it brings tears to my eyes. It’s really one of the most amazing volunteer-driven initiatives in this organization. A lot of people pour passion, energy, and time into making it succeed, and the initiative has won many external awards as well.

I strongly believe all of these efforts are critical, because when people’s experiences change and they’re happy, what they do becomes that much more effective. It changes how we work internally and how we present ourselves externally, and it game-changes our business outcomes. I’ve seen that in problem-solving meetings: when you evaluate risk and you bring in people from diverse backgrounds, some more risk-averse, some more risk-taking, you suddenly see that dynamic play out, and the outcome becomes much different from what it would’ve been if you didn’t have all those people in the mix. So overall, I strongly believe in this, and I’ve seen it play out in every single firm I’ve ever worked at. That’s my take on diversity and how it helps us.

Laurel: Well, it certainly is important work, especially as it ties so tightly to the firm’s own ethos. So Vrinda, looking forward, how do you envision the future of private banking and client management as emerging technologies become more prevalent and enterprises start to shift their infrastructure to the public cloud?

Vrinda: As I mentioned earlier, I see the next set of emerging technologies taking the world on a super exciting ride, and I think it’s going to be as transformational as the advent of the world wide web. Just take the example of large language models. The areas most likely to be disrupted first will be any work that involves content creation, because that is table stakes for a large language model. As I expand that to the rest of my work, client services and operations and many other areas that require repetitive work and large-scale interpretation and synthesis of information, that’s again table stakes for large language models.

Expand that now to the next evolution, which is agents, the emerging technology built on top of large language models. When agents are provided with a suite of tools and can use reasoning, like humans, to decide which tool to execute based on an input, that will game-change everything I was talking about earlier on workflow, task execution, and operationally intense activities in the organization. When I look at myself as a software developer, in the areas of code generation, code testing, code correction, and test data generation, just to name a few, all of those are going to be game-changed. So not just the work that our users do in the private bank, but the work that we do as technologists in the private bank: a lot of that is going to change dramatically.
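
The agent pattern Menon describes, where a model reasons over which tool to invoke for a given input, can be sketched in a framework-free way. In the hypothetical snippet below, a keyword-based router stands in for the large language model’s reasoning step, and the tools are stubs.

```python
# Simplified, framework-free sketch of the "agent with tools" pattern:
# decide which tool fits the request, execute it, and return the result.
# The keyword router below is a stand-in for an LLM's reasoning step.
from typing import Callable

def lookup_case_status(query: str) -> str:
    return "Case #123 is pending document review."              # stubbed tool

def draft_client_email(query: str) -> str:
    return "Draft: Dear client, your request is in progress."   # stubbed tool

TOOLS: dict[str, Callable[[str], str]] = {
    "case_status": lookup_case_status,
    "draft_email": draft_client_email,
}

def route(query: str) -> str:
    """Stand-in for the model's tool-selection reasoning."""
    return "case_status" if "status" in query.lower() else "draft_email"

def run_agent(query: str) -> str:
    tool_name = route(query)          # 1. decide which tool to use
    result = TOOLS[tool_name](query)  # 2. execute the chosen tool
    return f"[{tool_name}] {result}"  # 3. hand the result to the next step

print(run_agent("What is the status of my title change?"))
```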

And then you add on the next level, which is problem solving. Large language models are continuously being trained on all subjects ever known to humans, and that, for me, is the most fascinating part of this. It’s like hundreds of thousands of brains working together on a diverse set of subjects. So, imagine a model that’s been trained on a domain like medicine or aerospace or defense, and then bringing all of that brainpower together to solve a problem in finance. That, to me, is truly the ultimate gold standard of problem solving. We talked about diverse people in a room coming in with different experiences; now imagine models that have been trained across all of those domains. You suddenly have a breadth, depth, and range of diverse knowledge that could never have been contemplated at that scale.

And in order to do all of this, obviously one of the key underpinnings is the public cloud: being able to spin up compute as quickly as possible to do complex calculations and then spin it down when you don’t need it, which is where the public cloud becomes super important. So, all I can say in conclusion is that I think this is an amazing time to be in technology, and I just cannot wait to see how we further step up our game in the coming months and years. Things are moving almost at the speed of light now; every single day, new papers get published and new ideas come out, building on top of some of the exponential technologies that we are seeing in the world today.
 
Laurel: Oh, that’s fantastic. Vrinda, thank you so much for being on the Business Lab today.

Vrinda: Thank you so much, Laurel. It’s my pleasure. I really enjoyed speaking with you and thank you for your thoughtful questions. They were super interesting.

Laurel: That was Vrinda Menon, the chief technology officer of Managed Accounts, Client Onboarding and Client Services at J.P. Morgan Private Bank, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review overlooking the Charles River.

That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Giro Studios. Thanks for listening.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This podcast is for informational purposes only and it is not intended as legal, tax, financial, investment, accounting or regulatory advice. Opinions expressed herein are the personal views of the individual(s) and do not represent the views of JPMorgan Chase & Co. The accuracy of any statements, linked resources, reported findings or quotations are not the responsibility of JPMorgan Chase & Co.

Making sense of sensor data

Consider a supply chain where delivery vehicles, shipping containers, and individual products are sensor-equipped. Real-time insights enable workers to optimize routes, reduce delays, and efficiently manage inventory. This smart orchestration boosts efficiency, minimizes waste, and lowers costs.

Many industries are rapidly integrating sensors, creating vast data streams that can be leveraged to open profound business possibilities. In energy management, growing use of sensors and drone footage promises to enable efficient energy distribution, lower costs, and reduced environmental impact. In smart cities, sensor networks can enhance urban life by monitoring traffic flow, energy consumption, safety concerns, and waste management.

These aren’t glimpses of a distant future, but realities made possible today by the increasingly digitally instrumented world. Internet of Things (IoT) sensors have been rapidly integrated across industries, and now constantly track and measure properties like temperature, pressure, humidity, motion, light levels, signal strength, speed, weather events, inventory, heart rate, and traffic.

The information these devices collect—sensor and machine data—provides insight into the real-time status and trends of these physical parameters. This data can then be used to make informed decisions and take action—capabilities that unlock transformative business opportunities, from streamlined supply chains to futuristic smart cities.

John Rydning, research vice president at IDC, projects that sensor and machine data volumes will soar over the next five years, achieving a greater than 40% compound annual growth rate through 2027. He attributes that not primarily to an increasing number of devices, as IoT devices are already quite prevalent, but rather to more data being generated by each one as businesses learn to make use of their ability to produce real-time streaming data.
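
For a sense of scale, a compound annual growth rate above 40% implies data volumes multiplying more than fivefold over five years, as this quick calculation shows.

```python
# Quick check: what a 40% compound annual growth rate means over five years.
cagr = 0.40
years = 5
growth_multiple = (1 + cagr) ** years
print(f"{growth_multiple:.1f}x the starting volume")  # about 5.4x
```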

Meanwhile, sensors are growing more interconnected and sophisticated, while the data they generate increasingly includes a location in addition to a timestamp. These spatial and temporal features not only capture data changes over time, but also create intricate maps of how these shifts unfold across locations—facilitating more comprehensive insights and predictions.

But as sensor data grows more complex and voluminous, legacy data infrastructure struggles to keep pace. Continuous readings over time and space captured by sensor devices now require a new set of design patterns to unlock maximum value. While businesses have capitalized on spatial and time-series data independently for over a decade, their true potential is only realized when they are considered in tandem, in context, and with the capacity for real-time insights.
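
One common design pattern treats each reading as a (timestamp, location, value) record and aggregates across both dimensions at once. The snippet below is a minimal sketch using synthetic readings and pandas; it is not tied to any particular product.

```python
# Minimal sketch of a spatio-temporal aggregation: average sensor readings
# per site and per hourly time bucket in a single pass. Synthetic data.
import pandas as pd

readings = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2024-01-01 08:05", "2024-01-01 08:40", "2024-01-01 09:10",
        "2024-01-01 08:15", "2024-01-01 09:20",
    ]),
    "site": ["warehouse_a", "warehouse_a", "warehouse_a",
             "warehouse_b", "warehouse_b"],
    "temperature_c": [4.1, 4.6, 5.9, 3.8, 4.0],
})

hourly_by_site = (readings
                  .groupby(["site", pd.Grouper(key="timestamp", freq="1h")])
                  ["temperature_c"]
                  .mean())

print(hourly_by_site)
```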

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Moving data through the supply chain with unprecedented speed

Product information is a powerful commodity in today’s digital economy. Making it accessible can let consumers know if an item contains allergens, help retailers respond swiftly to product recalls, and enable suppliers to track real-time inventory levels. But data can become siloed and inaccessible if organizations fail to make it easy to connect with. This means shifting away from legacy processes and using a “phygital” approach, which brings together data from physical objects and connected digital sources.

“The phygital creates a link between the actual physical good and its digital representation, which can unlock vast volumes of information for consumers—data they haven’t been able to access in the past because it has been tied up in proprietary systems,” says Carrie Wilkie, senior vice president at GS1 US, a member of GS1, a global not-for-profit supply chain standards organization.

Driving adoption of this phygital connection are technological enablement and standards for interoperability. Standards define a common language between technologies and can make data more technology agnostic. These standards, along with evolving data carriers such as two-dimensional (2D) barcodes and Radio Frequency Identification (RFID), are boosting supply chain visibility in an era of uncertainty, and are transforming how consumers select and interact with products.

Next-generation barcodes

Among the best-known global standards are Global Trade Item Numbers (GTINs), which are used to identify products, and Global Location Numbers (GLNs), which are used to identify locations. These unique identifiers, when embedded in a data carrier such as a barcode, provide a way for varying technologies and trading partners across the globe to interpret the data in the same way, enabling them to find products anywhere in their supply chain. Today, a simple scan can connect permissioned data between points in the supply chain. Unlocking the full potential of data in a more robust data carrier can elevate that simple scan to connect any product data to digital information that flows seamlessly across trading partners.
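
GTINs are straightforward to validate: the rightmost digit is a check digit computed from the others using GS1’s standard mod-10 weighting, as the short sketch below illustrates.

```python
# Illustrative implementation of the GS1 mod-10 check digit used by GTINs
# (GTIN-8, -12, -13, and -14). The rightmost digit of a GTIN is its check digit.
def gs1_check_digit(digits_without_check: str) -> int:
    # Weights alternate 3, 1, 3, 1, ... starting from the rightmost digit.
    total = sum(int(ch) * (3 if i % 2 == 0 else 1)
                for i, ch in enumerate(reversed(digits_without_check)))
    return (10 - total % 10) % 10

def is_valid_gtin(gtin: str) -> bool:
    return gtin.isdigit() and gs1_check_digit(gtin[:-1]) == int(gtin[-1])

print(is_valid_gtin("4006381333931"))  # True: a valid GTIN-13
```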

The Universal Product Code (UPC), the one-dimensional machine-readable identifier in North America, and the European Article Number (EAN) barcode for the rest of the world, are the longest-established and most widely used of all barcodes. These common barcodes—and the data behind them—can shed new light on supply chain data. However, a new generation of barcodes is emerging that promises to provide consumers with greater transparency, helping them to make smarter decisions about what they buy and use, while simultaneously improving supply chain safety and resiliency for all stakeholders.

While UPC and EAN barcodes carry GTIN data and can be found on consumer products all over the world, they fail to “create a link between the physical and the digital,” says Wilkie. “We need more information about products at our fingertips in a machine-readable, interoperable way than we’ll ever be able to fit on product packaging.”

Advanced data carriers and emerging standards are capturing unprecedented amounts of data for businesses, regulators, consumers, and patients alike, offering much more than just links to static webpages. Rather, 2D barcodes and RFID technology can support phygital connections that tell a richer story about a product, including where it comes from, whether it contains allergens or is organic, and even how it can be recycled for sustainability purposes.

Better yet, 2D barcodes and RFID technology allow brands to communicate directly with consumers to offer more timely, accurate, and authoritative information. This is a step beyond consumers using their cell phones to look up product data while browsing in a physical store, which nearly four out of 10 consumers currently do, according to 2020 research by PwC Global.

Another advantage of today’s more advanced data carriers: One-dimensional barcodes can contain about 20 characters of information, but 2D barcodes, such as QR codes (quick-response codes), can hold more than 7,000 characters of data, and can provide access to more detailed information such as features, ingredients, expiration date, care instructions, and marketing.

Innovative use cases for QR codes are expanding rapidly, as this matrix code can be read with a line-of-sight device like a hand-held scanner or a personal device like a cell phone.

“By using 2D barcodes, we’re able to start unlocking more information for consumers, patients, and regulators, and create more of a phygital experience at every point in the supply chain,” says Wilkie.

For example, a grocery store chain can use a QR code containing batch and expiration data to support traceability, waste management, and consumer safety around the world. Another application of QR codes is on-demand discounting. According to Wilkie, a bakery can rely on a QR code and electronic store shelf tags “to determine which racks of bread will expire in the next few days,” and can easily mark down products about to expire without the intervention of a store associate. The result is a win-win scenario. Consumers benefit by receiving product discounts, while the retailer saves on both product waste and manual labor costs.
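
The markdown logic in that scenario is simple in principle: read the batch and expiration data carried by the code and discount items whose expiry falls within a window. The snippet below is a hypothetical illustration, not any retailer’s actual system.

```python
# Hypothetical sketch of expiry-based markdowns driven by the batch and
# expiration data carried in a 2D barcode. Dates and discount rules are
# illustrative only.
from datetime import date

products = [
    {"gtin": "00012345678905", "batch": "A41", "expires": date(2024, 6, 3)},
    {"gtin": "00012345678905", "batch": "A42", "expires": date(2024, 6, 9)},
]

def markdown_percent(expires: date, today: date, window_days: int = 3) -> int:
    """Apply a 30% markdown to items expiring within the window."""
    return 30 if (expires - today).days <= window_days else 0

today = date(2024, 6, 1)
for item in products:
    pct = markdown_percent(item["expires"], today)
    if pct:
        print(f"Batch {item['batch']}: mark down {pct}% and update the shelf tag")
```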

RFID technology is another advanced data carrier that is already delivering significant advantages. RFID uses electronic tags that respond to radio waves to automatically transmit data. The tags are affixed to products or pallets, enabling strategically positioned readers to capture and share huge amounts of information in real time. And because data is transmitted via radio waves, line of sight is not needed, unlike with barcodes.

As a wireless system of tags and readers, RFID can deliver enormous benefits. For starters, RFID technology drives a more precise understanding of physical inventory across the supply chain, and in physical stores, with accuracy levels near 99%. By minimizing inventory errors and notifying organizations when it’s time to restock, RFID not only drives supply chain efficiencies but enhances the experiences of customers who want assurances that the products they order are readily available.
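
Because a reader captures the same tag many times, a basic processing step is deduplicating raw reads into a count of unique tags per location. The sketch below uses synthetic EPC identifiers purely for illustration.

```python
# Illustrative sketch: turn raw RFID reads (many per tag) into a count of
# unique tagged items per reader location. EPC values are synthetic examples.
from collections import defaultdict

raw_reads = [
    ("dock_door_1", "urn:epc:id:sgtin:0614141.107346.2017"),
    ("dock_door_1", "urn:epc:id:sgtin:0614141.107346.2017"),  # duplicate read
    ("dock_door_1", "urn:epc:id:sgtin:0614141.107346.2018"),
    ("stockroom", "urn:epc:id:sgtin:0614141.107346.2019"),
]

tags_by_location: dict[str, set[str]] = defaultdict(set)
for location, epc in raw_reads:
    tags_by_location[location].add(epc)

for location, tags in tags_by_location.items():
    print(f"{location}: {len(tags)} unique tagged items")
```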

RFID can also enhance in-store consumer experiences when used in applications like smart shelves that can detect when products are removed, dynamic-pricing displays, and frictionless check-out where an RFID reader can read and check out an entire basket of tagged goods almost instantly. With real-time visibility into stock levels, RFID can also ensure better on-shelf availability of products. In fact, a study by research and consulting company Spherical Insights says the global market for electronic shelving technology, sized at $1.02 billion in 2022, will grow to $3.43 billion by 2032.

Global standards for the benefit of all

For data carriers to deliver on their promises of greater supply chain visibility and enhanced customer experiences, global standards need widespread adoption.

Standards and technology innovations are extending the power and flexibility of unique identifiers, providing a gateway to unprecedented volumes of important information. For example, the GS1 Digital Link standard with a QR code, says Wilkie, “allows organizations to take the GTIN and GS1 identification, and encode it in a URL in a standardized way, unlocking value for the consumer by allowing them to go to a website and access more information than would ever fit on a product package.”
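
In simplified form, a GS1 Digital Link URI places the GTIN, and optionally other data such as batch/lot or expiry, into a web address using GS1 application identifiers. The sketch below is a hypothetical illustration; the domain and values are made up.

```python
# Simplified, hypothetical sketch of composing a GS1 Digital Link-style URI.
# "01", "10", and "17" are the GS1 application identifiers for GTIN,
# batch/lot, and expiration date; the domain and values are made up.
from urllib.parse import quote

def digital_link(domain: str, gtin: str, lot: str | None = None,
                 expiry_yymmdd: str | None = None) -> str:
    uri = f"https://{domain}/01/{quote(gtin)}"
    if lot:
        uri += f"/10/{quote(lot)}"
    if expiry_yymmdd:
        uri += f"?17={quote(expiry_yymmdd)}"
    return uri

# A scan of the QR code resolves this URI to richer product information.
print(digital_link("id.example.com", "09506000134352",
                   lot="ABC123", expiry_yymmdd="260131"))
```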

Products still beep at the point of sale; it’s just that consumers can now access more information than ever before, in ways that not only facilitate a more phygital interaction at every point in the supply chain but also promise to transform the way product data is shared with consumers and suppliers.

“Supply chain standards are table stakes,” says Wilkie. “Using standards to ensure interoperability is critical in making sure that the supply chain is efficient.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.