Bridging the operational AI gap

The transformational potential of AI is already well established. Enterprise use cases are building momentum and organizations are transitioning from pilot projects to AI in production. Companies are no longer just talking about AI; they are redirecting budgets and resources to make it happen. Many are already experimenting with agentic AI, which promises new levels of automation. Yet, the road to full operational success is still uncertain for many. And, while AI experimentation is everywhere, enterprise-wide adoption remains elusive.

Without integrated data and systems, stable automated workflows, and governance models, AI initiatives can get stuck in pilots and struggle to move into production. The rise of agentic AI and increasing model autonomy make a holistic approach to integrating data, applications, and systems more important than ever. Without it, enterprise AI initiatives may fail. Gartner predicts over 40% of agentic AI projects will be cancelled by 2027 due to cost, inaccuracy, and governance challenges. The real issue is not the AI itself, but the missing operational foundation.

To understand how organizations are structuring their AI operations and how they are deploying successful AI projects, MIT Technology Review Insights surveyed 500 senior IT leaders at mid- to large-size companies in the US, all of which are pursuing AI in some way.

The results of the survey, along with a series of expert interviews, all conducted in December 2025, show that a strong integration foundation aligns with more advanced AI implementations, conducive to enterprise-wide initiatives. As AI technologies and applications evolve and proliferate, an integration platform can help organizations avoid duplication and silos, and have clear oversight as they navigate the growing autonomy of workflows.

Key findings from the report include the following:

Some organizations are making progress with AI. In recent years, study after study has exposed a lack of tangible AI success. Yet, our research finds three in four (76%) surveyed companies have at least one department with an AI workflow fully in production.

AI succeeds most frequently with well-defined, established processes. Nearly half (43%) of organizations are finding success with AI implementations applied to well-defined and automated processes. A quarter are succeeding with new processes. And one-third (32%) are applying AI to various processes.

Two-thirds of organizations lack dedicated AI teams. Only one in three (34%) organizations have a team specifically for maintaining AI workflows. One in five (21%) say central IT is responsible for ongoing AI maintenance, and 25% say the responsibility lies with departmental operations. For 19% of organizations, the responsibility is spread out.

Enterprise-wide integration platforms lead to more robust implementation of AI. Companies with enterprise-wide integration platforms are five times more likely to use more diverse data sources in AI workflows. Six in 10 (59%) employ five or more data sources, compared to only 11% of organizations using integration for specific workflows, or 0% of those not using an integration platform. Organizations using integration platforms also have more multi-departmental implementation of AI, more autonomy in AI workflows, and more confidence in assigning autonomy in the future.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Finding value with AI and Industry 5.0 transformation

For years, Industry 4.0 transformation has centered on the convergence of intelligent technologies like AI, cloud, the internet of things, robotics, and digital twins. Industry 5.0 marks a pivotal shift from integrating emerging technologies to orchestrating them at scale. With Industry 5.0, the purpose of this interconnected web of technologies is more nuanced: to augment human potential, not just automate work, and enhance environmental sustainability.

Industry 5.0 has ushered in a radically new level of collaboration between humans and machines, one that removes data silos and optimizes infrastructure, operations, and resource use to disrupt business models and create new forms of enterprise value. But without discipline in tracking value creation, investments risk being wasted on incremental efficiency gains rather than strategic growth.

“To realize the promise of Industry 5.0, companies must move beyond cost and efficiency to focus on growth, resilience, and human-centric outcomes,” says Sachin Lulla, EY Americas industrials and energy transformation leader. “This requires not just new technologies, but new ways of working—where people and machines collaborate, and where value is measured not just in dollars saved, but in new opportunities created.”

An MIT Technology Review Insights survey of 250 industry leaders from around the world reveals most industrial investments still target efficiency. And while the data shows human-centric and sustainable use cases deliver higher value, they are underfunded. The research shows most organizations are not realizing the full value potential of Industry 5.0 due to a combination of:

• Culture, skills, and collaboration barriers.
• Tactical and misaligned technology investments.
• Use-case prioritization focused on efficiency over growth, sustainability, and well-being.

The barrier to achieving Industry 5.0 transformation is not only about fixing the technology, according to research from EY and Saïd Business School at the University of Oxford, it is also about bolstering human-centric elements like strategy, culture, and leadership. Companies are investing heavily in digital transformation, but not always in ways that unlock the full human potential of Industry 5.0.

“We’re not just doing digital work for work’s sake, what I call ‘chasing the digital fairies,’” says Chris Ware, general manager, iron ore digital, Rio Tinto. “We have to be very clear on what pieces of work we go after and why. Every domain has a unique roadmap about how to deliver the best value.”

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

From integration chaos to digital clarity: Nutrien Ag Solutions’ post-acquisition reset

Thank you for joining us on the “Enterprise AI hub.”

In this episode of the Infosys Knowledge Institute Podcast, Dylan Cosper speaks with Sriram Kalyan, head of applications and data at Nutrien Ag Solutions, Australia, about turning a high-risk post-acquisition IT landscape into a scalable digital foundation. Sriram shares how the merger of two major Australian agricultural companies created duplicated systems, fragile integrations, and operational risk, compounded by the sudden loss of key platform experts and partners. He explains how leadership alignment, disciplined platform consolidation, and a clear focus on business outcomes transformed integration from an invisible liability into a strategic enabler, positioning Nutrien Ag Solutions for future growth, cloud transformation, and enterprise scale.

Click here to continue.

What it takes to make agentic AI work in retail

Thank you for joining us on the “Enterprise AI hub.”

In this episode of the Infosys Knowledge Institute Podcast, Dylan Cosper speaks with Prasad Banala, director of software engineering at a large US-based retail organization, about operationalizing agentic AI across the software development lifecycle. Prasad explains how his team applies AI to validate requirements, generate and analyze test cases, and accelerate issue resolution, while maintaining strict governance, human-in-the-loop review, and measurable quality outcomes.

Click here to continue.

Tuning into the future of collaboration 

When work went remote, the sound of business changed. What began as a scramble to make home offices functional has evolved into a revolution in how people hear and are heard. From education to enterprises, companies across industries have reimagined what clear, reliable communication can mean in a hybrid world. For major audio and communications enterprises like Shure and Zoom, that transformation has been powered by artificial intelligence, new acoustic technologies, and a shared mission: making connection effortless. 

Necessity during the pandemic accelerated years of innovation in months.  

“Audio and video just working is a baseline for collaboration,” says chief ecosystem officer at Zoom, Brendan Ittelson. “That expectation has shifted from connecting people to enhancing productivity and creativity across the entire ecosystem.”  

Audio is a foundation for trust, understanding, and collaboration. Poor sound quality can distort meaning and fatigue listeners, while crisp audio and intelligent processing can make digital interactions feel nearly as natural as in-person exchanges. 

“If you think about the fundamental need here,” adds chief technology officer at Shure, Sam Sabet, “It’s the ability to amplify the audio and the information that’s really needed, and diminish the unwanted sounds and audio so that we can enhance that experience and make it seamless for people to communicate.”  

For both Ittelson and Sabet, AI now sits at the center of this progress. For Shure, machine learning powers real-time noise suppression, adaptive beamforming, and spatial audio that tunes itself to a room’s acoustics. For Zoom, AI underpins every layer of its platform, from dynamic noise reduction to automated meeting summaries and intelligent assistants that anticipate user needs. These tools are transforming communication from reactive to proactive, enabling systems that understand intent, context, and emotion. 

“Even if you’re not working from home and coming into the office, the types of spaces and environments you try to collaborate in today are constantly changing because our needs are constantly changing,” says Sabet. “Having software and algorithms that adapt seamlessly and self-optimize based on the acoustics of the room, based on the different layouts of the spaces where people collaborate in is instrumental.” 

The future, they suggest, is one where technology fades into the background. As audio devices and AI companions learn to self-optimize, users won’t think about microphones or meeting links. Instead, they’ll simply connect. Both companies are now exploring agentic AI systems and advanced wireless solutions that promise to make collaboration seamless across spaces, whether in classrooms, conference rooms, or virtual environments yet to come. 

“It’s about helping people focus on strategy and creativity instead of administrative busy work,” says Ittelson. 

This episode of Business Lab is produced in partnership with Shure. 

Full Transcript 

Megan Tatum: From MIT Technology Review, I’m Megan Tatum and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.  

This episode is produced in partnership with Shure.  

Now as the pandemic ushered in the cultural shift that led to our increasingly virtual world, it also sparked a flurry of innovation in the audio and video industries to keep employees and customers connected and businesses running. Today we’re going to talk about the AI technologies behind those innovations, the impact on audio innovation, and the continuing emerging opportunities for further advances in audio capabilities.  

Two words for you: elevated audio.  

My guests today are Sam Sabet, chief technology officer at Shure, and Brendan Ittelson, chief ecosystem officer at Zoom.  

Welcome Sam, welcome Brendan. 

Sam Sabet: Thank you, Megan. It’s a pleasure to be here and I’m looking forward to this conversation with both you and Brendan. It should be a very exciting conversation. 

Brendan Ittelson: Thank you so much for having me today. I’m looking forward to the conversation and all the topics we have to dive into on this area. 

Megan: Fantastic. Lovely to have you both here. And Sam, just to set some context, I wonder if we could start with the pandemic and the innovation that really was born out of necessity. I mean, when it became clear that we were all going to be virtual for the foreseeable future, I wonder what was the first technological mission for Shure? 

Sam: Yeah, very good question. The pandemic really accelerated a lot of innovation around virtual communications and fundamentally how we perform our everyday jobs remotely. One of our first technological mission when the pandemic happened and everybody ended up going home and performing their functions remotely was to make sure that people could continue to communicate effectively, whether that’s for business meetings, virtual events, or educational purposes. We focused on collaboration and enhancing collaboration tools. And ideally what we were aiming to do, or we focused on, was to basically improve the ease of use and configuration of audio tool sets. 

Because unlike the office environment where it might be a lot more controlled, people are working from non-traditional areas like home offices or other makeshift solutions, we needed to make sure that people could still get pristine audio and that studio level audio even in uncontrolled environments that are not really made for that. We expedited development in our software solutions. We created tool sets that allowed for ease of deployment and remote configuration and management so we could enable people to continue doing the things they needed to do without having to worry about the underlying technology. 

Megan: And Brendan, during that time, it seemed everyone became a Zoom user of some sort. I mean, what was the first mission at Zoom when virtual connection became this necessity for everyone? 

Brendan: Well, our mission fundamentally didn’t change. It’s always been about delivering frictionless communications. What shifted was the urgency and the magnitude of what we were doing. Our focus shifted on how we do this reliably, securely, and to scale to ensure these millions of new users could connect instantly without friction. We really shifted our thinking of being just a business continuity tool to becoming a lifeline for so many individuals and industries. The stories that we heard across education, healthcare, and just general human connection, the number of those moments that matter to people that we were able to help facilitate just became so important. We really focused on how can we be there and make it frictionless so folks can focus on that human connection. And that accelerated our thinking in terms of innovation and reinforced the thought that we need to focus on the simplicity, accessibility, and trust in communication technology so that people could focus on that connection and not the technology that makes it possible. 

Megan: That’s so true. It did really just become an absolute lifeline for people, didn’t it? And before we dive into the technologies beyond these emerging capabilities, I wonder if we could first talk about just the importance of clear audio. I mean, Sam, as much as we all worry over how we look on Zoom, is how we sound perhaps as or even more impactful? 

Sam: Yeah, you’re absolutely correct. I mean, clear audio is absolutely critical for effective communications. Video quality is very important absolutely, but poor audio can really hinder understanding and engagement. As a matter of fact, there’s studies and research from areas such as Yale University that say that poor audio can make understanding somewhat more challenged and even affect retention of information. Especially in an educational type environment where there’s a lot of background noise and very differing types of spaces like auditoriums and lecture halls, it really becomes a high priority that you have great audio quality. And during the pandemic, as you said, and as Brendan rightly said, it became one of our highest priorities to focus on technologies like beamforming mics and ways to focus on the speaker’s voice and minimize that unwanted background noise so that we could ensure that the communication was efficient, was well understood, and that it removed the distraction so people could be able to actually communicate and retain the information that was being shared. 

Megan: It is incredible just how impactful audio can be, can’t it? Brendan, I mean as you said, remote and hybrid collaboration is part of Zoom’s DNA. What observations can you share about how users have grown along with the technological advancements and maybe how their expectations have grown as well? 

Brendan: Definitely. I mean, users now expect seamless and intelligent experiences. Audio and video just working is a baseline for collaboration. That expectation has shifted from connecting people to enhancing productivity and creativity across the entire ecosystem. When we look at it, we’re really looking at these trends in terms of how people want to be better when they’re at home. For example, AI-powered tools like Smart Summaries, translation and noise suppression to help people stay productive and connected no matter where they’re working. But then this also comes into play at the office. We’re starting to see folks that dive into our technology like Intelligent Director and Smart Name Tags that create that meeting equity even when they’re in a conference room. 

So, the remote experience and the room experience all are similar and create that same ability to be seen, heard, and contribute. And we’re now diving further into this that it’s beyond just meetings. Zoom is really transforming into an AI-first work platform that’s focused on human connection. And so that goes beyond the meetings into things like Chat, Zoom Docs, Zoom Events and Webinars, the Zoom Contact Center and more. And all of this being brought together using our AI Companion at its core to help connect all of those different points of connection for individuals. 

Megan: I mean, so Brendan, we know it wasn’t only workplaces that were affected by the pandemic, it was also the education sector that had to undergo a huge change. I wondered if you could talk a little bit about how Zoom has operated in that higher education sphere as well. 

Brendan: Definitely. Education has always been a focus for Zoom and an area that we’ve believed in. Because education and learning is something as a company we value and so we have invested in that sector. And personally being the son of academics, it is always an area that I find fascinating. We continue to invest in terms of how do we make the classroom a stronger space? And especially now that the classroom has changed, where it can be in person, it can be virtual, it can be a mix. And using Zoom and its tools, we’re able to help bridge all those different scenarios to make learning accessible to students no matter their means. 

That’s what truly excites us, is being able to have that technology that allows people to pursue their desires, their interests, and really up-level their pursuits and inspire more. We’re constantly investing in how to allow those messages to get out and to integrate in the flow of communication and collaboration that higher education uses, whether that’s being integrated into the classroom, into learning management systems, to make that a seamless flow so that students and their educators can just collaborate seamlessly. And also that we can support all the infrastructure and administration that helps make that possible. 

Megan: Absolutely. Such an important thing. And Sam, Shure as well, could you talk to us a bit about how you worked in that kind of education space as well from an audio point of view? 

Sam: Absolutely. Actually, this is a topic that’s near and dear to my heart because I’m actually an adjunct professor in my free time. 

Megan: Oh, wow. Very impressive. 

Sam: And the challenges of trying to do this sort of a hybrid lecture, if you will. And Shure has been particularly well suited for this environment and we’ve been focused on it and investing in technologies there for decades. If you think about how a lecture hall is structured, it’s a little different than just having a meeting around the conference table. And Shure has focused on creating products that allow this combination of a presenter scenario along with a meeting space plus the far end where users or students are remote, they can hear intelligibly what’s happening in the lecture hall, but they can also participate. 

Between our products like the Ceiling Mic Arrays and our wireless microphones that are purpose built for presenters and educators like our MXW neXt product line, we’ve created technologies that allow those two previously separate worlds to integrate together. And then add that onto integrating with Zoom and other products that allow for that collaboration has been very instrumental. And again, being a user and providing those lectures, I can see a night and day difference and just how much more effective my lectures are today from where they were five to six years ago. And that’s all just made possible by all the technologies that are purpose built for these scenarios and integrating more with these powerful tools that just make the job so much more seamless. 

Megan: Absolutely fascinating that you got to put the technology to use yourself as well to check that it was all working well. And you mentioned AI there, of course. I mean, Sam, what AI technologies have had the most significant impact on recent audio advancements too? 

Sam: Yeah. Absolutely. If you think about the fundamental need here, it’s the ability to amplify the audio and the information that’s really needed and diminish the unwanted sounds and audio so that we can enhance that experience and make it seamless for people to communicate. With our innovations at Shure, we’ve leveraged the cutting-edge technologies to both enhance communication effectiveness and to align seamlessly with evolving features in unified communications like the ones that Brandon just mentioned in the Zoom platforms.  

We partner with industry leaders like Zoom to ensure that we’re providing the ability to be able to focus on that needed audio and eliminate all the background distractions. AI has transformed that audio technology with things like machine learning algorithms that enable us to do more real-time audio processing and significantly enhancing things like noise reduction and speech isolation. Just to give you a simple example, our IntelliMix Room audio processing software that we’ve released as well as part of a complete room solution uses AI to optimize sound in different environments. 

And really that’s one of the fundamental changes in this period, whether that’s pandemic or post-pandemic, is that the key is really flexibility and being able to adapt to changing work environments. Even if you’re not working from home and coming into the office, the types of spaces and environments you try to collaborate in today are constantly changing because our needs are constantly changing. And so having software and algorithms that adapt seamlessly and are able to self-optimize based on the acoustics of the room, based on the different layouts of the spaces where people collaborate in is instrumental.  

And then last but not least, AI has transformed the way audio and video integrate. For example, we utilize voice recognition systems that integrate with intelligent cameras so that we enable voice tracking technology so that cameras can not only identify who’s speaking, but you have the ability to hear and see people clearly. And that in general just enhances the overall communication experience. 

Megan: Wow. It’s just so much innovation in quite a short space of time really. I mean, Brendan, you mentioned AI a little bit there beforehand, but I wonder what other AI technologies have had the biggest impact as Zoom builds out its own emerging capabilities? 

Brendan: Definitely. And I couldn’t agree more with Sam that, I mean, AI has made such a big shift and it’s really across the spectrum. And when I think about it, there’s almost three tiers when you look at the stack. You start off at the raw audio where AI is doing those things like noise suppression, echo cancellation, voice enhancements. All of that just makes this amazing audio signal that can then go into the next layer, which is the speech AI and natural language processing. Which starts to open up those items such as the real-time transcription, translation, searchable content to make the communication not just what’s heard, but making it more accessible to more individuals and inclusive by providing that content in a format that is best for them. 

And then you take those two layers and put the generative and agentic AI on top of that, that can start surfacing insights, summarize the conversation, and even take actions on someone’s behalf. It really starts to change the way that people work and how they have access and allows them to connect. I think it is a huge shift and I’m very excited by how those three levels start to interact to really enable people to do more and to connect thanks to AI. 

Megan: Yeah. Absolutely. So much rich information that can come out from a single call now because of those sorts of tools. And following on from that, Brendan, I mean, you mentioned before the Zoom AI Companion. I wondered if you could talk a bit about what were your top priorities when building that product to ensure it was truly useful for your customers? 

Brendan: Definitely. When we developed AI Companion, we had two priority focus areas from day one, trust and security, and then accuracy and relevance. On the trust side, it was a non-negotiable that customer data wouldn’t be used to train our models. People need to know that their conversations and content are private and secure. 

Megan: Of course. 

Brendan: And then with accuracy, we needed to ensure AI outputs weren’t generic but grounded in the actual context of a meeting, a chat or a product. But the real story here when I think about AI Companion is the customer value that it delivers. AI Companion helps people save time with meeting recaps, task generation, and proactive prep for the next session. It reduces that friction in hybrid work, whether you’re in a meeting room, a Zoom room, or collaborating across different collaboration tools like Microsoft or Google. And it enables more equitable participation by surfacing the right context for everyone no matter where and how they’re working.  

All this leads to a result where it’s practical, trustworthy, and embedded where work happens. And it’s just not another tool to manage, it’s there in someone’s flow of work to help them along the way. 

Megan: Yeah. That trust piece is just so important, isn’t it, today? And Sam, as much as AI has impacted audio innovation, audio has also had an impact on AI capabilities. I wondered if you could talk a little bit about audio as a data input and the advancements technologies like large language models, LLMs, are enabling. 

Sam: Absolutely. Audio is really a rich data source that’s added a new dimension to AI capabilities. If you think about speech recognition or natural language processing, they’ve had significant advances due to audio data that’s provided for them. And to Brendan’s point about trust and accuracy, I like to think of the products that Shure enables customers with as essentially the eyes and ears in the room for leading AI companions just like the Zoom AI Companion. You really need that pristine audio input to be able to trust the accuracy of what the AI generates. These AI Companions have been very instrumental in the way we do business every day. I mean, between transcription, speaker attributions, the ability to add action items within a meeting and be able to track what’s happening in our interactions, all of that really has to rely on that accurate and pristine input from audio into the AI. I feel that further improves the trust that our end users have to the results of AI and be able to leverage it more.  

If you think about it, if you look at how AI audio inputs enhance that interactive AI system, it enables more natural and intuitive interactions with AI. And it really allows for that seamless integration and the ability for users to use it without having to worry about, is the room set up correctly? Is the audio level proper? And when we talk even about agentic AI, we’re working on future developments where systems can self-heal or detect that there are issues in the environment so that they can autocorrect and adapt in all these different environments and further enable the AI to be able to do a much more effective job, if you will. 

Megan: Sam, you touched on future developments there. I wonder if we could close our conversation today with a bit of a future forward look, if we could. Brendan, can you share innovations that Zoom is working on now and what are you most excited to see come to fruition? 

Brendan: Well, your timing for this question is absolutely perfect because we’ve just wrapped up Zoomtopia 2025. 

Megan: Oh, wow. 

Brendan: And this is where we discussed a lot of the new AI innovations that we have coming to Zoom. Starting off, there’s AI Companion 3.0. And we’ve launched this next generation of agentic AI capabilities in Zoom Workplace. And with 3.0 when it releases, it isn’t just about transcribing, it’s turned into really a platform that helps you with follow-up task, prep for your next conversation, and even proactively suggest how to free up your time. For example, AI Companion can help you schedule meetings intelligently across time zones, suggest which meetings you can skip, and still stay informed and even prepare you with context and insights before you walk into the conversation. It’s about helping people focus on strategy and creativity instead of administrative busy work. And for hybrid work specifically, we introduced Zoomie Group Assistant, which will be a big leap for hybrid collaboration. 

Acting as an assistant for a group chat and meetings, you can simply ask, “@Zoomie, what’s the latest update on the project?” Or “@Zoomie, what are the team’s action items?” And then get instant answers. Or because we’re talking about audio here, you can go into a conference room and say, “Hey, Zoomie,” and get help with things like checking into a room, adjusting lights, temperature, or even sharing your screen. And while all these are built-in features, we’re also expanding the platform to allow custom AI agents through our AI Studio, so organizations can bring their own agents or integrate with third-party ones.  

Zoom has always believed in an open platform and philosophy and that is continuing. Folks using AI Companion 3.0 will be able to use agents across platforms to work with the workflows that they have across all the different SaaS vendors that they might have in their environment, whether that’s Google, Microsoft, ServiceNow, Cisco, and so many other tools. 

Megan: Fantastic. It certainly sounds like a tool I could use in my work, so I look forward to hearing more about that. And Sam, we’ve touched on there are so many exciting things happening in audio too. What are you working on at Shure? And what are you most excited to see come to fruition? 

Sam: At Shure, our engineering teams are really working on a range of exciting projects, but particularly we’re working on developing new collaboration solutions that are integral for IT end users. And these integrate obviously with the leading UC platforms.  

We’re integrating audio and video technologies that are scalable, reliable solutions. And we want to be able to seamlessly connect these to cloud services so that we can leverage both AI technologies and the tool sets available to optimize every type of workspace essentially. Not just meeting rooms, but lecture halls, work from home scenarios, et cetera.  

The other area that we really focus on in terms of our reliability and quality really comes from our DNA in the pro audio world. And that’s really all-around wireless audio technologies. We’re developing our next-generation wireless systems and these are going to offer even greater reliability and range. And they really become ideal for everything from a large-scale event to personal home use and the gamut across that whole spectrum. And I think all of that in partnership with our partners like Zoom will help just facilitate the modern workspace. 

Megan: Absolutely. So much exciting innovation clearly going on behind the scenes. Thank you both so much.  

That was Sam Sabet, chief technology officer at Shure, and Brendan Ittelson, chief ecosystem officer at Zoom, whom I spoke with from Brighton in England.  

That’s it for this episode of Business Lab. I’m your host, Megan Tatum. I’m a contributing editor at Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology and you can find us in print on the web and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.  

This show is available wherever you get your podcasts. And if you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review and this episode was produced by Giro Studios. Thanks for listening. 

Consolidating systems for AI with iPaaS

For decades, enterprises reacted to shifting business pressures with stopgap technology solutions. To rein in rising infrastructure costs, they adopted cloud services that could scale on demand. When customers shifted their lives onto smartphones, companies rolled out mobile apps to keep pace. And when businesses began needing real-time visibility into factories and stockrooms, they layered on IoT systems to supply those insights.

Each new plug-in or platform promised better, more efficient operations. And individually, many delivered. But as more and more solutions stacked up, IT teams had to string together a tangled web to connect them—less an IT ecosystem and more of a make-do collection of ad-hoc workarounds.

That reality has led to bottlenecks and maintenance burdens, and the impact is showing up in performance. Today, fewer than half of CIOs (48%) say their current digital initiatives are meeting or exceeding business outcome targets. Another 2025 survey found that operations leaders point to integration complexity and data quality issues as top culprits for why investments haven’t delivered as expected.

Achim Kraiss, chief product officer of SAP Integration Suite, elaborates on the wide-ranging problems inherent in patchwork IT: “A fragmented landscape makes it difficult to see and control end-to-end business processes,” he explains. “Monitoring, troubleshooting, and governance all suffer. Costs go up because of all the complex mappings and multi-application connectivity you have to maintain.”

These challenges take on new significance as enterprises look to adopt AI. As AI becomes embedded in everyday workflows, systems are suddenly expected to move far larger volumes of data, at higher speeds, and with tighter coordination than yesterday’s architectures were built
to sustain.

As companies now prepare for an AI-powered future, whether that is generative AI, machine learning, or agentic AI, many are realizing that the way data moves through their business matters just as much as the insights it generates. As a result, organizations are moving away from scattered integration tools and toward consolidated, end-to-end platforms that restore order and streamline how systems interact.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

From guardrails to governance: A CEO’s guide for securing agentic systems

The previous article in this series, “Rules fail at the prompt, succeed at the boundary,” focused on the first AI-orchestrated espionage campaign and the failure of prompt-level control. This article is the prescription. The question every CEO is now getting from their board is some version of: What do we do about agent risk?

Across recent AI security guidance from standards bodies, regulators, and major providers, a simple idea keeps repeating: treat agents like powerful, semi-autonomous users, and enforce rules at the boundaries where they touch identity, tools, data, and outputs.

The following is an actionable eight-step plan one can ask teams to implement and report against:  

Eight controls, three pillars: govern agentic systems at the boundary. Source: Protegrity

Constrain capabilities

These steps help define identity and limit capabilities.

1. Identity and scope: Make agents real users with narrow jobs

Today, agents run under vague, over-privileged service identities. The fix is straightforward: treat each agent as a non-human principal with the same discipline applied to employees.

Every agent should run as the requesting user in the correct tenant, with permissions constrained to that user’s role and geography. Prohibit cross-tenant on-behalf-of shortcuts. Anything high-impact should require explicit human approval with a recorded rationale. That is how Google’s Secure AI Framework (SAIF) and NIST AI’s access-control guidance are meant to be applied in practice.

The CEO question: Can we show, today, a list of our agents and exactly what each is allowed to do?

2. Tooling control: Pin, approve, and bound what agents can use

The Anthropic espionage framework worked because the attackers could wire Claude into a flexible suite of tools (e.g., scanners, exploit frameworks, data parsers) through Model Context Protocol, and those tools weren’t pinned or policy-gated.

The defense is to treat toolchains like a supply chain:

  • Pin versions of remote tool servers.
  • Require approvals for adding new tools, scopes, or data sources.
  • Forbid automatic tool-chaining unless a policy explicitly allows it.

This is exactly what OWASP flags under excessive agency and what it recommends protecting against. Under the EU AI Act, designing for such cyber-resilience and misuse resistance is part of the Article 15 obligation to ensure robustness and cybersecurity.

The CEO question: Who signs off when an agent gains a new tool or a broader scope? How does one know?

3. Permissions by design: Bind tools to tasks, not to models

A common anti-pattern is to give the model a long-lived credential and hope prompts keep it polite. SAIF and NIST argue the opposite: credentials and scopes should be bound to tools and tasks, rotated regularly, and auditable. Agents then request narrowly scoped capabilities through those tools.

In practice, that looks like: “finance-ops-agent may read, but not write, certain ledgers without CFO approval.”

The CEO question: Can we revoke a specific capability from an agent without re-architecting the whole system?

Control data and behavior

These steps gate inputs, outputs, and constrain behavior.

4. Inputs, memory, and RAG: Treat external content as hostile until proven otherwise

Most agent incidents start with sneaky data: a poisoned web page, PDF, email, or repository that smuggles adversarial instructions into the system. OWASP’s prompt-injection cheat sheet and OpenAI’s own guidance both insist on strict separation of system instructions from user content and on treating unvetted retrieval sources as untrusted.

Operationally, gate before anything enters retrieval or long-term memory: new sources are reviewed, tagged, and onboarded; persistent memory is disabled when untrusted context is present; provenance is attached to each chunk.

The CEO question: Can we enumerate every external content source our agents learn from, and who approved them?

5. Output handling and rendering: Nothing executes “just because the model said so”

In the Anthropic case, AI-generated exploit code and credential dumps flowed straight into action. Any output that can cause a side effect needs a validator between the agent and the real world. OWASP’s insecure output handling category is explicit on this point, as are browser security best practices around origin boundaries.

The CEO question: Where, in our architecture, are agent outputs assessed before they run or ship to customers?

6. Data privacy at runtime: Protect the data first, then the model

Protect the data such that there is nothing dangerous to reveal by default. NIST and SAIF both lean toward “secure-by-default” designs where sensitive values are tokenized or masked and only re-hydrated for authorized users and use cases.

In agentic systems, that means policy-controlled detokenization at the output boundary and logging every reveal. If an agent is fully compromised, the blast radius is bounded by what the policy lets it see.

This is where the AI stack intersects not just with the EU AI Act but with GDPR and sector-specific regimes. The EU AI Act expects providers and deployers to manage AI-specific risk; runtime tokenization and policy-gated reveal are strong evidence that one is actively controlling those risks in production.

The CEO question: When our agents touch regulated data, is that protection enforced by architecture or by promises?

Prove governance and resilience

For the final steps, it’s important to show controls work and keep working.

7. Continuous evaluation: Don’t ship a one-time test, ship a test harness

Anthropic’s research about sleeper agents should eliminate all fantasies about single test dreams and show how critical continuous evaluation is. This means instrumenting agents with deep observability, regularly red teaming with adversarial test suites, and backing everything with robust logging and evidence, so failures become both regression tests and enforceable policy updates.

The CEO question: Who works to break our agents every week, and how do their findings change policy?

 8. Governance, inventory, and audit: Keep score in one place

AI security frameworks emphasize inventory and evidence: enterprises must know which models, prompts, tools, datasets, and vector stores they have, who owns them, and what decisions were taken about risk.

For agents, that means a living catalog and unified logs:

  • Which agents exist, on which platforms
  • What scopes, tools, and data each is allowed
  • Every approval, detokenization, and high-impact action, with who approved it and when

The CEO question: If asked how an agent made a specific decision, could we reconstruct the chain?

And don’t forget the system-level threat model: assume the threat actor GTG-1002 is already in your enterprise. To complete enterprise preparedness, zoom out and consider the MITRE ATLAS product, which exists precisely because adversaries attack systems, not models. Anthropic provides a case study of a state-based threat actor (GTG-1002) doing exactly that with an agentic framework.

Taken together, these controls do not make agents magically safe. They do something more familiar and more reliable: they put AI, its access, and actions back inside the same security frame used for any powerful user or system.

For boards and CEOs, the question is no longer “Do we have good AI guardrails?” It’s: Can we answer the CEO questions above with evidence, not assurances?

This content was produced by Protegrity. It was not written by MIT Technology Review’s editorial staff.

The crucial first step for designing a successful enterprise AI system

Many organizations rushed into generative AI, only to see pilots fail to deliver value. Now, companies want measurable outcomes—but how do you design for success?

At Mistral AI, we partner with global industry leaders to co-design tailored AI solutions that solve their most difficult problems. Whether it’s increasing CX productivity with Cisco, building a more intelligent car with Stellantis, or accelerating product innovation with ASML, we start with open frontier models and customize AI systems to deliver impact for each company’s unique challenges and goals.

Our methodology starts by identifying an iconic use case, the foundation for AI transformation that sets the blueprint for future AI solutions. Choosing the right use case can mean the difference between true transformation and endless tinkering and testing.

Identifying an iconic use case

Mistral AI has four criteria that we look for in a use case: strategic, urgent, impactful, and feasible.

First, the use case must be strategically valuable, addressing a core business process or a transformative new capability. It needs to be more than an optimization; it needs to be a gamechanger. The use case needs to be strategic enough to excite an organization’s C-suite and board of directors.

For example, use cases like an internal-facing HR chatbot are nice to have, but they are easy to solve and are not enabling any new innovation or opportunities. On the other end of the spectrum, imagine an externally facing banking assistant that can not only answer questions, but also help take actions like blocking a card, placing trades, and suggesting upsell/cross-sell opportunities. This is how a customer-support chatbot is turned into a strategic revenue-generating asset.

Second, the best use case to move forward with should be highly urgent and solve a business-critical problem that people care about right now. This project will take time out of people’s days—it needs to be important enough to justify that time investment. And it needs to help business users solve immediate pain points.

Third, the use case should be pragmatic and impactful. From day one, our shared goal with our customers is to deploy into a real-world production environment to enable testing the solution with real users and gather feedback. Many AI prototypes end up in the graveyard of fancy demos that are not good enough to put in front of customers, and without any scaffolding to evaluate and improve. We work with customers to ensure prototypes are stable enough to release, and that they have the necessary support and governance frameworks.

Finally, the best use case is feasible. There may be several urgent projects, but choosing one that can deliver a quick return on investment helps to maintain the momentum needed to continue and scale.

This means looking for a project that can be in production within three months—and a prototype can be live within a few weeks. It’s important to get a prototype in front of end users as fast as possible to get feedback to make sure the project is on track, and pivot as needed.

Where use cases fall short

Enterprises are complex, and the path forward is not usually obvious. To weed through all the possibilities and uncover the right first use case, Mistral AI will run workshops with our customers, hand-in-hand with subject-matter experts and end users.

Representatives from different functions will demo their processes and discuss business cases that could be candidates for a first use case—and together we agree on a winner. Here are some examples of types of projects that don’t qualify.

Moonshots: Ambitious bets that excite leadership but lack a path to quick ROI. While these projects can be strategic and urgent, they rarely meet the feasibility and impact requirements.

Future investments: Long-term plays that can wait. While these projects can be strategic and feasible, they rarely meet the urgency and impact requirements.

Tactical fixes: Firefighting projects that solve immediate pain but don’t move the needle. While these cases can be urgent and feasible, they rarely meet the strategy and impact requirements.

Quick wins: Useful for building momentum, but not transformative. While they can be impactful and feasible, they rarely meet the strategy and urgency requirements.

Blue sky ideas: These projects are gamechangers, but they need maturity to be viable. While they can be strategic and impactful, they rarely meet the urgency and feasibility requirements.

Hero projects: These are high-pressure initiatives that lack executive sponsorship or realistic timelines. While they can be urgent and impactful, they rarely meet the strategy and feasibility requirements.

Moving from use case to deployment

Once a clearly defined and strategic use case ready for development is identified, it’s time to move into the validation phase. This means doing an initial data exploration and data mapping, identifying a pilot infrastructure, and choosing a target deployment environment.

This step also involves agreeing on a draft pilot scope, identifying who will participate in the proof of concept, and setting up a governance process.

Once this is complete, it’s time to move into the building phase. Companies that partner with Mistral work with our in-house applied AI scientists who build our frontier models. We work together to design, build, and deploy the first solution.

During this phase, we focus on co-creation, so we can transfer knowledge and skills to the organizations we’re partnering with. That way, they can be self-sufficient far into the future. The output of this phase is a deployed AI solution with empowered teams capable of independent operation and innovation.

The first step is everything

After the first win, it’s imperative to use the momentum and learnings from the iconic use case to identify more high-value AI solutions to roll out. Success is when we have a scalable AI transformation blueprint with multiple high-value solutions across the organization.

But none of this could happen without successfully identifying that first iconic use case. This first step is not just about selecting a project—it’s about setting the foundation for your entire AI transformation.

It’s the difference between scattered experiments and a strategic, scalable journey toward impact. At Mistral AI, we’ve seen how this approach unlocks measurable value, aligns stakeholders, and builds momentum for what comes next.

The path to AI success starts with a single, well-chosen use case: one that is bold enough to inspire, urgent enough to demand action, and pragmatic enough to deliver.

This content was produced by Mistral AI. It was not written by MIT Technology Review’s editorial staff.

Rules fail at the prompt, succeed at the boundary

From the Gemini Calendar prompt-injection attack of 2026 to the September 2025 state-sponsored hack using Anthropic’s Claude code as an automated intrusion engine, the coercion of human-in-the-loop agentic actions and fully autonomous agentic workflows are the new attack vector for hackers. In the Anthropic case, roughly 30 organizations across tech, finance, manufacturing, and government were affected. Anthropic’s threat team assessed that the attackers used AI to carry out 80% to 90% of the operation: reconnaissance, exploit development, credential harvesting, lateral movement, and data exfiltration, with humans stepping in only at a handful of key decision points.

This was not a lab demo; it was a live espionage campaign. The attackers hijacked an agentic setup (Claude code plus tools exposed via Model Context Protocol (MCP)) and jailbroke it by decomposing the attack into small, seemingly benign tasks and telling the model it was doing legitimate penetration testing. The same loop that powers developer copilots and internal agents was repurposed as an autonomous cyber-operator. Claude was not hacked. It was persuaded and used tools for the attack.

Prompt injection is persuasion, not a bug

Security communities have been warning about this for several years. Multiple OWASP Top 10 reports put prompt injection, or more recently Agent Goal Hijack, at the top of the risk list and pair it with identity and privilege abuse and human-agent trust exploitation: too much power in the agent, no separation between instructions and data, and no mediation of what comes out.

Guidance from the NCSC and CISA describes generative AI as a persistent social-engineering and manipulation vector that must be managed across design, development, deployment, and operations, not patched away with better phrasing. The EU AI Act turns that lifecycle view into law for high-risk AI systems, requiring a continuous risk management system, robust data governance, logging, and cybersecurity controls.

In practice, prompt injection is best understood as a persuasion channel. Attackers don’t break the model—they convince it. In the Anthropic example, the operators framed each step as part of a defensive security exercise, kept the model blind to the overall campaign, and nudged it, loop by loop, into doing offensive work at machine speed.

That’s not something a keyword filter or a polite “please follow these safety instructions” paragraph can reliably stop. Research on deceptive behavior in models makes this worse. Anthropic’s research on sleeper agents shows that once a model has learned a backdoor, then strategic pattern recognition, standard fine-tuning, and adversarial training can actually help the model hide the deception rather than remove it. If one tries to defend a system like that purely with linguistic rules, they are playing on its home field.

Why this is a governance problem, not a vibe coding problem

Regulators aren’t asking for perfect prompts; they’re asking that enterprises demonstrate control.

NIST’s AI RMF emphasizes asset inventory, role definition, access control, change management, and continuous monitoring across the AI lifecycle. The UK AI Cyber Security Code of Practice similarly pushes for secure-by-design principles by treating AI like any other critical system, with explicit duties for boards and system operators from conception through decommissioning.

In other words: the rules actually needed are not “never say X” or “always respond like Y,” they are:

  • Who is this agent acting as?
  • What tools and data can it touch?
  • Which actions require human approval?
  • How are high-impact outputs moderated, logged, and audited?

Frameworks like Google’s Secure AI Framework (SAIF) make this concrete. SAIF’s agent permissions control is blunt: agents should operate with least privilege, dynamically scoped permissions, and explicit user control for sensitive actions. OWASP’s Top 10 emerging guidance on agentic applications mirrors that stance: constrain capabilities at the boundary, not in the prose.

From soft words to hard boundaries

The Anthropic espionage case makes the boundary failure concrete:

  • Identity and scope: Claude was coaxed into acting as a defensive security consultant for the attacker’s fictional firm, with no hard binding to a real enterprise identity, tenant, or scoped permissions. Once that fiction was accepted, everything else followed.
  • Tool and data access: MCP gave the agent flexible access to scanners, exploit frameworks, and target systems. There was no independent policy layer saying, “This tenant may never run password crackers against external IP ranges,” or “This environment may only scan assets labeled ‘internal.’”
  • Output execution: Generated exploit code, parsed credentials, and attack plans were treated as actionable artifacts with little mediation. Once a human decided to trust the summary, the barrier between model output and real-world side effect effectively disappeared.

We’ve seen the other side of this coin in civilian contexts. When Air Canada’s website chatbot misrepresented its bereavement policy and the airline tried to argue that the bot was a separate legal entity, the tribunal rejected the claim outright: the company remained liable for what the bot said. In espionage, the stakes are higher but the logic is the same: if an AI agent misuses tools or data, regulators and courts will look through the agent and to the enterprise.

Rules that work, rules that don’t

So yes, rule-based systems fail if by rules one means ad-hoc allow/deny lists, regex fences, and baroque prompt hierarchies trying to police semantics. Those crumble under indirect prompt injection, retrieval-time poisoning, and model deception. But rule-based governance is non-optional when we move from language to action.

The security community is converging on a synthesis:

  • Put rules at the capability boundary: Use policy engines, identity systems, and tool permissions to determine what the agent can actually do, with which data, and under which approvals.
  • Pair rules with continuous evaluation: Use observability tooling, red-teaming packages, and robust logging and evidence.
  • Treat agents as first-class subjects in your threat model: For example, MITRE ATLAS now catalogs techniques and case studies specifically targeting AI systems.

The lesson from the first AI-orchestrated espionage campaign is not that AI is uncontrollable. It’s that control belongs in the same place it always has in security: at the architecture boundary, enforced by systems, not by vibes.

This content was produced by Protegrity. It was not written by MIT Technology Review’s editorial staff.

The power of sound in a virtual world

In an era where business, education, and even casual conversations occur via screens, sound has become a differentiating factor. We obsess over lighting, camera angles, and virtual backgrounds, but how we sound can be just as critical to credibility, trust, and connection.

That’s the insight driving Erik Vaveris, vice president of product management and chief marketing officer at Shure, and Brian Scholl, director of the Perception & Cognition Laboratory at Yale University. Both see audio as more than a technical layer: It’s a human factor shaping how people perceive intelligence, trustworthiness, and authority in virtual settings.

“If you’re willing to take a little bit of time with your audio set up, you can really get across the full power of your message and the full power of who you are to your peers, to your employees, your boss, your suppliers, and of course, your customers,” says Vaveris.

Scholl’s research shows that poor audio quality can make a speaker seem less persuasive, less hireable, and even less credible.

“We know that [poor] sound doesn’t reflect the people themselves, but we really just can’t stop ourselves from having those impressions,” says Scholl. “We all understand intuitively that if we’re having difficulty being understood while we’re talking, then that’s bad. But we sort of think that as long as you can make out the words I’m saying, then that’s probably all fine. And this research showed in a somewhat surprising way, to a surprising degree, that this is not so.”

For organizations navigating hybrid work, training, and marketing, the stakes have become high.

Vaveris points out that the pandemic was a watershed moment for audio technology. As classrooms, boardrooms, and conferences shifted online almost overnight, demand accelerated for advanced noise suppression, echo cancellation, and AI-driven processing tools that make meetings more seamless. Today, machine learning algorithms can strip away keyboard clicks or reverberation and isolate a speaker’s voice in noisy environments. That clarity underpins the accuracy of AI meeting assistants that can step in to transcribe, summarize, and analyze discussions.

The implications across industries are rippling. Clearer audio levels the playing field for remote participants, enabling inclusive collaboration. It empowers executives and creators alike to produce broadcast-quality content from the comfort of their home office. And it offers companies new ways to build credibility with customers and employees without the costly overhead of traditional production.

Looking forward, the convergence of audio innovation and AI promises an even more dynamic landscape: from real-time captioning in your native language to audio filtering, to smarter meeting tools that capture not only what is said but how it’s said, and to technologies that disappear into the background while amplifying the human voice at the center.

“There’s a future out there where this technology can really be something that helps bring people together,” says Vaveris. “Now that we have so many years of history with the internet, we know there’s usually two sides to the coin of technology, but there’s definitely going to be a positive side to this, and I’m really looking forward to it.

In a world increasingly mediated by screens, sound may prove to be the most powerful connector of all.

This episode of Business Lab is produced in partnership with Shure.

Full Transcript

Megan Tatum: From MIT Technology Review, I’m Megan Tatum, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

This episode is produced in partnership with Shure.

Our topic today is the power of sound. As our personal and professional lives become increasingly virtual, audio is emerging as an essential tool for everything from remote work to virtual conferences to virtual happy hour. While appearance is often top of mind in video conferencing and streaming, audio can be as or even more important, not only to effective communication, but potentially to brand equity for both the speaker and the company.

Two words for you: crystal clear.

My guests today are Erik Vaveris, VP of Product Management and Chief Marketing Officer at Shure, and Brian Scholl, Director of the Perception & Cognition Laboratory at Yale University.

Welcome, Erik and Brian.

Erik Vaveris: Thank you, Megan. And hello, Brian. Thrilled to be here today.

Brian Scholl: Good afternoon, everyone.

Megan: Fantastic. Thank you both so much for being here. Erik, let’s open with a bit of background. I imagine the pandemic changed the audio industry in some significant ways, given the pivot to our modern remote hybrid lifestyles. Could you talk a bit about that journey and some of the interesting audio advances that arose from that transformative shift?

Erik: Absolutely, Megan. That’s an interesting thing to think about now being here in 2025. And if you put yourself back in those moments in 2020, when things were fully shut down and everything was fully remote, the importance of audio quality became immediately obvious. As people adopted Zoom or Teams or platforms like that overnight, there were a lot of technical challenges that people experienced, but the importance of how they were presenting themselves to people via their audio quality was a bit less obvious. As Brian’s noted in a lot of the press that he’s received for his wonderful study, we know how we look on video. We can see ourselves back on the screen, but we don’t know how we sound to the people with whom we’re speaking.

If a meeting participant on the other side can manage to parse the words that you’re saying, they’re not likely to speak up and say, “Hey, I’m having a little bit of trouble hearing you.” They’ll just let the meeting continue. And if you don’t have a really strong level of audio quality, you’re asking the people that you’re talking to devote way too much brainpower to just determining the words that you’re saying. And you’re going to be fatiguing to listen to. And your message won’t come across. In contrast, if you’re willing to take a little bit of time with your audio set up, you can really get across the full power of your message and the full power of who you are to your peers, to your employees, your boss, your suppliers, and of course your customers. Back in 2020, this very quickly became a marketing story that we had to tell immediately.

And I have to say, it’s so gratifying to see Brian’s research in the news because, to me, it was like, “Yes, this is what we’ve been experiencing. And this is what we’ve been trying to educate people about.” Having the real science to back it up means a lot. But from that, development on improvements to key audio processing algorithms accelerated across the whole AV industry.

I think, Megan and Brian, you probably remember hearing loud keyboard clicking when you were on calls and meetings, or people eating potato chips and things like that back on those. But you don’t hear that much today because most platforms have invested in AI-trained algorithms to remove undesirable noises. And I know we’re going to talk more about that later on.

But the other thing that happened, thankfully, was that as we got into the late spring and summer of 2020, was that educational institutions, especially universities, and also businesses realized that things were going to need to change quickly. Nothing was going to be the same. And universities realized that all classrooms were going to need hybrid capabilities for both remote students and students in the classroom. And that helped the market for professional AV equipment start to recover because we had been pretty much completely shut down in the earlier months. But that focus on hybrid meeting spaces of all types accelerated more investment and more R&D into making equipment and further developing those key audio processing algorithms for more and different types of spaces and use cases. And since then, we’ve really seen a proliferation of different types of unobtrusive audio capture devices based on arrays of microphones and the supporting signal processing behind them. And right now, machine-learning-trained signal processing is really the norm. And that all accelerated, unfortunately, because of the pandemic.

Megan: Yeah. Such an interesting period of change, as you say. And Brian, what did you observe and experience in academia during that time? How did that time period affect the work at your lab?

Brian: I’ll admit, Megan, I had never given a single thought to audio quality or anything like that, certainly until the pandemic hit. I was thrown into this, just like the rest of the world was. I don’t believe I’d ever had a single video conference with a student or with a class or anything like that before the pandemic hit. But in some ways, our experience in universities was quite extreme. I went on a Tuesday from teaching an in-person class with 300 students to being on Zoom with everyone suddenly on a Thursday. Business meetings come in all shapes and sizes. But this was quite extreme. This was a case where suddenly I’m talking to hundreds and hundreds of people over Zoom. And every single one of them knows exactly what I sound like, except for me, because I’m just speaking my normal voice and I have no idea how it’s being translated through all the different levels of technology.

I will say, part of the general rhetoric we have about the pandemic focuses on all the negatives and the lack of personal connection and nuance and the fact that we can’t see how everyone’s paying attention to each other. Our experience was a bit more mixed. I’ll just tell you one anecdote. Shortly after the pandemic started, I started teaching a seminar with about 20 students. And of course, this was still online. What I did is I just invited, for whatever topic we were discussing on any given day, I sent a note to whoever was the clear world leader in the study of whatever that topic was. I said, “Hey, don’t prepare a talk. You don’t have to answer any questions. But just come join us on Zoom and just participate in the conversation. The students will have read some of your work.”

Every single one of them said, “Let me check my schedule. Oh, I’m stuck at home for a year. Sure. I’d be happy to do that.” And that was quite a positive. The students got to meet a who’s who of cognitive science from this experience. And it’s true that there were all these technological difficulties, but that would never, ever have happened if we were teaching the class in real life. That would’ve just been way too much travel and airfare and hotel and scheduling and all of that. So, it was a mixed bag for us.

Megan: That’s fascinating.

Erik: Yeah. Megan, can I add?

Megan: Of course.

Erik: That is really interesting. And that’s such a cool idea. And it’s so wonderful that that worked out. I would say that working for a global company, we like to think that, “Oh, we’re all together. And we’re having these meetings. And we’re in the same room,” but the reality was we weren’t in the same room. And there hadn’t been enough attention paid to the people who were conferencing in speaking not their native language in a different time zone, maybe pretty deep into the evening, in some cases. And the remote work that everybody got thrown into immediately at the start of the pandemic did force everybody to start to think more about those types of interactions and put everybody on a level playing field.

And that was insightful. And that helped some people have stronger voices in the work that we were doing than they maybe did before. And it’s also led businesses really across the board, there’s a lot written about this, to be much more focused on making sure that participants from those who may be remote at home, may be in the office, may be in different offices, may be in different time zones, are all able to participate and collaborate on really a level playing field. And that is a positive. That’s a good thing.

Megan: Yeah. There are absolutely some positive side effects there, aren’t there? And it inspired you, Brian, to look at this more closely. And you’ve done a study that shows poor audio quality can actually affect the perception of listeners. So, I wonder what prompted the study, in particular. And what kinds of data did you gather? What methodology did you use?

Brian: Yeah. The motivation for this study was actually a real-world experience, just like we’ve been talking about. In addition to all of our classes moving online with no notice whatsoever, the same thing was true of our departmental faculty meetings. Very early on in the pandemic, we had one of these meetings. And we were talking about some contentious issue about hiring or whatever. And two of my colleagues, who I’d known very well and for many, many years, spoke up to offer their opinions. And one of these colleagues is someone who I’m very close with. We almost always see eye to eye. He was actually a former graduate student of mine once upon a time. And we almost always see eye to eye on things. He happened to be participating in that meeting from an old not-so-hot laptop. His audio quality had that sort of familiar tinny quality that we’re all familiar with. I could totally understand everything he was saying, but I found myself just being a little skeptical.

I didn’t find his points so compelling as usual. Meanwhile, I had another colleague, someone who I deeply respect, I’ve collaborated with, but we don’t always see eye to eye on these things. And he was participating in this first virtual faculty meeting from his home recording studio. Erik, I don’t know if his equipment would be up to your level or not, but he sounded better than real life. He sounded like he was all around us. And I found myself just sort of naturally agreeing with his points, which sort of was notable and a little surprising in that context. And so, we turned this into a study.

We played people a number of short audio clips, maybe like 30 seconds or so. And we had these being played in the context of very familiar situations and decisions. One of them might be like a hiring decision. You would have to listen to this person telling you why they think they might be a good fit for your job. And then afterwards, you had to make a simple judgment. It might be of a trait. How intelligent did that person seem? Or it might be a real-world decision like, “Hey, based on this, how likely would you be to pursue trying to hire them?” And critically, we had people listen to exactly the same sort of scripts, but with a little bit of work behind the scenes to affect the audio quality. In one case, the audio sounded crisp and clear. Recorded with a decent microphone. And here’s what it sounded like.

Audio Clip: After eight years in sales, I’m currently seeking a new challenge which will utilize my meticulous attention to detail and friendly professional manner. I’m an excellent fit for your company and will be an asset to your team as a senior sales manager.

Brian: Okay. Whatever you think of the content of that message, at least it’s nice and clear. Other subjects listened to exactly the same recording. But again, it had that sort of tinny quality that we’re all familiar with when people’s voices are filtered through a microphone or a recording setup that’s not so hot. That sounded like this.

Audio Clip: After eight years in sales, I’m currently seeking a new challenge which will utilize my meticulous attention to detail and friendly professional manner. I’m an excellent fit for your company and will be an asset to your team as a senior sales manager.

Brian: All right. Now, the thing that I hope you can get from that recording there is that although it clearly has this what we would call, as a technical term, a disfluent sound, it’s just a little harder to process, you are ultimately successful, right? Megan, Erik, you were able to understand the words in that second recording.

Megan: Yeah.

Erik: Mm-hmm.

Brian: And we made sure this was true for all of our subjects. We had them do word-for-word transcription after they made these judgments. And I’ll also just point out that this kind of manipulation clearly can’t be about the person themselves, right? You couldn’t make your voices sound like that in real world conversation if you tried. Voices just don’t do those sorts of things. Nevertheless, in a way that sort of didn’t make sense, that was kind of irrational because this couldn’t reflect the person, this affected all sorts of judgments about people.

So, people were judged to be about 8% less hirable. They were judged to be about 8% less intelligent. We also did this in other contexts. We did this in the context of dateability as if you were listening to a little audio clip from someone who was maybe interested in dating you, and then you had to make a judgment of how likely would you be to date this person. Same exact result. People were a little less datable when their audio was a little more tinny, even though they were completely understandable.

The experiment, the result that I thought was in some ways most striking is one of the clips was about someone who had been in a car accident. It was a little narrative about what had happened in the car accident. And they were talking as if to the insurance agent. They were saying, “Hey, it wasn’t my fault. This is what happened.” And afterwards, we simply had people make a natural intuitive judgment of how credible do you think the person’s story was. And when it was recorded with high-end audio, these messages were judged to be about 8% more credible in this context. So those are our experiments. What it shows really is something about the power of perception. We know that that sort of sound doesn’t reflect the people themselves, but we really just can’t stop ourselves from having those impressions made. And I don’t know about you guys, but, Erik, I think you’re right, that we all understand intuitively that if we’re having difficulty being understood while we’re talking, then that’s bad. But we sort of think that as long as you can make out the words I’m saying, then that’s probably all fine. And this research showed in a somewhat surprising way to a surprising degree that this is not so.

Megan: It’s absolutely fascinating.

Erik: Wow.

Megan: From an industry perspective, Erik, what are your thoughts on those study results? Did it surprise you as well?

Erik: No, like I said, I found it very, very gratifying because we invest a lot in trying to make sure that people understand the importance of quality audio, but we kind of come about that intuitively. Our entire company is audio people. So of course, we think that. And it’s our mission to help other people achieve those higher levels of audio in everything that they do, whether you’re a minister at a church or you’re teaching a class or you’re performing on stage. When I first saw in the news about Brian’s study, I think it was the NPR article that just came up in one of my feeds. I read it and it made me feel like my life’s work has been validated to some extent. I wouldn’t say we were surprised by it, but iIt made a lot of sense to us. Let’s put it that way.

Megan: And how-

Brian: This is what we’re hearing. Oh, sorry. Megan, I was going to say this is what we’re hearing from a lot of the audio professionals as they’re saying, “Hey, you scientists, you finally caught up to us.” But of course-

Erik: I wouldn’t say it that way, Brian.

Brian: Erik, you’re in an unusual circumstance because you guys think about audio every day. When we’re on Zoom, look, I can see the little rectangle as well as you can. I can see exactly how I look like. I can check the lighting. I check my hair. We all do that every day. But I would say most people really, they use whatever microphone came with their setup, and never give a second thought to what they sound like because they don’t know what they sound like.

Megan: Yeah. Absolutely.

Erik: Absolutely.

Megan: Avoid listening to yourself back as well. I think that’s common. We don’t scrutinize audio as much as we should. I wonder, Erik, since the study came out, how are you seeing that research play out across industry? Can you talk a bit about the importance of strong, clear audio in today’s virtual world and the challenges that companies and employees are facing as well?

Erik: Yeah. Sure, Megan. That’s a great question. And studies kind of back this up, businesses understand that collaboration is the key to many things that we do. They know that that’s critical. And they are investing in making the experiences for the people at work better because of that knowledge, that intuitive understanding. But there are challenges. It can be expensive. You need solutions that people who are going to walk into a room or join a meeting on their personal device, that they’re motivated to use and that they can use because they’re simple. You also have to overcome the barriers to investment. We in the AV industry have had to look a lot at how can we bring down the overall cost of ownership of setting up AV technology because, as we’ve seen, the prices of everything that goes into making a product are not coming down.

Simplifying deployment and management is critical. Beyond just audio technology, IoT technology and cloud technology for IT teams to be able to easily deploy and manage classrooms across an entire university campus or conference rooms across a global enterprise are really, really critical. And those are quickly evolving. And integrations with more standard common IT tools are coming out. And that’s one area. Another thing is just for the end user, having the same user interface in each conference room that is familiar to everyone from their personal devices is also important. For many, many years, a lot of people had the experience where, “Hey, it’s time we’re going to actually do a conference meeting.” And you might have a few rooms in your company or in your office area that could do that. And you walk into the meeting room. And how long does it take you to actually get connected to the people you’re going to talk with?

There was always a joke that you’d have to spend the first 15 minutes of a meeting working all of that out. And that’s because the technology was fragmented and you had to do a lot of custom work to make that happen. But these days, I would say platforms like Zoom and Teams and Google and others are doing a really great job with this. If you have the latest and greatest in your meeting rooms and you know how to join from your own personal device, it’s basically the same experience. And that is streamlining the process for everyone. Bringing down the costs of owning it so that companies can get to those benefits to collaboration is kind of the key.

Megan: I was going to ask if we could dive a little deeper into that kind of audio quality, the technological advancements that AI has made possible, which you did touch on slightly there, Erik. What are the most significant advancements, in your view? And how are those impacting the ways we use audio and the things we can do with it?

Erik: Okay. Let me try to break that down into-

Megan: That’s a big question. Sorry.

Erik: … a couple different sections. Yeah. No, and one that’s just so exciting. Machine-learning-based digital signal processing, or DSP, is here and is the norm now. If you think about the beginning of telephones and teleconferencing, just going way back, one of the initial problems you had whenever you tried to get something out of a dedicated handset onto a table was echo. And I’m sure we’ve all heard that at some point in our life. You need to have a way to cancel echo. But by the way, you also want people to be able to speak at the same time on both ends of a call. You get to some of those very rudimentary things. Machine learning is really supercharging those algorithms to provide better performance with fewer trade-offs, fewer artifacts in the actual audio signal.

Noise reduction has come a long way. I mentioned earlier on, keyboard sounds and the sounds of people eating, and how you just don’t hear that anymore, at least I don’t when I’m on conference calls. But only a few years ago, that could be a major problem. The machine-learning-trained digital signal processing is in the market now and it’s doing a better job than ever in removing things that you don’t want from your sound. We have a new de-verberation algorithm, so if you have a reverberant room with echoes and reflections that’s getting into the audio signal, that can degrade the experience there. We can remove that now. Another thing, the flip side of that is that there’s also a focus on isolating the sound that you do want and the signal that you do want.

Microsoft has rolled out a voice print feature in Teams that allows you, if you’re willing, to provide them with a sample of your voice. And then whenever you’re talking from your device, it will take out anything else that the microphone may be picking up so that even if you’re in a really noisy environment outdoors or, say, in an airport, the people that you’re speaking with are going to hear you and only you. And it’s pretty amazing as well. So those are some of the things that are happening today and are available today.

Another thing that’s emerged from all of this is we’ve been talking about how important audio quality is to the people participating in a discussion, the people speaking, the people listening, how everyone is perceived, but a new consumer, if you will, of audio in a discussion or a meeting has emerged, and that is in the form of the AI agent that can summarize meetings and create action plans, do those sorts of things. But for it to work, a clean transcription of what was said is already table stakes. It can’t garbled. It can’t miss key things. It needs to get it word for word, sentence for sentence throughout the entire meeting. And the ability to attribute who said what to the meeting participants, even if they’re all in the same room, is quickly upon us. And the ability to detect and integrate sentiment and emotion of the participants is going to become very important as well for us to really get the full value out of those kinds of AI agents.

So audio quality is as important as ever for humans, as Brian notes, in some ways more important because this is now the normal way that we talk and meet, but it’s also critical for AI agents to work properly. And it’s different, right? It’s a different set of considerations. And there’s a lot of emerging thought and work that’s going into that as well. And boy, Megan, there’s so much more we could say about this beyond meetings and video conferences. AI tools to simplify the production process. And of course, there’s generative AI of music content. I know that’s beyond the scope of what we’re talking about. But it’s really pretty incredible when you look around at the work that’s happening and the capabilities that are emerging.

Megan: Yeah. Absolutely. Sounds like there are so many elements to consider and work going on. It’s all fascinating. Brian, what kinds of emerging capabilities and use cases around AI and audio quality are you seeing in your lab as well?

Brian: Yeah. Well, I’m sorry that Brian himself was not able to be here today, but I’m an AI agent.

Megan: You got me for a second there.

Brian: Just kidding. The fascinating thing that we’re seeing from the lab, from the study of people’s impressions is that all of this technology that Erik has described, when it works best, it’s completely invisible. Erik, I loved your point about not hearing potato chips being eaten or rain in the background or something like that. You’re totally right. I used to notice that all the time. I don’t think I’ve noticed that recently, but I also didn’t notice that I haven’t noticed that recently, right? It just kind of disappears. The interesting thing about these perceptual impressions, we’re constantly drawing intuitive conclusions about people based on how they sound. And that might be a good thing or a bad thing when we’re judging things like trustworthiness, for example, on the basis of a short audio clip.

But clearly, some of these things are valid, right? We can judge the size of someone or even of an animal based on how they sound, right? A chihuahua can’t make the sound of a lion. A lion can’t make the sound of a chihuahua. And that’s always been true because we’re producing audio signals that go right into each other’s ears. And now, of course, everything that Erik is talking about, that’s not true. It goes through all of these different layers of technology increasingly fueled by AI. But when that technology works the best way, it’s as if it isn’t there at all and we’re just hearing each other directly.

Erik: That’s the goal, right? That it’s seamless open communication and we don’t have to think about the technology anymore.

Brian: It’s a tough business to be in, I think, though, Erik, because people have to know what’s going on behind the surface in order to value it. Otherwise, we just expect it to work.

Erik: Well, that’s why we try to put the logo of our products on the side of them so they show up in the videos. But yeah, it’s a good point.

Brian: Very good. Very good.

Erik: Yeah.

Megan: And we’ve talked about virtual meetings and conversations quite a bit, but there’s also streamed and recorded content, which are increasingly important at work as well. I wondered, Erik, if you could talk a bit about how businesses are leveraging audio in new ways for things like marketing campaigns and internal upskilling and training and areas like that?

Erik: Yeah. Well, one of the things I think we’ve all seen in marketing is that not everything is a high production value commercial anymore. And there’s still a place for that, for sure. But people tend to trust influencers that they follow. People search on TikTok, on YouTube for topics. Those can be the place that they start. And as technology’s gotten more accessible, not just audio, but of course, the video technology too, content creators can produce satisfying content on their own or with just a couple of people with them. And Brian’s study shows that it doesn’t really matter what the origins of the content are for it to be compelling.

For the person delivering the message to be compelling, the audio quality does have to hit a certain level. But because the tools are simpler to use and you need less things to connect and pull together a decent production system, creator-driven content is becoming even more and more integral to a marketing campaign. And so not just what they maybe post on their Instagram page or post on LinkedIn, for example, but us as a brand being able to take that content and use that actually in paid media and things like that is all entirely possible because of the overall quality of the content. So that’s something that’s been a trend that’s been in process really, I would say, maybe since the advent of podcasts. But it’s been an evolution. And it’s come a long, long way.

Another thing, and this is really interesting, and this hits home personally, but I remember when I first entered the workforce, and I hope I’m not showing my age too badly here, but I remember the word processing department. And you would write down on a piece of paper, like a memo, and you would give it to the word processing department and somebody would type it up for you. That was a thing. And these days, we’re seeing actually more and more video production with audio, of course, transfer to the actual producers of the content.

In my company, at Shure, we make videos for different purposes to talk about different initiatives or product launches or things that we’re doing just for internal use. And right now, everybody, including our CEO, she makes these videos just at her own desk. She has a little software tool and she can show a PowerPoint and herself and speak to things. And with very, very limited amount of editing, you can put that out there. And I’ve seen friends and colleagues at other companies in very high-level roles just kind of doing their own production. Being able to buy a very high quality microphone with really advanced signal processing built right in, but just plug it in via USB and have it be handled as simply as any consumer device, has made it possible to do really very useful production where you are going to actually sound good and get your message across, but without having to make such a big production out of it, which is kind of cool.

Megan: Yeah. Really democratizes access to sort of creating high quality content, doesn’t it? And of course, no technology discussion is complete without a mention of return on investment, particularly nowadays. Erik, what are some ways companies can get returns on their audio tech investments as well? Where are the most common places you see cost savings?

Erik: Yeah. Well, we collaborated on a study with IDC Research. And they came up with some really interesting findings on this. And one of them was, no surprise, two-thirds or more of companies have taken action on improving their communication and collaboration technology, and even more have additional or initial investments still planned. But the ROI of those initiatives isn’t really tied to the initiative itself. It’s not like when you come out with a new product, you look at how that product performs, and that’s the driver of your ROI. The benefits of smoother collaboration come in the form of shorter meetings, more productive meetings, better decision-making, faster decision-making, stronger teamwork. And so to build an ROI model, what IDC concluded was that you have to build your model to account for those advantages really across the enterprise or across your university, or whatever it may be, and kind of up and down the different set of activities where they’re actually going to be utilized.

So that can be complex. Quantifying things can always be a challenge. But like I said, companies do seem to understand this. And I think that’s because, this is just my hunch, but because everybody, including the CEO and the CFO and the whole finance department, uses and benefits from collaboration technology too. Perhaps that’s one reason why the value is easier to convey. Even if they have not taken the time to articulate things like we’re doing here today, they know when a meeting is good and when it’s not good. And maybe that’s one of the things that’s helping companies to justify these investments. But it’s always tricky to do ROI on projects like that. But again, focusing on the broader benefits of collaboration and breaking it down into what it means for specific activities and types of meetings, I think, is the way to go about doing that.

Megan: Absolutely. And Brian, what kinds of advancements are you seeing in the lab that perhaps one day might contribute to those cost savings?

Brian: Well, I don’t know anything about cost savings, Megan. I’m a college professor. I live a pure life of the mind.

Megan: Of course.

Brian: ROI does not compute for me. No, I would say we are in an extremely exciting frontier right now because of AI and many different technologies. The studies that we talked about earlier, in one sense, they were broad. We explored many different traits from dating to hiring to credibility. And we isolated them in all sorts of ways we didn’t talk about. We showed that it wasn’t due to overall affect or pessimism or something like that. But in those studies, we really only tested one very particular set of dimensions along which an audio signal can vary, which is some sort of model of clarity. But in reality, the audio signal is so multi-dimensional. And as we’re getting more and more tools these days, we can not only change audio along the lines of clarity, as we’ve been talking about, but we can potentially manipulate it in all sorts of ways.

We’re very interested in pushing these studies forward and in exploring how people’s sort of brute impressions that they make are affected by all sorts of things. Meg and Erik, we walk around the world all the time making these judgments about people, right? You meet someone and you’re like, “Wow, I could really be friends with them. They seem like a great person.” And you know that you’re making that judgment, but you have no idea why, right? It just seems kind of intuitive. Well, in an audio signal, when you’re talking to someone, you can think of, “What if their signal is more bass heavy? What if it’s a little more treble heavy? What if we manipulate it in this way? In that way?”

When we talked about the faculty meeting that motivated this whole research program, I mentioned that my colleague, who was speaking from his home recording studio, he actually didn’t sound clear like in real life. He sounded better than in real life. He sounded like he was all around us. What is the implication of that? I think there’s so many different dimensions of an audio signal that we’re just being able to readily control and manipulate that it’s going to be very exciting to see how all of these sorts of things impact our impressions of each other.

Megan: And there may be some overlap with this as well, but I wondered if we could close with a future forward look, Brian. What are you looking forward to in emerging audio technology? What are some exciting opportunities on the horizon, perhaps related to what you were just talking about there?

Brian: Well, we’re interested in studying this from a scientific perspective. Erik, you talked about how when you started. When I started doing this science, we didn’t have a word processing department. We had a stone tablet department. But I hear tell that the current generation, when they send photos back and forth to each other, that they, as a matter, of course, they apply all sorts of filters-

Erik: Oh, yes.

Brian: … to those video signals, those video or just photographic signals. We’re all familiar with that. That hasn’t quite happened with the audio signals yet, but I think that’s coming up as well. You can imagine that you record yourself saying a little message and then you filter it this way or that way. And that’s going to become the Wild West about the kinds of impressions we make on each other, especially if and when you don’t know that those filters have been operating in the first place.

Megan: That’s so interesting. Erik, what are you looking forward to in audio technology as well?

Erik: Well, I’m still thinking about what Brian said.

Megan: Yeah. That’s-

Erik: That’s very interesting.

Megan: It’s terrifying.

Erik: I have to go back again. I’ll go back to the past, maybe 15 to 20 years. And I remember at work, we had meeting rooms with the Starfish phones in the middle of the table. And I remember that we would have international meetings with our partners there that were selling our products in different countries, including in Japan and in China, and the people actually in our own company in those countries. We knew the time zone was bad. And we knew that English wasn’t their native language, and tried to be as courteous as possible with written materials and things like that. But I went over to China, and I had to actually be on the other end of one of those calls. And I’m a native English speaker, or at least a native Chicago dialect of American English speaker. And really understanding how challenging it was for them to participate in those meetings just hit me right between the eyes.

We’ve come so far, which is wonderful. But I think of a scenario, and this is not far off, there are many companies working on this right now, where not only can you get a real time captioning in your native language, no matter what the language of the participant, you can actually hear the person who’s speaking’s voice manipulated into your native language.

I’m never going to be a fluent Japanese or Chinese speaker, that’s for sure. But I love the thought that I could actually talk with people and they could understand me as though I were speaking their native language, and that they could communicate to me and I could understand them in the way that they want to be understood. I think there’s a future out there where this technology can really be something that helps bring people together. Now that we have so many years of history with the internet, we know there’s usually two sides to the coin of technology, but there’s definitely going to be a positive side to this, and I’m really looking forward to it.

Megan: Gosh, that sounds absolutely fascinating. Thank you both so much for such an interesting discussion.

That was Erik Vaveris, the VP of product management and chief marketing officer at Shure, and Brian Scholl, director of the Perception & Cognition Laboratory at Yale University, whom I spoke with from Brighton in England.

That’s it for this episode of Business Lab. I’m your host, Megan Tatum. I’m a contributing editor at Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can find us in print on the web and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. And this episode was produced by Giro Studios. Thanks for listening.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.