Rethinking AI’s future in an augmented workplace

There are many paths AI evolution could take. On one end of the spectrum, AI is dismissed as a marginal fad, another bubble fueled by notoriety and misallocated capital. On the other end, it’s cast as a dystopian force, destined to eliminate jobs on a large scale and destabilize economies. Markets oscillate between skepticism and the fear of missing out, while the technology itself evolves quickly and investment dollars flow at a rate not seen in decades. 

All the while, many of today’s financial and economic thought leaders hold to the consensus that the financial landscape will look much as it has for the last several years. Two years ago, Joseph Davis, global chief economist at Vanguard, and his team shared that view but wanted to ground their perspective on AI in a deeper foundation of history and data. Based on a proprietary data set covering the last 130 years, Davis and his team developed a new framework, the Vanguard Megatrends Model, from research that suggests a more nuanced path than either extreme of the hype cycle: AI has the potential to be a general-purpose technology that lifts productivity, reshapes industries, and augments human work rather than displacing it. In short, AI will be neither marginal nor dystopian.

“Our findings suggest that the continuation of the status quo, the basic expectation of most economists, is actually the least likely outcome,” Davis says. “We project that AI will have an even greater effect on productivity than the personal computer did. And we project that a scenario where AI transforms the economy is far more likely than one where AI disappoints and fiscal deficits dominate. The latter would likely lead to slower economic growth, higher inflation, and increased interest rates.”

Implications for business leaders and workers

Davis does not sugar-coat it, however. Although AI promises economic growth and productivity, it will be disruptive, especially for business leaders and workers in knowledge sectors. “AI is likely to be the most disruptive technology to alter the nature of our work since the personal computer,” says Davis. “Those of a certain age might recall how the broad availability of PCs remade many jobs. It didn’t eliminate jobs as much as it allowed people to focus on higher-value activities.”

The team’s framework allowed them to examine AI automation risks across more than 800 occupations. The research indicated that while upwards of 20% of occupations face potential job loss from AI-driven automation, the majority of jobs—likely four out of five—will see a mixture of innovation and automation. Workers’ time will increasingly shift to higher-value and uniquely human tasks.

This introduces the idea that AI could serve as a copilot in various roles, performing repetitive tasks and generally assisting with responsibilities. Davis argues that traditional economic models often underestimate the potential of AI because they fail to examine the deeper structural effects of technological change. “Most approaches for thinking about future growth, such as GDP, don’t adequately account for AI,” he explains. “They fail to link short-term variations in productivity with the three dimensions of technological change: automation, augmentation, and the emergence of new industries.” Automation enhances worker productivity by handling routine tasks; augmentation allows technology to act as a copilot, amplifying human skills; and the emergence of new industries opens up entirely new sources of growth.

Implications for the economy 

Ironically, Davis’s research suggests that a reason for the relatively low productivity growth in recent years may be a lack of automation. Despite a decade of rapid innovation in digital and automation technologies, productivity growth has lagged since the 2008 financial crisis, hitting 50-year lows. This appears to support the view that AI’s impact will be marginal. But Davis believes that automation has been adopted in the wrong places. “What surprised me most was how little automation there has been in services like finance, health care, and education,” he says. “Outside of manufacturing, automation has been very limited. That’s been holding back growth for at least two decades.” The services sector accounts for more than 60% of US GDP and 80% of the workforce and has experienced some of the lowest productivity growth. It is here, Davis argues, that AI will make the biggest difference.

One of the biggest challenges facing the economy is demographics, as the Baby Boomer generation retires, immigration slows, and birth rates decline. These demographic headwinds reinforce the need for technological acceleration. “There are concerns about AI being dystopian and causing massive job loss, but we’ll soon have too few workers, not too many,” Davis says. “Economies like the US, Japan, China, and those across Europe will need to step up automation as their populations age.”

For example, consider nursing, a profession in which empathy and human presence are irreplaceable. AI has already shown the potential to augment rather than automate in this field, streamlining data entry in electronic health records and helping nurses reclaim time for patient care. Davis estimates that these tools could increase nursing productivity by as much as 20% by 2035, a crucial gain as health-care systems adapt to aging populations and rising demand. “In our most likely scenario, AI will offset demographic pressures. Within five to seven years, AI’s ability to automate portions of work will be roughly equivalent to adding 16 million to 17 million workers to the US labor force,” Davis says. “That’s essentially the same as if everyone turning 65 over the next five years decided not to retire.” He projects that more than 60% of occupations, including nurses, family physicians, high school teachers, pharmacists, human resource managers, and insurance sales agents, will benefit from AI as an augmentation tool.

Implications for all investors 

As AI technology spreads, the strongest performers in the stock market won’t be its producers but its users. “That makes sense, because general-purpose technologies enhance productivity, efficiency, and profitability across entire sectors,” says Davis. “As that happens, the benefits move beyond places like Silicon Valley or Boston and into industries that apply the technology in transformative ways.” That broadening also widens the field of investment options, which is why diversifying beyond technology stocks may be appropriate, as reflected in Vanguard’s Economic and Market Outlook for 2026. And history shows that early adopters of new technologies reap the greatest productivity rewards. “We’re clearly in the experimentation phase of learning by doing,” says Davis. “Those companies that encourage and reward experimentation will capture the most value from AI.”

Looking globally, Davis sees the United States and China as the clear front-runners in the AI race. “It’s a virtual dead heat,” he says. “That tells me the competition between the two will remain intense.” But other economies, especially those with low automation rates and large service sectors, like Japan, Europe, and Canada, could also see significant benefits. “If AI is truly going to be transformative, three sectors stand out: health care, education, and finance,” says Davis. “For AI to live up to its potential, it must fundamentally reshape these industries, which face high costs and rising demand for better, faster, more personalized services.”

Davis says Vanguard is more bullish on AI’s potential to transform the economy than it was just a year ago. But that transformation requires adoption well beyond Silicon Valley. “When I speak to business leaders, I remind them that this transformation hasn’t happened yet,” says Davis. “It’s their investment and innovation that will determine whether it does.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The era of agentic chaos and how data will save us

AI agents are moving beyond coding assistants and customer service chatbots into the operational core of the enterprise. The ROI is promising, but autonomy without alignment is a recipe for chaos. Business leaders need to lay the essential foundations now.

The agent explosion is coming

Agents are independently handling end-to-end processes across lead generation, supply chain optimization, customer support, and financial reconciliation. A mid-sized organization could easily run 4,000 agents, each making decisions that affect revenue, compliance, and customer experience. 

The transformation toward an agent-driven enterprise is inevitable. The economic benefits are too significant to ignore, and the potential is becoming a reality faster than most predicted. The problem? Most businesses and their underlying infrastructure are not prepared for this shift. Early adopters have found unlocking AI initiatives at scale to be extremely challenging. 

The reliability gap that’s holding AI back

Companies are investing heavily in AI, but the returns aren’t materializing. According to recent research from Boston Consulting Group, 60% of companies report minimal revenue and cost gains despite substantial investment. The leaders among them, however, reported five times the revenue gains and three times the cost reductions of their peers. Clearly, there is a massive premium for being a leader.

What separates the leaders from the pack isn’t how much they’re spending or which models they’re using. Before scaling AI deployment, these “future-built” companies put critical data infrastructure capabilities in place. They invested in the foundational work that enables AI to function reliably. 

A framework for agent reliability: The four quadrants

To understand how and where enterprise AI can fail, consider four critical quadrants: models, tools, context, and governance.

Take a simple example: an agent that orders you pizza. The model interprets your request (“get me a pizza”). The tool executes the action (calling the Domino’s or Pizza Hut API). Context provides personalization (you tend to order pepperoni on Friday nights at 7pm). Governance validates the outcome (did the pizza actually arrive?). 

Each dimension represents a potential failure point:

  • Models: The underlying AI systems that interpret prompts, generate responses, and make predictions
  • Tools: The integration layer that connects AI to enterprise systems, such as APIs, protocols, and connectors 
  • Context: The information agents need to understand the full business picture before making decisions, including customer histories, product catalogs, and supply chain networks
  • Governance: The policies, controls, and processes that ensure data quality, security, and compliance

This framework helps diagnose where reliability gaps emerge. When an enterprise agent fails, which quadrant is the problem? Is the model misunderstanding intent? Are the tools unavailable or broken? Is the context incomplete or contradictory? Or is there no mechanism to verify that the agent did what it was supposed to do?
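As a rough illustration of the framework (a minimal sketch, not Reltio’s implementation; all names here are hypothetical), the four quadrants can be read as stages of a single agent run, each of which can fail independently:

```python
from dataclasses import dataclass

@dataclass
class AgentRunReport:
    """Diagnostic record for one agent run, one flag per quadrant."""
    model_ok: bool       # did the model interpret the request correctly?
    tools_ok: bool       # did the external API calls succeed?
    context_ok: bool     # was the business context complete and consistent?
    governance_ok: bool  # did a post-hoc check validate the outcome?

def diagnose(report: AgentRunReport) -> str:
    """Return the first quadrant in which this run failed, if any."""
    for quadrant, ok in [
        ("models", report.model_ok),
        ("tools", report.tools_ok),
        ("context", report.context_ok),
        ("governance", report.governance_ok),
    ]:
        if not ok:
            return f"failure in the '{quadrant}' quadrant"
    return "run passed all four quadrants"

# The pizza example: the model parsed the order and the API call went
# through, but the delivery address on file was stale -- a context failure.
print(diagnose(AgentRunReport(model_ok=True, tools_ok=True,
                              context_ok=False, governance_ok=True)))
```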

Why this is a data problem, not a model problem

The temptation is to think that reliability will simply improve as models improve. And indeed, model capability is advancing exponentially: the cost of inference has dropped nearly 900-fold in three years, hallucination rates are declining, and AI’s capacity to perform long tasks doubles every six months.

Tooling is also accelerating. Integration frameworks like the Model Context Protocol (MCP) make it dramatically easier to connect agents with enterprise systems and APIs.
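As a simplified illustration, a minimal MCP server exposing one enterprise lookup as a tool might look roughly like this, using the MCP Python SDK’s FastMCP helper; the CRM lookup and its data are hypothetical stubs:

```python
# pip install mcp  (the Model Context Protocol Python SDK)
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm-tools")  # hypothetical server name

@mcp.tool()
def get_customer_tier(customer_id: str) -> str:
    """Return the support tier for a customer (stubbed for illustration)."""
    fake_crm = {"C-1001": "gold", "C-1002": "standard"}
    return fake_crm.get(customer_id, "unknown")

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so any MCP-aware agent can call it
```

Once registered like this, any MCP-compatible agent can discover and invoke the tool without bespoke integration code.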

If models are powerful and tools are maturing, then what is holding back adoption?

To borrow from James Carville, “It is the data, stupid.” The root cause of most misbehaving agents is misaligned, inconsistent, or incomplete data.

Enterprises have accumulated data debt over decades. Acquisitions, custom systems, departmental tools, and shadow IT have left data scattered across silos that rarely agree. Support systems do not match what is in marketing systems. Supplier data is duplicated across finance, procurement, and logistics. Locations have multiple representations depending on the source.
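A toy example makes the problem concrete (hypothetical records; real entity resolution, the core of master data management, is far more involved):

```python
# Three systems describe the same supplier differently. A crude match
# key built by normalizing names shows how records can be linked -- and
# how fragile such ad hoc linkage is once representations drift further.
records = [
    {"system": "finance",     "supplier": "ACME Corp.",       "city": "NYC"},
    {"system": "procurement", "supplier": "Acme Corporation", "city": "New York"},
    {"system": "logistics",   "supplier": "ACME CORP",        "city": "New York, NY"},
]

def match_key(name: str) -> str:
    """Naive normalization: lowercase, strip punctuation and legal suffixes."""
    cleaned = "".join(ch for ch in name.lower() if ch.isalnum() or ch == " ")
    for suffix in ("corporation", "corp", "inc", "ltd"):
        cleaned = cleaned.removesuffix(" " + suffix).strip()
    return cleaned

print({r["system"]: match_key(r["supplier"]) for r in records})
# {'finance': 'acme', 'procurement': 'acme', 'logistics': 'acme'} -- but only
# because this toy rule happens to fit; note the city field still disagrees.
```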

Drop a few agents into this environment, and they will perform wonderfully at first, because each one is given a curated set of systems to call. Add more agents and the cracks grow, as each one builds its own fragment of truth.

This dynamic has played out before. When business intelligence became self-serve, everyone started creating dashboards. Productivity soared, but reports failed to match. Now imagine that phenomenon not in static dashboards, but in AI agents that can take action. With agents, data inconsistency produces real business consequences, not just debates among departments.

Companies that build unified context and robust governance can deploy thousands of agents with confidence, knowing they’ll work together coherently and comply with business rules. Companies that skip this foundational work will watch their agents produce contradictory results, violate policies, and ultimately erode trust faster than they create value.

Leverage agentic AI without the chaos 

The question for enterprises centers on organizational readiness. Will your company prepare the data foundation needed to make agent transformation work? Or will you spend years debugging agents, one issue at a time, forever chasing problems that originate in infrastructure you never built?

Autonomous agents are already transforming how work gets done. But the enterprise will only experience the upside if those systems operate from the same truth. This ensures that when agents reason, plan, and act, they do so based on accurate, consistent, and up-to-date information. 

The companies generating value from AI today have built on fit-for-purpose data foundations. They recognized early that in an agentic world, data functions as essential infrastructure. A solid data foundation is what turns experimentation into dependable operations.

At Reltio, the focus is on building that foundation. The Reltio data management platform unifies core data from across the enterprise, giving every agent immediate access to the same business context. This unified approach enables enterprises to move faster, act smarter, and unlock the full value of AI.

Agents will define the future of the enterprise. Context intelligence will determine who leads it.

For leaders navigating this next wave of transformation, see Reltio’s practical guide:
Unlocking Agentic AI: A Business Playbook for Data Readiness. Get your copy now to learn how real-time context becomes the decisive advantage in the age of intelligence. 

Reimagining ERP for the agentic AI era

The story of enterprise resource planning (ERP) is really a story of businesses learning to organize themselves around the latest, greatest technology of the times. In the 1960s through the ’80s, mainframes, material requirements planning (MRP), and manufacturing resource planning (MRP II) brought core business data from file cabinets to centralized systems. Client-server architectures defined the ’80s and ’90s, taking digitization mainstream during the internet’s infancy. And in the 21st century, as work moved beyond the desktop, SaaS and cloud ushered in flexible access and elastic infrastructure.

The rise of composability and agentic AI marks yet another dawn—and an apt one for the nascent intelligence age. Composable architectures let organizations assemble capabilities from multiple systems in a mix-and-match fashion, so they can swap vendor gridlock for an à la carte portfolio of fit-for-purpose modules. On top of that architectural shift, agentic AI enables coordination across systems that weren’t originally designed to talk to one another.

Early indicators suggest that AI-enabled ERP will yield meaningful performance gains: One 2024 study found that organizations implementing AI-driven ERP solutions stand to gain around a 30% boost in user satisfaction and a 25% lift in productivity; another suggested that AI-driven ERP can lead to processing time savings of up to 45%, as well as improvements in decision accuracy to the tune of 60%.

These dual advancements address long-standing needs that previous ERP eras fell short of delivering: freedom to innovate outside of vendor roadmaps, capacity for rapid iteration, and true interoperability across all critical functions. This shift signals the end of monolithic dependency as well as a once-in-a-generation opportunity for early movers to gain a competitive edge.

Key takeaways include:

  • Enterprises are moving away from monolithic ERP vendor upgrades in favor of modular architectures that allow them to change or modernize components independently while keeping a stable core for essential transactions.
  • Agentic AI is a timely complement to composability, functioning as a UX and orchestration layer that can coordinate workflows across disparate systems and turn multi-step processes into automated, cross-platform operations.
  • These dual shifts are finally enabling technology architecture to organize around the business, instead of the business around the ERP. Companies can modernize by reconfiguring and extending what they already have, rather than relying on ERP-centric upgrades.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Going beyond pilots with composable and sovereign AI

Today marks an inflection point for enterprise AI adoption. Despite billions invested in generative AI, only 5% of integrated pilots deliver measurable business value and nearly one in two companies abandons AI initiatives before reaching production.

The bottleneck is not the models themselves. What’s holding enterprises back is the surrounding infrastructure: Limited data accessibility, rigid integration, and fragile deployment pathways prevent AI initiatives from scaling beyond early LLM and RAG experiments. In response, enterprises are moving toward composable and sovereign AI architectures that lower costs, preserve data ownership, and adapt to the rapid, unpredictable evolution of AI—a shift IDC expects 75% of global businesses to make by 2027.

From concept to production reality

AI pilots almost always work, and that’s the problem. Proofs of concept (PoCs) are meant to validate feasibility, surface use cases, and build confidence for larger investments. But they thrive in conditions that rarely resemble the realities of production.

[Chart omitted. Source: Compiled by MIT Technology Review Insights with data from Informatica’s CDO Insights 2025 report, 2026]

“PoCs live inside a safe bubble,” observes Cristopher Kuehl, chief data officer at Continent 8 Technologies. Data is carefully curated, integrations are few, and the work is often handled by the most senior and motivated teams.

The result, according to Gerry Murray, research director at IDC, is not so much pilot failure as structural mis-design: Many AI initiatives are effectively “set up for failure from the start.”

Download the article.

Securing digital assets as crypto crime surges

In February 2025, cyberattackers thought to be linked to North Korea executed a sophisticated supply chain attack on cryptocurrency exchange Bybit. By targeting its infrastructure and multi-signature security process, hackers managed to steal more than $1.5 billion worth of Ethereum in the largest known digital-asset theft to date.

The ripple effects were felt across the cryptocurrency market, with the price of Bitcoin dropping 20% from its record high in January. And the massive losses put 2025 on track to be the worst year in history for cryptocurrency theft.

Bitcoin, Ethereum, and stablecoins have established themselves as benchmark monetary vehicles, and, despite volatility, their values continue to rise. In October 2025, the value of cryptocurrency and other digital assets topped $4 trillion.

Yet, with this burgeoning value and liquidity comes more attention from cybercriminals and digital thieves. The Bybit attack demonstrates how focused sophisticated attackers are on finding ways to break the security measures that guard the crypto ecosystem, says Charles Guillemet, chief technology officer of Ledger, a provider of secure signer platforms.

“The attackers were very well organized, they have plenty of money, and they are spending a lot of time and resources trying to attack big stuff, because they can,” he says. “In terms of opportunity costs, it’s a big investment, but if at the end they earn $1.4 billion it makes sense to do this investment.”

But it also demonstrates how the crypto threat landscape has pitfalls not just for the unwary but for the tech savvy too. On the one hand, cybercriminals are using techniques like social engineering to target end users. On the other, they are increasingly looking for vulnerabilities to exploit at different points in the cryptocurrency infrastructure.

Historically, owners of digital assets have had to stand against these attackers alone. But now, cybersecurity firms and cryptocurrency-solution providers are offering new solutions, powered by in-depth threat research.

A treasure trove for attackers

One of the advantages of cryptocurrency is self-custody. Users can store their private keys—the critical pieces of alphanumeric code that prove ownership and grant full control over digital assets—in either a software or hardware wallet to safeguard them.

But users must put their faith in the security of the wallet technology, and, because the data is the asset, if the keys are lost or forgotten, the value too can be lost.

“If I hack your credit card, what is the issue? You will call your bank, and they will manage to revert the operations,” says Vincent Bouzon, head of the Donjon research team at Ledger. “The problem with crypto is, if something happens, it’s too late. So we must eliminate the possibility of vulnerabilities and give users security.”

Increasingly, attackers are focusing on digital assets known as stablecoins, a form of cryptocurrency that is pegged to the value of a hard asset, such as gold, or a fiat currency, like the US dollar.

Stablecoins rely on smart contracts—digital contracts stored on blockchain that use pre-set code to manage issuance, maintain value, and enforce rules—that can be vulnerable to different classes of attacks, which often take advantage of users’ credulity or lack of awareness about the threats. Post-theft countermeasures, such as freezing the transfer of coins and blacklisting addresses, can lessen the risk of these kinds of attacks, however.

Understanding vulnerabilities

Software-based wallets, also known as “hot wallets,” are applications or programs that run on a user’s computer, phone, or web browser, and they are often a weak link. While their connection to the internet makes them convenient for users, it also makes them more readily accessible to hackers.

“If you are using a software wallet, by design it’s vulnerable because your keys are stored inside your computer or inside your phone. And unfortunately, a phone or a computer is not designed for security,” says Guillemet.

The rewards for exploiting this kind of vulnerability can be extensive. Hackers who stole credentials in a targeted 2022 attack on the encrypted password manager LastPass went on to drain millions of dollars’ worth of cryptocurrency from victims over the following years.

Even hardware-based wallets, which often resemble USB drives or key fobs and are more secure than their software counterparts since they are completely offline, can have vulnerabilities that a diligent attacker might find and exploit.

Tactics include side-channel attacks, for example, in which a cybercriminal observes a system’s physical side effects, like timing, power draw, or electromagnetic and acoustic emissions, to gain information about the implementation of an algorithm.
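Timing is the easiest of these channels to demonstrate. A naive byte-by-byte comparison leaks how many leading bytes of a secret are correct, which is why cryptographic code uses constant-time comparison instead. The following is an illustration only, not an attack on any real wallet:

```python
import hmac
import time

SECRET = b"correct-pin-1234"  # stand-in secret for the demonstration

def naive_compare(guess: bytes) -> bool:
    # Returns at the FIRST mismatching byte, so runtime grows with the
    # number of correct leading bytes -- an observable timing signal.
    if len(guess) != len(SECRET):
        return False
    for a, b in zip(guess, SECRET):
        if a != b:
            return False
    return True

def timed(fn, guess: bytes, runs: int = 100_000) -> float:
    start = time.perf_counter()
    for _ in range(runs):
        fn(guess)
    return time.perf_counter() - start

print(timed(naive_compare, b"xorrect-pin-1234"))  # wrong at byte 0: fast
print(timed(naive_compare, b"correct-pin-123x"))  # wrong at the last byte: slower
print(timed(lambda g: hmac.compare_digest(g, SECRET), b"correct-pin-123x"))
```

The standard library’s hmac.compare_digest runs in time independent of where the mismatch occurs, closing this particular channel.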

Guillemet explains that cybersecurity providers building digital asset solutions, such as wallets, need to help minimize the burden on the users by building security features and providing education about enhancing defense.

For businesses seeking to protect cryptocurrency, tokens, critical documents, or other digital assets, this could mean a platform that allows multi-stakeholder custody and governance, supports software and hardware protections, and provides visibility of assets and transactions through Web3 checks.

Developing proactive security measures

As the threat landscape evolves at breakneck speed, in-depth research conducted by attack labs like Ledger Donjon can help security firms keep pace. The Donjon team is working to understand how to proactively secure the digital asset ecosystem and to set global security standards.

Key projects include the team’s offensive security research, which uses ethical and white hat hackers to simulate attacks and uncover weaknesses in hardware wallets, cryptographic systems, and infrastructure.

In November 2022, the Donjon team discovered a vulnerability in Web3 wallet platform Trust Wallet, which had been acquired by Binance. They found that the seed-phrase generation was not random enough, allowing the team to compute all possible private keys and putting as much as $30 million stored in Trust Wallet accounts at risk, says Bouzon. “The entropy was not high enough, the entropy was only 4 billion. It was huge, but not enough,” he says.
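The arithmetic behind that finding is worth spelling out. A back-of-envelope sketch follows (the derivation rate is an assumption for illustration, not a figure from Donjon):

```python
# A seed space of ~4 billion (~2**32) possibilities versus the ~2**128
# space of a standard 12-word seed phrase, at an assumed search rate.
KEYS_PER_SECOND = 1_000_000  # assumption: one modest rig deriving keys

weak_space = 2**32      # roughly what Donjon found in the flawed generator
proper_space = 2**128   # entropy of a standard 12-word seed phrase

weak_hours = weak_space / KEYS_PER_SECOND / 3600
proper_years = proper_space / KEYS_PER_SECOND / (3600 * 24 * 365)

print(f"2^32 seed space: exhausted in about {weak_hours:.1f} hours")
print(f"2^128 seed space: about {proper_years:.2e} years")
```

At even a million derivations per second, the entire weak space falls in about an hour, while a properly generated seed remains far beyond any conceivable brute force.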

To enhance overall safety there are three key principles that digital-asset protection platforms should apply, says Bouzon. First, security providers should create secure algorithms to generate the seed phrases for private keys and conduct in-depth security audits of the software. Second, users should use hardware wallets with a secure screen instead of software wallets. And finally, any smart contract transaction should include visibility into what is being signed to avoid blind signing attacks.

Ultimately, the responsibility for safeguarding these valuable assets lies with both digital asset solution providers and users themselves. As the value of cryptocurrencies continues to grow, so too will the threat landscape, as hackers keep attempting to circumvent new security measures. While digital asset providers, security firms, and wallet solutions must work to build strong and simple protection to support the cryptocurrency ecosystem, users must also seek out the information and education they need to proactively protect themselves and their wallets.

Learn more about how to secure digital assets in the Ledger Academy.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Mitigating emissions from air freight: Unlocking the potential of SAF with book and claim

Emissions from air freight have increased by 25% since 2019, according to a 2024 analysis by environmental advocacy organization Stand.Earth.

The researchers found that the expansion of cargo-only fleets to transport goods during the pandemic — as air travel halted, slower freight modes faced disruption, and demand for rapid delivery soared — has added almost 20 million tonnes of carbon dioxide a year, bringing total annual emissions from air freight to 93.8 million tonnes.

And though fleet modernization and operational improvements by freight operators have contributed to ongoing decarbonization efforts, sustainable aviation fuel (SAF) looks set to be instrumental in helping the sector reduce its environmental footprint over the long term.

When used neat, or pure and unblended, SAF can reduce life-cycle greenhouse gas emissions from aviation by as much as 80% relative to conventional fuel. It’s why the International Air Transport Association (IATA) estimates that SAF could account for as much as 65% of the total emissions reduction the industry needs.

For Christoph Wolff, CEO of the Smart Freight Centre, “SAF is the main pathway” to decarbonization across both freight and the wider aviation ecosystem.

“The great thing about SAF is it’s chemically identical to Jet A fuel,” he says. “You can blend it [which means] you have a pathway to ramp it up. You can start small and you can scale it. By scaling it there is the promise or the hope that the price comes down.”

Cost is a significant barrier hindering broader adoption: SAF sells for at least twice the price of conventional jet fuel.

And it isn’t the only one standing between SAF and wider penetration.

Bridging the gap between a concentrated supply of SAF and global demand also remains a major hurdle.

Though the number of verified SAF outlets has increased from fewer than 20 locations in 2021 to 114 as of April 2025, according to sustainability solutions framework 4Air, that accounts for only 92 airports worldwide out of more than 40,000.

“SAF is central to the decarbonization of the aviation sector,” believes Raman Ojha, president of Shell Aviation. “Having said that, adoption and penetration of SAF hasn’t really picked up massively. It’s not due to lack of production capacity, but there are lots of things that are at play. And book and claim in that context helps to bridge that gap.”

Bridging the gap with book and claim

Book and claim is a chain of custody model, where the flow of administrative records is not necessarily connected to the physical product through the supply chain (source: ISO 22095:2020).

Book and claim potentially enables airlines and corporations to access the life cycle GHG emissions reduction benefits of SAF relative to conventional jet fuel even when SAF is not physically available at their location; this model helps bridge the gap between that concentrated supply and global demand, until SAF’s availability improves.
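Mechanically, book and claim is a registry exercise: SAF’s environmental attributes are “booked” where the physical fuel enters the supply chain and “claimed” (retired) by a buyer anywhere, exactly once. A highly simplified sketch follows, with hypothetical volumes and parties; it is not a description of how Avelia is implemented:

```python
class BookAndClaimRegistry:
    """Toy ledger decoupling where SAF is burned from who claims it."""

    def __init__(self):
        self.available_tonnes = 0.0
        self.retirements = []  # audit trail of (claimant, tonnes)

    def book(self, airport: str, tonnes: float) -> None:
        # Physical SAF delivered and blended at one location.
        self.available_tonnes += tonnes
        print(f"booked {tonnes} t of SAF attributes at {airport}")

    def claim(self, claimant: str, tonnes: float) -> None:
        # A buyer anywhere retires attributes against its own shipments;
        # the check prevents the same tonne from being claimed twice.
        if tonnes > self.available_tonnes:
            raise ValueError("cannot claim more attributes than were booked")
        self.available_tonnes -= tonnes
        self.retirements.append((claimant, tonnes))
        print(f"{claimant} retired {tonnes} t of SAF attributes")

registry = BookAndClaimRegistry()
registry.book("AMS", 500.0)          # hypothetical uplift at one airport
registry.claim("ShipperCo", 120.0)   # claimed by a shipper with no local SAF
```

The credibility questions discussed below, such as auditability, transparency, and avoiding double counting, all reduce to keeping this ledger trustworthy at industry scale.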

“To be bold, without book and claim, no short-term science-based target will be achieved,” says Bettina Paschke, vice president of ESG accounting, reporting and controlling at DHL Express. “Book and claim is essential to achieving science-based targets.”

“SAF production facilities are not everywhere,” she reiterates. “They’re very focused on one location, and if a customer wants to fulfil a mass balance obligation, SAF would need to be shipped around the world just to be at that airport for that customer. That would be very complicated, and very unrealistic.” It would also, counterintuitively, increase total emissions. By using book and claim instead, air freight operators can unlock the life cycle greenhouse gas emissions reduction benefits of SAF relative to conventional jet fuel now, without waiting for supply to broaden. “It might no longer be needed when we have SAF product facilities at each airport in the future,” she points out. “But at the moment, that’s not the case.”

At DHL itself, the mechanism has become central to achieving its own three interconnected sustainability pillars, which focus on decarbonizing logistics supply chains, supporting customers toward their decarbonization goals, and ensuring credible emission claims can be shared along the value chain.

Demonstrating the importance of a credible and viable framework for book and claim systems is also what inspired the 2022 launch of Shell’s Avelia, one of the first blockchain-powered digital SAF book and claim solutions for aviation, which expanded in 2024 to encompass air freight in addition to business travel. Depending on the offering, Avelia lets freight forwarders share the life cycle greenhouse gas emissions reduction benefits of SAF, relative to conventional jet fuel, across the value chain with the shippers using their services.

“It’s also backed by a physical supply chain, which gives our customers — whether those be corporates or freight forwarders or even airlines — peace of mind that the SAF has been injected at a certain airport, it’s been used, and the environmental attributes, with the help of blockchain, have been tracked to where they’re getting retired,” says Ojha.

He adds: “The most important or critical part is the transparency that it’s providing to our customers to be sure that they’re not saying something which they can’t confidently stand behind.”

Moving beyond early adoption

To scale up SAF via book and claim and help make it a more commercially viable lower-carbon solution, its adoption will need to be a coordinated “ecosystem play,” says Wolff. That includes early adopters such as DHL inspiring action from peers; solution providers such as Shell working with various stakeholders to drive joint advocacy; and industry associations like the Smart Freight Centre creating the required frameworks, educational resources, and industry alignment.

An active book and claim community made up of many forward-thinking advocates is already driving much of this work forward with a common goal to develop greater standardization and consensus, Wolff points out. “It helps to make sure all definitions on the system are compatible and they can talk to one another, provide educational support, and [also that] there’s a repository of transactions so that it can be documented in a way that people can see and think, ‘oh this is how we do it.’ There are some early adopters that are very experienced, but it needs a lot more people for it to get comfortable.”

In early 2024, discussions were held with a diverse group of expert book and claim stakeholders to develop and refine 11 key principles and best practices for book and claim models. These represent an aligned set of principles informed by practical successes and challenges faced by practitioners working to decarbonize the heavy transport sector.

Adherence to such a framework is crucial given that book and claim is not yet accepted by either the Greenhouse Gas (GHG) Protocol or the Science Based Targets Initiative (SBTi) as a recognized model for reducing greenhouse gas emissions — though there are hopes that might change.

“The industrialization of book and claim delivery systems is key to credibility and recognition,” says Wolff. “The Greenhouse Gas Protocol and the Science Based Targets Initiative are making steps in recognizing that. There’s a pathway that the Smart Freight Centre is very closely involved in the technical working groups for [looking] to build such a system where, in addition to physical inventory, you also pursue market-based inventories.”

Paschke urges companies not to sit back and wait for policy to change before taking action, though. “The solution is there,” she says. “There are companies like DHL that are making huge upfront investments, and every single contribution helps to scale the industry and give a strong signal to the eco-space.”

As pressure to accelerate decarbonization gains pace, it’s critical that air freight operators consider this now, agrees Ojha. “Don’t wait for perfection in guidelines, regulations, or platforms — act now,” he says. “That’s very, very critical. Second, learn by doing and join hands with others. Don’t try to do everything independently or in-house.

“Third, make use of registries and platforms, such as Avelia, that can give credibility. Join them, utilize them, and leverage them so that you won’t have to establish auditability from scratch.

“And fourth, don’t look at book and claim as a means for acquiring a certificate for environmental attributes. Think in terms of your decarbonisation commitment and think of this as a tool for exposure management. Think in terms of the bigger picture.”

That bigger picture being a significant sector-wide push toward faster decarbonization — and turning the tide on emissions’ steep ascent.

Watch the full webcast.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

This content is produced by MIT Technology Review Insights in association with Avelia. Avelia is a Shell owned solution and brand that was developed with support from Amex GBT, Accenture and Energy Web Foundation. The views from individuals not affiliated with Shell are their own and not those of Shell PLC or its affiliates. Cautionary note | Shell Global

Using unstructured data to fuel enterprise AI success

Enterprises are sitting on vast quantities of unstructured data, from call records and video footage to customer complaint histories and supply chain signals. Yet this invaluable business intelligence, estimated to make up as much as 90% of the data generated by organizations, historically remained dormant because its unstructured nature makes analysis extremely difficult.

But if managed and centralized effectively, this messy and often voluminous data is not only a precious asset for training and optimizing next-generation AI systems, enhancing their accuracy, context, and adaptability, it can also deliver profound insights that drive real business outcomes.

A compelling example can be seen in the Charlotte Hornets, a US NBA basketball team that successfully leveraged untapped video footage of gameplay—previously too copious to watch and too unstructured to analyze—to identify a competition-winning recruit. Before that data could deliver results, however, analysts working for the team first had to overcome the critical challenge of preparing the raw, unstructured footage for interpretation.

The challenges of organizing and contextualizing unstructured data

Unstructured data presents inherent difficulties due to its widely varying format, quality, and reliability, requiring specialized tools like natural language processing and AI to make sense of it.

Every organization’s pool of unstructured data also contains domain-specific characteristics and terminology that generic AI models may not automatically understand. A financial services firm, for example, cannot simply use a general language model for fraud detection. Instead, it needs to adapt the model to understand regulatory language, transaction patterns, industry-specific risk indicators, and unique company context like data policies.

The challenge intensifies when integrating multiple data sources with varying structures and quality standards, as teams may struggle to distinguish valuable data from noise.

How computer vision gave the Charlotte Hornets an edge 

When the Charlotte Hornets set out to identify a new draft pick for their team, they turned to AI tools including computer vision to analyze raw game footage from smaller leagues, which exist outside the tiers of the game normally visible to NBA scouts and, therefore, are not as readily available for analysis.

“Computer vision is a tool that has existed for some time, but I think the applicability in this age of AI is increasing rapidly,” says Jordan Cealey, senior vice president at AI company Invisible Technologies, which worked with the Charlotte Hornets on this project. “You can now take data sources that you’ve never been able to consume, and provide an analytical layer that’s never existed before.”

By deploying a variety of computer vision techniques, including object and player tracking, movement pattern analysis, and geometric mapping of points on the court, the team was able to extract kinematic data, such as the coordinates of players during movement, and generate metrics like speed, explosiveness, and acceleration.
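The kinematics step itself is straightforward once per-frame court coordinates exist. A rough sketch follows (hypothetical positions; it assumes pixel detections have already been mapped onto court coordinates, for instance via a homography from known court landmarks, and omits the smoothing a production pipeline would need):

```python
import numpy as np

FPS = 25  # assumed broadcast frame rate

# Hypothetical per-frame court positions for one player, in meters.
positions = np.array([
    [2.0, 5.0], [2.3, 5.1], [2.7, 5.3], [3.3, 5.6], [4.0, 6.0],
])

dt = 1.0 / FPS
velocities = np.diff(positions, axis=0) / dt   # m/s between frames
speeds = np.linalg.norm(velocities, axis=1)    # scalar speed per step
acceleration = np.diff(speeds) / dt            # m/s^2, a crude proxy for
                                               # the "explosiveness" above

print("speeds (m/s):", np.round(speeds, 1))
print("acceleration (m/s^2):", np.round(acceleration, 1))
```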

This provided the team with rich, data-driven insights about individual players, helping them identify and select a new draft pick whose skills and technique filled a hole in the Charlotte Hornets’ own roster. The chosen athlete went on to be named the most valuable player at the 2025 NBA Summer League and helped the team win their first summer championship title.

Annotation of a basketball match

Before data from game footage can be used, it needs to be labeled so the model can interpret it. The x and y coordinates of the individual players, seen here in bounding boxes, as well as other features in the scene, are annotated so the model can identify individuals and track their movements through time.
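As an illustration, one annotated frame might be stored roughly like this (a hypothetical schema for explanatory purposes, not Invisible’s actual format):

```python
# One frame's labels: player bounding boxes in pixel coordinates, plus the
# court landmarks used to map pixels onto real-world court positions.
frame_annotation = {
    "frame_index": 1042,
    "players": [
        {"track_id": 7,  "team": "home", "bbox_xyxy": [412, 180, 468, 331]},
        {"track_id": 12, "team": "away", "bbox_xyxy": [655, 204, 702, 349]},
    ],
    "court_landmarks": {  # pixel positions of known points on the court
        "center_circle": [640, 360],
        "left_free_throw_line": [310, 352],
    },
}
```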

Taking AI pilot programs into production 

From this successful example, several lessons can be learned. First, unstructured data must be prepared for AI models through intuitive forms of collection and the right data pipelines and management practices. “You can only utilize unstructured data once your structured data is consumable and ready for AI,” says Cealey. “You cannot just throw AI at a problem without doing the prep work.”

For many organizations, this might mean they need to find partners that offer the technical support to fine-tune models to the context of the business. The traditional technology consulting approach, in which an external vendor leads a digital transformation plan over a lengthy timeframe, is not fit for purpose here as AI is moving too fast and solutions need to be configured to a company’s current business reality. 

Forward-deployed engineers (FDEs) are an emerging partnership model better suited to the AI era. Initially popularized by Palantir, the FDE model connects product and engineering capabilities directly to the customer’s operational environment. FDEs work closely with customers on-site to understand the context behind a technology initiative before a solution is built. 

“We couldn’t do what we do without our FDEs,” says Cealey. “They go out and fine-tune the models, working with our human annotation team to generate a ground truth dataset that can be used to validate or improve the performance of the model in production.”

Second, data needs to be understood within its own context, which requires models to be carefully calibrated to the use case. “You can’t assume that an out-of-the-box computer vision model is going to give you better inventory management, for example, by taking that open source model and applying it to whatever your unstructured data feeds are,” says Cealey. “You need to fine-tune it so it gives you the data exports in the format you want and helps your aims. That’s where you start to see high-performative models that can then actually generate useful data insights.” 

For the Hornets, Invisible used five foundation models, which the team fine-tuned to context-specific data. This included teaching the models to understand that they were “looking at” a basketball court as opposed to, say, a football field; to understand how a game of basketball works differently from any other sport the model might have knowledge of (including how many players are on each team); and to understand how to spot rules like “out of bounds.” Once fine-tuned, the models were able to capture subtle and complex visual scenarios, including highly accurate object detection, tracking, postures, and spatial mapping.
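The article does not name the five foundation models involved, but as a generic illustration of what this kind of fine-tuning step can look like, one common open-source route is sketched below; the dataset file and parameters are placeholders:

```python
# pip install ultralytics
from ultralytics import YOLO

# Start from a generic pretrained detector, then fine-tune it on
# basketball-specific labels (players, ball, court markings).
model = YOLO("yolov8n.pt")
model.train(
    data="basketball.yaml",  # placeholder: paths and class names for labeled frames
    epochs=50,
    imgsz=1280,              # higher resolution helps small, fast-moving objects
)
metrics = model.val()        # evaluate detection quality on held-out footage
```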

Lastly, while the AI technology mix available to companies changes by the day, they cannot eschew old-fashioned commercial metrics: clear goals. Without clarity on the business purpose, AI pilot programs can easily turn into open-ended, meandering research projects that prove expensive in terms of compute, data costs, and staffing. 

“The best engagements we have seen are when people know what they want,” Cealey observes. “The worst is when people say ‘we want AI’ but have no direction. In these situations, they are on an endless pursuit without a map.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Deploying a hybrid approach to Web3 in the AI era

When the concept of “Web 3.0” first emerged about a decade ago, the idea was clear: create a more user-controlled internet that lets you do everything you can now, except without servers or intermediaries to manage the flow of information.

Where Web2, which emerged in the early 2000s, relies on centralized systems to store data and supply compute—all owned, and monetized, by a handful of global conglomerates—Web3 turns that structure on its head. Instead, data and compute are decentralized through technologies like blockchain and peer-to-peer networks.

What was once a futuristic concept is quickly becoming a more concrete reality, even at a time when Web2 still dominates. Six out of ten Fortune 500 companies are exploring blockchain-based solutions, most taking a hybrid approach that combines traditional Web2 business models and infrastructure with the decentralized technologies and principles of Web3.

Popular use cases include cloud services, supply chain management, and, most notably, financial services. In fact, at one point, the daily volume of transactions processed on decentralized finance exchanges exceeded $10 billion.

Gaining a Web3 edge

Among the advantages of Web3 for the enterprise are greater ownership and control of sensitive data, says Erman Tjiputra, founder and CEO of the AIOZ Network, which is building infrastructure for Web3, powered by decentralized physical infrastructure networks (DePIN), blockchain-based systems that govern physical infrastructure assets.

More cost-effective compute is another benefit, as is enhanced security and privacy as the cyberattack landscape grows more hostile, he adds. And it could even help protect companies from outages caused by a single point of failure, which can lead to downtime, data loss, and revenue deficits.

But perhaps the most exciting opportunity, says Tjiputra, is the ability to build and scale AI reliably and affordably. By leveraging a people-powered internet infrastructure, companies can far more easily access—and contribute to—shared resources like bandwidth, storage, and processing power to run AI inference, train models, and store data. All while using familiar developer tooling and open, usage-based incentives.

“We’re in a compute crunch where requirements are insatiable, and Web3 creates this ability to benefit while contributing,” explains Tjiputra.

In 2025, AIOZ Network launched a distributed compute platform and marketplace where developers and enterprises can access and monetize AI assets, and run AI inference or training on AIOZ Network’s more than 300,000 contributing devices. The model allows companies to move away from opaque datasets and models and scale flexibly, without centralized lock-in.

Overcoming Web3 deployment challenges

Despite the promise, it is still early days for Web3, and core systemic challenges are leaving senior leadership and developers hesitant about its applicability at scale.

One hurdle is a lack of interoperability. The current fragmentation of blockchain networks creates a segregated ecosystem that makes it challenging to transfer assets or data between platforms. This often complicates transactions and introduces new security risks due to the reliance on mechanisms such as cross-chain bridges. These are tools that allow asset transfers between platforms but which have been shown to be vulnerable to targeted attacks.

“We have countless blockchains running on different protocols and consensus models,” says Tjiputra. “These blockchains need to work with each other so applications can communicate regardless of which chain they are on. This makes interoperability fundamental.”

Regulatory uncertainty is also a challenge. Outdated legal frameworks can sit at odds with decentralized infrastructures, especially when it comes to compliance with data protection and anti-money laundering regulations.

“Enterprises care about verifiability and compliance as much as innovation, so we need frameworks where on-chain transparency strengthens accountability instead of adding friction,” Tjiputra says.

And this is compounded by user experience (UX) challenges, says Tjiputra. “The biggest setback in Web3 today is UX,” he says. “For example, in Web2, if I forget my bank username or password, I can still contact the bank, log in and access my assets. The trade-off in Web3 is that, should that key be compromised or lost, we lose access to those assets. So, key recovery is a real problem.”

Building a bridge to Web3

Although such systemic challenges won’t be solved overnight, by leveraging DePIN networks, enterprises can bridge the gap between Web2 and Web3, without making a wholesale switch. This can minimize risk while harnessing much of the potential.

AIOZ Network’s own ecosystem includes capacity for media streaming, AI compute, and distributed storage that can be plugged into an existing Web2 tech stack. “You don’t need to go full Web3,” says Tjiputra. “You can start by plugging distributed storage into your workflow, test it, measure it, and see the benefits firsthand.”

The AIOZ Storage solution, for example, offers scalable distributed object storage by leveraging the global network of contributor devices on AIOZ DePIN. It is also compatible with existing storage systems or commonly used web application programming interfaces (APIs).

“Say we have a programmer or developer who uses Amazon S3 Storage or REST APIs, then all they need to do is just repoint the endpoints,” explains Tjiputra. “That’s it. It’s the same tools, it’s really simple. Even with media, with a single one-stop shop, developers can do transcoding and streaming with a simple REST API.”
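In boto3 terms, that “repointing” is essentially a one-line change. A sketch follows; the endpoint URL, bucket, and credentials are placeholders, not AIOZ’s actual values:

```python
import boto3

# The same S3 client code, pointed at an S3-compatible distributed store
# instead of AWS: only the endpoint and credentials change.
s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.example-depin.net",  # placeholder endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

s3.upload_file("report.pdf", "my-bucket", "reports/report.pdf")
for obj in s3.list_objects_v2(Bucket="my-bucket").get("Contents", []):
    print(obj["Key"])
```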

Built on Cosmos, a network of hundreds of different blockchains that can communicate with each other, and a standardized framework enabled by Ethereum Virtual Machine (EVM), AIOZ Network has also prioritized interoperability. “Applications shouldn’t care which chain they’re on. Developers should target APIs without worrying about consensus mechanisms. That’s why we built on Cosmos and EVM—interoperability first.”

This hybrid model, which allows enterprises to use both Web2 and Web3 advantages in tandem, underpins what Tjiputra sees as the longer-term ambition for the much-hyped next iteration of the internet.

“Our vision is a truly peer-to-peer foundation for a people-powered internet, one that minimizes single points of failure through multi-region, multi-operator design,” says Tjiputra. “By distributing compute and storage across contributors, we gain both cost efficiency and end-to-end security by default.

“Ideally, we want to evolve the internet toward a more people-powered model, but we’re not there yet. We’re still at the starting point and growing.”

Indeed, Web3 isn’t quite snapping at the heels of the world’s Web2 giants, but its commercial advantages in an era of AI have become much harder to ignore. And with DePIN bridging the gap, enterprises and developers can step into that potential while keeping one foot on surer ground.

To learn more from AIOZ Network, you can read the AIOZ Network Vision Paper.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The overlooked driver of digital transformation

When business leaders talk about digital transformation, their focus often jumps straight to cloud platforms, AI tools, or collaboration software. Yet, one of the most fundamental enablers of how organizations now work, and how employees experience that work, is often overlooked: audio.

As Genevieve Juillard, CEO of IDC, notes, the shift to hybrid collaboration made every space, from corporate boardrooms to kitchen tables, meeting-ready almost overnight. In the scramble, audio quality often lagged, creating what research now shows is more than a nuisance. Poor sound can alter how speakers are perceived, making them seem less credible or even less trustworthy.

“Audio is the gatekeeper of meaning,” stresses Juillard. “If people can’t hear clearly, they can’t understand you. And if they can’t understand you, they can’t trust you, and they can’t act on what you said. And no amount of sharp video can fix that.” Without clarity, comprehension and confidence collapse.

For Shure, which has spent a century advancing sound technology, the implications extend far beyond convenience. Chris Schyvinck, Shure’s president and CEO, explains that ineffective audio undermines engagement and productivity. Meetings stall, decisions slow, and fatigue builds.

“Use technology to make hybrid meetings seamless, and then be clear on which conversations truly require being in the same physical space,” says Juillard. “If you can strike that balance, you’re not just making work more efficient, you’re making it more sustainable, you’re also making it more inclusive, and you’re making it more resilient.”

When audio is prioritized on equal footing with video and other collaboration tools, organizations can gain something rare: frictionless communication. That clarity ensures the machines listening in, from AI transcription engines to real-time translation systems, can deliver reliable results.

The research from Shure and IDC highlights two blind spots for leaders. First, buying decisions too often privilege price over quality, with costly consequences in productivity and trust. Second, organizations underestimate the stress poor sound imposes on employees, intensifying the cognitive load of already demanding workdays. Addressing both requires leaders to view audio not as a peripheral expense but as core infrastructure.

Looking ahead, audio is becoming inseparable from AI-driven collaboration. Smarter systems can already filter out background noise, enhance voices in real time, and integrate seamlessly into hybrid ecosystems.

“We should be able to provide improved accessibility and a more equitable meeting experience for people,” says Schyvinck.

For Schyvinck and Juillard, the future belongs to companies that treat audio transformation as an integral part of digital transformation, building workplaces that are more sustainable, equitable, and resilient.

This episode of Business Lab is produced in partnership with Shure.

Full Transcript

Megan Tatum: From MIT Technology Review, I’m Megan Tatum, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

This episode is produced in partnership with Shure.

As companies continue their journeys towards digital transformation, audio modernization is an often overlooked but key component of success. Clear audio is imperative not only for quality communication, but also for brand equity, both for internal and external stakeholders and even the company as a whole.

Two words for you: audio transformation.

My guests today are Chris Schyvinck, President and CEO at Shure. And Genevieve Juillard, CEO at IDC.

Welcome Chris and Genevieve.

Chris Schyvinck: It’s really nice to be here. Thank you very much.

Genevieve Juillard: Yeah, thank you so much for having us. Great to be here.

Megan Tatum: Thank you both so much for being here. Genevieve, we could start with you. Let’s start with some history perhaps for context. How would you describe the evolution of audio technology and how use cases and our expectations of audio have evolved? What have been some of the major drivers throughout the years and more recently, perhaps would you consider the pandemic to be one of those drivers?

Genevieve: It’s interesting. If you go all the way back to 1976, Norman Macrae of The Economist predicted that video chat would actually kill the office, that people would just work from home. Obviously, that didn’t happen then, but the core technology for remote collaboration has actually been around for decades. But until the pandemic, most of us only experienced it in very specific contexts. Offices had dedicated video conferencing rooms, and most ran on expensive proprietary systems. And then almost overnight, everything including literally the kitchen table had to be AV ready. The cultural norms shifted just as fast. Before the pandemic, it was perfectly fine to keep your camera off in a meeting, and now that’s seen as disengaged or even rude. That change is what normalized video conferencing and hybrid meetings.

But in the rush to equip a suddenly remote workforce, we hit two big problems: supply chain disruptions and a massive spike in demand. High-quality gear was hard to get, so low-quality audio and video became the default. And here’s a key point. We now know from research that audio quality matters more than video quality for meeting outcomes. You can run a meeting without video, but you can’t run a meeting without clear audio. Audio is the gatekeeper of meaning. If people can’t hear clearly, they can’t understand you. And if they can’t understand you, they can’t trust you and they can’t act on what you said. And no amount of sharp video can fix that.

Megan: Oh, true. It’s fascinating, isn’t it? And Chris, Shure and IDC recently released some research titled “The Hidden Influencer: Rethinking Audio Could Impact Your Organization Today, Tomorrow, and Forever.” The research highlighted the importance of audio that Genevieve’s talking about in today’s increasingly virtual world. What did you glean from those results, and did anything surprise you?

Chris: Yeah, well, the research certainly confirmed a lot of hunches we’ve had through the years. When you think about a company like Shure, we’ve been doing audio for 100 years, and we just celebrated that anniversary this year.

Megan: Congratulations.

Chris: Our legacy business is more in the music and performance arena. And so it’s just what Genevieve said: yes, you can have a performance and look at somebody, but that’s like 10% of it, right? 90% is hearing that person sing, perform, and talk. We’ve always, of course, from our perspective, understood that clean, clear, crisp audio is what is needed in any setting. And when you translate what’s happening on the stage into a meeting or collaboration space at a corporation, we’ve thought that it is equally important.

And we always had this hunch that if people don’t have good audio, they’re going to have fatigue, they’re going to get a little disengaged, and the whole meeting is going to become quite unproductive. The research really amplified that hunch for us, because it depicted the fact that people not only get kind of frustrated and disengaged, they might actually start to distrust what the other person with bad audio is saying, or just cast it in a different light. And the degree to which that frustration becomes almost personal was very surprising to us. Like I said, it validated some hunches, but it really put an exclamation point on it for us.

Megan: And Genevieve, based on the research results, I understand that IDC pulled together some recommendations for organizations. What is it that leaders need to know and what is the biggest blind spot for them to overcome as well?

Genevieve: The biggest blind spot is this. If your microphone has poor audio quality, like Chris said, people will literally perceive you as less intelligent and less trustworthy. And by the way, that’s not an opinion. It’s what the science says. And yet, when we surveyed first-time business buyers, the number one factor they used to choose audio gear was price. However, for repeat buyers, the top factor flipped to audio quality. My guess is they learned the lesson the hard way. The second blind spot, to Chris’s point, is the stress that bad audio creates. Poor sound forces your brain to work harder to decode what’s being said. That’s a cognitive load, and it creates stress. And over a full day of meetings, that stress adds up. Now, we don’t have long-term studies yet on the effects, but we do know that prolonged stress is something that every company should be working to reduce.

Good audio lightens that cognitive load. It keeps people engaged and it levels the playing field, whether you’re in the room or halfway across the world. And here’s one that’s often overlooked: bad audio can sabotage AI transcription tools. As AI becomes more and more central to everyday work, that starts to become really critical. If your audio isn’t clear, the transcription won’t be accurate. And there’s a world of difference between, for example, the consulting department and the insulting department, and that is an actual example from the field.

The bottom line is you fix the audio, you cut friction, you save time, and you make meetings more productive.

Megan: It’s just a huge game changer, isn’t it? And given that, Chris, in your experience across industries, are audio technologies being included in digital transformation strategies and artificial intelligence implementations? Do we need a separate audio transformation, perhaps?

Chris: Well, like I mentioned earlier, yes, people tend to initially focus on that visual platform, but increasingly the attention to audio is really coming into focus. And I’d hate to tear audio apart as a separate sort of strategy, because at the same time, we, as an audio expert, are trying to seamlessly integrate audio into the rest of the ecosystem. It really does need to be put on an equal footing with the rest of the components in that ecosystem. And to Genevieve’s point, as we see audio and video systems with more AI functionality, real-time translation, voice recognition, being able to attribute who said what in a meeting and capture action items, I think it’s really starting to elevate the importance of clear audio. It’s got to be part of a comprehensive collaboration plan that helps a company figure out what its whole digital transformation is about. It just really has to be included in that comprehensive plan, but put on equal footing with the rest of the components in that system.

Megan: Yeah, absolutely. And in the broader landscape, Genevieve, in terms of discussing the importance of audio quality, what have you noticed across research projects about the effects of good and bad audio, not only from that company perspective, but from employee and client perspectives as well?

Genevieve: Well, let’s start with employees.

Megan: Sure.

Genevieve: Bad audio adds friction you don’t need; we’ve talked about this. When you’re straining to hear or make sense of what’s being said, your brain is burning energy on decoding instead of contributing. That frustration builds up, and by the end of the day, it hurts productivity. From a company perspective, the stakes get even higher. Meetings are where decisions happen, or at least where they’re supposed to happen. And if people can’t hear clearly, decisions get delayed, mistakes creep in, and the whole process slows down. Poor audio doesn’t just waste time; it chips away at the ability to move quickly and confidently. And then there’s the client experience. Whether it’s in sales, customer service, or any external conversation, poor audio can make you sound less credible and less trustworthy. Again, that’s not my opinion. That’s what the research shows. So that’s quite a big risk when you’re trying to close a deal or solve a major problem.

The takeaway is that good audio matters; it’s a multiplier. It makes meetings more productive, and it can help decisions happen faster and client interactions be stronger.

Megan: It’s just so impactful, isn’t it, in so many different ways. I mean, Chris, how are you seeing these research results reflected as companies work through digital and AI transformations? What is it that leaders need to understand about what is involved in audio implementation across their organization?

Chris: Well, like I said earlier, I do think that audio is finally maybe getting its place in the spotlight a little bit, up there with our cousins over on the video side. Audio is not just a peripheral aspect anymore. It’s a very integral part of that comprehensive collaboration plan I was talking about earlier. And we think about how we can contribute solutions that are easier for our end users to use, because if you create something complicated, well, we were talking about days gone by, when you’d walk into a room with a very complicated system and need to find the right person who knows how to run it. Increasingly, you just need to have some plug-and-play kind of solutions. We’re thinking about a more sustainable strategy for our solutions, where we make really high-quality hardware. We’ve done that for a hundred years. People will come up to me and tell the story of the SM58 microphone they bought in 1980 and how they’re still using it every day.

We know how to do that part of it. If somebody is willing to make that investment upfront and put some high-quality hardware into their system, then we are getting to the point now where updates can be handled via software downloads or cloud connectivity. It’s really about being able to provide a sustainable solution for people over time.

More broadly in our industry, we’re collaborating with other industry partners to go in that direction and make something that’s very simple, so anybody can walk into a room, or use their individual at-home setup, and do something pretty simple. And I think we have the right industry groups, the right industry associations, that can help make sure the ecosystems have the proper standards and the right ways of making sure everything is interoperable within a system. We’re all kind of heading in that direction with that end user in mind.

Megan: Fantastic. And when the internet of things was emerging, efforts began to create data ecosystems; it seems there’s an argument to be made that we need audio ecosystems as well. I wonder, Chris, what might an audio ecosystem look like, and what would be involved in implementation?

Chris: Well, I think it does have to be part of that bigger ecosystem I was just talking about, where we collaborate with others in the industry and try to make sure that we’re all playing by the same set of rules and protocols and standards. And when you think about compatibility across all the devices that sit in a room, or again, maybe in your at-home setup, making sure that the audio quality is as good as it can be and that you can interoperate with everything else in the system has become paramount in our day-to-day work here. Your hardware has to be scalable, like I alluded to a moment ago. You have to figure out how you can integrate with existing technologies and different platforms.

We were joking when we came into this session that when you’re going from the platform at your company, maybe you’re on Teams, and you go into a Zoom setting or a Google setting, you really have to figure out how to adapt to all those different sorts of platforms that are out there. I think with the ecosystem that we’re trying to build, we’re trying to be on that equal footing with the rest of the components in that system. And people really do understand that if you want to have extra functionalities in meetings, and you want to be able to transcribe or take notes and all of that, audio is an absolutely critical piece.

Megan: Absolutely. And speaking of all those different platforms and use cases that audio is so relevant to, Genevieve, that goes back to this idea that in audio, one size does not fit all, and needs may change. How can companies plan their audio implementations to be flexible enough to meet current needs and to grow with future advancements?

Genevieve: I’m glad you asked this question. Even years after the pandemic, many companies are still trying to get the balance right between remote and in-office work and how to support it. But even if a company has a strict return-to-office, in-person policy, the reality is that hybrid work still isn’t going away for that company. They may have teams across cities or countries, and clients and external stakeholders will have their own office preferences that they have to adapt to. Supporting hybrid work is actually becoming more important, not less. And our research shows that companies are leaning into, not away from, hybrid setups. About one third of companies are now redesigning or resizing office spaces every single year. For large organizations with multiple sites and staggered leases, that’s a moving target. It’s really important that they have audio solutions that can work before, during, and after all of those changes that they’re constantly making. And so that’s where flexibility becomes really important. Companies need to buy not just for right now, but for the future.

And so here’s IDC’s pro tip: make sure as a company that you go with a provider that offers top-notch audio quality and also has strong partnerships and certifications with the big players in communications technology, because that will save you money in the long run. Your systems will stay compatible, your investments will last longer, and you won’t be scrambling when that next shift happens.

Megan: Of course. And speaking of building for the future, as companies begin to include sustainability in their company goals, Chris, I wonder how audio can play a role in those sustainability efforts, and how might that play into the return on investment in building out a high-quality audio ecosystem?

Chris: Well, I totally agree with what Genevieve just said in terms of hybrid work not going anywhere. You get all of those big headlines that talk about XYZ company telling people to get back into the office. And I saw a fantastic piece of data just last week that showed the percentage of hours American workers spend in the office versus working remotely. It has basically flatlined since 2022. This is our new way of working. And of course, like Genevieve mentioned, you have people in all these different locations. And in a strange way, living through the pandemic did teach us that we can do some things without having to hop on an airplane and travel to go somewhere. Certainly that helps with a more sustainable strategy over time, and you’re saving on travel and able to get things done much more quickly.

And then from a product offering perspective, I’ll go back to the vision I was painting earlier, where we and others in our industry see that we can create great, solid hardware platforms. We’ve done it for decades, and now, with the advancements around AI and the software that enables our products over the last decade or so, we can get enhancements, additions, and new functionality to people in simpler ways on existing hardware. I think we’re all heading down this path of having a much more sustainable ecosystem for all collaboration. It’s really quite an exciting time, and that pays off: for any company implementing a system, the ROI is going to be much better in the long run.

Megan: Absolutely. And Genevieve, what trends around sustainability are you seeing? What opportunities do you see for audio to play into those sustainability efforts going forward?

Genevieve: Yeah, similar to Chris. In some industries, there’s still a belief that the best work happens when everyone’s in the same room. And yes, face-to-face time is really important for building relationships, for brainstorming, for closing big deals, but it does come at a cost: the carbon footprint of daily commutes, the sales visits, the constant business travel. And then there’s the basic consideration, as we’ve talked about, of just pure practicality. The good news is that with the right AV setup, especially high-quality audio, many of those interactions can happen virtually without losing effectiveness. Chris said it, and our research shows it.

Our research shows that virtual meetings can be just as productive as in-person ones, and every commute or flight you avoid, of course makes a measurable sustainability impact. I don’t think, personally, that the takeaway is replace all in-person meetings, but instead it’s to be intentional. Use technology to make hybrid meetings seamless, and then be clear on which conversations truly require being in the same physical space. If you can strike that balance, you’re not just making work more efficient, you’re making it more sustainable, you’re also making it more inclusive, and you’re making it more resilient.

Megan: Such an important point. And let’s close with a future-forward look, if we can. Genevieve, what innovations or advancements in the audio field are you most excited to see come to fruition, and what potential interesting use cases do you see on the horizon?

Genevieve: I’m especially interested in how AI and audio are converging. We’re now seeing AI that can identify and isolate human voices in noisy environments. For example, right now, there are some jets flying overhead. It’s very loud in here, but I suspect you may not even know that that’s happening.

Megan: We can’t hear a thing. No.

Genevieve: Right. That technology is pulling voices forward so that conversations like ours are crystal clear. And that’s a big deal, especially as companies invest more and more in AI tools, especially for translating, transcribing, and summarizing meetings. But as we’ve talked about before, AI is only as good as the audio it hears. If the sound is poor or a word gets misheard, the meaning can shift entirely. Sometimes that’s just inconvenient, or it can even be funny. But in really high-stakes settings, like healthcare for example, a single mis-transcribed word can have serious consequences. So that’s why our position is that high-quality audio is critical; it’s necessary for making AI-powered communication accurate, trustworthy, and useful, because when the input is clean, the output can actually live up to its promise.

Megan: Fantastic. And Chris, finally, what are you most excited to see developed? What advancements are you most looking forward to seeing?

Chris: Well, I really do believe that this is one of the most exciting times that I know I’ve lived through in my career. Just the pace of how fast technology is moving, the sudden emergence of all things AI. I was actually in a roundtable session of CEOs yesterday from lots of different industries, and the facilitator was talking about change management internally in companies as you’re going through all of these technology shifts and some of the fear that people have around AI and things like that. And the facilitator asked each of us to give one word that describes how we’re feeling right now. And the first CEO that went used the word dread. And that absolutely floored me because you enter into these eras with some skepticism and trying to figure out how to make things work and go down the right path. But my word was truly optimism.

When I look at all the ways that we are able to deliver better audio to people more quickly, there are so many opportunities in front of us. We’re working on things outside of AI, like the algorithms Genevieve just mentioned that filter out the bad sounds you don’t want entering a meeting. We’ve been doing that for quite a long time now. There are also opportunities to do real-time audio improvements and enhancements, and to make audio more personal for people. How do they want to be able to very simply, through voice commands perhaps, adjust their audio? There shouldn’t have to be a whole lot of techie settings that come along with our solutions.

We should be able to provide improved accessibility and a little bit more equitable meeting experience for people. And we’re looking at technology solutions around immersive audio. How can you maybe feel like you’re a bit more engaged in the meeting, kind of creating some realistic virtual experiences, if you will? There are just so many opportunities in front of us, and I can picture a day when you walk into a room and you tell the room, “Hey, call Genevieve. We’re going to have a meeting for an hour, and we might need to have Megan on call to come in at a certain time.”

And all of this will just be very automatic, very seamless, and we’ll be able to see each other and talk at the same time. And this isn’t years away. This is happening really, really quickly. And I do think it’s a really exciting time for audio and just all together collaboration in our industry.

Megan: Absolutely. Sounds like there’s plenty of reason to be optimistic. Thank you both so much.

That was Chris Schyvinck, President and CEO at Shure. And Genevieve Juillard, CEO at IDC, whom I spoke with from Brighton, England.

That’s it for this episode of Business Lab. I’m your host, Megan Tatum. I’m a contributing editor at Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts. And if you enjoy this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review, and this episode was produced by Giro Studios. Thanks for listening.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Creating psychological safety in the AI era

Rolling out enterprise-grade AI means climbing two steep cliffs at once: first, understanding and implementing the tech itself; and second, creating the cultural conditions under which employees can maximize its value. While the technical hurdles are significant, the human element can be even more consequential; fear and ambiguity can stall the momentum of even the most promising initiatives.

Psychological safety—feeling free to express opinions and take calculated risks without worrying about career repercussions—is essential for successful AI adoption. In psychologically safe workplaces, employees are empowered to challenge assumptions and raise concerns about new tools without fear of reprisal. This is nothing short of a necessity when introducing a nascent and profoundly powerful technology that still lacks established best practices.

“Psychological safety is mandatory in this new era of AI,” says Rafee Tarafdar, executive vice president and chief technology officer at Infosys. “The tech itself is evolving so fast—companies have to experiment, and some things will fail. There needs to be a safety net.”

To gauge how psychological safety influences success with enterprise-level AI, MIT Technology Review Insights conducted a survey of 500 business leaders. The findings reveal high self-reported levels of psychological safety, but also suggest that fear still has a foothold. Anecdotally, industry experts highlight a reason for the disconnect between rhetoric and reality: while organizations may publicly promote a “safe to experiment” message, deeper cultural undercurrents can counteract that intent.

Building psychological safety requires a coordinated, systems-level approach, and human resources (HR) alone cannot deliver such transformation. Instead, enterprises must deeply embed psychological safety into their collaboration processes.

Key findings for this report include:

  • Companies with experiment-friendly cultures have greater success with AI projects. The majority of executives surveyed (83%) believe a company culture that prioritizes psychological safety measurably improves the success of AI initiatives. Four in five leaders agree that organizations fostering such safety are more successful at adopting AI, and 84% have observed connections between psychological safety and tangible AI outcomes.
  • Psychological barriers are proving to be greater obstacles to enterprise AI adoption than technological challenges. Encouragingly, nearly three-quarters (73%) of respondents indicated they feel safe to provide honest feedback and express opinions freely in their workplace. Still, a significant share (22%) admit they’ve hesitated to lead an AI project because they might be blamed if it misfires.
  • Achieving psychological safety is a moving target for many organizations. Fewer than half of leaders (39%) rate their organization’s current level of psychological safety as “very high.” Another 48% report a “moderate” degree of it. This may mean that some enterprises are pursuing AI adoption on cultural foundations that are not yet fully stable.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.