Going beyond pilots with composable and sovereign AI

Today marks an inflection point for enterprise AI adoption. Despite billions invested in generative AI, only 5% of integrated pilots deliver measurable business value and nearly one in two companies abandons AI initiatives before reaching production.

The bottleneck is not the models themselves. What’s holding enterprises back is the surrounding infrastructure: Limited data accessibility, rigid integration, and fragile deployment pathways prevent AI initiatives from scaling beyond early LLM and RAG experiments. In response, enterprises are moving toward composable and sovereign AI architectures that lower costs, preserve data ownership, and adapt to the rapid, unpredictable evolution of AI—a shift IDC expects 75% of global businesses to make by 2027.

From concept to production reality

AI pilots almost always work, and that’s the problem. Proofs of concept (PoCs) are meant to validate feasibility, surface use cases, and build confidence for larger investments. But they thrive in conditions that rarely resemble the realities of production.

Source: Compiled by MIT Technology Review Insights with data from Informatica, CDO Insights 2025 report, 2026

“PoCs live inside a safe bubble,” observes Cristopher Kuehl, chief data officer at Continent 8 Technologies. Data is carefully curated, integrations are few, and the work is often handled by the most senior and motivated teams.

The result, according to Gerry Murray, research director at IDC, is not so much pilot failure as structural mis-design: Many AI initiatives are effectively “set up for failure from the start.”

Download the article.

Securing digital assets as crypto crime surges

In February 2025, cyberattackers thought to be linked to North Korea executed a sophisticated supply chain attack on cryptocurrency exchange Bybit. By targeting its infrastructure and multi-signature security process, hackers managed to steal more than $1.5 billion worth of Ethereum in the largest known digital-asset theft to date.

The ripple effects were felt across the cryptocurrency market, with the price of Bitcoin dropping 20% from its record high in January. And the massive losses put 2025 on track to be the worst year in history for cryptocurrency theft.

Bitcoin, Ethereum, and stablecoins have established themselves as benchmark monetary vehicles, and, despite volatility, their values continue to rise. In October 2025, the value of cryptocurrency and other digital assets topped $4 trillion.

Yet, with this burgeoning value and liquidity comes more attention from cybercriminals and digital thieves. The Bybit attack demonstrates how focused sophisticated attackers are on finding ways to break the security measures that guard the crypto ecosystem, says Charles Guillemet, chief technology officer of Ledger, a provider of secure signer platforms.

“The attackers were very well organized, they have plenty of money, and they are spending a lot of time and resources trying to attack big stuff, because they can,” he says. “In terms of opportunity costs, it’s a big investment, but if at the end they earn $1.4 billion it makes sense to do this investment.”

But it also demonstrates how the crypto threat landscape has pitfalls not just for the unwary but for the tech savvy too. On the one hand, cybercriminals are using techniques like social engineering to target end users. On the other, they are increasingly looking for vulnerabilities to exploit at different points in the cryptocurrency infrastructure.

Historically, owners of digital assets have had to stand against these attackers alone. But now, cybersecurity firms and cryptocurrency-solution providers are offering new solutions, powered by in-depth threat research.

A treasure trove for attackers

One of the advantages of cryptocurrency is self-custody. Users can store their private key—the critical piece of alphanumeric code that proves ownership and grants full control over digital assets—in either a software or hardware wallet to safeguard it.

But users must put their faith in the security of the wallet technology, and, because the data is the asset, if the keys are lost or forgotten, the value too can be lost.

“If I hack your credit card, what is the issue? You will call your bank, and they will manage to revert the operations,” says Vincent Bouzon, head of the Donjon research team at Ledger. “The problem with crypto is, if something happens, it’s too late. So we must eliminate the possibility of vulnerabilities and give users security.”

Increasingly, attackers are focusing on digital assets known as stablecoins, a form of cryptocurrency that is pegged to the value of a hard asset, such as gold, or a fiat currency, like the US dollar.

Stablecoins rely on smart contracts—digital contracts stored on blockchain that use pre-set code to manage issuance, maintain value, and enforce rules—that can be vulnerable to different classes of attacks, often taking advantage of users’ credulity or lack of awareness about the threats. Post-theft countermeasures, such as freezing the transfer of coins and blacklisting of addresses, can lessen the risk with these kinds of attacks, however.
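To make those countermeasures concrete, here is a minimal sketch in Python (illustrative only, not a real smart contract, and with hypothetical names) of how an issuer-side ledger can enforce blacklisting and transfer freezes:

```python
class StablecoinLedger:
    """Toy model of issuer-side controls on a stablecoin ledger."""

    def __init__(self):
        self.balances = {}       # address -> token balance
        self.blacklist = set()   # addresses barred from transacting
        self.frozen = False      # global transfer-freeze switch

    def mint(self, addr: str, amount: int) -> None:
        self.balances[addr] = self.balances.get(addr, 0) + amount

    def transfer(self, src: str, dst: str, amount: int) -> None:
        if self.frozen:
            raise PermissionError("transfers are frozen")
        if src in self.blacklist or dst in self.blacklist:
            raise PermissionError("address is blacklisted")
        if self.balances.get(src, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount
```

After a theft is detected, the issuer adds the attacker’s address to the blacklist, stranding the stolen coins; major fiat-backed stablecoins implement comparable blacklist and pause functions in their on-chain contract code.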

Understanding vulnerabilities

Software-based wallets, also known as “hot wallets,” which are applications or programs that run on a user’s computer, phone, or web browser, are often a weak link. While their connection to the internet makes them convenient for users, it also makes them more readily accessible to hackers.

“If you are using a software wallet, by design it’s vulnerable because your keys are stored inside your computer or inside your phone. And unfortunately, a phone or a computer is not designed for security,” says Guillemet.

The rewards for exploiting this kind of vulnerability can be extensive. Hackers who stole credentials in a targeted attack on encrypted password manager application LastPass in 2022 managed to transfer millions of dollars’ worth of cryptocurrency away from victims over the following years.

Even hardware-based wallets, which often resemble USB drives or key fobs and are more secure than their software counterparts since they are completely offline, can have vulnerabilities that a diligent attacker might find and exploit.

Tactics include side-channel attacks, for example, in which a cybercriminal observes a system’s physical side effects, such as timing, power, or electromagnetic and acoustic emissions, to gain information about the implementation of an algorithm.
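A timing side channel is easiest to see in code. The sketch below (illustrative only, not drawn from any real wallet) shows why a naive secret comparison leaks information through its running time, and how a constant-time comparison avoids it:

```python
import hmac

def naive_compare(secret: bytes, guess: bytes) -> bool:
    """Leaky: returns as soon as a byte differs, so the elapsed time
    reveals how many leading bytes of the guess were correct."""
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False  # early exit: timing depends on the secret
    return True

def safe_compare(secret: bytes, guess: bytes) -> bool:
    """Mitigation: constant-time comparison examines every byte
    regardless of where the first mismatch occurs."""
    return hmac.compare_digest(secret, guess)
```

Hardware side channels such as power and electromagnetic emissions exploit the same principle: computation that varies with secret data leaves measurable traces.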

Guillemet explains that cybersecurity providers building digital asset solutions, such as wallets, need to help minimize the burden on the users by building security features and providing education about enhancing defense.

For businesses to protect cryptocurrency, tokens, critical documents, or other digital assets, this could be a platform that allows multi-stakeholder custody and governance, supports software and hardware protections, and allows for visibility of assets and transactions through Web3 checks.

Developing proactive security measures

As the threat landscape evolves at breakneck speed, in-depth research conducted by attack labs like Ledger Donjon can help security firms keep pace. The team at Ledger Donjon is working to understand how to proactively secure the digital asset ecosystem and set global security standards.

Key projects include the team’s offensive security research, which uses ethical and white hat hackers to simulate attacks and uncover weaknesses in hardware wallets, cryptographic systems, and infrastructure.

In November 2022, the Donjon team discovered a vulnerability in Web3 wallet platform Trust Wallet, which had been acquired by Binance. They found that the seed-phrase generation was not random enough, allowing the team to compute all possible private keys and putting as much as $30 million stored in Trust Wallet accounts at risk, says Bouzon. “The entropy was not high enough, the entropy was only 4 billion. It was huge, but not enough,” he says.
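Some rough arithmetic shows why a seed space of “only 4 billion” is fatal while standard seed entropy is not. The attacker guess rate below is an assumption for illustration:

```python
import secrets

WEAK_SEED_SPACE = 2 ** 32      # ~4.3 billion possibilities, the class of flaw described
STRONG_SEED_SPACE = 2 ** 128   # entropy of a standard 12-word seed phrase

GUESSES_PER_SECOND = 100_000_000  # assumed attacker speed, for illustration

# Exhausting the weak space takes well under a minute at this rate;
# the 128-bit space is out of reach by dozens of orders of magnitude.
weak_seconds = WEAK_SEED_SPACE / GUESSES_PER_SECOND
strong_years = STRONG_SEED_SPACE / GUESSES_PER_SECOND / (60 * 60 * 24 * 365)

# Correct practice: draw seed entropy from the operating system's CSPRNG.
good_entropy = secrets.randbits(128)
```

At the assumed rate, the weak space falls in roughly 43 seconds, while the 128-bit space would take on the order of 10^23 years.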

To enhance overall safety there are three key principles that digital-asset protection platforms should apply, says Bouzon. First, security providers should create secure algorithms to generate the seed phrases for private keys and conduct in-depth security audits of the software. Second, users should use hardware wallets with a secure screen instead of software wallets. And finally, any smart contract transaction should include visibility into what is being signed to avoid blind signing attacks.

Ultimately, the responsibility for safeguarding these valuable assets lies with both digital asset solution providers and the users themselves. As the value of cryptocurrencies continues to grow, so too will the threat landscape, as hackers keep attempting to circumvent new security measures. While digital asset providers, security firms, and wallet solutions must work to build strong and simple protection to support the cryptocurrency ecosystem, users must also seek out the information and education they need to proactively protect themselves and their wallets.

Learn more about how to secure digital assets in the Ledger Academy.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Mitigating emissions from air freight: Unlocking the potential of SAF with book and claim

Emissions from air freight have increased by 25% since 2019, according to a 2024 analysis by environmental advocacy organization Stand.Earth.

The researchers found that the expansion of cargo-only fleets to transport goods during the pandemic — as air travel halted and slower freight modes faced disruption while demand for rapid delivery soared — has led to a yearly increase of almost 20 million tonnes of carbon dioxide, bringing total annual emissions from air freight to 93.8 million tonnes.

And though fleet modernization and operational improvements by freight operators have contributed to ongoing decarbonization efforts, sustainable aviation fuel (SAF) looks set to be instrumental in helping the sector reduce its environmental footprint over the long term.

When used neat, or pure and unblended, SAF can reduce life cycle greenhouse gas emissions from aviation by as much as 80% relative to conventional fuel. It’s why the International Air Transport Association (IATA) estimates that SAF could account for as much as 65% of the emissions reduction the sector needs to reach net zero by 2050.

For Christoph Wolff, CEO of the Smart Freight Centre, “SAF is the main pathway” to decarbonization across both freight and the wider aviation ecosystem.

“The great thing about SAF is it’s chemically identical to Jet A fuel,” he says. “You can blend it [which means] you have a pathway to ramp it up. You can start small and you can scale it. By scaling it there is the promise or the hope that the price comes down.”

Cost, however, is a significant barrier to broader adoption: SAF is at least twice the price of conventional jet fuel.

And it isn’t the only one standing between SAF and wider penetration.

Bridging the gap between a concentrated supply of SAF and global demand also remains a major hurdle.

Though the number of verified SAF outlets has increased from fewer than 20 locations in 2021 to 114 as of April 2025, according to sustainability solutions framework 4Air, that accounts for only 92 airports worldwide out of more than 40,000.

“SAF is central to the decarbonization of the aviation sector,” believes Raman Ojha, president of Shell Aviation. “Having said that, adoption and penetration of SAF hasn’t really picked up massively. It’s not due to lack of production capacity, but there are lots of things that are at play. And book and claim in that context helps to bridge that gap.”

Bridging the gap with book and claim

Book and claim is a chain of custody model, where the flow of administrative records is not necessarily connected to the physical product through the supply chain (source: ISO 22095:2020).

Book and claim potentially enables airlines and corporations to access the life cycle GHG emissions reduction benefits of SAF relative to conventional jet fuel even when SAF is not physically available at their location; this model helps bridge the gap between that concentrated supply and global demand, until SAF’s availability improves.
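The accounting logic can be sketched in a few lines of Python (names and units are hypothetical): environmental attributes are booked where SAF is physically uplifted, then retired by a claimant anywhere else, with the one hard rule that nothing can be claimed twice:

```python
class BookAndClaimRegistry:
    """Toy model of a book-and-claim registry for SAF attributes."""

    def __init__(self):
        self.available = {}   # batch_id -> tonnes of CO2e reduction booked
        self.retired = {}     # claimant -> tonnes claimed (retired forever)

    def book(self, batch_id: str, tonnes_co2e: float) -> None:
        """Record attributes when SAF physically enters the fuel supply."""
        self.available[batch_id] = self.available.get(batch_id, 0.0) + tonnes_co2e

    def claim(self, batch_id: str, claimant: str, tonnes_co2e: float) -> None:
        """Retire attributes for a claimant; no physical fuel moves."""
        if self.available.get(batch_id, 0.0) < tonnes_co2e:
            raise ValueError("cannot claim more than was booked")  # no double counting
        self.available[batch_id] -= tonnes_co2e
        self.retired[claimant] = self.retired.get(claimant, 0.0) + tonnes_co2e
```

Real systems layer auditing, verification, and blockchain-backed traceability on top of this core decoupling, but the separation of the administrative record from the physical product is the essence of the model.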

“To be bold, without book and claim, no short-term science-based target will be achieved,” says Bettina Paschke, vice president of ESG accounting, reporting and controlling at DHL Express. “Book and claim is essential to achieving science-based targets.”

“SAF production facilities are not everywhere,” she reiterates. “They’re very focused on one location, and if a customer wants to fulfil a mass balance obligation, SAF would need to be shipped around the world just to be at that airport for that customer. That would be very complicated, and very unrealistic.” It would also, counterintuitively, increase total emissions. By using book and claim instead, air freight operators can unlock the life cycle greenhouse gas emissions reduction benefits of SAF relative to conventional jet fuel now, without waiting for supply to broaden. “It might no longer be needed when we have SAF product facilities at each airport in the future,” she points out. “But at the moment, that’s not the case.”

At DHL itself, the mechanism has become central to achieving its own three interconnected sustainability pillars, which focus on decarbonizing logistics supply chains, supporting customers toward their decarbonization goals, and ensuring credible emission claims can be shared along the value chain.

Demonstrating the importance of a credible and viable framework for book and claim systems is also what inspired the 2022 launch of Shell’s Avelia, one of the first blockchain-powered digital SAF book and claim solutions for aviation, which expanded in 2024 to encompass air freight in addition to business travel. Depending on the offering, Avelia gives freight forwarders the opportunity to share the life cycle greenhouse gas emissions reduction benefits of SAF relative to conventional jet fuel across the value chain with shippers using their services.

“It’s also backed by a physical supply chain, which gives our customers — whether those be corporates or freight forwarders or even airlines — a peace of mind that the SAF has been injected at a certain airport, it’s been used and environmental attributes, with the help of blockchain, have been tracked to where they’re getting retired,” says Ojha.

He adds: “The most important or critical part is the transparency that it’s providing to our customers to be sure that they’re not saying something which they can’t confidently stand behind.”

Moving beyond early adoption

To scale up SAF via book and claim and help make it a more commercially viable lower-carbon solution, its adoption will need to be a coordinated “ecosystem play,” says Wolff. That includes early adopters, such as DHL, inspiring action from peers; solution providers, such as Shell, working with various stakeholders to drive joint advocacy; and industry associations, like the Smart Freight Centre, creating the required frameworks, educational resources, and industry alignment.

An active book and claim community made up of many forward-thinking advocates is already driving much of this work forward with a common goal to develop greater standardization and consensus, Wolff points out. “It helps to make sure all definitions on the system are compatible and they can talk to one another, provide educational support, and [also that] there’s a repository of transactions so that it can be documented in a way that people can see and think, ‘oh this is how we do it.’ There are some early adopters that are very experienced, but it needs a lot more people for it to get comfortable.”

In early 2024, discussions were held with a diverse group of expert book and claim stakeholders to develop and refine 11 key principles and best practices for book and claim models. These represent an aligned set of principles informed by practical successes and challenges faced by practitioners working to decarbonize the heavy transport sector.

Adherence to such a framework is crucial given that book and claim is not yet accepted by the Greenhouse Gas (GHG) Protocol nor the Science Based Targets Initiative (SBTi) as a recognized model for reducing greenhouse gas emissions — though there are hopes that might change.

“The industrialization of book and claim delivery systems is key to credibility and recognition,” says Wolff. “The Greenhouse Gas Protocol and the Science Based Targets Initiative are making steps in recognizing that. There’s a pathway that the Smart Freight Centre is very closely involved in the technical working groups for [looking] to build such a system where, in addition to physical inventory, you also pursue market-based inventories.”

Paschke urges companies not to sit back and wait for policy to change before taking action, though. “The solution is there,” she says. “There are companies like DHL that are making huge upfront investments, and every single contribution helps to scale the industry and give a strong signal to the eco-space.”

As pressure to accelerate decarbonization gains pace, it’s critical that air freight operators consider this now, agrees Ojha. “Don’t wait for perfection in guidelines, regulations, or platforms — act now,” he says. “That’s very, very critical. Second, learn by doing and join hands with others. Don’t try to do everything independently or in-house.

“Third, make use of registries and platforms, such as Avelia, that can give credibility. Join them, utilize them, and leverage them so that you won’t have to establish auditability from scratch.

“And fourth, don’t look at book and claim as a means for acquiring a certificate for environmental attributes. Think in terms of your decarbonization commitment and think of this as a tool for exposure management. Think in terms of the bigger picture.”

That bigger picture being a significant sector-wide push toward faster decarbonization — and turning the tide on emissions’ steep ascent.

Watch the full webcast.


This content is produced by MIT Technology Review Insights in association with Avelia. Avelia is a Shell owned solution and brand that was developed with support from Amex GBT, Accenture and Energy Web Foundation. The views from individuals not affiliated with Shell are their own and not those of Shell PLC or its affiliates. Cautionary note | Shell Global

Using unstructured data to fuel enterprise AI success

Enterprises are sitting on vast quantities of unstructured data, from call records and video footage to customer complaint histories and supply chain signals. Yet this invaluable business intelligence, estimated to make up as much as 90% of the data generated by organizations, historically remained dormant because its unstructured nature makes analysis extremely difficult.

But if managed and centralized effectively, this messy and often voluminous data is not only a precious asset for training and optimizing next-generation AI systems, enhancing their accuracy, context, and adaptability, it can also deliver profound insights that drive real business outcomes.

A compelling example of this can be seen in the Charlotte Hornets, a US NBA basketball team that successfully leveraged untapped video footage of gameplay—previously too copious to watch and too unstructured to analyze—to identify a new competition-winning recruit. However, before that data could deliver results, analysts working for the team first had to overcome the critical challenge of preparing the raw, unstructured footage for interpretation.

The challenges of organizing and contextualizing unstructured data

Unstructured data presents inherent difficulties due to its widely varying format, quality, and reliability, requiring specialized tools like natural language processing and AI to make sense of it.

Every organization’s pool of unstructured data also contains domain-specific characteristics and terminology that generic AI models may not automatically understand. A financial services firm, for example, cannot simply use a general language model for fraud detection. Instead, it needs to adapt the model to understand regulatory language, transaction patterns, industry-specific risk indicators, and unique company context like data policies.

The challenge intensifies when integrating multiple data sources with varying structures and quality standards, as teams may struggle to distinguish valuable data from noise.

How computer vision gave the Charlotte Hornets an edge 

When the Charlotte Hornets set out to identify a new draft pick for their team, they turned to AI tools including computer vision to analyze raw game footage from smaller leagues, which exist outside the tiers of the game normally visible to NBA scouts and, therefore, are not as readily available for analysis.

“Computer vision is a tool that has existed for some time, but I think the applicability in this age of AI is increasing rapidly,” says Jordan Cealey, senior vice president at AI company Invisible Technologies, which worked with the Charlotte Hornets on this project. “You can now take data sources that you’ve never been able to consume, and provide an analytical layer that’s never existed before.”

By deploying a variety of computer vision techniques, including object and player tracking, movement pattern analysis, and geometric mapping of points on the court, the team was able to extract kinematic data, such as the coordinates of players during movement, and generate metrics like speed, acceleration, and explosiveness.
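Deriving such metrics from raw coordinates is simple finite differencing. A minimal sketch (frame rate and units are assumed for illustration; this is not the team’s actual pipeline):

```python
def speeds(track, fps=25.0):
    """Per-interval speed (court units/s) from a list of (x, y) positions,
    one position per video frame."""
    dt = 1.0 / fps
    out = []
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        out.append(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt)
    return out

def accelerations(track, fps=25.0):
    """Per-interval change in speed over time -- a rough 'explosiveness' proxy."""
    dt = 1.0 / fps
    v = speeds(track, fps)
    return [(b - a) / dt for a, b in zip(v, v[1:])]
```

In practice the tracked coordinates are noisy, so production pipelines typically smooth or filter the trajectories before differentiating; the core computation, however, is just this.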

This provided the team with rich, data-driven insights about individual players, helping them to identify and select a new draft pick whose skills and technique filled a hole in the Charlotte Hornets’ own capabilities. The chosen athlete went on to be named the most valuable player at the 2025 NBA Summer League and helped the team win their first summer championship title.

Annotation of a basketball match

Before data from game footage can be used, it needs to be labeled so the model can interpret it. The x and y coordinates of the individual players, seen here in bounding boxes, as well as other features in the scene, are annotated so the model can identify individuals and track their movements through time.

Taking AI pilot programs into production 

From this successful example, several lessons can be learned. First, unstructured data must be prepared for AI models through intuitive forms of collection, and the right data pipelines and management records. “You can only utilize unstructured data once your structured data is consumable and ready for AI,” says Cealey. “You cannot just throw AI at a problem without doing the prep work.” 

For many organizations, this might mean they need to find partners that offer the technical support to fine-tune models to the context of the business. The traditional technology consulting approach, in which an external vendor leads a digital transformation plan over a lengthy timeframe, is not fit for purpose here as AI is moving too fast and solutions need to be configured to a company’s current business reality. 

Forward-deployed engineers (FDEs) are an emerging partnership model better suited to the AI era. Initially popularized by Palantir, the FDE model connects product and engineering capabilities directly to the customer’s operational environment. FDEs work closely with customers on-site to understand the context behind a technology initiative before a solution is built. 

“We couldn’t do what we do without our FDEs,” says Cealey. “They go out and fine-tune the models, working with our human annotation team to generate a ground truth dataset that can be used to validate or improve the performance of the model in production.”
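One standard way to score detections against such a ground-truth dataset (our illustration; the source does not specify Invisible’s exact metric) is intersection over union, computed per pair of bounding boxes:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A detection is typically counted as correct when its IoU with the matching human-annotated box exceeds a threshold such as 0.5, giving a concrete pass/fail signal for validating or improving a model in production.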

Second, data needs to be understood within its own context, which requires models to be carefully calibrated to the use case. “You can’t assume that an out-of-the-box computer vision model is going to give you better inventory management, for example, by taking that open source model and applying it to whatever your unstructured data feeds are,” says Cealey. “You need to fine-tune it so it gives you the data exports in the format you want and helps your aims. That’s where you start to see high-performative models that can then actually generate useful data insights.” 

For the Hornets, Invisible used five foundation models, which the team fine-tuned to context-specific data. This included teaching the models to understand that they were “looking at” a basketball court as opposed to, say, a football field; to understand how a game of basketball works differently from any other sport the model might have knowledge of (including how many players are on each team); and to understand how to spot rules like “out of bounds.” Once fine-tuned, the models were able to capture subtle and complex visual scenarios, including highly accurate object detection, tracking, postures, and spatial mapping.

Lastly, while the AI technology mix available to companies changes by the day, they cannot eschew old-fashioned commercial discipline: clear goals. Without clarity on the business purpose, AI pilot programs can easily turn into open-ended, meandering research projects that prove expensive in terms of compute, data costs, and staffing.

“The best engagements we have seen are when people know what they want,” Cealey observes. “The worst is when people say ‘we want AI’ but have no direction. In these situations, they are on an endless pursuit without a map.”


Deploying a hybrid approach to Web3 in the AI era

When the concept of “Web 3.0” first emerged about a decade ago, the idea was clear: Create a more user-controlled internet that lets you do everything you can now, except without servers or intermediaries to manage the flow of information.

Where Web2, which emerged in the early 2000s, relies on centralized systems to store data and supply compute, all owned—and monetized—by a handful of global conglomerates, Web3 turns that structure on its head. Instead, data and compute are decentralized through technologies like blockchain and peer-to-peer networks.

What was once a futuristic concept is quickly becoming a more concrete reality, even at a time when Web2 still dominates. Six out of ten Fortune 500 companies are exploring blockchain-based solutions, most taking a hybrid approach that combines traditional Web2 business models and infrastructure with the decentralized technologies and principles of Web3.

Popular use cases include cloud services, supply chain management, and, most notably, financial services. In fact, at one point, the daily volume of transactions processed on decentralized finance exchanges exceeded $10 billion.

Gaining a Web3 edge

Among the advantages of Web3 for the enterprise are greater ownership and control of sensitive data, says Erman Tjiputra, founder and CEO of the AIOZ Network, which is building infrastructure for Web3, powered by decentralized physical infrastructure networks (DePIN), blockchain-based systems that govern physical infrastructure assets.

More cost-effective compute is another benefit, as is enhanced security and privacy as the cyberattack landscape grows more hostile, he adds. And it could even help protect companies from outages caused by a single point of failure, which can lead to downtime, data loss, and revenue deficits.

But perhaps the most exciting opportunity, says Tjiputra, is the ability to build and scale AI reliably and affordably. By leveraging a people-powered internet infrastructure, companies can far more easily access—and contribute to—shared resources like bandwidth, storage, and processing power to run AI inference, train models, and store data, all while using familiar developer tooling and open, usage-based incentives.

“We’re in a compute crunch where requirements are insatiable, and Web3 creates this ability to benefit while contributing,” explains Tjiputra.

In 2025, AIOZ Network launched a distributed compute platform and marketplace where developers and enterprises can access and monetize AI assets, and run AI inference or training on AIOZ Network’s more than 300,000 contributing devices. The model allows companies to move away from opaque datasets and models and scale flexibly, without centralized lock-in.

Overcoming Web3 deployment challenges

Despite the promise, it is still early days for Web3, and core systemic challenges are leaving senior leadership and developers hesitant about its applicability at scale.

One hurdle is a lack of interoperability. The current fragmentation of blockchain networks creates a segregated ecosystem that makes it challenging to transfer assets or data between platforms. This often complicates transactions and introduces new security risks due to the reliance on mechanisms such as cross-chain bridges. These are tools that allow asset transfers between platforms but which have been shown to be vulnerable to targeted attacks.

“We have countless blockchains running on different protocols and consensus models,” says Tjiputra. “These blockchains need to work with each other so applications can communicate regardless of which chain they are on. This makes interoperability fundamental.”

Regulatory uncertainty is also a challenge. Outdated legal frameworks can sit at odds with decentralized infrastructures, especially when it comes to compliance with data protection and anti-money laundering regulations.

“Enterprises care about verifiability and compliance as much as innovation, so we need frameworks where on-chain transparency strengthens accountability instead of adding friction,” Tjiputra says.

And this is compounded by user experience (UX) challenges, says Tjiputra. “The biggest setback in Web3 today is UX,” he says. “For example, in Web2, if I forget my bank username or password, I can still contact the bank, log in and access my assets. The trade-off in Web3 is that, should that key be compromised or lost, we lose access to those assets. So, key recovery is a real problem.”

Building a bridge to Web3

Although such systemic challenges won’t be solved overnight, by leveraging DePIN networks, enterprises can bridge the gap between Web2 and Web3, without making a wholesale switch. This can minimize risk while harnessing much of the potential.

AIOZ Network’s own ecosystem includes capacity for media streaming, AI compute, and distributed storage that can be plugged into an existing Web2 tech stack. “You don’t need to go full Web3,” says Tjiputra. “You can start by plugging distributed storage into your workflow, test it, measure it, and see the benefits firsthand.”

The AIOZ Storage solution, for example, offers scalable distributed object storage by leveraging the global network of contributor devices on AIOZ DePIN. It is also compatible with existing storage systems or commonly used web application programming interfaces (APIs).

“Say we have a programmer or developer who uses Amazon S3 Storage or REST APIs, then all they need to do is just repoint the endpoints,” explains Tjiputra. “That’s it. It’s the same tools, it’s really simple. Even with media, with a single one-stop shop, developers can do transcoding and streaming with a simple REST API.”
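In practice, “repointing the endpoints” for an S3-compatible store usually means changing a single configuration value while the rest of the storage code stays untouched. A minimal sketch of that idea, assuming a boto3-style client; the endpoint URL, bucket name, and credentials below are placeholders for illustration, not real AIOZ values:

```python
def s3_compatible_config(endpoint_url: str, access_key: str, secret_key: str) -> dict:
    """Build keyword arguments for an S3 client so that only the
    endpoint changes; upload/download calls stay exactly as before."""
    return {
        "endpoint_url": endpoint_url,          # the only value that differs from stock AWS S3
        "aws_access_key_id": access_key,
        "aws_secret_access_key": secret_key,
    }

# Placeholder endpoint and credentials, for illustration only.
cfg = s3_compatible_config("https://storage.example-depin.net", "ACCESS_KEY", "SECRET_KEY")

# With boto3 installed, existing application code would then run unchanged:
#   s3 = boto3.client("s3", **cfg)
#   s3.upload_file("report.pdf", "my-bucket", "report.pdf")
print(cfg["endpoint_url"])
```

The same pattern applies to S3-compatible tooling more broadly (the AWS CLI, for instance, accepts an `--endpoint-url` flag), which is what makes this kind of migration low-risk: the application code and tools are reused as-is.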

Built on Cosmos, a network of hundreds of different blockchains that can communicate with each other, and a standardized framework enabled by the Ethereum Virtual Machine (EVM), AIOZ Network has also prioritized interoperability. “Applications shouldn’t care which chain they’re on,” says Tjiputra. “Developers should target APIs without worrying about consensus mechanisms. That’s why we built on Cosmos and EVM—interoperability first.”
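EVM compatibility of this kind means a developer can talk to the chain through the standard Ethereum JSON-RPC interface, regardless of the consensus layer underneath. As a sketch of that point, the snippet below builds a standard `eth_blockNumber` request body; the RPC endpoint a deployment would POST it to is chain-specific and not shown:

```python
import json

def eth_rpc_request(method: str, params: list, request_id: int = 1) -> str:
    """Serialize a standard Ethereum JSON-RPC 2.0 request body.

    Any EVM-compatible chain, whatever its consensus model, accepts this
    same wire format, which is the practical meaning of EVM interoperability."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": request_id,
    })

body = eth_rpc_request("eth_blockNumber", [])
# POSTing `body` to an EVM chain's RPC endpoint returns the latest
# block number as a hex string in the "result" field.
print(body)
```

Because the request format is identical across EVM chains, tooling written against one chain can generally be pointed at another by swapping only the RPC URL.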

This hybrid model, which allows enterprises to use both Web2 and Web3 advantages in tandem, underpins what Tjiputra sees as the longer-term ambition for the much-hyped next iteration of the internet.

“Our vision is a truly peer-to-peer foundation for a people-powered internet, one that minimizes single points of failure through multi-region, multi-operator design,” says Tjiputra. “By distributing compute and storage across contributors, we gain both cost efficiency and end-to-end security by default.

“Ideally, we want to evolve the internet toward a more people-powered model, but we’re not there yet. We’re still at the starting point and growing.”

Indeed, Web3 isn’t quite snapping at the heels of the world’s Web2 giants, but its commercial advantages in an era of AI have become much harder to ignore. And with DePIN bridging the gap, enterprises and developers can step into that potential while keeping one foot on surer ground.

To learn more from AIOZ Network, you can read the AIOZ Network Vision Paper.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The overlooked driver of digital transformation

When business leaders talk about digital transformation, their focus often jumps straight to cloud platforms, AI tools, or collaboration software. Yet, one of the most fundamental enablers of how organizations now work, and how employees experience that work, is often overlooked: audio.

As Genevieve Juillard, CEO of IDC, notes, the shift to hybrid collaboration made every space, from corporate boardrooms to kitchen tables, meeting-ready almost overnight. In the scramble, audio quality often lagged, creating what research now shows is more than a nuisance. Poor sound can alter how speakers are perceived, making them seem less credible or even less trustworthy.

“Audio is the gatekeeper of meaning,” stresses Juillard. “If people can’t hear clearly, they can’t understand you. And if they can’t understand you, they can’t trust you, and they can’t act on what you said. And no amount of sharp video can fix that.” Without clarity, comprehension and confidence collapse.

For Shure, which has spent a century advancing sound technology, the implications extend far beyond convenience. Chris Schyvinck, Shure’s president and CEO, explains that ineffective audio undermines engagement and productivity. Meetings stall, decisions slow, and fatigue builds.

“Use technology to make hybrid meetings seamless, and then be clear on which conversations truly require being in the same physical space,” says Juillard. “If you can strike that balance, you’re not just making work more efficient, you’re making it more sustainable, you’re also making it more inclusive, and you’re making it more resilient.”

When audio is prioritized on equal footing with video and other collaboration tools, organizations can gain something rare: frictionless communication. That clarity ensures the machines listening in, from AI transcription engines to real-time translation systems, can deliver reliable results.

The research from Shure and IDC highlights two blind spots for leaders. First, buying decisions too often privilege price over quality, with costly consequences in productivity and trust. Second, organizations underestimate the stress poor sound imposes on employees, intensifying the cognitive load of already demanding workdays. Addressing both requires leaders to view audio not as a peripheral expense but as core infrastructure.

Looking ahead, audio is becoming inseparable from AI-driven collaboration. Smarter systems can already filter out background noise, enhance voices in real time, and integrate seamlessly into hybrid ecosystems.

“We should be able to provide improved accessibility and a more equitable meeting experience for people,” says Schyvinck.

For Schyvinck and Juillard, the future belongs to companies that treat audio transformation as an integral part of digital transformation, building workplaces that are more sustainable, equitable, and resilient.

This episode of Business Lab is produced in partnership with Shure.

Full Transcript

Megan Tatum: From MIT Technology Review, I’m Megan Tatum, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

This episode is produced in partnership with Shure.

As companies continue their journeys towards digital transformation, audio modernization is an often overlooked but key component of success. Clear audio is imperative not only for quality communication, but also for brand equity, both for internal and external stakeholders and even the company as a whole.

Two words for you: audio transformation.

My guests today are Chris Schyvinck, President and CEO at Shure. And Genevieve Juillard, CEO at IDC.

Welcome Chris and Genevieve.

Chris Schyvinck: It’s really nice to be here. Thank you very much.

Genevieve Juillard: Yeah, thank you so much for having us. Great to be here.

Megan Tatum: Thank you both so much for being here. Genevieve, we could start with you. Let’s start with some history perhaps for context. How would you describe the evolution of audio technology and how use cases and our expectations of audio have evolved? What have been some of the major drivers throughout the years and more recently, perhaps would you consider the pandemic to be one of those drivers?

Genevieve: It’s interesting. If you go all the way back to 1976, Norman Macrae of The Economist predicted that video chat would actually kill the office, that people would just work from home. Obviously, that didn’t happen then, but the core technology for remote collaboration has actually been around for decades. But until the pandemic, most of us only experienced it in very specific contexts. Offices had dedicated video conferencing rooms and most ran on expensive proprietary systems. And then almost overnight, everything including literally the kitchen table had to be AV ready. The cultural norms shifted just as fast. Before the pandemic, it was perfectly fine to keep your camera off in a meeting, and now that’s seen as disengaged or even rude. That shift is what normalized video conferencing and hybrid meetings.

But in a rush to equip a suddenly remote workforce, we hit two big problems: supply chain disruptions and a massive spike in demand. High-quality gear was hard to get so low-quality audio and video became the default. And here’s a key point. We now know from research that audio quality matters more than video quality for meeting outcomes. You can run a meeting without video, but you can’t run a meeting without clear audio. Audio is the gatekeeper of meaning. If people can’t hear clearly, they can’t understand you. And if they can’t understand you, they can’t trust you and they can’t act on what you said. And no amount of sharp video can fix that.

Megan: Oh, true. It’s fascinating, isn’t it? And Chris, Shure and IDC recently released some research titled “The Hidden Influencer: Rethinking Audio Could Impact Your Organization Today, Tomorrow, and Forever.” The research highlighted the importance of audio that Genevieve’s talking about in today’s increasingly virtual world. What did you glean from those results and did anything surprise you?

Chris: Yeah, well, the research certainly confirmed a lot of hunches we’ve had through the years. When you think about a company like Shure that’s been doing audio for 100 years, we just celebrated that anniversary this year.

Megan: Congratulations.

Chris: Our legacy business is more in the music and performance arena. And so just what Genevieve said in terms of, “Yeah, you can have a performance and look at somebody, but that’s like 10% of it, right? 90% is hearing that person sing, perform, and talk.” We’ve always, of course, from our perspective, understood that clean, clear, crisp audio is what is needed in any setting. When you translate what’s happening on the stage into a meeting or collaboration space at a corporation, we’ve thought that that is just equally as important.

And we always had this hunch that if people don’t have the good audio, they’re going to have fatigue, they’re going to get a little disengaged, and the whole meeting is going to become quite unproductive. The research just really amplified that hunch for us because it really depicted the fact that people not only get kind of frustrated and disengaged, they might actually start to distrust what the other person with bad audio is saying or just cast it in a different light. And the degree to which that frustration becomes almost personal was very surprising to us. Like I said, it validated some hunches, but it really put an exclamation point on it for us.

Megan: And Genevieve, based on the research results, I understand that IDC pulled together some recommendations for organizations. What is it that leaders need to know and what is the biggest blind spot for them to overcome as well?

Genevieve: The biggest blind spot is this. If your microphone has poor audio quality, like Chris said, people will literally perceive you as less intelligent and less trustworthy. And by the way, that’s not an opinion. It’s what the science says. But yet, when we surveyed first time business buyers, the number one factor they used to choose audio gear was price. However, for repeat buyers, the top factor flipped to audio quality. My guess is they learn the lesson the hard way. The second blind spot is to Chris’s point, it’s the stress that bad audio creates. Poor sound forces your brain to work harder to decode what’s being said. That’s a cognitive load and it creates stress. And over a full day of meetings, that stress adds up. Now, we don’t have long-term studies yet on the effects, but we do know that prolonged stress is something that every company should be working to reduce.

Good audio lightens that cognitive load. It keeps people engaged and it levels the playing field, whether you’re in a room or halfway across the world. And here’s one that’s often overlooked: bad audio can sabotage AI transcription tools. As AI becomes more and more central to everyday work, that starts to become really critical. If your audio isn’t clear, the transcription won’t be accurate. And there’s a world of difference between working with, for example, the consulting department and the insulting department, and that is an actual example from the field.

The bottom line is you fix the audio, you cut friction, you save time, and you make meetings more productive.

Megan: I mean, it’s just a huge game changer, isn’t it, really? I mean, and given that, Chris, in your experience across industries, are audio technologies being included in digital transformation strategies and also artificial intelligence implementation? Do we need a separate audio transformation perhaps?

Chris: Well, like I mentioned earlier, yes, people tend to initially focus on that visual platform, but increasingly the attention to audio is really coming into focus. And I’d hate to tear apart audio as a separate sort of strategy because at the same time, we, as an audio expert, are trying to really seamlessly integrate audio into the rest of the ecosystem. It really does need to be put on an equal footing with the rest of the components in that ecosystem. And to Genevieve’s point, as we are seeing audio and video systems with more AI functionalities, the importance of real-time translations that are being used, voice recognition, being able to attribute who said what in a meeting and take action items, it’s really, I think starting to elevate the importance of that clear audio. And it’s got to be part of a comprehensive, really collaboration plan that helps some company figure out what’s their whole digital transformation about. It just really has to be included in that comprehensive plan, but put on equal footing with the rest of the components in that system.

Megan: Yeah, absolutely. And in the broader landscape, Genevieve, in terms of discussing the importance of audio quality, what have you noticed across research projects about the effects of good and bad audio, not only from that company perspective, but from employee and client perspectives as well?

Genevieve: Well, let’s start with employees.

Megan: Sure.

Genevieve: Bad audio adds friction you don’t need, we’ve talked about this. When you’re straining to hear or make sense of what’s being said, your brain is burning energy on decoding instead of contributing. That frustration, it builds up, and by the end of the day, it hurts productivity. From a company perspective, the stakes get even higher. Meetings are where decisions happen or at least where they’re supposed to happen. And if people can’t hear clearly, decisions get delayed, mistakes creep in, and the whole process slows down. Poor audio doesn’t just waste time, it chips away at the ability to move quickly and confidently. And then there’s the client experience. So whether it’s in sales, customer service, or any external conversation, poor audio can make you sound less credible and yet less trustworthy. Again, that’s not my opinion. That’s what the research shows. So that’s quite a big risk when you’re trying to close a deal or solve a major problem.

The takeaway is good audio, it matters, it’s a multiplier. It makes meetings more productive and it can help decisions happen faster and client interactions be stronger.

Megan: It’s just so impactful, isn’t it, in so many different ways. I mean, Chris, how are you seeing these research results reflected as companies work through digital and AI transformations? What is it that leaders need to understand about what is involved in audio implementation across their organization?

Chris: Well, like I said earlier, I do think that audio is finally maybe getting its place in the spotlight a little bit up there with our cousins over in the video side. Audio, it’s not just a peripheral aspect anymore. It’s a very integral part of that sort of comprehensive collaboration plan I was talking about earlier. And we think about how we can contribute solutions that are really easier to use for our end users, because if you create something complicated, we were talking about the days gone by of walking into a room with a very complicated system, and you need to find the right person that knows how to run it. Increasingly, you just need to have some plug and play kind of solutions. We’re thinking about a more sustainable strategy for our solutions where we make really high-quality hardware. We’ve done that for a hundred years. People will come up to me and tell the story of the SM58 microphone they bought in 1980 and how they’re still using it every day.

We know how to do that part of it. If somebody is willing to make that investment upfront, put some high-quality hardware into their system, then we are getting to the point now where updates can be handled via software downloads or cloud connectivity. And just really being able to provide sort of a sustainable solution for people over time.

More broadly in our industry, we’re collaborating with other industry partners to go in that direction, to make something that’s very simple for anybody, whether they walk into a room or use their individual at-home setup. And I think we have the right industry groups, the right industry associations that can help make sure that the ecosystems have the proper standards, the right kind of ways to make sure everything is interoperable within a system. We’re all kind of heading in that direction with that end user in mind.

Megan: Fantastic. And when the internet of things was emerging, efforts began to create sort of these data ecosystems, it seems there’s an argument to be made that we need audio ecosystems as well. I wonder, Chris, what might an audio ecosystem look like and what would be involved in implementation?

Chris: Well, I think it does have to be part of that bigger ecosystem I was just talking about where we do collaborate with others in industry and we try to make sure that we’re all playing by the kind of same set of rules and protocols and standards and whatnot. And when you think about compatibility across all the devices that sit in a room or sit in your, again, maybe your at home setup, making sure that the audio quality is as good as it can be, that you can interoperate with everything else in the system. That’s just become very paramount in our day-to-day work here. Your hardware has to be scalable like I just alluded to a moment ago. You have to figure out how you can integrate with existing technologies, different platforms.

We were joking when we came into this session that when you’re going from the platform at your company, maybe you’re on Teams and you go into a Zoom setting or you go into a Google setting, you really have to figure out how to adapt to all those different sort of platforms that are out there. I think the ecosystem that we’re trying to build, we’re trying to be on that equal footing with the rest of the components in that system. And people really do understand that if you want to have extra functionalities in meetings and you want to be able to transcribe or take notes and all of that, that audio is an absolutely critical piece.

Megan: Absolutely. And speaking of all those different platforms and use cases that audio is so relevant to, Genevieve, that goes back to this idea that in audio, one size does not fit all and needs may change. How can companies also plan their audio implementations to be flexible enough to meet current needs and to be able to grow with future advancements?

Genevieve: I’m glad you asked this question. Even years after the pandemic, many companies are still trying to get the balance right between remote and in-office work and how to support it. But even if a company has a strict return-to-office, in-person policy, the reality is that hybrid work still isn’t going away for that company. They may have teams across cities or countries, and clients and external stakeholders will have their own office preferences that they have to adapt to. Supporting hybrid work is actually becoming more important, not less. And our research shows that companies are leaning into, not away from, hybrid setups. About one third of companies are now redesigning or resizing office spaces every single year. For large organizations with multiple sites and staggered leases, that’s a moving target. It’s really important that they have audio solutions that can work before, during, and after all of those changes that they’re constantly making. And so that’s where flexibility becomes really important. Companies need to buy not just for right now, but for the future.

And so here’s IDC’s kind of pro-tip, which is make sure as a company that you go with a provider that offers top-notch audio quality and also has strong partnerships and certifications with the big players in communications technology, because that will save you money in the long run. Your systems will stay compatible, your investments will last longer, and you won’t be scrambling when that next shift happens.

Megan: Of course. And speaking of building for the future, as companies begin to include sustainability in their company goals, Chris, I wonder how can audio play a role in those sustainability efforts and how might that play into perhaps the return on investment in building out a high-quality audio ecosystem?

Chris: Well, I totally agree with what Genevieve just said in terms of hybrid work is not going anywhere. You get all of those big headlines that talk about XYZ company telling people to get back into the office. And I saw a fantastic piece of data just last week that showed the percent of in-office hours of the American workers versus out-of-office remote kind of work. It has basically been flatlined since 2022. This is our new way of working. And of course, like Genevieve mentioned, you have people in all these different locations. And in a strange way, living through the pandemic did teach us that we can do some things by not having to hop on an airplane and travel to go somewhere. Certainly that helps with a more sustainable strategy over time, and you’re saving on travel and able to get things done much more quickly.

And then from a product offering perspective, I’ll go back to the vision I was painting earlier where we and others in our industry see that we can create great solid hardware platforms. We’ve done it for decades, and now that advancements around AI and all of our software that enables products and everything else that has happened in the last probably decade, we can get enhancements and additions and new functionality to people in simpler ways on existing hardware. I think we’re all careening down this path of having a much more sustainable ecosystem for all collaboration. It’s really quite an exciting time, and that pays off with any company implementing a system, their ROI is going to be much better in the long run.

Megan: Absolutely. And Genevieve, what trends around sustainability are you seeing? What opportunities do you see for audio to play into those sustainability efforts going forward?

Genevieve: Yeah, similar to Chris. In some industries, there’s still a belief that the best work happens when everyone’s in the same room. And yes, face-to-face time is really important for building relationships, for brainstorming, for closing big deals, but it does come at a cost. The carbon footprint of daily commutes, the sales visits, the constant business travel. And then there’s the basic consideration, as we’ve talked about, of just pure practicality. The good news is with the right AV setup, especially high-quality audio, many of those interactions can happen virtually without losing effectiveness, as Chris said it, but our research shows it.

Our research shows that virtual meetings can be just as productive as in-person ones, and every commute or flight you avoid, of course makes a measurable sustainability impact. I don’t think, personally, that the takeaway is replace all in-person meetings, but instead it’s to be intentional. Use technology to make hybrid meetings seamless, and then be clear on which conversations truly require being in the same physical space. If you can strike that balance, you’re not just making work more efficient, you’re making it more sustainable, you’re also making it more inclusive, and you’re making it more resilient.

Megan: Such an important point. And let’s close with a future forward look, if we can. Genevieve, what innovations or advancements in the audio field are you most excited to see to come to fruition, and what potential interesting use cases do you see on the horizon?

Genevieve: I’m especially interested in how AI and audio are converging. We’re now seeing AI that can identify and isolate human voices in noisy environments. For example, right now, there are some jets flying overhead. It’s very loud in here, but I suspect you may not even know that that’s happening.

Megan: We can’t hear a thing. No.

Genevieve: Right. That technology, it’s pulling voices forward so that conversations like ours are crystal clear. And that’s a big deal, especially as companies invest more and more in AI tools, especially for translating, transcribing, and summarizing meetings. But as we’ve talked about before, AI is only as good as the audio it hears. If the sound is poor or a word gets misheard, the meaning can shift entirely. And sometimes that’s just inconvenient, or it can even be funny. But in really high-stakes settings, like healthcare for example, a single mis-transcribed word can have serious consequences. So that’s why our position is that high-quality audio is critical and necessary for making AI-powered communication accurate, trustworthy, and useful, because when the input is clean, the output can actually live up to its promise.

Megan: Fantastic. And Chris, finally, what are you most excited to see developed? What advancements are you most looking forward to seeing?

Chris: Well, I really do believe that this is one of the most exciting times that I know I’ve lived through in my career. Just the pace of how fast technology is moving, the sudden emergence of all things AI. I was actually in a roundtable session of CEOs yesterday from lots of different industries, and the facilitator was talking about change management internally in companies as you’re going through all of these technology shifts and some of the fear that people have around AI and things like that. And the facilitator asked each of us to give one word that describes how we’re feeling right now. And the first CEO that went used the word dread. And that absolutely floored me because you enter into these eras with some skepticism and trying to figure out how to make things work and go down the right path. But my word was truly optimism.

When I look at all the ways that we are able to deliver better audio to people more quickly, there’s so many opportunities in front of us. We’re working on things outside of AI like algorithms that Genevieve just mentioned that filter out the bad sounds that you don’t want entering into a meeting. We’ve been doing that for quite a long time now. There’s also opportunities to do real time audio improvements, enhancements, make audio more personal for people. How do they want to be able to very simply, through voice commands perhaps, adjust their audio? There shouldn’t have to be a whole lot of techie settings that come along with our solutions.

We should be able to provide improved accessibility and a little bit more equitable meeting experience for people. And we’re looking at technology solutions around immersive audio. How can you maybe feel like you’re a bit more engaged in the meeting, kind of creating some realistic virtual experiences, if you will. There’s just so many opportunities in front of us, and I can just picture a day when you walk into a room and you tell the room, “Hey, call Genevieve. We’re going to have a meeting for an hour, and we might need to have Megan on call to come in at a certain time.”

And all of this will just be very automatic, very seamless, and we’ll be able to see each other and talk at the same time. And this isn’t years away. This is happening really, really quickly. And I do think it’s a really exciting time for audio and just all together collaboration in our industry.

Megan: Absolutely. Sounds like there’s plenty of reason to be optimistic. Thank you both so much.

That was Chris Schyvinck, President and CEO at Shure. And Genevieve Juillard, CEO at IDC, whom I spoke with from Brighton, England.

That’s it for this episode of Business Lab. I’m your host, Megan Tatum. I’m a contributing editor at Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts. And if you enjoy this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review, and this episode was produced by Giro Studios. Thanks for listening.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Creating psychological safety in the AI era

Rolling out enterprise-grade AI means climbing two steep cliffs at once. First, understanding and implementing the tech itself. And second, creating the cultural conditions where employees can maximize its value. While the technical hurdles are significant, the human element can be even more consequential; fear and ambiguity can stall the momentum of even the most promising initiatives.

Psychological safety—feeling free to express opinions and take calculated risks without worrying about career repercussions—is essential for successful AI adoption. In psychologically safe workspaces, employees are empowered to challenge assumptions and raise concerns about new tools without fear of reprisal. This is nothing short of a necessity when introducing a nascent and profoundly powerful technology that still lacks established best practices.

“Psychological safety is mandatory in this new era of AI,” says Rafee Tarafdar, executive vice president and chief technology officer at Infosys. “The tech itself is evolving so fast—companies have to experiment, and some things will fail. There needs to be a safety net.”

To gauge how psychological safety influences success with enterprise-level AI, MIT Technology Review Insights conducted a survey of 500 business leaders. The findings reveal high self-reported levels of psychological safety, but also suggest that fear still has a foothold. Anecdotally, industry experts highlight a reason for the disconnect between rhetoric and reality: while organizations may promote a “safe to experiment” message publicly, deeper cultural undercurrents can counteract that intent.

Building psychological safety requires a coordinated, systems-level approach, and human resources (HR) alone cannot deliver such transformation. Instead, enterprises must deeply embed psychological safety into their collaboration processes.

Key findings for this report include:

  • Companies with experiment-friendly cultures have greater success with AI projects. The majority of executives surveyed (83%) believe a company culture that prioritizes psychological safety measurably improves the success of AI initiatives. Four in five leaders agree that organizations fostering such safety are more successful at adopting AI, and 84% have observed connections between psychological safety and tangible AI outcomes.
  • Psychological barriers are proving to be greater obstacles to enterprise AI adoption than technological challenges. Encouragingly, nearly three-quarters (73%) of respondents indicated they feel safe to provide honest feedback and express opinions freely in their workplace. Still, a significant share (22%) admit they’ve hesitated to lead an AI project because they might be blamed if it misfires.
  • Achieving psychological safety is a moving target for many organizations. Fewer than half of leaders (39%) rate their organization’s current level of psychological safety as “very high.” Another 48% report a “moderate” degree of it. This may mean that some enterprises are pursuing AI adoption on cultural foundations that are not yet fully stable.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The fast and the future-focused are revolutionizing motorsport

When the ABB FIA Formula E World Championship launched its first race through Beijing’s Olympic Park in 2014, the idea of all-electric motorsport still bordered on experimental. Batteries couldn’t yet last a full race, and drivers had to switch cars mid-competition. Just over a decade later, Formula E has evolved into a global entertainment brand broadcast in 150 countries, driving both technological innovation and cultural change in sport.  

“Gen4, that’s to come next year,” says Dan Cherowbrier, Formula E’s chief technology and information officer. “You will see a really quite impressive car that starts us to question whether EV is there. It’s actually faster—it’s actually more than traditional ICE [internal combustion engines].”

That acceleration isn’t just happening on the track. Formula E’s digital transformation, powered by its partnership with Infosys, is redefining what it means to be a fan. “It’s a movement to make motor sport accessible and exciting for the new generation,” says Rohit Agnihotri, principal technologist at Infosys.

From real-time leaderboards and predictive tools to personalized storylines that adapt to what individual fans care most about—whether it’s a driver rivalry or battery performance—Formula E and Infosys are using AI-powered platforms to create fan experiences as dynamic as the races themselves. “Technology is not just about meeting expectations; it’s elevating the entire fan experience and making the sport more inclusive,” says Agnihotri.  

AI is also transforming how the organization itself operates. “Historically, we would be going around the company, banging on everyone’s doors and dragging them towards technology, making them use systems, making them move things to the cloud,” Cherowbrier notes. “What AI has done is it’s turned that around on its head, and we now have people turning up, banging on our door because they want to use this tool, they want to use that tool.” 

As audiences diversify and expectations evolve, Formula E is also a case study in sustainable innovation. Machine learning tools now help determine the most carbon-optimal way to ship batteries across continents, while remote broadcast production has sharply reduced travel emissions and democratized the company’s workforce. These advances show how digital intelligence can expand reach without deepening carbon footprints. 

For Cherowbrier, this convergence of sport, sustainability, and technology is just the beginning. With its data-driven approach to performance, experience, and impact, Formula E is offering a glimpse into how entertainment, innovation, and environmental responsibility can move forward in tandem. 

“Our goal is clear,” says Agnihotri. “Help Formula E be the most digital and sustainable motor sport in the world. The future is electric, and with AI, it’s more engaging than ever.” 

This episode of Business Lab is produced in partnership with Infosys. 

Full Transcript:  

Megan Tatum: From MIT Technology Review, I’m Megan Tatum, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab, and into the marketplace.  

The ABB FIA Formula E World Championship, the world’s first all-electric racing series, made its debut in the grounds of the Olympic Park in Beijing in 2014. A little more than 10 years later, it’s a global entertainment brand with 10 teams, 20 drivers, and broadcasts in 150 countries. Technology is central to how Formula E is navigating that scale and to how it’s delivering more powerful personalized experiences.  

Two words for you: elevated fandom.  

My guests today are Rohit Agnihotri, principal technologist at Infosys, and Dan Cherowbrier, CTIO of Formula E.  

This episode is produced in partnership with Infosys.  

Welcome, Rohit and Dan. 

Dan Cherowbrier: Hi. Thanks for having us. 

Megan: Dan, as I mentioned there, the first season of the ABB FIA Formula E World Championship launched in 2014. Can you talk us through how the first all-electric motor sport has evolved in the last decade? How has it changed in terms of its scale, the markets it operates in, and also, its audiences, of course? 

Dan: When Formula E launched back in 2014, there were hardly any domestic EVs on the road. And probably if you’re from London, the ones you remember are the hybrid Priuses; that was what we knew of really. And at the time, they were unable to get a battery big enough for a car to do a full race. So the first generation of car, the first couple of seasons, the driver had to do a pit stop midway through the race, get out of one car, and get in another car, and then carry on, which sounds almost farcical now, but it’s what you had to do then to drive innovation, is to do that in order to go to the next stage. 

Then in Gen2, that came up four years later, they had a battery big enough to start full races and start to actually make it a really good sport. Gen3, they’re going for some real speeds and making it happen. Gen4, that’s to come next year, you’ll see acceleration in line with Formula One. I’ve been fortunate enough to see some of the testing. You will see a really quite impressive car that starts us to question whether EV is there. It’s actually faster, it’s actually more than traditional ICE. 

That’s the tech of the car. But then, if you also look at the sport and how people have come to it and the fans and the demographic of the fans, a lot has changed in the last 11 years. We’re about to enter season 12. In the last 11 years, we’ve had a complete democratization of how people access content and what people want from content. And there’s a new generation of fan coming through. This new generation of fan is younger. They’re more gender diverse. We have much closer to 50-50 representation in our fan base. And they want things personalized, and they’re very demanding about how they want it and the experience they expect. No longer are you just able to give them one race and everybody watches the same thing. We need to make things for them. You see that sort of change that’s come through in the last 11 years. 

Megan: It’s a huge amount of change in just over a decade, isn’t it? To navigate. And I wonder, Rohit, what was the strategic plan for Infosys when associating with Formula E? What did Infosys see in partnering with such a young sport? 

Rohit: Yeah. That’s a great question, Megan. When we looked at Formula E, we didn’t just see a racing championship. We saw the future. A sport that’s electric, sustainable, and digital first. That’s exactly where Infosys wants to be, at the intersection of technology, innovation, and purpose. Our plan has three big goals. First, grow the fan base. Formula E wants to reach 500 million fans by 2030. That is not just a number. It’s a movement to make motor sport accessible and exciting for the new generation. To make that happen, we are building an AI-powered platform that gives personalized content to the fans, so that every fan feels connected and valued. Imagine a fan in Tokyo getting race insights tailored for their favorite driver, while another in London gets a sustainability story that matters to them. That’s the level of personalization we are aiming for. 

Second, bringing technology innovation. We have already launched the Stats Centre, which turns race data into interactive stories. And soon, Race Centre will take this to the next level with real-time leaderboards, track maps, overtakes, attack mode timelines, and even AI-generated live commentary. Fans will not just watch, they will interact, predict podium finishes, and share their views globally. And third, support sustainability. Formula E is already net-zero, but now their goal is to cut carbon by 45% by 2030. We’ll be enabling that through AI-driven sustainability data management, tracking every watt of energy, every logistics decision, and modeling scenarios to make racing even greener. Partnering with a young sport gives us a chance to shape its digital future and show how technology can make racing exciting and responsible. For us, Formula E is not just a sport, it’s a statement about where the world is headed. 

Megan: Fantastic. 500 million fans, that’s a huge number, isn’t it? And with more scale often comes a kind of greater expectation. Dan, I know you touched on this a little in your first question, but what is it that your fans now really want from their interactions? Can you talk a bit more about what experiences they’re looking for? And also, how complex that really is to deliver that as well? 

Dan: I think a really telling thing about the modern day fan is I probably can’t tell you what they want from their experiences, because it’s individual and it’s unique for each of them. 

Megan: Of course. 

Dan: And it’s changing and it’s changing so fast. What somebody wants this month is going to be different from what they want in a couple of months’ time. And we’re having to learn to adapt to that. My CTO title, we often put focus on the technology in the middle of it. That’s what the T is. Actually, if you think about it, it’s continual transformation officer. You are constantly trying to change what you deliver and how you deliver it. Because if fans come through, they find new experiences, they find that in other sports. Sometimes not in sports, they find it outside, and then they’re coming in, and they expect that from you. So how can we make them more part of the sport, more personalized experience, get to know the athletes and the personalities and the characters within it? We’re a very technology centric sport. A lot of motor sport is, but really, people want to see people, right? And even when it’s technology, they want to see people interacting with technology, and it’s how do you get that out to show people. 

Megan: Yeah, it’s no mean feat. Rohit, you’ve worked with brands on delivering these sort of fan experiences across different sports. Is motor sports perhaps more complicated than others, given that fans watch racing for different reasons than just a win? They could be focused on team dynamics, a particular driver, the way the engine is built, and so on and so forth. How does motor sports compare and how important is it therefore, that Formula E has embraced technology to manage expectations? 

Rohit: Yeah, that’s an interesting point. Motor sports are definitely more complex than other sports. Fans don’t just care about who wins, they care about how: some follow team strategies, others love driver rivalries, and many are fascinated by the car technology. Formula E adds another layer, sustainability and electric innovation. This makes personalization really important. Fans want more than results. They want stories and insights. Formula E understood this early and embraced technology. 

Think about the data behind a single race, lap times, energy usage, battery performance, attack mode activation, pit strategies, it’s a lot of data. If you just show the raw numbers, it’s overwhelming. But with Infosys Topaz, we turn that into simple and engaging stories. Fans can see how a driver fought back from 10th place to finish on the podium, or how a team managed energy better to gain an edge. And for new fans, we are adding explainer videos and interactive tools in the Race Centre, so that they can learn about the sport easily. This is important because Formula E is still young, and many fans are discovering it for the first time. Technology is not just about meeting expectations; it’s elevating the entire fan experience and making the sport more inclusive. 

Megan: There’s an awful lot going on there. What are some of the other ways that Formula E has already put generative AI and other emerging technologies to use? Dan, when we’ve spoken about the demand for more personalized experiences, for example. 

Dan: I see the implementation of AI for us in three areas. We have AI within the sport. That’s in our DNA of the sport. Now, each team is using that, but how can we use that as a championship as well? How do we make it a competitive landscape? Now, we have AI that is in the fan-facing product. That’s what we’re working heavily on with Infosys, but we also have it in our broadcast product. As an example, you might have heard of a super slow-mo camera. A super slow-mo camera is basically, by taking three cameras and having them in exactly the same place so that you get three times the frame rate, and then you can do a slow-motion shot from that. And they used to be really expensive. Quite bulky cameras to put in. We are now using AI to take a traditional camera and interpolate between two frames to make it into a super slow image, and you wouldn’t really know the difference. Now, the joy of that, it means every camera can now be a super slow-mo camera. 

Megan: Wow. 

Dan: In other ways, we use it a little bit in our graphics products, and we iterate and we use it for things like showing driver audio. When the driver is speaking to his engineer or her engineer in the garage, we show that text now on screen. We do that using AI. We use AI to pick out the difference between the driver and another driver and the team engineer or the team principal and show that in a really good way. 

And we wouldn’t be able to do that. We’re not big enough to have a team of 24 people on stenographers typing. We have to use AI to be able to do that. That’s what’s really helped us grow. And then the last one is, how we use it in our business. Because ultimately, as we’ve got the fans, we’ve got the sport, but we also are running a business and we have to pick up these racetracks and move them around the world, and we have all these staff who have to get places. We have insurance who has to do all that kind of stuff, and we use it heavily in that area, particularly when it comes to what has a carbon impact for us. 

So things like our freight and our travel. And we are using the AI tools to tell us, a battery for instance, should we fly it? Should we send it by sea freight? Should we send it by road freight? Or should we just have lots of them? And that sort of depends. Now, a battery, if it was heavy, you’d think you probably wouldn’t fly it. But actually, because of the materials in it, because of the source materials that make it, we’re better off flying it. We’ve used AI to work through all those different machinations of things that would be too difficult to do at speed for a person. 

Megan: Well, sounds like there’s some fascinating things going on. I mean, of course, for a global brand, there is also the challenge of working in different markets. You mentioned moving everything around the world there. Each market with its own legal frameworks around data privacy, AI. How has technology also helped you navigate all of that, Dan? 

Dan: The other really interesting thing about AI is… I’ve worked in technology leadership roles for some time now. And historically, we would be going around the company, banging on everyone’s doors and dragging them towards technology, making them use systems, making them move things to the cloud and things like that. What AI has done is it’s turned that around on its head, and we now have people turning up, banging on our door because they want to use this tool, they want to use that tool. And we’re trying to accommodate all of that and it’s a great pleasure to see people that are so keen. AI is driving the tech adoption in general, which really helps the business. 

Megan: Dan, as the world’s first all-electric motor sport series, sustainability is obviously a real cornerstone of what Formula E is looking to do. Can you share with us how technology is helping you to achieve some of your ambitions when it comes to sustainability? 

Dan: We’ve been the only sport with a certified net-zero pathway, and we have to stay on that path. It’s a really core fundamental part of our DNA. I sit on our management team here. There is a sustainability VP that sits there as well, who checks and challenges everything we do. She looks at the data centers we use, why we use them, why we’ve made the decisions we’ve made, to make sure that we’re making them all for the right reasons and the right ways. We specifically embed technology in a couple of ways. One is, we mentioned a little bit earlier, on our freight. Formula E’s freight for the whole championship is probably akin to one Formula One team, but it’s still by far, our biggest contributor to our impact. So we look at how we can make sure that we’ve refined that to get the minimum amount of air freight and sea freight, and use local wherever we can. That’s also part of our pledge about investing in the communities that we race in. 

The second then is about our staff travel. And we’ve done a really big piece of work over the last four to five years, partly accelerated through the covid-19 era actually, of doing remote working and remote TV production. Used to be traditionally, you would fly a hundred plus people out to racetracks, and then they would make the television all on site in trucks, and then they would be satellite distributed out of the venue. Now, what we do is we put in some internet connections, dual and diverse internet connections, and we stream every single camera back. 

Megan: Right. 

Dan: That means on site, we only need camera operators. Some of them actually, are remotely operated anyway, but we need camera operators, and then some engineering teams to just keep everything running. And then back in our home base, which is in London, in the UK, we have our remote production center where we layer on direction, graphics, audio, replay, team radio, all of those bits that bring the color and make the program and add to that significant body of people. We do that all remotely now. Really interesting, actually. So that’s the carbon sustainability story, but there is a further ESG piece that comes out of it that we hadn’t really anticipated when we went into it, which is the diversity in our workforce by doing that. We were discovering that we had quite a young, equally diverse workforce until around the age of 30. And then once that happened, then we were finding we were losing women, and that’s really because they didn’t want to travel. 

Megan: Right. 

Dan: And that’s the age of people starting to have children, and things were starting to change. And then we had some men that were traveling instead, and they weren’t seeing their children and it was sort of dividing it unnecessarily. But by going remote, by having so much of our people able to remotely… Or even if they do have to travel, they’re not traveling every single week. They’re now doing that one in three. They’re able to maintain the careers and the jobs they want to do, whilst having a family lifestyle. And it also just makes a better product by having people in that environment. 

Megan: That’s such an interesting perspective, isn’t it? It’s one way environmental sustainability intersects with social sustainability. And Rohit, can you share any of the ways that Infosys has worked with Formula E, in terms of the role of technology, as we say, in furthering those ambitions around sustainability? 

Rohit: Yeah. Infosys understands that sustainability is at the heart of Formula E, and it’s a big part of why this partnership matters. Formula E is already net-zero certified, but now, they have an ambitious goal to cut carbon emissions by 45%. Infosys is helping in two ways. First, we have built AI-powered sustainability data tools that make carbon reporting accurate and traceable. Every watt of energy, every logistic decision, every material use can be tracked. Second, we use predictive analytics to model scenarios, like how changing race logistics or battery technology impact emissions so Formula E can make smarter, greener decisions. For us, it’s about turning sustainability from a report into an action plan, and making Formula E a global leader in green motor sport. 

Megan: And in April 2025, Formula E working with Infosys launched its Stats Centre, which provides fans with interactive access to the performances of their drivers and teams, key milestones and narratives. I know you touched on this before, but I wonder if you could tell us a bit more about the design of that platform, Rohit, and how it fits into Formula E’s wider plans to personalize that fan experience? 

Rohit: Sure. The Stats Centre was a big step forward. Before this, fans had access to basic statistics on the website and the mobile app, but nothing told the full story and we wanted to change that. Built on Infosys Topaz, the Stats Centre uses AI to turn race data into interactive stories. Fans can explore key stat cards that adapt to race timelines, and even chat with an AI companion to get instant answers. It’s like having a personal race analyst at your fingertips. And we are going further. Next year, we’ll launch Race Centre. It’ll have live data boards, 2D track maps showing every driver’s position, overtakes, attack mode timelines, and AI-generated commentary. Fans can predict podium finishes, vote for the driver of the race, and share their views on social media. Plus, we are adding video explainers for new fans, covering rules, strategies, and car technology. Our goal is simple: make every moment exciting and easy to understand. Whether you are a hardcore fan or someone watching Formula E for the first time, you’ll feel connected and informed. 

Megan: Fantastic. Sounds brilliant. And as you’ve explained, Dan, leveraging data and AI can come with these huge benefits when it comes to the depth of fan experience that you can deliver, but it can also expose you to some challenges. How are you navigating those at Formula E? 

Dan: The AI generation has presented two significant challenges to us. One is that traditional SEO, traditional search engine optimization, goes out the window. Right? You are now looking at how do we design and build our systems and how do we populate them with the right content and the right data, so that the engines are picking it up correctly and displaying it? The way that the foundational models are built and the speed and the cadence of which they’re updated, means quite often… We’re a very fast-changing organization. We’re a fast-changing product. Often, the models don’t keep up. And that’s because they are a point in time when they were trained. And that’s something that the big organizations, the big tech organizations will fix with time. But for now, what we have to do is we have to learn about how we can present our fan-facing, web-facing products to show that correctly. That’s all about having really accurate first-party content, effectively earned media. That’s the piece we need to do. 

Then the second sort of challenge is sadly, whilst these tools are available to all of us, and we are using them effectively, so are another part of the technology landscape, and that is the cybersecurity threats that come with them. If you look at the speed, cadence, and severity of hacks that are happening now, it’s just growing and growing and growing, and that’s because they have access to these tools too. And we’re having to really up our game and professionalize. And that’s really hard for an innovative organization. You don’t want to shut everything down. You don’t want to protect everything too much because you want people to be able to try new things. Right? If I block everything to only things that the IT team had heard of, we’d never get anything new in, and it’s about getting that balance right. 

Megan: Right. 

Dan: Rohit, you probably have similar experiences? 

Megan: How has Infosys worked with Formula E to help it navigate some of that, Rohit? 

Rohit: Yeah. Infosys has helped Formula E tackle some of these challenges in three key ways: simplifying complex race data into engaging fan experiences through platforms like the Stats Centre, building a secure and scalable cloud data backbone for real-time insights, and enabling sustainability goals with AI-driven carbon tracking and predictive analytics. These solutions make the sport more interactive, more digital, and more responsible. 

Megan: Fantastic. I wondered if we could close with a bit of a future forward look. Can you share with us any innovations on the horizon at Formula E that you are really excited about, Dan? 

Dan: We have mentioned the Race Centre is going to launch in the next couple of months, but the really exciting thing for me is we’ve got an amazing season ahead of us. It’s the last season of our Gen3 car, with 10 really exciting teams on the grid. We are going at speed with our tech innovation roadmap and what our fans want. And we’re building up towards our Gen4 car, which will come out for season 13 in a year’s time. That will get launched in 2026, and I think it will be a game changer in how people perceive electric motor sport and electric cars in general. 

Megan: It sounds like there’s all sorts of exciting things going on. And Rohit too, what’s coming up via this partnership that you are really looking forward to sharing with everyone? 

Rohit: Two things stand out for me. First is the AI-powered fan data platform that I’ve already spoken about. Second is the launch of Race Centre. It’s going to change how fans experience live racing. And beyond fan engagement, we are helping Formula E lead in sustainability with AI tools that model carbon impact and optimize logistics. This means every race can be smarter and greener. Our goal is clear: help Formula E be the most digital and sustainable motor sport in the world. The future is electric, and with AI, it’s more engaging than ever. 

Megan: Fantastic. Thank you so much, both. That was Rohit Agnihotri, principal technologist at Infosys, and Dan Cherowbrier, CTIO of Formula E, whom I spoke with from Brighton, England.  

That’s it for this episode of Business Lab. I’m your host, Megan Tatum. I’m a contributing editor and host for Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.  

This show is available wherever you get your podcasts. And if you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review and this episode was produced by Giro Studios. Thanks for listening. 

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Securing VMware workloads in regulated industries

At a regional hospital, a cardiac patient’s lab results sit behind layers of encryption, accessible to his surgeon but shielded from anyone without need-to-know status. Across the street at a credit union, a small business owner anxiously awaits the all-clear for a wire transfer, unaware that fraud detection systems have flagged it for further review.

Such scenarios illustrate how companies in regulated industries juggle competing directives: Move data and process transactions quickly enough to save lives and support livelihoods, but carefully enough to maintain ironclad security and satisfy regulatory scrutiny.

Organizations subject to such oversight walk a fine line every day. And recently, a number of curveballs have thrown off that hard-won equilibrium. Agencies are ramping up oversight thanks to escalating data privacy concerns; insurers are tightening underwriting and requiring controls like MFA and privileged-access governance as a condition of coverage. Meanwhile, the shifting VMware landscape has introduced more complexity for IT teams tasked with planning long-term infrastructure strategies. 

Download the full article.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Harnessing human-AI collaboration for an AI roadmap that moves beyond pilots

The past year has marked a turning point in the corporate AI conversation. After a period of eager experimentation, organizations are now confronting a more complex reality: While investment in AI has never been higher, the path from pilot to production remains elusive. Three-quarters of enterprises remain stuck in experimentation mode, despite mounting pressure to convert early tests into operational gains.

“Most organizations can suffer from what we like to call PTSD, or process, technology, skills, and data challenges,” says Shirley Hung, partner at Everest Group. “They have rigid, fragmented workflows that don’t adapt well to change, technology systems that don’t speak to each other, talent that is really immersed in low-value tasks rather than creating high impact. And they are buried in endless streams of information, but no unified fabric to tie it all together.”

The central challenge, then, lies in rethinking how people, processes, and technology work together.

Across industries as different as customer experience and agricultural equipment, the same pattern is emerging: Traditional organizational structures—centralized decision-making, fragmented workflows, data spread across incompatible systems—are proving too rigid to support agentic AI. To unlock value, leaders must rethink how decisions are made, how work is executed, and what humans should uniquely contribute.

“It is very important that humans continue to verify the content. And that is where you’re going to see more energy being put into,” says Ryan Peterson, EVP and chief product officer at Concentrix.

Much of the conversation centered on what can be described as the next major unlock: operationalizing human-AI collaboration. Rather than positioning AI as a standalone tool or a “virtual worker,” this approach reframes AI as a system-level capability that augments human judgment, accelerates execution, and reimagines work from end to end. That shift requires organizations to map the value they want to create; design workflows that blend human oversight with AI-driven automation; and build the data, governance, and security foundations that make these systems trustworthy.

“My advice would be to expect some delays because you need to make sure you secure the data,” says Heidi Hough, VP for North America aftermarket at Valmont. “As you think about commercializing or operationalizing any piece of using AI, if you start from ground zero and have governance at the forefront, I think that will help with outcomes.”

Early adopters are already showing what this looks like in practice: starting with low-risk operational use cases, shaping data into tightly scoped enclaves, embedding governance into everyday decision-making, and empowering business leaders, not just technologists, to identify where AI can create measurable impact. The result is a new blueprint for AI maturity grounded in reengineering how modern enterprises operate.

“Optimization is really about doing existing things better, but reimagination is about discovering entirely new things that are worth doing,” says Hung.

Watch the webcast.

This webcast is produced in partnership with Concentrix.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.