AI search could break the web

In late October, News Corp filed a lawsuit against Perplexity AI, a popular AI search engine. At first glance, this might seem unremarkable. After all, the lawsuit joins more than two dozen similar cases seeking credit, consent, or compensation for the use of data by AI developers. Yet this particular dispute is different, and it might be the most consequential of them all.

At stake is the future of AI search—that is, chatbots that summarize information from across the web. If their growing popularity is any indication, these AI “answer engines” could replace traditional search engines as our default gateway to the internet. While ordinary AI chatbots can reproduce—often unreliably—information learned through training, AI search tools like Perplexity, Google’s Gemini, or OpenAI’s now-public SearchGPT aim to retrieve and repackage information from third-party websites. They return a short digest to users along with links to a handful of sources, ranging from research papers to Wikipedia articles and YouTube transcripts. The AI system does the reading and writing, but the information comes from outside.

At its best, AI search can better infer a user’s intent, amplify quality content, and synthesize information from diverse sources. But if AI search becomes our primary portal to the web, it threatens to disrupt an already precarious digital economy. Today, the production of content online depends on a fragile set of incentives tied to virtual foot traffic: ads, subscriptions, donations, sales, or brand exposure. By shielding the web behind an all-knowing chatbot, AI search could deprive creators of the visits and “eyeballs” they need to survive. 

If AI search breaks up this ecosystem, existing law is unlikely to help. Governments already believe that content is falling through cracks in the legal system, and they are learning to regulate the flow of value across the web in other ways. The AI industry should use this narrow window of opportunity to build a smarter content marketplace before governments fall back on interventions that are ineffective, benefit only a select few, or hamper the free flow of ideas across the web.

Copyright isn’t the answer to AI search disruption

News Corp argues that using its content to extract information for AI search amounts to copyright infringement, claiming that Perplexity AI “compete[s] for readers while simultaneously freeriding” on publishers. That sentiment is likely shared by the New York Times, which sent a cease-and-desist letter to Perplexity AI in mid-October.

In some respects, the case against AI search is stronger than other cases that involve AI training. In training, content has the biggest impact when it is unexceptional and repetitive; an AI model learns generalizable behaviors by observing recurring patterns in vast data sets, and the contribution of any single piece of content is limited. In search, content has the most impact when it is novel or distinctive, or when the creator is uniquely authoritative. By design, AI search aims to reproduce specific features from that underlying data, invoke the credentials of the original creator, and stand in place of the original content. 

Even so, News Corp faces an uphill battle to prove that Perplexity AI infringes copyright when it processes and summarizes information. Copyright doesn’t protect mere facts, or the creative, journalistic, and academic labor needed to produce them. US courts have historically favored tech defendants who use content for sufficiently transformative purposes, and this pattern seems likely to continue. And if News Corp were to succeed, the implications would extend far beyond Perplexity AI. Restricting the use of information-rich content for noncreative or nonexpressive purposes could limit access to abundant, diverse, and high-quality data, hindering wider efforts to improve the safety and reliability of AI systems. 

Governments are learning to regulate the distribution of value online

If existing law is unable to resolve these challenges, governments may look to new laws. Emboldened by recent disputes with traditional search and social media platforms, governments could pursue aggressive reforms modeled on the media bargaining codes enacted in Australia and Canada or proposed in California and the US Congress. These reforms compel designated platforms to pay certain media organizations for displaying their content, such as in news snippets or knowledge panels. The EU imposed similar obligations through copyright reform, while the UK has introduced broad competition powers that could be used to enforce bargaining. 

In short, governments have shown they are willing to regulate the flow of value between content producers and content aggregators, abandoning their traditional reluctance to interfere with the internet.

However, mandatory bargaining is a blunt solution for a complex problem. These reforms favor a narrow class of news organizations, operating on the assumption that platforms like Google and Meta exploit publishers. In practice, it’s unclear how much of their platform traffic is truly attributable to news, with estimates ranging from 2% to 35% of search queries and just 3% of social media feeds. At the same time, platforms offer significant benefits to publishers by amplifying their content, and there is little consensus about the fair apportionment of this two-way value. Controversially, the four bargaining codes regulate not just the reproduction of news content but the mere act of indexing or linking to it. This threatens the “ability to link freely” that underpins the web. Moreover, bargaining rules focused on legacy media—just 1,400 publications in Canada, 1,500 in the EU, and 62 organizations in Australia—ignore countless everyday creators and users who contribute the posts, blogs, images, videos, podcasts, and comments that drive platform traffic.

Yet for all its pitfalls, mandatory bargaining may become an attractive response to AI search. For one thing, the case is stronger. Unlike traditional search—which indexes, links, and displays brief snippets from sources to help a user decide whether to click through—AI search could directly substitute generated summaries for the underlying source material, potentially draining traffic, eyeballs, and exposure from downstream websites. More than a third of Google sessions end without a click, and the proportion is likely to be significantly higher in AI search. AI search also simplifies the economic calculus: Since only a few sources contribute to each response, platforms—and arbitrators—can more accurately track how much specific creators drive engagement and revenue.  

Ultimately, the devil is in the details. Well-meaning but poorly designed mandatory bargaining rules might do little to fix the problem, protect only a select few, and potentially cripple the free exchange of information across the web. 

Industry has a narrow window to build a fairer reward system

However, the mere threat of intervention could have a bigger impact than actual reform. AI firms quietly recognize the risk that litigation will escalate into regulation. For example, Perplexity AI, OpenAI, and Google are already striking deals with publishers and content platforms, some covering AI training and others focusing on AI search. But like early bargaining laws, these agreements benefit only a handful of firms, some of which (such as Reddit) haven’t yet committed to sharing that revenue with their own creators. 

This policy of selective appeasement is untenable. It neglects the vast majority of creators online, who cannot readily opt out of AI search and who do not have the bargaining power of a legacy publisher. It takes the urgency out of reform by mollifying the loudest critics. It legitimizes a few AI firms through confidential and intricate commercial deals, making it difficult for new entrants to obtain equal terms or equal indemnity and potentially entrenching a new wave of search monopolists. In the long term, it could create perverse incentives for AI firms to favor low-cost and low-quality sources over high-quality but more expensive news or content, fostering a culture of uncritical information consumption in the process.

Instead, the AI industry should invest in frameworks that reward creators of all kinds for sharing valuable content. From YouTube to TikTok to X, tech platforms have proven they can administer novel rewards for distributed creators in complex content marketplaces. Indeed, fairer monetization of everyday content is a core objective of the “web3” movement celebrated by venture capitalists. The same reasoning carries over to AI search. If queries yield lucrative engagement but users don’t click through to sources, commercial AI search platforms should find ways to attribute that value to creators and share it back at scale.

Of course, it’s possible that our digital economy was broken from the start. Subsistence on trickle-down ad revenue may be unsustainable, and the attention economy has inflicted real harm on privacy, integrity, and democracy online. Supporting quality news and fresh content may require other forms of investment or incentives.

But we shouldn’t give up on the prospect of a fairer digital economy. If anything, while AI search makes content bargaining more urgent, it also makes it more feasible than ever before. AI pioneers should seize this opportunity to lay the foundations for a smart, equitable, and scalable reward system. If they don’t, governments now have the frameworks—and confidence—to impose their own vision of shared value.

Benjamin Brooks is a fellow at the Berkman Klein Center at Harvard scrutinizing the regulatory and legislative response to AI. He previously led public policy for Stability AI, a developer of open models for image, language, audio, and video generation. His views do not necessarily represent those of any affiliated organization, past or present. 

Avoiding value decay in digital transformation

Mission-critical digital transformation projects too often end with a whimper rather than a bang. An estimated three-quarters of corporate transformation efforts fail to deliver their intended return on investment.

Given the rapidly evolving technology landscape, companies often struggle to deliver short-term results while simultaneously reinventing the organization and keeping the business running day-to-day. Post-implementation, some companies cannot even perform basic functions like processing orders efficiently or closing the books quickly at the end of a quarter. The problem: Leaders often fail to consider how to sustain value creation over time as programs scale from the pilot phase to wide-scale execution.

“Most implementations are viewed as IT projects,” says Tim Hertzig, a principal in Deloitte’s Technology practice and global product owner of Deloitte’s Ascend digital transformation solution. “These projects fail to achieve the value they initially aspire to, because they don’t factor in change management that ensures adoption and they don’t consider industry-leading practices.”

Technology rarely drives value alone, according to Kristi Kaplan, Deloitte principal and US executive sponsor of Deloitte’s Ascend platform. “Rather it’s how technology is implemented and adopted in an organization that actually creates the value,” she says. To deliver business results that gain momentum rather than fade away, executives need a long-term transformation plan.

According to Deloitte’s analysis, the right combination of digital transformation actions can unlock as much as $1.25 trillion in additional market capitalization across all Fortune 500 companies. On the other hand, implementing digital change for its own sake without a strategy and technology-aligned investments—“random acts of digital”—could cost firms $1.5 trillion.

Best practices for implementation

To unlock this potential value, there are a number of best practices leading companies use to design and execute digital transformations successfully, Deloitte has found. Three stand out:

Ensure inclusive governance: Project governance needs to span business, HR, finance, and IT stakeholders, creating transparency in reporting and decision-making to maintain forward momentum. Successful projects are jointly owned; all executives understand where they are in the project lifecycle and what decisions need to be made to keep the program moving.

“Where that transparency doesn’t exist, or where all the stakeholders are not at the table and do not feel ownership in these programs, the result can be an IT organization that’s driving what truly needs to be a business transformation,” says Kaplan. “When business leaders fail to own things like change management, technology adoption, and organizational retraining, the risk profile goes way up.”

“Executives need the assurance and the visibility that the ROI of their technology investments is being realized, and when there are risks, they need transparency before problems grow into full-blown issues,” Hertzig adds. “That transparency becomes embedded into the governance rhythms of an organization.”

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Readying business for the age of AI

Rapid advancements in AI technology offer unprecedented opportunities to enhance business operations, customer and employee engagement, and decision-making. Executives are eager to see the potential of AI realized. Among 100 C-suite respondents polled in WNS Analytics’ “The Future of Enterprise Data & AI” report, 76% say they are already implementing or planning to implement generative AI solutions. Among those same leaders, however, 67% report struggling with data migration, and others report grappling with data quality, talent shortages, and data democratization issues.

MIT Technology Review Insights recently had a conversation with Alex Sidgreaves, chief data officer at Zurich Insurance; Bogdan Szostek, chief data officer at Animal Friends; Shan Lodh, director of data platforms at Shawbrook Bank; and Gautam Singh, head of data, analytics, and AI at WNS Analytics, to discuss how enterprises can navigate the burgeoning era of AI.

AI across industries

There is no shortage of AI use cases across sectors. Retailers are tailoring shopping experiences to individual preferences by leveraging customer behavior data and advanced machine learning models. Traditional AI models can deliver personalized offerings. However, with generative AI, these personalized offerings are elevated by incorporating tailored communication that considers the customer’s persona, behavior, and past interactions. In insurance, by leveraging generative AI, companies can identify subrogation recovery opportunities that a manual handler might overlook, enhancing efficiency and maximizing recovery potential. Banking and financial services institutions are using AI to bolster customer due diligence and enhance anti-money laundering efforts through AI-driven credit risk management practices. In healthcare, AI technologies are enhancing diagnostic accuracy through sophisticated image recognition in radiology, allowing for earlier and more precise detection of diseases, while predictive analytics enable personalized treatment plans.

The core of successful AI implementation lies in understanding its business value, building a robust data foundation, aligning with the strategic goals of the organization, and infusing skilled expertise across every level of an enterprise.

  • “I think we should also be asking ourselves, if we do succeed, what are we going to stop doing? Because when we empower colleagues through AI, we are giving them new capabilities [and] faster, quicker, leaner ways of doing things. So we need to be true to even thinking about the org design. Oftentimes, an AI program doesn’t work, not because the technology doesn’t work, but the downstream business processes or the organizational structures are still kept as before.” Shan Lodh, director of data platforms, Shawbrook Bank

Whether automating routine tasks, enhancing customer experiences, or providing deeper insights through data analysis, it’s essential to define what AI can do for an enterprise in specific terms. AI’s popularity and broad promises are not good enough reasons to jump headfirst into enterprise-wide adoption. 

“AI projects should come from a value-led position rather than being led by technology,” says Sidgreaves. “The key is to always ensure you know what value you’re bringing to the business or to the customer with the AI. And actually always ask yourself the question, do we even need AI to solve that problem?”

Having a good technology partner is crucial to ensure that value is realized. Gautam Singh, head of data, analytics, and AI at WNS, says, “At WNS Analytics, we keep clients’ organizational goals at the center. We have focused and strengthened around core productized services that go deep in generating value for our clients.” Singh explains their approach, “We do this by leveraging our unique AI and human interaction approach to develop custom services and deliver differentiated outcomes.”

The foundation of any advanced technology adoption is data, and AI is no exception. Singh explains, “Advanced technologies like AI and generative AI may not always be the right choice, and hence we work with our clients to understand the need, to develop the right solution for each situation.” With increasingly large and complex data volumes, effectively managing and modernizing data infrastructure is essential to provide the basis for AI tools.

This means breaking down silos. Maximizing AI’s impact involves regular communication and collaboration across departments, from marketing teams working with data scientists to understand customer behavior patterns to IT teams ensuring their infrastructure supports AI initiatives.

  • “I would emphasize customers’ growing expectations in terms of what they expect our businesses to offer them and the quality and speed of service we provide. At Animal Friends, we see the generative AI potential to be the biggest with sophisticated chatbots and voice bots that can serve our customers 24/7, deliver the right level of service, and be cost-effective for our customers.” Bogdan Szostek, chief data officer, Animal Friends

Investing in domain experts with insight into the regulations, operations, and industry practices is just as necessary to the success of deploying AI systems as the right data foundations and strategy. Continuous training and upskilling are essential to keep pace with evolving AI technologies.

Ensuring AI trust and transparency

Creating trust in generative AI implementation requires the same mechanisms employed for all emerging technologies: accountability, security, and ethical standards. Being transparent about how AI systems are used, the data they rely on, and the decision-making processes they employ can go a long way in forging trust among stakeholders. In fact, “The Future of Enterprise Data & AI” report finds that 55% of organizations identify “building trust in AI systems among stakeholders” as the biggest challenge when scaling AI initiatives.

“We need talent, we need communication, we need the ethical framework, we need very good data, and so on,” says Lodh. “Those things don’t really go away. In fact, they become even more necessary for generative AI, but of course the usages are more varied.” 

AI should augment human decision-making and business workflows. Guardrails with human oversight ensure that enterprise teams have access to AI tools but are in control of high-risk and high-value decisions.

“Bias in AI can creep in from almost anywhere and will do so unless you’re extremely careful. Challenges come into three buckets. You’ve got privacy challenges, data quality, completeness challenges, and then really training AI systems on data that’s biased, which is easily done,” says Sidgreaves. She emphasizes it is vital to ensure that data is up-to-date, accurate, and clean. High-quality data enhances the reliability and performance of AI models. Regular audits and data quality checks can help maintain the integrity of data.

An agile approach to AI implementation

ROI is always top of mind for business leaders looking to cash in on the promised potential of AI systems. As technology continues to evolve rapidly and the potential use cases of AI grow, starting small, creating measurable benchmarks, and adopting an agile approach can ensure success in scaling solutions. By starting with pilot projects and scaling successful initiatives, companies can manage risks and optimize resources. Sidgreaves, Szostek, and Lodh stress that while it may be tempting to throw everything at the wall and see what sticks, accessing the greatest returns from expanding AI tools means remaining flexible, strategic, and iterative. 

In insurance, two areas where AI has a significant ROI impact are risk and operational efficiency. Sidgreaves underscores that reducing manual processes is essential for large, heritage organizations, and generative AI and large language models (LLMs) are revolutionizing this aspect by significantly diminishing the need for manual activities.

To illustrate her point, she cites a specific example: “Consider the task of reviewing and drafting policy wording. Traditionally, this process would take an individual up to four weeks. However, with LLMs, this same task can now be completed in a matter of seconds.”  

Lodh adds that establishing ROI at the project’s onset and implementing cross-functional metrics are crucial for capturing a comprehensive view of a project’s impact. For instance, using LLMs for writing code is a great example of how IT and information security teams can collaborate. By assessing the quality of static code analysis generated by LLMs, these teams can ensure that the code meets security and performance standards.

“It’s very hard because technology is changing so quickly,” says Szostek. “We need to truly apply an agile approach, do not try to prescribe all the elements of the future deliveries in 12, 18, 24 months. We have to test and learn and iterate, and also fail fast if that’s needed.” 

Navigating the future of the AI era 

The rapid evolution of the digital age continues to bring immense opportunities for enterprises globally, from the C-suite to the factory floor. With no shortage of use cases and promises to boost efficiencies, drive innovation, and improve customer and employee experiences, few business leaders dismiss the proliferation of AI as mere hype. However, the successful and responsible implementation of AI requires a careful balance of strategy, transparency, and robust data privacy and security measures.

  • “It’s really easy as technology people to be driven by the next core thing, but we would have to be solving a business problem. So the key is to always ensure you know what value you’re bringing to the business or to the customer with the AI. And actually always ask yourself the question, do we even need AI to solve that problem?” — Alex Sidgreaves, chief data officer, Zurich Insurance

Fully harnessing the power of AI while maintaining trust means defining clear business values, ensuring accountability, managing data privacy, balancing innovation with ethical use, and staying ahead of future trends. Enterprises must remain vigilant and adaptable, committed to ethical practices and an agile approach to thrive in this rapidly changing business landscape.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

The US physics community is not done working on trust

In April 2024, Nature released detailed information about investigations into claims made by Ranga Dias, a physicist at the University of Rochester, in two high-profile papers the journal had published about the discovery of room-temperature superconductivity. Those two papers, which showed evidence of fabricated data, were eventually retracted, along with other papers from the Dias group on related physics, including one in Physical Review Letters.

This work made it into top journals because reviewers are used to being able to trust that data have not been so completely manipulated, and Dias’s experiments required very high pressures that other labs could not easily replicate. One natural reaction from the physics community would be “How could we ever have let this happen?” But another should be “Here we go again!” 

Alas, a pattern of similar behavior has been known for at least two decades. The history of such deceptions led the American Physical Society (APS) to study occurrences of fabrication, falsification, plagiarism, and harassment, and to create structures to address the issue. The APS work helped solidify community standards, but ethical violations are still a critical problem. 

Back in 2003, in response to two high-profile cases of premeditated fraud in physics, one of them remarkably similar to the cases being discussed now, the APS created a Task Force on Ethics. It conducted surveys to learn about the kind of ethics training physics researchers receive, and to determine the community’s awareness of a variety of ethics issues. The most compelling responses came from a survey of APS “junior members” (those who had earned their PhD in the previous three years). Approximately 50% of these members responded, illustrating tremendous concern about a number of ethics violations they had either observed or been forced to participate in. A 2004 Physics Today article that presented the survey data showed the types of ethics violations reported, including instances of data fabrication, fraud, and plagiarism (the federal definition of research misconduct). It also brought to light serious accusations of bullying and sexual harassment. The survey data revealed that ethics education was casual at best. 

Following the publication of the survey results and many discussions within the physics community, the APS issued an ethics statement focused on respectful treatment of subordinates. It also charged a task force with improving resources for ethics education, resulting in a collection of physics-centric case studies to facilitate training and discussion on ethical matters. And together with the scientific community, the APS’s journals established an explicit focus on publication ethics. 

In 2018 the APS updated and consolidated its ethics statements and expanded the scope of ethical misbehaviors to include harassment, sexual misconduct, conflicts of commitment, and misuse of public funds. The resulting Ethics Guidelines were adopted by the APS Council in 2019, and at the same time a standing Ethics Committee was established to monitor ethics issues in the physics community. Continuing its focus on education, the APS collaborated with the American Association of Physics Teachers (AAPT) to develop additional materials. The online guide Effective Practices for Physics Programs (known as EP3) is an excellent resource, designed to facilitate efforts by departments and other groups to educate our community through discussions. We particularly recommend the chapter titled “Guide to Ethics.” The APS has joined the Committee on Publication Ethics and the International Association of Scientific, Technical, and Medical Publishers to combat the threat posed by paper mills.

What sort of impact have these actions had? In 2020, the APS Ethics Committee, in partnership with the Statistical Research Center of the American Institute of Physics, conducted two additional surveys, described in 2023 and 2024 articles in Physics Today. One targeted early-career members (those who had earned their PhD within the previous five years) and graduate students for comparison to the 2004 survey results, and the other focused on physics department chairs in the US. The surveys showed that ethics education in physics departments had improved in the intervening 15 years, but that bullying and sexual harassment were still problems for a number of members. Importantly, most cases of ethical violations experienced or observed by this group go unreported, for fear of inaction or reprisals. When the results of the two surveys were compared, clear differences emerged between the perspectives of department chairs and those of students and postdocs on the extent of ethical violations and the best way to deliver ethics training.

These surveys showed that improved education alone is not enough to sustain a culture of ethics in physics. They uncovered suggestive patterns to explain why some complaints about ethical violations are reported and resolved but most are not. The main reason young scientists keep quiet about fabrication, falsification, plagiarism, or harassment is that they fear complaints will destroy their careers while the perpetrators go untouched. In cases that were resolved, there were people that those with complaints trusted well enough to share their concerns, and those people in turn had enough power and connections to follow through and find a resolution. We call this a trust network. Key figures in a trust network could be an associate chair, an ombudsperson, or a faculty member. These people take it on themselves to listen to concerns, whoever raises them, and bring them to the institution’s attention. Indeed, similar networks would be highly valuable in any institution that employs professional scientists for research and development, since unethical behavior can happen anywhere. How to create and nurture such networks is a matter that needs more attention. 

Just as reviewers and journal editors need to be able to trust that data in a paper are not fabricated or falsified, all participants in the scientific enterprise need to be able to trust that their institutions fully support them as ethical people. Ranga Dias’s graduate students had worries about data quality early on but were caught in a power dynamic. Problems might have been recognized earlier if the students had been able to be fully engaged in the institutional response.

Fostering trust networks and continuing to use education to build an understanding of all the nuances involved in ethical decision-making are powerful tools to reinforce ethical behavior. We need to ingrain them as deeply as technical expertise.

Frances Houle is a senior scientist in the Chemical Sciences and Molecular Biophysics and Integrated Bioimaging Divisions at Lawrence Berkeley National Laboratory and was chair of the APS Ethics Committee in 2021. 

Kate Kirby is chief executive officer emerita of the APS and senior physicist (retired) and former associate director of the Harvard-Smithsonian Center for Astrophysics.

Laura Greene is the chief scientist of the National High Magnetic Field Laboratory, the Marie Krafft Professor of Physics at Florida State University, and the 2017 APS president. She presently serves on the President’s Council of Advisors on Science and Technology. 

Michael Marder is professor of physics, director of the Center for Nonlinear Dynamics, and executive director of UTeach at the University of Texas at Austin and was the founding chair of the APS Ethics Committee, serving in 2019 and 2020.

Transforming the energy industry through disruptive innovation

In the rhythm of our fast-paced lives, most of us don’t stop to think about where electricity comes from or how it powers homes, industries, and the technologies that connect people around the world. As populations and economies grow, energy demands are set to increase by 50% by 2050, challenging century-old energy systems to adapt with innovative and agile solutions. This comes at a time when climate change is making its presence felt more than ever; 2023 marked the warmest year since records began in 1850, crossing the 1.5 degrees global warming threshold.

Nadège Petit of Schneider Electric confronts this challenge head-on, saying, “We have no choice but to change the way we produce, distribute, and consume energy, and do it sustainably to tackle both the energy and climate crises.” She explains further that digital technologies are key to navigating this path, and Schneider Electric’s AI-enabled IoT solutions can empower customers to take control of their energy use, enhancing efficiency and resiliency.

Petit acknowledges the complexity of crafting and implementing robust sustainability strategies. She highlights the importance of taking an incremental, stepwise approach and adopting open standards to drive near-term impact while laying the foundation for long-term decarbonization goals.

Because the energy landscape is evolving rapidly, it’s critical to not just keep pace but to anticipate and shape the future. Much like actively managing health through food and fitness regimes, energy habits need to be monitored as well. This can transform passive consumers into energy prosumers: those who produce, consume, and manage energy. Petit’s vision is one where “buildings and homes generate their own energy from renewable sources, use what’s needed, and feed the excess back to the grid.”

To catalyze this transformation, Petit underscores the power of collaboration and innovation. For example, Schneider Electric’s SE Ventures invests in startups to provide new perspectives and capabilities to accelerate sustainable energy solutions. 

“It’s all about striking a balance to ensure that our relationships with startups are mutually beneficial, knowing when to provide guidance and resources when they need it, but also when to step back and allow them to thrive independently,” says Petit.

This episode of Business Lab is produced in partnership with Schneider Electric. 

Full transcript 

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. 

Our topic today is disruptive innovation in the energy industry and beyond. We use energy every day. It powers our homes, buildings, economies, and lifestyles, but where it came from or how our use affects the global energy ecosystem is changing, and our energy ecosystem needs to change with it.

 My guest is Nadège Petit, the chief innovation officer at Schneider Electric. 

This podcast is produced in partnership with Schneider Electric. 

Welcome, Nadège. 

Nadège Petit: Hi, everyone. Thank you for having me today. 

Laurel: Well, we’re glad you’re here. 

Let’s start off with a simple question to build that context around our conversation. What is Schneider Electric’s mission? And as the chief innovation officer leading its Innovation at the Edge team, what are some examples of what the team is working on right now? 

Nadège: Let me set up this scene a little bit here. In recent years, our world has been shaped by a series of significant disruptions. The pandemic has driven a sharp increase in the demand for digital tools and technologies, with a projected 6x growth in the number of IoT devices between 2020 and 2030, and a 140x growth in IP traffic between 2020 and 2040.

Simultaneously, there has been a parallel acceleration in energy demands. Electrical consumption has been increasing by 5,000 terawatt hours every 10 years over the past two decades. This is set to double in the next 10 years and then quadruple by 2040. This is amplified by the most severe energy crisis we have faced since the 1970s. Over 80% of carbon emissions are coming from energy, so electrifying the world and decarbonizing [the] energy sector is a must. We cannot overlook the climate crisis while meeting these energy demands. In 2023, the global average temperature was the warmest on record since 1850, surpassing the 1.5 degrees global warming limit. So, we have no choice but to change the way we produce, distribute, and consume energy, and do it sustainably to tackle both the energy and climate crises. This gives us a rare opportunity to reimagine and create a clean energy future we want.

Schneider Electric, as an energy management and digital automation company, aims to be the digital partner for sustainability and efficiency for our customers. With end-to-end experience in the energy sector, we are uniquely positioned to help customers digitize, electrify, and deploy sustainable technologies to help them progress toward net-zero.

As for my role, we know that innovation is pivotal to drive the energy transition. The Innovation at the Edge team leads the way in discovering, developing, and delivering disruptive technologies that will define a more digital, electric, and sustainable energy landscape. We function today as an innovation engine, bridging internal and external innovation, to introduce new solutions, services and businesses to the market. Ultimately, we are crafting the future businesses for Schneider Electric in this sector. And to do this, we nourish a culture that recognizes and celebrates innovation. We welcome new ideas, consider new perspectives inside and outside the organization, and seek out unusual combinations that can kindle revolutionary ideas. We like to think of ourselves as explorers and forces of change, looking for and solving new customer problems. So curiosity and daring to disrupt are in our DNA. And this is the true spirit of Innovation at the Edge at Schneider Electric. 

Laurel: And it’s clear that urgency certainly comes out, especially for enterprises. Because they’re trying to build strong sustainability strategies to not just reach those environmental, social, and governance, or ESG, goals and targets; but also to improve resiliency and efficiency. What’s the role of digital technologies when we think about this all together in enabling a more sustainable future? 

Nadège: We see a sustainable future, and our goal is to enable the shift to an all-electric and all-digital world. That kind of transition isn’t possible without digital technology. We see digital as a key enabler of sustainability and decarbonization. The technology is already available now, it’s a matter of acceleration and adoption of it. And all of us, we have a role to play here. 

At Schneider Electric, we have built a suite of solutions that enable customers to accelerate their sustainability journey. Our flagship suite of IoT-enabled solutions empowers customers to monitor energy, carbon, and resource usage, enabling them to implement strategies for efficiency, optimization, and resiliency. We have seen remarkable success stories of clients leveraging our digital EcoStruxure solution in buildings, utilities, data centers, hospitality, healthcare, and more, all over the place. If I were to take one example, I can point to our customer PG&E, a leading California utility that everybody knows; they are using our EcoStruxure distributed energy resources management system, we call it DERMS, to manage grid reliability more effectively, which is crucial in the face of extreme weather events impacting the grid and consumers.

Schneider has also built an extensive ecosystem of partners because we do need to do it at scale together to accelerate digital transformation for customers. We also invest in cutting-edge technologies that make need-based collaboration and co-innovation possible. It’s all about working together towards one common goal. Ultimately the companies that embrace digital transformation will be the ones that will thrive on disruption. 

Laurel: It’s clear that building a strong sustainability strategy and then following through on the implementation does take time, but addressing climate change requires immediate action. How does your team at Schneider Electric as a whole work to balance those long-term commitments and act with urgency in the short term? It sounds like that internal and external innovation opportunity really could play a role here. 

Nadège: Absolutely. You’re absolutely right. We already have many of the technologies that will take us to net-zero. For example, 70% of CO2 emissions can be removed with existing technologies. By deploying electrification and digital solutions, we can get to our net-zero goals much faster. We know it’s a gradual process and as you already discussed previously, we do need to accelerate the adoption of it. By taking an incremental stepwise approach, we can drive near-term impact while laying the foundation for long-term decarbonization goals. 

Building on the same example of PG&E, which I referenced earlier; through our collaboration, piece by piece progressively, we are building the backbone of a sustainable, digitized, and reliable energy future in California with the deployment of EcoStruxure DERMS. As grid reliability and flexibility become more important, DERMS enable us to keep pace with 21st-century grid demands as they evolve. 

Another critical component of moving fast is embracing open systems and platforms, creating an interoperable ecosystem. By adopting open standards, you empower a wide range of experts to collaborate, including startups, large organizations, senior decision-makers, and those on the ground. This future-proof investment ensures flexible and scalable solutions that avoid expensive upgrades and obsolescence down the line. That is why at Innovation at the Edge we’re creating a win-win partnership to push market adoption of the innovative technology available today, while laying the foundation for an even more innovative tomorrow. Innovation at the Edge today provides the space to nurture those ideas, collaborate, iterate, learn, and grow at pace.

Laurel: What’s your strategy for investing in, and then adopting those disruptive technologies and business models, especially when you’re trying to build that kind of innovation for tomorrow? 

Nadège: I strongly believe innovation is a key driver of the energy transition. It’s very hard to create the right conditions for consistent innovation, as we discuss short-term and long-term. I want to quote again the famous book from Clayton Christensen, The Innovator’s Dilemma, about how big organizations can get so good at what they are already doing that they struggle to adapt as the market changes. And we are in this dilemma. So we do need to stay ahead. Leaders need to grasp disruptive technology, put customers first, foster innovation, and tackle emerging challenges head on. The phrase “that’s no longer how we do it” really resonates with me as I look at the role of innovation in the energy space.

At Schneider, innovation is more than just a buzzword. It’s our strategy for navigating the energy transition. We are investing in truly new and disruptive ideas, tech, and business models, taking the risk and the challenge. We complement our current offering constantly, and we include the new prosumer business that we’re building, and this is pivotal to accelerate the energy transition. We foster open innovation through investment and incubation of cutting-edge technology in energy management, electrical mobility, industrial automation, cybersecurity, artificial intelligence, sustainability, and other topics that will help to go through this innovation. I also can quote some joint ventures that we are creating with partners like GreenStruxure or AlphaStruxure. Those are offering energy-as-a-service solutions, so a new business model enabling organizations to leverage existing technology to achieve decarbonization at scale. As an example, GreenStruxure is helping Bimbo Bakeries move closer to net-zero with microgrid systems at six of their locations. This will provide 20% of Bimbo Bakeries USA’s energy usage and save an estimated 1,700 tons of CO2 emissions per year.

Laurel: Yeah, that’s certainly remarkable. Following up on that, how does Schneider Electric define prosumer and how does that audience actually fit into Schneider Electric’s strategy when you’re trying to develop these new models? 

Nadège: Prosumer is my favorite word. Let’s redefine it again. Everybody’s speaking of prosumer, but what is prosumer? Prosumer refers to consumers that are actively involved in energy management; producing and consuming their own energy using technologies like solar panels, EV chargers, EV batteries, and EV storage. This is all digitally enabled. So everybody now, the customers, industrial customers, want to understand their energy. So becoming a prosumer comes with perks like lower energy bills. Fantastic, right? Increased independence, clean energy use, and potential compensation from utility providers. It’s beneficial to all of us; it’s beneficial to our planet, it’s beneficial to the decarbonization of the world. Imagine a future where buildings and homes generate their own energy from renewable sources, use what’s needed, and feed the excess back to the grid. This is a fantastic opportunity, and the interest in this is massive.

To give you some figures: in 2019 we saw 100 gigawatts of new solar PV capacity deployed globally, and by last year this number had nearly quadrupled. So transformation is happening now. Electric vehicles, as an example, their sales have been soaring too, with a projected 14 million sales by 2023, six times the 2019 number. These technologies are already making a dent in emissions and the energy crisis.

However, the journey to become a prosumer is complex. It’s all about scale and adoption, and it involves challenges with asset integration, grid modernization, regulatory compliance. So we are all part of this ecosystem, and it takes a lot of leadership to make it happen. So at Innovation at the Edge, we’re creating an ecosystem of solutions to streamline the prosumer journey from education and management to purchasing, installation, management, and maintenance of these new distributed resources. What we are doing, we are bringing together internal innovations that we already have in-house at Schneider Electric, like micro-grid, EV charging solutions, battery storage, and more with external innovation from portfolio companies. I can quote companies like Qmerit, EnergySage, EV Connect, Uplight, and AutoGrid, and we deliver end-to-end solutions from grid to prosumer. 

I want to insist one more time, it’s very important to accelerate and to be part of this accelerated adoption. These efforts are not just about strengthening our business, they’re about simplifying the energy ecosystem and moving the industry toward greater sustainability. It’s a collaborative journey that’s shaping the future of energy, and I’m very excited about this. 

Laurel: Focusing on that kind of urgency, innovation in large companies can be hampered by bureaucracy and move slowly. What are some best practices for innovation without all of those delays?

Nadège: At Schneider Electric, we are not strangers to innovation, specifically in the energy management and industrial automation space. But to really push the envelope, we look beyond our walls for fresh ideas and expertise. And this is where SE Ventures comes in. It’s our one-billion-euro venture capital fund, from which we make bold bets and bring disruptive ideas to life by supporting and investing in startups that complement our current offering and explore future business. So based in Silicon Valley, but with a global reach, SE Ventures leverages our market knowledge and customer proximity to drive near-term value and commercial relationships with our businesses, customers, and partners.

We also focus on partnership and incubation. So through partnerships with startups, we accelerate time to market. We accelerate the R&D roadmap and explore new products, new markets with startups. When it comes to incubation, we seek out game-changing ideas and entrepreneurs. We are providing mentorship, resources, and market insight at every stage of their journey. As an example, we also invested in funds like E14, the fund that started out at MIT Media Lab, to gain early insight into disruptive trends and technology. It’s very important to be early-stage here. 

So SE Ventures has successfully today developed multiple unicorns in our portfolio. We’re working with several other high-growth companies, targeted to become future unicorns in key strategic areas. That is totally consistent with Schneider’s mission. 

It’s all about striking a balance to ensure that our relationships with startups are mutually beneficial, knowing when to provide guidance and resources when they need it, but also when to step back and allow them to thrive independently.

Laurel: With that future lens on, what kind of trends or developments in the energy industry are you seeing, and how are you preparing for them? Are you getting a lot of that kind of excitement from those startups and venture fund ideas? 

Nadège: Yeah, absolutely. There are multiple trends. You need to listen to startups, to innovators, to people coming up with bold ideas. I want to highlight a couple of those. The energy industry is set to see major shifts. We know it, and we want to be part of it. We discussed prosumers. Prosumer is something very important. A lot of people now understand their body, doing exercises, monitoring it; tomorrow, people will all monitor their energy. Those are prosumers. We believe that prosumers, that’s individuals and businesses, they’re central to the energy transition. And this is a key focal point for us.

Another trend that we also discuss is digital and also AI. AI has the potential to be transformative as we build the new energy landscape. One example is AI-powered virtual power plants, or what we call VPP, that can optimize a large portfolio of distributed energy resources to ensure greater grid resiliency. Increasingly, AI can be at the heart of the modern electrical grid. So at Schneider Electric, we are watching those trends very carefully. We are listening to the external world, to our customers, and we are showing that we are positioning our solution and global hubs to best serve the needs of our customers. 

Laurel: Lastly, as a woman in a leadership position, could you tell us how you’ve navigated your career so far, and how others in the industry can create a more diverse and inclusive environment within their companies and teams? 

Nadège: An inclusive environment starts with us as leaders. Establishing a culture where we value differences, different opinions, believe in equal opportunity for everyone, and foster a sense of belonging, is something very important in this environment. It’s also important for organizations to create commitments around diversity, equity, and inclusion, and communicate them publicly so it drives accountability, and report on the progress and how we make it happen. 

I was truly fortunate to have started and grown my career at a company like Schneider Electric where I was surrounded by people who empowered me to be my best self. This is something that should drive all women to be the best versions of themselves. It wasn’t always easy. I have learned how important it is to have a voice and to be bold, to speak up for what you are passionate about, and to use that passion to drive impact. These are values I also work to instill in my own teenage daughters, and I’m thrilled to see them finding their own passion within STEM. So the next generation is the driving force in shaping a more sustainable world, and it’s crucial that we focus on leaving the planet a better and more equal place where they can thrive.

Laurel: Words to the wise. Thank you so much Nadege for joining us today on the Business Lab. 

Nadège: Thank you. 

Laurel: That was Nadège Petit, the chief innovation officer at Schneider Electric, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review. 

That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the global director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com. 

This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Giro Studios. Thanks for listening. 

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

A wave of retractions is shaking physics

Recent highly publicized scandals have gotten the physics community worried about its reputation—and its future. Over the last five years, several claims of major breakthroughs in quantum computing and superconducting research, published in prestigious journals, have disintegrated as other researchers found they could not reproduce the blockbuster results. 

Last week, around 50 physicists, scientific journal editors, and emissaries from the National Science Foundation gathered at the University of Pittsburgh to discuss the best way forward. “To be honest, we’ve let it go a little too long,” says physicist Sergey Frolov of the University of Pittsburgh, one of the conference organizers.

The attendees gathered in the wake of retractions from two prominent research teams. One team, led by physicist Ranga Dias of the University of Rochester, claimed that it had invented the world’s first room-temperature superconductor in a 2023 paper in Nature. After independent researchers reviewed the work, a subsequent investigation from Dias’s university found that he had fabricated and falsified his data. Nature retracted the paper in November 2023. Last year, Physical Review Letters retracted a 2021 publication on unusual properties in manganese sulfide that Dias co-authored.

The other high-profile research team consisted of researchers affiliated with Microsoft working to build a quantum computer. In 2021, Nature retracted the team’s 2018 paper that claimed the creation of a pattern of electrons known as a Majorana particle, a long-sought breakthrough in quantum computing. Independent investigations of that research found that the researchers had cherry-picked their data, thus invalidating their findings. Another less-publicized research team pursuing Majorana particles met a similar fate, with Science retracting, in 2022, a 2017 article claiming indirect evidence of the particles.

In today’s scientific enterprise, scientists perform research and submit the work to editors. The editors assign anonymous referees to review the work, and if the paper passes review, the work becomes part of the accepted scientific record. When researchers do publish bad results, it’s not clear who should be held accountable—the referees who approved the work for publication, the journal editors who published it, or the researchers themselves. “Right now everyone’s kind of throwing the hot potato around,” says materials scientist Rachel Kurchin of Carnegie Mellon University, who attended the Pittsburgh meeting.

Much of the three-day meeting, named the International Conference on Reproducibility in Condensed Matter Physics (a field that encompasses research into various states of matter and why they exhibit certain properties), focused on the basic scientific principle that an experiment and its analysis must yield the same results when repeated. “If you think of research as a product that is paid for by the taxpayer, then reproducibility is the quality assurance department,” Frolov told MIT Technology Review. Reproducibility offers scientists a check on their work, and without it, researchers might waste time and money on fruitless projects based on unreliable prior results, he says. 

In addition to presentations and panel discussions, there was a workshop during which participants split into groups and drafted ideas for guidelines that researchers, journals, and funding agencies could follow to prioritize reproducibility in science. The tone of the proceedings stayed civil and even lighthearted at times. Physicist Vincent Mourik of Forschungszentrum Jülich, a German research institution, showed a photo of a toddler eating spaghetti to illustrate his experience investigating another team’s now-retracted experiment. Occasionally the discussion almost sounded like a couples counseling session, with NSF program director Tomasz Durakiewicz asking a panel of journal editors and a researcher to reflect on their “intimate bond based on trust.”

But researchers did not shy from directly criticizing Nature, Science, and the Physical Review family of journals, all of which sent editors to attend the conference. During a panel, physicist Henry Legg of the University of Basel in Switzerland called out the journal Physical Review B for publishing a paper on a quantum computing device by Microsoft researchers that, for intellectual-property reasons, omitted information required for reproducibility. “It does seem like a step backwards,” Legg said. (Sitting in the audience, Physical Review B editor Victor Vakaryuk said that the paper’s authors had agreed to release “the remaining device parameters” by the end of the year.) 

Journals also tend to “focus on story,” said Legg, which can lead editors to be biased toward experimental results that match theoretical predictions. Jessica Thomas, the executive editor of the American Physical Society, which publishes the Physical Review journals, pushed back on Legg’s assertion. “I don’t think that when editors read papers, they’re thinking about a press release or [telling] an amazing story,” Thomas told MIT Technology Review. “I think they’re looking for really good science.” Describing science through narrative is a necessary part of communication, she says. “We feel a responsibility that science serves humanity, and if humanity can’t understand what’s in our journals, then we have a problem.” 

Frolov, whose independent review with Mourik of the Microsoft work spurred its retraction, said he and Mourik have had to repeatedly e-mail the Microsoft researchers and other involved parties to insist on data. “You have to learn how to be an asshole,” he told MIT Technology Review. “It shouldn’t be this hard.” 

At the meeting, editors pointed out that mistakes, misconduct, and retractions have always been a part of science in practice. “I don’t think that things are worse now than they have been in the past,” says Karl Ziemelis, an editor at Nature.

Ziemelis also emphasized that “retractions are not always bad.” While some retractions occur because of research misconduct, “some retractions are of a much more innocent variety—the authors having made or being informed of an honest mistake, and upon reflection, feel they can no longer stand behind the claims of the paper,” he said while speaking on a panel. Indeed, physicist James Hamlin of the University of Florida, one of the presenters and an independent reviewer of Dias’s work, discussed how he had willingly retracted a 2009 experiment published in Physical Review Letters in 2021 after another researcher’s skepticism prompted him to reanalyze the data. 

What’s new is that “the ease of sharing data has enabled scrutiny to a larger extent than existed before,” says Jelena Stajic, an editor at Science. Journals and researchers need a “more standardized approach to how papers should be written and what needs to be shared in peer review and publication,” she says.

Focusing on the scandals “can be distracting” from systemic problems in reproducibility, says attendee Frank Marsiglio, a physicist at the University of Alberta in Canada. Researchers aren’t required to make unprocessed data readily available for outside scrutiny. When Marsiglio has revisited his own published work from a few years ago, sometimes he’s had trouble recalling how his former self drew those conclusions because he didn’t leave enough documentation. “How is somebody who didn’t write the paper going to be able to understand it?” he says.

Problems can arise when researchers get too excited about their own ideas. “What gets the most attention are cases of fraud or data manipulation, like someone copying and pasting data or editing it by hand,” says conference organizer Brian Skinner, a physicist at Ohio State University. “But I think the much more subtle issue is there are cool ideas that the community wants to confirm, and then we find ways to confirm those things.”

But some researchers may publish bad data for a more straightforward reason. The academic culture, popularly described as “publish or perish,” creates an intense pressure on researchers to deliver results. “It’s not a mystery or pathology why somebody who’s under pressure in their work might misstate things to their supervisor,” said Eugenie Reich, a lawyer who represents scientific whistleblowers, during her talk.

Notably, the conference lacked perspectives from researchers based outside the US, Canada, and Europe, and from researchers at companies. In recent years, academics have flocked to companies such as Google, Microsoft, and smaller startups to do quantum computing research, and they have published their work in Nature, Science, and the Physical Review journals. Frolov says he reached out to researchers from a couple of companies, but “that didn’t work out just because of timing.” He aims to include researchers from that arena in future conversations.

After discussing the problems in the field, conference participants proposed feasible solutions for sharing data to improve reproducibility. They discussed how to persuade the community to view data sharing positively, rather than seeing the demand for it as a sign of distrust. They also brought up the practical challenges of asking graduate students to do even more work by preparing their data for outside scrutiny when it may already take them over five years to complete their degree. Meeting participants aim to publicly release a paper with their suggestions. “I think trust in science will ultimately go up if we establish a robust culture of shareable, reproducible, replicable results,” says Frolov. 

Sophia Chen is a science writer based in Columbus, Ohio. She has written for the society that publishes the Physical Review journals, and for the news section of Nature.

Building a more reliable supply chain

In 2021, when a massive container ship became wedged in the Suez Canal, you could almost hear the collective sigh of frustration around the globe. It was a here-we-go-again moment in a year full of supply chain hiccups. Every minute the ship remained stuck represented about $6.7 million in paralyzed global trade.

The 12 months leading up to the debacle had seen countless manufacturing, production, and shipping snags, thanks to the covid-19 pandemic. The upheaval illuminated the critical role of supply chains in consumers’ everyday lives—nothing, from baby formula to fresh produce to ergonomic office chairs, seemed safe.

For companies producing just about any physical product, the many “black swan” events (catastrophic incidents that are nearly impossible to predict) of the last four years illustrate the importance of supply chain resilience: businesses’ ability to anticipate disruptions, respond to them, and bounce back. Yet many organizations still don’t have robust measures in place for future setbacks.

In a poll of 250 business leaders conducted by MIT Technology Review Insights in partnership with Infosys Cobalt, just 12% say their supply chains are in a “fully modern, integrated” state. Almost half of respondents’ firms (47%) regularly experience some supply chain disruptions: nearly one in five (19%) say they feel “constant pressure,” and 28% experience “occasional disruptions.” A mere 6% say disruptions aren’t an issue. But there’s hope on the horizon. In 2024, rapidly advancing technologies are making transparent, collaborative, and data-driven supply chains more realistic.

“Emerging technologies can play a vital role in creating more sustainable and circular supply chains,” says Dinesh Rao, executive vice president and co-head of delivery at digital services and consulting company Infosys. “Recent strides in artificial intelligence and machine learning, blockchain, and other systems will help build the ability to deliver future-ready, resilient supply chains.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Unlocking the power of sustainability

According to UN climate experts, 2023 was the warmest year on record. This puts the heat squarely on companies to accelerate their sustainability efforts. “It’s quite clear that the sense of urgency is increasing,” says Jonas Bohlin, chief product officer for environmental, social, and governance (ESG) platform provider Position Green.

That pressure is coming from all directions. New regulations, such as the Corporate Sustainability Reporting Directive (CSRD) in the EU, require that companies publicly report on their sustainability efforts. Investors want to channel their money into green opportunities. Customers want to do business with environmentally responsible companies. And organizations’ reputations for sustainability are playing a bigger role in attracting and retaining employees.

On top of all these external pressures, there is also a significant business case for sustainability efforts. When companies conduct climate risk audits, for example, they are confronted with escalating threats to business continuity from extreme weather events such as floods, wildfires, and hurricanes, which are occurring with increasing frequency and severity.

Mitigating the risks associated with direct damage to facilities and assets, supply chain disruptions, and service outages very quickly becomes a high-priority issue of business resiliency and competitive advantage. A related concern is the impact of climate change on the availability of natural resources, such as water in drought-prone regions like the American Southwest.

Much more than carbon

“The biggest misconception that people have is that sustainability is about carbon emissions,” says Pablo Orvananos, global sustainability consulting lead at Hitachi Digital Services. “That’s what we call carbon tunnel vision. Sustainability is much more than carbon. It’s a plethora of environmental issues and social issues, and companies need to focus on all of it.”

Companies looking to act will find a great deal of complexity surrounding corporate sustainability efforts. They are responsible not only for their own direct emissions and fossil fuel use (Scope 1), but also for the emissions from the energy they purchase (Scope 2) and from their supply chain partners (Scope 3). New regulations require organizations to look beyond just emissions. Companies must ask questions about a broad range of environmental and societal issues: Are supply chain partners sourcing raw materials in an environmentally conscious manner? Are they treating workers fairly?

Sustainability can’t be siloed into one specific task, such as decarbonizing the data center. The only way to achieve sustainability is with a comprehensive, holistic approach, says Daniel Versace, an ESG research analyst at IDC. “A siloed approach to ESG is an approach that’s bound to fail,” he adds.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

The worst technology failures of 2023

Welcome to our annual list of the worst technologies. This year, one technology disaster in particular holds lessons for the rest of us: the Titan submersible that imploded while diving to see the Titanic.

Everyone had warned Stockton Rush, the sub’s creator, that it wasn’t safe. But he believed innovation meant tossing out the rule book and taking chances. He set aside good engineering in favor of wishful thinking. He and four others died. 

To us it shows how the spirit of innovation can pull ahead of reality, sometimes with unpleasant consequences. It was a phenomenon we saw time and again this year, like when GM’s Cruise division put robotaxis into circulation before they were ready. Was the company in such a hurry because it’s been losing $2 billion a year? Others find convoluted ways to keep hopes alive, like a company that is showing off its industrial equipment but is quietly still using bespoke methods to craft its lab-grown meat. The worst cringe, though, is when true believers can’t see the looming disaster, but we do. That’s the case for the new “Ai Pin,” developed at a cost of tens of millions, that’s meant to replace smartphones. It looks like a titanic failure to us. 

Titan submersible

This summer we were glued to our news feeds as drama unfolded 3,500 meters below the ocean’s surface. An experimental submersible with five people aboard was lost after descending to see the wreck of the Titanic.

The OceanGate submersible underwater

GDA VIA AP IMAGES

The Titan was a radical design for a deep-sea submersible: a minivan-size carbon fiber tube, operated with a joystick, that aerospace engineer Stockton Rush believed would open the depths to a new kind of tourism. His company, OceanGate, had been warned the vessel hadn’t been proved to withstand 400 atmospheres of pressure. His answer? “I think it was General MacArthur who said ‘You’re remembered for the rules you break,’” Rush told a YouTuber.

But breaking the rules of physics doesn’t work. On June 22, four days after contact was lost with the Titan, a deep-sea robot spotted the sub’s remains. It was most likely destroyed in a catastrophic implosion.

In addition to Rush, the following passengers perished:

  • Hamish Harding, 58, tourist
  • Shahzada Dawood, 48, tourist
  • Suleman Dawood, 19, tourist
  • Paul-Henri Nargeolet, 77, Titanic expert

More: The Titan Submersible Was “an Accident Waiting to Happen” (The New Yorker), OceanGate Was Warned of Potential for “Catastrophic” Problems With Titanic Mission (New York Times), OceanGate CEO Stockton Rush said in 2021 he knew he’d “broken some rules” (Business Insider)


Lab-grown meat

Instead of killing animals for food, why not manufacture beef or chicken in a laboratory vat? That’s the humane idea behind “lab-grown meat.”

The problem, though, is making the stuff at a large scale. Take Upside Foods. The startup, based in Berkeley, California, had raised more than half a billion dollars and was showing off rows of big, gleaming steel bioreactors.

But journalists soon learned that Upside was a bird in borrowed feathers. Its big tanks weren’t working; it was growing chicken skin cells in much smaller plastic laboratory flasks. Thin layers of cells were then being manually scooped up and pressed into chicken pieces. In other words, Upside was using lots of labor, plastic, and energy to make hardly any meat.

Samir Qurashi, a former employee, told the Wall Street Journal he knows why Upside puffed up the potential of lab-grown food. “It’s the ‘fake it till you make it’ principle,” he said.

And even though lab-grown chicken has FDA approval, there’s doubt whether lab meat will ever compete with the real thing. Chicken goes for $4.99 a pound at the supermarket. Upside still isn’t saying how much the lab version costs to make, but a few bites of it sell for $45 at a Michelin-starred restaurant in San Francisco.

Upside has admitted the challenges. “We signed up for this work not because it’s easy, but because the world urgently needs it,” the company says.

More: I tried lab-grown chicken at a Michelin-starred restaurant (MIT Technology Review), The Biggest Problem With Lab-Grown Chicken Is Growing the Chicken (Bloomberg), Insiders Reveal Major Problems at Lab-Grown-Meat Startup Upside Foods (Wired)


Cruise robotaxi

Sorry, autopilot fans, but we can’t ignore the setbacks this year. Tesla just did a massive software recall after cars in self-driving mode slammed into emergency vehicles. But the biggest reversal was at Cruise, the division of GM that became the first company to offer driverless taxi rides in San Francisco, day or night, with a fleet exceeding 400 cars.

Cruise argues that robotaxis don’t get tired, don’t get drunk, and don’t get distracted. It even ran a full-page newspaper ad declaring that “humans are terrible drivers.”

a Cruise vehicle parked on the street in front of a residential home as a person descends a front staircase in the background

CRUISE

But Cruise forgot that to err is human—not what we want from robots. Soon, it was Cruise’s sensor-laden Chevy Bolts that started racking up noticeable mishaps, including dragging a pedestrian for 20 feet. This October, the California Department of Motor Vehicles suspended Cruise’s driverless permits, citing an “unreasonable risk to public safety.”

It’s a blow for Cruise, which has since laid off 25% of its staff and fired its CEO and cofounder, Kyle Vogt, a onetime MIT student. “We have temporarily paused driverless service,” Cruise’s website now reads. It says it’s reviewing safety and taking steps to “regain public trust.”

More: GM’s Self-Driving Car Unit Skids Off Course (Wall Street Journal), Important Updates from Cruise (Getcruise.com)


Plastic proliferation

Plastic is great. It’s strong, it’s light, and it can be pressed into just about any shape: lawn chairs, bobbleheads, bags, tires, or thread.

The problem is there’s too much of it, as Doug Main reported in MIT Technology Review this year. Humans make 430 million tons of plastic a year (significantly more than the weight of all people combined), but only 9% gets recycled. The rest ends up in landfills and, increasingly, in the environment. Not only does the average whale have kilograms of the stuff in its belly, but tiny bits of “microplastic” have been found in soft drinks, plankton, and human bloodstreams, and even floating in the air. The health effects of spreading microplastic pollution have barely been studied.

Awareness of the planetary scourge is growing, and some are calling for a “plastics treaty” to help stop the pollution. It’s going to be a hard sell. That’s because plastic is so cheap and useful. Yet researchers say the best way to cut plastic waste is not to make it in the first place.

More: Think your plastic is being recycled? Think again (MIT Technology Review),  Oh Good, Hurricanes Are Now Made of Microplastics (Wired)


Humane Ai Pin

The New York Times declared it Silicon Valley’s “Big, Bold Sci-Fi Bet” for what comes after the smartphone. The product? A plastic badge called the Ai Pin, with a camera, chips, and sensors.

Humane's AI Pin worn on a sweatshirt

HUMANE

A device to wean us off our phone addiction is a worthy goal, but this blocky $699 pin (which also requires a $24-a-month subscription) isn’t it. An early review called the device, developed by startup Humane Ai, “equal parts magic and awkward.” Emphasis on the awkward. Users must speak voice commands to send messages or chat with an AI (a laser projector in the pin will also display information on your hand). It weighs as much as a golf ball, so you probably won’t be attaching it to a T-shirt. 

It is the creation of a husband-and-wife team of former Apple executives, Bethany Bongiorno and Imran Chaudhri, who arrived at their product idea with guidance from a Buddhist monk named Brother Spirit, raising $240 million and filing 25 patents along the way, according to the Times.

Clearly, there’s a lot of thought, money, and engineering involved in its creation. But as The Verge’s wearables reviewer Victoria Song points out, “it flouts the chief rule of good wearable design: you have to want to wear the damn thing.” As it is, the Ai Pin is neat, but it’s still no competition for the lure of a screen.

More: Can A.I. and Lasers Cure Our Smartphone Addiction? (New York Times), Screens are good, actually (The Verge)


Social media superconductor

A room-temperature superconductor is a material that conducts electricity with zero resistance at ordinary temperatures. If it existed, it would make possible new types of batteries and powerful quantum computers, and bring nuclear fusion closer to reality. It’s a true Holy Grail.

So when a report emerged this July from Korea that a substance called LK-99 was the real thing, attention seekers on the internet were ready to share. The news popped up first in Asia, along with an online video of a bit of material floating above a magnet. Then came the booster fuel of social media hot takes.

Pellet of LK-99 being repelled by a magnet

HYUN-TAK KIM/WIKIMEDIA

“Today might have seen the biggest physics discovery of my lifetime,” said a post to X that has been viewed 30 million times. “I don’t think people fully grasp the implications … Here’s how it could totally change our lives.”

No matter that the post had been written by a marketer at a coffee company. It was exciting—and hilarious—to see well-funded startups drop their work on rockets and biotech to try to make the magic substance. Kenneth Chang, a reporter at the New York Times, dubbed LK-99 “the Superconductor of the Summer.”

But summer’s dreams soon ripped at the seams after real physicists couldn’t replicate the work. No, LK-99 is not a superconductor. Instead, impurities in the recipe could have misled the Korean researchers—and, thanks to social media, the rest of us too.

More: LK-99 Is the Superconductor of the Summer (New York Times), LK-99 isn’t a superconductor—how science sleuths solved the mystery (Nature)


Rogue geoengineering

Solar geoengineering is the idea of cooling the planet by releasing reflective materials into the atmosphere. It’s a fraught concept, because it won’t stop the greenhouse effect—only mask it. And who gets to decide to block the sun?

Mexico banned geoengineering trials early this year after a startup called Make Sunsets tried to commercialize the effort. Cofounder Luke Iseman had launched balloons in Mexico designed to release sulfur dioxide, which forms reflective particles in the sky. The startup is still selling “cooling credits” for $10 each on its website.

Injecting particles into the sky is theoretically cheap and easy, and climate warming is a huge threat. But moving too fast can create a backlash that stalls progress, according to my colleague James Temple. “They’re violating the rights of communities to dictate their own future,” one critic said.

Iseman remains unrepentant. “I don’t poll billions before taking a flight,” he has said. “I’m not going to ask for permission from every person in the world before I try to do a bit to cool Earth.” 

More: The flawed logic of rushing out extreme climate solutions (MIT Technology Review), Mexico bans solar geoengineering experiments after startup’s field tests (The Verge), Researchers launched a solar geoengineering test flight in the UK last fall (MIT Technology Review)

Augmenting the realities of work

Imagine an integrated workplace with 3D visualizations that augment presentations, interactive and accelerated onboarding, and controlled training simulations. This is the future of immersive technology that Blair MacIntyre, global head of Immersive Technology Research at JPMorgan Chase, is working to build. Augmented reality (AR) and virtual reality (VR) technologies can blend physical and digital dimensions and bring new innovations and efficiencies to business and customer experiences.

“These technologies can offer newer ways of collaborating over distance both synchronously and asynchronously than we can get with the traditional work technologies that we use right now,” says MacIntyre. “It’s these new ways to collaborate, ways of using the environment and space in new and interesting ways that will hopefully offer new value and change the way we work.”

Many enterprises are integrating VR into business practices like video conference calls. But having some participants in a virtual world and some sidelined creates imbalances in the employee experience. MacIntyre’s team is looking for ways to use AR/VR technologies that can be additive, like 3D data visualizations that enhance financial forecasting within a bank, not ones that overhaul entire experiences.

Although the potential of AR/VR is quickly evolving, it’s unlikely that customers’ interactions or workplace environments will be entirely moved to the virtual world anytime soon. Rather, MacIntyre’s immersive technology research looks to infuse efficiencies into existing practices.

“It’s thinking about how the technologies integrate and how we can add value where there is value and not trying to replace everything we do with these technologies,” MacIntyre says.

AI can help remove some of the tedium that has made immersive technologies impractical for widespread enterprise use in the past. Working in VR can make it hard to take notes or to reach traditional input devices and files. AI tools can take and transcribe notes and fill in other gaps to help remove that friction and eliminate redundancies.

Connected Internet of things (IoT) devices are also key to enabling AR/VR technologies. To create a valuable immersive experience, MacIntyre says, it’s imperative to know as much as possible about the user’s surroundings, as well as their needs, habits, and preferences.

“If we can figure out more ways of enabling people to work together in a distributed way, we can start enabling more people to participate meaningfully in a wider variety of jobs,” says MacIntyre.

This episode of Business Lab is produced in association with JPMorgan Chase.

Full transcript

Laurel: From MIT Technology Review, I’m Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

Our topic today is emerging technologies, specifically, immersive technologies like augmented and virtual reality. Keeping up with technology trends may be a challenge for most enterprises, but it’s a critical way to think about future possibilities from product to customer service to employee experience. Augmented and virtual realities aren’t necessarily new, but when it comes to applying them beyond gaming, it’s a brave new world.

Two words for you: emerging realities.

My guest is Blair MacIntyre, who is the global head of Immersive Technology Research at JPMorgan Chase.

This podcast is produced in association with JPMorgan Chase.

Welcome, Blair.

Blair MacIntyre: Thank you. It’s great to be here.

Laurel: Well, let’s do a little bit of context setting. Your career has been focused on researching and exploring immersive technology, including software and design tools, privacy and ethics, and game and experience design. So what brought you to JPMorgan Chase, and could you describe your current role?

Blair: So before joining the firm, I had spent the last 23 years as a professor at Georgia Tech and Northeastern University. During that time, as you say, I explored a lot of ways that we can create things with these immersive technologies, and also what they might be useful for and what their impacts are on people, on society, and on how we experience life. But as these technologies have become more real and moved out of the lab, and we’ve started to see real products from real companies, we have this opportunity to actually see how they might be useful in practice and, for me, to have an impact on how these technologies will be deployed and used that goes beyond the traditional impact that professors might have, beyond writing papers, beyond teaching students. That’s what brought me to the firm, and so my current role is, really, to explore that, to understand all the different ways that immersive technology could impact the firm and its customers. Right? So we think about not just customer-facing work and not just products, but also employees and their experience as well.

Laurel: That’s really interesting. So why does JPMorgan Chase have a dedicated immersive technology focus in its global technology applied research division, and what are the primary goals of your team’s research within finance and large enterprises as a whole?

Blair: That’s a great question. So JPMorgan Chase has a fairly wide variety of research going on within the company. There’s large efforts in AI/ML, in quantum computing, blockchain. So they’re interested in looking at all of the range of new technologies and how they might impact the firm and our customers, and immersive technologies represent one of those technologies that could over time have a relatively large impact, I think, especially on the employee experience and how we interact with our customers. So they really want to have a group of people focusing on, really, looking both in the near and long term, and thinking about how we can leverage the technology now and how we might be able to leverage it down the road, and not just how we can, but what we should not do. Right? So we’re interested in understanding of these applications that are being proposed or people are imagining could be used. Which ones actually have value to the company, and which ones may not actually have value in practice?

Laurel: So when people think of immersive technologies like augmented reality and virtual reality, AR and VR, many think of headsets or smartphone apps for gaming and retail shopping experiences. Could you give an overview of the state of immersive technology today and what use cases you find to be the most innovative and interesting in your research?

Blair: So, as you say, I think many people think about smartphones, and we’ve seen, at least in movies and TV shows, head mounts of various kinds. The market, I would divide it right now into the two parts, the handheld phone and tablet experience. So you can do augmented reality now, and that really translates to we take the camera feed, and we can overlay computer graphics on it to do things like see what something you might want to buy looks like in your living room or do, in an enterprise situation, remote maintenance assistance where I can take my phone, point it at a piece of technology, and a remote expert could draw on it or help me do something with it.

There’s the phone-based things, and we carry these things in our pockets all the time, and they’re relatively cheap. So there’s a lot of opportunities when it’s appropriate to use those, but the big downside of those devices is that you have to hold them in your hands, so if you wanted to try to put information all around you, you would have to hold the device up and look around, which is uncomfortable and awkward. So that is where the head mount displays come in.

So there are either virtual reality displays, which right now many of us associate with computer games and education in the consumer world, or augmented reality displays. These sorts of displays let us do the same kinds of things we might do with our phones, but without our hands having to hold something, so we can be doing whatever work it was we wanted to do, right? Repairing the equipment, taking notes, working with things in the world around us, and we can have information spread all around us, which I think is the big advantage of head mounts.

So many of the things people imagine when they think about augmented reality in particular involve this serendipitous access to information. I’m walking into a conference room, and I see sort of my notes and information about the people I’m meeting there and the materials from our last meeting, whatever it is, or I’m walking down the street, and I see advertising or other kinds of, say, tourism information, but those things only work if the device is out of mind. If I can put it on, and then go about my life, I’m not going to walk into a conference room, and hold up a phone, and look at everybody through it.

So that, I think, is the big difference. You could implement the same sorts of applications on both the handheld devices and the head-worn devices, but the two different form factors are going to make very different applications appropriate for those two sorts of technologies.

On the virtual reality side, we’re at the point now where the displays we can buy are light enough and comfortable enough that we could wear them for half an hour, a couple hours without discomfort. So a lot of the applications that people imagine there, I think the most popular things that people have done research on and that I see having a near-term impact in the enterprise are immersive training applications where you can get into a situation rather than, say, watching a video or a little click-through presentation as part of your annual training. You could really be in an experience and hopefully learn more from it. So I think those sorts of experiences where we’re totally immersed and focused is where virtual reality comes in.

The big thing that I think is most exciting about head-worn displays in particular where we can wear them while we’re doing work as opposed to just having these ephemeral experiences with a phone is the opportunity to do things together, to collaborate. So I might want to look at a map on a table and see a bunch of data floating above the map, but it would be better if you and our other colleagues were around the table with me, and we can all see the same things, or if we want to take a training experience, I could be in there getting my training experience, but maybe someone else is joining me and being able to both offer feedback or guidance and so on.

Essentially, when I think about these technologies, I think about the parallels to how we do work regularly, right? We generally collaborate with people. We might grab a colleague and have them look at our laptop to show them something. I might send someone something on my phone, and then we can talk about it. So much of what we do involves interactions with other people and with the data that we are doing our job with that anything we do with these immersive technologies is really going to have to mimic that and give us the ability to do our real work in these immersive spaces with the people that we normally work with.

Laurel: Well, speaking of working with people, how can the scale of an institution like JPMorgan Chase help propel this research forward in immersive technology, and what opportunities does it provide that are otherwise limited in a traditional university or startup research environment?

Blair: I think it comes down to a few different things. On one hand, we have access to people who are really doing the things that we want to build technologies to help with. Right? So if I wanted to look at how I could use immersive visualization of data to help people in human resources do planning, or help people who are doing financial modeling look at the data in new and interesting ways, now I can actually do the research in conjunction with the real people who do that work. Right? I’ve been at the firm for a little over a year, and in many of the conversations we’ve had, either we’ve had an idea or somebody has come to us with one. Through the course of those conversations, relatively quickly, we home in on things that are much more sophisticated, much more powerful than what we might have thought of at a university, where we didn’t have that sort of direct access to people doing the work.

On the other hand, if we actually build something, we can actually test it with the same people, which is an amazing opportunity. Right? When I go to a conference, we’re going to put 20 people who actually represent the real users of those systems. So, for me, that’s where I think the big opportunity of doing research in an enterprise is, is building solutions for the real people of that enterprise and being able to test it with those people.

Laurel: Recent years have actually changed what customers and employees expect from enterprises as well, like omnichannel retail experiences. So immersive technologies can be used to bridge gaps between physical and virtual environments as you were saying earlier. What are the different opportunities that AR and VR can offer enterprises, and how can these technologies be used to improve employee and customer experience?

Blair: So I alluded back to some of that in previous answers. I think the biggest opportunities have to do with how employees within the organization can do new things together, can interact, and also how companies can interact with customers. Now, we’re not going to move all of our interactions with our customers into the virtual world, or the metaverse, or whatever you want to call it nowadays anytime soon. Right? But I think there are opportunities for customers who are interested in those technologies, and comfortable with them, and excited by them to get new kinds of experiences and new ways of interacting with our firm or other firms than you could get with webpages and in-person meetings.

The other big opportunity I think is as we move to a more hybrid work environment and a distributed work environment, so a company like JPMorgan Chase is huge and spread around the world. We have over 300,000 employees now in most countries around the world. There might be groups of people, but they’re connected together through video right now. These technologies, I think, can offer new ways of collaborating over distance both synchronously and asynchronously than we can get with the traditional work technologies that we use right now. So it’s those new ways to collaborate, ways of using the environment and space in new and interesting ways that is going to, hopefully, offer new value and change the way we work.

Laurel: Yeah, and staying on that topic, we can’t really have a discussion about technology without talking about AI which is another evolving, increasingly popular technology. So that’s being used by many enterprises to reduce redundancies and automate repetitive tasks. In this way, how can immersive technology provide value to people in their everyday work with the help of AI?

Blair: So I think the big opportunity that AI brings to immersive technologies is helping ease a lot of the tedium and burden that may have prevented these technologies from being practical in the past, and this could happen in a variety of ways. When I’m in a virtual reality experience, I don’t have access to a keyboard, I don’t have access to traditional input devices, I don’t have necessarily the same sorts of access to my files, and so on. With a lot of the new AI technologies that are coming around, I can start relying on the computer to take notes. I can have new ways of pulling up information that I otherwise wouldn’t have access to. So, I think AI reducing the friction of using these technologies is a huge opportunity, and the research community is actively looking at that because friction has been one of the big problems with these technologies up till now.

Laurel: So, other than AI, what are other emerging technologies that can aid in immersive technology research and development?

Blair: So, aside from AI, if we step back and look at all of the emerging technologies as a whole and how they complement each other, I think we can see new opportunities. So, in our research, we work closely with people doing computer vision and other sort of sensing research to understand the world. We work closely with people looking at internet of things and connected devices because at a 10,000-foot level, all of these technologies are based on the idea of understanding, sensing the world, understanding what people are doing in it, understanding what people’s needs might be, and then somehow providing information to them or actuating things in the world, displaying stuff on walls or displays.

From that viewpoint, immersive technologies are primarily one way of displaying things in a new and interesting way and getting input from people, knowing what people want to do, allowing them to interact with data. But in order to do that, they need to know as much about the world around the user as possible, the structure of it, but also, who’s there, what we are doing, and so on. So all of these other technologies, especially the Internet of things (IoT) and other ways of sensing what’s happening in the world, are very complementary, and together they can create new sorts of experiences that neither could create alone.

Laurel: So what are some of the challenges, but also, possible opportunities in your research that contrast the future potential of AR and VR to where the technology is today?

Blair: So I think one of the big limitations of technology today is that most of the experiences are very siloed and disconnected from everything else we do. During the pandemic, many of us experimented with how we could have conferences online in various ways, right? A lot of companies, small companies and larger companies, started looking at how you could create immersive meetings and big group experiences using virtual reality technology, but all of those experiences that people created were these closed systems that you couldn’t bring things into. So one of the things we’re really interested in is how we stop thinking about creating new kinds of experiences and new ways of doing things, and instead think about how do we add these technologies to our existing work practices to enhance them in some way.

So, for example. Right now, we do video meetings. It would be more interesting for some people to be able to join those meetings, say, in VR. Companies have experimented with that, but most of the experiments that people are doing assume that everyone is going to move into virtual reality, or we’re going to bring, say, the people in as a little video wall on the side of a big virtual reality room, making them second class citizens.

I’m really interested and my team is interested in how we can start incorporating technologies like this while keeping everyone a first-class participant in these meetings. As one example, a lot of the systems that large enterprises build, and we’re no different, are web-based right now. So if, let’s say, I have a system to do financial forecasting, you could imagine there’s a bunch of those at a bank, and it’s a web-based system, I’m really interested in how do we add the ability for people to go into a virtual reality or augmented reality experience, say, a 3D visualization of some kind of data at the moment they want to do it, do the work that they want to do, invite colleagues in to discuss things, and then go back to the work as it was always done on a desktop web browser.

So that idea of thinking of these technologies as a capability, a feature instead of a new whole application and way of doing things permeates all the work we’re doing. When I look down the road at where this can go, I see in, say, let’s say, two to five years, I see people with displays maybe sitting on their desk. They have their tablet and their phone, and they might also have another display or two sitting there. They’re doing their work, and at different times, they might be in a video chat, they might pick up a head mount and put it on to do different things, but it’s all integrated. I’m really interested in how we connect these together and reduce friction. Right? If it takes you four or five minutes to move your work into a VR experience, nobody is going to do it because it just is too problematic. So it’s that. It’s thinking about how the technologies integrate and how we can add value where there is value and not trying to replace everything we do with these technologies.

Laurel: So to stay on that future focus, how do you foresee the immersive technology landscape entirely evolving over the next decade, and how will your research enable those changes?

Blair: So, at some level, it’s really hard to answer that question. Right? So if I think back 10 years to where immersive technologies were, it would have been inconceivable for us to imagine the videos that are coming out. So, at some level, I can say, “Well, I have no idea where we’re going to be in 10 years.” On the other hand, it’s pretty safe to imagine the kinds of technologies that we’re experimenting with now just getting better, and more comfortable, and more easy to integrate into work. So I think the landscape is going to evolve in the near term to be more amenable to work.

Especially for augmented reality, the threshold that these devices would have to get to such that a lot of people would be willing to wear them all the time while they’re walking down the street, playing sports, doing whatever, that’s a very high bar because it has to be small, it has to be light, it has to be cheap, it has to have a battery that lasts all day, etcetera, etcetera. On the other hand, in the enterprise, in any business situation, it’s easy to imagine the scenario I described. It’s sitting on my desk, I pick it up, I put it on, I take it off.

In the medium term after that, I think we will see more consumer applications as people start solving more of the problems that prevent people from wearing these devices for longer periods of time. Right? It’s not just size, and battery power, and comfort, it’s also things like optics. Right? Some people, let’s say 10% or 15%, might experience headaches, or nausea, or other kinds of discomfort when they wear a VR display as they’re currently built, and, without getting into the nitty-gritty details, a lot of that has to do with the fact that the optics you’re looking through when you put on the display make it hard to comfortably focus on objects at different distances away from you. For many of us, that’s fine. We can deal with the slight problems. But for some people, it’s problematic.

So as we figure out how to solve problems like that, more people can wear them, and more people can use them. I think that’s a really critical issue for not just consumers, but for the enterprise because if we think about a future where more of our business applications and the kind of way we work are done with technologies like this, these technologies have to be accessible to everybody. Right? If that 10% or 15% of people get headaches and feel nauseous wearing this device, you’ve now disenfranchised a pretty significant portion of your workforce, but I think those can be solved, and so we need to be thinking about how we can enable everybody to use them.

On the other hand, technologies like this can enfranchise more people, where right now, working remotely, working in a distributed sense is hard. For many kinds of work, it’s difficult to do remotely. If we can figure out more ways of enabling people to work together in a distributed way, we can start enabling more people to participate meaningfully in a wider variety of jobs.

Laurel: Blair, that was fantastic. It’s so interesting. I really appreciate your perspective and sharing it here with us on the Business Lab.

Blair: It was great to be here. I enjoyed talking to you.

Laurel: That was Blair MacIntyre, the global head of Immersive Technology Research at JPMorgan Chase, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review.

That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the global director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Giro Studios. Thanks for listening.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.


This podcast is for informational purposes only and it is not intended as legal, tax, financial, investment, accounting or regulatory advice. Opinions expressed herein are the personal views of the individual(s) and do not represent the views of JPMorgan Chase & Co. The accuracy of any statements, linked resources, reported findings or quotations are not the responsibility of JPMorgan Chase & Co.