Enabling human-centric support with generative AI

It’s a stormy holiday weekend, and you’ve just received the last notification you want in the busiest travel week of the year: The first leg of your flight is significantly delayed.

You might expect this means you’ll be sitting on hold with airline customer service for half an hour. But this time, the process looks a little different: You have a brief text exchange with the airline’s AI chatbot, which quickly assesses your situation and places you in a priority queue. Shortly after, a human agent takes over, confirms the details, and gets you rebooked on an earlier flight so you can make your connection. You’ll be home in time to enjoy mom’s pot roast.

Generative AI is becoming a key component of business operations and customer service interactions today. According to Salesforce research, three out of five workers (61%) either currently use or plan to use generative AI in their roles. A full 68% of these employees are confident that the technology—which can churn out text, video, image, and audio content almost instantaneously—will enable them to provide more enriching customer experiences.

But the technology isn’t a complete solution—or a replacement for human workers. Sixty percent of the surveyed employees believe that human oversight is indispensable for effective and trustworthy generative AI.

Generative AI can empower people and increase efficiencies in business operations, but using it to support employees, rather than replace them, will make all the difference. Its full business value will be achieved only when it is thoughtfully blended with human empathy, ingenuity, and emotional intelligence.

Generative AI pilots across industries

Though the technology is still nascent, many generative AI use cases are starting to emerge.

In sales and marketing, generative AI can assist with creating targeted ad content, identifying leads, upselling, cross-selling, and providing real-time sales analytics. When used for internal functions like IT, HR, and finance, generative AI can improve help-desk services, simplify recruitment processes, generate job descriptions, assist with onboarding and exit processes, and even write code.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Pairing live support with accurate AI outputs

A live agent spends hours each week manually documenting routine interactions. Another combs through multiple knowledge bases to find the right solution, scrambling to piece it together while the customer waits on hold. A third types out the same response they’ve written dozens of times before.

These repetitive tasks can be draining, leaving less time for meaningful customer interactions—but generative AI is changing this reality. By automating routine workflows, AI augments the efforts of live agents, freeing them to do what they do best: solving complex problems and applying human understanding and empathy to help customers during critical situations.

“Enterprises are trying to rush to figure out how to implement or incorporate generative AI into their business to gain efficiencies,” says Will Fritcher, deputy chief client officer at TP. “But instead of viewing AI as a way to reduce expenses, they should really be looking at it through the lens of enhancing the customer experience and driving value.”

Doing this requires solving two intertwined challenges: empowering live agents by automating routine tasks and ensuring AI outputs remain accurate, reliable, and precise. And the key to both these goals? Striking the right balance between technological innovation and human judgment.

A key role in customer support

Generative AI’s potential impact on customer support is twofold: Customers stand to benefit from faster, more consistent service for simple requests, while also receiving undivided human attention for complex, emotionally charged situations. For employees, eliminating repetitive tasks boosts job satisfaction and reduces burnout. The technology can also be used to streamline customer support workflows and enhance service quality in various ways, including:

Automated routine inquiries: AI systems handle straightforward customer requests, like resetting passwords or checking account balances.

Real-time assistance: During interactions, AI pulls up contextually relevant resources, suggests responses, and guides live agents to solutions faster.

Call summaries: By automatically documenting customer interactions, AI saves live agents valuable time that can be reinvested in customer care.

Fritcher notes that TP is relying on many of these capabilities in its customer support solutions. For instance, AI-powered coaching marries AI-driven metrics with human expertise to provide feedback on 100% of customer interactions, rather than the traditional 2% to 4% that was monitored pre-generative AI.



Accelerating AI innovation through application modernization

Business applications powered by AI are revolutionizing customer experiences, accelerating the speed of business, and driving employee productivity. In fact, according to research firm Frost & Sullivan’s 2024 Global State of AI report, 89% of organizations believe AI and machine learning will help them grow revenue, boost operational efficiency, and improve customer experience.

Take, for example, Vodafone. The telecommunications company is using a suite of Azure AI services, such as Azure OpenAI Service, to deliver real-time, hyper-personalized experiences across all of its customer touchpoints, including its digital chatbot TOBi. Naga Surendran, senior director of product marketing for Azure Application Services at Microsoft, says that by leveraging AI to increase customer satisfaction, Vodafone has managed to resolve 70% of its first-stage inquiries through AI-powered digital channels. It has also boosted the productivity of support agents by providing them with access to AI capabilities that mirror those of Microsoft Copilot, an AI-powered productivity tool.

“The result is a 20-point increase in net promoter score,” he says. “These benefits are what’s driving AI infusion into every business process and application.”

Yet realizing measurable business value from AI-powered applications requires a new game plan. Legacy application architectures simply aren’t capable of meeting the high demands of AI-enhanced applications. Instead, organizations must modernize their infrastructure, processes, and application architectures with cloud native technologies to stay competitive.

The time is now for modernization

Today’s organizations exist in an era of geopolitical shifts, growing competition, supply chain disruptions, and evolving consumer preferences. AI applications can help by supporting innovation, but only if they have the flexibility to scale when needed. Fortunately, by modernizing applications, organizations can achieve the agile development, scalability, and fast compute performance needed to support rapid innovation and accelerate the delivery of AI applications. David Harmon, director of software development for AMD, says companies “really want to make sure that they can migrate their current [environment] and take advantage of all the hardware changes as much as possible.” The result is not only a shorter development lifecycle for new applications but also a faster response to changing world circumstances.

Beyond building and deploying intelligent apps quickly, modernizing applications, data, and infrastructure can significantly improve customer experience. Consider, for example, Coles, an Australian supermarket that invested in modernization and is using data and AI to deliver dynamic e-commerce experiences to its customers both online and in-store. With Azure DevOps, Coles has shifted from monthly to weekly deployments of applications while, at the same time, reducing build times by hours. What’s more, by aggregating views of customers across multiple channels, Coles has been able to deliver more personalized customer experiences. In fact, according to a 2024 CMSWire Insights report, there is a significant rise in the use of AI across the digital customer experience toolset, with 55% of organizations now using it to some degree, and more beginning their journey.

But even the most carefully designed applications are vulnerable to cybersecurity attacks. If given the opportunity, bad actors can extract sensitive information from machine learning models or maliciously infuse AI systems with corrupt data. “AI applications are now interacting with your core organizational data,” says Surendran. “Having the right guard rails is important to make sure the data is secure and built on a platform that enables you to do that.” The good news is that modern cloud-based architectures can deliver robust security, data governance, and AI guardrails like content safety to protect AI applications from security threats and ensure compliance with industry standards.

The answer to AI innovation

New challenges, from demanding customers to ill-intentioned hackers, call for a new approach to modernizing applications. “You have to have the right underlying application architecture to be able to keep up with the market and bring applications faster to market,” says Surendran. “Not having that foundation can slow you down.”

Enter cloud native architecture. As organizations increasingly adopt AI to accelerate innovation and stay competitive, there is a growing urgency to rethink how applications are built and deployed in the cloud. By adopting cloud native architectures, Linux, and open source software, organizations can better facilitate AI adoption and create a flexible platform purpose-built for AI and optimized for the cloud. Harmon explains that open source software creates options: “And the overall open source ecosystem just thrives on that. It allows new technologies to come into play.”

Application modernization also ensures optimal performance, scale, and security for AI applications. That’s because modernization goes beyond just lifting and shifting application workloads to cloud virtual machines. Rather, a cloud native architecture is inherently designed to provide developers with the following features:

  • The flexibility to scale to meet evolving needs
  • Better access to the data needed to drive intelligent apps
  • Access to the right tools and services to build and deploy intelligent applications easily
  • Security embedded into an application to protect sensitive data

Together, these cloud capabilities ensure organizations derive the greatest value from their AI applications. “At the end of the day, everything is about performance and security,” says Harmon. Cloud is no exception.

What’s more, Surendran notes that “when you leverage a cloud platform for modernization, organizations can gain access to AI models faster and get to market faster with building AI-powered applications. These are the factors driving the modernization journey.”

Best practices in play

For all the benefits of application modernization, there are steps organizations must take to ensure both technological and operational success. They are:

Train employees for speed. As modern infrastructure accelerates the development and deployment of AI-powered applications, developers must be prepared to work faster and smarter than ever. For this reason, Surendran warns, “Employees must be skilled in modern application development practices to support the digital business needs.” This includes developing expertise in loosely coupled microservices to build scalable, flexible applications and AI integrations.

Start with an assessment. Large enterprises are likely to have “hundreds of applications, if not thousands,” says Surendran. As a result, organizations must take the time to evaluate their application landscape before embarking on a modernization journey. “Starting with an assessment is super important,” continues Surendran. “Understanding, taking inventory of the different applications, which team is using what, and what this application is driving from a business process perspective is critical.”

Focus on quick wins. Modernization is a huge, long-term transformation in how companies build, deliver, and support applications. Most businesses are still learning and developing the right strategy to support innovation. For this reason, Surendran recommends focusing on quick wins while also working on a larger application estate transformation. “You have to show a return on investment for your organization and business leaders,” he says. For example, modernize some apps quickly with re-platforming and then infuse them with AI capabilities.

Partner up. “Modernization can be daunting,” says Surendran. Selecting the right strategy, process, and platform to support innovation is only the first step. Organizations must also “bring on the right set of partners to help them go through change management and the execution of this complex project.”

Address all layers of security. Organizations must be unrelenting when it comes to protecting their data. According to Surendran, this means adopting a multi-layer approach to security that includes: security by design, in which products and services are developed from the get-go with security in mind; security by default, in which protections exist at every layer and interaction where data exists; and security by ongoing operations, which means using the right tools and dashboards to govern applications throughout their lifecycle.

A look to the future

Most organizations are already aware of the need for application modernization. But with the arrival of AI comes the startling revelation that modernization efforts must be done right, and that AI applications must be built and deployed for greater business impact. Adopting a cloud native architecture can help by serving as a platform for enhanced performance, scalability, security, and ongoing innovation. “As soon as you modernize your infrastructure with a cloud platform, you have access to these rapid innovations in AI models,” says Surendran. “It’s about being able to continuously innovate with AI.”

Read more about how to accelerate app and data estate readiness for AI innovation with Microsoft Azure and AMD. Explore Linux on Azure.


Why materials science is key to unlocking the next frontier of AI development

The Intel 4004, the first commercial microprocessor, was released in 1971. With 2,300 transistors packed into 12 mm², it heralded a revolution in computing. A little over 50 years later, Apple’s M2 Ultra contains 134 billion transistors.

The scale of progress is difficult to comprehend, but the evolution of semiconductors, driven for decades by Moore’s Law, has paved a path from the emergence of personal computing and the internet to today’s AI revolution.
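The two chips above are enough for a back-of-the-envelope check of that claim. The sketch below uses only the figures quoted in this article (the 4004's 1971 release and transistor count, and the M2 Ultra's 2023 count):

```python
import math

# Transistor counts and release years quoted above.
t0, n0 = 1971, 2_300              # Intel 4004
t1, n1 = 2023, 134_000_000_000    # Apple M2 Ultra

# How many doublings separate the two chips, and how often they occurred.
doublings = math.log2(n1 / n0)
years_per_doubling = (t1 - t0) / doublings

print(f"{doublings:.1f} doublings, one every {years_per_doubling:.1f} years")
```

That works out to roughly 26 doublings over 52 years, about one every two years, closely matching Moore's original observation.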

But this pace of innovation is not guaranteed, and the next frontier of technological advances—from the future of AI to new computing paradigms—will only happen if we think differently.

Atomic challenges

The modern microchip stretches both the limits of physics and credulity. Such is the atomic precision that a few atoms can decide the function of an entire chip. This marvel of engineering is the result of over 50 years of exponential scaling that has created ever faster, smaller transistors.

But we are reaching the physical limits of how small we can go, costs are increasing exponentially with complexity, and efficient power consumption is becoming increasingly difficult. In parallel, AI is demanding ever-more computing power. Data from Epoch AI indicates the amount of computing needed to develop AI is quickly outstripping Moore’s Law, doubling every six months in the “deep learning era” since 2010.
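To see how quickly a six-month doubling cadence outruns Moore's Law's roughly two-year cadence, consider a simple comparison over a decade (a sketch assuming only those two doubling periods):

```python
years = 10

# Growth factor over `years` = 2 ** (elapsed time / doubling period).
moore_growth = 2 ** (years / 2.0)   # doubling every two years
ai_growth = 2 ** (years / 0.5)      # doubling every six months

print(f"Moore's Law: {moore_growth:,.0f}x; AI compute demand: {ai_growth:,.0f}x")
```

Over ten years, the same exponential mechanism yields a 32-fold increase in one case and a million-fold increase in the other, which is why compute demand so quickly outstrips hardware scaling.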

These interlinked trends present challenges not just for the industry, but society as a whole. Without new semiconductor innovation, today’s AI models and research will be starved of computational resources and struggle to scale and evolve. Key sectors like AI, autonomous vehicles, and advanced robotics will hit bottlenecks, and energy use from high-performance computing and AI will continue to soar.

Materials intelligence

At this inflection point, a complex, global ecosystem—from foundries and designers to highly specialized equipment manufacturers and materials solutions providers like Merck—is working together more closely than ever before to find the answers. All have a role to play, and the role of materials extends far, far beyond the silicon that makes up the wafer.

Instead, materials intelligence is present in almost every stage of the chip production process—whether in chemical reactions that carve circuits at molecular scale (etching) or in adding incredibly thin layers to a wafer (deposition) with atomic precision: a human hair is 25,000 times thicker than the layers in leading-edge nodes.

Yes, materials provide a chip’s physical foundation and the substance of more powerful and compact components. But they are also integral to the advanced fabrication methods and novel chip designs that underpin the industry’s rapid progress in recent decades.

For this reason, materials science is taking on a heightened importance as we grapple with the limits of miniaturization. Advanced materials are needed more than ever for the industry to unlock the new designs and technologies capable of increasing chip efficiency, speed, and power. We are seeing novel chip architectures that embrace the third dimension and stack layers to optimize surface area usage while lowering energy consumption. The industry is harnessing advanced packaging techniques, where separate “chiplets” are fused with varying functions into a more efficient, powerful single chip. This is called heterogeneous integration.

Materials are also allowing the industry to look beyond traditional compositions. Photonic chips, for example, harness light rather than electricity to transmit data. In all cases, our partners rely on us to discover materials never previously used in chips and guide their use at the atomic level. This, in turn, is fostering the necessary conditions for AI to flourish in the immediate future.

New frontiers

The next big leap will involve thinking differently. The future of technological progress will be defined by our ability to look beyond traditional computing.

Answers to mounting concerns over energy efficiency, costs, and scalability will be found in ambitious new approaches inspired by biological processes or grounded in the principles of quantum mechanics.

While still in its infancy, quantum computing promises processing power and efficiencies well beyond the capabilities of classical computers. Though practical, scalable quantum systems remain a long way off, their development depends on the discovery and application of state-of-the-art materials.

Similarly, emerging paradigms like neuromorphic computing, modeled on the human brain with architectures mimicking our own neural networks, could provide the firepower and energy efficiency to unlock the next phase of AI development. Composed of a deeply complex web of artificial synapses and neurons, these chips would avoid traditional scalability roadblocks and the limitations of today’s von Neumann computers, which separate memory and processing.

Our biology consists of super complex, intertwined systems that have evolved by natural selection, but it can be inefficient; the human brain is capable of extraordinary feats of computational power, but it also requires sleep and careful upkeep. The most exciting step will be using advanced compute—AI and quantum—to finally understand and design systems inspired by biology. This combination will drive the power and ubiquity of next-generation computing and associated advances to human well-being.

Until then, the insatiable demand for more computing power to drive AI’s development poses difficult questions for an industry grappling with the fading of Moore’s Law and the constraints of physics. The race is on to produce more powerful, more efficient, and faster chips to progress AI’s transformative potential in every area of our lives.

Materials are playing a hidden, but increasingly crucial role in keeping pace, producing next-generation semiconductors and enabling the new computing paradigms that will deliver tomorrow’s technology.

But materials science’s most important role is yet to come. Its true potential will be to take us—and AI—beyond silicon into new frontiers and the realms of science fiction by harnessing the building blocks of biology.

This content was produced by EMD Electronics. It was not written by MIT Technology Review’s editorial staff.

Moving generative AI into production

Generative AI has taken off. Since the introduction of ChatGPT in November 2022, businesses have flocked to large language models (LLMs) and generative AI models looking for solutions to their most complex and labor-intensive problems. The promise that customer service could be turned over to highly trained chat platforms that could recognize a customer’s problem and present user-friendly technical feedback, for example, or that companies could break down and analyze their troves of unstructured data, from videos to PDFs, has fueled massive enterprise interest in the technology. 

This hype is moving into production. The share of businesses that use generative AI in at least one business function nearly doubled this year to 65%, according to McKinsey. The vast majority of organizations (91%) expect generative AI applications to increase their productivity, with IT, cybersecurity, marketing, customer service, and product development among the most impacted areas, according to Deloitte. 

Yet, difficulty successfully deploying generative AI continues to hamper progress. Companies know that generative AI could transform their businesses—and that failing to adopt will leave them behind—but they are faced with hurdles during implementation. This leaves two-thirds of business leaders dissatisfied with progress on their AI deployments. And while, in Q3 2023, 79% of companies said they planned to deploy generative AI projects in the next year, only 5% reported having use cases in production in May 2024. 

“We’re just at the beginning of figuring out how to productize AI deployment and make it cost effective,” says Rowan Trollope, CEO of Redis, a maker of real-time data platforms and AI accelerators. “The cost and complexity of implementing these systems is not straightforward.”

Estimates of the eventual GDP impact of generative AI range from just under $1 trillion to a staggering $4.4 trillion annually, with projected productivity impacts comparable to those of the Internet, robotic automation, and the steam engine. Yet, while the promise of accelerated revenue growth and cost reductions remains, the path to get to these goals is complex and often costly. Companies need to find ways to efficiently build and deploy AI projects with well-understood components at scale, says Trollope.



Accelerating generative AI deployment with microservices

In this exclusive webcast, we delve into the transformative potential of portable microservices for the deployment of generative AI models. We explore how startups and large organizations are leveraging this technology to streamline generative AI deployment, enhance customer service, and drive innovation across domains, including chatbots, document analysis, and video generation.

Our discussion focuses on overcoming key challenges such as deployment complexity, security, and cost management. We also discuss how microservices can help executives realize business value with generative AI while maintaining control over data and intellectual property.

Delivering the next-generation barcode

The world’s first barcode, designed in 1948, took more than 25 years to make it out of the lab and onto a retail package. Since then, the barcode has done much more than make grocery checkouts faster—it has remade our understanding of how physical objects can be identified and tracked, creating a new pace and set of expectations for the speed and reliability of modern commerce.

Nearly eighty years later, a new iteration of that technology, which encodes data in two dimensions, is poised to take the stage. Today’s 2D barcode is not only out of the lab but “open to a world of possibility,” says Carrie Wilkie, senior vice president of standards and technology at GS1 US.

2D barcodes encode substantially more information than their 1D counterparts. This enables them to link physical objects to a wide array of digital resources. For consumers, 2D barcodes can provide a wealth of product information, from food allergens, expiration dates, and safety recalls to detailed medication use instructions, coupons, and product offers. For businesses, 2D barcodes can enhance operational efficiencies, create traceability at the lot or item level, and drive new forms of customer engagement.

An array of 2D barcode types supports the information needs of a variety of industries. The GS1 DataMatrix, for example, is used on medication or medical devices, encoding expiration dates, batch and lot numbers, and FDA National Drug Codes. The QR Code is familiar to consumers who have used one to open a website from their phone. Adding a GS1 Digital Link URI to a QR Code enables it to serve two purposes: as both a traditional barcode for supply chain operations, enabling tracking throughout the supply chain and price lookup at checkout, and also as a consumer-facing link to digital information, like expiry dates and serial numbers.
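The dual-purpose behavior described above comes from embedding GS1 Application Identifiers (AIs) in an ordinary web URI. The sketch below builds one such URI in Python; the domain and product values are hypothetical, and real deployments should follow the GS1 Digital Link standard for full syntax and validation rules:

```python
# Sketch of a GS1 Digital Link URI builder (illustrative only).
def gs1_digital_link(domain, gtin, lot=None, serial=None, expiry=None):
    uri = f"https://{domain}/01/{gtin}"  # AI 01 = GTIN (product identifier)
    if lot:
        uri += f"/10/{lot}"              # AI 10 = batch/lot number
    if serial:
        uri += f"/21/{serial}"           # AI 21 = serial number
    if expiry:
        uri += f"?17={expiry}"           # AI 17 = expiration date (YYMMDD)
    return uri

# One code can drive both supply chain scanning and a consumer-facing link.
uri = gs1_digital_link("id.example.com", "09506000134352",
                       lot="ABC123", expiry="270630")
print(uri)
```

Encoded in a QR Code, a scanner in the supply chain can parse the GTIN, lot, and expiry from the URI, while a consumer's phone simply resolves it as a web address.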

Regardless of type, however, all 2D barcodes require a business ecosystem backed by data. To capture new value from advanced barcodes, organizations must supply and manage clean, accurate, and interoperable data around their products and materials. For 2D barcodes to deliver on their potential, businesses will need to collaborate with partners, suppliers, and customers and commit to common data standards across the value chain.

Driving the demand for 2D barcodes

Shifting to 2D barcodes—and enabling the data ecosystems behind them—will require investment by business. Consumer engagement, compliance, and sustainability are among the many factors driving this transition.

Real-time consumer engagement: Today’s customers want to feel connected to the brands they interact with and purchase from. Information is a key element of that engagement and empowerment. “When I think about customer satisfaction,” says Leslie Hand, group vice president for IDC Retail Insights, “I’m thinking about how I can provide more information that allows them to make better decisions about their own lives and the things they buy.”

2D barcodes can help by connecting consumers to online content in real time. “If, by using a 2D barcode, you have the capability to connect to a consumer in a specific region, or a specific store, and you have the ability to provide information to that consumer about the specific product in their hand, that can be a really powerful consumer engagement tool,” says Dan Hardy, director of customer operations for HanesBrands, Inc. “2D barcodes can bring brand and product connectivity directly to an individual consumer, and create an interaction that supports your brand message at an individual consumer/product level.”



Chasing AI’s value in life sciences

Inspired by an unprecedented opportunity, the life sciences sector has gone all in on AI. For example, in 2023, Pfizer introduced an internal generative AI platform expected to deliver $750 million to $1 billion in value. And Moderna partnered with OpenAI in April 2024, scaling its AI efforts to deploy ChatGPT Enterprise, embedding the tool’s capabilities across business functions from legal to research.

In drug development, German pharmaceutical company Merck KGaA has partnered with several AI companies for drug discovery and development. And Exscientia, a pioneer in using AI in drug discovery, is taking more steps toward integrating generative AI drug design with robotic lab automation in collaboration with Amazon Web Services (AWS).

Given rising competition, higher customer expectations, and growing regulatory challenges, these investments are crucial. But to maximize their value, leaders must carefully consider how to balance the key factors of scope, scale, speed, and human-AI collaboration.

The early promise of connecting data

The common refrain from data leaders across all industries—but specifically from those within data-rich life sciences organizations—is “I have vast amounts of data all over my organization, but the people who need it can’t find it,” says Dan Sheeran, general manager of health care and life sciences for AWS. And in a complex healthcare ecosystem, data can come from multiple sources, including hospitals, pharmacies, insurers, and patients.

“Addressing this challenge,” says Sheeran, “means applying metadata to all existing data and then creating tools to find it, mimicking the ease of a search engine. Until generative AI came along, though, creating that metadata was extremely time consuming.”

Mahmood Majeed, ZS’s global head of the digital and technology practice, notes that his teams regularly work on connected data programs, because “connecting data to enable connected decisions across the enterprise gives you the ability to create differentiated experiences.”

Majeed points to Sanofi’s well-publicized example of connecting data with its analytics app, plai, which streamlines research and automates time-consuming data tasks. With this investment, Sanofi reports reducing research processes from weeks to hours and the potential to improve target identification in therapeutic areas like immunology, oncology, or neurology by 20% to 30%.

Achieving the payoff of personalization

Connected data also allows companies to focus on personalized last-mile experiences. This involves tailoring interactions with healthcare providers and understanding patients’ individual motivations, needs, and behaviors.

Early efforts around personalization have relied on “next best action” or “next best engagement” models to do this. These traditional machine learning (ML) models suggest the most appropriate information for field teams to share with healthcare providers, based on predetermined guidelines.

When compared with generative AI models, more traditional machine learning models can be inflexible, unable to adapt to individual provider needs, and they often struggle to connect with other data sources that could provide meaningful context. Therefore, the insights can be helpful but limited.  
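As a purely illustrative sketch of that rigidity (the specialties, segments, and actions below are hypothetical, not drawn from any vendor's system), a traditional next-best-action model can amount to a fixed mapping from provider attributes to a predetermined recommendation:

```python
# Hypothetical rule table: provider attributes -> predetermined action.
NEXT_BEST_ACTION = {
    ("oncology", "high_volume"): "share_phase3_efficacy_data",
    ("oncology", "low_volume"): "invite_to_webinar",
    ("immunology", "high_volume"): "schedule_rep_visit",
}

def next_best_action(specialty, segment):
    # No context, no adaptation: any unseen combination falls back to a default.
    return NEXT_BEST_ACTION.get((specialty, segment), "send_general_brochure")
```

The inflexibility shows in the fallback: anything outside the predefined keys gets a generic action, with no way to incorporate a provider's questions, history, or preferences, which is the gap generative approaches aim to close.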

Sheeran notes that connected data gives companies a real opportunity to improve their decision-making processes: “Because the technology is generative, it can create context based on signals. How does this healthcare provider like to receive information? What insights can we draw about the questions they’re asking? Can their professional history or past prescribing behavior help us provide a more contextualized answer? This is exactly what generative AI is great for.”

Beyond this, pharmaceutical companies spend millions of dollars annually to customize marketing materials. They must ensure the content is translated, tailored to the audience, and consistent with regulations in each location where they offer products and services. A process that once took weeks to develop individual assets has become a perfect use case for generative copy and imagery. With generative AI, the process is reduced from weeks to minutes and creates a competitive advantage with lower costs per asset, Sheeran says.

Accelerating drug discovery with AI, one step at a time

Perhaps the greatest hope for AI in life sciences is its ability to generate insights and intellectual property using biology-specific foundation models. Sheeran says, “Our customers have seen the potential for very, very large models to greatly accelerate certain discrete steps in the drug discovery and development processes.” He continues, “Now we have a much broader range of models available, and an even larger set of models coming that tackle other discrete steps.”

By Sheeran’s count, there are approximately six major categories of biology-specific models, each containing five to 25 models under development or already available from universities and commercial organizations.

The intellectual property generated by biology-specific models is a significant consideration. Services such as Amazon Bedrock support it by ensuring customers retain control over their data, with transparency and safeguards to prevent unauthorized retention and misuse.

Finding differentiation in life sciences with scope, scale, and speed

Organizations can differentiate with scope, scale, and speed, while determining how AI can best augment human ingenuity and judgment. “Technology has become so easy to access. It’s omnipresent. What that means is that it’s no longer a differentiator on its own,” says Majeed. He suggests that life sciences leaders consider:

Scope: Have we zeroed in on the right problem? By clearly articulating the problem relative to the few critical things that could drive advantage, organizations can identify technology and business collaborators and set standards for measuring success and driving tangible results.

Scale: What happens when we implement a technology solution on a large scale? The highest-priority AI solutions should be the ones with the most potential for results. Scale determines whether an AI initiative will have a broader, more widespread impact on a business, which provides the window for a greater return on investment, says Majeed.

By thinking through the implications of scale from the beginning, organizations can be clear on the magnitude of change they expect and how bold they need to be to achieve it. The boldest commitment to scale is when companies go all in on AI, as Sanofi is doing, setting goals to transform the entire value chain and setting the tone from the very top.

Speed: Are we set up to quickly learn and correct course? Organizations that can rapidly learn from their data and AI experiments, adjust based on those learnings, and continuously iterate are the ones that will see the most success. Majeed emphasizes, “Don’t underestimate this component; it’s where most of the work happens. A good partner will set you up for quick wins, keeping your teams learning and maintaining momentum.”

Sheeran adds, “ZS has become a trusted partner for AWS because our customers trust that they have the right domain expertise. A company like ZS has the ability to focus on the right uses of AI because they’re in the field and on the ground with medical professionals, which gives them the ability to constantly stay ahead of the curve by exploring the best ways to improve their current workflows.”

Human-AI collaboration at the heart

Despite the allure of generative AI, the human element is the ultimate determinant of how it’s used. In certain cases, traditional technologies outperform it, with less risk, so understanding what it’s good for is key. By cultivating broad technology and AI fluency throughout the organization, leaders can teach their people to find the most powerful combinations of human-AI collaboration for technology solutions that work. After all, as Majeed says, “it’s all about people—whether it’s customers, patients, or our own employees’ and users’ experiences.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Cultivating the next generation of AI innovators in a global tech hub

A few years ago, I had to make one of the biggest decisions of my life: continue as a professor at the University of Melbourne or move to another part of the world to help build a brand new university focused entirely on artificial intelligence.

With the rapid development we have seen in AI over the past few years, I came to the realization that educating the next generation of AI innovators in an inclusive way and sharing the benefits of technology across the globe is more important than maintaining the status quo. I therefore packed my bags for the Mohammed bin Zayed University of Artificial Intelligence (MBZUAI) in Abu Dhabi.

The world in all its complexity

Today, the rewards of AI are mostly enjoyed by a few countries in what the Oxford Internet Institute dubs the “Compute North.” These countries, such as the US, the UK, France, Canada, and China, have dominated research and development and built state-of-the-art AI infrastructure capable of training foundational models. This should come as no surprise, as these countries are home to many of the world’s top universities and large tech corporations.

But this concentration of innovation comes at a cost for the billions of people who live outside these dominant countries and have different cultural backgrounds.

Large language models (LLMs) are illustrative of this disparity. Researchers have shown that many of the most popular multilingual LLMs perform poorly in languages other than English, Chinese, and a handful of other (mostly) European languages. Yet there are approximately 6,000 languages spoken today, many of them in communities in Africa, Asia, and South America. Arabic alone is spoken by almost 400 million people, and Hindi has 575 million speakers around the world.

For example, LLaMA 2 performs up to 50% better in English than in Arabic when measured using the LM-Evaluation-Harness framework. Meanwhile, Jais, an LLM co-developed by MBZUAI, exceeds LLaMA 2 in Arabic and is comparable to Meta’s model in English (see table below).

The chart shows that the only way to develop AI applications that work for everyone is by creating new institutions outside the Compute North that consistently and conscientiously invest in building tools designed for the thousands of language communities across the world.

Environments of innovation

One way to design new institutions is to study history and understand how today’s centers of gravity in AI research emerged decades ago. Before Silicon Valley earned its reputation as the center of global technological innovation, it was called Santa Clara Valley and was known for its prune farms. The region’s transformation had many causes, but the main catalyst was Stanford University, which had built a reputation as one of the best places in the world to study electrical engineering. Over the years, through a combination of government investment in grants and focused research, the university birthed countless inventions that advanced computing and created a culture of entrepreneurship. The results speak for themselves: Stanford alumni have founded companies such as Alphabet, NVIDIA, Netflix, and PayPal, to name a few.

Today, much as Stanford once anchored Santa Clara Valley, we have an opportunity to build a new technology hub centered around a university.

And that’s why I chose to join MBZUAI, the world’s first research university focused entirely on AI. From MBZUAI’s position at the geographical crossroads of East and West, our goal is to attract the brightest minds from around the world and equip them with the tools they need to push the boundaries of AI research and development.

A community for inclusive AI

MBZUAI’s student body comes from more than 50 different countries around the globe. It has attracted top researchers such as Monojit Choudhury from Microsoft, Elizabeth Churchill from Google, Ted Briscoe from the University of Cambridge, Sami Haddadin from the Technical University of Munich, and Yoshihiko Nakamura from the University of Tokyo, just to name a few.

These scientists may be from different places, but they’ve found a common purpose at MBZUAI in our interdisciplinary nature, relentless focus on making AI a force for global progress, and emphasis on collaboration across fields such as robotics, NLP, machine learning, and computer vision.

In addition to traditional AI disciplines, MBZUAI has built departments in sibling areas that can both contribute to and benefit from AI, including human-computer interaction, statistics and data science, and computational biology.

Abu Dhabi’s commitment to MBZUAI is part of a broader vision for AI that extends beyond academia. MBZUAI’s scientists have collaborated with G42, an Abu Dhabi-based tech company, on Jais, an Arabic-centric LLM that is the highest-performing open-weight Arabic model, and on NANDA, an advanced Hindi LLM. MBZUAI’s Institute of Foundational Models has created LLM360, an initiative designed to level the playing field of large-model research and development by publishing fully open-source models and datasets that are competitive with the closed-source or open-weights models available from tech companies in North America or China.

MBZUAI is also developing language models that specialize in Turkic languages, which have traditionally been underrepresented in NLP, yet are spoken by millions of people.

Another recent project has brought together native speakers of 26 languages from 28 different countries to compile a benchmark dataset that evaluates the performance of vision language models and their ability to understand cultural nuances in images.

These kinds of efforts to expand the capabilities of AI to broader communities are necessary if we want to maintain the world’s cultural diversity and provide everyone with AI tools that are useful to them. At MBZUAI, we have created a unique mix of students and faculty to drive globally inclusive AI innovation for the future. By building a broad community of scientists, entrepreneurs, and thinkers, the university is increasingly establishing itself as a driving force in AI innovation that extends far beyond Abu Dhabi, with the goal of developing technologies that are inclusive of the world’s diverse languages and cultures.

This content was produced by the Mohamed bin Zayed University of Artificial Intelligence. It was not written by MIT Technology Review’s editorial staff.

Investing in AI to build next-generation infrastructure

The demand for new and improved infrastructure across the world is not being met. The Asian Development Bank has estimated that in Asia alone, roughly $1.7 trillion needs to be invested annually through to 2030 just to sustain economic growth and offset the effects of climate change. Globally, that figure has been put at $15 trillion.

In the US, for example, it is no secret that the country’s highways, railways, and bridges are in need of updating. But, as in many other sectors, significant shortages of skilled workers and resources delay all-important repairs and maintenance and harm efficiency.

This infrastructure gap – the difference between what is funded and what gets built – is vast. And while governments and companies everywhere are feeling the strain of constructing an energy-efficient and sustainable built environment, it’s proving more than humans can do alone. To redress this imbalance, many organizations are turning to various forms of AI, including large language models (LLMs) and machine learning (ML). These technologies cannot yet fix every infrastructure problem, but they are already helping to reduce costs and risks and to increase efficiency.

Overcoming resource constraints

A shortage of skilled engineering and construction labor is a major problem. In the US, it is estimated that there will be a 33% shortfall in the supply of new talent by 2031, with unfilled positions in software, industrial, civil, and electrical engineering. Germany reported a shortage of 320,000 science, technology, engineering, and mathematics (STEM) specialists in 2022, and another engineering powerhouse, Japan, has forecast a deficit of more than 700,000 engineers by 2030. Considering the duration of most engineering projects (repairing a broken gas pipeline, for example, can take decades), the demand for qualified engineers will only continue to outstrip supply unless something is done.

Immigration and visa restrictions for international engineering students, and a lack of retention in formative STEM jobs, exert additional constraints. Then there is the issue of duplicated, repetitive tasks, which is exactly the kind of work AI can take on with ease.

Julien Moutte, CTO of Bentley Systems, explains: “There’s a massive amount of work that engineers have to do that is tedious and repetitive. Between 30% and 50% of their time is spent just compressing 3D models into 2D PDF formats. If that work can be done by AI-powered tools, they can recover half their working time, which could then be invested in performing higher-value tasks.”

With guidance, AI can automate the production of such drawings hundreds of times over. Training engineers to ask the right questions and use AI optimally will ease the burden and stress of repetition.

However, this is not without challenges. Users of ChatGPT or other LLMs know the pitfalls of AI hallucinations, where the model predicts a plausible sequence of words without contextual understanding of what those words mean. This can lead to nonsensical outputs, but in engineering, hallucinations can be altogether more risky. “If a recommendation was made by AI, it needs to be validated,” says Moutte. “Is that recommendation safe? Does it respect the laws of physics? And it’s a waste of time for the engineers to have to review all these things.”

But this can be offset by having existing company tools and products run simulations and validate the designs against established engineering rules and design codes, which again relieves engineers of the burden of doing the validation themselves.

Improving resource efficiency

An estimated 30% of building materials, such as steel and concrete, are wasted on a typical construction site in the United States and United Kingdom, with the majority ending up in landfills, although countries such as Germany and the Netherlands have recently implemented recycling measures. This, and the rising cost of raw materials, is putting pressure on companies to think of solutions to improve construction efficiency and sustainability.

AI can provide solutions to both of these issues during the design and construction phases. Digital twins can help workers spot deviations in product quality and provide the insights needed to minimize waste and energy use and, crucially, save money.

Machine learning models use real-time data from field statistics and process variables to flag off-spec materials, product deviations, and excess energy usage from sources such as machinery and the transportation of construction site workers. Engineers can then anticipate the gaps and streamline the processes, making large-scale improvements to each project that can be replicated in the future.
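At its simplest, flagging off-spec readings of this kind is an outlier test against the batch distribution. The sketch below is an illustrative stand-in, not a production model: real systems would learn thresholds from historical field data, and the concrete-strength numbers are invented for the example.

```python
import statistics

def flag_off_spec(readings, z_threshold=2.0):
    """Return indices of readings that deviate strongly from the batch mean.

    A stand-in for the kind of ML model described above: a fixed z-score
    cutoff replaces what a trained model would learn from historical data.
    """
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    if stdev == 0:
        return []  # all readings identical; nothing to flag
    return [i for i, r in enumerate(readings)
            if abs(r - mean) / stdev > z_threshold]

# Hypothetical concrete-strength test results (MPa) from one pour;
# the fifth sample is well below the rest of the batch.
batch = [31.8, 32.1, 31.9, 32.0, 24.5, 32.2, 31.7]
print(flag_off_spec(batch))  # prints [4]
```

In practice, each flagged index would trigger a review before the material is used, which is how such models let engineers "anticipate the gaps" rather than discover defects after installation.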

“Being able to anticipate and reduce that waste with that visual awareness, with the application of AI to make sure that you are optimizing those processes and those designs and the resources that you need to construct that infrastructure is massive,” says Moutte.

He continues, “The big game changer is going to be around sustainability because we need to create infrastructure with more sustainable and efficient designs, and there’s a lot of room for improvement.” And an important part of this will be how AI can help create new materials and models to reduce waste.

Human and AI partnership

AI might never be entirely error-free, but for the time being, human intervention can catch mistakes. Although there may be some concern in the construction sector that AI will replace humans, there are elements to any construction project that only people can do.

AI lacks the critical thinking and problem-solving skills that humans excel at, so additional training for engineers to supervise and maintain automated systems is key to making each side work together optimally. Skilled workers bring creativity and intuition, as well as customer service expertise, while AI is not yet capable of such novel solutions.

With the engineers implementing appropriate guardrails and frameworks, AI can contribute the bulk of automation and repetition to projects, thereby creating a symbiotic and optimal relationship between humans and machines.

“Engineers have been designing impressive buildings for decades already, where they are not doing all the design manually. You need to make sure that those structures are validated first by engineering principles, physical rules, local codes, and the rest. So we have all the tools to be able to validate those designs,” explains Moutte.

As AI advances alongside human care and control, it can help futureproof the construction process where every step is bolstered by the strengths of both sides. By addressing the concerns of the construction industry – costs, sustainability, waste and task repetition – and upskilling engineers to manage AI to address these at the design and implementation stage, the construction sector looks set to be less riddled with potholes.

“We’ve already seen how AI can be used to create new materials and reduce waste,” explains Moutte. “As we move to 2050, I believe engineers will need those AI capabilities to create the best possible designs and I’m looking forward to releasing some of those AI-enabled features in our products.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.