How cloud and AI transform and improve customer experiences

As AI technologies become increasingly mainstream, competitive pressure is mounting to transform traditional infrastructures and technology stacks. Brick-and-mortar companies are finding that cloud and data are the foundations of digital transformation, and of competing in modern, AI-forward industry landscapes.

In this exclusive webcast, experts discuss the building blocks for digital transformation, approaches for upskilling employees and putting digital processes in place, and data management best practices. The discussion also looks at what the near future holds and emphasizes the urgency for companies to transform now to stay relevant. 

Learn from the experts

  • Digital transformation, from the ground up, starts by moving infrastructure and data to the cloud
  • AI implementation requires a talent transformation at scale, across the organization
  • AI is a company-wide initiative—everyone in the company will become either an AI creator or consumer

Featured speakers

Mohammed Rafee Tarafdar, Chief Technology Officer, Infosys

Rafee is Infosys’s Chief Technology Officer. He is responsible for the company’s technology vision and strategy, sensing and scaling emerging technologies, advising and partnering with clients on their AI transformation journeys, and building high technology talent density. He leads Infosys’s AI-first transformation journey and has implemented population- and enterprise-scale platforms. He is co-author of the book “The Live Enterprise” and was recognized as a top 50 global technology leader by Forbes in 2023 and a Top 25 Tech Wavemaker by Entrepreneur India magazine in 2024.

Sam Jaddi, Chief Information Officer, ADT

Sam Jaddi is the Chief Information Officer for ADT. With more than 26 years of experience in technology innovation, Sam has deep knowledge of the security and smart home industry. His team helps drive ADT’s business platforms and processes to improve both customer and employee experiences. Sam has helped set the technology strategy, vision, and direction for the company’s digital transformation. Before joining ADT, he served as Chief Technology Officer at Stanley, where he oversaw the company’s new security division and led global integration initiatives, IT strategy, transformation, and international operations.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

The business of the future is adaptive

Manufacturing is in a state of flux. From supply chain disruptions to rising costs, tougher environmental regulations, and a changing consumer market, the sector faces a series of competing challenges.

But a new way of operating offers a way to tackle complexities head-on: adaptive production hardwires flexibility and resilience into the enterprise, drawing on powerful tools like artificial intelligence, digital twins, and robotics. Taking automation a step further, adaptive production allows manufacturers to respond in real time to demand fluctuations, adapt to supply chain disruptions, and autonomously optimize operations. It also facilitates an unprecedented level of personalization and customization for regional markets.

Time to adapt

The journey to adaptive production is not just about addressing today’s pressures, like rising costs and supply chain disruptions—it’s about positioning businesses for long-term success in a world of constant change. “In the coming years,” says Jana Kirchheim, director of manufacturing for Microsoft Germany, “I expect that new key technologies like copilots, small language models, high-performance computing, or the adaptive cloud approach will revolutionize the shop floor and accelerate industrial automation by enabling faster adjustments and re-programming for specific tasks.” These capabilities make adaptive production a transformative force, enhancing responsiveness and opening doors to systems with increasing autonomy—designed to complement human ingenuity rather than replace it.

These advances enable more than technical upgrades—they drive fundamental shifts in how manufacturers operate. John Hart, professor of mechanical engineering and director of MIT’s Center for Advanced Production Technologies, explains that automation is “going from a rigid high-volume, low-mix focus”—where factories make large quantities of very few products—“to more flexible high-volume, high-mix, and low-volume, high-mix scenarios”—where many product types can be made in custom quantities. These new capabilities demand a fundamental shift in how value is created and captured.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Driving business value by optimizing the cloud

Organizations are deepening their cloud investments at an unprecedented pace, recognizing its fundamental role in driving business agility and innovation. Synergy Research Group reports that companies spent $84 billion worldwide on cloud infrastructure services in the third quarter of 2024, a 23% rise over the third quarter of 2023 and the fourth consecutive quarter in which the year-on-year growth rate has increased.

Cloud services allow users to access IT systems from anywhere in the world while keeping solutions highly configurable and automated.

At the same time, hosted services like generative AI and tailored industry solutions can help companies quickly launch applications and grow the business. To get the most out of these services, companies are turning to cloud optimization—the process of selecting and allocating cloud resources to reduce costs while maximizing performance.
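To make the idea concrete, here is a minimal sketch of one core optimization task: choosing the cheapest resource allocation that still meets a performance target. The instance names, prices, and capacities are illustrative assumptions, not real cloud pricing.

```python
# Minimal illustration of cloud optimization: pick the cheapest
# instance type and count that still meet a workload's performance
# target. Names, prices, and capacities are illustrative only.

from dataclasses import dataclass

@dataclass
class InstanceType:
    name: str
    hourly_cost: float  # USD per hour (illustrative)
    capacity: int       # requests per second one instance can serve

CATALOG = [
    InstanceType("small", 0.05, 100),
    InstanceType("medium", 0.09, 220),
    InstanceType("large", 0.17, 480),
]

def cheapest_fit(required_rps: int) -> tuple[InstanceType, int, float]:
    """Cheapest (instance type, count, hourly cost) meeting the target load."""
    options = []
    for inst in CATALOG:
        count = -(-required_rps // inst.capacity)  # ceiling division
        options.append((inst, count, count * inst.hourly_cost))
    return min(options, key=lambda opt: opt[2])

inst, count, cost = cheapest_fit(1000)
print(f"Provision {count} x {inst.name} (${cost:.2f}/hour)")
```

Real optimization tooling weighs many more dimensions, such as reserved capacity, storage tiers, and egress, but the underlying cost-versus-performance trade-off is the same.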

But despite all the interest in the cloud, many workloads remain stranded on-premises, and many more are not optimized for efficiency and growth, greatly limiting their forward momentum. Companies are missing out on the virtuous cycle of mutually reinforcing results that comes from more efficient use of the cloud.

By optimizing the cloud, organizations can enhance security, make critical workloads more resilient, protect the customer experience, boost revenues, and generate cost savings. These benefits fuel growth and avert expenses, freeing capital that can be invested in innovation.

“Cloud optimization involves making sure that your cloud spending is efficient so you’re not spending wastefully,” says André Dufour, Director and General Manager for AWS Cloud Optimization at Amazon Web Services. “But you can’t think of it only as cost savings at the expense of other things. Dollars freed up through optimization can be redirected to fund net new innovations, like generative AI.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Adapting for AI’s reasoning era

Anyone who crammed for exams in college knows that an impressive ability to regurgitate information is not synonymous with critical thinking.

The large language models (LLMs) first publicly released in 2022 were impressive but limited—like talented students who excel at multiple-choice exams but stumble when asked to defend their logic. Today’s advanced reasoning models are more akin to seasoned graduate students who can navigate ambiguity and backtrack when necessary, carefully working through problems with a methodical approach.

As AI systems that learn by mimicking the mechanisms of the human brain continue to advance, we’re witnessing an evolution in models from rote regurgitation to genuine reasoning. This capability marks a new chapter in the evolution of AI—and what enterprises can gain from it. But in order to tap into this enormous potential, organizations will need to ensure they have the right infrastructure and computational resources to support the advancing technology.

The reasoning revolution

“Reasoning models are qualitatively different than earlier LLMs,” says Prabhat Ram, partner AI/HPC architect at Microsoft, noting that these models can explore different hypotheses, assess if answers are consistently correct, and adjust their approach accordingly. “They essentially create an internal representation of a decision tree based on the training data they’ve been exposed to, and explore which solution might be the best.”

This adaptive approach to problem-solving isn’t without trade-offs. Earlier LLMs delivered outputs in milliseconds based on statistical pattern-matching and probabilistic analysis. This was—and still is—efficient for many applications, but it doesn’t allow the AI sufficient time to thoroughly evaluate multiple solution paths.

In newer models, extended computation time during inference—seconds, minutes, or even longer—allows the AI to employ more sophisticated internal reinforcement learning. This opens the door for multi-step problem-solving and more nuanced decision-making.
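As a rough illustration of how a larger inference budget can buy reliability, consider self-consistency sampling: the system spends extra compute exploring several candidate reasoning paths, then keeps the answer they most often agree on. This is a sketch of the trade-off, not the internal mechanism of any particular model; `solve_once` is a stand-in for a single stochastic model pass.

```python
# Sketch of inference-time scaling via self-consistency: sample
# several reasoning paths, then keep the answer the paths most
# often agree on. solve_once() stands in for one stochastic model
# pass and is purely illustrative.

import random
from collections import Counter

def solve_once(question: str) -> str:
    # Placeholder: a real system would call a model with a nonzero
    # sampling temperature to get a fresh reasoning path each time.
    return random.choice(["A", "A", "A", "B"])  # noisy, but biased toward "A"

def solve_with_budget(question: str, num_paths: int = 9) -> str:
    """More paths -> more inference compute -> a more reliable consensus."""
    answers = Counter(solve_once(question) for _ in range(num_paths))
    answer, _ = answers.most_common(1)[0]
    return answer

print(solve_with_budget("Which route should the rover take?"))
```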

To illustrate future use cases for reasoning-capable AI, Ram offers the example of a NASA rover sent to explore the surface of Mars. “Decisions need to be made at every moment around which path to take, what to explore, and there has to be a risk-reward trade-off. The AI has to be able to assess, ‘Am I about to jump off a cliff? Or, if I study this rock and I have a limited amount of time and budget, is this really the one that’s scientifically more worthwhile?’” Making these assessments successfully could result in groundbreaking scientific discoveries at previously unthinkable speed and scale.
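The rover’s dilemma can be framed as the kind of expected-utility calculation a reasoning model would need to work through: weigh each action’s probability of success and scientific value against its resource cost and its risk of catastrophic loss. All numbers below are hypothetical.

```python
# The rover's risk-reward trade-off as expected utility: score each
# candidate action by (probability of success x scientific value)
# minus resource cost, minus risk-weighted cost of losing the rover.
# All numbers are hypothetical.

actions = {
    # name: (p_success, science_value, time_cost, p_catastrophe)
    "study nearby rock":   (0.95, 3.0, 1.0, 0.00),
    "traverse to crater":  (0.70, 9.0, 4.0, 0.05),
    "descend steep slope": (0.40, 12.0, 5.0, 0.20),
}

ROVER_VALUE = 50.0  # cost of losing the rover entirely (hypothetical)

def expected_utility(p_success, value, cost, p_catastrophe):
    return p_success * value - cost - p_catastrophe * ROVER_VALUE

for name, params in actions.items():
    print(f"{name}: {expected_utility(*params):+.2f}")
print("chosen:", max(actions, key=lambda a: expected_utility(*actions[a])))
```

With these invented numbers, the cautious nearby rock wins: the steep slope’s larger scientific payoff is swamped by a 20% chance of losing the rover.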

Reasoning capabilities are also a milestone in the proliferation of agentic AI systems: autonomous applications that perform tasks on behalf of users, such as scheduling appointments or booking travel itineraries. “Whether you’re asking AI to make a reservation, provide a literature summary, fold a towel, or pick up a piece of rock, it needs to first be able to understand the environment—what we call perception—comprehend the instructions and then move into a planning and decision-making phase,” Ram explains.

Enterprise applications of reasoning-capable AI systems

The enterprise applications for reasoning-capable AI are far-reaching. In health care, reasoning AI systems could analyze patient data, medical literature, and treatment protocols to support diagnostic or treatment decisions. In scientific research, reasoning models could formulate hypotheses, design experimental protocols, and interpret complex results—potentially accelerating discoveries across fields from materials science to pharmaceuticals. In financial analysis, reasoning AI could help evaluate investment opportunities or market expansion strategies, as well as develop risk profiles or economic forecasts.

Armed with these insights, their own experience, and emotional intelligence, human doctors, researchers, and financial analysts could make more informed decisions, faster. But before setting these systems loose in the wild, safeguards and governance frameworks will need to be ironclad, particularly in high-stakes contexts like health care or autonomous vehicles.

“For a self-driving car, there are real-time decisions that need to be made vis-a-vis whether it turns the steering wheel to the left or the right, whether it hits the gas pedal or the brake—you absolutely do not want to hit a pedestrian or get into an accident,” says Ram. “Being able to reason through situations and make an ‘optimal’ decision is something that reasoning models will have to do going forward.”

The infrastructure underpinning AI reasoning

To operate optimally, reasoning models require significantly more computational resources for inference. This creates distinct scaling challenges. Specifically, because the inference durations of reasoning models can vary widely—from just a few seconds to many minutes—load balancing across these diverse tasks can be challenging.
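A toy simulation shows why this matters. With fixed round-robin assignment, a few long reasoning requests can pile up on one replica, while a least-loaded policy that accounts for estimated durations keeps replicas far more balanced. The durations are simulated, not measured.

```python
# Why variable inference durations complicate load balancing: with
# round-robin, long reasoning jobs can cluster on one replica;
# assigning each job to the least-loaded replica smooths the skew.
# Durations are simulated, not measured.

import random

random.seed(0)
# Reasoning workloads: most requests are short, a few run much longer.
durations = [random.choice([2, 3, 5, 60, 120]) for _ in range(40)]
NUM_REPLICAS = 4

def round_robin(jobs):
    load = [0.0] * NUM_REPLICAS
    for i, d in enumerate(jobs):
        load[i % NUM_REPLICAS] += d
    return load

def least_loaded(jobs):
    load = [0.0] * NUM_REPLICAS
    for d in jobs:
        load[load.index(min(load))] += d  # send job to the idlest replica
    return load

for name, policy in [("round-robin", round_robin), ("least-loaded", least_loaded)]:
    load = policy(durations)
    print(f"{name}: busiest replica {max(load):.0f}s, idlest {min(load):.0f}s")
```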

Overcoming these hurdles requires tight collaboration between infrastructure providers and hardware manufacturers, says Ram, speaking of Microsoft’s collaboration with NVIDIA, which brings its accelerated computing platform to Microsoft products, including Azure AI.

“When we think about Azure, and when we think about deploying systems for AI training and inference, we really have to think about the entire system as a whole,” Ram explains. “What are you going to do differently in the data center? What are you going to do about multiple data centers? How are you going to connect them?” These considerations extend into reliability challenges at all scales: from memory errors at the silicon level, to transmission errors within and across servers, thermal anomalies, and even data center-level issues like power fluctuations—all of which require sophisticated monitoring and rapid response systems.

By creating a holistic system architecture designed to handle fluctuating AI demands, Microsoft and NVIDIA’s collaboration allows companies to harness the power of reasoning models without needing to manage the underlying complexity. In addition to performance benefits, these types of collaborations allow companies to keep pace with a tech landscape evolving at breakneck speed. “Velocity is a unique challenge in this space,” says Ram. “Every three months, there is a new foundation model. The hardware is also evolving very fast—in the last four years, we’ve deployed each generation of NVIDIA GPUs and now NVIDIA GB200 NVL72. Leading the field really does require a very close collaboration between Microsoft and NVIDIA to share roadmaps, timelines, and designs on the hardware engineering side, qualifications and validation suites, issues that arise in production, and so on.”

Advancements in AI infrastructure designed specifically for reasoning and agentic models are critical for bringing reasoning-capable AI to a broader range of organizations. Without robust, accessible infrastructure, the benefits of reasoning models will remain relegated to companies with massive computing resources.

Looking ahead, the evolution of reasoning-capable AI systems and the infrastructure that supports them promises even greater gains. For Ram, the frontier extends beyond enterprise applications to scientific discovery and breakthroughs that propel humanity forward: “The day when these agentic systems can power scientific research and propose new hypotheses that can lead to a Nobel Prize, I think that’s the day when we can say that this evolution is complete.”

To learn more, please read Microsoft and NVIDIA accelerate AI development and performance, watch the NVIDIA GTC AI Conference sessions on demand, and explore the topic areas of Azure AI solutions and Azure AI infrastructure.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

A vision for the future of automation

The manufacturing industry is at a crossroads: Geopolitical instability is fracturing supply chains from the Suez to Shenzhen, disrupting the flow of materials. Businesses are battling rising costs and inflation alongside a shrinking labor force, with more than half a million unfilled manufacturing jobs in the U.S. alone. And climate change is further intensifying the pressure, as more frequent extreme weather events and tightening environmental regulations force companies to rethink how they operate. New solutions are imperative.

Meanwhile, advanced automation, powered by the convergence of emerging and established technologies, including industrial AI, digital twins, the internet of things (IoT), and advanced robotics, promises greater resilience, flexibility, sustainability, and efficiency for industry. Individual success stories have demonstrated the transformative power of these technologies: AI-driven predictive maintenance, for example, can reduce downtime by up to 50%. Digital twin simulations can significantly reduce time to market and bring environmental dividends, too: One survey found 77% of leaders expect digital twins to reduce carbon emissions by 15% on average.

Yet, broad adoption of this advanced automation has lagged. “That’s not necessarily or just a technology gap,” says John Hart, professor of mechanical engineering and director of the Center for Advanced Production Technologies at MIT. “It relates to workforce capabilities and financial commitments and risk required.” For small and medium enterprises, and those with brownfield sites—older facilities with legacy systems—the barriers to implementation are significant.

In recent years, governments have stepped in to accelerate industrial progress. Through a revival of industrial policies, governments are incentivizing high-tech manufacturing, re-localizing critical production processes, and reducing reliance on fragile global supply chains.

All these developments converge in a key moment for manufacturing. The external pressures on the industry—met with technological progress and these new political incentives—may finally enable the shift toward advanced automation.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The machines are rising — but developers still hold the keys

Rumors of the ongoing death of software development — that it’s being slain by AI — are greatly exaggerated. In reality, software development is at a fork in the road: embracing the (currently) far-off notion of fully automated software development, or acknowledging that the work of a software developer is much more than just writing lines of code.

The decision the industry makes could have significant long-term consequences. Increasing complacency around AI-generated code and a shift to what has been termed “vibe coding” — where code is generated through natural language prompts until the results seem to work — will lead to code that’s more error-strewn, more expensive to run, and harder to change in the future. And if the devaluation of software development skills continues, we may even lack a workforce with the skills and knowledge to fix things down the line.

This means software developers are going to become more important to how the world builds and maintains software. Yes, there are many ways their practices will evolve thanks to AI coding assistance, but in a world of proliferating machine-generated code, developer judgment and experience will be vital.

The dangers of AI-generated code are already here

The risks of AI-generated code aren’t science fiction: they’re with us today. Research done by GitClear earlier this year indicates that with AI coding assistants (like GitHub Copilot) going mainstream, code churn — which GitClear defines as “changes that were either incomplete or erroneous when the author initially wrote, committed, and pushed them to the company’s git repo” — has significantly increased. GitClear also found a marked decrease in the number of lines of code that were moved, a signal of refactored code (essentially the care and feeding that makes code more effective).

In other words, since coding assistants were introduced, there’s been a pronounced increase in lines of code without a commensurate increase in lines deleted, updated, or replaced. Simultaneously, there’s been a decrease in lines moved — indicating a lot of code has been written but not refactored. More code isn’t necessarily a good thing (sometimes quite the opposite); GitClear’s findings ultimately point to complacency and a lack of rigor about code quality.
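GitClear’s methodology is its own, but a rough proxy for this add-versus-delete balance can be pulled straight from a repository’s history. The sketch below totals lines added and deleted over a recent window; it is an approximation for illustration, not GitClear’s churn metric.

```python
# Rough proxy for the add/delete balance discussed above: total
# lines added vs. deleted over a window of git history. This is a
# sketch, not GitClear's actual churn methodology. Run it from
# inside a git repository.

import subprocess

def add_delete_totals(since: str = "6 months ago") -> tuple[int, int]:
    out = subprocess.run(
        ["git", "log", f"--since={since}", "--numstat", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    added = deleted = 0
    for line in out.splitlines():
        parts = line.split("\t")  # numstat rows: added, deleted, path
        if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
            added += int(parts[0])
            deleted += int(parts[1])
    return added, deleted

added, deleted = add_delete_totals()
print(f"added {added}, deleted {deleted}, ratio {added / max(deleted, 1):.2f}")
```

A ratio that keeps climbing over successive windows suggests code is accumulating faster than it is being pruned or reworked.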

Can AI be removed from software development?

However, AI doesn’t have to be removed from software development and delivery. On the contrary, there’s plenty to be excited about. As noted in the latest volume of the Technology Radar — Thoughtworks’ report on technologies and practices from work with hundreds of clients all over the world — the coding assistance space is full of opportunities. 

Specifically, the report noted that tools like Cursor, Cline, and Windsurf can enable software engineering agents. In practice, this looks like an agent-like feature inside developer environments that developers can direct, via natural language prompts, to perform specific sets of coding tasks. This enables the human/machine partnership.

That being said, to focus only on code generation is to miss the variety of ways AI can help software developers. For example, Thoughtworks has been interested in how generative AI can be used to understand legacy codebases, and we see a lot of promise in tools like Unblocked, an AI team assistant that helps teams do just that. In fact, Anthropic’s Claude Code helped us add support for new languages in an internal tool, CodeConcise. We use CodeConcise to understand legacy systems, and while our success was mixed, we do think there’s real promise here.

Tightening practices to better leverage AI

It’s important to remember much of the work developers do isn’t developing something new from scratch. A large proportion of their work is evolving and adapting existing (and sometimes legacy) software. Sprawling and janky code bases that have taken on technical debt are, unfortunately, the norm. Simply applying AI will likely make things worse, not better, especially with approaches like vibe coding.

This is why developer judgment will become more critical than ever. In the latest edition of the Technology Radar report, AI-friendly code design is highlighted, based on our experience that AI coding assistants perform best with well-structured codebases. 

In practice, this requires many different things, including clear and expressive naming to ensure context is clearly communicated (essential for code maintenance), reducing duplicate code, and ensuring modularity and effective abstractions. Done together, these will all help make code more legible to AI systems.
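As a before-and-after illustration (the example is ours, not from the Technology Radar), compare a terse, context-free function with one that states its types, names, and intent. The second version gives an AI assistant, and a human reviewer, the context it needs.

```python
# Before/after illustration of AI-friendly code design. The example
# is invented for illustration: clear names, explicit types, and a
# single stated responsibility make the code legible to AI tools.

# Before: terse names, hidden assumptions, positional magic indexes.
def proc(d, t):
    r = []
    for x in d:
        if x[2] > t:
            r.append((x[0], x[2] * 0.9))
    return r

# After: expressive naming, explicit structure, one responsibility.
from typing import NamedTuple

class Order(NamedTuple):
    customer_id: str
    item: str
    amount: float

LOYALTY_DISCOUNT = 0.9

def discounted_large_orders(orders: list[Order], threshold: float) -> list[tuple[str, float]]:
    """Apply the loyalty discount to orders above the amount threshold."""
    return [
        (order.customer_id, order.amount * LOYALTY_DISCOUNT)
        for order in orders
        if order.amount > threshold
    ]
```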

Good coding practices are all too easy to overlook when productivity and effectiveness are measured purely in terms of output. That was true before AI tooling arrived, but it makes the case even stronger now: software development needs to focus on good coding first.

AI assistance demands greater human responsibility

Instagram co-founder Mike Krieger recently claimed that in three years software engineers won’t write any code: they will only review AI-created code. This might sound like a huge claim, but it’s important to remember that reviewing code has always been a major part of software development work. With this in mind, perhaps the evolution of software development won’t be as dramatic as some fear.

But there’s another argument: as AI becomes embedded in how we build software, software developers will take on more responsibility, not less. This is something we’ve discussed a lot at Thoughtworks: the job of verifying that an AI-built system is correct will fall to humans. Yes, verification itself might be AI-assisted, but it will be the role of the software developer to ensure confidence. 

In a world where trust is becoming highly valuable — as evidenced by the emergence of the chief trust officer — the work of software developers is even more critical to the infrastructure of global industry. It’s vital software development is valued: the impact of thoughtless automation and pure vibes could prove incredibly problematic (and costly) in the years to come.

This content was produced by Thoughtworks. It was not written by MIT Technology Review’s editorial staff.

Powering the food industry with AI

There has never been a more pressing time for food producers to harness technology to tackle the sector’s tough mission: to produce ever more healthy and appealing food for a growing global population in a way that is resilient and affordable, all while minimizing waste and reducing the sector’s environmental impact. From farm to factory, artificial intelligence and machine learning can support these goals by increasing efficiency, optimizing supply chains, and accelerating the research and development of new types of healthy products.

In agriculture, AI is already helping farmers to monitor crop health, tailor the delivery of inputs, and make harvesting more accurate and efficient. In labs, AI is powering experiments in gene editing to improve crop resilience and enhance the nutritional value of raw ingredients. For processed foods, AI is optimizing production economics, improving the texture and flavor of products like alternative proteins and healthier snacks, and strengthening food safety processes too. 

But despite this promise, industry adoption still lags. Data-sharing remains limited and companies across the value chain have vastly different needs and capabilities. There are also few standards and data governance protocols in place, and more talent and skills are needed to keep pace with the technological wave. 

All the same, progress is being made and the potential for AI in the food sector is huge. Key findings from the report are as follows: 

Predictive analytics are accelerating R&D cycles in crop and food science. AI reduces the time and resources needed to experiment with new food products and turns traditional trial-and-error cycles into more efficient, data-driven discovery. Advanced models and simulations enable scientists to explore natural ingredients and processes by simulating thousands of conditions, configurations, and genetic variations until they crack the right combination (a toy version of such a sweep is sketched after these findings).

AI is bringing data-driven insights to a fragmented supply chain. AI can revolutionize the food industry’s complex value chain by breaking operational silos and translating vast streams of data into actionable intelligence. Notably, large language models (LLMs) and chatbots can serve as digital interpreters, democratizing access to data analysis for farmers and growers, and enabling more informed, strategic decisions by food companies. 

Partnerships are crucial for maximizing respective strengths. While large agricultural companies lead in AI implementation, promising breakthroughs often emerge from strategic collaborations that leverage complementary strengths with academic institutions and startups. Large companies contribute extensive datasets and industry experience, while startups bring innovation, creativity, and a clean data slate. Combining expertise in a collaborative approach can increase the uptake of AI. 
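For a sense of what the condition sweep described in the first finding looks like, here is a toy parameter search in which a scoring function stands in for a trained predictive model. The parameters, ranges, and scoring are entirely hypothetical.

```python
# Toy version of a formulation condition sweep: score every
# combination of parameters and keep the best. score() stands in
# for a trained predictive model; all values are hypothetical.

from itertools import product

temperatures = range(60, 101, 10)        # processing temperature, deg C
protein_fractions = [0.10, 0.15, 0.20]   # alternative-protein share
ph_levels = [5.5, 6.0, 6.5, 7.0]

def score(temp, protein, ph):
    # Stand-in for a model predicting texture/flavor quality.
    return -abs(temp - 80) - 40 * abs(protein - 0.15) - 10 * abs(ph - 6.0)

best = max(product(temperatures, protein_fractions, ph_levels),
           key=lambda combo: score(*combo))
print("best condition (temp, protein fraction, pH):", best)
```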

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Five benefits of a health tech accelerator program

In the ever-evolving world of health care, the role of technology is becoming increasingly crucial. From improving patient outcomes to streamlining administrative processes, digital technologies are changing the face of the industry. However, for startups developing health tech solutions, breaking into the market and scaling their products can be a challenging journey, requiring access to resources, expertise, and a network they might not have. This is where health tech accelerator programs come in.

Health tech accelerator programs are designed to support early-stage startups in the health technology space, providing them with the resources, mentorship, and funding they need to grow and succeed. These programs are often highly competitive, and startups that are selected gain access to a wealth of opportunities that can significantly accelerate their development. In this article, we’ll explore five key benefits of participating in a health tech accelerator program.

1. Access to mentorship and expertise

One of the most valuable aspects of health tech accelerator programs is the access they provide to experienced mentors and industry experts. Health tech startups often face unique challenges, such as navigating complex health-care regulations, developing scalable technologies, and understanding the intricacies of health systems. Having mentors who have firsthand experience in these areas can provide critical guidance.

These mentors often include clinicians, informaticists, investors, health-care professionals, and thought leaders. Their insights can help startups refine their business strategies, optimize their digital health solutions, and navigate the health-care landscape. With this guidance, startups are better positioned to make informed decisions, avoid common pitfalls, and accelerate their growth.

2. Funding and investment opportunities

For many startups, securing funding is one of the biggest hurdles they face. Health tech innovation can be expensive, especially in the early stages when startups are working on solution development, regulatory approvals, and pilot testing. Accelerator programs often provide startups with seed funding, as well as the opportunity to connect with venture capitalists, angel investors, and other potential backers.

Many accelerator programs culminate in a “demo day,” where startups pitch their solutions to a room full of investors and other key decision-makers. These events can be crucial in securing the funding necessary to scale a digital health solution or product. Beyond initial funding, the exposure gained from being part of a well-known accelerator program can lead to additional investment opportunities down the road.

3. Networking and industry connections

The health-care industry is notoriously complex and fragmented, making it difficult for new players to break in without the right connections. Health tech accelerator programs offer startups the opportunity to network with key leaders in the health-care and technology ecosystems, including clinicians, payers, pharmaceutical companies, government agencies, and potential customers.

Through structured networking events, mentorship sessions, and partnerships with established organizations, startups gain access to a wide range of stakeholders who can help validate their products, open doors to new markets, and provide feedback that can be used to refine their offerings. In the health tech space, strong industry connections are often critical to gaining traction and scaling successfully.

4. Market validation and credibility

The health tech industry is highly regulated and risk-averse, meaning that customers and investors are often wary of new technologies. Participating in an accelerator program can serve as a form of market validation, signaling that a startup’s offering has been vetted by experts and has the potential for success.

The credibility gained from being accepted into a prestigious accelerator program can be a game-changer. It provides startups with a level of legitimacy that can help them stand out in a crowded and competitive market. Whether it’s attracting investors, forging partnerships, or securing early customers, the reputation of the accelerator can give a startup a significant boost.

Additionally, accelerator programs often have ties to major health-care institutions and organizations. This can provide startups with opportunities to pilot their products in real-world health-care settings, which can serve as both a test of the product’s viability and a powerful proof of concept for future customers and investors.

5. Access to resources and infrastructure

Another significant benefit of accelerators is access to resources and infrastructure that startups might not obtain otherwise, ranging from clinical data for model building and testing to legal and regulatory support and the technology infrastructure needed to deploy and scale. For early-stage health tech companies, these resources can be a game-changer.

Conclusion

Health tech startups are at the forefront of transforming health care, but navigating the challenges of innovation, regulation, and market entry can be daunting. Health tech accelerator programs offer invaluable support by providing startups with the mentorship, funding, networking opportunities, credibility, and resources they need to succeed.

Mayo Clinic Platform_Accelerate is a 30-week accelerator program from Mayo Clinic Platform focused on helping startups with digital technologies advance their solution development and get to market faster. Learn more about the program and the access it provides to clinical data, Mayo Clinic experts, technical resources, investors, and more at https://www.mayoclinicplatform.org/accelerate/.

This content was produced by Mayo Clinic Platform. It was not written by MIT Technology Review’s editorial staff.

Customizing generative AI for unique value

Since the emergence of enterprise-grade generative AI, organizations have tapped into the rich capabilities of foundation models developed by the likes of OpenAI, Google DeepMind, Mistral, and others. Over time, however, businesses often found these models limiting because they were trained on vast troves of public data. Enter customization—the practice of adapting large language models (LLMs) to better suit a business’s specific needs by incorporating its own data and expertise, teaching a model new skills or tasks, or optimizing prompts and data retrieval.

Customization is not new, but the early tools were fairly rudimentary, and technology and development teams were often unsure how to do it. That’s changing, and the customization methods and tools available today are giving businesses greater opportunities to create unique value from their AI models.

We surveyed 300 technology leaders in mostly large organizations in different industries to learn how they are seeking to leverage these opportunities. We also spoke in-depth with a handful of such leaders. They are all customizing generative AI models and applications, and they shared with us their motivations for doing so, the methods and tools they’re using, the difficulties they’re encountering, and the actions they’re taking to surmount them.

Our analysis finds that companies are moving ahead ambitiously with customization. They are cognizant of its risks, particularly those revolving around data security, but are employing advanced methods and tools, such as retrieval-augmented generation (RAG), to realize their desired customization gains.
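Of the methods named here, RAG is a common entry point. A minimal sketch of the pattern follows: retrieve the documents most relevant to a query from a company’s own corpus and prepend them to the prompt. The `embed` function and the final generation step are placeholders for a real embedding model and LLM, and the documents are invented.

```python
# Minimal sketch of retrieval-augmented generation (RAG): retrieve
# the most relevant in-house documents and prepend them to the
# prompt. embed() is a crude placeholder for a real embedding model;
# documents are invented.

import math

def embed(text: str) -> list[float]:
    # Placeholder: a bag-of-letters vector keeps the sketch runnable.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

documents = [
    "Refund policy: enterprise customers may cancel within 30 days.",
    "Security: all customer data is encrypted at rest and in transit.",
    "Onboarding: new accounts are provisioned within one business day.",
]
index = [(doc, embed(doc)) for doc in documents]

def build_prompt(question: str, top_k: int = 2) -> str:
    q = embed(question)
    context = sorted(index, key=lambda d: cosine(q, d[1]), reverse=True)[:top_k]
    # A real system would now pass this prompt to an LLM.
    return "Context:\n" + "\n".join(doc for doc, _ in context) + f"\n\nQuestion: {question}"

print(build_prompt("How long do refunds take?"))
```

Because the model sees the company’s own documents at query time, its answers can reflect private knowledge without retraining, which is much of RAG’s appeal.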

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Architecting tomorrow’s network

Technological advances continue to move at breakneck speeds. While companies struggle through their digital transformation journeys, even more new technologies emerge, with promises of opportunity, cost savings—and added complexity. Many companies have yet to fully adopt AI and ML technologies, let alone figure out how newer technologies like generative AI might fit into their programs.

A 2024 IDC survey revealed that 22% of tech leaders said their organizations haven’t yet reached full digital maturity, and 41% of respondents cited the complexity of integrating new technologies and approaches with existing tech stacks as the biggest challenge to tech adoption.

To fuel successful technology adoption and maximize outcomes, companies need to focus on simplifying infrastructure architecture rather than how to make new technologies fit into existing stacks. “When it comes to digital transformation, choosing an architectural approach over a purely technology-driven one is about seeing the bigger picture,” says Rajarshi Purkayastha, the VP of solutions at Tata Communications. “Instead of focusing on isolated tools or systems, an architectural approach connects the dots—linking silos rather than simply trying to eliminate them.”

Establishing the robust global network most companies need to connect these dots and link their silos requires more capability and bandwidth than traditional networks like multiprotocol label switching (MPLS) circuits can typically provide in a cost-effective way. To keep pace with innovation, consumer demands, and market competition, today’s wide area networks (WANs) need to support flexible, anywhere connectivity for multi-cloud based services, remote locations and users, and edge data centers.

Understanding hybrid WAN

Traditional MPLS became the gold standard for most WAN architectures in the early 2000s, addressing the mounting challenges brought by the rapid growth of the internet and the subsequent rapid expansion of enterprise networks. Today, however, as technological advances continue to accelerate, the limitations of MPLS are becoming apparent: MPLS networking is expensive; hard-wired connectivity is difficult to scale; and on its own, it doesn’t fit well with cloud computing adoption strategies.

In 2014, Gartner predicted hybrid WANs would be the future of networking. Hybrid WANs differ from traditional WANs in that the hybrid architecture supports multiple connection types: private network connections for mission-critical traffic, usually via legacy MPLS circuits; public network connections, typically internet links such as 5G, LTE, or VPN, for less critical data traffic; and dedicated internet access (DIA) for moderately critical traffic.
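In policy terms, that split can be as simple as a mapping from traffic class to underlay. The sketch below is illustrative only; the classes and assignments are ours, not any vendor’s implementation.

```python
# Illustrative hybrid WAN routing policy: steer each traffic class
# to MPLS, dedicated internet access (DIA), or broadband internet.
# Classes and assignments are invented for illustration.

POLICY = {
    "mission_critical": "MPLS",       # e.g., payments, ERP
    "business_critical": "DIA",       # e.g., video conferencing
    "best_effort": "broadband",       # e.g., software updates, backups
}

def route(flow: dict) -> str:
    """Pick an underlay for a flow described by its traffic class."""
    return POLICY.get(flow["traffic_class"], "broadband")

flows = [
    {"app": "payments", "traffic_class": "mission_critical"},
    {"app": "video_call", "traffic_class": "business_critical"},
    {"app": "os_updates", "traffic_class": "best_effort"},
]
for flow in flows:
    print(f"{flow['app']:>10} -> {route(flow)}")
```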

In 2025, we are seeing signs that Gartner’s hybrid WAN prediction is coming to fruition. At Tata Communications, for example, hybrid WAN is a key component of its network fabric—one facet of its digital fabric architecture, which weaves together networking, interaction, cloud, and IoT technologies.

“Our digital fabric simplifies the complexity of managing diverse technologies, breaks down silos, and provides a secure, unified platform for hyper-connected ecosystems,” explains Purkayastha. “By doing so, it ensures businesses have the agility, visibility, and scalability to succeed in their digital transformation journey—turning challenges into opportunities for innovation and growth.”

Hybrid WAN provides the flexible, real-time data traffic channeling an architectural approach requires to create a programmable, performant, and secure network that can reduce complexities and ease adoption of emerging technologies. “It’s not just about solving today’s challenges—it lays the groundwork for a resilient, scalable future,” says Purkayastha.

Benefits of hybrid WAN

Hybrid networking architectures support digital transformation journeys and emerging tech adoption in several ways.

More efficient, even intelligent, traffic management. A hybrid architecture brings together multiple avenues of data flow across MPLS and internet connectivity, providing a highly flexible, resilient architecture along with increased bandwidth to reduce network congestion. It also allows companies to prioritize critical data traffic. Hybrid WANs can also combine the hyper-secure connectivity of MPLS with software-defined WAN (SD-WAN) technology, which allows for intelligent switching across a company’s information highways. If, for instance, one route encounters latency or malfunctions, that traffic is automatically re-routed, helping to maintain continuous connectivity and reduce downtime (a toy version of this failover logic is sketched after this list).

Increased scalability. The agility and flexibility of a hybrid WAN allows companies to dynamically scale bandwidth up or down as application needs change. An agile WAN architecture also paves the way for scaling business operations.

Less complex cloud migration and easier adoption of new technologies. Adding internet connectivity to MPLS circuits allows data to flow seamlessly to the cloud, providing a more direct path for companies transitioning to cloud-first strategies. Easing cloud migration also opens doors for emerging technologies like AI, generative AI, and machine learning, enabling companies to innovate and remain relevant in their markets.

Improved productivity. The internet speed and connectivity of a hybrid WAN keeps geographically separated company locations and remote workers connected, increasing efficiency and collaboration.

Easier integration with legacy systems. A hybrid approach allows legacy MPLS connections to remain, while offloading less sensitive data traffic to the internet. The ability to incorporate legacy applications and processes into a hybrid architecture not only eases integration and adoption, but helps to maximize returns on network investments.

Network cost savings. Many of the benefits on this list translate into cost savings, as internet bandwidth is considerably cheaper than MPLS networking. A reduction in downtime reduces expenses companywide, and the ability to customize bandwidth usage at scale gives companies more control over network expenses while maximizing connectivity.
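To see the intelligent switching described in the first benefit above in concrete terms, here is a toy failover policy: prefer MPLS while it is healthy, and shift traffic to the best-performing alternative when latency crosses a threshold. Thresholds and measurements are invented for illustration.

```python
# Toy version of SD-WAN latency-based failover: prefer MPLS while it
# is healthy; otherwise shift to the healthiest alternative path.
# Thresholds and latency measurements are invented.

LATENCY_THRESHOLD_MS = 150

def pick_path(latencies_ms: dict[str, float]) -> str:
    """Prefer MPLS; fail over to the lowest-latency healthy alternative."""
    healthy = {p: l for p, l in latencies_ms.items() if l < LATENCY_THRESHOLD_MS}
    if "MPLS" in healthy:
        return "MPLS"
    if healthy:
        return min(healthy, key=healthy.get)
    return min(latencies_ms, key=latencies_ms.get)  # least-bad fallback

print(pick_path({"MPLS": 40, "DIA": 60, "broadband": 90}))   # -> MPLS
print(pick_path({"MPLS": 400, "DIA": 60, "broadband": 90}))  # -> DIA
```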

Deploying a hybrid WAN

A recent collaboration between Air France-KLM Group and Tata Communications highlights the benefits a hybrid WAN can bring to a global enterprise.

Air France looked to increase its network and application performance threefold without incurring additional costs—and while ensuring the security and integrity of its network. A hybrid WAN solution—specifically, MPLS and internet services from Tata Communications and other third-party providers—afforded the flexibility, resilience, and continuous connectivity the airline needed.

According to Tata Communications, the hybrid architecture increased Air France’s network availability to more than 99.94%, supporting its global office locations as well as its customer-facing applications, including passenger and cargo bookings and operating service centers.

“However, which connectivity to choose based on location type and application is complex, given the fact that networks vary by region, and one has to also take into account regulations, for example, in China,” says Purkayastha. “This is what Tata Communications helps customers with—choosing the right type of network, resulting in both cost savings and a better user experience.”

Enabling business for success

Innovating and expanding enterprise operations in today’s era of increasingly complex technology evolutions requires businesses to find agile and cost-effective avenues to stay connected and competitive.

As emerging machine learning and AI technologies show no signs of slowing, hybrid network architectures are likely to become necessary infrastructure components for companies of all sizes. The flexibility, resiliency, and configurability of a hybrid WAN provide a relatively straightforward, lightweight network upgrade, allowing companies to focus on business objectives and spend less time and expense worrying about network reliability and reach. “At the end of the day, it isn’t just about technology—it’s about enabling your business to stay agile, competitive, and ready to innovate, no matter how the landscape shifts,” says Purkayastha.