Adapting for AI’s reasoning era

Anyone who crammed for exams in college knows that an impressive ability to regurgitate information is not synonymous with critical thinking.

The large language models (LLMs) first publicly released in 2022 were impressive but limited—like talented students who excel at multiple-choice exams but stumble when asked to defend their logic. Today’s advanced reasoning models are more akin to seasoned graduate students who can navigate ambiguity and backtrack when necessary, carefully working through problems with a methodical approach.

As AI systems that learn by mimicking the mechanisms of the human brain continue to advance, we’re witnessing models move from rote regurgitation to genuine reasoning. This capability marks a new chapter in the evolution of AI—and in what enterprises can gain from it. But to tap into this enormous potential, organizations will need to ensure they have the right infrastructure and computational resources to support the advancing technology.

The reasoning revolution

“Reasoning models are qualitatively different than earlier LLMs,” says Prabhat Ram, partner AI/HPC architect at Microsoft, noting that these models can explore different hypotheses, assess if answers are consistently correct, and adjust their approach accordingly. “They essentially create an internal representation of a decision tree based on the training data they’ve been exposed to, and explore which solution might be the best.”

This adaptive approach to problem-solving isn’t without trade-offs. Earlier LLMs delivered outputs in milliseconds based on statistical pattern-matching and probabilistic analysis. This was—and still is—efficient for many applications, but it doesn’t allow the AI sufficient time to thoroughly evaluate multiple solution paths.

In newer models, extended computation time during inference—seconds, minutes, or even longer—allows the AI to employ more sophisticated internal reinforcement learning. This opens the door for multi-step problem-solving and more nuanced decision-making.
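
A minimal sketch can make the difference concrete. One well-known inference-time strategy is self-consistency sampling: the model explores several solution paths, and the system keeps the answer those paths converge on. In the illustrative code below, `sample_reasoning_path` is a stand-in for a real model call at nonzero temperature, not any particular vendor’s API.

```python
import random
from collections import Counter

def sample_reasoning_path(question: str, seed: int) -> str:
    """Stand-in for one stochastic reasoning pass.

    A real system would call a model at nonzero temperature; here we
    simulate answers that mostly, but not always, agree.
    """
    rng = random.Random(seed)
    return rng.choice(["42", "42", "42", "41"])

def self_consistent_answer(question: str, n_paths: int = 16) -> str:
    """Explore several solution paths, then keep the answer they converge on."""
    answers = [sample_reasoning_path(question, seed) for seed in range(n_paths)]
    best_answer, _count = Counter(answers).most_common(1)[0]
    return best_answer

print(self_consistent_answer("What is 6 * 7?"))
```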

To illustrate future use cases for reasoning-capable AI, Ram offers the example of a NASA rover sent to explore the surface of Mars. “Decisions need to be made at every moment around which path to take, what to explore, and there has to be a risk-reward trade-off. The AI has to be able to assess, ‘Am I about to jump off a cliff? Or, if I study this rock and I have a limited amount of time and budget, is this really the one that’s scientifically more worthwhile?’” Making these assessments successfully could result in groundbreaking scientific discoveries at previously unthinkable speed and scale.

Reasoning capabilities are also a milestone in the proliferation of agentic AI systems: autonomous applications that perform tasks on behalf of users, such as scheduling appointments or booking travel itineraries. “Whether you’re asking AI to make a reservation, provide a literature summary, fold a towel, or pick up a piece of rock, it needs to first be able to understand the environment—what we call perception—comprehend the instructions and then move into a planning and decision-making phase,” Ram explains.

Enterprise applications of reasoning-capable AI systems

The enterprise applications for reasoning-capable AI are far-reaching. In health care, reasoning AI systems could analyze patient data, medical literature, and treatment protocols to support diagnostic or treatment decisions. In scientific research, reasoning models could formulate hypotheses, design experimental protocols, and interpret complex results—potentially accelerating discoveries across fields from materials science to pharmaceuticals. In financial analysis, reasoning AI could help evaluate investment opportunities or market expansion strategies, as well as develop risk profiles or economic forecasts.

Armed with these insights, their own experience, and emotional intelligence, human doctors, researchers, and financial analysts could make more informed decisions, faster. But before setting these systems loose in the wild, safeguards and governance frameworks will need to be ironclad, particularly in high-stakes contexts like health care or autonomous vehicles.

“For a self-driving car, there are real-time decisions that need to be made vis-a-vis whether it turns the steering wheel to the left or the right, whether it hits the gas pedal or the brake—you absolutely do not want to hit a pedestrian or get into an accident,” says Ram. “Being able to reason through situations and make an ‘optimal’ decision is something that reasoning models will have to do going forward.”

The infrastructure underpinning AI reasoning

To operate optimally, reasoning models require significantly more computational resources for inference, which creates distinct scaling challenges. In particular, because the inference durations of reasoning models can vary widely—from just a few seconds to many minutes—load balancing across these diverse tasks is difficult.
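
To make the load-balancing problem concrete, here is a toy sketch of one common tactic: route each request to the replica with the least outstanding estimated work, which copes with wildly varying job lengths better than simple round-robin. This illustrates the general idea only; it is not Azure’s actual scheduler, and the `LeastLoadedRouter` class and its numbers are invented.

```python
import heapq

class LeastLoadedRouter:
    """Toy dispatcher: send each request to the replica with the least
    outstanding estimated work, a common tactic when job lengths vary widely."""

    def __init__(self, n_replicas: int):
        # Min-heap of (outstanding_seconds, replica_id) pairs.
        self.heap = [(0.0, i) for i in range(n_replicas)]
        heapq.heapify(self.heap)

    def dispatch(self, est_seconds: float) -> int:
        load, replica = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (load + est_seconds, replica))
        return replica

router = LeastLoadedRouter(n_replicas=3)
# Reasoning requests: some take seconds, others take many minutes.
for job_seconds in [2.0, 180.0, 5.0, 600.0, 3.0]:
    print(f"{job_seconds:>6.0f}s job -> replica {router.dispatch(job_seconds)}")
```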

Overcoming these hurdles requires tight collaboration between infrastructure providers and hardware manufacturers, says Ram, speaking of Microsoft’s collaboration with NVIDIA, which brings its accelerated computing platform to Microsoft products, including Azure AI.

“When we think about Azure, and when we think about deploying systems for AI training and inference, we really have to think about the entire system as a whole,” Ram explains. “What are you going to do differently in the data center? What are you going to do about multiple data centers? How are you going to connect them?” These considerations extend into reliability challenges at all scales: from memory errors at the silicon level, to transmission errors within and across servers, thermal anomalies, and even data center-level issues like power fluctuations—all of which require sophisticated monitoring and rapid response systems.

By creating a holistic system architecture designed to handle fluctuating AI demands, Microsoft and NVIDIA’s collaboration allows companies to harness the power of reasoning models without needing to manage the underlying complexity. In addition to performance benefits, these types of collaborations allow companies to keep pace with a tech landscape evolving at breakneck speed. “Velocity is a unique challenge in this space,” says Ram. “Every three months, there is a new foundation model. The hardware is also evolving very fast—in the last four years, we’ve deployed each generation of NVIDIA GPUs and now NVIDIA GB200 NVL72. Leading the field really does require a very close collaboration between Microsoft and NVIDIA to share roadmaps, timelines, and designs on the hardware engineering side, qualifications and validation suites, issues that arise in production, and so on.”

Advancements in AI infrastructure designed specifically for reasoning and agentic models are critical for bringing reasoning-capable AI to a broader range of organizations. Without robust, accessible infrastructure, the benefits of reasoning models will remain relegated to companies with massive computing resources.

Looking ahead, the evolution of reasoning-capable AI systems and the infrastructure that supports them promises even greater gains. For Ram, the frontier extends beyond enterprise applications to scientific discovery and breakthroughs that propel humanity forward: “The day when these agentic systems can power scientific research and propose new hypotheses that can lead to a Nobel Prize, I think that’s the day when we can say that this evolution is complete.”

To learn more, please read Microsoft and NVIDIA accelerate AI development and performance, watch the NVIDIA GTC AI Conference sessions on demand, and explore the topic areas of Azure AI solutions and Azure AI infrastructure.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

A vision for the future of automation

The manufacturing industry is at a crossroads: Geopolitical instability is fracturing supply chains from the Suez to Shenzhen, impacting the flow of materials. Businesses are battling rising costs and inflation alongside a shrinking labor force, with more than half a million unfilled manufacturing jobs in the U.S. alone. And climate change is further intensifying the pressure, with more frequent extreme weather events and tightening environmental regulations forcing companies to rethink how they operate. New solutions are imperative.

Meanwhile, advanced automation, powered by the convergence of emerging and established technologies, including industrial AI, digital twins, the internet of things (IoT), and advanced robotics, promises greater resilience, flexibility, sustainability, and efficiency for industry. Individual success stories have demonstrated the transformative power of these technologies: AI-driven predictive maintenance, for example, has reduced downtime by up to 50%. Digital twin simulations can significantly reduce time to market and bring environmental dividends, too: One survey found 77% of leaders expect digital twins to reduce carbon emissions by 15% on average.

Yet, broad adoption of this advanced automation has lagged. “That’s not necessarily or just a technology gap,” says John Hart, professor of mechanical engineering and director of the Center for Advanced Production Technologies at MIT. “It relates to workforce capabilities and financial commitments and risk required.” For small and medium enterprises, and those with brownfield sites—older facilities with legacy systems—the barriers to implementation are significant.

In recent years, governments have stepped in to accelerate industrial progress. Through a revival of industrial policies, governments are incentivizing high-tech manufacturing, re-localizing critical production processes, and reducing reliance on fragile global supply chains.

All these developments converge in a key moment for manufacturing. The external pressures on the industry—met with technological progress and these new political incentives—may finally enable the shift toward advanced automation.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The machines are rising — but developers still hold the keys

Rumors of the ongoing death of software development — that it’s being slain by AI — are greatly exaggerated. In reality, software development is at a fork in the road: embracing the (currently) far-off notion of fully automated software development or acknowledging that the work of a software developer is much more than just writing lines of code.

The decision the industry makes could have significant long-term consequences. Increasing complacency around AI-generated code and a shift to what has been termed “vibe coding” — where code is generated through natural language prompts until the results seem to work — will lead to code that’s more error-strewn, more expensive to run and harder to change in the future. And, if the devaluation of software development skills continues, we may even lack a workforce with the skills and knowledge to fix things down the line. 

This means software developers are going to become more important to how the world builds and maintains software. Yes, there are many ways their practices will evolve thanks to AI coding assistance, but in a world of proliferating machine-generated code, developer judgment and experience will be vital.

The dangers of AI-generated code are already here

The risks of AI-generated code aren’t science fiction: they’re with us today. Research done by GitClear earlier this year indicates that with AI coding assistants (like GitHub Copilot) going mainstream, code churn — which GitClear defines as “changes that were either incomplete or erroneous when the author initially wrote, committed, and pushed them to the company’s git repo” — has significantly increased. GitClear also found a marked decrease in the number of lines of code being moved, a signal of refactoring (essentially the care and feeding that keeps code effective).

In other words, from the time coding assistants were introduced there’s been a pronounced increase in lines of code without a commensurate increase in lines deleted, updated, or replaced. Simultaneously, there’s been a decrease in lines moved — indicating a lot of code has been written but not refactored. More code isn’t necessarily a good thing (sometimes quite the opposite); GitClear’s findings ultimately point to complacency and a lack of rigor about code quality.
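
For readers who want a rough feel for this balance in their own repositories, the sketch below tallies lines added versus deleted from recent git history. It is a crude proxy for churn, assembled for illustration; it is not GitClear’s methodology.

```python
import subprocess

def added_vs_deleted(since: str = "90 days ago") -> tuple[int, int]:
    """Crude churn proxy: total lines added vs. deleted in recent history.

    Not GitClear's metric -- just a quick way to see whether additions
    are outpacing deletions and updates in a repository.
    """
    out = subprocess.run(
        ["git", "log", f"--since={since}", "--numstat", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    added = deleted = 0
    for line in out.splitlines():
        parts = line.split("\t")
        # numstat lines look like: "<added>\t<deleted>\t<path>"
        if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
            added += int(parts[0])
            deleted += int(parts[1])
    return added, deleted

if __name__ == "__main__":
    a, d = added_vs_deleted()
    print(f"added: {a}, deleted: {d}, ratio: {a / max(d, 1):.2f}")
```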

Can AI be removed from software development?

However, AI doesn’t have to be removed from software development and delivery. On the contrary, there’s plenty to be excited about. As noted in the latest volume of the Technology Radar — Thoughtworks’ report on technologies and practices from work with hundreds of clients all over the world — the coding assistance space is full of opportunities. 

Specifically, the report noted that tools like Cursor, Cline, and Windsurf can enable software engineering agents. In practice, this looks like an agent-like feature inside the development environment that developers can hand specific sets of coding tasks, in the form of a natural language prompt. This enables a true human/machine partnership.

That being said, to only focus on code generation is to miss the variety of ways AI can help software developers. For example, Thoughtworks has been interested in how generative AI can be used to understand legacy codebases, and we see a lot of promise in tools like Unblocked, which is an AI team assistant that helps teams do just that. In fact, Anthropic’s Claude Code helped us add support for new languages in an internal tool, CodeConcise. We use CodeConcise to understand legacy systems; and while our success was mixed, we do think there’s real promise here.

Tightening practices to better leverage AI

It’s important to remember much of the work developers do isn’t developing something new from scratch. A large proportion of their work is evolving and adapting existing (and sometimes legacy) software. Sprawling and janky codebases that have taken on technical debt are, unfortunately, the norm. Simply applying AI will likely make things worse, not better, especially with approaches like vibe coding.

This is why developer judgment will become more critical than ever. In the latest edition of the Technology Radar report, AI-friendly code design is highlighted, based on our experience that AI coding assistants perform best with well-structured codebases. 

In practice, this requires many different things, including clear and expressive naming to ensure context is clearly communicated (essential for code maintenance), reducing duplicate code, and ensuring modularity and effective abstractions. Done together, these will all help make code more legible to AI systems.
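
As a small illustration (ours, not the Technology Radar’s), compare a terse, context-free function with a version whose expressive naming, explicit data shape, and single responsibility give both humans and AI assistants something to work with:

```python
from dataclasses import dataclass

# Hard for an assistant (or a new teammate) to reason about:
def proc(d, t):
    return [x for x in d if x[2] > t]

# Clearer for humans and AI assistants alike: expressive names, an explicit
# data shape, and one well-defined responsibility.
@dataclass
class Order:
    order_id: str
    customer_id: str
    total: float

def orders_above_threshold(orders: list[Order], threshold: float) -> list[Order]:
    """Return orders whose total exceeds the given threshold."""
    return [order for order in orders if order.total > threshold]

orders = [Order("o-1", "c-9", 120.0), Order("o-2", "c-4", 40.0)]
print(orders_above_threshold(orders, threshold=100.0))
```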

Good coding practices are all too easy to overlook when productivity and effectiveness are measured purely in terms of output. That was true before AI tooling existed, and it is all the more reason for software development to put good coding first.

AI assistance demands greater human responsibility

Instagram co-founder Mike Krieger recently claimed that in three years software engineers won’t write any code: they will only review AI-created code. This might sound like a huge claim, but it’s important to remember that reviewing code has always been a major part of software development work. With this in mind, perhaps the evolution of software development won’t be as dramatic as some fear.

But there’s another argument: as AI becomes embedded in how we build software, software developers will take on more responsibility, not less. This is something we’ve discussed a lot at Thoughtworks: the job of verifying that an AI-built system is correct will fall to humans. Yes, verification itself might be AI-assisted, but it will be the role of the software developer to ensure confidence. 

In a world where trust is becoming highly valuable — as evidenced by the emergence of the chief trust officer — the work of software developers is even more critical to the infrastructure of global industry. It’s vital software development is valued: the impact of thoughtless automation and pure vibes could prove incredibly problematic (and costly) in the years to come.

This content was produced by Thoughtworks. It was not written by MIT Technology Review’s editorial staff.

Powering the food industry with AI

There has never been a more pressing time for food producers to harness technology to tackle the sector’s tough mission: to produce ever more healthy and appealing food for a growing global population in a way that is resilient and affordable, all while minimizing waste and reducing the sector’s environmental impact. From farm to factory, artificial intelligence and machine learning can support these goals by increasing efficiency, optimizing supply chains, and accelerating the research and development of new types of healthy products.

In agriculture, AI is already helping farmers to monitor crop health, tailor the delivery of inputs, and make harvesting more accurate and efficient. In labs, AI is powering experiments in gene editing to improve crop resilience and enhance the nutritional value of raw ingredients. For processed foods, AI is optimizing production economics, improving the texture and flavor of products like alternative proteins and healthier snacks, and strengthening food safety processes too. 

But despite this promise, industry adoption still lags. Data-sharing remains limited and companies across the value chain have vastly different needs and capabilities. There are also few standards and data governance protocols in place, and more talent and skills are needed to keep pace with the technological wave. 

All the same, progress is being made and the potential for AI in the food sector is huge. Key findings from the report are as follows: 

Predictive analytics are accelerating R&D cycles in crop and food science. AI reduces the time and resources needed to experiment with new food products and turns traditional trial-and-error cycles into more efficient data-driven discoveries. Advanced models and simulations enable scientists to explore natural ingredients and processes by simulating thousands of conditions, configurations, and genetic variations until they crack the right combination. 

AI is bringing data-driven insights to a fragmented supply chain. AI can revolutionize the food industry’s complex value chain by breaking operational silos and translating vast streams of data into actionable intelligence. Notably, large language models (LLMs) and chatbots can serve as digital interpreters, democratizing access to data analysis for farmers and growers, and enabling more informed, strategic decisions by food companies. 

Partnerships are crucial for maximizing respective strengths. While large agricultural companies lead in AI implementation, promising breakthroughs often emerge from strategic collaborations with academic institutions and startups that leverage complementary strengths. Large companies contribute extensive datasets and industry experience, while startups bring innovation, creativity, and a clean data slate. Combining expertise in a collaborative approach can increase the uptake of AI.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Five benefits of a health tech accelerator program

In the ever-evolving world of health care, the role of technology is becoming increasingly crucial. From improving patient outcomes to streamlining administrative processes, digital technologies are changing the face of the industry. However, for startups developing health tech solutions, breaking into the market and scaling their products can be a challenging journey, requiring access to resources, expertise, and a network they might not have. This is where health tech accelerator programs come in.

Health tech accelerator programs are designed to support early-stage startups in the health technology space, providing them with the resources, mentorship, and funding they need to grow and succeed. These programs are often highly competitive, and startups that are selected gain access to a wealth of opportunities that can significantly accelerate their development. In this article, we’ll explore five key benefits of participating in a health tech accelerator program.

1. Access to mentorship and expertise

One of the most valuable aspects of health tech accelerator programs is the access they provide to experienced mentors and industry experts. Health tech startups often face unique challenges, such as navigating complex health-care regulations, developing scalable technologies, and understanding the intricacies of health systems. Having mentors who have firsthand experience in these areas can provide critical guidance.

These mentors often include clinicians, informaticists, investors, health-care professionals, and thought leaders. Their insights can help startups refine their business strategies, optimize their digital health solutions, and navigate the health-care landscape. With this guidance, startups are better positioned to make informed decisions, avoid common pitfalls, and accelerate their growth.

2. Funding and investment opportunities

For many startups, securing funding is one of the biggest hurdles they face. Health tech innovation can be expensive, especially in the early stages when startups are working on solution development, regulatory approvals, and pilot testing. Accelerator programs often provide startups with seed funding, as well as the opportunity to connect with venture capitalists, angel investors, and other potential backers.

Many accelerator programs culminate in a “demo day,” where startups pitch their solutions to a room full of investors and other key decision-makers. These events can be crucial in securing the funding necessary to scale a digital health solution or product. Beyond initial funding, the exposure gained from being part of a well-known accelerator program can lead to additional investment opportunities down the road.

3. Networking and industry connections

The health-care industry is notoriously complex and fragmented, making it difficult for new players to break in without the right connections. Health tech accelerator programs offer startups the opportunity to network with key leaders in the health-care and technology ecosystems, including clinicians, payers, pharmaceutical companies, government agencies, and potential customers.

Through structured networking events, mentorship sessions, and partnerships with established organizations, startups gain access to a wide range of stakeholders who can help validate their products, open doors to new markets, and provide feedback that can be used to refine their offerings. In the health tech space, strong industry connections are often critical to gaining traction and scaling successfully.

4. Market validation and credibility

The health tech industry is highly regulated and risk-averse, meaning that customers and investors are often wary of new technologies. Participating in an accelerator program can serve as a form of market validation, signaling that a startup’s offering has been vetted by experts and has the potential for success.

The credibility gained from being accepted into a prestigious accelerator program can be a game-changer. It provides startups with a level of legitimacy that can help them stand out in a crowded and competitive market. Whether it’s attracting investors, forging partnerships, or securing early customers, the reputation of the accelerator can give a startup a significant boost.

Additionally, accelerator programs often have ties to major health-care institutions and organizations. This can provide startups with opportunities to pilot their products in real-world health-care settings, which can serve as both a test of the product’s viability and a powerful proof of concept for future customers and investors.

5. Access to resources and infrastructure

Another significant benefit of accelerators is the access to resources and infrastructure that startups might not obtain otherwise. These resources can include everything from clinical data for model building and testing to legal and regulatory support and the technology infrastructure needed to deploy and scale. For early-stage health tech companies, these resources can be a game-changer.

Conclusion

Health tech startups are at the forefront of transforming health care, but navigating the challenges of innovation, regulation, and market entry can be daunting. Health tech accelerator programs offer invaluable support by providing startups with the mentorship, funding, networking opportunities, credibility, and resources they need to succeed.

Mayo Clinic Platform_Accelerate is a 30-week accelerator program from Mayo Clinic Platform focused on helping startups with digital technologies advance their solution development and get to market faster. Learn more about the program and the access it provides to clinical data, Mayo Clinic experts, technical resources, investors, and more at https://www.mayoclinicplatform.org/accelerate/.

This content was produced by Mayo Clinic Platform. It was not written by MIT Technology Review’s editorial staff.

Customizing generative AI for unique value

Since the emergence of enterprise-grade generative AI, organizations have tapped into the rich capabilities of foundational models, developed by the likes of OpenAI, Google DeepMind, Mistral, and others. Over time, however, businesses often found these models limiting since they were trained on vast troves of public data. Enter customization—the practice of adapting large language models (LLMs) to better suit a business’s specific needs by incorporating its own data and expertise, teaching a model new skills or tasks, or optimizing prompts and data retrieval.

Customization is not new, but the early tools were fairly rudimentary, and technology and development teams were often unsure how to do it. That’s changing, and the customization methods and tools available today are giving businesses greater opportunities to create unique value from their AI models.

We surveyed 300 technology leaders in mostly large organizations in different industries to learn how they are seeking to leverage these opportunities. We also spoke in-depth with a handful of such leaders. They are all customizing generative AI models and applications, and they shared with us their motivations for doing so, the methods and tools they’re using, the difficulties they’re encountering, and the actions they’re taking to surmount them.

Our analysis finds that companies are moving ahead ambitiously with customization. They are cognizant of its risks, particularly those revolving around data security, but are employing advanced methods and tools, such as retrieval-augmented generation (RAG), to realize their desired customization gains.
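
To make that concrete, here is a minimal, illustrative RAG sketch: retrieve the most relevant internal document, then ground the model’s prompt in it. The word-overlap retrieval and the sample documents are toy stand-ins; production systems use embedding-based search and send the assembled prompt to a real LLM.

```python
# Minimal RAG sketch: retrieve the most relevant internal document, then
# ground the model's prompt in it. Retrieval here is a toy word-overlap
# score; production systems use embedding search.

DOCS = {
    "refunds": "Refunds are issued within 14 days of a return request.",
    "shipping": "Standard shipping takes 3-5 business days within the US.",
}

def retrieve(question: str) -> str:
    """Return the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(DOCS.values(), key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question: str) -> str:
    """Assemble a prompt grounded in the retrieved business data."""
    context = retrieve(question)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# The prompt below would be sent to whichever LLM the business has chosen.
print(build_prompt("How long do refunds take?"))
```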

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Architecting tomorrow’s network

Technological advances continue to move at breakneck speeds. While companies struggle through their digital transformation journeys, even more new technologies emerge, with promises of opportunity, cost savings—and added complexity. Many companies have yet to fully adopt AI and ML technologies, let alone figure out how newer technologies like generative AI might fit into their programs.

A 2024 IDC survey revealed 22% of tech leaders said their organizations haven’t yet reached full digital maturity, and 41% of respondents said the complexity of integrating new technologies and approaches with existing tech stacks is the biggest challenge for tech adoption.

To fuel successful technology adoption and maximize outcomes, companies need to focus on simplifying infrastructure architecture rather than on making new technologies fit into existing stacks. “When it comes to digital transformation, choosing an architectural approach over a purely technology-driven one is about seeing the bigger picture,” says Rajarshi Purkayastha, the VP of solutions at Tata Communications. “Instead of focusing on isolated tools or systems, an architectural approach connects the dots—linking silos rather than simply trying to eliminate them.”

Establishing the robust global network most companies need to connect these dots and link their silos requires more capability and bandwidth than traditional networks like multiprotocol label switching (MPLS) circuits can typically provide in a cost-effective way. To keep pace with innovation, consumer demands, and market competition, today’s wide area networks (WANs) need to support flexible, anywhere connectivity for multi-cloud services, remote locations and users, and edge data centers.

Understanding hybrid WAN

Traditional MPLS became the gold standard for most WAN architectures in the early 2000s to address the mounting challenges brought by the rapid growth of the internet and subsequent rapid expansions of enterprise networks. Today, as technological advances continue to accelerate, however, the limitations of MPLS are becoming apparent: MPLS networking is expensive; hard-wired connectivity is difficult to scale; and on its own, it doesn’t fit well with cloud computing adoption strategies.

In 2014, Gartner predicted hybrid WANs would be the future of networking. Hybrid WANs differ from traditional WANs in that the hybrid architecture facilitates multiple connection points: private network connections for mission-critical traffic, usually via legacy MPLS circuits; public network connections, typically internet links such as 5G, LTE, or VPN, for less critical data traffic; and dedicated internet access (DIA) for somewhat critical traffic.

In 2025, we are seeing signs Gartner’s hybrid WAN prediction might be coming to fruition. At Tata Communications, for example, hybrid WAN is a key component of its network fabric—one facet of its digital fabric architecture, which weaves together networking, interaction, cloud, and IoT technologies.

“Our digital fabric simplifies the complexity of managing diverse technologies, breaks down silos, and provides a secure, unified platform for hyper-connected ecosystems,” explains Purkayastha. “By doing so, it ensures businesses have the agility, visibility, and scalability to succeed in their digital transformation journey—turning challenges into opportunities for innovation and growth.”

Hybrid WAN provides the flexible, real-time data traffic channeling an architectural approach requires to create a programmable, performant, and secure network that can reduce complexities and ease adoption of emerging technologies. “It’s not just about solving today’s challenges—it lays the groundwork for a resilient, scalable future,” says Purkayastha.

Benefits of hybrid WAN

Hybrid networking architectures support digital transformation journeys and emerging tech adoption in several ways.

More efficient, even intelligent, traffic management. A hybrid architecture brings together multiple avenues of data flow from MPLS and internet connectivity, which provides a highly flexible, resilient architecture along with increased bandwidth to decrease network congestion. It also allows companies to prioritize critical data traffic. Hybrid WANs can also combine the hyper-secure connectivity of MPLS with software-defined WAN (SD-WAN) technology, which allows for intelligent switching across a company’s information highways. If, for instance, one route encounters latency or malfunctions, that traffic will be automatically re-routed, helping to maintain continuous connectivity and reduce downtime (see the sketch after this list).

Increased scalability. The agility and flexibility of a hybrid WAN allows companies to dynamically scale bandwidth up or down as application needs change. An agile WAN architecture also paves the way for scaling business operations.

Less complex cloud migration and easier adoption of new technologies. Adding internet connectivity to MPLS circuits allows for seamless traffic flow to the cloud, providing a more direct way for companies to transition to cloud-first strategies. Easing cloud migration also opens doors for emerging technologies like AI, generative AI, and machine learning, enabling companies to innovate to remain relevant in their markets.

Improved productivity. The internet speed and connectivity of a hybrid WAN keeps geographically separated company locations and remote workers connected, increasing efficiency and collaboration.

Easier integration with legacy systems. A hybrid approach allows legacy MPLS connections to remain, while offloading less sensitive data traffic to the internet. The ability to incorporate legacy applications and processes into a hybrid architecture not only eases integration and adoption, but helps to maximize returns on network investments.

Network cost savings. Many of the benefits on this list translate into cost savings, as internet bandwidth is considerably cheaper than MPLS networking. A reduction in downtime reduces expenses companywide, and the ability to customize bandwidth usage at scale gives companies more control over network expenses while maximizing connectivity.
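
As a deliberately simplified illustration of the intelligent switching described above, the sketch below picks the cheapest healthy link that meets an application’s latency budget and fails over when a link degrades. The link table and numbers are invented; real SD-WAN controllers make these decisions from live telemetry and policy.

```python
# Toy SD-WAN-style path selection: prefer the cheapest healthy link that
# meets the application's latency target, and fail over when a link degrades.
# All names and numbers here are invented for illustration.
LINKS = {
    "mpls":     {"latency_ms": 20, "healthy": True, "cost": 3},
    "internet": {"latency_ms": 45, "healthy": True, "cost": 1},
    "lte":      {"latency_ms": 80, "healthy": True, "cost": 2},
}

def pick_link(max_latency_ms: int) -> str:
    """Return the cheapest healthy link within the latency budget."""
    candidates = [
        (attrs["cost"], name)
        for name, attrs in LINKS.items()
        if attrs["healthy"] and attrs["latency_ms"] <= max_latency_ms
    ]
    if not candidates:
        raise RuntimeError("no usable path; escalate to operations")
    return min(candidates)[1]

print(pick_link(max_latency_ms=50))   # "internet": healthy, in budget, cheapest
LINKS["internet"]["healthy"] = False  # simulate an outage on the internet link
print(pick_link(max_latency_ms=50))   # traffic fails over to "mpls"
```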

Deploying a hybrid WAN

A recent collaboration between Air France-KLM Group and Tata Communications highlights the benefits a hybrid WAN can bring to a global enterprise.

Air France looked to increase its network and application performance threefold without incurring additional costs—and while ensuring the security and integrity of its network. A hybrid WAN solution—specifically, using MPLS and internet services from Tata Communications and other third-party providers—afforded the flexibility, resilience, and continuous connectivity it needed.

According to Tata Communications, the hybrid architecture increased Air France’s network availability to more than 99.94%, supporting its global office locations as well as its customer-facing applications, including passenger and cargo bookings and operating service centers.

“However, which connectivity to choose based on location type and application is complex, given that networks vary by region, and one also has to take into account regulations—for example, in China,” says Purkayastha. “This is what Tata Communications helps customers with—choosing the right type of network, resulting in both cost savings and a better user experience.”

Enabling business for success

Innovating and expanding enterprise operations in today’s era of increasingly complex technology evolutions requires businesses to find agile and cost-effective avenues to stay connected and competitive.

Because emerging machine learning and AI technologies aren’t likely to slow down, hybrid network architectures are likely to become necessary infrastructure components for companies of all sizes. The flexibility, resiliency, and configurability of a hybrid WAN provide a relatively straightforward, lightweight network upgrade that lets companies focus on business objectives, with less time and expense spent worrying about network reliability and reach. “At the end of the day, it isn’t just about technology—it’s about enabling your business to stay agile, competitive, and ready to innovate, no matter how the landscape shifts,” says Purkayastha.

The evolution of AI: From AlphaGo to AI agents, physical AI, and beyond

In March 2016, the world witnessed a unique moment in the evolution of artificial intelligence (AI) when AlphaGo, an AI developed by DeepMind, played against Lee Sedol, one of the greatest Go players of the modern era. The match reached a critical juncture in Game 2 with Move 37, where AlphaGo made a move so unconventional and creative that it stunned both the audience and Lee Sedol himself.

This moment has since been recognized as a pivotal point in the evolution of AI. It was not merely a demonstration of AI’s proficiency in playing Go but a revelation that machines could think outside the box and exhibit creativity. The match fundamentally altered the perception of AI, transforming it from a tool that follows predefined rules to an entity capable of innovation. Since that fateful match, AI has continued to drive profound changes across industries, from content recommendations to fraud detection. However, the game-changing power of AI became evident when ChatGPT brought generative AI to the masses.

The critical moment of ChatGPT

The release of ChatGPT by OpenAI in November 2022 marked another significant milestone in the evolution of AI. ChatGPT, a large language model capable of generating human-like text, demonstrated the potential of AI to understand and generate natural language. This capability opened up new possibilities for AI applications, from customer service to content creation. The world responded to ChatGPT with a mix of awe and excitement, recognizing the potential of AI to transform how humans communicate and interact with technology to enhance our lives.

The rise of agentic AI

Today, the rise of agentic AI — systems capable of advanced reasoning and task execution — is revolutionizing the way organizations operate. Agentic AI systems are designed to pursue complex goals with autonomy and predictability. They are productivity enablers that can effectively incorporate humans in the loop through multimodal interaction. These systems can take goal-directed actions with minimal human oversight, make contextual decisions, and dynamically adjust plans based on changing conditions.
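
Stripped to its essentials, an agentic system runs a loop: perceive the environment, decide the next step toward the goal, act, and repeat. The toy sketch below illustrates the pattern only; in a real system, the decision step would be backed by an LLM and the actions by external tools.

```python
# A minimal perceive-decide-act loop, the control pattern behind agentic
# systems. Everything here is illustrative: a real agent would call an LLM
# to plan and external tools to act.

def perceive(world: dict) -> dict:
    """Observe the environment (here, just the outstanding tasks)."""
    return {"remaining": list(world["tasks"])}

def decide(observation: dict, goal: str) -> str | None:
    """Pick the next step; a real agent would plan toward the goal with an LLM."""
    return observation["remaining"][0] if observation["remaining"] else None

def act(world: dict, task: str) -> None:
    """Carry out the chosen step and update the world state."""
    print(f"executing: {task}")
    world["tasks"].remove(task)

world = {"tasks": ["find flights", "compare fares", "book ticket"]}
goal = "book the cheapest flight"

# The loop re-perceives after every action, so plans adjust as conditions change.
while (step := decide(perceive(world), goal)) is not None:
    act(world, step)
print("goal reached; reporting back to the user")
```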

Deploy agentic AI today    

Microsoft and NVIDIA are at the forefront of developing and deploying agentic AI systems, providing the necessary infrastructure and tools to enable advanced capabilities such as:

Azure AI services: Microsoft Azure AI services have been instrumental in creating agentic AI systems. For instance, the Azure AI Foundry and Azure OpenAI Service provide the foundational tools for building AI agents that can autonomously perceive, decide, and act in pursuit of specific goals. These services enable the development of AI systems that go beyond simple task execution to more complex, multi-step processes.

AI agents and agentic AI systems: Microsoft has developed various AI agents that automate and execute business processes, working alongside or on behalf of individuals and organizations. These agents, accessible via Microsoft Copilot Studio, Azure AI, or GitHub, are designed to autonomously perceive, decide, and act, adapting to new circumstances and conditions. For example, the mobile data recorder (MDR) copilot at BMW, powered by Azure AI, allows engineers to chat with the interface using natural language, converting conversations into technical insights.

Multi-agent systems: Microsoft’s research and development in multi-agent AI systems have led to the creation of modular, collaborative agents that can dynamically adapt to different tasks. These systems are designed to work together seamlessly, enhancing overall performance and efficiency. For example, Magentic-One, a high-performance generalist agentic system, is designed to solve open-ended tasks across various domains, representing a significant advancement in agent technology.

Collaboration with NVIDIA: Microsoft and NVIDIA have collaborated deeply across the entire technology stack, including Azure accelerated instances equipped with NVIDIA GPUs. This enables users to develop agentic AI applications by leveraging NVIDIA GPUs alongside NVIDIA NIM models and NeMo microservices across their selected Azure services, such as Azure Machine Learning, Azure Kubernetes Service, or Azure Virtual Machines. Furthermore, NVIDIA NeMo microservices offer capabilities to support the creation and ongoing enhancement of agentic AI applications.

Physical AI and beyond

Looking ahead, the next wave in AI development is physical AI, powered by AI models that can understand and engage with our world and generate their actions based on advanced sensory input. Physical AI will enable a new frontier of digitalization for heavy industries, delivering more intelligence and autonomy to the world’s warehouses and factories, and driving major advancements in autonomous transportation. The NVIDIA Omniverse development platform is available on Microsoft Azure to enable developers to build advanced physical AI, simulation, and digital twin applications that accelerate industrial digitalization.

As AI continues to evolve, it promises to bring even more profound changes to our world. The journey from a single move on a Go board to the emergence of agentic and physical AI underscores the incredible potential of AI to innovate, transform industries, and elevate our daily lives.

Experience the latest in AI innovation at NVIDIA GTC

Discover cutting-edge AI solutions from Microsoft and NVIDIA that push the boundaries of innovation. Join Microsoft at the NVIDIA GTC AI Conference from March 17 to 21, 2025, in San Jose, California, or virtually.

Visit Microsoft at booth #514 to connect with Azure and NVIDIA AI experts and explore the latest AI technology and hardware. Attend Microsoft’s sessions to learn about Azure’s comprehensive AI platform and accelerate your innovation journey.

Learn more and register today.

This content was produced by Microsoft and NVIDIA. It was not written by MIT Technology Review’s editorial staff.

Designing the future of entertainment

An entertainment revolution, powered by AI and other emerging technologies, is fundamentally changing how content is created and consumed today. Media and entertainment (M&E) brands are faced with unprecedented opportunities—to reimagine costly and complex production workloads, to predict the success of new scripts or outlines, and to deliver immersive entertainment in novel formats like virtual reality (VR) and the metaverse. Meanwhile, the boundaries between entertainment formats—from gaming to movies and back—are blurring, as new alliances form across industries, and hardware innovations like smart glasses and autonomous vehicles make media as ubiquitous as air.

At the same time, media and entertainment brands are facing competitive threats. They must reinvent their business models and identify new revenue streams in a more fragmented and complex consumer landscape. They must keep up with advances in hardware and networking, while building an IT infrastructure to support AI and related technologies. Digital media standards will need to evolve to ensure interoperability and seamless experiences, while companies search for the right balance between human and machine, and protect their intellectual property and data.

This report examines the key technology shifts transforming today’s media and entertainment industry and explores their business implications. Based on in-depth interviews with media and entertainment executives, startup founders, industry analysts, and experts, the report outlines the challenges and opportunities that tech-savvy business leaders will find ahead.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Harnessing cloud and AI to power a sustainable future 

Organizations working toward ambitious sustainability targets are finding an ally in emerging technologies. In agriculture, for instance, AI can use satellite imagery and real-time weather data to optimize irrigation and reduce water usage. In urban areas, cloud-enabled AI can power intelligent traffic systems, rerouting vehicles to cut commute times and emissions. At an industrial level, advanced algorithms can predict equipment failures days or even weeks in advance. 

But AI needs a robust foundation to deliver on its lofty promises—and cloud computing provides that bedrock. As AI and cloud continue to converge and mature, organizations are discovering new ways to be more environmentally conscious while driving operational efficiencies. 

Data from a poll conducted by MIT Technology Review Insights in 2024 suggests growing momentum for this dynamic duo: 38% of executives polled say that cloud and AI are key components of their company’s sustainability initiatives, and another 35% say the combination is making a meaningful contribution to sustainability goals (see Figure 1). 

This enthusiasm isn’t just theoretical, either. Consider that 45% of respondents identified energy consumption optimization as their most relevant use case for AI and cloud in sustainability initiatives. And organizations are backing these priorities with investment—more than 50% of companies represented in the poll plan to increase their spending on cloud and AI-focused sustainability initiatives by 25% or more over the next two years. 

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.