Accelerating generative AI deployment with microservices

In this exclusive webcast, we delve into the transformative potential of portable microservices for the deployment of generative AI models. We explore how startups and large organizations are leveraging this technology to streamline generative AI deployment, enhance customer service, and drive innovation across domains, including chatbots, document analysis, and video generation.

Our discussion focuses on overcoming key challenges such as deployment complexity, security, and cost management. We also discuss how microservices can help executives realize business value with generative AI while maintaining control over data and intellectual property.

Why AI could eat quantum computing’s lunch

Tech companies have been funneling billions of dollars into quantum computers for years. The hope is that they’ll be a game changer for fields as diverse as finance, drug discovery, and logistics.

Those expectations have been especially high in physics and chemistry, where the weird effects of quantum mechanics come into play. In theory, this is where quantum computers could have a huge advantage over conventional machines.

But while the field struggles with the realities of tricky quantum hardware, another challenger is making headway in some of these most promising use cases. AI is now being applied to fundamental physics, chemistry, and materials science in a way that suggests quantum computing’s purported home turf might not be so safe after all.

The scale and complexity of quantum systems that can be simulated using AI are advancing rapidly, says Giuseppe Carleo, a professor of computational physics at the Swiss Federal Institute of Technology (EPFL). Last month, he coauthored a paper published in Science showing that neural-network-based approaches are rapidly becoming the leading technique for modeling materials with strong quantum properties. Meta also recently unveiled an AI model trained on a massive new data set of materials that has jumped to the top of a leaderboard for machine-learning approaches to material discovery.

Given the pace of recent advances, a growing number of researchers are now asking whether AI could solve a substantial chunk of the most interesting problems in chemistry and materials science before large-scale quantum computers become a reality. 

“The existence of these new contenders in machine learning is a serious hit to the potential applications of quantum computers,” says Carleo. “In my opinion, these companies will find out sooner or later that their investments are not justified.”

Exponential problems

The promise of quantum computers lies in their potential to carry out certain calculations much faster than conventional computers. Realizing this promise will require much larger quantum processors than we have today. The biggest devices have just crossed the thousand-qubit mark, but achieving an undeniable advantage over classical computers will likely require tens of thousands, if not millions. Once that hardware is available, though, a handful of quantum algorithms, like the encryption-cracking Shor’s algorithm, have the potential to solve problems exponentially faster than classical algorithms can. 
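
To put rough numbers on that claim (standard complexity estimates for factoring an n-bit integer, offered as background rather than figures from the research discussed here), Shor’s algorithm runs in polynomial time, while the best known classical method, the general number field sieve, is sub-exponential:

```latex
\[
T_{\mathrm{Shor}}(n) = O\!\left(n^{3}\right)
\qquad\text{versus}\qquad
T_{\mathrm{classical}}(n) = \exp\!\left(O\!\left(n^{1/3}(\log n)^{2/3}\right)\right).
\]
```

That gap between a polynomial and a sub-exponential curve is what “exponentially faster” is shorthand for, and it only pays off once hardware can run the algorithm reliably at scale.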

But for many quantum algorithms with more obvious commercial applications, like searching databases, solving optimization problems, or powering AI, the speed advantage is more modest. And last year, a paper coauthored by Microsoft’s head of quantum computing, Matthias Troyer, showed that these theoretical advantages disappear if you account for the fact that quantum hardware operates orders of magnitude slower than modern computer chips. The difficulty of getting large amounts of classical data in and out of a quantum computer is also a major barrier. 

So Troyer and his colleagues concluded that quantum computers should instead focus on problems in chemistry and materials science that require simulation of systems where quantum effects dominate. A computer that operates along the same quantum principles as these systems should, in theory, have a natural advantage here. In fact, this has been a driving idea behind quantum computing ever since the renowned physicist Richard Feynman first proposed the idea.

The rules of quantum mechanics govern many things with huge practical and commercial value, like proteins, drugs, and materials. Their properties are determined by the interactions of their constituent particles, in particular their electrons—and simulating these interactions in a computer should make it possible to predict what kinds of characteristics a molecule will exhibit. This could prove invaluable for discovering things like new medicines or more efficient battery chemistries, for example. 

But the intuition-defying rules of quantum mechanics—in particular, the phenomenon of entanglement, which allows the quantum states of distant particles to become intrinsically linked—can make these interactions incredibly complex. Precisely tracking them requires complicated math that gets exponentially tougher the more particles are involved. That can make simulating large quantum systems intractable on classical machines.
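
A back-of-the-envelope count (standard textbook reasoning, not a figure from the article) shows why: the wave function of N interacting spin-1/2 particles carries one complex amplitude for every configuration of the particles, so its description grows exponentially with N.

```latex
\[
|\psi\rangle \;=\; \sum_{s_1,\dots,s_N \,\in\, \{\uparrow,\downarrow\}} c_{s_1 \dots s_N}\,|s_1 \dots s_N\rangle
\qquad\Longrightarrow\qquad
2^{N}\ \text{amplitudes to track.}
\]
```

For N = 50 particles that is already about 10^15 numbers, and each additional particle doubles the count.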

This is where quantum computers could shine. Because they also operate on quantum principles, they are able to represent quantum states much more efficiently than is possible on classical machines. They could also take advantage of quantum effects to speed up their calculations.

But not all quantum systems are the same. Their complexity is determined by the extent to which their particles interact, or correlate, with each other. In systems where these interactions are strong, tracking all these relationships can quickly explode the number of calculations required to model the system. But in most systems of practical interest to chemists and materials scientists, correlation is weak, says Carleo. That means their particles don’t affect each other’s behavior significantly, which makes the systems far simpler to model.

The upshot, says Carleo, is that quantum computers are unlikely to provide any advantage for most problems in chemistry and materials science. Classical tools that can accurately model weakly correlated systems already exist, the most prominent being density functional theory (DFT). The insight behind DFT is that all you need to understand a system’s key properties is its electron density, a measure of how its electrons are distributed in space. This makes for much simpler computation but can still provide accurate results for weakly correlated systems.

Simulating large systems using these approaches requires considerable computing power. But in recent years there’s been an explosion of research using DFT to generate data on chemicals, biomolecules, and materials—data that can be used to train neural networks. These AI models learn patterns in the data that allow them to predict what properties a particular chemical structure is likely to have, but they are orders of magnitude cheaper to run than conventional DFT calculations. 
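
As a rough illustration of the surrogate-model idea (a minimal sketch with synthetic stand-in data, not the actual models described in the article), one can fit a small neural network to property values that would normally come from DFT and then screen new candidates at negligible cost:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: each row is a fixed-length descriptor of a structure
# (composition fractions, bond statistics, etc.); the target plays the role
# of a property such as formation energy that DFT would normally supply.
X = rng.normal(size=(5000, 32))
y = X @ rng.normal(size=32) + 0.1 * rng.normal(size=5000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit the surrogate once on the expensive "DFT-labeled" data...
surrogate = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500)
surrogate.fit(X_train, y_train)

# ...then predictions for new candidates are nearly free compared with DFT.
print("held-out R^2:", surrogate.score(X_test, y_test))
```

In the real pipelines the descriptors encode atomic structure and the labels come from millions of DFT runs, but the economics are the same: pay for the physics once, then reuse it.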

This has dramatically expanded the size of systems that can be modeled—to as many as 100,000 atoms at a time—and how long simulations can run, says Alexandre Tkatchenko, a physics professor at the University of Luxembourg. “It’s wonderful. You can really do most of chemistry,” he says.

Olexandr Isayev, a chemistry professor at Carnegie Mellon University, says these techniques are already being widely applied by companies in chemistry and life sciences. And for researchers, previously out-of-reach problems such as optimizing chemical reactions, developing new battery materials, and understanding protein binding are finally becoming tractable.

As with most AI applications, the biggest bottleneck is data, says Isayev. Meta’s recently released materials data set was made up of DFT calculations on 118 million molecules. A model trained on this data achieved state-of-the-art performance, but creating the training material took vast computing resources, well beyond what’s accessible to most research teams. That means fulfilling the full promise of this approach will require massive investment.

Modeling a weakly correlated system using DFT is not an exponentially scaling problem, though. This suggests that with more data and computing resources, AI-based classical approaches could simulate even the largest of these systems, says Tkatchenko. Given that quantum computers powerful enough to compete are likely still decades away, he adds, AI’s current trajectory suggests it could reach important milestones, such as precisely simulating how drugs bind to a protein, much sooner.

Strong correlations

When it comes to simulating strongly correlated quantum systems—ones whose particles interact a lot—methods like DFT quickly run out of steam. While more exotic, these systems include materials with potentially transformative capabilities, like high-temperature superconductivity or ultra-precise sensing. But even here, AI is making significant strides.

In 2017, EPFL’s Carleo and Microsoft’s Troyer published a seminal paper in Science showing that neural networks could model strongly correlated quantum systems. The approach doesn’t learn from data in the classical sense. Instead, Carleo says, it is similar to DeepMind’s AlphaZero model, which mastered the games of Go, chess, and shogi using nothing more than the rules of each game and the ability to play itself.

In this case, the rules of the game are provided by Schrödinger’s equation, which can precisely describe a system’s quantum state, or wave function. The model plays against itself by arranging particles in a certain configuration and then measuring the system’s energy level. The goal is to reach the lowest energy configuration (known as the ground state), which determines the system’s properties. The model repeats this process until energy levels stop falling, indicating that the ground state—or something close to it—has been reached.
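
A toy version of that loop, stripped of everything that makes the real method hard (one parameter and a two-level Hamiltonian instead of a neural-network wave function over many particles), captures the basic logic: evaluate the trial state’s energy and nudge the parameter downhill until the energy stops falling. This is only an illustrative sketch of the variational idea, not the published method.

```python
import numpy as np

H = np.array([[1.0, 0.5],
              [0.5, -1.0]])           # a tiny two-level "Hamiltonian"

def trial_state(theta):
    # A one-parameter family of normalized states; a neural quantum state
    # plays this role for systems with many interacting particles.
    return np.array([np.cos(theta), np.sin(theta)])

def energy(theta):
    psi = trial_state(theta)
    return psi @ H @ psi               # expectation value <psi|H|psi>

theta, lr, prev = 0.3, 0.1, np.inf
while prev - energy(theta) > 1e-10:    # stop when the energy stops falling
    prev = energy(theta)
    grad = (energy(theta + 1e-5) - energy(theta - 1e-5)) / 2e-5
    theta -= lr * grad

print("variational ground-state energy:", energy(theta))
print("exact ground-state energy:      ", np.linalg.eigvalsh(H)[0])
```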

The power of these models is their ability to compress information, says Carleo. “The wave function is a very complicated mathematical object,” he says. “What has been shown by several papers now is that [the neural network] is able to capture the complexity of this object in a way that can be handled by a classical machine.”

Since the 2017 paper, the approach has been extended to a wide range of strongly correlated systems, says Carleo, and results have been impressive. The Science paper he published with colleagues last month put leading classical simulation techniques to the test on a variety of tricky quantum simulation problems, with the goal of creating a benchmark to judge advances in both classical and quantum approaches.

Carleo says that neural-network-based techniques are now the best approach for simulating many of the most complex quantum systems they tested. “Machine learning is really taking the lead in many of these problems,” he says.

These techniques are catching the eye of some big players in the tech industry. In August, researchers at DeepMind showed in a paper in Science that they could accurately model excited states in quantum systems, which could one day help predict the behavior of things like solar cells, sensors, and lasers. Scientists at Microsoft Research have also developed an open-source software suite to help more researchers use neural networks for simulation.

One of the main advantages of the approach is that it piggybacks on massive investments in AI software and hardware, says Filippo Vicentini, a professor of AI and condensed-matter physics at École Polytechnique in France, who was also a coauthor on the Science benchmarking paper: “Being able to leverage these kinds of technological advancements gives us a huge edge.”

There is a caveat: Because the ground states are effectively found through trial and error rather than explicit calculations, they are only approximations. But this is also why the approach could make progress on what has looked like an intractable problem, says Juan Carrasquilla, a researcher at ETH Zurich, and another coauthor on the Science benchmarking paper.

If you want to precisely track all the interactions in a strongly correlated system, the number of calculations you need to do rises exponentially with the system’s size. But if you’re happy with an answer that is just good enough, there’s plenty of scope for taking shortcuts. 

“Perhaps there’s no hope to capture it exactly,” says Carrasquilla. “But there’s hope to capture enough information that we capture all the aspects that physicists care about. And if we do that, it’s basically indistinguishable from a true solution.”

And while strongly correlated systems are generally too hard to simulate classically, there are notable instances where this isn’t the case. That includes some systems that are relevant for modeling high-temperature superconductors, according to a 2023 paper in Nature Communications.

“Because of the exponential complexity, you can always find problems for which you can’t find a shortcut,” says Frank Noe, research manager at Microsoft Research, who has led much of the company’s work in this area. “But I think the number of systems for which you can’t find a good shortcut will just become much smaller.”

No magic bullets

However, Stefanie Czischek, an assistant professor of physics at the University of Ottawa, says it can be hard to predict what problems neural networks can feasibly solve. For some complex systems they do incredibly well, but then on other seemingly simple ones, computational costs balloon unexpectedly. “We don’t really know their limitations,” she says. “No one really knows yet what are the conditions that make it hard to represent systems using these neural networks.”

Meanwhile, there have also been significant advances in other classical quantum simulation techniques, says Antoine Georges, director of the Center for Computational Quantum Physics at the Flatiron Institute in New York, who also contributed to the recent Science benchmarking paper. “They are all successful in their own right, and they are also very complementary,” he says. “So I don’t think these machine-learning methods are just going to completely put all the other methods out of business.”

Quantum computers will also have their niche, says Martin Roetteler, senior director of quantum solutions at IonQ, which is developing quantum computers built from trapped ions. While he agrees that classical approaches will likely be sufficient for simulating weakly correlated systems, he’s confident that some large, strongly correlated systems will be beyond their reach. “The exponential is going to bite you,” he says. “There are cases with strongly correlated systems that we cannot treat classically. I’m strongly convinced that that’s the case.”

In contrast, he says, a future fault-tolerant quantum computer with many more qubits than today’s devices will be able to simulate such systems. This could help find new catalysts or improve understanding of metabolic processes in the body—an area of interest to the pharmaceutical industry.

Neural networks are likely to increase the scope of problems that can be solved, says Jay Gambetta, who leads IBM’s quantum computing efforts, but he’s unconvinced they’ll solve the hardest challenges businesses are interested in.

“That’s why many different companies that essentially have chemistry as their requirement are still investigating quantum—because they know exactly where these approximation methods break down,” he says.

Gambetta also rejects the idea that the technologies are rivals. He says the future of computing is likely to involve a hybrid of the two approaches, with quantum and classical subroutines working together to solve problems. “I don’t think they’re in competition. I think they actually add to each other,” he says.

But Scott Aaronson, who directs the Quantum Information Center at the University of Texas, says machine-learning approaches are directly competing against quantum computers in areas like quantum chemistry and condensed-matter physics. He predicts that a combination of machine learning and quantum simulations will outperform purely classical approaches in many cases, but that won’t become clear until larger, more reliable quantum computers are available.

“From the very beginning, I’ve treated quantum computing as first and foremost a scientific quest, with any industrial applications as icing on the cake,” he says. “So if quantum simulation turns out to beat classical machine learning only rarely, I won’t be quite as crestfallen as some of my colleagues.”

One area where quantum computers look likely to have a clear advantage is in simulating how complex quantum systems evolve over time, says EPFL’s Carleo. This could provide invaluable insights for scientists in fields like statistical mechanics and high-energy physics, but it seems unlikely to lead to practical uses in the near term. “These are more niche applications that, in my opinion, do not justify the massive investments and the massive hype,” Carleo adds.

Nonetheless, the experts MIT Technology Review spoke to said a lack of commercial applications is not a reason to stop pursuing quantum computing, which could lead to fundamental scientific breakthroughs in the long run.

“Science is like a set of nested boxes—you solve one problem and you find five other problems,” says Vicentini. “The complexity of the things we study will increase over time, so we will always need more powerful tools.”

An easier-to-use technique for storing data in DNA is inspired by our cells 

It turns out that you don’t need to be a scientist to encode data in DNA. Researchers have been working on DNA-based data storage for decades, but a new template-based method inspired by our cells’ chemical processes is easy enough for even nonscientists to practice. The technique could pave the way for an unusual but ultra-stable form of information storage. 

The idea of storing data in DNA was first proposed in the 1950s by the physicist Richard Feynman. Genetic material has exceptional storage density and durability; a single gram of DNA can store a trillion gigabytes of data and retain the information for thousands of years. Decades later, a team led by George Church at Harvard University put the idea into practice, encoding a 53,400-word book.

This early approach relied on DNA synthesis—stringing genetic sequences together piece by piece, like beads on a thread, using the four nucleotide building blocks A, T, C, and G to encode information. The process was expensive, time consuming, and error prone, creating only one bit (or an eighth of a byte) with each nucleotide added to a strand. Crucially, the process required skilled expertise to carry out.
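
For a concrete sense of what one bit per nucleotide means, here is a hedged sketch in the spirit of those early synthesis schemes. The mapping (0 to A or C, 1 to G or T, with the spare choice used to avoid repeated bases) follows the general idea of Church’s encoding, but the details are illustrative rather than the exact published protocol.

```python
def bits_to_dna(bits: str) -> str:
    zero_bases, one_bases = ("A", "C"), ("G", "T")
    strand = []
    for bit in bits:
        options = zero_bases if bit == "0" else one_bases
        # Pick whichever base differs from the previous one, so the strand
        # never contains a homopolymer run, which is hard to synthesize.
        base = options[0] if (not strand or strand[-1] != options[0]) else options[1]
        strand.append(base)
    return "".join(strand)

def dna_to_bits(strand: str) -> str:
    return "".join("0" if base in "AC" else "1" for base in strand)

message = "0110100111"
assert dna_to_bits(bits_to_dna(message)) == message   # one bit per nucleotide
```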

The new method, published in Nature last week, is more efficient, storing 350 bits at a time by encoding strands in parallel. Rather than hand-threading each DNA strand, the team assembles strands from pre-built DNA bricks about 20 nucleotides long, encoding information by altering some and not others along the way. Peking University’s Long Qian and team got the idea for such templates from the way cells share the same basic set of genes but behave differently in response to chemical changes in DNA strands. “Every cell in our bodies has the same genome sequence, but genetic programming comes from modifications to DNA. If life can do this, we can do this,” she says. 

Qian and her colleagues encoded data through methylation, a chemical reaction that switches genes on and off by attaching a methyl compound—a small methane-related molecule. Once the bricks are locked into their assigned spots on the strand, researchers select which bricks to methylate, with the presence or absence of the modification standing in for binary values of 0 or 1. The information can then be deciphered using nanopore sequencers to detect whether a brick has been methylated. In theory, the new method is simple enough to be carried out without detailed knowledge of how to manipulate DNA.
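
Conceptually, the write and read steps reduce to choosing which positions on the fixed template receive the methyl mark and then detecting those marks again. The sketch below is a bare-bones illustration of that mapping, not the paper’s actual pipeline.

```python
# The strand is a fixed row of pre-built bricks; writing data means deciding
# which bricks get the methyl mark (1) and which stay unmodified (0).
def choose_bricks_to_methylate(bits: str) -> list[int]:
    """Return the template positions that should receive the enzyme."""
    return [position for position, bit in enumerate(bits) if bit == "1"]

def read_back(methylated_positions: list[int], strand_length: int) -> str:
    """Reading reduces to detecting which bricks carry the methyl mark."""
    marks = set(methylated_positions)
    return "".join("1" if p in marks else "0" for p in range(strand_length))

message = "1011001"
positions = choose_bricks_to_methylate(message)     # e.g. [0, 2, 3, 6]
assert read_back(positions, len(message)) == message
```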

The storage capacity of each DNA strand caps off at roughly 70 bits. For larger files, researchers splintered data into multiple strands identified by unique barcodes encoded in the bricks. The strands were then read simultaneously and sequenced according to their barcodes. With this technique, researchers encoded the image of a tiger rubbing from the Han dynasty, troubleshooting the encoding process until the image came back with no errors. The same process worked for more complex images, like a photorealistic print of a panda. 
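
The splitting itself is straightforward bookkeeping: carve the bitstream into payload-sized pieces and prefix each with its index so the strands can be sequenced in any order and still reassembled. A minimal sketch follows; the 16-bit barcode width is an assumption for illustration, not a figure from the paper.

```python
PAYLOAD_BITS = 70    # roughly the per-strand capacity described in the article
BARCODE_BITS = 16    # assumed barcode width, for illustration only

def split_into_strands(bits: str) -> list[str]:
    strands = []
    for index, start in enumerate(range(0, len(bits), PAYLOAD_BITS)):
        barcode = format(index, f"0{BARCODE_BITS}b")
        strands.append(barcode + bits[start:start + PAYLOAD_BITS])
    return strands

def reassemble(strands: list[str]) -> str:
    # Sequencing returns strands in arbitrary order; the barcode restores it.
    ordered = sorted(strands, key=lambda s: int(s[:BARCODE_BITS], 2))
    return "".join(s[BARCODE_BITS:] for s in ordered)

data = "01" * 200                        # 400 bits of toy data
shuffled = split_into_strands(data)[::-1]
assert reassemble(shuffled) == data
```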

To gauge the real-world applicability of their approach, the team enlisted 60 students from diverse academic backgrounds—not just scientists—to encode any writing of their choice. The volunteers transcribed their writing into binary code through a web server. Then, with a kit sent by the team, they pipetted an enzyme into a 96-well plate of the DNA bricks, marking which would be methylated. The team then ran the samples through a sequencer to make the DNA strand. Once the computer received the sequence, researchers ran a decoding algorithm and sent the restored message back to a web server for students to retrieve with a password. The writing came back with a 1.4% error rate in letters, and the errors were eventually corrected through language models. 

Once it’s more thoroughly developed, Qian sees the technology becoming useful as long-term storage for archival information that isn’t accessed every day, like medical records, financial reports, or scientific data.  

The success nonscientists achieved using the technique in coding trials suggests that DNA storage could eventually become a practical technology. “Everyone is storing data every day, and so to compete with traditional data storage technologies, DNA methods need to be usable by the everyday person,” says Jeff Nivala, co-director of University of Washington’s Molecular Information Systems Lab. “This is still an early demonstration of going toward nonexperts, but I think it’s pretty unique that they’re able to do that.”

DNA storage still has many strides left to make before it can compete with traditional data storage. The new system is more expensive than either traditional data storage techniques or previous DNA-synthesis methods, Nivala says, though the encoding process could become more efficient with automation on a larger scale. With future development, template-based DNA storage might become a more secure method of tackling ever-climbing data demands. 

This lab robot mixes chemicals

Lab scientists spend much of their time doing laborious and repetitive tasks, be it pipetting liquid samples or running the same analyses over and over again. But what if they could simply tell a robot to do the experiments, analyze the data, and generate a report? 

Enter Organa, a benchtop robotic system devised by researchers at the University of Toronto that can perform chemistry experiments. In a paper posted on the arXiv preprint server, the team reported that the system could automate some chemistry lab tasks using a combination of computer vision and a large language model (LLM) that translates scientists’ verbal cues into an experimental pipeline. 

Imagine having a robot that can collaborate with a human scientist on a chemistry experiment, says Alán Aspuru-Guzik, a chemist, computer scientist, and materials scientist at the University of Toronto, who is one of the project’s leaders. Aspuru-Guzik’s vision is to elevate traditional lab automation to “eventually make an AI scientist,” one that can perform and troubleshoot an experiment and even offer feedback on the results. 

Aspuru-Guzik and his team designed Organa to be flexible. That means that instead of performing only one task or one part of an experiment as a typical fixed automation system would, it can perform a multistep experiment on cue. The system is also equipped with visualization tools that can monitor progress and provide feedback on how the experiment is going.  

“This is one of the early examples of showing how you can have a bidirectional conversation with an AI assistant for a robotic chemistry lab,” says Milad Abolhasani, a chemical and material engineer at North Carolina State University, who was not involved in the project. 

Most automated lab equipment is not easily customizable or reprogrammable to suit the chemists’ needs, says Florian Shkurti, a computer scientist at the University of Toronto and a co-leader of the project. And even if it is, the chemists would need to have programming skills. But with Organa, scientists can simply convey their experiments through speech. As scientists prompt the robot with their experimental objectives and setup, Organa’s LLM translates this natural-language instruction into χDL codes, a standard chemical description language. The algorithm breaks down the codes into steps and goals, with a road map to execute each task. If there is an ambiguous instruction or an unexpected outcome, it can flag the issue for the scientist to resolve.
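
At a high level, the control flow described here is: natural-language request in, structured step list out, then step-by-step execution with anything ambiguous flagged back to the scientist. The toy below is self-contained but entirely illustrative; fake_llm, the step format, and the function names are stand-ins, not Organa’s actual χDL interface.

```python
from dataclasses import dataclass

@dataclass
class Step:
    action: str
    target: str

def fake_llm(prompt: str) -> str:
    # Stand-in for the real model: returns one "action target" pair per line.
    return "measure_pH sample_A\nrecrystallize sample_A\nmeasure_voltage cell_1"

def plan_experiment(instruction: str) -> list[Step]:
    """Translate a natural-language request into an ordered list of lab steps."""
    raw = fake_llm(f"Convert to discrete lab steps: {instruction}")
    return [Step(*line.split()) for line in raw.splitlines()]

def run(steps: list[Step]) -> None:
    for step in steps:
        print(f"executing {step.action} on {step.target}")
        # A real system would check sensor feedback here and flag anything
        # unexpected back to the scientist instead of continuing blindly.

run(plan_experiment("Characterize the electrochemistry of my quinone sample"))
```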

About two-thirds of Organa’s hardware components are made from off-the-shelf parts, making it easier to replicate across laboratories, Aspuru-Guzik says. The robot has a camera detector that can identify both opaque objects and transparent ones, such as a chemical flask.

Organa’s first task was to characterize the electrochemical properties of quinones, the electroactive molecules used in rechargeable batteries. The experiment has 19 parallel steps, including routine chemistry steps such as pH and solubility tests, recrystallization, and an electrochemical measurement. It also involves a tedious electrode-precleaning step, which takes up to six hours. “Chemists really, really hate this,” says Shkurti.

Organa completed the 19-step experiment in about the same amount of time it would take a human—and with comparable results. While the efficiency was not noticeably better than in a manual run, the robot can be much more productive if it is run overnight. “We always get the advantage of it being able to work 24 hours,” Shkurti says. Abolhasani adds, “That’s going to save a lot of our highly trained scientists time that they can use to focus on thinking about the scientific problem, not doing these routine tasks in the lab.” 

Organa’s most sophisticated feature is perhaps its ability to provide feedback on generated data. “We were surprised to find that this visual language model can spot outliers on chemistry graphs,” explains Shkurti. The system also flags these ambiguities or uncertainties and suggests methods of troubleshooting. 

The group is now working on improving the LLM’s ability to plan tasks and then revise those plans to make the system more amenable to experimental uncertainties. 

“There’s a lot roboticists have to offer to scientists in order to amplify what they can do and get them better data,” Shkurti says. “I am really excited to try to create new possibilities.” 

Kristel Tjandra is a freelance science writer based in Oahu. 

Cloud transformation clears businesses for digital takeoff

In an age where customer experience can make or break a business, Cathay Pacific is embracing cloud transformation to enhance service delivery and revolutionize operations from the inside out. It’s not just technology companies that are facing pressure to deliver better customer service, do more with data, and improve agility. An almost 80-year-old airline, Cathay Pacific embarked on its digital transformation journey in 2014, spurred by a critical IT disruption that became the catalyst for revamping its technology.

By embracing the cloud, the airline has not only streamlined operations but also paved the way for innovative solutions like DevSecOps and AI integration. This shift has enabled Cathay to deliver faster, more reliable services to both passengers and staff, while maintaining a robust security framework in an increasingly digital world. 

According to Rajeev Nair, general manager of IT infrastructure and security at Cathay Pacific, becoming a digital-first airline was met with early resistance from both business and technical teams. The early stages required a lot of heavy lifting as they shifted legacy apps first from their server room to a dedicated data center and then to the cloud. From there began the modernization process that Cathay Pacific, now in the final stages of this transformation, continues to fine-tune.

The cloud migration also helped Cathay align with its ESG goals. “Two years ago, if you asked me what IT could do for sustainability, we would’ve been clueless,” says Nair. However, through cloud-first strategies and green IT practices, the airline has made notable strides in reducing its carbon footprint. Currently, the business is in the process of moving to a smaller data center, reducing physical infrastructure and its carbon emissions significantly by 2025.

The broader benefits of this cloud transformation for Cathay Pacific go beyond sustainability. Agility, time-to-market, and operational efficiency have improved drastically. “If you ask many of the enterprises, they would probably say that shifting to the cloud is all about cost-saving,” says Nair. “But for me, those are secondary aspects and the key is about how to enable the business to be more agile and nimble so that the business capability could be delivered much faster by IT and the technology team.”

By 2025, Cathay Pacific aims to have 100% of its business applications running on the cloud, significantly enhancing its agility, customer service, and cost efficiency, says Nair.

As Cathay Pacific continues its digital evolution, Nair remains focused on future-proofing the airline through emerging technologies. Looking ahead, he is particularly excited about the potential of AI, generative AI, and virtual reality to further enhance both customer experience and internal operations. From more immersive VR-based training for cabin crew to enabling passengers to preview in-flight products before boarding, these innovations are set to redefine how the airline engages with its customers and staff. 

“We have been exploring that for quite some time, but we believe that it will continue to be a mainstream technology that can change the way we serve the customer,” says Nair.

This episode of Business Lab is produced in association with Infosys Cobalt.

Full Transcript 

Megan Tatum: From MIT Technology Review, I’m Megan Tatum, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. 

Our topic today is cloud transformation to meet business goals and customer needs. It’s not just tech companies that have to stay one step ahead. Airlines too are under pressure to deliver better customer service, do more with data, and improve agility. 

Two words for you: going further. 

My guest is Rajeev Nair, who is the general manager of IT infrastructure and security at Cathay Pacific. This podcast is produced in association with Infosys Cobalt. Welcome, Rajeev. 

Rajeev Nair: Thank you. Thank you, Megan. Thank you for having me. 

Megan: Thank you ever so much for joining us. Now to get some context for our conversation today, could you first describe how Cathay Pacific’s digital transformation journey began, and explain, I guess, what stage of this transformation this almost 80-year-old airline is currently in, too? 

Rajeev: Sure, definitely Megan. So for Cathay, we started this transformation journey probably a decade back, way back in 2014. It all started with facing some major service disruption within Cathay IT where it had a massive impact on the business operation. That prompted us to trigger and initiate this transformation journey. So the first thing is we started looking at many of our legacy applications. Back in those days we still had mainframe systems that provided so many of our critical services. We started looking at migrating those legacy apps first, moving them outside of that legacy software and moving them into a proper data center. Back in those days, our data center used to be our corporate headquarters. We didn’t have a dedicated data center and it used to be in a server room. So those were the initial stages of our transformation journey, just a basic building block. So we started moving into a proper data center so that resilience and availability could be improved. 

And as a second phase, we started looking at the cloud. Those days, cloud was just about to kick off in this part of the world. We started looking at migrating to the cloud and it has been a huge challenge or resistance even from the business as well as from the technology team. Once we started moving, shifting apps to the cloud, we had multiple transformation programs to do that modernization activities. Once that is done, then the third phase of the journey is more about your network. Once your applications are moved to the cloud, your network design needs to be completely changed. Then we started looking at how we could modernize our network because Cathay operates in about 180 regions across the world. So our network is very crucial for us. We started looking at redesigning our network. 

And then, it comes to your security aspects. With things moving to the cloud and your network design getting changed, your cybersecurity needs heavy lifting to accommodate the modern world. We started focusing on cybersecurity initiatives where our security posture has been improved a lot over the last few years. And with those basic building blocks done on the hardware and on the technology side, then comes your IT operations. Because one is your hardware and software piece, but how do you sustain your processes to ensure that it can support those changing technology landscapes? We started investing a lot around the IT operations side, where things like ITIL processes have been revisited. We started adopting many of the DevOps and the DevSecOps practices. So a lot of emphasis around processes and practices to help the team move forward, right? 

And those operations initiatives are in phase. As we stand today, we are at the final stage of our cloud journey where we are looking at how we can optimize it better. So we shifted things to the cloud and that has been a heavy lifting that has been done in the early phases. Now we are focusing around how we can rewrite or refactor your application so that it can better leverage your cloud technologies where we could optimize the performance, thereby optimizing your usage and the cloud resources wherein you could save on the cost as well as on the sustainability aspect. That is where we stand. By 2025, we are looking at moving 100% of our business applications to the cloud and also reducing our physical footprint in our data centers as well. 

Megan: Fantastic. And you mentioned sustainability there. I wonder how does the focus on environmental, social, and governance goals or ESG tie into your wider technology strategy? 

Rajeev: Sure. And to be very honest, Megan, if you asked me this question two years back, we would’ve been clueless on what IT could do from a sustainability aspect. But over the last two years, there has been a lot of focus around ESG components within the technology space where we have done a lot of initiatives since last year to improve and be efficient on the sustainability front. So a couple of key areas that we have done. One is definitely the cloud-first strategy where adopting the cloud-first policy reduces your carbon footprint and it also helps us in migrating away from our data center. So as we speak, we are doing a major project to further reduce our data center size by relocating to a much smaller data center, which will be completed by the end of next year. That will definitely help us to reduce our footprint. 

The second is around adopting the various green IT practices, things like energy efficient devices, be it your PCs or the laptop or virtualizations, and e-based management policies and management aspects. Some of the things are very basic and fundamental in nature. Stuff like we moved away from a dual monitor to a single monitor wherein we could reduce your energy consumption by half, or changing some of your software policies like screen timeouts and putting a monitor in standby. Those kinds of basic things really helped us to optimize and manage. And the last one is around FinOps. So FinOps is a process in the practice that is being heavily adopted in the cloud organization, but it is just not about optimizing your cost because by adopting the FinOps practices and tying in with the GreenOps processes, we are able to focus a lot around reducing our CO2 footprint and optimizing sustainability. Those are some of the practices that we have been doing with Cathay. 

Megan: Yeah, fantastic benefits from relatively small changes there. Other than ESG, what are the other benefits for an enterprise like Cathay Pacific in terms of shifting from those legacy systems to the cloud that you found? 

Rajeev: For me, the key is about agility and time-to-market capability. If you ask many of the enterprises, they would probably say that shifting to the cloud is all about cost-saving. But for me, those are secondary aspects. The key is about how to enable the business to be more agile and nimble so that the business capability can be delivered much faster by IT and the technology team. So as an example, gone are the days when we take about a few months before we provision hardware and have the platform and the applications ready. Now the platforms are being delivered to the developers within an hour’s time so that the developers can quickly build their development environment and be ready for development and testing activities. Right? So agility is a key and the number one factor. 

The second is by shifting to the cloud, you’re also leveraging many of the latest technologies that the cloud comes up with and the provider has to offer. Things like capacity and the ability to scale up and down your resources and services according to your business needs and fluctuations are a huge help from a technology aspect. That way you can deliver customer-centered solutions faster and more efficiently than many of our airline customers and competitors. 

And the last one is, of course, your cost saving aspect and the operational efficiency. By moving away from the legacy systems, we can reduce a lot of capex [capital expenditure]. Like, say for example, I don’t need to spend money on investing in hardware and spend resources to manage those hardware and data center operations, especially in Hong Kong where human resources are pretty expensive and scarce to find. It is very important that I rely on these sorts of technologies to manage those optimally. Those are some of the key aspects that we see from a cloud adoption perspective. 

Megan: Fantastic. And it sounds like it’s been a several-year process so far. So after what sounds like pretty heavy investment in moving legacy hardware and on-prem systems to the cloud, what’s your approach now to adapting your IT operations off the back of that? 

Rajeev: Exactly. That is, sort of, just based on the early part of my transformation journey, but yeah, absolutely. By moving to the cloud, it is just not about the hardware, but it’s also about how your operations and your processes align with this changing technology and new capabilities. And, for example, by adopting a more agile and scalable approach to managing IT infrastructures and applications as well. Also leveraging the data and insights that the cloud enables. To achieve this, the fundamental aspect of this is how you can revisit and fine-tune your IT service management processes, and that is where the core of IT operations has been built in the past. And to manage that properly we recently, I think, over the last three years we were looking at implementing a new IT service management solution, which is built on a product called ServiceNow. So they are built on the core ITIL processes framework to help us manage the service management, the operations management, and asset management. 

Those are some of the capabilities which we rolled out with the help of our partners like Infosys so that it could provide a framework to fine tune and optimize IT processes. And we also adopted things like DevOps and DevSecOps because what we have also noticed is the processes like ITIL, which was very heavy over the last few years around support activities is also shifting. So we wanted to adopt some of these development practices into the support and operations functions to be more agile by shifting left some of these capabilities. And in this journey, Infosys has been our key partner, not only on the cloud transformation side, but also on implementation of ServiceNow, which is our key service management tool where they provided us end-to-end support starting from the planning phase or the initial conceptual phase and also into the design and development and also to the deployment and maintenance. We haven’t completed this journey and it’s still a project that is currently ongoing, and by 2025 we should be able to complete this successfully across the enterprise. 

Megan: Fascinating. It’s an awful lot of change going on. I mean, there must be an internal shift, therefore, that comes with cloud transformation too, I imagine. I wonder, what’s your approach been to upskilling your team to help it excel in this new way of working? 

Rajeev: Yeah, absolutely. And that is always the hardest part. You can change your technology and processes, but changing your people, that’s always the toughest and hardest bit. And essentially this is all about change management, and that has been one of our struggles in our early part of the cloud transformation journey. What we did is we invested a lot in terms of uplifting our traditional infrastructure team. All the traditional technology teams have to go through that learning curve in adopting cloud technology early in our project. And we also provided a lot of training programs, and some of our cloud partners were able to upskill and train these resources. 

But the key differences that we are seeing is even after providing all those training and upskilling programs, we could see that there was a lot of resistance and a lot of doubts in people’s mind about how cloud is going to help the organization. And the best part is what we did is we included these team members into our project so that they get the hands-on experience. And once they start seeing the benefits around these technologies, there was no looking back. And the team was able to completely embrace the cloud technologies to the point that we still have a traditional technology team who’s supporting the remaining hardware and the servers of the world, but they’re also very keen to shift across the line and adopt and embrace the cloud technology. But it’s been quite a journey for us. 

Megan: That’s great to hear that you’ve managed to bring them along with you. And I suppose it’d be remiss of me if we’re talking about embracing new technologies not to talk about AI, although still in its early stages in most industries. I wonder how is Cathay Pacific approaching AI adoption as well? 

Rajeev: Sure. I think these days none of these conversations can be complete without talking about AI and gen AI. We started this early exploratory phase early into the game, especially in this part of the world. But for us, the key is approaching this based on the customer’s pain points and business needs and then we work backward to identify what type of AI is most suitable or relevant to us. In Cathay, currently, we focus on three main types of AI. One is of course conversational AI. Essentially, it is a form of an internal and external chatbot. Our chatbot, we call it Vera, serves customers directly and can handle about 50% of the inquiries successfully. And just about two weeks back, we upgraded the chatbot’s LLM to a new model, which is able to be more efficient and much more responsive in terms of the human work. So that’s one part of the AI that we heavily invested on. 

Second is RPA, or robotic process automation, especially what you’re seeing is during the pandemic and post-Covid era, there are limited resources available, especially in Hong Kong and across our supply chain. So RPA or the robotic processes helps to automate mundane repetitive tasks, which doesn’t only fill the resource gap, but it also directly enhances the employee experience. And so far in Cathay, we have about a hundred bots in production serving various business units and saving approximately 30,000 hours of human activity every year. So that’s the second part. 

The third one is around ML and it’s the gen AI. So like our digital team or the data science team has developed about 70-plus ML models in Cathay that turned the organization data into insights or actionable items. These models help us to make a better decision. For example, what meals to be loaded into the aircraft and specific routes, in terms of what quantity and what kind of product offers we promote to customers, and including the fare loading and the pricing of our passenger as well as a cargo bay space. There is a lot of exploration that is being done in this space as well. And a couple of examples I could relate is if you ever happen to come to Hong Kong, next time at the airport, you could hear the public announcement system and that is also AI-powered recently. In the past, our staff used to manually make those announcements and now it has been moved away and has been moved into AI-powered voice technology so that we could be consistent in our announcement. 

Megan: Oh, fantastic. I’ll have to listen for it next time I’m at Hong Kong airport. And you’ve mentioned this topic a couple of times in the conversation. Look, when we’re talking about cloud modernization, cybersecurity can be a roadblock to agility, I guess, if it’s not managed effectively. So could you also tell us in a little more detail how Cathay Pacific has integrated security into its digital transformation journey, particularly with the adoption of development security operations practices that you’ve mentioned? 

Rajeev: Yeah, this is an interesting one. I look after cybersecurity as well as the infrastructure services. With both of these critical functions in my hands, I need to be mindful of both aspects, right? Yes, it’s an interesting one and it has changed over the period of time, and I fully understand why cybersecurity practices need to be rigid because there is a lot of compliance and it is a highly regulated function, but if something goes wrong, as a CISO we are held accountable for those faults. I can understand why the team is so rigid in their practices. And I also understand from a business perspective it could be perceived as a roadblock to agility. 

One of the key aspects that we have done in Cathay is we have been following DevOps for quite a number of years, and recently, I think in the last two years, we started implementing DevSecOps into our STLC [software testing life cycle]. And what it essentially means is rather than the core cybersecurity team being responsible for many of the security testing and those sorts of aspects, we want to shift left some of these capabilities into the developers so that the people who develop the code now are held accountable for the testing and the quality of the output. And they’re also enabled in terms of the cybersecurity process. Right? 

Of course, when we started off this journey, there was a huge resistance from the security team itself because they don’t really trust the developers trying to do the testing or the testing outputs. But over a period of time with the introduction of various tools and automation that is put in place, this is now getting into a mature stage wherein it is now enabling the upfront teams to take care of all the aspects of security, like threat modeling, code scanning, and the vulnerability testing. But at the end, the security teams would be still validating and act as a sort of a gatekeeper, but through very light, inbuilt processes. And this way we can ensure that our cloud applications are secure by design and by default, and we can deliver them faster and more reliably to our customers. And in this entire process, right? 

In the past, security has been always perceived as an accountability of the cybersecurity team. And by enabling the developers of the security aspects, now you have a better ownership in the organization when it comes to cybersecurity and it is building a better cybersecurity culture within the organization. And that, to me, is a key because from a security aspect, we always say that people are your first line of defense and often they’re also the last line of defense. I’m glad that by these processes we are able to improve that maturity in the organization. 

Megan: Absolutely. And you mentioned that obviously cybersecurity is something that’s really important to a lot of customers nowadays as well. I wondered if you could offer some other examples too of how your digital transformation has improved that customer experience in other ways? 

Rajeev: Yeah, definitely. Maybe I can quote a few examples, Megan. One is around our pilots. You would’ve seen when you travel through the airport or in the aircraft that pilots usually carry a briefcase when they board the flight, and you have probably wondered what exactly they carry. Basically, that contains a bunch of papers. It contains your weather charts, your navigation routes, and the flight plans, the crew details. It’s a whole stack of paper that they have to carry on each and every flight. And in Cathay, by digitization, we have automated that in their processes, where now they carry an iPad instead of a bunch of papers or a briefing pack. So that iPad includes all the software required for the captain to operate the flight in a legal and safe manner. 

Paperless cockpit operation is nothing new. Many airlines have attempted to do that, but I should say that Cathay has been at the forefront in truly establishing a paperless operation, where many of the other airlines have shown great interest in using our software. That is one aspect from a flight crew perspective. Second, from a customer perspective, we have an app called Customer 360, which is a completely in-house developed model, which has all the customer direct transactions, surveys, or how they interact at the various checkpoints with our crew or at the boarding. You have all this data feed of a particular customer where our agents or the cabin crew can understand the customer’s sentiment and their reaction to service recovery action. 

Say for example, the customer calls up a call center and asks for a refund or miles compensation. Based on the historical usage, we could prioritize the best action to improve the customer satisfaction. We are connected to all these models and enable the frontline teams so that they can use this when they engage with the customer. As an example, at the airport our agents will be able to see a lot of useful insights about the customers beyond the basic information like the flight itinerary or the online shopping history at the Cathay shop, et cetera, so that they can see the overall satisfaction level and get additional insights on recommended actions to restore or improve the customer satisfaction level. This is basically used by our frontline agents at the airport, our cabin crew as well as all the airport team, and the customer team so that they have great consistency in the service no matter what touchpoint the customers are choosing to contact us. 

Megan: Fantastic. 

Rajeev: So these are a few examples, looking from a back-end as well as a frontline team perspective. 

Megan: Yeah, absolutely. I’m sure there’s a few people listening who were wondering what pilots carry in that suitcase. So thank you so much for clearing that up. And finally, Rajeev, I guess looking ahead, what emerging technologies are you excited to explore further going forward to enhance digital capabilities and customer experience in the years to come? 

Rajeev: Yeah, so we will continue to explore AI and gen AI capability, which has been in the spotlight for the last 18 months or so, be it for the passenger or even for the staff internally. We will continue to explore that. But apart from AI, one other aspect I believe could go a great way is around the AR and the VR capabilities, basically virtual reality. We have been exploring that for quite some time, but we believe that it will continue to be a mainstream technology that can change the way we serve the customer. Say for example, in Cathay, we already have a VR cave for our cabin crew training, virtual reality capabilities, and in a few months’ time, we are actually launching a learning facility based on VR where we will be able to provide a more immersive learning experience for the cabin crew and later for the other employees. 

Basically, before a cabin crew is able to operate a flight, they go through a rigorous training in Cathay City in our headquarters, basically to know how to serve our passengers, how to handle an emergency situation, those sorts of aspects. And in many cases, we travel the crew from various outports or various countries back into Hong Kong to train them and equip them for these training activities. You can imagine that costs us a lot of money and effort to bring all the people back to Hong Kong. And by having VR capabilities, we are able to do that anywhere in the world without having that physical presence. That’s one area where it’ll go mainstream. 

The second is around other business units. Apart from the cabin crew, we are also experimenting with VR on the customer front. For example, we are launching a new business class seat product, which we call the Aria Suite, by next year. And VR technology will help the customers to visualize the seat details without them having to get on board. So without them flying, even before that, they’re able to experience a product on the ground. At our physical shop in Hong Kong, customers can now use virtual reality technology to visualize how our designer furniture and lifestyle products fit in their sitting rooms. The list of VR capabilities goes on and on. And this is also a great and important way to engage with our customers in particular. 

Megan: Wow. Sounds like some exciting stuff on the way. Thank you ever so much, Rajeev, for talking us through that. That was Rajeev Nair, the general manager of IT infrastructure and security at Cathay Pacific, who I spoke with from an unexpectedly sunny Brighton, England.

That’s it for this episode of Business Lab. I’m your host, Megan Tatum. I’m a contributing editor and host for Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com. 

This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Giro Studios. Thanks for listening. 

Google says it’s made a quantum computing breakthrough that reduces errors

Google researchers claim to have made a breakthrough in quantum error correction, one that could pave the way for quantum computers that finally live up to the technology’s promise.

Proponents of quantum computers say the machines will be able to benefit scientific discovery in fields ranging from particle physics to drug and materials design—if only their builders can make the hardware behave as intended. 

One major challenge has been that quantum computers can store or manipulate information incorrectly, preventing them from executing algorithms that are long enough to be useful. The new research from Google Quantum AI and its academic collaborators demonstrates that they can actually add components to reduce these errors. Previously, because of limitations in engineering, adding more components to the quantum computer tended to introduce more errors. Ultimately, the work bolsters the idea that error correction is a viable strategy toward building a useful quantum computer. Some critics had doubted that it was an effective approach, according to physicist Kenneth Brown of Duke University, who was not involved in the research. 

“This error correction stuff really works, and I think it’s only going to get better,” wrote Michael Newman, a member of the Google team, on X. (Google, which posted the research to the preprint server arXiv in August, declined to comment on the record for this story.) 

Quantum computers encode data using objects that behave according to the principles of quantum mechanics. In particular, they store information not only as 1s and 0s, as a conventional computer does, but also in “superpositions” of 1 and 0. Storing information in the form of these superpositions and manipulating their value using quantum interactions such as entanglement (a way for particles to be connected even over long distances) allows for entirely new types of algorithms.

In practice, however, developers of quantum computers have found that errors quickly creep in because the components are so sensitive. A quantum computer represents 1, 0, or a superposition by putting one of its components in a particular physical state, and it is too easy to accidentally alter those states. A component then ends up in a physical state that does not correspond to the information it’s supposed to represent. These errors accumulate over time, which means that the quantum computer cannot deliver accurate answers for long algorithms without error correction.

To perform error correction, researchers must encode information in the quantum computer in a distinctive way. Quantum computers are made of individual components known as physical qubits, which can be built from a variety of physical systems, such as single atoms or ions. In Google’s case, each physical qubit consists of a tiny superconducting circuit that must be kept at an extremely cold temperature. 

Early experiments on quantum computers stored each unit of information in a single physical qubit. Now researchers, including Google’s team, have begun experimenting with encoding each unit of information in multiple physical qubits. They refer to this constellation of physical qubits as a single “logical” qubit, which can represent 1, 0, or a superposition of the two. By design, a logical qubit can hold onto a unit of information more robustly than a single physical qubit can. Google’s team corrects the errors in the logical qubit using an error-correction scheme known as the surface code, which makes use of the logical qubit’s constituent physical qubits.

In the new work, Google made a single logical qubit out of varying numbers of physical qubits. Crucially, the researchers demonstrated that a logical qubit composed of 105 physical qubits suppressed errors more effectively than a logical qubit composed of 72 physical qubits. That suggests that putting increasing numbers of physical qubits together into a logical qubit “can really suppress the errors,” says Brown. This charts a potential path to building a quantum computer with a low enough error rate to perform a useful algorithm, although the researchers have yet to demonstrate that they can put multiple logical qubits together and scale up to a larger machine. 
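To make that scaling intuition concrete, here is a minimal, illustrative sketch in Python. It assumes the textbook surface-code heuristic, in which the logical error rate falls roughly as (p/p_th)^((d+1)/2) once the physical error rate p sits below the threshold p_th; the threshold, prefactor, and error-rate values below are illustrative assumptions, not figures reported by Google.

```python
# Illustrative only: why adding physical qubits can suppress logical errors
# once the hardware is below the error-correction threshold.
# Heuristic scaling for a distance-d surface code:
#   p_logical ~ A * (p_physical / p_threshold) ** ((d + 1) / 2)

def logical_error_rate(p_physical, distance, p_threshold=0.01, prefactor=0.1):
    """Rough surface-code estimate; every constant here is a placeholder."""
    return prefactor * (p_physical / p_threshold) ** ((distance + 1) / 2)

if __name__ == "__main__":
    p = 0.003  # assumed physical error rate, below the ~1% threshold
    for d in (3, 5, 7):
        # A distance-d surface code uses on the order of 2 * d**2 physical qubits.
        n_qubits = 2 * d * d - 1
        print(f"d={d} (~{n_qubits} physical qubits): "
              f"logical error rate ≈ {logical_error_rate(p, d):.1e}")
```

The point is qualitative: below threshold, each step up in code distance, and hence in physical-qubit count, multiplies the error suppression instead of adding net noise, which is the behavior Google reports.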

The researchers also report that the lifetime of the logical qubit exceeds the lifetime of its best constituent physical qubit by a factor of 2.4. Put another way, Google’s work essentially demonstrates that it can store data in a reliable quantum “memory.”

However, this demonstration is just a first step toward an error-corrected quantum computer, says Jay Gambetta, the vice president of IBM’s quantum initiative. He points out that while Google has demonstrated a more robust quantum memory, it has not performed any logical operations on the information stored in that memory. 

“At the end of the day, what matters is: How big of a quantum circuit could you run?” he says. (A “quantum circuit” is a series of logic operations executed on a quantum computer.) “And do you have a path to show how you’re going to run bigger and bigger quantum circuits?”

IBM, whose quantum computers are also composed of qubits made of superconducting circuits, is taking an error correction approach that’s different from Google’s surface code method. It believes this approach, known as low-density parity-check code, will be easier to scale, with each logical qubit requiring fewer physical qubits to achieve comparable error suppression rates. By 2026, IBM intends to demonstrate that it can make 12 logical qubits out of 244 physical qubits, roughly 20 physical qubits per logical qubit, says Gambetta.

Other researchers are exploring other promising approaches, too. Instead of superconducting circuits, a team affiliated with the Boston-based quantum computing company QuEra uses neutral atoms as physical qubits. Earlier this year, it published in Nature a study showing that it had executed algorithms using up to 48 logical qubits made of rubidium atoms.

Gambetta cautions researchers to be patient and not to overhype the progress. “I just don’t want the field to think error correction is done,” he says. Hardware development simply takes a long time because the cycle of designing, building, and troubleshooting is time consuming, especially when compared with software development. “I don’t think it’s unique to quantum,” he says. 

To execute algorithms with guaranteed practical utility, a quantum computer needs to perform around a billion logical operations, says Brown. “And no one’s near a billion operations yet,” he says. Another milestone would be to create a quantum computer with 100 logical qubits, which QuEra has set as a goal for 2026. A quantum computer of that size would be capable of simulations beyond the reach of classical computers. Google scientists have made a single high-quality logical qubit—but the next step is to show that they can actually do something with it.

Integrating security from code to cloud

The Human Genome Project, SpaceX’s rocket technology, and Tesla’s Autopilot system may seem worlds apart in form and function, but they all share a common characteristic: the use of open-source software (OSS) to drive innovation.

Offering publicly accessible code that can be viewed, modified, and distributed freely, OSS boosts developer productivity and creates a collaborative space for groundbreaking advancements.

“Open source is critical,” says David Harmon, director of software engineering for AMD. “It provides an environment of collaboration and technical advancements. Savvy users can look at the code themselves; they can evaluate it; they can review it and know that the code that they’re getting is legit and functional for what they’re trying to do.”

But OSS can also compromise an organization’s security posture by introducing hidden vulnerabilities that fall under the radar of busy IT teams, especially as cyberattacks targeting open source are on the rise. OSS may contain weaknesses, for example, that can be exploited to gain unauthorized access to confidential systems or networks. Bad actors can even intentionally plant “backdoors” in OSS, hidden entry points that can later be used to compromise the systems of organizations that adopt it. 

“Open source is an enabler to productivity and collaboration, but it also presents security challenges,” says Vlad Korsunsky, corporate vice president of cloud and enterprise security for Microsoft. Part of the problem is that open source introduces into the organization code that can be hard to verify and difficult to trace. Organizations often don’t know who made changes to open-source code or the intent of those changes, factors that can increase a company’s attack surface.

Complicating matters is that OSS’s increasing popularity coincides with the rise of cloud and its own set of security challenges. Cloud-native applications that run on OSS, such as Linux, deliver significant benefits, including greater flexibility, faster release of new software features, effortless infrastructure management, and increased resiliency. But they also can create blind spots in an organization’s security posture, or worse, burden busy development and security teams with constant threat signals and never-ending to-do lists of security improvements.

“When you move into the cloud, a lot of the threat models completely change,” says Harmon. “The performance aspects of things are still relevant, but the security aspects are way more relevant. No CTO wants to be in the headlines associated with breaches.”

Staying out of the news, however, is becoming increasingly difficult: According to cloud company Flexera’s State of the Cloud 2024 survey, 89% of enterprises use multi-cloud environments. Cloud spend and security top respondents’ lists of cloud challenges. Security firm Tenable’s 2024 Cloud Security Outlook reported that 95% of its surveyed organizations suffered a cloud breach during the 18 months before the survey.

Code-to-cloud security

Until now, organizations have relied on security testing and analysis to examine an application’s output and identify security issues in need of repair. But these days, addressing a security threat requires more than simply seeing how it is configured at runtime. Rather, organizations must get to the root cause of the problem.

It’s a tall order that presents a balancing act for IT security teams, according to Korsunsky. “Even if you can establish that code-to-cloud connection, a security team may be reluctant to deploy a fix if they’re unsure of its potential impact on the business. For example, a fix could improve security but also derail some functionality of the application itself and negatively impact employee productivity,” he says.

Rather, to properly secure an application, says Korsunsky, IT security teams should collaborate with developers and application security teams to better understand the software they’re working with and to determine the impacts of applying security fixes.

Fortunately, a code-to-cloud security platform with comprehensive cloud-native security can help by identifying and stopping software vulnerabilities at the root. Code-to-cloud creates a pipeline between code repositories and cloud deployment, linking how the application was written to how it performs—“connecting the things that you see in runtime to where they’re developed and how they’re deployed,” says Korsunsky.

The result is a more collaborative and consolidated approach to security that enables security teams to identify a code’s owner and to work with that owner to make an application more secure. This ensures that security is not just an afterthought but a critical aspect of the entire software development lifecycle, from writing code to running it in the cloud.

Better yet, an IT security team can gain complete visibility into the security posture of preproduction application code across multi-pipeline and multi-cloud environments while, at the same time, preventing cloud misconfigurations from reaching production environments. Together, these proactive strategies not only keep risks from arising but also allow IT security teams to focus on critical emerging threats.

The path to security success

Making the most of a code-to-cloud security platform requires more than innovative tools. Establishing best practices in your organization can ensure a stronger, long-term security posture.

Create a comprehensive view of assets: Today’s organizations rely on a wide array of security tools to safeguard their digital assets. But these solutions must be consolidated into a single pane of glass to manage exposure of the various applications and resources that operate across an entire enterprise, including the cloud. “Companies can’t have separate solutions for separate environments, separate cloud, separate platforms,” warns Korsunsky. “At the end of the day, attackers don’t think in silos. They’re after the crown jewels of an enterprise and they’ll do whatever it takes to get those. They’ll move laterally across environments and clouds—that’s why companies need a consolidated approach.”

Take advantage of artificial intelligence (AI): Many IT security teams are overwhelmed with incidents that require immediate attention. That’s all the more reason for organizations to outsource straightforward security tasks to AI. “AI can sift through the noise so that organizations don’t have to deploy their best experts,” says Korsunsky. For instance, by leveraging its capabilities for comparing and distinguishing written texts and images, AI can be used as a copilot to detect phishing emails. After all, adds Korsunsky, “There isn’t much of an advantage for a human being to read long emails and try to determine whether or not they’re credible.” By taking over routine security tasks, AI frees employees to focus on more critical activities.

Find the start line: Every organization has a long list of assets to secure and vulnerabilities to fix. So where should they begin? “Protect your most critical assets by knowing where your most critical data is and what’s effectively exploitable,” recommends Korsunsky. This involves conducting a comprehensive inventory of a company’s assets and determining how their data interconnects and what dependencies they require.

Protect data in use: The Confidential Computing Consortium is a community, part of the Linux Foundation, focused on accelerating the adoption of confidential computing through open collaboration. Confidential computing can protect an organization’s most sensitive data during processing by performing computations in a hardware-based Trusted Execution Environment (TEE), such as Azure confidential virtual machines based on AMD EPYC CPUs. By encrypting data in memory in a TEE, organizations can ensure that their most sensitive data is only processed after a cloud environment has been verified, helping prevent data access by cloud providers, administrators, or unauthorized users.
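As a rough illustration of that pattern, and not of any specific Azure or AMD API, the sketch below shows the attestation-gated key release at the heart of confidential computing: the decryption key for sensitive data is handed out only after the hardware-backed environment proves it is running the expected code. All names, fields, and checks are hypothetical placeholders.

```python
# Hypothetical sketch of attestation-gated key release for a TEE workload.
# None of these names correspond to a real Azure or AMD SEV-SNP API.

from dataclasses import dataclass

@dataclass
class AttestationReport:
    hardware_rooted: bool  # stands in for "signed by the CPU vendor's key"
    measurement: str       # hash of the code actually loaded into the TEE

EXPECTED_MEASUREMENT = "sha256:..."  # known-good build hash (placeholder)

def verify_report(report: AttestationReport) -> bool:
    # In practice: check the vendor signature chain and the code measurement.
    return report.hardware_rooted and report.measurement == EXPECTED_MEASUREMENT

def release_data_key(report: AttestationReport) -> bytes:
    # The key-release service hands out the decryption key only after
    # attestation succeeds, so plaintext exists solely inside the verified TEE.
    if not verify_report(report):
        raise PermissionError("attestation failed; key withheld")
    return b"placeholder-data-encryption-key"

if __name__ == "__main__":
    report = AttestationReport(hardware_rooted=True, measurement=EXPECTED_MEASUREMENT)
    print("Key released to attested environment:", bool(release_data_key(report)))
```

In a real deployment the report would be produced by the hardware, signed by the CPU vendor, and checked by a managed attestation and key-management service; the sketch captures only the control flow that keeps cloud operators and other unauthorized parties away from the plaintext.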

A solution for the future

As Linux, OSS, and cloud-native applications continue to increase in popularity, so will the pressure on organizations to prioritize security. The good news is that a code-to-cloud approach to cloud security can empower organizations to get a head start on security—during the software development process—while providing valuable insight into an organization’s security posture and freeing security teams to focus on business-critical tasks.

Secure your Linux and open source workloads from code to cloud with Microsoft Azure and AMD. Learn more about Linux on Azure and Microsoft Security. 

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Architecting cloud data resilience

Cloud has become a given for most organizations: according to PwC’s 2023 cloud business survey, 78% of companies have adopted cloud in most or all parts of the business. These companies have migrated on-premises systems to the cloud seeking faster time to market, greater scalability, cost savings, and improved collaboration.

Yet while cloud adoption is widespread, research by McKinsey shows that companies’ concerns around the resiliency and reliability of cloud operations, coupled with an ever-evolving regulatory environment, are limiting their ability to derive full value from the cloud. As the value of a business’s data grows ever clearer, the stakes of making sure that data is resilient are heightened. Business leaders now justly fear that they might run afoul of mounting data regulations and compliance requirements, that bad actors might target their data in a ransomware attack, or that an operational disruption affecting their data might grind the entire business to a halt.

For all its competitive advantages, moving to the cloud presents unique challenges for data resilience. In fact, the qualities of cloud that make it so appealing to businesses—scalability, flexibility, and the ability to handle rapidly changing data—are the same ones that make it challenging to ensure the resilience of mission-critical applications and their data in the cloud.

“A widely held misconception is that the durability of the cloud automatically protects your data,” says Rick Underwood, CEO of Clumio, a backup and recovery solutions provider. “But a multitude of factors in cloud environments can still reach your data and wipe it out, maliciously encrypt it, or corrupt it.”

Complicating matters is that moving data to the cloud can lead to reduced data visibility, as individual teams begin creating their own instances and IT teams may not be able to see and track all the organization’s data. “When you make copies of your data for all of these different cloud services, it’s very hard to keep track of where your critical information goes and what needs to be compliant,” says Underwood. The result, he adds, is a “Wild West in terms of identifying, monitoring, and gaining overall visibility into your data in the cloud. And if you can’t see your data, you can’t protect it.”

The end of traditional backup architecture

Until recently, many companies relied on traditional backup architectures to protect their data. But the inability of these backup systems to handle vast volumes of cloud data—and scale to accommodate explosive data growth—is becoming increasingly evident, particularly to cloud-native enterprises. In addition to issues of data volume, many traditional backup systems are ill-equipped to handle the sheer variety and rate of change of today’s enterprise data.

In the early days of cloud, Steven Bong, founder and CEO of AuditFile, had difficulty finding a backup solution that could meet his company’s needs. AuditFile supplies audit software for certified public accountants (CPAs) and needed to protect their critical and sensitive audit work papers. “We had to back up our data somehow,” he says. “Since there weren’t any elegant solutions commercially available, we had a home-grown solution. It was transferring data, backing it up from different buckets, different regions. It was fragile. We were doing it all manually, and that was taking up a lot of time.”

Frederick Gagle, vice president of technology for BioPlus Specialty Pharmacy, notes that backup architectures that weren’t designed for cloud don’t address the unique features and differences of cloud platforms. “A lot of backup solutions,” he says, “started off being on-prem, local data backup solutions. They made some changes so they could work in the cloud, but they weren’t really designed with the cloud in mind, so a lot of features and capabilities aren’t native.”

Underwood agrees, saying, “Companies need a solution that’s natively architected to handle and track millions of data operations per hour. The only way they can accomplish that is by using a cloud-native architecture.”

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

AI’s growth needs the right interface

If you took a walk in Hayes Valley, San Francisco’s epicenter of AI froth, and asked the first dude-bro you saw wearing a puffer vest about the future of the interface, he’d probably say something about the movie Her, about chatty virtual assistants that will help you do everything from organize your email to book a trip to Coachella to sort your text messages.

Nonsense. Setting aside that Her (a still from the film is shown above) was about how technology manipulates us into a one-sided relationship, you’d have to be pudding-brained to believe that chatbots are the best way to use computers. The real opportunity is close, but it isn’t chatbots.

Instead, it’s computers built atop the visual interfaces we know, but which we can interact with more fluidly, through whatever combination of voice and touch is most natural. Crucially, this won’t just be a computer that we can use. It’ll also be a computer that empowers us to break and remake it, to whatever ends we want. 

Chatbots fail because they ignore a simple fact that’s sold 20 billion smartphones: For a computer to be useful, we need an easily absorbed mental model of both its capabilities and its limitations. The smartphone’s victory was built on the graphical user interface, which revolutionized how we use computers—and how many computers we use!—because it made it easy to understand what a computer could do. There was no mystery. In a blink, you saw the icons and learned without realizing it.

Today we take the GUI for granted. Meanwhile, chatbots can feel like magic, letting you say anything and get a reasonable-sounding response. But magic is also the power to mislead. Chatbots and open-ended conversational systems are doomed as general-purpose interfaces because while they may seem able to understand anything, they can’t actually do everything. 

In that gap between anything and everything sits a teetering mound of misbegotten ideas and fatally hyped products.

“But dude, maybe a chatbot could help you book that flight to Coachella?” Sure. But could it switch your reservation when you have a problem? Could it ask you, in turn, which flight is best given your need to be back in Hayes Valley by Friday at 2? 

We take interactive features for granted because of the GUI’s genius. But with a chatbot, you can never know up front where its abilities begin and end. Yes, the list of things they can do is growing every day. But how do you remember what does and doesn’t work, or what’s supposed to work soon? And how are you supposed to constantly update your mental model as those capabilities grow?

If you’ve ever used a digital assistant or smart speaker, you already know that mismatched expectations create products we’ll never use to their full potential. When you first tried one, you probably asked it to do whatever you could think of. Some things worked; most didn’t. So you eventually settled on asking for just the few things you could remember that always worked: timers and music. LLMs, when used as primary interfaces, re-create the trouble that arises when your mental model isn’t quite right. 

Chatbots have their uses and their users. But their usefulness is still capped because they are open-ended computer interfaces that challenge you to figure them out through trial and error. Instead, we need to combine the ease of natural-language input with machines that will simply show us what they are capable of. 

For example, imagine if, instead of stumbling around trying to talk to the smart devices in your home like a doofus, you could simply look at something with your smart glasses (or whatever) and see a right-click for the real world, giving you a menu of what you can control in all the devices that increasingly surround us. It won’t be a voice that tells you what’s possible—it’ll be an old-fashioned computer screen, and an old-fashioned GUI, which you can operate with your voice or with your hands, or both in combination if you want.

But that’s still not the big opportunity! 

I think the future interface we want is made from computers and apps that work in ways similar to the phones and laptops we have now—but that we can remake to suit whatever uses we want. Compare this with the world we have now: If you don’t like your hotel app, you can’t make a new one. If you don’t want all the bloatware in your banking app, tough luck. We’re surrounded by apps that are nominally tools. But unlike any tool previously known to man, these are tools that serve only the purpose that someone else defined for them. Why shouldn’t we be able to not merely consume technology, like the gelatinous former Earthlings in Wall-E, but instead architect technology to suit our own ends?

That world seemed close in the 1970s, to Steve Wozniak and the Homebrew Computer Club. It seemed to approach again in the 1990s, with the World Wide Web. But today, the imbalance between people who own computers and people who remake them has never been greater. We, the heirs of the original tool-using primates, have been reduced from wielders of those tools to passive consumers of technology delivered in slick buttons we can use but never change. This runs against what it is to be Homo sapiens, a species defined by our love and instinct for repurposing tools to whatever ends we like.

Imagine if you didn’t have to accept the features some tech genius announced on a wave of hype. Imagine if, instead of downloading some app someone else built, you could describe the app you wanted and then make it with a computer’s help, by reassembling features from any other apps ever created. Comp sci geeks call this notion of recombining capabilities “composability.” I think the future is composability—but composability that anyone can command. 

This idea is already lurching to life. Notion—originally meant as enterprise software that let you collect and create various docs in one place—has exploded with Gen Z, because unlike most software, which serves only a narrow or rigid purpose, it allows you to make and share templates for how to do things of all kinds. You can manage your finances or build a kindergarten lesson plan in one place, with whatever tools you need. 

Now imagine if you could tell your phone what kinds of new templates you want. An LLM can already assemble all the things you need and draw the right interface for them. Want a how-to app about knitting? Sure. Or your own guide to New York City? Done. That computer will probably be using an LLM to assemble these apps. Great. That just means that you, as a normie, can inspect and tinker with the prompt powering the software you just created, like a mechanic looking under the hood.

One day, hopefully soon, we’ll look back on this sad and weird era when our digital tools were both monolithic and ungovernable as a blip when technology conflicted with the human urge to constantly tinker with the world around us. And we’ll realize that the key to building a different relationship with technology was simply to give each of us power over how the interface of the future is designed. 

Cliff Kuang is a user-experience designer and the author of User Friendly: How the Hidden Rules of Design Are Changing the Way We Live, Work, and Play.

The author who listens to the sound of the cosmos

In 1983, while on a field recording assignment in Kenya, the musician and soundscape ecologist Bernie Krause noticed something remarkable. Lying in his tent late one night, listening to the calls of hyenas, tree frogs, elephants, and insects in the surrounding old-growth forest, Krause heard what seemed to be a kind of collective orchestra. Rather than a chaotic cacophony of nighttime noises, it was as if each animal was singing within a defined acoustic bandwidth, like living instruments in a larger sylvan ensemble. 

Unsure of whether this structured musicality was real or the invention of an exhausted mind, Krause analyzed his soundscape recordings on a spectrogram when he returned home. Sure enough, the insects occupied one frequency niche, the frogs another, and the mammals a completely separate one. Each group had claimed a unique part of the larger sonic spectrum, a fact that not only made communication easier, Krause surmised, but also helped convey important information about the health and history of the ecosystem.

A Book of Noises: Notes on the Auraculous
Caspar Henderson
UNIVERSITY OF CHICAGO PRESS, 2024

Krause describes his “niche hypothesis” in the 2012 book The Great Animal Orchestra, dubbing these symphonic soundscapes the “biophony”—his term for all the sounds generated by nonhuman organisms in a specific biome. Along with his colleague Stuart Gage from Michigan State University, he also coins two more terms—“anthropophony” and “geophony”—to describe sounds associated with humanity (think music, language, traffic jams, jetliners) and those originating from Earth’s natural processes (wind, waves, volcanoes, and thunder).

In A Book of Noises: Notes on the Auraculous, the Oxford-based writer and journalist Caspar Henderson makes an addition to Krause’s soundscape triumvirate: the “cosmophony,” or the sounds of the cosmos. Together, these four categories serve as the basis for a brief but fascinating tour through the nature of sound and music with 48 stops (in the form of short essays) that explore everything from human earworms to whale earwax.

We start, appropriately enough, with a bang. Sound, Henderson explains, is a pressure wave in a medium. The denser the medium, the faster it travels. For hundreds of thousands of years after the Big Bang, the universe was so dense that it trapped light but allowed sound to pass through it freely. As the primordial plasma of this infant universe cooled and expansion continued, matter collected along the ripples of these cosmic waves, which eventually became the loci for galaxies like our own. “The universe we see today is an echo of those early years,” Henderson writes, “and the waves help us measure [its] size.” 

The Big Bang may seem like a logical place to start a journey into sound, but cosmophony is actually an odd category to invent for a book about noise. After all, there’s not much of it in the vacuum of space. Henderson gets around this by keeping the section short and focusing more on how humans have historically thought about sound in the heavens. For example, there are two separate essays on our multicentury obsession with “the music of the spheres,” the idea that there exists a kind of ethereal harmony produced by the movements of heavenly objects.

Since matter matters when it comes to sound—there can be none of the latter without the former—we also get an otherworldly examination of what human voices would sound like on different terrestrial and gas planets in our solar system, as well as some creative efforts from musicians and scientists who have transmuted visual data from space into music and other forms of audio. These are fun and interesting forays, but it isn’t until the end of the equally short “Sounds of Earth” (geophony) section that readers start to get a sense of the “auraculousness”—ear-related wonder—Henderson references in the subtitle.

Judging by the quantity and variety of entries in the “biophony” and “anthropophony” sections, you get the impression Henderson himself might be more attuned to these particular wonders as well. You really can’t blame him. 

The sheer number of fascinating ways that sound is employed across the human and nonhuman animal kingdom is mind-boggling, and it’s in these final two sections of the book that Henderson’s prose and curatorial prowess really start to shine—or should I say sing. 

We learn, for example, about female frogs that have devised their own biological noise-canceling system to tune out the male croaks of other species; crickets that amplify their chirps by “chewing a hole in a leaf, sticking their heads through it, and using it as a megaphone”; elephants that listen and communicate with each other seismically; plants that react to the buzz of bees by increasing the concentration of sugar in their flowers’ nectar; and moths with tiny bumps on their exoskeletons that jam the high-frequency echolocation pulses bats use to hunt them. 

Henderson has a knack for crisp characterization (“Singing came from winging”) and vivid, playful descriptions (“Through [the cochlea], the booming and buzzing confusion of the world, all its voices and music, passes into the three pounds of wobbly blancmange inside the nutshell numbskulls that are our kingdoms of infinite space”). He also excels at injecting a sense of wonder into aspects of sound that many of us take for granted. 

In an essay about its power to heal, he marvels at ultrasound’s twin uses as a medical treatment and a method of examination. In addition to its kidney-stone-blasting and tumor-ablating powers, sound, Henderson says, can also be a literal window into our bodies. “It is, truly, an astonishing thing that our first glimpse of the greatest wonder and trial of our lives, parenthood, comes in the form of a fuzzy black and white smudge made from sound.”

While you can certainly quibble with some of the topical choices and their treatment in A Book of Noises, what you can’t argue with is the clear sense of awe that permeates almost every page. It’s an infectious and edifying kind of energy. So much so that by the time Henderson wraps up the book’s final essay, on silence, all you want to do is immerse yourself in more noise.

Singing in the key of sea

For the multiple generations who grew up watching his Academy Award–winning 1956 documentary film, The Silent World, Jacques-Yves Cousteau’s mischaracterization of the ocean as a place largely devoid of sound seems to have calcified into common knowledge. The science writer Amorina Kingdon offers a thorough and convincing rebuttal of this idea in her new book, Sing Like Fish: How Sound Rules Life Under Water.

Sing Like Fish: How Sound Rules Life Under Water
Amorina Kingdon
CROWN, 2024

Beyond serving as a 247-page refutation of this unfortunate trope, Kingdon’s book aims to open our ears to all the marvels of underwater life by explaining how sound behaves in this watery underworld, why it’s so important to the animals that live there, and what we can learn when we start listening to them.

It turns out that sound is not just a great way to communicate and navigate underwater—it may be the best way. For one thing, it travels four and a half times faster there than it does on land. It can also go farther (across entire seas, under the right conditions) and provide critical information about everything from who wants to eat you to who wants to mate with you. 

To take advantage of the unique way sound propagates in the world’s oceans, fish rely on a variety of methods to “hear” what’s going on around them. These mechanisms range from so-called lateral lines—rows of tiny hair cells along the outside of their body that can sense small movements and vibrations in the water around them—to otoliths, dense lumps of calcium carbonate that form inside their inner ears. 

Because fish are more or less the same density as water, these denser otoliths move at a different amplitude and phase in response to vibrations passing through their body. The movement is then registered by patches of hair cells that line the chambers where otoliths are embedded, which turn the vibrations of sound into nerve impulses. The philosopher of science Peter Godfrey-Smith may have put it best: “It is not too much to say that a fish’s body is a giant pressure-sensitive ear.” 

While there are some minor topical overlaps with Henderson’s book—primarily around whale-related sound and communication—one of the more admirable attributes of Sing Like Fish is Kingdon’s willingness to focus on some of the oceans’ … let’s say, less charismatic noise-makers. We learn about herring (“the inveterate farters of the sea”), which use their flatuosity much as a fighter jet might use countermeasures to avoid an incoming missile. When these silvery fish detect the sound of a killer whale, they’ll fire off a barrage of toots, quickly decreasing both their bodily buoyancy and their vulnerability to the location-revealing clicks of the whale hunting them. “This strategic fart shifts them deeper and makes them less reflective to sound,” writes Kingdon.  

Readers are also introduced to the plainfin midshipman, a West Coast fish with “a booming voice” and “a perpetual look of accusation.” In addition to having “a fishy case of resting bitch face,” the male midshipman also has a unique hum, which it uses to attract gravid females in the spring. That hum became the subject of various conspiracy theories in the mid-’80s, when houseboat owners in Sausalito, California, started complaining about a mysterious seasonal drone. Thanks to a hydrophone and a level-headed local aquarium director, the sound was eventually revealed to be not aliens or a secret government experiment, but simply a small, brownish-green fish looking for love.

Kingdon’s command of, and enthusiasm for, the science of underwater sound is uniformly impressive. But it’s her recounting of how and why we started listening to the oceans in the first place that’s arguably one of the book’s most fascinating topics. It’s a wide-ranging tale, one that spans “firearm-happy Victorian-era gentleman” and “whales that sounded suspiciously like Soviet submarines.” It’s also a powerful reminder of how war and military research can both spur and stifle scientific discovery in surprising ways.  

The fact that Sing Like Fish ends up being both an exquisitely reported piece of journalism and a riveting exploration of a sense that tends to get short shrift only amplifies Kingdon’s ultimate message—that we all need to start paying more attention to the ways in which our own sounds are impinging on life underwater. As we’ve started listening more to the seas, what we’re increasingly hearing is ourselves, she writes: “Piercing sonar, thudding seismic air guns for geological imaging, bangs from pile drivers, buzzing motorboats, and shipping’s broadband growl. We make a lot of noise.”

That noise affects underwater communication, mating, migrating, and bonding in all sorts of subtle and obvious ways. And its impact is often made worse when combined with other threats, like climate change. The good news is that while noise can be a frustratingly hard thing to regulate, there are efforts underway to address our poor underwater aural etiquette. The International Maritime Organization is currently updating its ship noise guidelines for member nations. At the same time, the International Organization for Standardization is creating more guidelines for measuring underwater noise. 

“The ocean is not, and has never been, a silent place,” writes Kingdon. But to keep it filled with the right kinds of noise (i.e., the kinds that are useful to the creatures living there), we’ll have to recommit ourselves to doing two things that humans sometimes aren’t so great at: learning to listen and knowing when to shut up.   

Music to our ears (and minds)

We tend to do both (shut up and listen) when music is being played—at least if it’s the kind we like. And yet the nature of what the composer Edgard Varèse famously called “organized sound” largely remains a mystery to us. What exactly is music? What distinguishes it from other sounds? Why do we enjoy making it? Why do we prefer certain kinds? Why is it so effective at influencing our emotions and (often) our memories?  

In their recent book Every Brain Needs Music: The Neuroscience of Making and Listening to Music, Larry Sherman and Dennis Plies look inside our heads to try to find some answers to these vexing questions. Sherman is a professor of neuroscience at the Oregon Health and Science University, and Plies is a professional musician and teacher. Unfortunately, if the book reveals anything, it’s that limiting your exploration of music to one lens (neuroscience) also limits the insights you can gain into its nature. 

Every Brain Needs Music: The Neuroscience of Making and Listening to Music
Larry Sherman and Dennis Plies
COLUMBIA UNIVERSITY PRESS, 2023

That’s not to say that getting a better sense of how specific patterns of vibrating air molecules get translated into feelings of joy and happiness isn’t valuable. There are some genuinely interesting explanations of what happens in our brains when we play, listen to, and compose music—supported by some truly great watercolor-based illustrations by Susi Davis that help to clarify the text. But much of this gets bogged down in odd editorial choices (there are, for some reason, three chapters on practicing music) and conclusions that aren’t exactly earth-shattering (humans like music because it connects us). 

Every Brain Needs Music purports to be for all readers, but unless you’re a musician who’s particularly interested in the brain and its inner workings, I think most people will be far better served by A Book of Noises or other, more in-depth explorations of the importance of music to humans, like Michael Spitzer’s The Musical Human: A History of Life on Earth. 

“We have no earlids,” the late composer and naturalist R. Murray Schafer once observed. He also noted that despite this anatomical omission, we’ve become quite good at ignoring or tuning out large portions of the sonic world around us. Some of this tendency may be tied to our supposed preference for other sensory modalities. Most of us are taught from an early age that we are primarily visual creatures—that seeing is believing, that a picture is worth a thousand words. This idea is likely reinforced by a culture that also tends to focus primarily on the visual experience.

Yet while it may be true that we rely heavily on our eyes to make sense of the world, we do a profound disservice to ourselves and the rest of the natural world when we underestimate or downplay sound. Indeed, if there’s a common message that runs through all three of these books, it’s that attending to sound in all its forms isn’t just personally rewarding or edifying; it’s a part of what makes us fully human. As Bernie Krause discovered one night more than 40 years ago, once you start listening, it’s amazing what you can hear. 

Bryan Gardiner is a writer based in Oakland, California.