This spa’s water is heated by bitcoin mining

At first glance, the Bathhouse spa in Brooklyn looks not so different from other high-end spas. What sets it apart is out of sight: a closet full of cryptocurrency-mining computers that not only generate bitcoins but also heat the spa’s pools, marble hammams, and showers.

When cofounder Jason Goodman opened Bathhouse’s first location in Williamsburg in 2019, he used conventional pool heaters. But after diving deep into the world of bitcoin, he realized he could fit cryptocurrency mining seamlessly into his business. That’s because the process, in which special computers (called miners) make trillions of guesses per second to try to land on the string of numbers that will earn a bitcoin, consumes tremendous amounts of electricity, which in turn produces plenty of heat that usually goes to waste.
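For readers curious what those “guesses” look like, here is a minimal, illustrative sketch of proof-of-work mining in Python. The block data, difficulty, and nonce loop are invented for illustration; real miners use specialized ASIC hardware and the full bitcoin block-header format, not a script like this.

```python
import hashlib

def mine(block_data: str, difficulty_zeros: int = 5) -> tuple[int, str]:
    """Repeatedly hash block_data plus a nonce until the digest starts with
    enough zeros. Each attempt is one 'guess'; real miners make trillions
    of such guesses per second in dedicated hardware."""
    target = "0" * difficulty_zeros
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

# Example: find a winning nonce for a toy "block" (a few seconds at difficulty 5).
nonce, digest = mine("toy-block-header")
print(nonce, digest)
```

Every guess consumes electricity, and essentially all of that electricity leaves the machine as heat, which is the waste stream Goodman taps.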

“I thought, ‘That’s interesting. We need heat,’” Goodman says of Bathhouse. Mining facilities typically use fans or water to cool their computers. And pools of water, of course, are a prominent feature of the spa.

It takes six miners, each roughly the size of an Xbox One console, to maintain a hot tub at 104 °F. At Bathhouse’s Williamsburg location, miners hum away quietly inside two large tanks, tucked in a storage closet among liquor bottles and teas. To keep them cool and quiet, the units are immersed directly in non-conductive oil, which absorbs the heat they give off and is pumped through tubes beneath Bathhouse’s hot tubs and hammams.

Mining boilers, which cool the computers by pumping in cold water that comes back out at 170 °F, are now also being used at the site. A thermal battery stores excess heat for future use. 

Goodman says his spas aren’t saving energy by using bitcoin miners for heat, but they’re also not using any more than they would with conventional water heating. “I’m just inserting miners into that chain,” he says. 

Goodman isn’t the only one to see the potential in heating with crypto. In Finland, Marathon Digital Holdings turned fleets of bitcoin miners into a district heating system to warm the homes of 80,000 residents. HeatCore, an integrated energy service provider, has used bitcoin mining to heat a commercial office building in China and to keep pools at a constant temperature for fish farming. This year it will begin a pilot project to heat seawater for desalination. On a smaller scale, bitcoin fans who also want some extra warmth can buy miners that double as space heaters. 

Crypto enthusiasts like Goodman think much more of this is coming, especially under the Trump administration, which has announced plans to create a bitcoin reserve. This prospect alarms environmentalists.

The energy required for a single bitcoin transaction varies, but as of mid-March it was equivalent to the energy consumed by an average US household over 47.2 days, according to the Bitcoin Energy Consumption Index, run by the economist Alex de Vries. 
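As a rough sanity check (my own back-of-the-envelope arithmetic, not a figure from the article or the index), assume an average US household uses on the order of 10,500 kWh of electricity per year, or about 29 kWh per day. Then:

```latex
47.2\ \text{days} \times \frac{10{,}500\ \text{kWh/year}}{365\ \text{days/year}}
\approx 47.2 \times 28.8\ \text{kWh/day}
\approx 1{,}360\ \text{kWh per transaction}
```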

Among the various cryptocurrencies, bitcoin mining gobbles up the most energy by far. De Vries points out that others, like ethereum, have eliminated mining and implemented less energy-intensive algorithms. But bitcoin users resist any change to their currency, so de Vries is doubtful a shift away from mining will happen anytime soon.

One key barrier to using bitcoin for heating, de Vries says, is that the heat can only be transported short distances before it dissipates. “I see this as something that is extremely niche,” he says. “It’s just not competitive, and you can’t make it work at a large scale.” 

The more renewable sources that are added to electric grids to replace fossil fuels, the cleaner crypto mining will become. But even if bitcoin is powered by renewable energy, “that doesn’t make it sustainable,” says Kaveh Madani, director of the United Nations University Institute for Water, Environment, and Health. Mining burns through valuable resources that could otherwise be used to meet existing energy needs, Madani says. 

For Goodman, relaxing into bitcoin-heated water is a completely justifiable use of energy. It soothes the muscles, calms the mind, and challenges current economic structures, all at the same time. 

Carrie Klein is a freelance journalist based in New York City.

A vision for the future of automation

The manufacturing industry is at a crossroads: Geopolitical instability is fracturing supply chains from the Suez to Shenzhen, impacting the flow of materials. Businesses are battling rising costs and inflation, coupled with a shrinking labor force, with more than half a million unfilled manufacturing jobs in the U.S. alone. And climate change is further intensifying the pressure, with more frequent extreme weather events and tightening environmental regulations forcing companies to rethink how they operate. New solutions are imperative.

Meanwhile, advanced automation, powered by the convergence of emerging and established technologies, including industrial AI, digital twins, the internet of things (IoT), and advanced robotics, promises greater resilience, flexibility, sustainability, and efficiency for industry. Individual success stories have demonstrated the transformative power of these technologies: AI-driven predictive maintenance, for example, has reduced downtime by up to 50%. Digital twin simulations can significantly reduce time to market and bring environmental dividends, too: One survey found 77% of leaders expect digital twins to reduce carbon emissions by 15% on average.

Yet, broad adoption of this advanced automation has lagged. “That’s not necessarily or just a technology gap,” says John Hart, professor of mechanical engineering and director of the Center for Advanced Production Technologies at MIT. “It relates to workforce capabilities and financial commitments and risk required.” For small and medium enterprises, and those with brownfield sites—older facilities with legacy systems—the barriers to implementation are significant.

In recent years, governments have stepped in to accelerate industrial progress. Through a revival of industrial policies, governments are incentivizing high-tech manufacturing, re-localizing critical production processes, and reducing reliance on fragile global supply chains.

All these developments converge in a key moment for manufacturing. The external pressures on the industry—met with technological progress and these new political incentives—may finally enable the shift toward advanced automation.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The machines are rising — but developers still hold the keys

Rumors of the ongoing death of software development — that it’s being slain by AI — are greatly exaggerated. In reality, software development is at a fork in the road: embracing the (currently) far-off notion of fully automated software development or acknowledging that the work of a software developer is much more than just writing lines of code.

The decision the industry makes could have significant long-term consequences. Increasing complacency around AI-generated code and a shift to what has been termed “vibe coding” — where code is generated through natural language prompts until the results seem to work — will lead to code that’s more error-strewn, more expensive to run and harder to change in the future. And, if the devaluation of software development skills continues, we may even lack a workforce with the skills and knowledge to fix things down the line. 

This means software developers are going to become more important to how the world builds and maintains software. Yes, there are many ways their practices will evolve thanks to AI coding assistance, but in a world of proliferating machine-generated code, developer judgment and experience will be vital.

The dangers of AI-generated code are already here

The risks of AI-generated code aren’t science fiction: they’re with us today. Research done by GitClear earlier this year indicates that with AI coding assistants (like GitHub Copilot) going mainstream, code churn — which GitClear defines as “changes that were either incomplete or erroneous when the author initially wrote, committed, and pushed them to the company’s git repo” — has significantly increased. GitClear also found a marked decrease in the number of lines of code that have been moved, a signal of refactored code (essentially the care and feeding that keeps code effective).

In other words, from the time coding assistants were introduced there’s been a pronounced increase in lines of code without a commensurate increase in lines deleted, updated, or replaced. Simultaneously, there’s been a decrease in lines moved — indicating a lot of code has been written but not refactored. More code isn’t necessarily a good thing (sometimes quite the opposite); GitClear’s findings ultimately point to complacency and a lack of rigor about code quality.
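As a rough illustration of the kind of signal involved (this is not GitClear’s methodology, just a hypothetical sketch of tallying added versus deleted lines from a repository’s history):

```python
import subprocess
from collections import Counter

def added_vs_deleted(repo_path: str) -> Counter:
    """Sum lines added and deleted across a repo's history using
    `git log --numstat`, whose per-file lines look like
    '<added>\t<deleted>\t<path>'. Binary files report '-' and are skipped."""
    output = subprocess.run(
        ["git", "-C", repo_path, "log", "--numstat", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    totals = Counter()
    for line in output.splitlines():
        parts = line.split("\t")
        if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
            totals["added"] += int(parts[0])
            totals["deleted"] += int(parts[1])
    return totals

print(added_vs_deleted("."))  # e.g. Counter({'added': 120340, 'deleted': 48761})
```

A ratio of added to deleted lines that keeps climbing over time is the kind of pattern GitClear reads as code accumulating without being revised.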

Can AI be removed from software development?

However, AI doesn’t have to be removed from software development and delivery. On the contrary, there’s plenty to be excited about. As noted in the latest volume of the Technology Radar — Thoughtworks’ report on technologies and practices from work with hundreds of clients all over the world — the coding assistance space is full of opportunities. 

Specifically, the report noted that tools like Cursor, Cline, and Windsurf can enable software engineering agents. In practice, this looks like an agent-like feature inside developer environments that developers can ask, via a natural language prompt, to perform specific sets of coding tasks. This enables the human/machine partnership.

That being said, to only focus on code generation is to miss the variety of ways AI can help software developers. For example, Thoughtworks has been interested in how generative AI can be used to understand legacy codebases, and we see a lot of promise in tools like Unblocked, which is an AI team assistant that helps teams do just that. In fact, Anthropic’s Claude Code helped us add support for new languages in an internal tool, CodeConcise. We use CodeConcise to understand legacy systems; and while our success was mixed, we do think there’s real promise here.

Tightening practices to better leverage AI

It’s important to remember that much of the work developers do isn’t developing something new from scratch. A large proportion of their work is evolving and adapting existing (and sometimes legacy) software. Sprawling and janky code bases that have taken on technical debt are, unfortunately, the norm. Simply applying AI will likely make things worse, not better, especially with approaches like vibe coding.

This is why developer judgment will become more critical than ever. In the latest edition of the Technology Radar report, AI-friendly code design is highlighted, based on our experience that AI coding assistants perform best with well-structured codebases. 

In practice, this requires many different things, including clear and expressive naming to ensure context is clearly communicated (essential for code maintenance), reducing duplicate code, and ensuring modularity and effective abstractions. Done together, these will all help make code more legible to AI systems.
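As a small, hypothetical example of the difference this makes (the names and domain below are invented, not taken from the Technology Radar):

```python
from dataclasses import dataclass

# Hard for humans and AI assistants alike: cryptic names, no types, mixed concerns.
def proc(d, t):
    return [x for x in d if x["a"] > t and x["s"] == "ok"]

# Easier to maintain and to prompt against: expressive names, a clear data
# shape, and one responsibility per function.
@dataclass
class Order:
    amount: float
    status: str

def is_completed(order: Order) -> bool:
    return order.status == "completed"

def orders_above_threshold(orders: list[Order], threshold: float) -> list[Order]:
    """Return completed orders whose amount exceeds the threshold."""
    return [o for o in orders if is_completed(o) and o.amount > threshold]
```

The intent is legible from names and types alone, which is exactly the context a coding assistant needs to change the code safely.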

Good coding practices are all too easy to overlook when productivity and effectiveness are measured purely in terms of output. That was true before AI tooling arrived, but it means software development needs to put good coding first now more than ever.

AI assistance demands greater human responsibility

Instagram co-founder Mike Krieger recently claimed that in three years software engineers won’t write any code: they will only review AI-created code. This might sound like a huge claim, but it’s important to remember that reviewing code has always been a major part of software development work. With this in mind, perhaps the evolution of software development won’t be as dramatic as some fear.

But there’s another argument: as AI becomes embedded in how we build software, software developers will take on more responsibility, not less. This is something we’ve discussed a lot at Thoughtworks: the job of verifying that an AI-built system is correct will fall to humans. Yes, verification itself might be AI-assisted, but it will be the role of the software developer to ensure confidence. 

In a world where trust is becoming highly valuable — as evidenced by the emergence of the chief trust officer — the work of software developers is even more critical to the infrastructure of global industry. It’s vital software development is valued: the impact of thoughtless automation and pure vibes could prove incredibly problematic (and costly) in the years to come.

This content was produced by Thoughtworks. It was not written by MIT Technology Review’s editorial staff.

Amazon’s first quantum computing chip makes its debut

Amazon Web Services today announced Ocelot, its first-generation quantum computing chip. While the chip has only rudimentary computing capability, the company says it is a proof-of-principle demonstration—a step on the path to creating a larger machine that can deliver on the industry’s promised killer applications, such as fast and accurate simulations of new battery materials.

“This is a first prototype that demonstrates that this architecture is scalable and hardware-efficient,” says Oskar Painter, the head of quantum hardware at AWS, Amazon’s cloud computing unit. In particular, the company says its approach makes it simpler to perform error correction, a key technical challenge in the development of quantum computing.  

Ocelot consists of nine quantum bits, or qubits, on a chip about a centimeter square, which, like some forms of quantum hardware, must be cryogenically cooled to near absolute zero in order to operate. Five of the nine qubits are a type of hardware that the field calls a “cat qubit,” named for Schrödinger’s cat, the famous 20th-century thought experiment in which an unseen cat in a box may be considered both dead and alive. Such a superposition of states is a key concept in quantum computing.

The cat qubits AWS has made are tiny hollow structures of tantalum that contain microwave radiation, attached to a silicon chip. The remaining four qubits are transmons—each an electric circuit made of superconducting material. In this architecture, AWS uses cat qubits to store the information, while the transmon qubits monitor the information in the cat qubits. This distinguishes its technology from Google’s and IBM’s quantum computers, whose computational parts are all transmons. 

Notably, AWS researchers used Ocelot to implement a more efficient form of quantum error correction. Like any computer, quantum computers make mistakes. Without correction, these errors add up, with the result that current machines cannot accurately execute the long algorithms required for useful applications. “The only way you’re going to get a useful quantum computer is to implement quantum error correction,” says Painter.

Unfortunately, the algorithms required for quantum error correction usually have heavy hardware requirements. Last year, Google encoded a single error-corrected bit of quantum information using 105 qubits.

Amazon’s design strategy requires only a tenth as many qubits per bit of information, says Painter. In work published in Nature on Wednesday, the team encoded a single error-corrected bit of information in Ocelot’s nine qubits. Theoretically, this hardware design should be easier to scale up to a larger machine than a design made only of transmons, says Painter.

This design combining cat qubits and transmons makes error correction simpler, reducing the number of qubits needed, says Shruti Puri, a physicist at Yale University who was not involved in the work. (Puri works part-time for another company that develops quantum computers but spoke to MIT Technology Review in her capacity as an academic.)

“Basically, you can decompose all quantum errors into two kinds—bit flips and phase flips,” says Puri. Quantum computers represent information as 1s, 0s, and probabilities, or superpositions, of both. A bit flip, which also occurs in conventional computing, takes place when the computer mistakenly encodes a 1 that should be a 0, or vice versa. In the case of quantum computing, the bit flip occurs when the computer encodes the probability of a 0 as the probability of a 1, or vice versa. A phase flip is a type of error unique to quantum computing, having to do with the wavelike properties of the qubit.
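In standard textbook notation (not drawn from the Nature paper), a qubit state and the two error types look like this:

```latex
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle

\text{bit flip } (X):\quad \alpha|0\rangle + \beta|1\rangle \;\longmapsto\; \beta|0\rangle + \alpha|1\rangle

\text{phase flip } (Z):\quad \alpha|0\rangle + \beta|1\rangle \;\longmapsto\; \alpha|0\rangle - \beta|1\rangle
```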

The cat-transmon design allowed Amazon to engineer the quantum computer so that any errors were predominantly phase-flip errors. This meant the company could use a much simpler error correction algorithm than Google’s—one that did not require as many qubits. “Your savings in hardware is coming from the fact that you need to mostly correct for one type of error,” says Puri. “The other error is happening very rarely.” 

The hardware savings also stem from AWS’s careful implementation of an operation known as a C-NOT gate, which is performed during error correction. Amazon’s researchers showed that the C-NOT operation did not disproportionately introduce bit-flip errors. This meant that after each round of error correction, the quantum computer still predominantly made phase-flip errors, so the simple, hardware-efficient error correction code could continue to be used.
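For reference, the C-NOT (controlled-NOT) gate flips a target qubit only when a control qubit is 1. On the two-qubit basis states {|00⟩, |01⟩, |10⟩, |11⟩} it is the standard operation:

```latex
\mathrm{CNOT} =
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0
\end{pmatrix},
\qquad
\mathrm{CNOT}\,|10\rangle = |11\rangle, \quad \mathrm{CNOT}\,|11\rangle = |10\rangle
```

The relevance here is that AWS showed its implementation of this gate does not disproportionately introduce bit-flip errors, so the simple, phase-flip-focused code keeps working.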

AWS began working on designs for Ocelot as early as 2021, says Painter. Its development was a “full-stack problem.” To create high-performing qubits that could ultimately execute error correction, the researchers had to figure out a new way to grow tantalum, which is what their cat qubits are made of, on a silicon chip with as few atomic-scale defects as possible. 

It’s a significant advance that AWS can now fabricate and control multiple cat qubits in a single device, says Puri. “Any work that goes toward scaling up new kinds of qubits, I think, is interesting,” she says. Still, there are years of development to go. Other experts have predicted that quantum computers will require thousands, if not millions, of qubits to perform a useful task. Amazon’s work “is a first step,” says Puri.

She adds that the researchers will need to further reduce the fraction of errors due to bit flips as they scale up the number of qubits. 

Still, this announcement marks Amazon’s way forward. “This is an architecture we believe in,” says Painter. Previously, the company’s main strategy was to pursue conventional transmon qubits like Google’s and IBM’s, and they treated this cat qubit project as “skunkworks,” he says. Now, they’ve decided to prioritize cat qubits. “We really became convinced that this needed to be our mainline engineering effort, and we’ll still do some exploratory things, but this is the direction we’re going.” (The startup Alice & Bob, based in France, is also building a quantum computer made of cat qubits.)

As it stands, Ocelot is basically a demonstration of quantum memory, says Painter. The next step is to add more qubits to the chip, encode more information, and perform actual computations. But many challenges lie ahead, from how to attach all the wires to how to link multiple chips together. “Scaling is not trivial,” he says.

A new Microsoft chip could lead to more stable quantum computers

Microsoft announced today that it has made significant progress in its 20-year quest to make topological quantum bits, or qubits—a special approach to building quantum computers that could make them more stable and easier to scale up. 

Researchers and companies have been working for years to build quantum computers, which could unlock dramatic new abilities to simulate complex materials and discover new ones, among many other possible applications. 

To achieve that potential, though, we must build systems that are big enough and stable enough to perform computations. Many of the technologies being explored today, such as the superconducting qubits pursued by Google and IBM, are so delicate that the resulting systems need many extra qubits to correct errors.

Microsoft has long been working on an alternative that could cut down on the overhead by using components that are far more stable. These components, called Majorana quasiparticles, are not real particles. Instead, they are special patterns of behavior that may arise inside certain physical systems and under certain conditions.

The pursuit has not been without setbacks, including a high-profile paper retraction by researchers associated with the company in 2018. But the Microsoft team, which has since pulled this research effort in house, claims it is now on track to build a fault-tolerant quantum computer containing a few thousand qubits in a matter of years and that it has a blueprint for building out chips that each contain a million qubits or so, a rough target that could be the point at which these computers really begin to show their power.

This week the company announced a few early successes on that path: piggybacking on a Nature paper published today that describes a fundamental validation of the system, the company says it has been testing a topological qubit, and that it has wired up a chip containing eight of them. 

“You don’t get to a million qubits without a lot of blood, sweat, and tears and solving a lot of really difficult technical challenges along the way. And I do not want to understate any of that,” says Chetan Nayak, a Microsoft technical fellow and leader of the team pioneering this approach. That said, he says, “I think that we have a path that we very much believe in, and we see a line of sight.” 

Researchers outside the company are cautiously optimistic. “I’m very glad that [this research] seems to have hit a very important milestone,” says computer scientist Scott Aaronson, who heads the Quantum Information Center at the University of Texas at Austin. “I hope that this stands, and I hope that it’s built up.”

Even and odd

The first step in building a quantum computer is constructing qubits that can exist in fragile quantum states—not 0s and 1s like the bits in classical computers, but rather a mixture of the two. Maintaining qubits in these states and linking them up with one another is delicate work, and over the years a significant amount of research has gone into refining error correction schemes to make up for noisy hardware. 

For many years, theorists and experimentalists alike have been intrigued by the idea of creating topological qubits, which are constructed through mathematical twists and turns and have protection from errors essentially baked into their physics. “It’s been such an appealing idea to people since the early 2000s,” says Aaronson. “The only problem with it is that it requires, in a sense, creating a new state of matter that’s never been seen in nature.”

Microsoft has been on a quest to synthesize this state, called a Majorana fermion, in the form of quasiparticles. The Majorana was first proposed nearly 90 years ago as a particle that is its own antiparticle, which means two Majoranas will annihilate when they encounter one another. With the right conditions and physical setup, the company has been hoping to get behavior matching that of the Majorana fermion within materials.

In the last few years, Microsoft’s approach has centered on creating a very thin wire or “nanowire” from indium arsenide, a semiconductor. This material is placed in close proximity to aluminum, which becomes a superconductor close to absolute zero, and can be used to create superconductivity in the nanowire.

Ordinarily you’re not likely to find any unpaired electrons skittering about in a superconductor—electrons like to pair up. But under the right conditions in the nanowire, it’s theoretically possible for an unpaired electron to effectively split in two, with each half hiding at either end of the wire. If these complex entities, called Majorana zero modes, can be coaxed into existence, they will be difficult to destroy, making them intrinsically stable.

”Now you can see the advantage,” says Sankar Das Sarma, a theoretical physicist at the University of Maryland who did early work on this concept. “You cannot destroy a half electron, right? If you try to destroy a half electron, that means only a half electron is left. That’s not allowed.”

In 2023, the Microsoft team published a paper in the journal Physical Review B claiming that this system had passed a specific protocol designed to assess the presence of Majorana zero modes. This week in Nature, the researchers reported that they can “read out” the information in these nanowires—specifically, whether there are Majorana zero modes hiding at the wires’ ends. If there are, that means the wire has an extra, unpaired electron.

“What we did in the Nature paper is we showed how to measure the even or oddness,” says Nayak. “To be able to tell whether there’s 10 million or 10 million and one electrons in one of these wires.” That’s an important step by itself, because the company aims to use those two states—an even or odd number of electrons in the nanowire—as the 0s and 1s in its qubits. 

If these quasiparticles exist, it should be possible to “braid” the four Majorana zero modes in a pair of nanowires around one another by making specific measurements in a specific order. The result would be a qubit with a mix of these two states, even and odd. Nayak says the team has done just that, creating a two-level quantum system, and that it is currently working on a paper on the results.

Researchers outside the company say they cannot comment on the qubit results, since that paper is not yet available. But some have hopeful things to say about the findings published so far. “I find it very encouraging,” says Travis Humble, director of the Quantum Science Center at Oak Ridge National Laboratory in Tennessee. “It is not yet enough to claim that they have created topological qubits. There’s still more work to be done there,” he says. But “this is a good first step toward validating the type of protection that they hope to create.” 

Others are more skeptical. Physicist Henry Legg of the University of St Andrews in Scotland, who previously criticized Physical Review B for publishing the 2023 paper without enough data for the results to be independently reproduced, is not convinced that the team is seeing evidence of Majorana zero modes in its Nature paper. He says that the company’s early tests did not put it on solid footing to make such claims. “The optimism is definitely there, but the science isn’t there,” he says.

One potential complication is impurities in the device, which can create conditions that look like Majorana particles. But Nayak says the evidence has only grown stronger as the research has proceeded. “This gives us confidence: We are manipulating sophisticated devices and seeing results consistent with a Majorana interpretation,” he says.

“They have satisfied many of the necessary conditions for a Majorana qubit, but there are still a few more boxes to check,” Das Sarma said after seeing preliminary results on the qubit. “The progress has been impressive and concrete.”

Scaling up

On the face of it, Microsoft’s topological efforts seem woefully behind in the world of quantum computing—the company is just now working to combine qubits in the single digits while others have tied together more than 1,000. But both Nayak and Das Sarma say other efforts had a strong head start because they involved systems that already had a solid grounding in physics. Work on the topological qubit, on the other hand, has meant starting from scratch. 

“We really were reinventing the wheel,” Nayak says, likening the team’s efforts to the early days of semiconductors, when there was so much to sort out about electron behavior and materials, and transistors and integrated circuits still had to be invented. That’s why this research path has taken almost 20 years, he says: “It’s the longest-running R&D program in Microsoft history.”

Some support from the US Defense Advanced Research Projects Agency could help the company catch up. Early this month, Microsoft was selected as one of two companies to continue work on the design of a scaled-up system, through a program focused on underexplored approaches that could lead to utility-scale quantum computers—those whose benefits exceed their costs. The other company selected is PsiQuantum, a startup that is aiming to build a quantum computer containing up to a million qubits using photons.

Many of the researchers MIT Technology Review spoke with would still like to see how this work plays out in scientific publications, but they were hopeful. “The biggest disadvantage of the topological qubit is that it’s still kind of a physics problem,” says Das Sarma. “If everything Microsoft is claiming today is correct … then maybe right now the physics is coming to an end, and engineering could begin.” 

This story was updated with Henry Legg’s current institutional affiliation.

From COBOL to chaos: Elon Musk, DOGE, and the Evil Housekeeper Problem

In trying to make sense of the wrecking ball that is Elon Musk and President Trump’s DOGE, it may be helpful to think about the Evil Housekeeper Problem. It’s a principle of computer security roughly stating that once someone is in your hotel room with your laptop, all bets are off. Because the intruder has physical access, you are in much more trouble. And the person demanding to get into your computer may be standing right beside you.

So who is going to stop the evil housekeeper from plugging a computer in and telling IT staff to connect it to the network?

What happens if someone comes in and tells you that you’ll be fired unless you reveal the authenticator code from your phone, or sign off on a code change, or turn over your PIV card, the Homeland Security–approved smart card used to access facilities and systems and securely sign documents and emails? What happens if someone says your name will otherwise be published in an online list of traitors? Already the new administration is firing, putting on leave, or outright escorting from the building people who refuse to do what they’re told. 

It’s incredibly hard to protect a system from someone—the evil housekeeper from DOGE—who has made their way inside and wants to wreck it. This administration is on the record as wanting to outright delete entire departments. Accelerationists are not only setting policy but implementing it by working within the administration. If you can’t delete a department, then why not just break it until it doesn’t work? 

That’s why what DOGE is doing is a massive, terrifying problem, and one I talked through earlier in a thread on Bluesky.

Government is built to be stable. Collectively, we put systems and rules in place to ensure that stability. But whether they actually deliver and preserve stability in the real world isn’t about the technology used; it’s about the people using it. When it comes down to it, technology is a tool to be used by humans for human ends. The software used to run our democratically elected government is deployed to accomplish goals tied to policies: collecting money from people, or giving money to states so they can give money to people who qualify for food stamps, or making covid tests available to people.

Usually, our experience of government technology is that it’s out of date or slow or unreliable. Certainly not as shiny as what we see in the private sector. And that technology changes very, very slowly, if it happens at all. 

It’s not as if people don’t realize these systems could do with modernization. In my experience troubleshooting and modernizing government systems in California and the federal government, I worked with Head Start, Medicaid, child welfare, and logistics at the Department of Defense. Some of those systems were already undergoing modernization attempts, many of which were and continue to be late, over budget, or just plain broken. But the changes that are needed to make other systems more modern were frequently seen as too risky or too expensive. In other words, not important enough. 

Of course, some changes are deemed important enough. The covid-19 pandemic and our unemployment insurance systems offer good examples. When covid hit, certain critical government technologies suddenly became visible. Those systems, like unemployment insurance portals, also became politically important, just like the launch of the Affordable Care Act website (which is why it got so much attention when it was botched). 

Political attention can change everything. During the pandemic, suddenly it wasn’t just possible to modernize and upgrade government systems, or to make them simpler, clearer, and faster to use. It actually happened. Teams were parachuted in. Overly restrictive rules and procedures were reassessed and relaxed. Suddenly, government workers were allowed to work remotely and to use Slack.

However, there is a reason this was an exception. 

In normal times, rules and procedures are certainly part of what makes it very, very hard to change government technology. But they are in place to stop changes because, well, changes might break those systems and government doesn’t work without them working consistently. 

A long time ago I worked on a mainframe system in California—the kind that uses COBOL. It was as solid as a rock and worked day in, day out. Because if it didn’t, and reimbursements weren’t received for Medicaid, then the state might become temporarily insolvent. 

That’s why many of the rules about technology in government make it hard to make changes: because sometimes the risk of things breaking is just too high. Sometimes what’s at stake is simply keeping money flowing; sometimes, as with 911, lives are on the line.

Still, government systems and the rules that govern them are ultimately only as good as the people who oversee and enforce them. The technology will only do (and not do) what people tell it to. So if anyone comes in and breaks those rules on purpose—without fear of consequence—there are few practical or technical guardrails to prevent it. 

One system that’s meant to do that is the ATO, or the Authority to Operate. It does what it says: It lets you run a computer system. You are not supposed to operate a system without one. 

But DOGE staffers are behaving in a way that suggests they don’t care about getting ATOs. And nothing is really stopping them. (Someone on Bluesky replied to me: “My first thought about the OPM [email] server was, ‘there’s no way those fuckers have an ATO.’”)

You might think that there would be technical measures to stop someone right out of high school from coming in and changing the code to a government system. That the system could require two-factor authentication to deploy the code to the cloud. That you would need a smart card to log in to a specific system to do that. Nope—all those technical measures can be circumvented by coercion at the hands of the evil housekeeper. 

Indeed, none of our systems and rules work without enforcement, and consequences flowing from that enforcement. But to an unprecedented degree, this administration, and its individual leaders, have shown absolutely no fear. That’s why, according to Wired, the former X and SpaceX engineer and DOGE staffer Marko Elez had the “ability not just to read but to write code on two of the most sensitive systems in the US government: the Payment Automation Manager and Secure Payment System at the Bureau of the Fiscal Service (BFS).” (Elez reportedly resigned yesterday after the Wall Street Journal began reporting on a series of racist comments he had allegedly made.)

We’re seeing in real time that there are no practical technical measures preventing someone from taking a spanner to the technology that keeps our government stable, that keeps society running every day—despite the very real consequences. 

So we should plan for the worst, even if the likelihood of the worst is low. 

We need a version of the UK government’s National Risk Register, covering everything from the collapse of financial markets to “an attack on government” (but, unsurprisingly, that risk is described in terms of external threats). The register mostly predicts long-term consequences, with recovery taking months. That may end up being the case here. 

We need to dust off those “in the event of an emergency” disaster response procedures dealing with the failure of federal government—at individual organizations that may soon hit cash-flow problems and huge budget deficits without federal funding, at statehouses that will need to keep social programs running, and in groups doing the hard work of archiving and preserving data and knowledge.

In the end, all we have is each other—our ability to form communities and networks to support, help, and care for each other. Sometimes all it takes is for the first person to step forward, or to say no, and for us to rally around so it’s easier for the next person. In the end, it’s not about the technology—it’s about the people.

Dan Hon is principal of Very Little Gravitas, where he helps turn around and modernize large and complex government services and products.

This quantum computer built on server racks paves the way to bigger machines

A Canadian startup called Xanadu has built a new quantum computer it says can be easily scaled up to achieve the computational power needed to tackle scientific challenges ranging from drug discovery to more energy-efficient machine learning.

Aurora is a “photonic” quantum computer, which means it crunches numbers using photonic qubits—information encoded in light. In practice, this means combining and recombining laser beams on multiple chips using lenses, fibers, and other optics according to an algorithm. Xanadu’s computer is designed in such a way that the answer to an algorithm it executes corresponds to the final number of photons in each laser beam. This approach differs from one used by Google and IBM, which involves encoding information in properties of superconducting circuits. 

Aurora has a modular design that consists of four similar units, each installed in a standard server rack that is slightly taller and wider than the average human. To make a useful quantum computer, “you copy and paste a thousand of these things and network them together,” says Christian Weedbrook, the CEO and founder of the company. 

Ultimately, Xanadu envisions a quantum computer as a specialized data center, consisting of rows upon rows of these servers. This contrasts with the industry’s earlier conception of a specialized chip within a supercomputer, much like a GPU.

But this work, which the company published last week in Nature, is just a first step toward that vision. Aurora used 35 chips to construct a total of 12 quantum bits, or qubits. Any useful applications of quantum computing proposed to date will require at least thousands of qubits, or possibly a million. By comparison, Google’s quantum computer Willow, which debuted last year, has 105 qubits (all built on a single chip), and IBM’s Condor has 1,121.

Devesh Tiwari, a quantum computing researcher at Northeastern University, describes Xanadu’s progress in an analogy with building a hotel. “They have built a room, and I’m sure they can build multiple rooms,” he says. “But I don’t know if they can build it floor by floor.”

Still, he says, the work is “very promising.” 

Xanadu’s 12 qubits may seem like a paltry number next to IBM’s 1,121, but Tiwari says this doesn’t mean that quantum computers based on photonics are running behind. In his opinion, the number of qubits reflects the amount of investment more than it does the technology’s promise. 

Photonic quantum computers offer several design advantages. The qubits are less sensitive to environmental noise, says Tiwari, which makes it easier to get them to retain information for longer. It is also relatively straightforward to connect photonic quantum computers via conventional fiber optics, because they already use light to encode information. Networking quantum computers together is key to the industry’s vision of a “quantum internet” where different quantum devices talk to each other. Aurora’s servers also don’t need to be kept as cool as superconducting quantum computers, says Weedbrook, so they don’t require as much cryogenic technology. The server racks operate at room temperature, although photon-counting detectors still need to be cryogenically cooled in another room. 

Xanadu is not the only company pursuing photonic quantum computers; others include PsiQuantum in the US and Quandela in France. Other groups are using materials like neutral atoms and ions to construct their quantum systems. 

From a technical standpoint, Tiwari suspects, no single qubit type will ever be the “winner,” but it’s likely that certain qubits will be better for specific applications. Photonic quantum computers, for example, are particularly well suited to Gaussian boson sampling, an algorithm that could be useful for quickly solving graph problems. “I really want more people to be looking at photonic quantum computers,” he says. He has studied quantum computers with multiple qubit types, including photons and superconducting qubits, and is not affiliated with a company. 

Isaac Kim, a physicist at the University of California, Davis, points out that Xanadu has not demonstrated the error correction ability many experts think a quantum computer will need in order to do any useful task, given that information stored in a quantum computer is notoriously fragile. 

Weedbrook, however, says Xanadu’s next goal is to improve the quality of the photons in the computer, which will ease the error correction requirements. “When you send lasers through a medium, whether it’s free space, chips, or fiber optics, not all the information makes it from the start to the finish,” he says. “So you’re actually losing light and therefore losing information.” The company is working to reduce this loss, which means fewer errors in the first place. 

Xanadu aims to build a quantum data center, with thousands of servers containing a million qubits, in 2029.

Useful quantum computing is inevitable—and increasingly imminent

On January 8, Nvidia CEO Jensen Huang jolted the stock market by saying that practical quantum computing is still 15 to 30 years away, at the same time suggesting those computers will need Nvidia GPUs in order to implement the necessary error correction. 

However, history shows that brilliant people are not immune to making mistakes. Huang’s predictions miss the mark, both on the timeline for useful quantum computing and on the role his company’s technology will play in that future.

I’ve been closely following developments in quantum computing as an investor, and it’s clear to me that it is rapidly converging on utility. Last year, Google’s Willow device demonstrated that there is a promising pathway to scaling up to bigger and bigger computers. It showed that errors can be reduced exponentially as the number of quantum bits, or qubits, increases. It also ran a benchmark test in under five minutes that would take one of today’s fastest supercomputers 10 septillion years. While too small to be commercially useful with known algorithms, Willow shows that quantum supremacy (executing a task that is effectively impossible for any classical computer to handle in a reasonable amount of time) and fault tolerance (correcting errors faster than they are made) are achievable.
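The exponential suppression Willow demonstrated is usually written in terms of a code distance d (a textbook surface-code scaling relation, not a formula quoted from Google’s results): as long as the physical error rate p stays below a threshold p_th, the logical error rate falls exponentially as the code, and hence the qubit count, grows.

```latex
\epsilon_L \;\propto\; \left(\frac{p}{p_{\mathrm{th}}}\right)^{\lfloor (d+1)/2 \rfloor}
```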

For example, PsiQuantum, a startup my company is invested in, is set to break ground on two quantum computers that will enter commercial service before the end of this decade. The plan is for each one to be 10 thousand times the size of Willow, big enough to tackle important questions about materials, drugs, and the quantum aspects of nature. These computers will not use GPUs to implement error correction. Rather, they will have custom hardware, operating at speeds that would be impossible with Nvidia hardware.

At the same time, quantum algorithms are improving far faster than hardware. A recent collaboration between the pharmaceutical giant Boehringer Ingelheim and PsiQuantum demonstrated a more than 200x improvement in algorithms to simulate important drugs and materials. Phasecraft, another company we have invested in, has improved the simulation performance for a wide variety of crystal materials and has published a quantum-enhanced version of a widely used materials science algorithm that is tantalizingly close to beating all classical implementations on existing hardware.

Advances like these lead me to believe that useful quantum computing is inevitable and increasingly imminent. And that’s good news, because the hope is that they will be able to perform calculations that no amount of AI or classical computation could ever achieve.

We should care about the prospect of useful quantum computers because today we don’t really know how to do chemistry. We lack knowledge about the mechanisms of action for many of our most important drugs. The catalysts that drive our industries are generally poorly understood, require expensive exotic materials, or both. Despite appearances, we have significant gaps in our agency over the physical world; our achievements belie the fact that we are, in many ways, stumbling around in the dark.

Nature operates on the principles of quantum mechanics. Our classical computational methods fail to accurately capture the quantum nature of reality, even though much of our high-performance computing resources are dedicated to this pursuit. Despite all the intellectual and financial capital expended, we still don’t understand why the painkiller acetaminophen works, how type-II superconductors function, or why a simple crystal of iron and nitrogen can produce a magnet with such incredible field strength. We search for compounds in Amazonian tree bark to cure cancer and other maladies, manually rummaging through a pitifully small subset of a design space encompassing 10⁶⁰ small molecules. It’s more than a little embarrassing.

We do, however, have some tools to work with. In industry, density functional theory (DFT) is the workhorse of computational chemistry and materials modeling, widely used to investigate the electronic structure of many-body systems—such as atoms, molecules, and solids. When DFT is applied to systems where electron-electron correlations are weak, it produces reasonable results. But it fails entirely on a broad class of interesting problems. 

Take, for example, the buzz in the summer of 2023 around the “room-temperature superconductor” LK-99. Many accomplished chemists turned to DFT to try to characterize the material and determine whether it was, indeed, a superconductor. Results were, to put it politely, mixed—so we abandoned our best computational methods, returning to mortar and pestle to try to make some of the stuff. Sadly, although LK-99 might have many novel characteristics, a room-temperature superconductor it isn’t. That’s unfortunate, as such a material could revolutionize energy generation, transmission, and storage, not to mention magnetic confinement for fusion reactors, particle accelerators, and more.

AI will certainly help with our understanding of materials, but it is no panacea. New AI techniques have emerged in the last few years, with some promising results. DeepMind’s Graph Networks for Materials Exploration (GNoME), for example, found 380,000 new potentially stable materials. At its core, though, GNoME depends on DFT, so its performance is only as good as DFT’s ability to produce good answers. 

The fundamental issue is that an AI model is only as good as the data it’s trained on. Training an LLM on the entire internet corpus, for instance, can yield a model that has a reasonable grasp of most human culture and can process language effectively. But if DFT fails for any non-trivially correlated quantum systems, how useful can a DFT-derived training set really be? We could also turn to synthesis and experimentation to create training data, but the number of physical samples we can realistically produce is minuscule relative to the vast design space, leaving a great deal of potential untapped. Only once we have reliable quantum simulations to produce sufficiently accurate training data will we be able to create AI models that answer quantum questions on classical hardware.

And that means that we need quantum computers. They afford us the opportunity to shift from a world of discovery to a world of design. Today’s iterative process of guessing, synthesizing, and testing materials is comically inadequate.

In a few tantalizing cases, we have stumbled on materials, like superconductors, with near-magical properties. How many more might these new tools reveal in the coming years? We will eventually have machines with millions of qubits that, when used to simulate crystalline materials, open up a vast new design space. It will be like waking up one day and finding a million new elements with fascinating properties on the periodic table.

Of course, building a million-qubit quantum computer is not for the faint of heart. Such machines will be the size of supercomputers, and require large amounts of capital, cryoplant, electricity, concrete, and steel. They also require silicon photonics components that perform well beyond anything in industry, error correction hardware that runs fast enough to chase photons, and single-photon detectors with unprecedented sensitivity. But after years of research and development, and more than a billion dollars of investment, the challenge is now moving from science and engineering to construction.

It is impossible to fully predict how quantum computing will affect our world, but a thought exercise might offer a mental model of some of the possibilities. 

Imagine our world without metal. We could have wooden houses built with stone tools, agriculture, wooden plows, movable type, printing, poetry, and even thoughtfully edited science periodicals. But we would have no inkling of phenomena like electricity or electromagnetism—no motors, generators, radio, MRI machines, silicon, or AI. We wouldn’t miss them, as we’d be oblivious to their existence. 

Today, we are living in a world without quantum materials, oblivious to the unrealized potential and abundance that lie just out of sight. With large-scale quantum computers on the horizon and advancements in quantum algorithms, we are poised to shift from discovery to design, entering an era of unprecedented dynamism in chemistry, materials science, and medicine. It will be a new age of mastery over the physical world.

Peter Barrett is a general partner at Playground Global, which invests in early-stage deep-tech companies including several in quantum computing, quantum algorithms, and quantum sensing: PsiQuantum, Phasecraft, NVision, and Ideon.

Fueling the future of digital transformation

In the rapidly evolving landscape of digital innovation, staying adaptable isn’t just a strategy—it’s a survival skill. “Everybody has a plan until they get punched in the face,” says Luis Niño, digital manager for technology ventures and innovation at Chevron, quoting Mike Tyson.

Drawing from a career that spans IT, HR, and infrastructure operations across the globe, Niño offers a unique perspective on innovation and how organizational microcultures within Chevron shape how digital transformation evolves. 

Centralized functions prioritize efficiency, relying on tools like AI, data analytics, and scalable system architectures. Meanwhile, business units focus on simplicity and effectiveness, deploying robotics and edge computing to meet site-specific needs and ensure safety.

“From a digital transformation standpoint, what I have learned is that you have to tie your technology to what outcomes drive results for both areas, but you have to allow yourself to be flexible, to be nimble, and to understand that change is constant,” he says.

Central to this transformation is the rise of industrial AI. Unlike consumer applications, industrial AI operates in high-stakes environments where the cost of errors can be severe. 

“The wealth of potential information needs to be contextualized, modeled, and governed because of the safety of those underlying processes,” says Niño. “If a machine reacts in ways you don’t expect, people could get hurt, and so there’s an extra level of care that needs to happen and that we need to think about as we deploy these technologies.”

Niño highlights Chevron’s efforts to use AI for predictive maintenance, subsurface analytics, and process automation, noting that “AI sits on top of that foundation of strong data management and robust telecommunications capabilities.” As such, AI is not just a tool but a transformation catalyst redefining how talent is managed, procurement is optimized, and safety is ensured.

Looking ahead, Niño emphasizes the importance of adaptability and collaboration: “Transformation is as much about technology as it is about people.” With initiatives like the Citizen Developer Program and Learn Digital, Chevron is empowering its workforce to bridge the gap between emerging technologies and everyday operations using an iterative mindset. 

Niño is also keeping watch over the convergence of technologies like AI, quantum computing, Internet of Things, and robotics, which hold the potential to transform how we produce and manage energy.

“My job is to keep an eye on those developments,” says Niño, “to make sure that we’re managing these things responsibly and the things that we test and trial and the things that we deploy, that we maintain a strict sense of responsibility to make sure that we keep everyone safe, our employees, our customers, and also our stakeholders from a broader perspective.”

This episode of Business Lab is produced in association with Infosys Cobalt.

Full Transcript 

Megan Tatum: From MIT Technology Review, I’m Megan Tatum and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. 

Our topic today is digital transformation. From back-office operations to infrastructure in the field like oil rigs, companies continue to look for ways to increase profit, meet sustainability goals, and invest in the latest and greatest technology.

Two words for you: enabling innovation. 

My guest is Luis Niño, who is the digital manager of technology ventures and innovation at Chevron. This podcast is produced in association with Infosys Cobalt.

Welcome, Luis. 

Luis Niño: Thank you, Megan. Thank you for having me. 

Megan: Thank you so much for joining us. Just to set some context, Luis, you’ve had a really diverse career at Chevron, spanning IT, HR, and infrastructure operations. I wonder, how have those different roles shaped your approach to innovation and digital strategy? 

Luis: Thank you for the question. And you’re right, my career has spanned many different areas and geographies in the company. It really feels like I’ve worked for different companies every time I change roles. Like I said, different functions, organizations, and locations: I’ve been based here in Houston, in Bakersfield, California, and in Buenos Aires, Argentina. From an organizational standpoint, I’ve seen central teams, international service centers, as you mentioned, field infrastructure and operation organizations in our business units, and I’ve also had corporate function roles.

And the reason why I mentioned that diversity is that each one of those looks at digital transformation and innovation through its own lens. From the priority to scale and streamline in central organizations to the need to optimize and simplify out in business units and what I like to call the periphery, you really learn about the concept first off of microcultures and how different these organizations can be even within our own walls, but also how those come together in organizations like Chevron. 

Over time, I would highlight two things. In central organizations, whether that’s functions like IT, HR, or our technical center, we have a central technical center, where we continuously look for efficiencies in scaling, for system architectures that allow for economies of scale. As you can imagine, the name of the game is efficiency. We have also looked to improve employee experience. We want to orchestrate ecosystems of large technology vendors that give us an edge and move the massive organization forward. In areas like this, in central areas like this, I would say that it is data analytics, data science, and artificial intelligence that has become the sort of the fundamental tools to achieve those objectives. 

Now, if you allow that pendulum to swing out to the business units and to the periphery, the name of the game is effectiveness and simplicity. The priority for the business units is to find and execute technologies that help us achieve the local objectives and keep our people safe, especially when we are talking about manufacturing environments where there’s risk for our folks. In these areas, technologies like robotics, the Internet of Things, and obviously edge computing are currently the enablers of information. 

I wouldn’t want to miss the opportunity to say that both of those areas of the company, let’s call them that, rely on the same foundation, and that is a foundation of strong data management and strong network and telecommunications capabilities, because those are the veins through which the data flows, and everything relies on data. 

In my experience, this pendulum also drives our technology priorities and our technology strategy. From a digital transformation standpoint, what I have learned is that you have to tie your technology to the outcomes that drive results for both areas, but you have to allow yourself to be flexible, to be nimble, and to understand that change is constant. If you are deploying something in the center and you suddenly realize that some business unit already has a solution, you cannot just say, let’s shut it down and go with what I said. You have to adapt, you have to understand behavioral change management, and you really have to make sure that change and adjustment are your bread and butter. 

I don’t know if you know this, Megan, but there’s a popular fight happening this weekend with Mike Tyson and he has a saying, and that is everybody has a plan until they get punched in the face. And what he’s trying to say is you have to be adaptable. The plan is good, but you have to make sure that you remain agile. 

Megan: Yeah, absolutely. 

Luis: And then I guess the last lesson, really quick, is about risk management, or maybe risk appetite. Each group has its own risk appetite depending on the lens or where they’re sitting, and this may create some conflict between organizations that want to move really, really fast and have urgency and others that want to take a step back and make sure that we’re doing things right. Striking that balance, I think, is ultimately a question for leadership, to make sure that they have a pulse on our ability to change. 

Megan: Absolutely, and you’ve mentioned a few different elements and technologies I’d love to dig into a bit more detail on. One of which is artificial intelligence because I know Chevron has been exploring AI for several years now. I wonder if you could tell us about some of the AI use cases it’s working on and what frameworks you’ve developed for effective adoption as well. 

Luis: Yeah, absolutely. This is the big one, isn’t it? Everybody’s talking about AI. As you can imagine, the focus in our company is what is now being branded as industrial AI. That’s really a simple term to explain that AI is being applied to industrial and manufacturing settings. And like other AI, and as I mentioned before, the foundation remains data. I want to stress the importance of data here. 

One of the differences, however, is that in the case of industrial AI, data comes from a variety of sources, some of them very critical, some of them non-critical: sources like operating technologies, process control networks, and SCADA, all the way to Internet of Things sensors, or industrial Internet of Things sensors, and unstructured data like engineering documentation and IT data. These are massive amounts of information coming from different places and also from different security structures. The complexity of industrial AI is considerably higher than what I would call consumer or productivity AI. 

Megan: Right. 

Luis: The wealth of potential information needs to be contextualized, modeled, and governed because of the safety-critical nature of those underlying processes. When you’re in an industrial setting, if a machine reacts in ways you don’t expect, people could get hurt, and so there’s an extra level of care that needs to happen and that we need to think about as we deploy these technologies. 

AI sits on top of that foundation and it takes different shapes. It can show up as a copilot like the ones that have been popularized recently, or it can show up as agentic AI, which is something that we’re looking at closely now. And agentic AI is just a term to mean that AI can operate autonomously and can use complex reasoning to solve multistep problems in an industrial setting. 

So with that in mind, going back to your question, we use both kinds of AI for multiple use cases, including predictive maintenance, subsurface analytics, process automation and workflow optimization, and end-user productivity. Each one of those use cases obviously addresses specific objectives that the business is pursuing in each area of the value chain. 

In predictive maintenance, for example, we monitor and we analyze equipment health, we prevent failures, and we allow for preventive maintenance and reduced downtime. The AI helps us understand when machinery needs to be maintained in order to prevent failure instead of just waiting for it to happen. In subsurface analysis, we’re exploring AI to develop better models of hydrocarbon reservoirs. We are exploring AI to forecast geomechanical models and to capture and understand data from fiber optic sensing. Fiber optic sensing is a capability that has proven very valuable to us, and AI is helping us make sense of the wealth of information that comes out of the hole, as we like to say. Of course, we don’t do this alone. We partner with many third-party organizations, with vendors, and with subject matter experts inside of Chevron to move the projects forward. 

There are several other areas beyond industrial AI that we are looking at. AI really is a transformation catalyst, and so in areas like finance, law, procurement, and HR, we’re also doing testing in those corporate functions. I can tell you that I’ve been part of projects in procurement and in HR. When I was in HR, we ran a pretty amazing effort in partnership with a third-party company that seeks to transform the way we understand talent, and the way they do that is by providing data-driven frameworks to make talent decisions. 

And so they redefine talent by framing data in the form of skills, and as they do this, they help de-bias processes that are, or can be, prone to unconscious biases and perspectives. It really is fascinating to think of your talent in terms of skills and to start decoupling that from what we have known since the industrial era began, which is that people fit into jobs. Now the question is more the other way around: How can jobs adapt to people’s skills? And then in procurement, AI is basically helping us open the aperture to a wider array of vendors in an automated fashion that makes us better partners. It’s more cost-effective. It’s really helpful. 

Before I close here, you did reference frameworks. There’s the framework of industrial AI versus what I call productivity AI, and the understanding of the use cases. All of this sits on top of our responsible AI framework. We have set up a central enterprise AI organization, and they have really done a great job in developing key areas of responsible AI as well as training and adoption frameworks. This includes how to use AI, how not to use AI, and what data we can share with the different GPTs that are available to us. 

We are now members of organizations like the Responsible AI Institute, an organization that fosters the safe and trustworthy use of AI. Our own responsible AI framework involves four pillars. The first is principles: this is how we make sure we continue to stay aligned with the values that drive this company, which we call The Chevron Way. The second is assessment: making sure that we evaluate these solutions in proportion to impact and risk. As I mentioned, when you’re talking about industrial processes, people’s lives are at stake, and so we take a very close look at what we are putting out there and how we ensure that it keeps our people safe. The third is education: I mentioned training our people to augment their capabilities and reinforcing responsible principles. And the last of the four is governance: oversight and accountability through the control structures that we are putting in place. 

Megan: Fantastic. Thank you so much for those really fascinating, specific examples. It’s great to hear about. And digital transformation, which you did touch on briefly, has of course become critical to enabling business growth and innovation. I wonder, what has Chevron’s digital transformation looked like, and how has the shift affected overall operations and the way employees engage with technology? 

Luis: Yeah, yeah. That’s a really good question. The term digital transformation is interpreted in many different ways. For me, it really is about leveraging technology to drive business results and to drive business transformation. We usually tend to single out emerging technology as the catalyst for transformation. I think that is okay, but I also think that there are ways you can drive digital transformation with technology that’s not necessarily emerging but is being optimized. And so under this umbrella, we include everything from our Citizen Developer Program to complex industry partnerships that help us maximize the value of data. 

The Citizen Developer Program has been very successful in helping bridge the gap between our technical software engineering and software development practices and the people who are out there doing the work, getting them familiar with, and demystifying, the way we build solutions. 

I do believe that transformation is as much about technology as it is about people. And so, to go back to the responsible AI framework, we are actively training and upskilling the workforce. We created a program called Learn Digital that helps employees embrace the technologies. I mentioned the concept of demystifying. It’s really important that people don’t fall into the trap of getting scared by the potential of the technology or the fact that it is new, so we help them and give them the tools to bridge the change management gap so they can use these technologies and get the most out of them. 

At a high level, our transformation has followed the cyclical nature that pretty much any transformation does. We have identified the data foundations that we need to have. We have understood the impact of the processes that we are trying to digitize. We organize that information, then we streamline and automate processes, we learn, and now machines learn and then we do it all over again. And so this cyclical mindset, this iterative mindset has really taken hold in our culture and it has made us a little bit better at accepting the technologies that are driving the change. 

Megan: And to look at one of those technologies in a bit more detail, cloud computing has revolutionized infrastructure across industries. But there’s also a pendulum shift now toward hybrid and edge computing models. How is Chevron balancing cloud, hybrid, and edge strategies for optimal performance? 

Luis: Yeah, that’s a great question, and I think you could argue that was the genesis of the digital transformation effort. It’s been a journey for us, and I think we’re not the only ones who started it as a cost-savings and storage play, but then we got to this ever-increasing need for multiple things, like scaling compute power to support large language models and maximizing how we run complex models. There’s an increasing need to store vast amounts of data for training and inference models while we improve data management and predict future needs. 

There’s also the opportunity to eliminate hardware constraints. One of the promises of cloud was that you would be able to ramp up and down depending on your compute needs as projects demanded. And that hasn’t stopped; that has only increased. And then there’s a need to be able to do this at a global level. For a company like ours that is distributed across the globe, we want to do this everywhere while actively managing those resources without the weight of the infrastructure that we used to carry on our books. Cloud has really helped us change the way we think about the digital assets that we have. 

It’s important also that it has created this symbiotic need to grow between AI and the cloud. So you don’t have AI without the cloud, but now you don’t have the cloud without AI. In reality, we work on balancing the benefits of cloud, hybrid, and edge computing, and we keep operational efficiency as our North Star. We have key partnerships in cloud; that’s something that I want to make sure I talk about. Microsoft is probably the most strategic of our partnerships because they’ve helped us set our foundation for cloud. But we also think of the convenience of hybrid through the lens of leveraging a convenient, scalable public cloud and a very secure private cloud that helps us meet our operational and safety needs. 

Edge computing fills the gap, or the need, for low latency and real-time data processing, which are critical constraints for decision-making in most of the locations where we operate. You can think of an offshore rig, a refinery, an oil rig out in the field, and maybe even not-so-remote areas like here in our corporate offices. Putting that compute power close to the data source is critical. So we work and partner with vendors to enable lighter compute that we can set at the edge and, as I mentioned with the foundation earlier, faster communication protocols at the edge that also address the need for speed. 

But it is important to remember that you don’t want to think about edge computing and cloud as separate things. Cloud supports edge by providing centralized management and advanced analytics, among other things. You can train models in the cloud and then deploy them to edge devices, keeping real-time priorities in mind. I would say that edge computing also supports our cybersecurity strategy because it allows us to control and secure sensitive environments and information while we embed machine learning and AI capabilities out there. 

So I have mentioned use cases like predictive maintenance and safety; those are good examples of areas where we want to make sure our cybersecurity strategy is front and center. When I was talking about my experience, I talked about the center and the edge. Our strategy to balance that pendulum relies on flexibility and on effective asset management. And so making sure that our cloud reflects those strategic realities gives us a good footing to achieve our corporate objectives. 

Megan: As you say, safety is a top priority. How do technologies like the Internet of Things and AI help enhance safety protocols, especially in the context of emissions tracking and leak detection? 

Luis: Yeah, thank you for the question. Safety is the most important thing that we think and talk about here at Chevron. There is nothing more important than ensuring that our people are safe and healthy, so I would break safety down into two areas. Before I jump to emissions tracking and leak detection, I just want to make a quick point on personal safety and how we leverage IoT and AI to that end. 

We use sensing capabilities that help us keep workers out of harm’s way, and so things like computer vision to identify and alert people who are coming into safety areas. We also use computer vision, for example, to identify PPE requirements—personal protective equipment requirements—and so if there are areas that require a certain type of clothing, a certain type of identification, or a hard hat, we are using technologies that can help us make sure people have that before they go into a particular area. 

We’re also using wearables. In one of the use cases, wearables help us track exhaustion and dehydration in locations where that creates inherent risk. In locations that are very hot, whether it’s because of the weather or because they are enclosed, we can use wearables that tell us how fast a person is getting dehydrated, what levels of liquid or sodium they need to make sure that they’re safe, or whether they need to take a break. We have those capabilities now. 

Going back to emissions tracking and leak detection, I think it’s actually the combination of IoT and AI that can transform how we prevent and react to those. In this case, we also deploy sensing capabilities. We use things like computer vision, like infrared capabilities, and we use others that deliver data to the AI models, which then alert and enable rapid response. 

The way I would explain how we use IoT and AI for safety, whether it’s personnel safety or emissions tracking and leak detection, is to think about sensors as an extension of the human ability to sense. In some cases, you could argue they’re super abilities. And so if you think of sight: normally you would have had supervisors or people out there looking at the field and identifying issues. Well, now we can use computer vision with traditional RGB cameras, we can use infrared, we can use multi-angle imaging to identify patterns, and have AI tell us what’s going on. 

If you keep thinking about the human senses, that’s sight, but you can also use sound through ultrasonic or microphone sensors. You can use touch through vibration and heat recognition. And even more recently, and this is something that we are just starting to test, you can use smell. There are companies that are starting to digitize smell. Pretty exciting, also a little bit crazy. But it is happening. And so these are all tools that any human would use to identify risk; now we can apply them as an extension of our human abilities to do so. This way we can react much faster and better to anomalies. 

A specific example is methane. We have a simple goal with methane: we want to keep methane in the pipe. Once it’s out, it’s really hard or almost impossible to take it back. Over the last six to seven years, we have reduced our methane intensity by over 60%, and we’re leveraging technology to achieve that. We have deployed a methane detection program, and we have trialed 10 to 15 advanced methane detection technologies. 

A technology that I have been looking at recently is called Aquanta Vision. This is a company supported by an incubator program we have called Chevron Studio. We did this in partnership with the National Renewable Energy Laboratory, and what they do is they leverage optical gas imaging to detect methane effectively and to allow us to prevent it from escaping the pipe. So that’s just an example of the technologies that we’re leveraging in this space. 

Megan: Wow, that’s fascinating stuff. And on emissions as well, Chevron has made significant investments in new energy technologies like hydrogen, carbon capture, and renewables. How do these technologies fit into Chevron’s broader goal of reducing its carbon footprint? 

Luis: This is obviously a fascinating space for us, one that is ever-changing. It is honestly not my area of expertise. But what I can say is we truly believe we can achieve high returns and lower carbon, and that’s something that we communicate broadly. A few years ago, I believe it was 2021, we established our Chevron New Energies company, and they actively explore lower-carbon alternatives including hydrogen, renewables, and carbon capture and offsets. 

My area, the digital area, and the convergence between digital technologies and the technical sciences will enable the techno-commercial viability of those business lines. Carbon capture is something that we’ve done for a long time. We have decades of experience in carbon capture technologies across the world. 

One of our larger projects, the Gorgon Project in Australia, has captured, I think, something between 5 and 10 million tons of CO2 emissions in the past few years, and so we have good expertise in that space. But we also actively partner in carbon capture. We have joined carbon capture hubs here in Houston, for example, and we are investing in companies like Carbon Clean, Carbon Engineering, and Svante. I’m familiar with these names because the corporate VC team is close to me. These companies provide technologies for direct air capture, and they provide solutions for hard-to-abate industries. And so we want to keep an eye on these emerging capabilities and make use of them to continuously lower our carbon footprint. 

There are two areas here that I would like to talk about. Hydrogen first. This is another area that we’re familiar with. Our plan is to build on our existing assets and capabilities to deliver a large-scale hydrogen business. Since 2005, I think, we’ve been doing retail hydrogen, and we also have several partnerships there. In renewables, we are creating a range of fuels for different transportation types. There’s bio-based diesel, there’s renewable natural gas, there’s sustainable aviation fuel. Yeah, so these are all areas of importance to us. They’re emerging business lines that are young in comparison to the rest of our company. We’ve been a company for 140-plus years, and this started in 2021, so you can imagine how steep that learning curve is. 

I mentioned how we leverage our corporate venture capital team to learn and to keep an eye out on the emerging trends and technologies that we want to learn about. They leverage two things. They have a core fund, which is focused on areas that can drive innovation for our core business, and we have a separate future energy fund that explores areas that are emerging. Not only do they invest in places like hydrogen, carbon capture, and renewables, but they also may invest in other areas like wind, geothermal, and nuclear capabilities. So we constantly keep our eyes open for these emerging technologies. 

Megan: I see. And I wonder if you could share a bit more actually about Chevron’s role in driving sustainable business innovation. I’m thinking of initiatives like converting used cooking oil into biodiesel, for example. I wonder how those contribute to that overall goal of creating a circular economy. 

Luis: Yeah, this is fascinating, and I was so happy to learn a little bit more about it this year when I had the chance to visit our offices in Iowa. I’ll get into that in a second. But I’m happy to talk about this, again with the caveat that it’s not my area of expertise. 

Megan: Of course. 

Luis: In the case of biodiesel, we acquired a company called REG in 2022. They were one of the founders of the renewable fuels industry, and they honestly do incredible work to create energy through a process, I forget the name of the process to be honest. But at the most basic level, what they do is prepare feedstocks that come from different types of biomass: you mentioned cooking oils, and there are also soybeans and animal fats. Through various chemical reactions, they convert components of the feedstock into biodiesel and glycerin. After that process, they separate the unreacted methanol, which is recovered and recycled into the process, and the biodiesel goes through a final processing step to make sure that it meets the standards necessary to be commercialized. 

What REG has done is it has boosted our knowledge as a broader organization on how to do this better. They continuously look for bio-feedstocks that can help us deliver new types of energy. I had mentioned bio-based diesel. One of the areas that we’re very focused on right now is sustainable aviation fuel. I find that fascinating. The reason why this is working and the reason why this is exciting is because they brought this great expertise and capability into Chevron. And in turn, as a larger organization, we’re able to leverage our manufacturing and distribution capabilities to continue to provide that value to our customers. 

I mentioned that I learned a little bit more about this this year. I was lucky earlier in the year I was able to visit our REG offices in Ames, Iowa. That’s where they’re located. And I will tell you that the passion and commitment that those people have for the work that they do was incredibly energizing. These are folks who have helped us believe, really, that our promise of lower carbon is attainable. 

Megan: Wow. Sounds like there’s some fascinating work going on. Which brings me to my final question, which is, sort of looking ahead, what emerging technologies are you most excited about, and how do you see them impacting both Chevron’s core business and the energy sector as a whole? 

Luis: Yeah, that’s a great question. I have no doubt that the energy business is changing and will continue to change, only faster, both our core business and the way energy is going to look in the future. Honestly, in my line of work, I come across exciting technology every day. The obvious answers are AI and industrial AI. These are things that are already changing the way we live, without a doubt. You can see it in people’s productivity. You can see it in how we optimize and transform workflows. AI is changing everything. I am actually very, very interested in IoT, the Internet of Things, and robotics. The ability to protect humans in high-risk environments, like I mentioned, is critical to us, as is the opportunity to prevent high-risk events and predict when they’re likely to happen. 

This is pretty massive, both for our productivity objectives and for our lower-carbon objectives. If we can predict when we are at risk of particular events, we could avoid them altogether. As I mentioned before, this ubiquitous ability to sense our surroundings is a capability that our industry, and I’m going to say humankind, is only beginning to explore. 

There’s another area that I didn’t talk too much about, which I think is coming, and that is quantum computing. Quantum computing promises to change the way we think of compute power, and it will unlock our ability to simulate chemistry and molecular dynamics in ways we have not been able to do before. When I say molecular dynamics, think of the way that we produce energy today. It is all about the molecule and understanding the interactions between hydrocarbon molecules and the environment. The ability to do that in multi-variable systems is something that quantum, we believe, can provide an edge on, and so we’re working really hard in this space. 

Yeah, there are so many, and having talked about all of them, AI, IoT, robotics, quantum, the most interesting thing to me is the convergence of all of them. If you think about the opportunity to leverage robotics while the machines continue to control limited processes and understand what it is they need to do in a preventive and predictive way, there is such incredible potential to transform our lives, to make an impact in the world for the better. We see that potential. 

My job is to keep an eye on those developments, to make sure that we’re managing these things responsibly and the things that we test and trial and the things that we deploy, that we maintain a strict sense of responsibility to make sure that we keep everyone safe, our employees, our customers, and also our stakeholders from a broader perspective. 

Megan: Absolutely. Such an important point to finish on. And unfortunately, that is all the time we have for today, but what a fascinating conversation. Thank you so much for joining us on the Business Lab, Luis. 

Luis: Great to talk to you. 

Megan: Thank you so much. That was Luis Niño, who is the digital manager of technology ventures and innovation at Chevron, who I spoke with today from Brighton, England. 

That’s it for this episode of Business Lab. I’m Megan Tatum, I’m your host and a contributing editor at Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com. 

This show is available wherever you get your podcasts, and if you enjoyed this episode, we really hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review, and this episode was produced by Giro Studios. Thank you so much for listening. 

Why materials science is key to unlocking the next frontier of AI development

The Intel 4004, the first commercial microprocessor, was released in 1971. With 2,300 transistors packed into 12 mm², it heralded a revolution in computing. A little over 50 years later, Apple’s M2 Ultra contains 134 billion transistors.

The scale of progress is difficult to comprehend, but the evolution of semiconductors, driven for decades by Moore’s Law, has paved a path from the emergence of personal computing and the internet to today’s AI revolution.
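A rough back-of-the-envelope calculation makes the jump concrete. It uses only the two transistor counts cited above; treating the roughly 52-year gap between the 4004 and the M2 Ultra as 26 two-year doubling periods is an illustrative assumption:

\[
\frac{134 \times 10^{9}}{2{,}300} \approx 5.8 \times 10^{7},
\qquad
2^{26} \approx 6.7 \times 10^{7}
\]

That is roughly a 58-million-fold increase in transistors per chip, in line with what a Moore's Law cadence of one doubling about every two years would predict over five decades.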

But this pace of innovation is not guaranteed, and the next frontier of technological advances—from the future of AI to new computing paradigms—will only happen if we think differently.

Atomic challenges

The modern microchip stretches both the limits of physics and credulity. Such is the atomic precision that a few atoms can decide the function of an entire chip. This marvel of engineering is the result of over 50 years of exponential scaling, creating ever faster, smaller transistors.

But we are reaching the physical limits of how small we can go, costs are increasing exponentially with complexity, and efficient power consumption is becoming increasingly difficult. In parallel, AI is demanding ever-more computing power. Data from Epoch AI indicates the amount of computing needed to develop AI is quickly outstripping Moore’s Law, doubling every six months in the “deep learning era” since 2010.
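A simple compounding sketch shows how quickly those two doubling rates diverge; the six-month and two-year doubling periods come from the figures above, while the five-year window is only an illustrative assumption:

\[
\underbrace{2^{10} = 1{,}024\times}_{\text{6-month doublings over 5 years}}
\quad \text{versus} \quad
\underbrace{2^{2.5} \approx 5.7\times}_{\text{2-year doublings over 5 years}}
\]

Over a single five-year stretch, demand for AI compute at that pace grows roughly 180 times more than transistor counts would under the classic two-year Moore's Law cadence.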

These interlinked trends present challenges not just for the industry, but society as a whole. Without new semiconductor innovation, today’s AI models and research will be starved of computational resources and struggle to scale and evolve. Key sectors like AI, autonomous vehicles, and advanced robotics will hit bottlenecks, and energy use from high-performance computing and AI will continue to soar.

Materials intelligence

At this inflection point, a complex, global ecosystem—from foundries and designers to highly specialized equipment manufacturers and materials solutions providers like Merck—is working together more closely than ever before to find the answers. All have a role to play, and the role of materials extends far, far beyond the silicon that makes up the wafer.

Instead, materials intelligence is present in almost every stage of the chip production process, whether in the chemical reactions that carve circuits at molecular scale (etching) or in the addition of incredibly thin layers to a wafer (deposition) with atomic precision: a human hair is 25,000 times thicker than the layers in leading-edge nodes.

Yes, materials provide a chip’s physical foundation and the substance of more powerful and compact components. But they are also integral to the advanced fabrication methods and novel chip designs that underpin the industry’s rapid progress in recent decades.

For this reason, materials science is taking on a heightened importance as we grapple with the limits of miniaturization. Advanced materials are needed more than ever for the industry to unlock the new designs and technologies capable of increasing chip efficiency, speed, and power. We are seeing novel chip architectures that embrace the third dimension and stack layers to optimize surface area usage while lowering energy consumption. The industry is also harnessing advanced packaging techniques, where separate “chiplets” with varying functions are fused into a more efficient, powerful single chip. This is called heterogeneous integration.

Materials are also allowing the industry to look beyond traditional compositions. Photonic chips, for example, harness light rather than electricity to transmit data. In all cases, our partners rely on us to discover materials never previously used in chips and guide their use at the atomic level. This, in turn, is fostering the necessary conditions for AI to flourish in the immediate future.

New frontiers

The next big leap will involve thinking differently. The future of technological progress will be defined by our ability to look beyond traditional computing.

Answers to mounting concerns over energy efficiency, costs, and scalability will be found in ambitious new approaches inspired by biological processes or grounded in the principles of quantum mechanics.

While still in its infancy, quantum computing promises processing power and efficiencies well beyond the capabilities of classical computers. Even though practical, scalable quantum systems remain a long way off, their development depends on the discovery and application of state-of-the-art materials.

Similarly, emerging paradigms like neuromorphic computing, modeled on the human brain with architectures that mimic our own neural networks, could provide the firepower and energy efficiency to unlock the next phase of AI development. Composed of a deeply complex web of artificial synapses and neurons, these chips would avoid traditional scalability roadblocks and the limitations of today’s von Neumann computers, which separate memory and processing.

Our biology consists of super complex, intertwined systems that have evolved by natural selection, but it can be inefficient; the human brain is capable of extraordinary feats of computational power, but it also requires sleep and careful upkeep. The most exciting step will be using advanced compute—AI and quantum—to finally understand and design systems inspired by biology. This combination will drive the power and ubiquity of next-generation computing and associated advances to human well-being.

Until then, the insatiable demand for more computing power to drive AI’s development poses difficult questions for an industry grappling with the fading of Moore’s Law and the constraints of physics. The race is on to produce more powerful, more efficient, and faster chips to progress AI’s transformative potential in every area of our lives.

Materials are playing a hidden but increasingly crucial role in keeping pace, producing next-generation semiconductors and enabling the new computing paradigms that will deliver tomorrow’s technology.

But materials science’s most important role is yet to come. Its true potential will be to take us—and AI—beyond silicon into new frontiers and the realms of science fiction by harnessing the building blocks of biology.

This content was produced by EMD Electronics. It was not written by MIT Technology Review’s editorial staff.