Book review: Surveillance & privacy

Privacy only matters to those with something to hide. So goes one of the more inane and disingenuous justifications for mass government and corporate surveillance. There are others, of course, but the “nothing to hide” argument remains a popular way to rationalize or excuse what’s become standard practice in our digital age: the widespread and invasive collection of vast amounts of personal data.

One common response to this line of reasoning is that everyone, in fact, has something to hide, whether they realize it or not. If you’re unsure of whether this holds true for you, I encourage you to read Means of Control by Byron Tau. 

cover of Means of Control
Means of Control: How the Hidden Alliance of Tech and Government Is Creating a New American Surveillance State
Byron Tau
CROWN, 2024

Midway through his book, Tau, an investigative journalist, recalls meeting with a disgruntled former employee of a data broker—a shady company that collects, bundles, and sells your personal data to other (often shadier) third parties, including the government. This ex-employee had managed to make off with several gigabytes of location data representing the precise movements of tens of thousands of people over the course of a few weeks. “What could I learn with this [data]—theoretically?” Tau asks the former employee. The answer includes a laundry list of possibilities that I suspect would make even the most enthusiastic oversharer uncomfortable.

“If information is power, and America is a society that’s still interested in the guarantee of liberty, personal dignity, and the individual freedom of its citizens, a serious conversation is needed.”

Byron Tau, author of Means of Control

Did someone in this group recently visit an abortion clinic? That would be easy to figure out, says the ex-employee. Anyone attend an AA meeting or check into inpatient drug rehab? Again, pretty simple to discern. Is someone being treated for erectile dysfunction at a sexual health clinic? If so, that would probably be gleanable from the data too. Tau never opts to go down that road, but as Means of Control makes very clear, others certainly have done so and will.

While most of us are at least vaguely aware that our phones and apps are a vector for data collection and tracking, both the way in which this is accomplished and the extent to which it happens often remain murky. Purposely so, argues Tau. In fact, one of the great myths Means of Control takes aim at is the very idea that what we do with our devices can ever truly be anonymized. Each of us has habits and routines that are completely unique, he says, and if an advertiser knows you only as an alphanumeric string provided by your phone as you move about the world, and not by your real name, that still offers you virtually no real privacy protection. (You’ll perhaps not be surprised to learn that such “anonymized ad IDs” are relatively easy to crack.)

“I’m here to tell you if you’ve ever been on a dating app that wanted your location, or if you ever granted a weather app permission to know where you are 24/7, there’s a good chance a detailed log of your precise movement patterns has been vacuumed up and saved in some data bank somewhere that tens of thousands of total strangers have access to,” writes Tau.

Unraveling the story of how these strangers—everyone from government intelligence agents and local law enforcement officers to private investigators and employees of ad tech companies—gained access to our personal information is the ambitious task Tau sets for himself, and he begins where you might expect: the immediate aftermath of 9/11.

At no other point in US history was the government’s appetite for data more voracious than in the days after the attacks, says Tau. It was a hunger that just so happened to coincide with the advent of new technologies, devices, and platforms that excelled at harvesting and serving up personal information that had zero legal privacy protections. 

Over the course of 22 chapters, Tau gives readers a rare glimpse inside the shadowy industry, “built by corporate America and blessed by government lawyers,” that emerged in the years and decades following the 9/11 attacks. In the hands of a less skilled reporter, this labyrinthine world of shell companies, data vendors, and intelligence agencies could easily become overwhelming or incomprehensible. But Tau goes to great lengths to connect dots and plots, explaining how a perfect storm of business motivations, technological breakthroughs, government paranoia, and lax or nonexistent privacy laws combined to produce the “digital panopticon” we are all now living in.

Means of Control doesn’t offer much comfort or reassurance for privacy-minded readers, but that’s arguably the point. As Tau notes repeatedly throughout his book, this now massive system of persistent and ubiquitous surveillance works only because the public is largely unaware of it. “If information is power, and America is a society that’s still interested in the guarantee of liberty, personal dignity, and the individual freedom of its citizens, a serious conversation is needed,” he writes.

As another new book makes clear, this conversation also needs to include student data. Lindsay Weinberg’s Smart University: Student Surveillance in the Digital Age reveals how the motivations and interests of Big Tech are transforming higher education in ways that are increasingly detrimental to student privacy and, arguably, education as a whole.

cover of Smart University
Smart University: Student Surveillance in the Digital Age
Lindsay Weinberg
JOHNS HOPKINS UNIVERSITY PRESS, 2024

By “smart university,” Weinberg means the growing number of public universities across the country that are being restructured around “the production and capture of digital data.” Similar in vision and application to so-called “smart cities,” these big-data-pilled institutions are increasingly turning to technologies that can track students’ movements around campus, monitor how much time they spend on learning management systems, flag those who seem to need special “advising,” and “nudge” others toward specific courses and majors. “What makes these digital technologies so seductive to higher education administrators, in addition to promises of cost cutting, individualized student services, and improved school rankings, is the notion that the integration of digital technology on their campuses will position universities to keep pace with technological innovation,” Weinberg writes. 

Readers of Smart University will likely recognize a familiar logic at play here. Driving many of these academic tracking and data-gathering initiatives is a growing obsession with efficiency, productivity, and convenience. The result is a kind of Silicon Valley optimization mindset, but applied to higher education at scale. Get students in and out of university as fast as possible, minimize attrition, relentlessly track performance, and do it all under the guise of campus modernization and increased personalization. 

Under this emerging system, students are viewed less as self-empowered individuals and more as “consumers to be courted, future workers to be made employable for increasingly smart workplaces, sources of user-generated content for marketing and outreach, and resources to be mined for making campuses even smarter,” writes Weinberg. 

At the heart of Smart University seems to be a relatively straightforward question: What is an education for? Although Weinberg doesn’t provide a direct answer, she shows that how a university (or society) decides to answer that question can have profound impacts on how it treats its students and teachers. Indeed, as the goal of education becomes less to produce well-rounded humans capable of thinking critically and more to produce “data subjects capable of being managed and who can fill roles in the digital economy,” it’s no wonder we’re increasingly turning to the dumb idea of smart universities to get the job done.  

If books like Means of Control and Smart University do an excellent job exposing the extent to which our privacy has been compromised, commodified, and weaponized (which they undoubtedly do), they can also start to feel a bit predictable in their final chapters. Familiar codas include calls for collective action, buttressed by a hopeful anecdote or two detailing previously successful pro-privacy wins; nods toward a bipartisan privacy bill in the works or other pieces of legislation that could potentially close some glaring surveillance loophole; and, most often, technical guides that explain how each of us, individually, might better secure or otherwise take control and “ownership” of our personal data.

The motivations behind these exhortations and privacy-centric how-to guides are understandable. After all, it’s natural for readers to want answers, advice, or at least some suggestion that things could be different—especially after reading about the growing list of degradations suffered under surveillance capitalism. But it doesn’t take a skeptic to wonder whether they’re actually advancing the fight for privacy in the way its advocates truly want.

For one thing, technology tends to move far too fast for any one smartphone privacy guide or individual law to keep up with. Similarly, framing rampant privacy abuses as a problem each of us is responsible for addressing individually seems a lot like framing the plastic pollution crisis as something Americans could have somehow solved by recycling. It’s both a misdirection and a misunderstanding of the problem.

It’s to his credit, then, that Lowry Pressly doesn’t include a “What is to be done” section at the end of The Right to Oblivion: Privacy and the Good Life. In lieu of offering up any concrete technical or political solutions, he simply reiterates an argument he has carefully and convincingly built over the course of his book: that privacy is important “not because it empowers us to exercise control over our information, but because it protects against the creation of such information in the first place.” 

cover of The Right to Oblivion
The Right to Oblivion: Privacy and the Good Life
Lowry Pressly
HARVARD UNIVERSITY PRESS, 2024

For Pressly, a Stanford instructor, the way we currently understand and value privacy has been tainted by what he calls “the ideology of information.” “This is the idea that information has a natural existence in human affairs,” he writes, “and that there are no aspects of human life which cannot be translated somehow into data.” This way of thinking not only leads to an impoverished sense of our own humanity—it also forces us into the conceptual trap of debating privacy’s value using a framework (control, consent, access) established by the companies whose business model is to exploit it.

The way out of this trap is to embrace what Pressly calls “oblivion,” a kind of state of unknowing, ambiguity, and potential—or, as he puts it, a realm “where there is no information or knowledge one way or the other.” While he understands that it’s impossible to fully escape a modern world intent on turning us into data subjects, Pressly’s book suggests we can and should support the idea that certain aspects of our (and others’) subjective interior lives can never be captured by information. Privacy is important because it helps to both protect and produce these ineffable parts of our lives, which in turn gives them a sense of dignity, depth, and the possibility for change and surprise. 

Reserving or cultivating a space for oblivion in our own lives means resisting the logic that drives much of the modern world. Our inclination to “join the conversation,” share our thoughts, and do whatever it is we do when we create and curate a personal brand has become so normalized that it’s practically invisible to us. According to Pressly, all that effort has only made our lives and relationships shallower, less meaningful, and less trusting.

Calls for putting our screens down and stepping away from the internet are certainly nothing new. And while The Right to Oblivion isn’t necessarily prescriptive about such things, Pressly does offer a beautiful and compelling vision of what can be gained when we retreat not just from the digital world but from the idea that we are somehow knowable to that world in any authentic or meaningful way. 

If all this sounds a bit philosophical, well, it is. But it would be a mistake to think of The Right to Oblivion as a mere thought exercise on privacy. Part of what makes the book so engaging and persuasive is the way in which Pressly combines a philosopher’s knack for uncovering hidden assumptions with a historian’s interest in and sensitivity to older (often abandoned) ways of thinking, and how they can often enlighten and inform modern problems.

Pressly isn’t against efforts to pass more robust privacy legislation, or even to learn how to better protect our devices against surveillance. His argument is that in order to guide such efforts, you have to both ask the right questions and frame the problem in a way that gives you and others the moral clarity and urgency to act. Your phone’s privacy settings are important, but so is understanding what you’re protecting when you change them. 

Bryan Gardiner is a writer based in Oakland, California. 

IBM aims to build the world’s first large-scale, error-corrected quantum computer by 2028

IBM announced detailed plans today to build an error-corrected quantum computer with significantly more computational capability than existing machines by 2028. It hopes to make the computer available to users via the cloud by 2029. 

The proposed machine, named Starling, will consist of a network of modules, each of which contains a set of chips, housed within a new data center in Poughkeepsie, New York. “We’ve already started building the space,” says Jay Gambetta, vice president of IBM’s quantum initiative.

IBM claims Starling will be a leap forward in quantum computing. In particular, the company aims for it to be the first large-scale machine to implement error correction. If Starling achieves this, IBM will have beaten competitors including Google, Amazon Web Services, and smaller startups such as Boston-based QuEra and PsiQuantum of Palo Alto, California, to solving arguably the biggest technical hurdle facing the industry today.

IBM, along with the rest of the industry, has years of work ahead. But Gambetta thinks the company has an edge because it has all the building blocks needed for error correction in a large-scale machine, spanning everything from algorithm development to chip packaging. “We’ve cracked the code for quantum error correction, and now we’ve moved from science to engineering,” he says.

Correcting errors in a quantum computer has been an engineering challenge, owing to the unique way the machines crunch numbers. Whereas classical computers encode information in the form of bits, or binary 1 and 0, quantum computers instead use qubits, which can represent “superpositions” of both values at once. IBM builds qubits made of tiny superconducting circuits, kept near absolute zero, in an interconnected layout on chips. Other companies have built qubits out of other materials, including neutral atoms, ions, and photons.

Quantum computers sometimes commit errors, such as when the hardware operates on one qubit but accidentally also alters a neighboring qubit that should not be involved in the computation. These errors add up over time. Without error correction, quantum computers cannot accurately perform the complex algorithms that are expected to be the source of their scientific or commercial value, such as extremely precise chemistry simulations for discovering new materials and pharmaceutical drugs. 

But error correction requires significant hardware overhead. Instead of encoding a single unit of information in a single “physical” qubit, error correction algorithms encode a unit of information in a constellation of physical qubits, referred to collectively as a “logical qubit.”

Currently, quantum computing researchers are competing to develop the best error correction scheme. Google’s surface code algorithm, while effective at correcting errors, requires on the order of 100 qubits to store a single logical qubit in memory. AWS’s Ocelot quantum computer uses a more efficient error correction scheme that requires nine physical qubits per logical qubit in memory. (The overhead is higher for qubits performing computations than for those storing data.) IBM’s error correction algorithm, known as a low-density parity check code, will make it possible to use 12 physical qubits per logical qubit in memory, a ratio comparable to AWS’s.
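To put those ratios in perspective, here is a minimal back-of-the-envelope sketch in Python. It uses only the per-logical-qubit memory figures quoted above and Starling’s stated target of 200 logical qubits; the totals it prints are rough illustrations, not published hardware counts from any of these companies.

```python
# Rough, illustrative comparison of memory overhead (physical qubits per logical
# qubit, as cited in the article). Overhead for qubits doing computation is higher.
memory_overhead = {
    "Google surface code": 100,  # ~100 physical qubits per logical qubit
    "AWS Ocelot": 9,             # 9 physical qubits per logical qubit
    "IBM LDPC code": 12,         # 12 physical qubits per logical qubit
}

logical_qubits = 200  # Starling's stated logical-qubit target

for scheme, ratio in memory_overhead.items():
    print(f"{scheme}: ~{logical_qubits * ratio:,} physical qubits "
          f"to hold {logical_qubits} logical qubits in memory")
```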

One distinguishing characteristic of Starling’s design will be its anticipated ability to diagnose errors, known as decoding, in real time. Decoding involves determining whether a measured signal from the quantum computer corresponds to an error. IBM has developed a decoding algorithm that can be quickly executed by a type of conventional chip known as an FPGA. This work bolsters the “credibility” of IBM’s error correction method, says Neil Gillespie of the UK-based quantum computing startup Riverlane. 
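To make “decoding” concrete, the toy sketch below works through a three-bit repetition code. It is far simpler than IBM’s low-density parity check codes, but it shows the basic step a real-time decoder (on an FPGA or anywhere else) must perform: map measured parity checks to a likely error and apply a correction.

```python
# Toy decoder for a 3-bit repetition code: measure two parity checks ("syndromes")
# and map the result to the single bit most likely to have flipped.
SYNDROME_TO_FLIP = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # only the check on bits 0 and 1 fails -> bit 0 flipped
    (1, 1): 1,     # both checks fail -> the middle bit flipped
    (0, 1): 2,     # only the check on bits 1 and 2 fails -> bit 2 flipped
}

def decode(bits):
    syndrome = (bits[0] ^ bits[1], bits[1] ^ bits[2])
    flipped = SYNDROME_TO_FLIP[syndrome]
    if flipped is not None:
        bits[flipped] ^= 1  # apply the correction
    return bits

print(decode([1, 0, 0]))  # a single bit-flip error is corrected back to [0, 0, 0]
```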

However, other error correction schemes and hardware designs aren’t out of the running yet. “It’s still not clear what the winning architecture is going to be,” says Gillespie. 

IBM intends Starling to be able to perform computational tasks beyond the capability of classical computers. Starling will have 200 logical qubits, which will be constructed using the company’s chips. It should be able to perform 100 million logical operations consecutively with accuracy; existing quantum computers can do so for only a few thousand. 

The system will demonstrate error correction at a much larger scale than anything done before, claims Gambetta. Previous error correction demonstrations, such as those done by Google and Amazon, involve a single logical qubit, built from a single chip. Gambetta calls them “gadget experiments,” saying “They’re small-scale.” 

Still, it’s unclear whether Starling will be able to solve practical problems. Some experts think that you need a billion error-corrected logical operations to execute any useful algorithm. Starling represents “an interesting stepping-stone regime,” says Wolfgang Pfaff, a physicist at the University of Illinois Urbana-Champaign. “But it’s unlikely that this will generate economic value.” (Pfaff, who studies quantum computing hardware, has received research funding from IBM but is not involved with Starling.) 

The timeline for Starling looks feasible, according to Pfaff. The design is “based in experimental and engineering reality,” he says. “They’ve come up with something that looks pretty compelling.” But building a quantum computer is hard, and it’s possible that IBM will encounter delays due to unforeseen technical complications. “This is the first time someone’s doing this,” he says of making a large-scale error-corrected quantum computer.

IBM’s road map involves first building smaller machines before Starling. This year, it plans to demonstrate that error-corrected information can be stored robustly in a chip called Loon. Next year the company will build Kookaburra, a module that can both store information and perform computations. By the end of 2027, it plans to connect two Kookaburra-type modules together into a larger quantum computer, Cockatoo. After demonstrating that successfully, the next step is to scale up and connect around 100 modules to create Starling.

This strategy, says Pfaff, reflects the industry’s recent embrace of “modularity” when scaling up quantum computers—networking multiple modules together to create a larger quantum computer rather than laying out qubits on a single chip, as researchers did in earlier designs. 

IBM is also looking beyond 2029. After Starling, it plans to build another machine, Blue Jay. (“I like birds,” says Gambetta.) Blue Jay will contain 2,000 logical qubits and is expected to be capable of a billion logical operations.

Driving business value by optimizing the cloud

Organizations are deepening their cloud investments at an unprecedented pace, recognizing its fundamental role in driving business agility and innovation. Synergy Research Group reports that companies spent $84 billion worldwide on cloud infrastructure services in the third quarter of 2024, a 23% rise over the third quarter of 2023 and the fourth consecutive quarter in which the year-on-year growth rate has increased.

Allowing users to access IT systems from anywhere in the world, cloud services also ensure solutions remain highly configurable and automated.

At the same time, hosted services like generative AI and tailored industry solutions can help companies quickly launch applications and grow the business. To get the most out of these services, companies are turning to cloud optimization—the process of selecting and allocating cloud resources to reduce costs while maximizing performance.

But despite all the interest in the cloud, many workloads remain stranded on-premises, and many more are not optimized for efficiency and growth, greatly limiting the forward momentum. Companies are missing out on a virtuous cycle of mutually reinforcing results that comes from even more efficient use of the cloud.

Organizations can enhance security, make critical workloads more resilient, protect the customer experience, boost revenues, and generate cost savings. These benefits can fuel growth and avert expenses, generating capital that can be invested in innovation.

“Cloud optimization involves making sure that your cloud spending is efficient so you’re not spending wastefully,” says André Dufour, Director and General Manager for AWS Cloud Optimization at Amazon Web Services. “But you can’t think of it only as cost savings at the expense of other things. Dollars freed up through optimization can be redirected to fund net new innovations, like generative AI.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

How a 1980s toy robot arm inspired modern robotics

As the child of an electronics engineer, I spent a lot of time in our local Radio Shack growing up. While my dad was locating capacitors and resistors, I was in the toy section. It was there, in 1984, that I discovered the best toy of my childhood: the Armatron robotic arm.

A drawing from the patent application for the Armatron robotic arm.
COURTESY OF TAKARA TOMY

Described as a “robot-like arm to aid young masterminds in scientific and laboratory experiments,” it was the rare toy that lived up to the hype printed on the front of the box. This was a legit robotic arm. You could rotate the arm to spin around its base, tilt it up and down, bend it at the “elbow” joint, rotate the “wrist,” and open and close the bright-orange articulated hand in elegant chords of movement, all using only the twistable twin joysticks.

Anyone who played with this toy will also remember the sound it made. Once you slid the power button to the On position, you heard a constant whirring sound of plastic gears turning and twisting. And if you tried to push it past its boundaries, it twitched and protested with a jarring “CLICK … CLICK … CLICK.”

It wasn’t just kids who found the Armatron so special. It was featured on the cover of the November/December 1982 issue of Robotics Age magazine, which noted that the $31.95 toy (about $96 today) had “capabilities usually found only in much more expensive experimental arms.”

Pieces of the Armatron disassembled and arranged on a table

JIM GOLDEN

A few years ago I found my Armatron, and when I opened the case to get it working again, I was startled to find that other than the compartment for the pair of D-cell batteries, a switch, and a tiny three-volt DC motor, this thing was totally devoid of any electronic components. It was purely mechanical. Later, I found the patent drawings for the Armatron online and saw how incredibly complex the schematics of the gearbox were. This design was the work of a genius—or a madman.

The man behind the arm

I needed to know the story of this toy. I reached out to the manufacturer, Tomy (now known as Takara Tomy), which has been in business in Japan for over 100 years. It put me in touch with Hiroyuki Watanabe, a 69-year-old engineer and toy designer living in Tokyo. He’s retired now, but he worked at Tomy for 49 years, building many classic handheld electronic toys of the ’80s, including Blip, Digital Diamond, Digital Derby, and Missile Strike. Watanabe’s name can be found on 44 patents, and he was involved in bringing between 50 and 60 products to market. Watanabe answered emailed questions via video, and his responses were translated from Japanese.

“I didn’t have a period where I studied engineering professionally. Instead, I enrolled in what Japan would call a technical high school that trains technical engineers, and I actually [entered] the electrical department there,” he told me. 

Afterward, he worked at Komatsu Manufacturing—because, he said, he liked bulldozers. But in 1974, he saw that Tomy was hiring, and he wanted to make toys. “I was told that it was the No. 1 toy company in Japan, so I decided [it was worth a look],” he said. “I took a night train from Tohoku to Tokyo to take a job exam, and that’s how I ended up joining the company.”

The inspiration for the Armatron came from a newspaper clipping that Watanabe’s boss brought to him one day. “It showed an image of a [mechanical arm] holding an egg with three fingers. I think we started out thinking, ‘This is where things are heading these days, so let’s make this,’” he recalled. 

As the lead of a small team, Watanabe briefly turned his attention to another project, and by the time he returned to the robotic arm, the team had a prototype. But it was quite different from the Armatron’s final form. “The hand stuck out from the main body to the side and could only move about 90 degrees. The control panel also had six movement positions, and they were switched using six switches. I personally didn’t like that,” said Watanabe. So he went back to work.

The Armatron’s inventor, Hiroyuki Watanabe, in Tokyo in 2025
COURTESY OF TAKARA TOMY

Watanabe’s breakthrough was inspired by the radio-controlled helicopters he operated as a hobby. Holding up a radio remote controller with dual joystick controls, he told me, “This stick operation allows you to perform four movements with two arms, but I thought that if you twist this part, you can use six movements.”

Watanabe at work at Tomy in Tokyo in 1982.
COURTESY OF HIROYUKI WATANABE

“I had always wanted to create a system that could rotate 360 degrees, so I thought about how to make that system work,” he added.

Watanabe stressed that while he is listed as the Armatron’s primary inventor, it was a team effort. A designer created the case, colors, and logo, adding touches to mimic features seen on industrial robots of the time, such as the rubber tubes (which are just for looks). 

When the Armatron first came out, in 1981, robotics engineers started contacting Watanabe. “I wasn’t so much hearing from people at toy stores, but rather from researchers at university laboratories, factories, and companies that were making industrial robots,” he said. “They were quite encouraging, and we often talked together.”

The long reach of the robot at Radio Shack

The bold look and function of the Armatron made quite an impression on many young kids who would one day have a career in robotics.

One of them was Adam Borrell, a mechanical design engineer who has been building robots for 15 years at Boston Dynamics, including Petman, the YouTube-famous Atlas, and the dog-size quadruped called Spot. 

Borrell grew up a few blocks away from a Radio Shack in New York City. “If I was going to the subway station, we would walk right by Radio Shack. I would stop in and play with it and set the timer, do the challenges,” he says. “I know it was a toy, but that was a real robot.” The Armatron was the hook that lured him into Radio Shack and then sparked his lifelong interest in engineering: “I would roll pennies and use them to buy soldering irons and solder at Radio Shack.” 

“There’s research to this day using AI to try to figure out optimal ways to grab objects that [a robot] sees in a bin or out in the world.”

Borrell had a fateful reunion with the toy while in grad school for engineering. “One of my office mates had an Armatron at his desk,” he recalls, “and it was broken. We took it apart together, and that was the first time I had seen the guts of it. 

“It had this fantastic mechanical gear train to just engage and disengage this one motor in a bunch of different ways. And it was really fascinating that it had done so much—the one little motor. And that sort of got me back thinking about industrial robot arms again.” 

Eric Paulos, a professor of electrical engineering and computer science at the University of California, Berkeley, recalls nagging his parents about what an educational gift Armatron would make. Ultimately, he succeeded in his lobbying. 

“It was just endless exploration of picking stuff up and moving it around and even just watching it move. It was mesmerizing to me. I felt like I really owned my own little robot,” he recalls. “I cherish this thing. I still have it to this day, and it’s still working.” 

The Armatron on the cover of the November/December 1982 issue of Robotics Age magazine.
PUBLIC DOMAIN

Today, Paulos builds robots and teaches his students how to build their own. He challenges them to solve problems within constraints, such as building with cardboard or Play-Doh; he believes the restrictions facing Watanabe and his team ultimately forced them to be more creative in their engineering.

It’s not very hard to draw connections between the Armatron—an impossibly analog robot—and highly advanced machines that are today learning to move in incredible new ways, powered by AI advancements like computer vision and reinforcement learning.

Paulos sees parallels between the problems he tackled as a kid with his Armatron and those that researchers are still trying to deal with today: “What happens when you pick things up and they’re too heavy, but you can sort of pick it up if you approach it from different angles? Or how do you grip things? There’s research to this day using AI to try to figure out optimal ways to grab objects that [a robot] sees in a bin or out in the world.”

While AI may be taking over the world of robotics, the field still requires engineers—builders and tinkerers who can problem-solve in the physical world. 

A page from the 1984 Radio Shack catalogue, featuring the Armatron for $31.95.
COURTESY OF RADIOSHACKCATALOGS.COM

The Armatron encouraged kids to explore these analog mechanics, a reminder that not all breakthroughs happen on a computer screen. And that hands-on curiosity hasn’t faded. Today, a new generation of fans is rediscovering the Armatron through online communities and DIY modifications. Dozens of Armatron videos are on YouTube, including one where the arm has been modified to run on steam power.

“I’m very happy to see people who love mechanisms are amazed,” Watanabe told me. “I’m really happy that there are still people out there who love our products in this way.” 

Jon Keegan writes about technology and AI and publishes Beautiful Public Data, a curated collection of government data sets (beautifulpublicdata.com).

This spa’s water is heated by bitcoin mining

At first glance, the Bathhouse spa in Brooklyn looks not so different from other high-end spas. What sets it apart is out of sight: a closet full of cryptocurrency-mining computers that not only generate bitcoins but also heat the spa’s pools, marble hammams, and showers.

When cofounder Jason Goodman opened Bathhouse’s first location in Williamsburg in 2019, he used conventional pool heaters. But after diving deep into the world of bitcoin, he realized he could fit cryptocurrency mining seamlessly into his business. That’s because the process, where special computers (called miners) make trillions of guesses per second to try to land on the string of numbers that will earn a bitcoin, consumes tremendous amounts of electricity, which in turn produces plenty of heat that usually goes to waste.
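As a rough illustration of what those “guesses” look like, the sketch below hashes a simplified, made-up block header with successive nonces until the double SHA-256 result falls below a difficulty target. Real mining hardware runs this loop trillions of times per second against a vastly harder target; nothing here reflects actual Bitcoin network parameters.

```python
# Minimal proof-of-work sketch: try nonces until the double SHA-256 hash of the
# header falls below a target. More difficulty bits means far more guesses.
import hashlib

def mine(header: bytes, difficulty_bits: int = 16) -> int:
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        payload = header + nonce.to_bytes(8, "little")
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # a "winning" guess
        nonce += 1

print(mine(b"simplified block header"))
```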

“I thought, ‘That’s interesting—we need heat,’” Goodman says of Bathhouse. Mining facilities typically use fans or water to cool their computers. And pools of water, of course, are a prominent feature of the spa.

It takes six miners, each roughly the size of an Xbox One console, to maintain a hot tub at 104 °F. At Bathhouse’s  Williamsburg location, miners hum away quietly inside two large tanks, tucked in a storage closet among liquor bottles and teas. To keep them cool and quiet, the units are immersed directly in non-conductive oil, which absorbs the heat they give off and is pumped through tubes beneath Bathhouse’s hot tubs and hammams. 

Mining boilers, which cool the computers by pumping in cold water that comes back out at 170 °F, are now also being used at the site. A thermal battery stores excess heat for future use. 

Goodman says his spas aren’t saving energy by using bitcoin miners for heat, but they’re also not using any more than they would with conventional water heating. “I’m just inserting miners into that chain,” he says. 

Goodman isn’t the only one to see the potential in heating with crypto. In Finland, Marathon Digital Holdings turned fleets of bitcoin miners into a district heating system to warm the homes of 80,000 residents. HeatCore, an integrated energy service provider, has used bitcoin mining to heat a commercial office building in China and to keep pools at a constant temperature for fish farming. This year it will begin a pilot project to heat seawater for desalination. On a smaller scale, bitcoin fans who also want some extra warmth can buy miners that double as space heaters. 

Crypto enthusiasts like Goodman think much more of this is coming, especially under the Trump administration, which has announced plans to create a bitcoin reserve. This prospect alarms environmentalists.

The energy required for a single bitcoin transaction varies, but as of mid-March it was equivalent to the energy consumed by an average US household over 47.2 days, according to the Bitcoin Energy Consumption Index, run by the economist Alex de Vries. 
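For a sense of scale, the back-of-the-envelope arithmetic below assumes average US household electricity use of roughly 10,500 kilowatt-hours per year (a commonly cited figure); the per-transaction estimate it produces is illustrative, not a number taken from the index itself.

```python
# Back-of-the-envelope: convert "47.2 days of average US household consumption"
# into kilowatt-hours, assuming ~10,500 kWh per household per year.
household_kwh_per_year = 10_500          # assumed average US household usage
household_kwh_per_day = household_kwh_per_year / 365
per_transaction_kwh = 47.2 * household_kwh_per_day

print(f"~{per_transaction_kwh:,.0f} kWh per bitcoin transaction")  # prints ~1,358 kWh
```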

Among the various cryptocurrencies, bitcoin mining gobbles up the most energy by far. De Vries points out that others, like ethereum, have eliminated mining and implemented less energy-intensive algorithms. But bitcoin users resist any change to their currency, so de Vries is doubtful a shift away from mining will happen anytime soon.

One key barrier to using bitcoin for heating, de Vries says, is that the heat can only be transported short distances before it dissipates. “I see this as something that is extremely niche,” he says. “It’s just not competitive, and you can’t make it work at a large scale.” 

The more renewable sources that are added to electric grids to replace fossil fuels, the cleaner crypto mining will become. But even if bitcoin is powered by renewable energy, “that doesn’t make it sustainable,” says Kaveh Madani, director of the United Nations University Institute for Water, Environment, and Health. Mining burns through valuable resources that could otherwise be used to meet existing energy needs, Madani says. 

For Goodman, relaxing into bitcoin-heated water is a completely justifiable use of energy. It soothes the muscles, calms the mind, and challenges current economic structures, all at the same time. 

Carrie Klein is a freelance journalist based in New York City.

A vision for the future of automation

The manufacturing industry is at a crossroads: Geopolitical instability is fracturing supply chains from the Suez to Shenzhen, impacting the flow of materials. Businesses are battling rising costs and inflation, coupled with a shrinking labor force, with more than half a million unfilled manufacturing jobs in the U.S. alone. And climate change is further intensifying the pressure, with more frequent extreme weather events and tightening environmental regulations forcing companies to rethink how they operate. New solutions are imperative.

Meanwhile, advanced automation, powered by the convergence of emerging and established technologies, including industrial AI, digital twins, the internet of things (IoT), and advanced robotics, promises greater resilience, flexibility, sustainability, and efficiency for industry. Individual success stories have demonstrated the transformative power of these technologies, providing examples of AI-driven predictive maintenance reducing downtime by up to 50%. Digital twin simulations can significantly reduce time to market and bring environmental dividends, too: One survey found 77% of leaders expect digital twins to reduce carbon emissions by 15% on average.

Yet, broad adoption of this advanced automation has lagged. “That’s not necessarily or just a technology gap,” says John Hart, professor of mechanical engineering and director of the Center for Advanced Production Technologies at MIT. “It relates to workforce capabilities and financial commitments and risk required.” For small and medium enterprises, and those with brownfield sites—older facilities with legacy systems—the barriers to implementation are significant.

In recent years, governments have stepped in to accelerate industrial progress. Through a revival of industrial policies, governments are incentivizing high-tech manufacturing, re-localizing critical production processes, and reducing reliance on fragile global supply chains.

All these developments converge in a key moment for manufacturing. The external pressures on the industry—met with technological progress and these new political incentives—may finally enable the shift toward advanced automation.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The machines are rising — but developers still hold the keys

Rumors of the ongoing death of software development — that it’s being slain by AI — are greatly exaggerated. In reality, software development is at a fork in the road: embracing the (currently) far-off notion of fully automated software development or acknowledging that the work of a software developer is much more than just writing lines of code.

The decision the industry makes could have significant long-term consequences. Increasing complacency around AI-generated code and a shift to what has been termed “vibe coding” — where code is generated through natural language prompts until the results seem to work — will lead to code that’s more error-strewn, more expensive to run and harder to change in the future. And, if the devaluation of software development skills continues, we may even lack a workforce with the skills and knowledge to fix things down the line. 

This means software developers are going to become more important to how the world builds and maintains software. Yes, there are many ways their practices will evolve thanks to AI coding assistance, but in a world of proliferating machine-generated code, developer judgment and experience will be vital.

The dangers of AI-generated code are already here

The risks of AI-generated code aren’t science fiction: they’re with us today. Research done by GitClear earlier this year indicates that with AI coding assistants (like GitHub Copilot) going mainstream, code churn — which GitClear defines as “changes that were either incomplete or erroneous when the author initially wrote, committed, and pushed them to the company’s git repo” — has significantly increased. GitClear also found a marked decrease in the number of lines of code that were moved, a signal of refactoring (essentially the care and feeding needed to make code more effective).

In other words, from the time coding assistants were introduced there’s been a pronounced increase in lines of code without a commensurate increase in lines deleted, updated, or replaced. Simultaneously, there’s been a decrease in lines moved — indicating a lot of code has been written but not refactored. More code isn’t necessarily a good thing (sometimes quite the opposite); GitClear’s findings ultimately point to complacency and a lack of rigor about code quality.

Can AI be removed from software development?

However, AI doesn’t have to be removed from software development and delivery. On the contrary, there’s plenty to be excited about. As noted in the latest volume of the Technology Radar — Thoughtworks’ report on technologies and practices from work with hundreds of clients all over the world — the coding assistance space is full of opportunities. 

Specifically, the report noted that tools like Cursor, Cline, and Windsurf can enable software engineering agents. In practice, this looks like an agent-like feature inside developer environments that developers can direct, via natural language prompts, to perform specific sets of coding tasks. This enables the human/machine partnership.

That being said, to only focus on code generation is to miss the variety of ways AI can help software developers. For example, Thoughtworks has been interested in how generative AI can be used to understand legacy codebases, and we see a lot of promise in tools like Unblocked, which is an AI team assistant that helps teams do just that. In fact, Anthropic’s Claude Code helped us add support for new languages in an internal tool, CodeConcise. We use CodeConcise to understand legacy systems; and while our success was mixed, we do think there’s real promise here.

Tightening practices to better leverage AI

It’s important to remember that much of the work developers do isn’t developing something new from scratch. A large proportion of their work is evolving and adapting existing (and sometimes legacy) software. Sprawling and janky code bases that have taken on technical debt are, unfortunately, the norm. Simply applying AI will likely make things worse, not better, especially with approaches like vibe coding.

This is why developer judgment will become more critical than ever. In the latest edition of the Technology Radar report, AI-friendly code design is highlighted, based on our experience that AI coding assistants perform best with well-structured codebases. 

In practice, this requires many different things, including clear and expressive naming to ensure context is clearly communicated (essential for code maintenance), reducing duplicate code, and ensuring modularity and effective abstractions. Done together, these will all help make code more legible to AI systems.
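As a small, hypothetical illustration of those practices (the names and domain below are invented, not drawn from the Technology Radar), compare a terse, copy-paste-prone helper with a version that uses an explicit data type, expressive naming, and a single reusable function:

```python
# Hard for an AI assistant (or a future maintainer) to work with: cryptic names,
# no types, and logic that tends to get duplicated wherever it's needed.
# def p(d, t):
#     return [x for x in d if x[2] > t]

# Easier to work with: an explicit data type, an expressive name, and one
# well-defined function that other modules can reuse instead of duplicating.
from dataclasses import dataclass

@dataclass
class Invoice:
    customer_id: str
    issued_on: str     # ISO date, e.g. "2025-01-31"
    amount_due: float

def invoices_over_threshold(invoices: list[Invoice], threshold: float) -> list[Invoice]:
    """Return the invoices whose amount due exceeds the given threshold."""
    return [invoice for invoice in invoices if invoice.amount_due > threshold]
```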

Good coding practices are all too easy to overlook when productivity and effectiveness are measured purely in terms of output. That was true before AI tooling existed, and it is all the more reason for software development to focus on good coding first.

AI assistance demands greater human responsibility

Instagram co-founder Mike Krieger recently claimed that in three years software engineers won’t write any code: they will only review AI-created code. This might sound like a huge claim, but it’s important to remember that reviewing code has always been a major part of software development work. With this in mind, perhaps the evolution of software development won’t be as dramatic as some fear.

But there’s another argument: as AI becomes embedded in how we build software, software developers will take on more responsibility, not less. This is something we’ve discussed a lot at Thoughtworks: the job of verifying that an AI-built system is correct will fall to humans. Yes, verification itself might be AI-assisted, but it will be the role of the software developer to ensure confidence. 

In a world where trust is becoming highly valuable — as evidenced by the emergence of the chief trust officer — the work of software developers is even more critical to the infrastructure of global industry. It’s vital software development is valued: the impact of thoughtless automation and pure vibes could prove incredibly problematic (and costly) in the years to come.

This content was produced by Thoughtworks. It was not written by MIT Technology Review’s editorial staff.

Amazon’s first quantum computing chip makes its debut

Amazon Web Services today announced Ocelot, its first-generation quantum computing chip. While the chip has only rudimentary computing capability, the company says it is a proof-of-principle demonstration—a step on the path to creating a larger machine that can deliver on the industry’s promised killer applications, such as fast and accurate simulations of new battery materials.

“This is a first prototype that demonstrates that this architecture is scalable and hardware-efficient,” says Oskar Painter, the head of quantum hardware at AWS, Amazon’s cloud computing unit. In particular, the company says its approach makes it simpler to perform error correction, a key technical challenge in the development of quantum computing.  

Ocelot consists of nine quantum bits, or qubits, on a chip about a centimeter square, which, like some forms of quantum hardware, must be cryogenically cooled to near absolute zero in order to operate. Five of the nine qubits are a type of hardware that the field calls a “cat qubit,” named for Schrödinger’s cat, the famous 20th-century thought experiment in which an unseen cat in a box may be considered both dead and alive. Such a superposition of states is a key concept in quantum computing.

The cat qubits AWS has made are tiny hollow structures of tantalum that contain microwave radiation, attached to a silicon chip. The remaining four qubits are transmons—each an electric circuit made of superconducting material. In this architecture, AWS uses cat qubits to store the information, while the transmon qubits monitor the information in the cat qubits. This distinguishes its technology from Google’s and IBM’s quantum computers, whose computational parts are all transmons. 

Notably, AWS researchers used Ocelot to implement a more efficient form of quantum error correction. Like any computer, quantum computers make mistakes. Without correction, these errors add up, with the result that current machines cannot accurately execute the long algorithms required for useful applications. “The only way you’re going to get a useful quantum computer is to implement quantum error correction,” says Painter.

Unfortunately, the algorithms required for quantum error correction usually have heavy hardware requirements. Last year, Google encoded a single error-corrected bit of quantum information using 105 qubits.

Amazon’s design strategy requires only a tenth as many qubits per bit of information, says Painter. In work published in Nature on Wednesday, the team encoded a single error-corrected bit of information in Ocelot’s nine qubits. Theoretically, this hardware design should be easier to scale up to a larger machine than a design made only of transmons, says Painter.

This design combining cat qubits and transmons makes error correction simpler, reducing the number of qubits needed, says Shruti Puri, a physicist at Yale University who was not involved in the work. (Puri works part-time for another company that develops quantum computers but spoke to MIT Technology Review in her capacity as an academic.)

“Basically, you can decompose all quantum errors into two kinds—bit flips and phase flips,” says Puri. Quantum computers represent information as 1s, 0s, and probabilities, or superpositions, of both. A bit flip, which also occurs in conventional computing, takes place when the computer mistakenly encodes a 1 that should be a 0, or vice versa. In the case of quantum computing, the bit flip occurs when the computer encodes the probability of a 0 as the probability of a 1, or vice versa. A phase flip is a type of error unique to quantum computing, having to do with the wavelike properties of the qubit.
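In standard textbook notation (this is generic quantum mechanics, not AWS’s specific error model), the two error types correspond to the Pauli X and Z operators. The minimal sketch below shows a bit flip swapping the |0> and |1> states, while a phase flip changes the sign between them in a superposition.

```python
# Bit flips and phase flips as the Pauli X and Z operators acting on a single qubit.
import numpy as np

zero = np.array([1.0, 0.0])              # |0>
one = np.array([0.0, 1.0])               # |1>
plus = (zero + one) / np.sqrt(2)         # equal superposition (|0> + |1>)/sqrt(2)

X = np.array([[0.0, 1.0], [1.0, 0.0]])   # bit flip: swaps |0> and |1>
Z = np.array([[1.0, 0.0], [0.0, -1.0]])  # phase flip: negates the |1> component

print(X @ zero)   # [0. 1.] -> the state |1>
print(Z @ plus)   # [ 0.707... -0.707...] -> (|0> - |1>)/sqrt(2)
```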

The cat-transmon design allowed Amazon to engineer the quantum computer so that any errors were predominantly phase-flip errors. This meant the company could use a much simpler error correction algorithm than Google’s—one that did not require as many qubits. “Your savings in hardware is coming from the fact that you need to mostly correct for one type of error,” says Puri. “The other error is happening very rarely.” 

The hardware savings also stem from AWS’s careful implementation of an operation known as a C-NOT gate, which is performed during error correction. Amazon’s researchers showed that the C-NOT operation did not disproportionately introduce bit-flip errors. This meant that after each round of error correction, the quantum computer still predominantly made phase-flip errors, so the simple, hardware-efficient error correction code could continue to be used.

AWS began working on designs for Ocelot as early as 2021, says Painter. Its development was a “full-stack problem.” To create high-performing qubits that could ultimately execute error correction, the researchers had to figure out a new way to grow tantalum, which is what their cat qubits are made of, on a silicon chip with as few atomic-scale defects as possible. 

It’s a significant advance that AWS can now fabricate and control multiple cat qubits in a single device, says Puri. “Any work that goes toward scaling up new kinds of qubits, I think, is interesting,” she says. Still, there are years of development to go. Other experts have predicted that quantum computers will require thousands, if not millions, of qubits to perform a useful task. Amazon’s work “is a first step,” says Puri.

She adds that the researchers will need to further reduce the fraction of errors due to bit flips as they scale up the number of qubits. 

Still, this announcement marks Amazon’s way forward. “This is an architecture we believe in,” says Painter. Previously, the company’s main strategy was to pursue conventional transmon qubits like Google’s and IBM’s, and they treated this cat qubit project as “skunkworks,” he says. Now, they’ve decided to prioritize cat qubits. “We really became convinced that this needed to be our mainline engineering effort, and we’ll still do some exploratory things, but this is the direction we’re going.” (The startup Alice & Bob, based in France, is also building a quantum computer made of cat qubits.)

As is, Ocelot basically is a demonstration of quantum memory, says Painter. The next step is to add more qubits to the chip, encode more information, and perform actual computations. But they have many challenges ahead, from how to attach all the wires to how to link multiple chips together. “Scaling is not trivial,” he says.

A new Microsoft chip could lead to more stable quantum computers

Microsoft announced today that it has made significant progress in its 20-year quest to make topological quantum bits, or qubits—a special approach to building quantum computers that could make them more stable and easier to scale up. 

Researchers and companies have been working for years to build quantum computers, which could unlock dramatic new abilities to simulate complex materials and discover new ones, among many other possible applications. 

To achieve that potential, though, we must build big enough systems that are stable enough to perform computations. Many of the technologies being explored today, such as the superconducting qubits pursued by Google and IBM, are so delicate that the resulting systems need to have many extra qubits to correct errors. 

Microsoft has long been working on an alternative that could cut down on the overhead by using components that are far more stable. These components, called Majorana quasiparticles, are not real particles. Instead, they are special patterns of behavior that may arise inside certain physical systems and under certain conditions.

The pursuit has not been without setbacks, including a high-profile paper retraction by researchers associated with the company in 2018. But the Microsoft team, which has since pulled this research effort in house, claims it is now on track to build a fault-tolerant quantum computer containing a few thousand qubits in a matter of years. It also says it has a blueprint for building out chips that each contain a million qubits or so, a rough target that could be the point at which these computers really begin to show their power.

This week the company announced a few early successes on that path. Alongside a Nature paper published today that describes a fundamental validation of the system, the company says it has been testing a topological qubit and has wired up a chip containing eight of them.

“You don’t get to a million qubits without a lot of blood, sweat, and tears and solving a lot of really difficult technical challenges along the way. And I do not want to understate any of that,” says Chetan Nayak, a Microsoft technical fellow and leader of the team pioneering this approach. That said, he says, “I think that we have a path that we very much believe in, and we see a line of sight.” 

Researchers outside the company are cautiously optimistic. “I’m very glad that [this research] seems to have hit a very important milestone,” says computer scientist Scott Aaronson, who heads the Quantum Information Center at the University of Texas at Austin. “I hope that this stands, and I hope that it’s built up.”

Even and odd

The first step in building a quantum computer is constructing qubits that can exist in fragile quantum states—not 0s and 1s like the bits in classical computers, but rather a mixture of the two. Maintaining qubits in these states and linking them up with one another is delicate work, and over the years a significant amount of research has gone into refining error correction schemes to make up for noisy hardware. 

For many years, theorists and experimentalists alike have been intrigued by the idea of creating topological qubits, which are constructed through mathematical twists and turns and have protection from errors essentially baked into their physics. “It’s been such an appealing idea to people since the early 2000s,” says Aaronson. “The only problem with it is that it requires, in a sense, creating a new state of matter that’s never been seen in nature.”

Microsoft has been on a quest to synthesize this state, called a Majorana fermion, in the form of quasiparticles. The Majorana was first proposed nearly 90 years ago as a particle that is its own antiparticle, which means two Majoranas will annihilate when they encounter one another. With the right conditions and physical setup, the company has been hoping to get behavior matching that of the Majorana fermion within materials.

In the last few years, Microsoft’s approach has centered on creating a very thin wire or “nanowire” from indium arsenide, a semiconductor. This material is placed in close proximity to aluminum, which becomes a superconductor close to absolute zero, and can be used to create superconductivity in the nanowire.

Ordinarily you’re not likely to find any unpaired electrons skittering about in a superconductor—electrons like to pair up. But under the right conditions in the nanowire, it’s theoretically possible for an electron to hide itself, with one half hiding at each end of the wire. If these complex entities, called Majorana zero modes, can be coaxed into existence, they will be difficult to destroy, making them intrinsically stable.

“Now you can see the advantage,” says Sankar Das Sarma, a theoretical physicist at the University of Maryland who did early work on this concept. “You cannot destroy a half electron, right? If you try to destroy a half electron, that means only a half electron is left. That’s not allowed.”

In 2023, the Microsoft team published a paper in the journal Physical Review B claiming that this system had passed a specific protocol designed to assess the presence of Majorana zero modes. This week in Nature, the researchers reported that they can “read out” the information in these nanowires—specifically, whether there are Majorana zero modes hiding at the wires’ ends. If there are, that means the wire has an extra, unpaired electron.

“What we did in the Nature paper is we showed how to measure the even or oddness,” says Nayak. “To be able to tell whether there’s 10 million or 10 million and one electrons in one of these wires.” That’s an important step by itself, because the company aims to use those two states—an even or odd number of electrons in the nanowire—as the 0s and 1s in its qubits. 
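Put schematically (this is a convenient shorthand, not the team’s own published formalism), the computational basis of such a qubit would be the wire’s fermion parity:

\[
\lvert 0 \rangle \equiv \lvert \text{even} \rangle, \qquad \lvert 1 \rangle \equiv \lvert \text{odd} \rangle,
\]

so “reading out” the qubit amounts to determining whether the wire holds an even or an odd number of electrons.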

If these quasiparticles exist, it should be possible to “braid” the four Majorana zero modes in a pair of nanowires around one another by making specific measurements in a specific order. The result would be a qubit with a mix of these two states, even and odd. Nayak says the team has done just that, creating a two-level quantum system, and that it is currently working on a paper on the results.
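The “mix” Nayak describes would then be a superposition of those two parity states. In the same schematic notation, and with the equal weighting chosen purely for illustration, it could look like this:

\[
\lvert \psi \rangle = \tfrac{1}{\sqrt{2}} \left( \lvert \text{even} \rangle + \lvert \text{odd} \rangle \right).
\]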

Researchers outside the company say they cannot comment on the qubit results, since that paper is not yet available. But some have hopeful things to say about the findings published so far. “I find it very encouraging,” says Travis Humble, director of the Quantum Science Center at Oak Ridge National Laboratory in Tennessee. “It is not yet enough to claim that they have created topological qubits. There’s still more work to be done there,” he says. But “this is a good first step toward validating the type of protection that they hope to create.” 

Others are more skeptical. Physicist Henry Legg of the University of St Andrews in Scotland, who previously criticized Physical Review B for publishing the 2023 paper without enough data for the results to be independently reproduced, is not convinced that the team is seeing evidence of Majorana zero modes in its Nature paper. He says that the company’s early tests did not put it on solid footing to make such claims. “The optimism is definitely there, but the science isn’t there,” he says.

One potential complication is impurities in the device, which can create conditions that look like Majorana particles. But Nayak says the evidence has only grown stronger as the research has proceeded. “This gives us confidence: We are manipulating sophisticated devices and seeing results consistent with a Majorana interpretation,” he says.

“They have satisfied many of the necessary conditions for a Majorana qubit, but there are still a few more boxes to check,” Das Sarma said after seeing preliminary results on the qubit. “The progress has been impressive and concrete.”

Scaling up

On the face of it, Microsoft’s topological efforts seem woefully behind in the world of quantum computing—the company is just now working to combine qubits in the single digits while others have tied together more than 1,000. But both Nayak and Das Sarma say other efforts had a strong head start because they involved systems that already had a solid grounding in physics. Work on the topological qubit, on the other hand, has meant starting from scratch. 

“We really were reinventing the wheel,” Nayak says, likening the team’s efforts to the early days of semiconductors, when there was so much to sort out about electron behavior and materials, and transistors and integrated circuits still had to be invented. That’s why this research path has taken almost 20 years, he says: “It’s the longest-running R&D program in Microsoft history.”

Some support from the US Defense Advanced Research Projects Agency could help the company catch up. Early this month, Microsoft was selected as one of two companies to continue work on the design of a scaled-up system, through a program focused on underexplored approaches that could lead to utility-scale quantum computers—those whose benefits exceed their costs. The other company selected is PsiQuantum, a startup that is aiming to build a quantum computer containing up to a million qubits using photons.

Many of the researchers MIT Technology Review spoke with would still like to see how this work plays out in scientific publications, but they were hopeful. “The biggest disadvantage of the topological qubit is that it’s still kind of a physics problem,” says Das Sarma. “If everything Microsoft is claiming today is correct … then maybe right now the physics is coming to an end, and engineering could begin.” 

This story was updated with Henry Legg’s current institutional affiliation.

From COBOL to chaos: Elon Musk, DOGE, and the Evil Housekeeper Problem

In trying to make sense of the wrecking ball that is Elon Musk and President Trump’s DOGE, it may be helpful to think about the Evil Housekeeper Problem. It’s a principle of computer security roughly stating that once someone is in your hotel room with your laptop, all bets are off. Because the intruder has physical access, you are in much more trouble. And the person demanding to get into your computer may be standing right beside you.

So who is going to stop the evil housekeeper from plugging a computer in and telling IT staff to connect it to the network?

What happens if someone comes in and tells you that you’ll be fired unless you reveal the authenticator code from your phone, or sign off on a code change, or turn over your PIV card, the Homeland Security–approved smart card used to access facilities and systems and securely sign documents and emails? What happens if someone says your name will otherwise be published in an online list of traitors? Already the new administration is firing, putting on leave, or outright escorting from the building people who refuse to do what they’re told. 

It’s incredibly hard to protect a system from someone—the evil housekeeper from DOGE—who has made their way inside and wants to wreck it. This administration is on the record as wanting to outright delete entire departments. Accelerationists are not only setting policy but implementing it by working within the administration. If you can’t delete a department, then why not just break it until it doesn’t work? 

That’s why what DOGE is doing is a massive, terrifying problem, and one I talked through earlier in a thread on Bluesky.

Government is built to be stable. Collectively, we put systems and rules in place to ensure that stability. But whether they actually deliver and preserve stability in the real world isn’t about the technology used; it’s about the people using it. When it comes down to it, technology is a tool to be used by humans for human ends. The software used to run our democratically elected government is deployed to accomplish goals tied to policies: collecting money from people, or giving money to states so they can give money to people who qualify for food stamps, or making covid tests available to people.

Usually, our experience of government technology is that it’s out of date or slow or unreliable. Certainly not as shiny as what we see in the private sector. And that technology changes very, very slowly, if it happens at all. 

It’s not as if people don’t realize these systems could do with modernization. In my experience troubleshooting and modernizing government systems in California and at the federal level, I worked with Head Start, Medicaid, child welfare, and logistics at the Department of Defense. Some of those systems were already undergoing modernization attempts, many of which were and continue to be late, over budget, or just plain broken. But the changes needed to modernize other systems were frequently seen as too risky or too expensive. In other words, not important enough.

Of course, some changes are deemed important enough. The covid-19 pandemic and our unemployment insurance systems offer good examples. When covid hit, certain critical government technologies suddenly became visible. Those systems, like unemployment insurance portals, also became politically important, just like the launch of the Affordable Care Act website (which is why it got so much attention when it was botched). 

Political attention can change everything. During the pandemic, suddenly it wasn’t just possible to modernize and upgrade government systems, or to make them simpler, clearer, and faster to use. It actually happened. Teams were parachuted in. Overly restrictive rules and procedures were reassessed and relaxed. Suddenly, government workers were allowed to work remotely and to use Slack.

However, there is a reason this was an exception. 

In normal times, rules and procedures are certainly part of what makes it very, very hard to change government technology. But they are in place to stop changes because, well, changes might break those systems and government doesn’t work without them working consistently. 

A long time ago I worked on a mainframe system in California—the kind that uses COBOL. It was as solid as a rock and worked day in, day out. Because if it didn’t, and reimbursements weren’t received for Medicaid, then the state might become temporarily insolvent. 

That’s why many of the rules about technology in government make it hard to make changes: because sometimes the risk of things breaking is just too high. Sometimes what’s at stake is simply keeping money flowing; sometimes, as with 911, lives are on the line.

Still, government systems and the rules that govern them are ultimately only as good as the people who oversee and enforce them. The technology will only do (and not do) what people tell it to. So if anyone comes in and breaks those rules on purpose—without fear of consequence—there are few practical or technical guardrails to prevent it. 

One system that’s meant to provide that kind of guardrail is the ATO, or the Authority to Operate. It does what it says: It lets you run a computer system. You are not supposed to operate a system without one.

But DOGE staffers are behaving in a way that suggests they don’t care about getting ATOs. And nothing is really stopping them. (Someone on Bluesky replied to me: “My first thought about the OPM [email] server was, ‘there’s no way those fuckers have an ATO.’”)

You might think that there would be technical measures to stop someone right out of high school from coming in and changing the code of a government system. That the system could require two-factor authentication to deploy the code to the cloud. That you would need a smart card to log in to a specific system to do that. Nope—all those technical measures can be circumvented by coercion at the hands of the evil housekeeper.

Indeed, none of our systems and rules work without enforcement, and consequences flowing from that enforcement. But to an unprecedented degree, this administration, and its individual leaders, have shown absolutely no fear. That’s why, according to Wired, the former X and SpaceX engineer and DOGE staffer Marko Elez had the “ability not just to read but to write code on two of the most sensitive systems in the US government: the Payment Automation Manager and Secure Payment System at the Bureau of the Fiscal Service (BFS).” (Elez reportedly resigned yesterday after the Wall Street Journal began reporting on a series of racist comments he had allegedly made.)

We’re seeing in real time that there are no practical technical measures preventing someone from taking a spanner to the technology that keeps our government stable, that keeps society running every day—despite the very real consequences. 

So we should plan for the worst, even if the likelihood of the worst is low. 

We need a version of the UK government’s National Risk Register, covering everything from the collapse of financial markets to “an attack on government” (but, unsurprisingly, that risk is described in terms of external threats). The register mostly predicts long-term consequences, with recovery taking months. That may end up being the case here. 

We need to dust off those “in the event of an emergency” disaster response procedures dealing with the failure of federal government—at individual organizations that may soon hit cash-flow problems and huge budget deficits without federal funding, at statehouses that will need to keep social programs running, and in groups doing the hard work of archiving and preserving data and knowledge.

In the end, all we have is each other—our ability to form communities and networks to support, help, and care for each other. Sometimes all it takes is for the first person to step forward, or to say no, and for us to rally around so it’s easier for the next person. In the end, it’s not about the technology—it’s about the people.

Dan Hon is principal of Very Little Gravitas, where he helps turn around and modernize large and complex government services and products.