Adapting to new threats with proactive risk management

In July 2024, a botched update to the software defenses managed by cybersecurity firm CrowdStrike caused more than 8 million Windows systems to fail. From hospitals to manufacturers, stock markets to retail stores, the outage caused parts of the global economy to grind to a halt. Payment systems were disrupted, broadcasters went off the air, and flights were canceled. In all, the outage is estimated to have caused direct losses of more than $5 billion to Fortune 500 companies. For US air carrier Delta Air Lines, the error exposed the brittleness of its systems. The airline suffered weeks of disruptions, leading to $500 million in losses and 7,000 canceled flights.

The magnitude of the CrowdStrike incident revealed just how interconnected digital systems are, and how vulnerable some companies become when confronted with the unexpected. “On any given day, there could be a major weather event or some event like what happened…with CrowdStrike,” said then-US secretary of transportation Pete Buttigieg when announcing an investigation into how Delta Air Lines handled the incident. “The question is, is your airline prepared to absorb something like that and get back on its feet and take care of customers?”

Unplanned downtime poses a major challenge for organizations, and is estimated to cost Global 2000 companies on average $200 million per year. Beyond the financial impact, it can also erode customer trust and loyalty, decrease productivity, and even result in legal or privacy issues.

A 2024 ransomware attack on Change Healthcare, the medical-billing subsidiary of industry giant UnitedHealth Group—the biggest health and medical data breach in US history—exposed the data of around 190 million people and led to weeks of outages for medical groups. Another ransomware attack in 2024, this time on CDK Global, a software firm that works with nearly 15,000 auto dealerships in North America, led to around $1 billion worth of losses for car dealers as a result of the three-week disruption.

Managing risk and mitigating downtime is a growing challenge for businesses. As organizations become ever more interconnected, the expanding attack surface of networks and the rapid adoption of technologies like AI are exposing new vulnerabilities—and more opportunities for threat actors. Cyberattacks are also becoming increasingly sophisticated and damaging as AI-driven malware and malware-as-a-service platforms turbocharge attacks.

To meet these challenges head on, companies must take a more proactive approach to security and resilience. “We’ve had a traditional way of doing things that’s actually worked pretty well for maybe 15 to 20 years, but it’s been based on detecting an incident after the event,” says Chris Millington, global cyber resilience technical expert at Hitachi Vantara. “Now, we’ve got to be more preventative and use intelligence to focus on making the systems and business more resilient.”

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Why basic science deserves our boldest investment

In December 1947, three physicists at Bell Telephone Laboratories—John Bardeen, William Shockley, and Walter Brattain—built a compact electronic device using thin gold wires and a piece of germanium, a material known as a semiconductor. Their invention, later named the transistor (for which they were awarded the Nobel Prize in 1956), could amplify and switch electrical signals, marking a dramatic departure from the bulky and fragile vacuum tubes that had powered electronics until then.

Its inventors weren’t chasing a specific product. They were asking fundamental questions about how electrons behave in semiconductors, experimenting with surface states and electron mobility in germanium crystals. Over months of trial and refinement, they combined theoretical insights from quantum mechanics with hands-on experimentation in solid-state physics—work many might have dismissed as too basic, academic, or unprofitable.

Their efforts culminated in a moment that now marks the dawn of the information age. Transistors don’t usually get the credit they deserve, yet they are the bedrock of every smartphone, computer, satellite, MRI scanner, GPS system, and artificial-intelligence platform we use today. With their ability to modulate (and route) electrical current at astonishing speeds, transistors make modern and future computing and electronics possible.

This breakthrough did not emerge from a business plan or product pitch. It arose from open-ended, curiosity-driven research and enabling development, supported by an institution that saw value in exploring the unknown. It took years of trial and error, collaborations across disciplines, and a deep belief that understanding nature—even without a guaranteed payoff—was worth the effort.

After the first successful demonstration in late 1947, the invention of the transistor remained confidential while Bell Labs filed patent applications and continued development. It was publicly announced at a press conference on June 30, 1948, in New York City. The scientific explanation followed in a seminal paper published in the journal Physical Review.

How do they work? At their core, transistors are made of semiconductors—materials like germanium and, later, silicon—that can either conduct or resist electricity depending on subtle manipulations of their structure and charge. In a typical transistor, a small voltage applied to one part of the device (the gate) either allows or blocks the electric current flowing through another part (the channel). It’s this simple control mechanism, scaled up billions of times, that lets your phone run apps, your laptop render images, and your search engine return answers in milliseconds.
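
To make that control mechanism concrete, here is a deliberately simplified toy model in Python: it treats each transistor as an ideal voltage-controlled switch (the threshold voltage and function names are illustrative, not physical) and composes a NAND gate, the kind of building block chips replicate billions of times.

```python
# Toy illustration only: a transistor modeled as an ideal voltage-controlled switch.
# The threshold value is arbitrary; real devices are governed by far richer physics.

THRESHOLD_VOLTS = 0.7

def conducts(gate_voltage: float) -> bool:
    """The 'channel' carries current only when the 'gate' voltage is high enough."""
    return gate_voltage > THRESHOLD_VOLTS

def nand(a: float, b: float) -> float:
    """A NAND gate built from two idealized switches; chips chain billions of these."""
    return 0.0 if (conducts(a) and conducts(b)) else 1.0

if __name__ == "__main__":
    for a in (0.0, 1.0):
        for b in (0.0, 1.0):
            print(f"NAND({a}, {b}) -> {nand(a, b)}")
```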

Though early devices used germanium, researchers soon discovered that silicon—more thermally stable, moisture resistant, and far more abundant—was better suited for industrial production. By the late 1950s, the transition to silicon was underway, making possible the development of integrated circuits and, eventually, the microprocessors that power today’s digital world.

A modern chip the size of a human fingernail now contains tens of billions of silicon transistors, each measured in nanometers—smaller than many viruses. These tiny switches turn on and off billions of times per second, controlling the flow of electrical signals involved in computation, data storage, audio and visual processing, and artificial intelligence. They form the fundamental infrastructure behind nearly every digital device in use today. 

The global semiconductor industry is now worth over half a trillion dollars. Devices that began as experimental prototypes in a physics lab now underpin economies, national security, health care, education, and global communication. But the transistor’s origin story carries a deeper lesson—one we risk forgetting.

Much of the fundamental understanding that moved transistor technology forward came from federally funded university research. Nearly a quarter of transistor research at Bell Labs in the 1950s was supported by the federal government. Much of the rest was subsidized by revenue from AT&T’s monopoly on the US phone system, which flowed into industrial R&D.

Inspired by the 1945 report “Science: The Endless Frontier,” authored by Vannevar Bush at the request of President Franklin D. Roosevelt and delivered to President Truman, the US government began a long-standing tradition of investing in basic research. These investments have paid steady dividends across many scientific domains—from nuclear energy to lasers, and from medical technologies to artificial intelligence. Trained in fundamental research, generations of students have emerged from university labs with the knowledge and skills necessary to push existing technology beyond its known capabilities.

And yet, funding for basic science—and for the education of those who can pursue it—is under increasing pressure. The new administration’s proposed federal budget includes deep cuts to the Department of Energy and the National Science Foundation (though Congress may deviate from those recommendations). Already, the National Institutes of Health has canceled or paused more than $1.9 billion in grants, while NSF STEM education programs suffered more than $700 million in terminations.

These losses have forced some universities to freeze graduate student admissions, cancel internships, and scale back summer research opportunities—making it harder for young people to pursue scientific and engineering careers. In an age dominated by short-term metrics and rapid returns, it can be difficult to justify research whose applications may not materialize for decades. But those are precisely the kinds of efforts we must support if we want to secure our technological future.

Consider John McCarthy, the mathematician and computer scientist who coined the term “artificial intelligence.” In the late 1950s, while at MIT, he led one of the first AI groups and developed Lisp, a programming language still used today in scientific computing and AI applications. At the time, practical AI seemed far off. But that early foundational work laid the groundwork for today’s AI-driven world.

After the initial enthusiasm of the 1950s through the ’70s, interest in neural networks—a leading AI architecture today, inspired by the human brain—declined during the so-called “AI winters” of the late 1990s and early 2000s. Limited data, inadequate computational power, and theoretical gaps made it hard for the field to progress. Still, researchers like Geoffrey Hinton and John Hopfield pressed on. Hopfield, now a 2024 Nobel laureate in physics, first introduced his groundbreaking neural network model in 1982, in a paper published in Proceedings of the National Academy of Sciences of the USA. His work revealed the deep connections between collective computation and the behavior of disordered magnetic systems. Together with the work of colleagues including Hinton, who shared the 2024 Nobel with him, this foundational research seeded the explosion of deep-learning technologies we see today.

One reason neural networks now flourish is the graphics processing unit, or GPU—originally designed for gaming but now essential for the matrix-heavy operations of AI. These chips themselves rely on decades of fundamental research in materials science and solid-state physics: high-k dielectric materials, strained silicon alloys, and other advances that make ever more efficient transistors possible. We are now entering another frontier, exploring memristors, phase-change and 2D materials, and spintronic devices.

If you’re reading this on a phone or laptop, you’re holding the result of a gamble someone once made on curiosity. That same curiosity is still alive in university and research labs today—in often unglamorous, sometimes obscure work quietly laying the groundwork for revolutions that will infiltrate some of the most essential aspects of our lives 50 years from now. At the leading physics journal where I am editor, my collaborators and I see the painstaking work and dedication behind every paper we handle. Our modern economy—with giants like Nvidia, Microsoft, Apple, Amazon, and Alphabet—would be unimaginable without the humble transistor and the passion for knowledge fueling the relentless curiosity of scientists like those who made it possible.

The next transistor may not look like a switch at all. It might emerge from new kinds of materials (such as quantum, hybrid organic-inorganic, or hierarchical types) or from tools we haven’t yet imagined. But it will need the same ingredients: solid fundamental knowledge, resources, and freedom to pursue open questions driven by curiosity, collaboration—and most importantly, financial support from someone who believes it’s worth the risk.

Julia R. Greer is a materials scientist at the California Institute of Technology. She is a judge for MIT Technology Review’s Innovators Under 35 and a former honoree (in 2008).

Transforming CX with embedded real-time analytics 

During Black Friday in 2024, Stripe processed more than $31 billion in transactions, with processing rates peaking at 137,000 transactions per minute, the highest in the company’s history. The financial-services firm had to analyze every transaction in real time to prevent nearly 21 million fraud attempts that could have siphoned more than $910 million from its merchant customers. 

Yet, fraud protection is only one reason that Stripe embraced real-time data analytics. Evaluating trends in massive data flows is essential for the company’s services, such as allowing businesses to bill based on usage and monitor orders and inventory. In fact, many of Stripe’s services would not be possible without real-time analytics, says Avinash Bhat, head of data infrastructure at Stripe. “We have certain products that require real-time analytics, like usage-based billing and fraud detection,” he says. “Without our real-time analytics, we would not have a few of our products and that’s why it’s super important.” 

Stripe is not alone. In today’s digital world, data analysis is increasingly delivered directly to business customers and individual users, allowing real-time, continuous insights to shape user experiences. Ride-hailing apps calculate prices and estimate times of arrival (ETAs) in near-real time. Financial platforms deliver real-time cash-flow analysis. Customers expect and reward data-driven services that reflect what is happening now. 

In fact, having the capability to collect and analyze data in real time correlates with companies’ ability to grow. Companies that business leaders scored in the top quartile for real-time operations saw 50% higher revenue growth and net margins than companies in the bottom quartile, according to a survey conducted by the MIT Center for Information Systems Research (CISR) and Insight Partners. The top companies focused on automated processes and fast decision-making at all levels, relying on easily accessible data services updated in real time.

Companies that wait on data are putting themselves in a bind, says Kishore Gopalakrishna, co-founder and CEO of StarTree, a real-time data-analytics technology provider. “The basis of real-time analytics is—when the value of the data is very high—we want to capitalize on it instead of waiting and doing batch analytics,” he says. “Getting access to the data a day, or even hours, later is sometimes actually too late.” 
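
To illustrate the batch-versus-real-time distinction Gopalakrishna describes, here is a minimal, generic Python sketch; it is not Stripe’s or StarTree’s actual pipeline, and the event fields and alert rule are invented for the example.

```python
# Generic sketch: batch vs. real-time (per-event) analytics over payment events.
# Field names and the alert rule are invented for illustration.
from collections import defaultdict

def batch_totals(transactions: list[dict]) -> dict[str, float]:
    """Batch style: insight arrives only after the whole file is processed, hours later."""
    totals: dict[str, float] = defaultdict(float)
    for tx in transactions:
        totals[tx["card"]] += tx["amount"]
    return dict(totals)

class RealTimeMonitor:
    """Streaming style: state updates on every event, so a rule can fire immediately."""

    def __init__(self, alert_threshold: float):
        self.totals: dict[str, float] = defaultdict(float)
        self.alert_threshold = alert_threshold

    def ingest(self, tx: dict) -> bool:
        """Return True if this event pushes the card over the threshold (flag for review)."""
        self.totals[tx["card"]] += tx["amount"]
        return self.totals[tx["card"]] > self.alert_threshold

events = [{"card": "c1", "amount": 400.0}, {"card": "c1", "amount": 700.0}]
monitor = RealTimeMonitor(alert_threshold=1000.0)
for event in events:
    if monitor.ingest(event):
        print("flagged in real time:", event)
print("batch view, available only later:", batch_totals(events))
```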

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Unlocking enterprise agility in the API economy

Across industries, enterprises are increasingly adopting an on-demand approach to compute, storage, and applications. They are favoring digital services that are faster to deploy, easier to scale, and better integrated with partner ecosystems. Yet one critical pillar has lagged: the network. While software-defined networking has made inroads, many organizations still operate rigid, pre-provisioned networks. As applications become increasingly distributed and dynamic—including hybrid cloud and edge deployments—a programmable, on-demand network infrastructure can enable this new era.

From CapEx to OpEx: The new connectivity mindset

Another practical concern is also driving this shift: the need for IT models that align cost with usage. Rising uncertainty about inflation, consumer spending, business investment, and global supply chains is just one of the economic pressures weighing on company decision-making. In response, chief information officers (CIOs) are scrutinizing capital-expenditure-heavy infrastructure more closely and increasingly adopting operating-expenses-based subscription models.

Instead of long-term circuit contracts and static provisioning, companies are looking for cloud-ready, on-demand network services that can scale, adapt, and integrate across hybrid environments. This trend is fueling demand for API-first network infrastructure: connectivity that behaves like software, dynamically orchestrated and integrated into enterprise IT ecosystems. Interest has grown so rapidly that the global network API market is projected to surge from $1.53 billion in 2024 to over $72 billion by 2034.

In fact, McKinsey estimates the network API market could unlock between $100 billion and $300 billion in connectivity- and edge-computing-related revenue for telecom operators over the next five to seven years, with an additional $10 billion to $30 billion generated directly from APIs themselves.

“When the cloud came in, first there was a trickle of adoptions. And then there was a deluge,” says Rajarshi Purkayastha, VP of solutions at Tata Communications. “We’re seeing the same trend with programmable networks. What was once a niche industry is now becoming mainstream as CIOs prioritize agility and time-to-value.”

Programmable networks as a catalyst for innovation

Programmable, subscription-based networks are not just about efficiency; they are about enabling faster innovation, better user experiences, and global scalability. Organizations increasingly prefer API-first systems to avoid vendor lock-in, enable multi-vendor integration, and foster innovation. API-first approaches allow seamless integration across different hardware and software stacks, reducing operational complexity and costs.

With APIs, enterprises can provision bandwidth, configure services, and connect to clouds and edge locations in real time, all through automation layers embedded in their DevOps and application platforms. This makes the network an active enabler of digital transformation rather than a lagging dependency.
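
In practice, that can be as simple as an API call issued from an automation pipeline. The sketch below is hypothetical: the endpoint, payload fields, and authentication scheme are invented for illustration and do not represent any specific provider’s API.

```python
# Hypothetical illustration: requesting extra bandwidth between two sites via a
# provider's network API. The URL, fields, and auth scheme are invented examples.
import os
import requests

API_BASE = "https://api.network-provider.example/v1"   # placeholder base URL
TOKEN = os.environ.get("NETWORK_API_TOKEN", "demo-token")

def request_bandwidth(site_a: str, site_b: str, mbps: int, hours: int) -> dict:
    """Provision a temporary bandwidth boost between two sites, then let it expire."""
    response = requests.post(
        f"{API_BASE}/connections",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "endpoints": [site_a, site_b],
            "bandwidth_mbps": mbps,
            "duration_hours": hours,  # scales back down automatically after the window
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# A CI/CD pipeline could call this step before a major release:
# request_bandwidth("us-east", "ap-south", mbps=5000, hours=6)
```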

For example, Netflix—one of the earliest adopters of microservices—handles billions of API requests daily through over 500 microservices and gateways, supporting global scalability and rapid innovation. Getting there took a two-year transition in which the company redesigned its IT architecture around microservices.

Elsewhere, Coca-Cola integrated its global systems using APIs, enabling faster, lower-cost delivery and improved cross-functional collaboration. And Uber moved to microservices with API gateways, allowing independent scaling and rapid deployment across markets.

In each case, the network had to evolve from being static and hardware-bound to dynamic, programmable, and consumption-based. “API-first infrastructure fits naturally into how today’s IT teams work,” says Purkayastha. “It aligns with continuous integration and continuous delivery/deployment (CI/CD) pipelines and service orchestration tools. That reduces friction and accelerates how fast enterprises can launch new services.”

Powering on-demand connectivity

Tata Communications deployed Network Fabric—its programmable platform that uses APIs to allow enterprise systems to request and adjust network resources dynamically—to help a global software-as-a-service (SaaS) company modernize how it manages network capacity in response to real-time business needs. As the company scaled its digital services worldwide, it needed a more agile, cost-efficient way to align network performance with unpredictable traffic surges and fast-changing user demands. With Tata’s platform, the company’s operations teams were able to automatically scale bandwidth in key regions for peak performance during high-impact events like global software releases, and just as quickly scale it back down once demand normalized, avoiding unnecessary costs.

In another scenario, when the SaaS provider needed to run large-scale data operations between its US and Asia hubs, the network was programmatically reconfigured in under an hour, a process that previously required weeks of planning and provisioning. “What we delivered wasn’t just bandwidth, it was the ability for their teams to take control,” says Purkayastha. “By integrating our Network Fabric APIs into their automation workflows, we gave them a network that responds at the pace of their business.”

Barriers to transformation—and how to overcome them

Transforming network infrastructure is no small task. Many enterprises still rely on legacy multiprotocol label switching (MPLS) and hardware-defined wide-area network (WAN) architectures. These environments are rigid, manually managed, and often incompatible with modern APIs or automation frameworks. The barriers can be both technical and organizational: legacy devices may not support programmable interfaces, and organizations are often siloed, meaning networks are managed separately from application and DevOps workflows.

Furthermore, CIOs face pressure for quick returns and may not even remain in the company long enough to oversee the process and results, making it harder to push for long-term network modernization strategies. “Often, it’s easier to address the low-hanging fruit rather than go after the transformation because decision-makers may not be around to see the transformation come to life,” says Purkayastha.

But quick fixes or workarounds may not yield the desired results; transformation is needed instead. “Enterprises have historically built their networks for stability, not agility,” says Purkayastha. “But now, that same rigidity becomes a bottleneck when applications, users, and workloads are distributed across the cloud, edge, and remote locations.”

Despite the challenges, there is a clear path forward, starting with overlay orchestration, well-defined API contracts, and security-first design. Instead of completely removing and replacing an existing system, many enterprises are layering APIs over existing infrastructure, enabling controlled migrations and real-time service automation.

“We don’t just help customers adopt APIs, we guide them through the operational shift it requires,” says Purkayastha. “We have blueprints for what to automate first, how to manage hybrid environments, and how to design for resilience.”

For some organizations, there will be initial resistance to the change. Fears of extra workloads or misalignment with teams’ existing goals and objectives are common, as is the deeply human distrust of change. These can be overcome, however. “There are playbooks on what we’ve done earlier—learnings from transformation—which we share with clients,” says Purkayastha. “We also plan for the unknowns. We usually reserve 10% of time and resources just to manage unforeseen risks, and the result is an empowered organization to scale innovation and reduce operational complexity.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Cybersecurity’s global alarm system is breaking down

Every day, billions of people trust digital systems to run everything from communication to commerce to critical infrastructure. But the global early warning system that alerts security teams to dangerous software flaws is showing critical gaps in coverage—and most users have no idea their digital lives are likely becoming more vulnerable.

Over the past 18 months, two pillars of global cybersecurity have flirted with apparent collapse. In February 2024, the US-backed National Vulnerability Database (NVD)—relied on globally for its free analysis of security threats—abruptly stopped publishing new entries, citing a cryptic “change in interagency support.” Then, in April of this year, the Common Vulnerabilities and Exposures (CVE) program, the fundamental numbering system for tracking software flaws, seemed at similar risk: A leaked letter warned of an imminent contract expiration.

Cybersecurity practitioners have since flooded Discord channels and LinkedIn feeds with emergency posts and memes of “NVD” and “CVE” engraved on tombstones. Unpatched vulnerabilities are the second most common way cyberattackers break in, and they have led to fatal hospital outages and critical infrastructure failures. In a social media post, Jen Easterly, former director of the US Cybersecurity and Infrastructure Security Agency (CISA), said: “Losing [CVE] would be like tearing out the card catalog from every library at once—leaving defenders to sort through chaos while attackers take full advantage.” If CVEs identify each vulnerability like a book in a card catalog, NVD entries provide the detailed review with context around severity, scope, and exploitability.
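
The difference matters in practice. The NVD exposes its records through a public REST API; the minimal sketch below uses the v2.0 endpoint and field names as they stand at the time of writing (treat them as assumptions and check NVD’s current documentation) to pull out the kind of enrichment, such as a description and a CVSS severity score, that a record stuck in the backlog lacks.

```python
# Minimal sketch: look up a single CVE in the NVD's public REST API (v2.0).
# Endpoint and field names may change over time; verify against NVD's documentation.
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def lookup_cve(cve_id: str) -> dict:
    """Fetch one CVE record and extract the enrichment the NVD layers on top of the
    bare identifier: an English description and a CVSS base score, when present."""
    resp = requests.get(NVD_API, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    vulns = resp.json().get("vulnerabilities", [])
    if not vulns:
        return {"id": cve_id, "status": "unknown or not yet analyzed"}

    cve = vulns[0]["cve"]
    metrics = cve.get("metrics", {}).get("cvssMetricV31", [])
    score = metrics[0]["cvssData"]["baseScore"] if metrics else None  # absent if unanalyzed
    description = next(
        (d["value"] for d in cve.get("descriptions", []) if d.get("lang") == "en"), ""
    )
    return {"id": cve.get("id", cve_id), "cvss_v31_base_score": score, "description": description}

# print(lookup_cve("CVE-2021-44228"))  # the Log4Shell vulnerability, for example
```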

In the end, CISA extended funding for CVE for another year, attributing the incident to a “contract administration issue.” But the NVD’s story has proved more complicated. Its parent organization, the National Institute of Standards and Technology (NIST), reportedly saw its budget cut roughly 12% in 2024, right around the time that CISA pulled its $3.7 million in annual funding for the NVD. Shortly after, as the backlog grew, CISA launched its own “Vulnrichment” program to help address the analysis gap, while promoting a more distributed approach that allows multiple authorized partners to publish enriched data.

“CISA continuously assesses how to most effectively allocate limited resources to help organizations reduce the risk of newly disclosed vulnerabilities,” says Sandy Radesky, the agency’s associate director for vulnerability management. Rather than just filling the gap, she emphasizes, Vulnrichment was established to provide unique additional information, like recommended actions for specific stakeholders, and to “reduce dependency of the federal government’s role to be the sole provider of vulnerability enrichment.”

Meanwhile, NIST has scrambled to hire contractors to help clear the backlog. Despite a return to pre-crisis processing levels, a boom in vulnerabilities newly disclosed to the NVD has outpaced these efforts. Currently, over 25,000 vulnerabilities await processing—nearly 10 times the previous high in 2017, according to data from the software company Anchore. Before that, the NVD largely kept pace with CVE publications, maintaining a minimal backlog.

“Things have been disruptive, and we’ve been going through times of change across the board,” Matthew Scholl, then chief of the computer security division in NIST’s Information Technology Laboratory, said at an industry event in April. “Leadership has assured me and everyone that NVD is and will continue to be a mission priority for NIST, both in resourcing and capabilities.” Scholl left NIST in May after 20 years at the agency, and NIST declined to comment on the backlog. 

The situation has now prompted multiple government actions, with the Department of Commerce launching an audit of the NVD in May and House Democrats calling for a broader probe of both programs in June. But the damage to trust is already transforming geopolitics and supply chains as security teams prepare for a new era of cyber risk. “It’s left a bad taste, and people are realizing they can’t rely on this,” says Rose Gupta, who builds and runs enterprise vulnerability management programs. “Even if they get everything together tomorrow with a bigger budget, I don’t know that this won’t happen again. So I have to make sure I have other controls in place.”

As these public resources falter, organizations and governments are confronting a critical weakness in our digital infrastructure: Essential global cybersecurity services depend on a complex web of US agency interests and government funding that can be cut or redirected at any time.

Security haves and have-nots

What began as a trickle of software vulnerabilities in the early Internet era has become an unstoppable avalanche, and the free databases that have tracked them for decades have struggled to keep up. In early July, the CVE database crossed over 300,000 catalogued vulnerabilities. Numbers jump unpredictably each year, sometimes by 10% or much more. Even before its latest crisis, the NVD was notorious for delayed publication of new vulnerability analyses, often trailing private security software and vendor advisories by weeks or months.

Gupta has watched organizations increasingly adopt commercial vulnerability management (VM) software that includes its own threat intelligence services. “We’ve definitely become over-reliant on our VM tools,” she says, describing security teams’ growing dependence on vendors like Qualys, Rapid7, and Tenable to supplement or replace unreliable public databases. These platforms combine their own research with various data sources to create proprietary risk scores that help teams prioritize fixes. But not all organizations can afford to fill the NVD’s gap with premium security tools. “Smaller companies and startups, already at a disadvantage, are going to be more at risk,” she explains. 

Komal Rawat, a security engineer in New Delhi whose mid-stage cloud startup has a limited budget, describes the impact in stark terms: “If NVD goes, there will be a crisis in the market. Other databases are not that popular, and to the extent they are adopted, they are not free. If you don’t have recent data, you’re exposed to attackers who do.”

The growing backlog means new devices could be more likely to have vulnerability blind spots—whether that’s a Ring doorbell at home or an office building’s “smart” access control system. The biggest risk may be “one-off” security flaws that fly under the radar. “There are thousands of vulnerabilities that will not affect the majority of enterprises,” says Gupta. “Those are the ones that we’re not getting analysis on, which would leave us at risk.”

NIST acknowledges it has limited visibility into which organizations are most affected by the backlog. “We don’t track which industries use which products and therefore cannot measure impact to specific industries,” a spokesperson says. Instead, the team prioritizes vulnerabilities on the basis of CISA’s known exploits list and those included in vendor advisories like Microsoft Patch Tuesday.

The biggest vulnerability

Brian Martin has watched this system evolve—and deteriorate—from the inside. A former CVE board member and an original project leader behind the Open Source Vulnerability Database, he has built a combative reputation over the decades as a leading historian and practitioner. Martin says his current project, VulnDB (part of Flashpoint Security), outperforms the official databases he once helped oversee. “Our team processes more vulnerabilities, at a much faster turnaround, and we do it for a fraction of the cost,” he says, referring to the tens of millions in government contracts that support the current system. 

When we spoke in May, Martin said his database contains more than 112,000 vulnerabilities with no CVE identifiers—security flaws that exist in the wild but remain invisible to organizations that rely solely on public channels. “If you gave me the money to triple my team, that non-CVE number would be in the 500,000 range,” he said.

In the US, official vulnerability management duties are split between a web of contractors, agencies, and nonprofit centers like the Mitre Corporation. Critics like Martin say that creates potential for redundancy, confusion, and inefficiency, with layers of middle management and relatively few actual vulnerability experts. Others defend the value of this fragmentation. “These programs build on or complement each other to create a more comprehensive, supportive, and diverse community,” CISA said in a statement. “That increases the resilience and usefulness of the entire ecosystem.”

As American leadership wavers, other nations are stepping up. China now operates multiple vulnerability databases, some surprisingly robust but tainted by the possibility that they are subject to state control. In May, the European Union accelerated the launch of its own database, as well as a decentralized “Global CVE” architecture. Following social media and cloud services, vulnerability intelligence has become another front in the contest for technological independence. 

That leaves security professionals to navigate multiple potentially conflicting sources of data. “It’s going to be a mess, but I would rather have too much information than none at all,” says Gupta, describing how her team monitors multiple databases despite the added complexity. 

Resetting software liability

As defenders adapt to the fragmenting landscape, the tech industry faces another reckoning: Why don’t software vendors carry more responsibility for protecting their customers from security issues? Major vendors routinely disclose—but don’t necessarily patch—thousands of new vulnerabilities each year. A single exposure could crash critical systems or increase the risks of fraud and data misuse. 

For decades, the industry has hidden behind legal shields. “Shrink-wrap licenses” once forced consumers to broadly waive their right to hold software vendors liable for defects. Today’s end-user license agreements (EULAs), often delivered in pop-up browser windows, have evolved into incomprehensibly long documents. Last November, a lab project called “EULAS of Despair” used the length of War and Peace (587,287 words) to measure these sprawling contracts. The worst offender? Twitter, at 15.83 novels’ worth of fine print.

“This is a legal fiction that we’ve created around this whole ecosystem, and it’s just not sustainable,” says Andrea Matwyshyn, a US special advisor and technology law professor at Penn State University, where she directs the Policy Innovation Lab of Tomorrow. “Some people point to the fact that software can contain a mix of products and services, creating more complex facts. But just like in engineering or financial litigation, even the most messy scenarios can be resolved with the assistance of experts.”

This liability shield is finally beginning to crack. In July 2024, a faulty security update in CrowdStrike’s popular endpoint detection software crashed millions of Windows computers worldwide and caused outages at everything from airlines to hospitals to 911 systems. The incident led to billions in estimated damages, and the city of Portland, Oregon, even declared a “state of emergency.” Now, affected companies like Delta Air Lines have hired high-priced attorneys to pursue major damages—a signal that the floodgates to litigation may be opening.

Despite the soaring number of vulnerabilities, many fall into long-established categories, such as SQL injections that interfere with database queries and buffer overflows that can enable remote code execution. Matwyshyn advocates for a mandatory “software bill of materials,” or S-BOM—an ingredients list that would let organizations understand what components and potential vulnerabilities exist throughout their software supply chains. One recent report found 30% of data breaches stemmed from the vulnerabilities of third-party software vendors or cloud service providers.
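
To make the “ingredients list” idea concrete, the fragment below sketches what a minimal S-BOM can look like, loosely following the CycloneDX JSON layout; the component names and versions are arbitrary examples, and real S-BOMs are normally generated by build tooling rather than written by hand.

```python
# Illustrative sketch of a software bill of materials in a CycloneDX-style JSON layout.
# Component names, versions, and package URLs are arbitrary examples; check the
# CycloneDX specification before relying on exact field names.
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {   # each third-party ingredient is listed with enough detail to match
            # it against vulnerability databases later
            "type": "library",
            "name": "log4j-core",
            "version": "2.14.1",
            "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1",
        },
        {
            "type": "library",
            "name": "openssl",
            "version": "3.0.7",
            "purl": "pkg:generic/openssl@3.0.7",
        },
    ],
}

with open("sbom.json", "w") as f:
    json.dump(sbom, f, indent=2)
```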

Matwyshyn adds: “When you can’t tell the difference between the companies that are cutting corners and a company that has really invested in doing right by their customers, that results in a market where everyone loses.”

CISA leadership shares this sentiment, with a spokesperson emphasizing its “secure-by-design principles,” such as “making essential security features available without additional cost, eliminating classes of vulnerabilities, and building products in a way that reduces the cybersecurity burden on customers.”

Avoiding a digital ‘dark age’

It will likely come as no surprise that practitioners are looking to AI to help fill the gap, while at the same time preparing for a coming swarm of cyberattacks by AI agents. Security researchers have used an OpenAI model to discover new “zero-day” vulnerabilities. And both the NVD and CVE teams are developing “AI-powered tools” to help streamline data collection, identification, and processing. NIST says that “up to 65% of our analysis time has been spent generating CPEs”—product information codes that pinpoint affected software. If AI can solve even part of this tedious process, it could dramatically speed up the analysis pipeline.
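
For a sense of what that work involves: a CPE (Common Platform Enumeration) is a structured product identifier that analysts attach to each record so scanners can match installed software to vulnerabilities. The short example below shows the CPE 2.3 string format with a simplified parser; it is an illustration, not NVD’s actual tooling.

```python
# Illustration of the CPE 2.3 identifier format; the parsing here is deliberately
# simplified and is not NVD's production tooling.
CPE = "cpe:2.3:a:apache:log4j:2.14.1:*:*:*:*:*:*:*"

def parse_cpe(cpe: str) -> dict:
    """Split a CPE 2.3 string into its leading fields (part, vendor, product, version)."""
    parts = cpe.split(":")
    return {"part": parts[2], "vendor": parts[3], "product": parts[4], "version": parts[5]}

print(parse_cpe(CPE))  # {'part': 'a', 'vendor': 'apache', 'product': 'log4j', 'version': '2.14.1'}
```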

But Martin cautions against optimism around AI, noting that the technology remains unproven and often riddled with inaccuracies—which, in security, can be fatal. “Rather than AI or ML [machine learning], there are ways to strategically automate bits of the processing of that vulnerability data while ensuring 99.5% accuracy,” he says. 

AI also fails to address more fundamental challenges in governance. The CVE Foundation, launched in April 2025 by breakaway board members, proposes a globally funded nonprofit model similar to that of the internet’s addressing system, which transitioned from US government control to international governance. Other security leaders are pushing to revitalize open-source alternatives like Google’s OSV Project or the NVD++ (maintained by VulnCheck), which are accessible to the public but currently have limited resources.

As these various reform efforts gain momentum, the world is waking up to the fact that vulnerability intelligence—like disease surveillance or aviation safety—requires sustained cooperation and public investment. Without it, a patchwork of paid databases will be all that remains, threatening to leave all but the richest organizations and nations permanently exposed.

Matthew King is a technology and environmental journalist based in New York. He previously worked for cybersecurity firm Tenable.

The digital future of industrial and operational work

Digital transformation has long been a boardroom buzzword—shorthand for ambitious, often abstract visions of modernization. But today, digital technologies are no longer simply concepts in glossy consultancy decks and on corporate campuses; they’re also being embedded directly into factory floors, logistics hubs, and other mission-critical, frontline environments.

This evolution is playing out across sectors: Field technicians on industrial sites are diagnosing machinery remotely with help from a slew of connected devices and data feeds, hospital teams are collaborating across geographies on complex patient care via telehealth technologies, and warehouse staff are relying on connected ecosystems to streamline inventory and fulfillment far faster than manual processes would allow.

Across all these scenarios, IT fundamentals—like remote access, unified login systems, and interoperability across platforms—are being handled behind the scenes and consolidated into streamlined, user-friendly solutions. The way employees experience these tools, collectively known as the digital employee experience (DEX), can be a key component of achieving business outcomes: Deloitte finds that companies investing in frontline-focused digital tools see a 22% boost in worker productivity, a doubling in customer satisfaction, and as much as a 25% increase in profitability.

As digital tools become everyday fixtures in operational contexts, companies face both opportunities and hurdles—and the stakes are only rising as emerging technologies like AI become more sophisticated. The organizations best positioned for an AI-first future are crafting thoughtful strategies to ensure digital systems align with the realities of daily work—and placing people at the heart of the whole process.

IT meets OT in an AI world

Despite promising returns, many companies still face a last-mile challenge in delivering usable, effective tools to the frontline. The Deloitte study notes that less than one-quarter (just 23%) of frontline workers believe they have access to the technology they need to maximize productivity. There are several possible reasons for this disconnect, including the fact that operational digital transformation faces unique challenges compared to office-based digitization efforts.

For one, many companies are using legacy systems that don’t communicate easily across dispersed or edge environments. For example, the office IT department might use completely different software than what’s running the factory floor; a hospital’s patient records might be entirely separate from the systems monitoring medical equipment. When systems can’t talk to one another, troubleshooting issues becomes a time-consuming guessing game—one that often requires manual workarounds or clunky patches.

There’s also often a clash between tech’s typical “ship first, debug later” philosophy and the careful, safety-first approach that operational environments demand. A software glitch in a spreadsheet is annoying; a snafu in a power plant or at a chemical facility can be catastrophic.

Striking a careful balance between proactive innovation and prudent precaution will become ever more important, especially as AI usage becomes more common in high-stakes, tightly regulated environments. Companies will need to navigate a growing tension between the promise of smarter operations and the reality of implementing them safely at scale.

Humans at the heart of transformation efforts

With the buzz over AI and automation reaching fever pitch, it’s easy to overlook the single most impactful factor that makes transformation stick: the human element. The convergence of IT and operational technology (OT) goes hand in hand with the rise of the digital employee experience. DEX encompasses everything from logging into systems and accessing applications to navigating networks and completing tasks across devices and locations. At its core, DEX is about ensuring technology empowers employees to work efficiently and without disruption—no matter where or how they work.

Companies investing in DEX technology are seeing measurable gains—from reduced help desk tickets and system downtime to harder-to-quantify benefits like higher employee satisfaction and retention. Frictionless digital workplaces, supported by real-time monitoring and automation capabilities, help organizations attend to IT issues before users experience disruptions or productivity levels dip.

There are real-world examples of seamless DEX in action: Swiss energy and infrastructure provider BKW, for instance, recently built a system that lets its IT team remotely assist employees experiencing technical difficulties across more than 140 subsidiaries. For employees, this means no more waiting for an in-person technician when their device freezes or software hiccups; IT can swoop in remotely and solve problems in minutes instead of hours.

The insurance company RLI faced a different but equally frustrating issue before switching to a centralized, remote IT support system: Technical issues like device lag or overheating were often left unreported, as employees didn’t want to disrupt their workflow or bother the IT team with seemingly minor complaints. Those small performance issues, however, could snowball over time, sometimes causing devices to fail completely. To get ahead of this phenomenon, RLI installed monitoring software to observe device performance in real time and catch issues proactively. Now, when a laptop gets too hot or starts slowing down, IT can address it right away—often before the employee even knows there’s a problem.

Ultimately, the organizations making the biggest strides in DEX recognize that digital transformation is as much about experience as it is about infrastructure. When digital tools feel like helpful extensions of workers’ expertise—rather than obstacles standing in the way of their workday—companies are in a better position to realize the full benefits of their investments.

Smart systems and smarter safeguards

Of course, as operational systems become more interconnected, security vulnerabilities multiply in turn. Consider this hypothetical: In a busy manufacturing plant, a piece of machinery suddenly breaks down. Instead of waiting hours for a technician to arrive on-site, a local operator deploys a mobile augmented reality device that projects step-by-step diagnostic instructions onto the machine. Following guidance from a remote specialist, the operator fixes the equipment and has production back on track in mere minutes.

This snappy and streamlined approach to diagnostics is undeniably efficient, but it opens up the factory floor to multiple external touchpoints: live video feeds streaming to remote experts, cloud databases containing sensitive repair procedures, and direct access to the machine’s diagnostic systems. Suddenly, a manufacturing plant that used to be an island is now part of an interconnected network.

Smart companies are getting practical about the challenges associated with this expanding threat surface. For instance, BKW has taken a structured approach to permissions: Subsidiary IT teams can only access their own company’s devices, outside contractors get temporary access for specific tasks, and employees can reach certain high-powered workstations when they need them.
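
A policy like that boils down to a small set of scoping rules. The toy sketch below mirrors the structure described above (subsidiary-only access, time-boxed contractor access, and employee access limited to designated shared workstations); the roles and fields are invented for illustration and do not represent BKW’s actual implementation.

```python
# Toy illustration of scoped access rules like those described above.
# Roles, fields, and logic are invented examples, not any vendor's real implementation.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class User:
    name: str
    role: str                                  # "subsidiary_it", "contractor", or "employee"
    subsidiary: Optional[str] = None
    access_expires: Optional[datetime] = None  # set only for contractors

@dataclass
class Device:
    device_id: str
    subsidiary: str
    shared_workstation: bool = False

def can_access(user: User, device: Device, now: datetime) -> bool:
    """Apply the scoping rules: own-subsidiary devices only, time-boxed contractor
    access, and employee access restricted to designated shared workstations."""
    if user.role == "subsidiary_it":
        return user.subsidiary == device.subsidiary
    if user.role == "contractor":
        return user.access_expires is not None and now < user.access_expires
    if user.role == "employee":
        return device.shared_workstation
    return False
```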

Bühler, a global industrial equipment manufacturer, also uses centrally managed access controls to govern who can connect to which platforms, as well as when and under what conditions. By enforcing consistent policies from its headquarters, the company ensures all remote support activities are fully monitored and aligned with strict cybersecurity protocols, including compliance with ISO 27001 standards. The system allows Bühler’s extensive global technician network to provide real-time assistance without compromising system integrity.

The power of practical innovation

How do you help a technician troubleshoot equipment when the expert is 500 miles away? How do you catch IT problems before they shut down a production line? How do you keep operations secure without burying workers in passwords and protocols?

These are the kinds of practical questions that companies like Bühler, BKW, and RLI Insurance have focused on solving—and it’s part of why they’re succeeding where others struggle. These examples demonstrate a genuine shift in how successful companies think about technology and transformation. Instead of asking, “What’s the latest digital trend we should adopt?” they’re assessing, “What problems are our people actually trying to solve?”

The organizations pulling ahead to digitally transform frontline operations are the ones that have learned to make complex systems feel simple, intuitive, and secure to boot. Such a practical approach will only become more pressing as AI introduces new layers of complexity to operational work.

Ready to make work work better for your business? Learn how at TeamViewer.com.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Book review: Surveillance & privacy

Privacy only matters to those with something to hide. So goes one of the more inane and disingenuous justifications for mass government and corporate surveillance. There are others, of course, but the “nothing to hide” argument remains a popular way to rationalize or excuse what’s become standard practice in our digital age: the widespread and invasive collection of vast amounts of personal data.

One common response to this line of reasoning is that everyone, in fact, has something to hide, whether they realize it or not. If you’re unsure of whether this holds true for you, I encourage you to read Means of Control by Byron Tau. 

Means of Control: How the Hidden Alliance of Tech and Government Is Creating a New American Surveillance State
Byron Tau
CROWN, 2024

Midway through his book, Tau, an investigative journalist, recalls meeting with a disgruntled former employee of a data broker—a shady company that collects, bundles, and sells your personal data to other (often shadier) third parties, including the government. This ex-employee had managed to make off with several gigabytes of location data representing the precise movements of tens of thousands of people over the course of a few weeks. “What could I learn with this [data]—theoretically?” Tau asks the former employee. The answer includes a laundry list of possibilities that I suspect would make even the most enthusiastic oversharer uncomfortable.

Did someone in this group recently visit an abortion clinic? That would be easy to figure out, says the ex-employee. Anyone attend an AA meeting or check into inpatient drug rehab? Again, pretty simple to discern. Is someone being treated for erectile dysfunction at a sexual health clinic? If so, that would probably be gleanable from the data too. Tau never opts to go down that road, but as Means of Control makes very clear, others certainly have done so and will.

While most of us are at least vaguely aware that our phones and apps are a vector for data collection and tracking, both the way in which this is accomplished and the extent to which it happens often remain murky. Purposely so, argues Tau. In fact, one of the great myths Means of Control takes aim at is the very idea that what we do with our devices can ever truly be anonymized. Each of us has habits and routines that are completely unique, he says, and if an advertiser knows you only as an alphanumeric string provided by your phone as you move about the world, and not by your real name, that still offers you virtually no real privacy protection. (You’ll perhaps not be surprised to learn that such “anonymized ad IDs” are relatively easy to crack.)

“I’m here to tell you if you’ve ever been on a dating app that wanted your location, or if you ever granted a weather app permission to know where you are 24/7, there’s a good chance a detailed log of your precise movement patterns has been vacuumed up and saved in some data bank somewhere that tens of thousands of total strangers have access to,” writes Tau.

Unraveling the story of how these strangers—everyone from government intelligence agents and local law enforcement officers to private investigators and employees of ad tech companies—gained access to our personal information is the ambitious task Tau sets for himself, and he begins where you might expect: the immediate aftermath of 9/11.

At no other point in US history was the government’s appetite for data more voracious than in the days after the attacks, says Tau. It was a hunger that just so happened to coincide with the advent of new technologies, devices, and platforms that excelled at harvesting and serving up personal information that had zero legal privacy protections. 

Over the course of 22 chapters, Tau gives readers a rare glimpse inside the shadowy industry, “built by corporate America and blessed by government lawyers,” that emerged in the years and decades following the 9/11 attacks. In the hands of a less skilled reporter, this labyrinthine world of shell companies, data vendors, and intelligence agencies could easily become overwhelming or incomprehensible. But Tau goes to great lengths to connect dots and plots, explaining how a perfect storm of business motivations, technological breakthroughs, government paranoia, and lax or nonexistent privacy laws combined to produce the “digital panopticon” we are all now living in.

Means of Control doesn’t offer much comfort or reassurance for privacy­-minded readers, but that’s arguably the point. As Tau notes repeatedly throughout his book, this now massive system of persistent and ubiquitous surveillance works only because the public is largely unaware of it. “If information is power, and America is a society that’s still interested in the guarantee of liberty, personal dignity, and the individual freedom of its citizens, a serious conversation is needed,” he writes. 

As another new book makes clear, this conversation also needs to include student data. Lindsay Weinberg’s Smart University: Student Surveillance in the Digital Age reveals how the motivations and interests of Big Tech are transforming higher education in ways that are increasingly detrimental to student privacy and, arguably, education as a whole.

Smart University: Student Surveillance in the Digital Age
Lindsay Weinberg
JOHNS HOPKINS UNIVERSITY PRESS, 2024

By “smart university,” Weinberg means the growing number of public universities across the country that are being restructured around “the production and capture of digital data.” Similar in vision and application to so-called “smart cities,” these big-data-pilled institutions are increasingly turning to technologies that can track students’ movements around campus, monitor how much time they spend on learning management systems, flag those who seem to need special “advising,” and “nudge” others toward specific courses and majors. “What makes these digital technologies so seductive to higher education administrators, in addition to promises of cost cutting, individualized student services, and improved school rankings, is the notion that the integration of digital technology on their campuses will position universities to keep pace with technological innovation,” Weinberg writes. 

Readers of Smart University will likely recognize a familiar logic at play here. Driving many of these academic tracking and data-gathering initiatives is a growing obsession with efficiency, productivity, and convenience. The result is a kind of Silicon Valley optimization mindset, but applied to higher education at scale. Get students in and out of university as fast as possible, minimize attrition, relentlessly track performance, and do it all under the guise of campus modernization and increased personalization. 

Under this emerging system, students are viewed less as self-empowered individuals and more as “consumers to be courted, future workers to be made employable for increasingly smart workplaces, sources of user-generated content for marketing and outreach, and resources to be mined for making campuses even smarter,” writes Weinberg. 

At the heart of Smart University seems to be a relatively straightforward question: What is an education for? Although Weinberg doesn’t provide a direct answer, she shows that how a university (or society) decides to answer that question can have profound impacts on how it treats its students and teachers. Indeed, as the goal of education becomes less to produce well-rounded humans capable of thinking critically and more to produce “data subjects capable of being managed and who can fill roles in the digital economy,” it’s no wonder we’re increasingly turning to the dumb idea of smart universities to get the job done.  

If books like Means of Control and Smart University do an excellent job exposing the extent to which our privacy has been compromised, commodified, and weaponized (which they undoubtedly do), they can also start to feel a bit predictable in their final chapters. Familiar codas include calls for collective action, buttressed by a hopeful anecdote or two detailing previously successful pro-privacy wins; nods toward a bipartisan privacy bill in the works or other pieces of legislation that could potentially close some glaring surveillance loophole; and, most often, technical guides that explain how each of us, individually, might better secure or otherwise take control and “ownership” of our personal data.

The motivations behind these exhortations and privacy-centric how-to guides are understandable. After all, it’s natural for readers to want answers, advice, or at least some suggestion that things could be different—especially after reading about the growing list of degradations suffered under surveillance capitalism. But it doesn’t take a skeptic to start to wonder if they’re actually advancing the fight for privacy in the way that its advocates truly want.

For one thing, technology tends to move much faster than any one smartphone privacy guide or individual law could ever hope to keep up with. Similarly, framing rampant privacy abuses as a problem we each have to be responsible for addressing individually seems a lot like framing the plastic pollution crisis as something Americans could have somehow solved by recycling. It’s both a misdirection and a misunderstanding of the problem.     

It’s to his credit, then, that Lowry Pressly doesn’t include a “What is to be done” section at the end of The Right to Oblivion: Privacy and the Good Life. In lieu of offering up any concrete technical or political solutions, he simply reiterates an argument he has carefully and convincingly built over the course of his book: that privacy is important “not because it empowers us to exercise control over our information, but because it protects against the creation of such information in the first place.” 

cover of The Right to Oblivion
The Right to Oblivion: Privacy and the Good Life
Lowry Pressly
HARVARD UNIVERSITY PRESS, 2024

For Pressly, a Stanford instructor, the way we currently understand and value privacy has been tainted by what he calls “the ideology of information.” “This is the idea that information has a natural existence in human affairs,” he writes, “and that there are no aspects of human life which cannot be translated somehow into data.” This way of thinking not only leads to an impoverished sense of our own humanity—it also forces us into the conceptual trap of debating privacy’s value using a framework (control, consent, access) established by the companies whose business model is to exploit it.

The way out of this trap is to embrace what Pressly calls “oblivion,” a kind of state of unknowing, ambiguity, and potential—or, as he puts it, a realm “where there is no information or knowledge one way or the other.” While he understands that it’s impossible to fully escape a modern world intent on turning us into data subjects, Pressly’s book suggests we can and should support the idea that certain aspects of our (and others’) subjective interior lives can never be captured by information. Privacy is important because it helps to both protect and produce these ineffable parts of our lives, which in turn gives them a sense of dignity, depth, and the possibility for change and surprise. 

Reserving or cultivating a space for oblivion in our own lives means resisting the logic that drives much of the modern world. Our inclination to “join the conversation,” share our thoughts, and do whatever it is we do when we create and curate a personal brand has become so normalized that it’s practically invisible to us. According to Pressly, all that effort has only made our lives and relationships shallower, less meaningful, and less trusting.

Calls for putting our screens down and stepping away from the internet are certainly nothing new. And while The Right to Oblivion isn’t necessarily prescriptive about such things, Pressly does offer a beautiful and compelling vision of what can be gained when we retreat not just from the digital world but from the idea that we are somehow knowable to that world in any authentic or meaningful way. 

If all this sounds a bit philosophical, well, it is. But it would be a mistake to think of The Right to Oblivion as a mere thought exercise on privacy. Part of what makes the book so engaging and persuasive is the way in which Pressly combines a philosopher’s knack for uncovering hidden assumptions with a historian’s interest in and sensitivity to older (often abandoned) ways of thinking, and how they can often enlighten and inform modern problems.

Pressly isn’t against efforts to pass more robust privacy legislation, or even to learn how to better protect our devices against surveillance. His argument is that in order to guide such efforts, you have to both ask the right questions and frame the problem in a way that gives you and others the moral clarity and urgency to act. Your phone’s privacy settings are important, but so is understanding what you’re protecting when you change them. 

Bryan Gardiner is a writer based in Oakland, California. 

IBM aims to build the world’s first large-scale, error-corrected quantum computer by 2028

IBM announced detailed plans today to build an error-corrected quantum computer with significantly more computational capability than existing machines by 2028. It hopes to make the computer available to users via the cloud by 2029. 

The proposed machine, named Starling, will consist of a network of modules, each of which contains a set of chips, housed within a new data center in Poughkeepsie, New York. “We’ve already started building the space,” says Jay Gambetta, vice president of IBM’s quantum initiative.

IBM claims Starling will be a leap forward in quantum computing. In particular, the company aims for it to be the first large-scale machine to implement error correction. If Starling achieves this, IBM will have solved arguably the biggest technical hurdle facing the industry today, beating competitors including Google, Amazon Web Services, and smaller startups such as Boston-based QuEra and PsiQuantum of Palo Alto, California. 

IBM, along with the rest of the industry, has years of work ahead. But Gambetta thinks the company has an edge because it already has all the building blocks for error correction in a large-scale machine, spanning improvements in everything from algorithm development to chip packaging. “We’ve cracked the code for quantum error correction, and now we’ve moved from science to engineering,” he says. 

Correcting errors in a quantum computer has been an engineering challenge, owing to the unique way the machines crunch numbers. Whereas classical computers encode information in the form of bits, or binary 1 and 0, quantum computers instead use qubits, which can represent “superpositions” of both values at once. IBM builds qubits made of tiny superconducting circuits, kept near absolute zero, in an interconnected layout on chips. Other companies have built qubits out of other materials, including neutral atoms, ions, and photons.
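For readers who want to see the formalism, a qubit’s state is conventionally written as a weighted combination of the two classical values. This is standard textbook notation, not anything specific to IBM’s hardware:

```latex
% Standard notation for a single qubit in superposition (a general
% formalism, not specific to IBM's machines): alpha and beta are complex
% amplitudes whose squared magnitudes give the probabilities of measuring
% 0 or 1.
\[
  |\psi\rangle = \alpha|0\rangle + \beta|1\rangle,
  \qquad |\alpha|^2 + |\beta|^2 = 1.
\]
```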

Quantum computers sometimes commit errors, such as when the hardware operates on one qubit but accidentally also alters a neighboring qubit that should not be involved in the computation. These errors add up over time. Without error correction, quantum computers cannot accurately perform the complex algorithms that are expected to be the source of their scientific or commercial value, such as extremely precise chemistry simulations for discovering new materials and pharmaceutical drugs. 

But error correction requires significant hardware overhead. Instead of encoding a single unit of information in a single “physical” qubit, error correction algorithms encode a unit of information in a constellation of physical qubits, referred to collectively as a “logical qubit.”

Currently, quantum computing researchers are competing to develop the best error correction scheme. Google’s surface code algorithm, while effective at correcting errors, requires on the order of 100 physical qubits to store a single logical qubit in memory. AWS’s Ocelot quantum computer uses a more efficient error correction scheme that requires nine physical qubits per logical qubit in memory. (The overhead is higher for qubits performing computations than for those storing data.) IBM’s error correction algorithm, known as a low-density parity check code, will make it possible to use 12 physical qubits per logical qubit in memory, a ratio comparable to AWS’s. 

One distinguishing characteristic of Starling’s design will be its anticipated ability to diagnose errors, known as decoding, in real time. Decoding involves determining whether a measured signal from the quantum computer corresponds to an error. IBM has developed a decoding algorithm that can be quickly executed by a type of conventional chip known as an FPGA. This work bolsters the “credibility” of IBM’s error correction method, says Neil Gillespie of the UK-based quantum computing startup Riverlane. 

However, other error correction schemes and hardware designs aren’t out of the running yet. “It’s still not clear what the winning architecture is going to be,” says Gillespie. 

IBM intends Starling to be able to perform computational tasks beyond the capability of classical computers. Starling will have 200 logical qubits, built from the company’s chips. It should be able to perform 100 million logical operations in a row accurately; existing quantum computers can manage only a few thousand. 
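As a rough back-of-the-envelope illustration (my arithmetic, using only the memory-overhead ratios quoted above, not any company’s published specification), here is what those ratios imply for the physical qubits needed just to store Starling’s 200 logical qubits. Qubits that perform computations carry higher overhead, so these are floor estimates:

```python
# Back-of-the-envelope estimate: physical qubits needed just to *store*
# 200 logical qubits, under the memory-overhead ratios quoted in the
# article. Computation adds further overhead, so these are rough lower
# bounds, not published specifications for any machine.

MEMORY_OVERHEAD = {                  # physical qubits per logical qubit (memory)
    "surface code (Google)": 100,    # "on the order of 100"
    "Ocelot (AWS)": 9,
    "LDPC code (IBM)": 12,
}

LOGICAL_QUBITS = 200                 # Starling's stated target

for scheme, ratio in MEMORY_OVERHEAD.items():
    print(f"{scheme}: ~{LOGICAL_QUBITS * ratio:,} physical qubits for memory")
```

Even the most efficient of these ratios implies thousands of physical qubits, which gives a sense of why scaling and modularity dominate the road map described below.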

The system will demonstrate error correction at a much larger scale than anything done before, claims Gambetta. Previous error correction demonstrations, such as those by Google and Amazon, involved a single logical qubit built from a single chip. Gambetta calls them “gadget experiments,” saying, “They’re small-scale.” 

Still, it’s unclear whether Starling will be able to solve practical problems. Some experts think that you need a billion error-corrected logical operations to execute any useful algorithm. Starling represents “an interesting stepping-stone regime,” says Wolfgang Pfaff, a physicist at the University of Illinois Urbana-Champaign. “But it’s unlikely that this will generate economic value.” (Pfaff, who studies quantum computing hardware, has received research funding from IBM but is not involved with Starling.) 

The timeline for Starling looks feasible, according to Pfaff. The design is “based in experimental and engineering reality,” he says. “They’ve come up with something that looks pretty compelling.” But building a quantum computer is hard, and it’s possible that IBM will encounter delays due to unforeseen technical complications. “This is the first time someone’s doing this,” he says of making a large-scale error-corrected quantum computer.

IBM’s road map calls for building a series of smaller machines before Starling. This year, it plans to demonstrate that error-corrected information can be stored robustly in a chip called Loon. Next year the company will build Kookaburra, a module that can both store information and perform computations. By the end of 2027, it plans to connect two Kookaburra-type modules into a larger quantum computer, Cockatoo. If that demonstration succeeds, the next step is to scale up, connecting around 100 modules to create Starling.

This strategy, says Pfaff, reflects the industry’s recent embrace of “modularity” when scaling up quantum computers—networking multiple modules together to create a larger quantum computer rather than laying out qubits on a single chip, as researchers did in earlier designs. 

IBM is also looking beyond 2029. After Starling, it plans to build another machine, Blue Jay. (“I like birds,” says Gambetta.) Blue Jay will contain 2,000 logical qubits and is expected to be capable of a billion logical operations.

Driving business value by optimizing the cloud

Organizations are deepening their cloud investments at an unprecedented pace, recognizing its fundamental role in driving business agility and innovation. Synergy Research Group reports that companies spent $84 billion worldwide on cloud infrastructure services in the third quarter of 2024, a 23% rise over the third quarter of 2023 and the fourth consecutive quarter in which the year-on-year growth rate has increased.

Cloud services allow users to access IT systems from anywhere in the world while ensuring solutions remain highly configurable and automated.

At the same time, hosted services like generative AI and tailored industry solutions can help companies quickly launch applications and grow the business. To get the most out of these services, companies are turning to cloud optimization—the process of selecting and allocating cloud resources to reduce costs while maximizing performance.

But despite all the interest in the cloud, many workloads remain stranded on-premises, and many more are not optimized for efficiency and growth, greatly limiting forward momentum. Companies are missing out on the virtuous cycle of mutually reinforcing results that comes from more efficient use of the cloud.

By optimizing the cloud, organizations can enhance security, make critical workloads more resilient, protect the customer experience, boost revenues, and generate cost savings. These benefits can fuel growth and avert expenses, generating capital that can be invested in innovation.

“Cloud optimization involves making sure that your cloud spending is efficient so you’re not spending wastefully,” says André Dufour, Director and General Manager for AWS Cloud Optimization at Amazon Web Services. “But you can’t think of it only as cost savings at the expense of other things. Dollars freed up through optimization can be redirected to fund net new innovations, like generative AI.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

How a 1980s toy robot arm inspired modern robotics

The child of an electronic engineer, I spent a lot of time in our local Radio Shack growing up. While my dad was locating capacitors and resistors, I was in the toy section. It was there, in 1984, that I discovered the best toy of my childhood: the Armatron robotic arm. 

A drawing from the patent application for the Armatron robotic arm.
COURTESY OF TAKARA TOMY

Described as a “robot-like arm to aid young masterminds in scientific and laboratory experiments,” it was the rare toy that lived up to the hype printed on the front of the box. This was a legit robotic arm. You could rotate the arm to spin around its base, tilt it up and down, bend it at the “elbow” joint, rotate the “wrist,” and open and close the bright-orange articulated hand in elegant chords of movement, all using only the twistable twin joysticks. 

Anyone who played with this toy will also remember the sound it made. Once you slid the power button to the On position, you heard a constant whirring sound of plastic gears turning and twisting. And if you tried to push it past its boundaries, it twitched and protested with a jarring “CLICK … CLICK … CLICK.”

It wasn’t just kids who found the Armatron so special. It was featured on the cover of the November/December 1982 issue of Robotics Age magazine, which noted that the $31.95 toy (about $96 today) had “capabilities usually found only in much more expensive experimental arms.”

Pieces of the Armatron disassembled and arranged on a table

JIM GOLDEN

A few years ago I found my Armatron, and when I opened the case to get it working again, I was startled to find that other than the compartment for the pair of D-cell batteries, a switch, and a tiny three-volt DC motor, this thing was totally devoid of any electronic components. It was purely mechanical. Later, I found the patent drawings for the Armatron online and saw how incredibly complex the schematics of the gearbox were. This design was the work of a genius—or a madman.

The man behind the arm

I needed to know the story of this toy. I reached out to the manufacturer, Tomy (now known as Takara Tomy), which has been in business in Japan for over 100 years. It put me in touch with Hiroyuki Watanabe, a 69-year-old engineer and toy designer living in Tokyo. He’s retired now, but he worked at Tomy for 49 years, building many classic handheld electronic toys of the ’80s, including Blip, Digital Diamond, Digital Derby, and Missile Strike. Watanabe’s name can be found on 44 patents, and he was involved in bringing between 50 and 60 products to market. Watanabe answered emailed questions via video, and his responses were translated from Japanese.

“I didn’t have a period where I studied engineering professionally. Instead, I enrolled in what Japan would call a technical high school that trains technical engineers, and I actually [entered] the electrical department there,” he told me. 

Afterward, he worked at Komatsu Manufacturing—because, he said, he liked bulldozers. But in 1974, he saw that Tomy was hiring, and he wanted to make toys. “I was told that it was the No. 1 toy company in Japan, so I decided [it was worth a look],” he said. “I took a night train from Tohoku to Tokyo to take a job exam, and that’s how I ended up joining the company.”

The inspiration for the Armatron came from a newspaper clipping that Watanabe’s boss brought to him one day. “It showed an image of a [mechanical arm] holding an egg with three fingers. I think we started out thinking, ‘This is where things are heading these days, so let’s make this,’” he recalled. 

As the lead of a small team, Watanabe briefly turned his attention to another project, and by the time he returned to the robotic arm, the team had a prototype. But it was quite different from the Armatron’s final form. “The hand stuck out from the main body to the side and could only move about 90 degrees. The control panel also had six movement positions, and they were switched using six switches. I personally didn’t like that,” said Watanabe. So he went back to work.

The Armatron’s inventor, Hiroyuki Watanabe, in Tokyo in 2025
COURTESY OF TAKARA TOMY

Watanabe’s breakthrough was inspired by the radio-controlled helicopters he operated as a hobby. Holding up a radio remote controller with dual joystick controls, he told me, “This stick operation allows you to perform four movements with two arms, but I thought that if you twist this part, you can use six movements.”

Watanabe at work at Tomy in Tokyo in 1982.
COURTESY OF HIROYUKI WATANABE

“I had always wanted to create a system that could rotate 360 degrees, so I thought about how to make that system work,” he added.

Watanabe stressed that while he is listed as the Armatron’s primary inventor, it was a team effort. A designer created the case, colors, and logo, adding touches to mimic features seen on industrial robots of the time, such as the rubber tubes (which are just for looks). 

When the Armatron first came out, in 1981, robotics engineers started contacting Watanabe. “I wasn’t so much hearing from people at toy stores, but rather from researchers at university laboratories, factories, and companies that were making industrial robots,” he said. “They were quite encouraging, and we often talked together.”

The long reach of the robot at Radio Shack

The bold look and function of the Armatron made quite an impression on many young kids who would one day have careers in robotics.

One of them was Adam Borrell, a mechanical design engineer who has been building robots for 15 years at Boston Dynamics, including Petman, the YouTube-famous Atlas, and the dog-size quadruped called Spot. 

Borrell grew up a few blocks away from a Radio Shack in New York City. “If I was going to the subway station, we would walk right by Radio Shack. I would stop in and play with it and set the timer, do the challenges,” he says. “I know it was a toy, but that was a real robot.” The Armatron was the hook that lured him into Radio Shack and then sparked his lifelong interest in engineering: “I would roll pennies and use them to buy soldering irons and solder at Radio Shack.” 


Borrell had a fateful reunion with the toy while in grad school for engineering. “One of my office mates had an Armatron at his desk,” he recalls, “and it was broken. We took it apart together, and that was the first time I had seen the guts of it. 

“It had this fantastic mechanical gear train to just engage and disengage this one motor in a bunch of different ways. And it was really fascinating that it had done so much—the one little motor. And that sort of got me back thinking about industrial robot arms again.” 

Eric Paulos, a professor of electrical engineering and computer science at the University of California, Berkeley, recalls nagging his parents about what an educational gift Armatron would make. Ultimately, he succeeded in his lobbying. 

“It was just endless exploration of picking stuff up and moving it around and even just watching it move. It was mesmerizing to me. I felt like I really owned my own little robot,” he recalls. “I cherish this thing. I still have it to this day, and it’s still working.” 

The Armatron on the cover of the November/December 1982 issue of Robotics Age magazine.
PUBLIC DOMAIN

Today, Paulos builds robots and teaches his students how to build their own. He challenges them to solve problems within constraints, such as building with cardboard or Play-Doh; he believes the restrictions facing Watanabe and his team ultimately forced them to be more creative in their engineering.

It’s not very hard to draw connections between the Armatron—an impossibly analog robot—and highly advanced machines that are today learning to move in incredible new ways, powered by AI advancements like computer vision and reinforcement learning.

Paulos sees parallels between the problems he tackled as a kid with his Armatron and those that researchers are still trying to deal with today: “What happens when you pick things up and they’re too heavy, but you can sort of pick it up if you approach it from different angles? Or how do you grip things? There’s research to this day using AI to try to figure out optimal ways to grab objects that [a robot] sees in a bin or out in the world.”

While AI may be taking over the world of robotics, the field still requires engineers—builders and tinkerers who can problem-solve in the physical world. 

A page from the 1984 Radio Shack catalogue, featuring the Armatron for $31.95.
COURTESY OF RADIOSHACKCATALOGS.COM

The Armatron encouraged kids to explore these analog mechanics, a reminder that not all breakthroughs happen on a computer screen. And that hands-on curiosity hasn’t faded. Today, a new generation of fans is rediscovering the Armatron through online communities and DIY modifications. Dozens of Armatron videos are on YouTube, including one in which the arm has been modified to run on steam power.

“I’m very happy to see people who love mechanisms are amazed,” Watanabe told me. “I’m really happy that there are still people out there who love our products in this way.” 

Jon Keegan writes about technology and AI and publishes Beautiful Public Data, a curated collection of government data sets (beautifulpublicdata.com).