Meet the man hunting the spies in your smartphone

In April 2025, Ronald Deibert left all electronic devices at home in Toronto and boarded a plane. When he landed in Illinois, he took a taxi to a mall and headed directly to the Apple Store to purchase a new laptop and iPhone. He’d wanted to keep the risk of having his personal devices confiscated to a minimum, because he knew his work made him a prime target for surveillance. “I’m traveling under the assumption that I am being watched, right down to exactly where I am at any moment,” Deibert says.

Deibert directs the Citizen Lab, a research center he founded in 2001 to serve as “counterintelligence for civil society.” Housed at the University of Toronto, the lab operates independently of governments or corporate interests, relying instead on research grants and private philanthropy for financial support. It’s one of the few institutions that investigate cyberthreats exclusively in the public interest, and in doing so, it has exposed some of the most egregious digital abuses of the past two decades.

For many years, Deibert and his colleagues have held up the US as the standard for liberal democracy. But that’s changing, he says: “The pillars of democracy are under assault in the United States. For many decades, in spite of its flaws, it has upheld norms about what constitutional democracy looks like or should aspire to. [That] is now at risk.”

Even as some of his fellow Canadians avoided US travel after Donald Trump’s second election, Deibert relished the opportunity to visit. Alongside his meetings with human rights defenders, he also documented active surveillance at Columbia University during the height of its student protests. Deibert snapped photos of drones above campus and noted the exceptionally strict security protocols. “It was unorthodox to go to the United States,” he says. “But I really gravitate toward problems in the world.”


Deibert, 61, grew up in East Vancouver, British Columbia, a gritty area with a boisterous countercultural presence. In the ’70s, Vancouver brimmed with draft dodgers and hippies, but Deibert points to American investigative journalism—exposing the COINTELPRO surveillance program, the Pentagon Papers, Watergate—as the seed of his respect for antiestablishment sentiment. He didn’t imagine that this fascination would translate into a career, however.

“My horizons were pretty low because I came from a working-class family, and there weren’t many people in my family—in fact, none—who went on to university,” he says.

Deibert eventually entered a graduate program in international relations at the University of British Columbia. His doctoral research brought him to a field of inquiry that would soon explode: the geopolitical implications of the nascent internet.

“In my field, there were a handful of people beginning to talk about the internet, but it was very shallow, and that frustrated me,” he says. “And meanwhile, computer science was very technical, but not political—[politics] was almost like a dirty word.”

Deibert continued to explore these topics at the University of Toronto when he was appointed to a tenure-track professorship, but it wasn’t until after he founded the Citizen Lab in 2001 that his work rose to global prominence. 

What put the lab on the map, Deibert says, was its 2009 report “Tracking GhostNet,” which uncovered a digital espionage network in China that had breached offices of foreign embassies and diplomats in more than 100 countries, including the office of the Dalai Lama. The report and its follow-up in 2010 were among the first to publicly expose cybersurveillance in real time. In the years since, the lab has published over 180 such analyses, garnering praise from human rights advocates ranging from Margaret Atwood to Edward Snowden.

The lab has rigorously investigated authoritarian regimes around the world (Deibert says both Russia and China have his name on a “list” barring his entry). The group was the first to uncover the use of commercial spyware to surveil people close to the Saudi dissident and Washington Post journalist Jamal Khashoggi prior to his assassination, and its research has directly informed G7 and UN resolutions on digital repression and led to sanctions on spyware vendors. Even so, in 2025 US Immigration and Customs Enforcement reactivated a $2 million contract with the spyware vendor Paragon that the Biden administration had previously placed under a stop-work order. The move resembles steps taken by governments in Europe and Israel that have also deployed spyware domestically to address security concerns.

“It saves lives, quite literally,” Cindy Cohn, executive director of the Electronic Frontier Foundation, says of the lab’s work. “The Citizen Lab [researchers] were the first to really focus on technical attacks on human rights activists and democracy activists all around the world. And they’re still the best at it.”


When recruiting new Citizen Lab employees (or “Labbers,” as they refer to one another), Deibert forgoes stuffy, pencil-pushing academics in favor of brilliant, colorful personalities, many of whom personally experienced repression from some of the same regimes the lab now investigates.

Noura Aljizawi, a researcher on digital repression who survived torture at the hands of the al-Assad regime in Syria, researches the distinct threat that digital technologies pose to women and queer people, particularly when deployed against exiled nationals. She helped create Security Planner, a tool that gives personalized, expert-reviewed guidance to people looking to improve their digital hygiene, for which the University of Toronto awarded her an Excellence Through Innovation Award. 

Work for the lab is not without risk. Citizen Lab fellow Elies Campo, for example, was followed and photographed after the lab published a 2022 report that exposed the digital surveillance of dozens of Catalonian citizens and members of parliament, including four Catalonian presidents who were targeted during or after their terms.

Still, the lab’s reputation and mission make recruitment fairly easy, Deibert says. “This good work attracts a certain type of person,” he says. “But they’re usually also drawn to the sleuthing. It’s detective work, and that can be highly intoxicating—even addictive.”

Deibert frequently deflects the spotlight to his fellow Labbers. He rarely discusses the group’s accomplishments without referencing two senior researchers, Bill Marczak and John Scott-Railton, alongside other staffers. And on the occasion that someone decides to leave the Citizen Lab to pursue another position, this appreciation remains.

“We have a saying: Once a Labber, always a Labber,” Deibert says.


While in the US, Deibert taught a seminar on the Citizen Lab’s work to Northwestern University undergraduates and delivered talks on digital authoritarianism at the Columbia University Graduate School of Journalism. Universities in the US had been subjected to funding cuts and heightened scrutiny from the Trump administration, and Deibert wanted to be “in the mix” at such institutions to respond to what he sees as encroaching authoritarian practices by the US government. 

Since Deibert’s return to Canada, the lab has continued its work unearthing digital threats to civil society worldwide, but now Deibert must also contend with the US—a country that was once his benchmark for democracy but has become another subject of his scrutiny. “I do not believe that an institution like the Citizen Lab could exist right now in the United States,” he says. “The type of research that we pioneered is under threat like never before.”

He is particularly alarmed by the increasing pressures facing federal oversight bodies and academic institutions in the US. In September, for example, the Trump administration defunded the Council of the Inspectors General on Integrity and Efficiency, a government organization dedicated to preventing waste, fraud, and abuse within federal agencies, citing partisanship concerns. The White House has also threatened to freeze federal funding to universities that do not comply with administration directives related to gender, DEI, and campus speech. These sorts of actions, Deibert says, undermine the independence of watchdogs and research groups like the Citizen Lab. 

Cohn, the director of the EFF, says the lab’s location in Canada allows it to avoid many of these attacks on institutions that provide accountability. “Having the Citizen Lab based in Toronto and able to continue to do its work largely free of the things we’re seeing in the US,” she says, “could end up being tremendously important if we’re going to return to a place of the rule of law and protection of human rights and liberties.” 

Finian Hazen is a journalism and political science student at Northwestern University.

Securing VMware workloads in regulated industries

At a regional hospital, a cardiac patient’s lab results sit behind layers of encryption, accessible to his surgeon but shielded from anyone without a strict need to know. Across the street at a credit union, a small business owner anxiously awaits the all-clear for a wire transfer, unaware that fraud detection systems have flagged it for further review.

Such scenarios illustrate how companies in regulated industries juggle competing directives: Move data and process transactions quickly enough to save lives and support livelihoods, but carefully enough to maintain ironclad security and satisfy regulatory scrutiny.

Organizations subject to such oversight walk a fine line every day. And recently, a number of curveballs have thrown off that hard-won equilibrium. Agencies are ramping up oversight amid escalating data privacy concerns; insurers are tightening underwriting and requiring controls such as multifactor authentication (MFA) and privileged-access governance as a condition of coverage. Meanwhile, the shifting VMware landscape has introduced more complexity for IT teams tasked with planning long-term infrastructure strategies.

Download the full article.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Accelerating VMware migrations with a factory model approach

In 1913, Henry Ford cut the time it took to build a Model T from 12 hours to just over 90 minutes. He accomplished this feat through a revolutionary breakthrough in process design: Instead of skilled craftsmen building a car from scratch by hand, Ford created an assembly line where standardized tasks happened in sequence, at scale.

The IT industry is having a similar moment of reinvention. Across operations from software development to cloud migration, organizations are adopting an AI-infused factory model that replaces manual, one-off projects with templated, scalable systems designed for speed and cost-efficiency.

Take VMware migrations as an example. For years, these projects resembled custom production jobs—bespoke efforts that often took many months or even years to complete. Fluctuating licensing costs added a layer of complexity, just as business leaders began pushing for faster modernization to make their organizations AI-ready. That urgency has become nearly universal: According to a recent IDC report, six in 10 organizations evaluating or using cloud services say their IT infrastructure requires major transformation, while 82% report their cloud environments need modernization.

Download the full article.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Moving toward LessOps with VMware-to-cloud migrations

Today’s IT leaders face competing mandates to do more (“make us an ‘AI-first’ enterprise—yesterday”) with less (“no new hires for at least the next six months”).

VMware has become a focal point of these dueling directives. It remains central to enterprise IT, with 80% of organizations using VMware infrastructure products. But shifting licensing models are prompting teams to reconsider how they manage and scale these workloads, often on tighter budgets.

For many organizations, the path forward involves adopting a LessOps model, an operational strategy that makes hybrid environments manageable without increasing headcount. This philosophy minimizes human intervention through extensive automation and self-service capabilities while maintaining governance and compliance.

In practice, VMware-to-cloud migrations create a “two birds, one stone” opportunity. They present a practical moment to codify the automation and governance practices LessOps depends on—laying the groundwork for a leaner, more resilient IT operating model.

Download the full article.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Aligning VMware migration with business continuity

For decades, business continuity planning meant preparing for anomalous events like hurricanes, floods, tornadoes, or regional power outages. In anticipation of these rare disasters, IT teams built playbooks, ran annual tests, crossed their fingers, and hoped they’d never have to use them.

In recent years, a more persistent threat has emerged. Cyber incidents, particularly ransomware, are now more common, and often more damaging, than physical disasters. In a recent survey of more than 500 CISOs, almost three-quarters (72%) said their organization had dealt with ransomware in the previous year. Earlier in 2025, ransomware attack rates on enterprises reached record highs.

Mark Vaughn, senior director of the virtualization practice at Presidio, has witnessed the trend firsthand. “When I speak at conferences, I’ll ask the room, ‘How many people have been impacted?’ For disaster recovery, you usually get a few hands,” he says. “But a little over a year ago, I asked how many people in the room had been hit by ransomware, and easily two-thirds of the hands went up.”

Download the full article.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

From vibe coding to context engineering: 2025 in software development

This year, we’ve seen a real-time experiment playing out across the technology industry, one in which AI’s software engineering capabilities have been put to the test against human technologists. And although 2025 may have started with AI looking strong, the transition from vibe coding to what’s being termed context engineering shows that while the work of human developers is evolving, they nevertheless remain absolutely critical.

This is captured in the latest volume of the “Thoughtworks Technology Radar,” a report on the technologies used by our teams on projects with clients. In it, we see the emergence of techniques and tooling designed to help teams better tackle the problem of managing context when working with LLMs and AI agents. 

Taken together, there’s a clear signal of the direction of travel in software engineering and even AI more broadly. After years of the industry assuming progress in AI is all about scale and speed, we’re starting to see that what matters is the ability to handle context effectively.

Vibes, antipatterns, and new innovations 

In February 2025, Andrej Karpathy coined the term vibe coding. It took the industry by storm. It certainly sparked debate at Thoughtworks; many of us were skeptical. On an April episode of our technology podcast, we talked about our concerns and were cautious about how vibe coding might evolve.

Unsurprisingly, given the implied imprecision of vibe-based coding, antipatterns have proliferated. In the latest volume of the Technology Radar, for instance, we once again noted complacency with AI-generated code. Early ventures into vibe coding also exposed a degree of overconfidence about what AI models can actually handle: users demanded more and prompts grew larger, but model reliability started to falter.

Experimenting with generative AI 

This is one of the drivers behind the growing interest in engineering context. Working with coding assistants like Claude Code and Augment Code, we’re well aware of its importance. Providing the necessary context, sometimes called knowledge priming, is crucial: it makes outputs more consistent and reliable, which ultimately leads to better software that needs less rework, reducing rewrites and potentially improving productivity.
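In practice, knowledge priming can be as simple as assembling curated project notes ahead of the task description. The sketch below is a generic illustration rather than a description of any particular assistant’s API; the file paths and the call_model placeholder are assumptions for the example.

```python
from pathlib import Path

# Hypothetical illustration of "knowledge priming": prepend curated project
# context to a task prompt before handing it to a coding assistant.
CONTEXT_FILES = ["docs/architecture.md", "docs/conventions.md"]  # assumed paths

def build_primed_prompt(task: str, repo_root: str = ".") -> str:
    """Combine curated context files with the task description."""
    sections = []
    for rel_path in CONTEXT_FILES:
        path = Path(repo_root) / rel_path
        if path.exists():
            sections.append(f"## {rel_path}\n{path.read_text()}")
    context_block = "\n\n".join(sections)
    return (
        "You are working in this codebase. Follow the conventions below.\n\n"
        f"{context_block}\n\n## Task\n{task}"
    )

# prompt = build_primed_prompt("Add input validation to the signup endpoint")
# response = call_model(prompt)  # call_model is a placeholder, not a real API
```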

With effective preparation, we’ve seen good results using generative AI to understand legacy codebases. Indeed, done well and with the appropriate context, it can even help when we don’t have full access to the source code.

It’s important to remember that context isn’t just about more data and more detail. This is one of the lessons we’ve taken from using generative AI for forward engineering. It might sound counterintuitive, but in this scenario, we’ve found AI to be more effective when it’s further abstracted from the underlying system — or, in other words, further removed from the specifics of the legacy code. This is because the solution space becomes much wider, allowing us to better leverage the generative and creative capabilities of the AI models we use.

Context is critical in the agentic era

The backdrop to the changes of recent months is the growth of agents and agentic systems, both as products organizations want to develop and as technology they want to leverage. This has forced the industry to properly reckon with context and move away from a purely vibes-based approach.

Indeed, far from simply getting on with tasks they’ve been programmed to do, agents require significant human intervention to ensure they are equipped to respond to complex and dynamic contexts. 

There are a number of context-related technologies aimed at tackling this challenge, including agents.md, Context7, and Mem0. But it’s also a question of approach. For instance, we’ve found success with anchoring coding agents to a reference application — essentially providing agents with a contextual ground truth. We’re also experimenting with using teams of coding agents; while this might sound like it increases complexity, it actually removes some of the burden of having to give a single agent all the dense layers of context it needs to do its job successfully.
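As a rough sketch of the team-of-agents idea (the run_agent placeholder and the reference-app path are invented for illustration, not a real framework), the work can be divided so that each agent is anchored to the same reference application but receives only the context relevant to its own step:

```python
# Hypothetical sketch of a small team of coding agents; run_agent stands in for
# whatever agent framework is in use, and the reference-app path is invented.
REFERENCE_APP = "examples/reference-service/"  # assumed "ground truth" codebase

def run_agent(role: str, context: str, task: str) -> str:
    """Placeholder for invoking a coding agent; swap in a real framework here."""
    return f"[{role}] {task} (context: {context})"

def run_agent_team(feature_request: str) -> dict[str, str]:
    # A planner sees only the high-level request plus the reference application.
    plan = run_agent("planner", context=REFERENCE_APP, task=feature_request)
    results = {}
    for step in plan.splitlines():
        # Each implementer is anchored to the same reference app but receives
        # only its own step, not every other agent's accumulated context.
        results[step] = run_agent("implementer", context=REFERENCE_APP, task=step)
    return results

print(run_agent_team("Add rate limiting to the public API"))
```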

Toward consensus

Hopefully the space will mature as practices and standards become established. It would be remiss not to mention the significance of the Model Context Protocol (MCP), which has emerged as the go-to protocol for connecting LLMs or agentic AI to sources of context. Relatedly, the agent2agent (A2A) protocol leads the way in standardizing how agents interact with one another.
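For a sense of what this looks like in code, here is a minimal MCP server sketch built with the Python SDK’s FastMCP helper; the server name, the release-notes resource, and the toy tool are invented for the example.

```python
# Minimal MCP server sketch using the Python SDK's FastMCP helper.
# The server name, resource URI, and tool below are illustrative only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("project-context")

@mcp.resource("notes://releases/{version}")
def release_notes(version: str) -> str:
    """Expose release notes as a context resource for a connected LLM."""
    return f"Release notes for version {version} would be returned here."

@mcp.tool()
def count_todos(source: str) -> int:
    """A toy tool: count TODO markers in a piece of source code."""
    return source.count("TODO")

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```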

It remains to be seen whether these standards win out. But in any case, it’s important to consider the day-to-day practices that allow us, as software engineers and technologists, to collaborate effectively even when dealing with highly complex and dynamic systems. Sure, AI needs context, but so do we. Techniques like curated shared instructions for software teams may not sound like the hottest innovation on the planet, but they can be remarkably powerful for helping teams work together.

There’s perhaps also a conversation to be had about what these changes mean for agile software development. Spec-driven development is one idea that appears to have some traction, but there are still questions about how we remain adaptable and flexible while also building robust contextual foundations and ground truths for AI systems.

Software engineers can solve the context challenge

Clearly, 2025 has been a huge year in the evolution of software engineering as a practice. There’s a lot the industry needs to monitor closely, but it’s also an exciting time. And while fears about AI job automation may remain, the fact the conversation has moved from questions of speed and scale to context puts software engineers right at the heart of things. 

Once again, it will be down to them to experiment, collaborate, and learn — the future depends on it.

This content was produced by Thoughtworks. It was not written by MIT Technology Review’s editorial staff.

A new ion-based quantum computer makes error correction simpler

The US- and UK-based company Quantinuum today unveiled Helios, its third-generation quantum computer, which includes expanded computing power and error correction capability. 

Like all other existing quantum computers, Helios is not powerful enough to execute the industry’s dream money-making algorithms, such as those that would be useful for materials discovery or financial modeling. But Quantinuum’s machines, which use individual ions as qubits, could be easier to scale up than quantum computers that use superconducting circuits as qubits, such as Google’s and IBM’s.

“Helios is an important proof point in our road map about how we’ll scale to larger physical systems,” says Jennifer Strabley, vice president at Quantinuum, which formed in 2021 from the merger of Honeywell Quantum Solutions and Cambridge Quantum. Honeywell remains Quantinuum’s majority owner.

Located at Quantinuum’s facility in Colorado, Helios comprises myriad components, including mirrors, lasers, and optical fiber. Its core is a thumbnail-size chip containing the barium ions that serve as the qubits, which perform the actual computing. Helios computes with 98 barium ions at a time; its predecessor, H2, used 56 ytterbium qubits. The barium ions are an upgrade, as they have proven easier to control than ytterbium. These components all sit within a chamber that is cooled to about 15 kelvins (-432.67 ℉), on top of an optical table. Users can access the computer by logging in remotely over the cloud.

Helios encodes information in the ions’ quantum states, which can represent not only 0s and 1s, like the bits in classical computing, but probabilistic combinations of both, known as superpositions. A hallmark of quantum computing, these superposition states are akin to the state of a coin flipping in the air—neither heads nor tails, but some probability of both. 
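In the standard textbook notation (a general single-qubit state, not anything specific to Helios), a superposition is a weighted mix of the two basis states, and the weights set the odds of each measurement outcome:

```latex
% A general single-qubit superposition; |\alpha|^2 and |\beta|^2 are the
% probabilities of measuring 0 and 1, respectively.
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1 .
```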

Quantum computing exploits the unique mathematics of quantum-mechanical objects like ions to perform computations. Proponents of the technology believe this should enable commercially useful applications, such as highly accurate chemistry simulations for the development of batteries or better optimization algorithms for logistics and finance. 

In the last decade, researchers at companies and academic institutions worldwide have incrementally developed the technology with billions of dollars of private and public funding. Still, quantum computing is in an awkward teenage phase. It’s unclear when it will bring profitable applications. Of late, developers have focused on scaling up the machines. 

A key challenge in making a more powerful quantum computer is implementing error correction. Like all computers, quantum computers occasionally make mistakes. Classical computers correct these errors by storing information redundantly. Owing to quirks of quantum mechanics, namely that quantum states cannot simply be copied, quantum computers can’t do this and require special correction techniques.

Quantum error correction involves storing a single unit of information in multiple qubits rather than in a single qubit. The exact methods vary depending on the specific hardware of the quantum computer, with some machines requiring more qubits per unit of information than others. The industry refers to an error-corrected unit of quantum information as a “logical qubit.” Helios needs two ions, or “physical qubits,” to create one logical qubit.

This is fewer physical qubits than needed in recent quantum computers made of superconducting circuits. In 2024, Google used 105 physical qubits to create a logical qubit. This year, IBM used 12 physical qubits per single logical qubit, and Amazon Web Services used nine physical qubits to produce a single logical qubit. All three companies use variations of superconducting circuits as qubits.
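For intuition, the simplest textbook scheme, the three-qubit repetition code (a classroom example, not the code Helios actually runs), spreads one logical qubit across three physical ones so that a single error can be caught and undone:

```latex
% Three-qubit bit-flip code: a textbook illustration, not Helios's actual scheme.
|0\rangle_L = |000\rangle, \qquad |1\rangle_L = |111\rangle,
\qquad \alpha\,|0\rangle_L + \beta\,|1\rangle_L = \alpha\,|000\rangle + \beta\,|111\rangle .
% A single flipped qubit (say |010>) is detected by checking whether neighboring
% qubits agree and is corrected by majority vote, without reading out alpha or beta.
```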

Helios is noteworthy for its qubits’ precision, says Rajibul Islam, a physicist at the University of Waterloo in Canada, who is not affiliated with Quantinuum. The computer’s qubit error rates are low to begin with, which means it doesn’t need to devote as much of its hardware to error correction. Quantinuum had pairs of qubits interact in entangling operations and found that they behaved as expected 99.921% of the time. “To the best of my knowledge, no other platform is at this level,” says Islam.

This advantage comes from a design property of ions. Unlike superconducting circuits, which are affixed to the surface of a quantum computing chip, ions on Quantinuum’s Helios chip can be shuffled around. Because the ions can move, they can interact with every other ion in the computer, a capacity known as “all-to-all connectivity.” This connectivity allows for error correction approaches that use fewer physical qubits. In contrast, superconducting qubits can only interact with their direct neighbors, so a computation between two non-adjacent qubits requires several intermediate steps involving the qubits in between. “It’s becoming increasingly more apparent how important all-to-all-connectivity is for these high-performing systems,” says Strabley.

Still, it’s not clear what type of qubit will win in the long run. Each type has design benefits that could ultimately make it easier to scale. Ions (which are used by the US-based startup IonQ as well as Quantinuum) offer an advantage because they produce relatively few errors, says Islam: “Even with fewer physical qubits, you can do more.” However, it’s easier to manufacture superconducting qubits. And qubits made of neutral atoms, such as the quantum computers built by the Boston-based startup QuEra, are “easier to trap” than ions, he says. 

Beyond the increased qubit count, another notable achievement for Quantinuum is that it demonstrated error correction “on the fly,” says David Hayes, the company’s director of computational theory and design. That’s a new capability for its machines. Nvidia GPUs were used to identify errors in the qubits in parallel. Hayes thinks that GPUs are more effective for error correction than the chips known as FPGAs that are also used in the industry.

Quantinuum has used its computers to investigate the basic physics of magnetism and superconductivity. Earlier this year, it reported simulating a magnet on Helios’s predecessor, H2, with the claim that the work “rivals the best classical approaches in expanding our understanding of magnetism.” Along with announcing the introduction of Helios, the company said it has used the new machine to simulate the behavior of electrons in a high-temperature superconductor.

“These aren’t contrived problems,” says Hayes. “These are problems that the Department of Energy, for example, is very interested in.”

Quantinuum plans to build another version of Helios in its facility in Minnesota. It has already begun to build a prototype for a fourth-generation computer, Sol, which it plans to deliver in 2027, with 192 physical qubits. Then, in 2029, the company hopes to release Apollo, which it says will have thousands of physical qubits and should be “fully fault tolerant,” or able to implement error correction at a large scale.

Turning migration into modernization

In late 2023, a long-trusted virtualization staple became the biggest open question on the enterprise IT roadmap.

Amid concerns about VMware licensing changes and steeper support costs, analysts noticed an exodus mentality. Forrester predicted that one in five large VMware customers would begin moving away from the platform in 2024. A subsequent Gartner community poll found that 74% of respondents were rethinking their VMware relationship in light of recent changes. CIOs contending with pricing hikes and product roadmap opacity face a daunting choice: double down on a familiar but costlier stack, or use the disruption to rethink how, and where, critical workloads should run.

“There’s still a lot of uncertainty in the marketplace around VMware,” explains Matt Crognale, senior director, migrations and modernization at cloud modernization firm Effectual, adding that the VMware portfolio has been streamlined and refocused over the past couple of years. “The portfolio has been trimmed down to a core offering focused on the technology versus disparate systems.”

Download the full article.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Powering HPC with next-generation CPUs

For all the excitement around GPUs—the workhorses of today’s AI revolution—the central processing unit (CPU) remains the backbone of high-performance computing (HPC). CPUs still handle 80% to 90% of HPC workloads globally, powering everything from climate modeling to semiconductor design. Far from being eclipsed, they’re evolving in ways that make them more competitive, flexible, and indispensable than ever.

The competitive landscape around CPUs has intensified. Once dominated almost exclusively by Intel’s x86 chips, the market now includes powerful alternatives based on ARM and even emerging architectures like RISC-V. Flagship examples like Japan’s Fugaku supercomputer demonstrate how CPU innovation is pushing performance to new frontiers. Meanwhile, cloud providers like Microsoft and AWS are developing their own silicon, adding even more diversity to the ecosystem.

What makes CPUs so enduring? Flexibility, compatibility, and cost efficiency are key. As Evan Burness of Microsoft Azure points out, CPUs remain the “it-just-works” technology. Moving complex, proprietary code to GPUs can be an expensive and time-consuming effort, while CPUs typically support software continuity across generations with minimal friction. That reliability matters for businesses and researchers who need results, not just raw power.

Innovation is also reshaping what a CPU can be. Advances in chiplet design, on-package memory, and hybrid CPU-GPU architectures are extending the performance curve well beyond the limits of Moore’s Law. For many organizations, the CPU is the strategic choice that balances speed, efficiency, and cost.

Looking ahead, the relationship between CPUs, GPUs, and specialized processors like NPUs will define the future of HPC. Rather than a zero-sum contest, it’s increasingly a question of fit-for-purpose design. As Addison Snell, co-founder and chief executive officer of Intersect360 Research, notes, science and industry never run out of harder problems to solve.

That means CPUs, far from fading, will remain at the center of the computing ecosystem.

To learn more, read the new report “Designing CPUs for next-generation supercomputing.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.