The first new subsea habitat in 40 years is about to launch

Vanguard feels and smells like a new RV. It has long, gray banquettes that convert into bunks, a microwave cleverly hidden under a counter, a functional steel sink with a French press and crockery above. A weird little toilet hides behind a curtain.

But some clues hint that you can’t just fire up Vanguard’s engine and roll off the lot. The least subtle is its door, a massive disc of steel complete with a wheel that spins to lock.

The exterior door of the Vanguard subsea human habitat.

COURTESY MARK HARRIS

Once it is sealed and moved to its permanent home beneath the waves of the Florida Keys National Marine Sanctuary early next year, Vanguard will be the world’s first new subsea habitat in nearly four decades. Teams of four scientists will live and work on the seabed for a week at a time, entering and leaving the habitat as scuba divers. Their missions could include reef restoration, species surveys, underwater archaeology, or even astronaut training. 

One of Vanguard’s modules, unappetizingly named the “wet porch,” has a permanent opening in the floor (a.k.a. a “moon pool”) that doesn’t flood because Vanguard’s air pressure is matched to the water around it. 

It is this pressurization that makes the habitat so useful. Scuba divers working at its maximum operational depth of 50 meters would typically need to make a lengthy stop on their way back to the surface to avoid decompression sickness. This painful and potentially fatal condition, better known as the bends, develops if divers surface too quickly. A traditional 50-meter dive gives scuba divers only a handful of minutes on the seafloor, and they can make only a couple of such dives a day. With Vanguard’s atmosphere at the same pressure as the water, its aquanauts need to decompress only once, at the end of their stay. They can potentially dive for many hours every day.
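
For a rough sense of the pressures involved (a back-of-the-envelope illustration using standard seawater values, not figures published by Deep):

$$P = P_{\mathrm{atm}} + \rho g h \approx 101\ \mathrm{kPa} + (1{,}025\ \mathrm{kg/m^3})(9.81\ \mathrm{m/s^2})(50\ \mathrm{m}) \approx 604\ \mathrm{kPa} \approx 6\ \mathrm{atm}$$

Breathing gas at roughly six times surface pressure is what loads a diver’s tissues with dissolved nitrogen in the first place. Because Vanguard’s interior sits at the same ambient pressure as the surrounding water, swimming out and back involves no pressure change at all, so the entire decompression obligation can be deferred to a single session at the end of the mission.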

That could unlock all kinds of new science and exploration. “More time in the ocean opens a world of possibility, accelerating discoveries, inspiration, solutions,” said Kristen Tertoole, Deep’s chief operating officer, at Vanguard’s unveiling in Miami in October. “The ocean is Earth’s life support system. It regulates our climate, sustains life, and holds mysteries we’ve only begun to explore, but it remains 95% undiscovered.”

Vanguard subsea human habitat unveiled in Miami

COURTESY DEEP

Subsea habitats are not a new invention. Jacques Cousteau (naturally) built the first in 1962, although it was only about the size of an elevator. Larger habitats followed in the 1970s and ’80s, maxing out at around the size of Vanguard.

But the technology has come a long way since then. Vanguard uses a tethered connection to a buoy above, known as the “surface expression,” that pipes fresh air and water down to the habitat. It also hosts a diesel generator to power a Starlink internet connection and a tank to hold wastewater. Norman Smith, Deep’s chief technology officer, says the company modeled the most severe hurricanes that Florida expects over the next 20 years and designed the tether to withstand them. Even if the worst happens and the link is broken, Deep says, Vanguard has enough air, water, and energy storage to support its crew for at least 72 hours.

That number came from DNV, an independent classification agency that inspects and certifies all types of marine vessels so that they can get commercial insurance. Vanguard will be the first subsea habitat to get a DNV classification. “That means you have to deal with the rules and all the challenging, frustrating things that come along with it, but it means that on a foundational level, it’s going to be safe,” says Patrick Lahey, founder of Triton Submarines, a manufacturer of classed submersibles.

An interior view of Vanguard during Life Under The Sea: Ocean Engineering and Technology Company DEEP's unveiling of Vanguard, its pilot subsea human habitat at The Hangar at Regatta Harbour on October 29, 2025 in Miami, Florida.

JASON KOERNER/GETTY IMAGES FOR DEEP

Although Deep hopes Vanguard itself will enable decades of useful science, its prime function for the company is to prove out technologies for its planned successor, an advanced modular habitat called Sentinel. Sentinel modules will be six meters wide, twice the diameter of Vanguard, complete with sweeping staircases and single-occupant cabins. A small deployment might have a crew of eight, about the same as the International Space Station. A big Sentinel system could house 50, up to 225 meters deep. Deep claims that Sentinel will be launched at some point in 2027.

Ultimately, according to its mission statement, Deep seeks to “make humans aquatic,” an indication that permanent communities are on its long-term road map. 

Deep has not publicly disclosed the identity of its principal funder, but business records in the UK indicate that as of January 31, 2025, a Canadian man, Robert MacGregor, owned at least 75% of its holding company. According to a Reuters investigation, MacGregor was once linked with Craig Steven Wright, a computer scientist who claimed to be Satoshi Nakamoto, the pseudonym used by bitcoin’s elusive creator. Wright’s claims to be Nakamoto later collapsed.

MacGregor has kept a very low public profile in recent years. When contacted for comment, Deep spokesperson Mike Bohan declined to address the link with Wright beyond calling it inaccurate, but said: “Robert MacGregor started his career as an IP lawyer in the dot-com era, moving into blockchain technology and has diverse interests including philanthropy, real estate, and now Deep.”

In any case, MacGregor could find keeping that low profile more difficult if Vanguard is successful in reinvigorating ocean science and exploration as the company hopes. The habitat is due to be deployed early next year, following final operational tests at Triton’s facility in Florida. It will welcome its first scientists shortly after. 

“The ocean is not just our resource; it is our responsibility,” says Tertoole. “Deep is more than a single habitat. We are building a full-stack capability for human presence in the ocean.”

Cloning isn’t just for celebrity pets like Tom Brady’s dog

This week, we heard that Tom Brady had his dog cloned. The former quarterback revealed that his dog Junie is actually a clone of Lua, a pit bull mix that died in 2023.

Brady’s announcement follows those of celebrities like Paris Hilton and Barbra Streisand, who also famously cloned their pet dogs. But some believe there are better ways to make use of cloning technologies.

While the pampered pooches of the rich and famous may dominate this week’s headlines, cloning technologies are also being used to diversify the genetic pools of inbred species and potentially bring other animals back from the brink of extinction.

Cloning itself isn’t new. The first mammal cloned from an adult cell, Dolly the sheep, was born in the 1990s. The technology has been used in livestock breeding over the decades since.

Say you’ve got a particularly large bull, or a cow that has an especially high milk yield. Those animals are valuable. You could selectively breed for those kinds of characteristics. Or you could clone the original animals—essentially creating genetic twins.

Scientists can take some of the animals’ cells, freeze them, and store them in a biobank. That opens the option to clone them in the future. It’s possible to thaw those cells, remove the DNA-containing nuclei of the cells, and insert them into donor egg cells.

Those donor egg cells, which come from another animal of the same species, have their own nuclei removed. So it’s a case of swapping out the DNA. The resulting cell is stimulated and grown in the lab until it starts to look like an embryo. Then it is transferred to the uterus of a surrogate animal—which eventually gives birth to a clone.

There are a handful of companies offering to clone pets. Viagen, which claims to have “cloned more animals than anyone else on Earth,” will clone a dog or cat for $50,000. That’s the company that cloned Streisand’s pet dog Samantha, twice.

This week, Colossal Biosciences—the “de-extinction” company that claims to have resurrected the dire wolf and created a “woolly mouse” as a precursor to reviving the woolly mammoth—announced that it had acquired Viagen, but that Viagen will “continue to operate under its current leadership.”

Pet cloning is controversial, for a few reasons. The companies themselves point out that, while the cloned animal will be a genetic twin of the original animal, it won’t be identical. One issue is mitochondrial DNA—a tiny fraction of DNA that sits outside the nucleus and is inherited from the mother. The cloned animal may inherit some of this from the surrogate.

Mitochondrial DNA is unlikely to have much of an impact on the animal itself. More important are the many, many factors thought to shape an individual’s personality and temperament. “It’s the old nature-versus-nurture question,” says Samantha Wisely, a conservation geneticist at the University of Florida. After all, human identical twins are never carbon copies of each other. Anyone who clones a pet expecting a like-for-like reincarnation is likely to be disappointed.

And some animal welfare groups are opposed to the practice of pet cloning. People for the Ethical Treatment of Animals (PETA) described it as “a horror show,” and the UK’s Royal Society for the Prevention of Cruelty to Animals (RSPCA) says that “there is no justification for cloning animals for such trivial purposes.” 

But there are other uses for cloning technology that are arguably less trivial. Wisely has long been interested in diversifying the gene pool of the critically endangered black-footed ferret, for example.

Today, there are around 10,000 black-footed ferrets that have been captively bred from only seven individuals, says Wisely. That level of inbreeding isn’t good for any species—it tends to leave organisms at risk of poor health. They are less able to reproduce or adapt to changes in their environment.

Wisely and her colleagues had access to frozen tissue samples taken from two other ferrets. Along with colleagues at not-for-profit Revive and Restore, the team created clones of those two individuals. The first clone, Elizabeth Ann, was born in 2020. Since then, other clones have been born, and the team has started breeding the cloned animals with the descendants of the other seven ferrets, says Wisely.

The same approach has been used to clone the endangered Przewalski’s horse, using decades-old tissue samples stored by the San Diego Zoo. It’s too soon to predict the impact of these efforts. Researchers are still evaluating the cloned ferrets and their offspring to see if they behave like typical animals and could survive in the wild.

Even this practice is not without its critics. Some have pointed out that cloning alone will not save any species. After all, it doesn’t address the habitat loss or human-wildlife conflict that is responsible for the endangerment of these animals in the first place. And there will always be detractors who accuse people who clone animals of “playing God.” 

For all her involvement in cloning endangered ferrets, Wisely tells me she would not consider cloning her own pets. She currently has three rescue dogs, a rescue cat, and “geriatric chickens.” “I love them all dearly,” she says. “But there are a lot of rescue animals out there that need homes.”

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Stop worrying about your AI footprint. Look at the big picture instead.

Picture it: I’m minding my business at a party, parked by the snack table (of course). A friend of a friend wanders up, and we strike up a conversation. It quickly turns to work, and upon learning that I’m a climate technology reporter, my new acquaintance says something like: “Should I be using AI? I’ve heard it’s awful for the environment.” 

This actually happens pretty often now. Generally, I tell people not to worry—let a chatbot plan your vacation, suggest recipe ideas, or write you a poem if you want. 

That response might surprise some people, but I promise I’m not living under a rock, and I have seen all the concerning projections about how much electricity AI is using. Data centers could consume up to 945 terawatt-hours annually by 2030. (That’s roughly as much as Japan.) 

But I feel strongly about not putting the onus on individuals, partly because AI concerns remind me so much of another question: “What should I do to reduce my carbon footprint?” 

That one gets under my skin because of the context: BP helped popularize the concept of a carbon footprint in a marketing campaign in the early 2000s. That framing effectively shifts the burden of worrying about the environment from fossil-fuel companies to individuals. 

The reality is, no one person can address climate change alone: Our entire society is built around burning fossil fuels. To address climate change, we need political action and public support for researching and scaling up climate technology. We need companies to innovate and take decisive action to reduce greenhouse-gas emissions. Focusing too much on individuals is a distraction from the real solutions on the table. 

I see something similar today with AI. People are asking climate reporters at barbecues whether they should feel guilty about using chatbots too frequently when we need to focus on the bigger picture. 

Big tech companies are playing into this narrative by providing energy-use estimates for their products at the user level. A couple of recent reports put the electricity used to query a chatbot at about 0.3 watt-hours, the same as powering a microwave for about a second. That’s so small as to be virtually insignificant.

But stopping with the energy use of a single query obscures the full truth, which is that this industry is growing quickly, building energy-hungry infrastructure at a nearly incomprehensible scale to satisfy the AI appetites of society as a whole. Meta is currently building a data center in Louisiana with five gigawatts of computational power—about the same demand as the entire state of Maine at the summer peak.  (To learn more, read our Power Hungry series online.)
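
To put the per-query and system-level numbers on the same scale, here is a rough, illustrative calculation (the 0.3 watt-hour figure comes from the reports above; the one-billion-queries-a-day volume is an assumption chosen for easy arithmetic, not a disclosed number):

$$10^9\ \mathrm{queries/day} \times 0.3\ \mathrm{Wh/query} = 0.3\ \mathrm{GWh/day} \approx 0.1\ \mathrm{TWh/year}$$

$$5\ \mathrm{GW} \times 8{,}760\ \mathrm{h/year} \approx 44\ \mathrm{TWh/year}$$

Even a billion chatbot queries a day, in other words, would add up to a small fraction of what a single five-gigawatt campus could draw running near capacity, and a rounding error against the 945 terawatt-hours projected for data centers overall. That is precisely why per-query figures, however accurate, say so little about the industry’s total footprint.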

Increasingly, there’s no getting away from AI, and it’s not as simple as choosing to use or not use the technology. Your favorite search engine likely gives you an AI summary at the top of your search results. Your email provider’s suggested replies? Probably AI. Same for chatting with customer service while you’re shopping online. 

Just as with climate change, we need to look at this as a system rather than a series of individual choices. 

Massive tech companies using AI in their products should be disclosing their total energy and water use and going into detail about how they complete their calculations. Estimating the burden per query is a start, but we also deserve to see how these impacts add up for billions of users, and how that’s changing over time as companies (hopefully) make their products more efficient. Lawmakers should be mandating these disclosures, and we should be asking for them, too. 

That’s not to say there’s absolutely no individual action that you can take. Just as you could meaningfully reduce your individual greenhouse-gas emissions by taking fewer flights and eating less meat, there are some reasonable things that you can do to reduce your AI footprint. Generating videos tends to be especially energy-intensive, as does using reasoning models to engage with long prompts and produce long answers. Asking a chatbot to help plan your day, suggest fun activities to do with your family, or summarize a ridiculously long email has relatively minor impact. 

Ultimately, as long as you aren’t relentlessly churning out AI slop, you shouldn’t be too worried about your individual AI footprint. But we should all be keeping our eye on what this industry will mean for our grid, our society, and our planet. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Why the for-profit race into solar geoengineering is bad for science and public trust

Last week, an American-Israeli company that claims it’s developed proprietary technology to cool the planet announced it had raised $60 million, by far the largest known venture capital round to date for a solar geoengineering startup.

The company, Stardust, says the funding will enable it to develop a system that could be deployed by the start of the next decade, according to Heatmap, which broke the story.


Heat Exchange

MIT Technology Review’s guest opinion series, offering expert commentary on legal, political, and regulatory issues related to climate change and clean energy. You can read the rest of the pieces here.


As scientists who have worked on the science of solar geoengineering for decades, we have grown increasingly concerned about the emerging efforts to start and fund private companies to build and deploy technologies that could alter the climate of the planet. We also strongly dispute some of the technical claims that certain companies have made about their offerings. 

Given the potential power of such tools, the public concerns about them, and the importance of using them responsibly, we argue that they should be studied, evaluated, and developed mainly through publicly coordinated and transparently funded science and engineering efforts.  In addition, any decisions about whether or how they should be used should be made through multilateral government discussions, informed by the best available research on the promise and risks of such interventions—not the profit motives of companies or their investors.

The basic idea behind solar geoengineering, or what we now prefer to call sunlight reflection methods (SRM), is that humans might reduce climate change by making the Earth a bit more reflective, partially counteracting the warming caused by the accumulation of greenhouse gases. 

There is strong evidence, based on years of climate modeling and analyses by researchers worldwide, that SRM—while not perfect—could significantly and rapidly reduce climate changes and avoid important climate risks. In particular, it could ease the impacts in hot countries that are struggling to adapt.  

The goals of doing research into SRM can be diverse: identifying risks as well as finding better methods. But research won’t be useful unless it’s trusted, and trust depends on transparency. That means researchers must be eager to examine pros and cons, committed to following the evidence where it leads, and driven by a sense that research should serve public interests, not be locked up as intellectual property.

In recent years, a handful of for-profit startup companies have emerged that are striving to develop SRM technologies or already trying to market SRM services. That includes Make Sunsets, which sells “cooling credits” for releasing sulfur dioxide in the stratosphere. A new company, Sunscreen, which hasn’t yet been announced, intends to use aerosols in the lower atmosphere to achieve cooling over small areas, purportedly to help farmers or cities deal with extreme heat.  

Our strong impression is that people in these companies are driven by the same concerns about climate change that move us in our research. We agree that more research, and more innovation, is needed. However, we do not think startups—which by definition must eventually make money to stay in business—can play a productive role in advancing research on SRM.

Many people already distrust the idea of engineering the atmosphere—at whichever scale—to address climate change, fearing negative side effects, inequitable impacts on different parts of the world, or the prospect that a world expecting such solutions will feel less pressure to address the root causes of climate change.

Adding business interests, profit motives, and rich investors into this situation just creates more cause for concern, complicating the ability of responsible scientists and engineers to carry out the work needed to advance our understanding.

The only way these startups will make money is if someone pays for their services, so there’s a reasonable fear that financial pressures could drive companies to lobby governments or other parties to use such tools. A decision that should be based on objective analysis of risks and benefits would instead be strongly influenced by financial interests and political connections.

The need to raise money or bring in revenue often drives companies to hype the potential or safety of their tools. Indeed, that’s what private companies need to do to attract investors, but it’s not how you build public trust—particularly when the science doesn’t support the claims.

Notably, Stardust says on its website that it has developed novel particles that can be injected into the atmosphere to reflect away more sunlight, asserting that they’re “chemically inert in the stratosphere, and safe for humans and ecosystems.” According to the company, “The particles naturally return to Earth’s surface over time and recycle safely back into the biosphere.”

But it’s nonsense for the company to claim it can make particles that are inert in the stratosphere. Even diamonds, which are extraordinarily nonreactive, would alter stratospheric chemistry. First, much of that chemistry depends on highly reactive radicals that react with any solid surface; second, any particle may become coated by background sulfuric acid in the stratosphere. That could accelerate the loss of the protective ozone layer by spreading that existing sulfuric acid over a larger surface area.

(Stardust didn’t provide a response to an inquiry about the concerns raised in this piece.)

In materials presented to potential investors, which we’ve obtained a copy of, Stardust further claims its particles “improve” on sulfuric acid, which is the most studied material for SRM. But the point of using sulfate for such studies was never that it was perfect, but that its broader climatic and environmental impacts are well understood. That’s because sulfate is widespread on Earth, and there’s an immense body of scientific knowledge about the fate and risks of sulfur that reaches the stratosphere through volcanic eruptions or other means.

If there’s one great lesson of 20th-century environmental science, it’s how crucial it is to understand the ultimate fate of any new material introduced into the environment. 

Chlorofluorocarbons and the pesticide DDT both offered safety advantages over competing technologies, but they both broke down into products that accumulated in the environment in unexpected places, causing enormous and unanticipated harms. 

The environmental and climate impacts of sulfate aerosols have been studied in many thousands of scientific papers over a century, and this deep well of knowledge greatly reduces the chance of unknown unknowns. 

Grandiose claims notwithstanding—and especially considering that Stardust hasn’t disclosed anything about its particles or research process—it would be very difficult to make a pragmatic, risk-informed decision to start SRM efforts with these particles instead of sulfate.

We don’t want to claim that every single answer lies in academia. We’d be fools to not be excited by profit-driven innovation in solar power, EVs, batteries, or other sustainable technologies. But the math for sunlight reflection is just different. Why?   

Because the role of private industry was essential in improving the efficiency, driving down the costs, and increasing the market share of renewables and other forms of cleantech. When cost matters and we can easily evaluate the benefits of the product, then competitive, for-profit capitalism can work wonders.  

But SRM is already technically feasible and inexpensive, with deployment costs that are negligible compared with the climate damage it averts.

The essential questions of whether or how to use it come down to far thornier societal issues: How can we best balance the risks and benefits? How can we ensure that it’s used in an equitable way? How do we make legitimate decisions about SRM on a planet with such sharp political divisions?

Trust will be the most important single ingredient in making these decisions. And trust is the one product for-profit innovation does not naturally manufacture. 

Ultimately, we’re just two researchers. We can’t make investors in these startups do anything differently. Our request is that they think carefully, and beyond the logic of short-term profit. If they believe geoengineering is worth exploring, could it be that their support will make it harder, not easier, to do that?  

David Keith is the professor of geophysical sciences at the University of Chicago and founding faculty director of the school’s Climate Systems Engineering Initiative. Daniele Visioni is an assistant professor of earth and atmospheric sciences at Cornell University and head of data for Reflective, a nonprofit that develops tools and provides funding to support solar geoengineering research.

This startup wants to clean up the copper industry

Demand for copper is surging, as is pollution from its dirty production processes. The founders of one startup, Still Bright, think they have a better, cleaner way to generate the copper the world needs. 

The company uses water-based reactions, based on battery chemistry technology, to purify copper in a process that could be less polluting than traditional smelting. The hope is that this alternative will also help ease growing strain on the copper supply chain.

“We’re really focused on addressing the copper supply crisis that’s looming ahead of us,” says Randy Allen, Still Bright’s cofounder and CEO.

Copper is a crucial ingredient in everything from electrical wiring to cookware today. And clean energy technologies like solar panels and electric vehicles are introducing even more demand for the metal. Global copper demand is expected to grow by 40% between now and 2040. 

As demand swells, so do the climate and environmental impacts of copper extraction, the process of refining ore into a pure metal. There’s also growing concern about the geographic concentration of the copper supply chain. Copper is mined all over the world, and historically, many of those mines had smelters on-site to process what they extracted. (Smelters form pure copper metal by essentially burning concentrated copper ore at high temperatures.) But today, the smelting industry has consolidated, with many mines shipping copper concentrates to smelters in Asia, particularly China.

That’s partly because smelting uses a lot of energy and chemicals, and it can produce sulfur-containing emissions that can harm air quality. “They shipped the environmental and social problems elsewhere,” says Simon Jowitt, a professor at the University of Nevada, Reno, and director of the Nevada Bureau of Mines and Geology.

It’s possible to scrub pollution out of a smelter’s emissions, and smelters are much cleaner than they used to be, Jowitt says. But overall, smelting centers aren’t exactly known for environmental responsibility. 

So even countries like the US, which have plenty of copper reserves and operational mines, largely ship copper concentrates, which contain up to around 30% copper, to China or other countries for smelting. (There are just two operational ore smelters in the US today.)

Still Bright avoids the pyrometallurgic process that smelters use in favor of a chemical approach, partially inspired by devices called vanadium flow batteries.

In the startup’s reactor, vanadium reacts with the copper compounds in copper concentrates, reducing the copper to a solid metal and leaving many of the impurities behind in the liquid phase. The whole thing takes between 30 and 90 minutes. The solid, which contains roughly 70% copper after this reaction, can then be fed into an established mining-industry process called solvent extraction and electrowinning to make copper that’s over 99% pure.

This is far from the first attempt to use a water-based, chemical approach to processing copper. Today, some copper ore is processed with acid, for example, and Ceibo, a startup based in Chile, is trying to use a version of that process on the type of copper that’s traditionally smelted. The difference here lies in the chemistry, particularly the choice to use vanadium.

One of Still Bright’s founders, Jon Vardner, was researching copper reactions and vanadium flow batteries when he came up with the idea to marry a copper extraction reaction with an electrical charging step that could recycle the vanadium.

A worker in the lab.

COURTESY OF STILL BRIGHT

After the vanadium reacts with the copper, the liquid soup can be fed into an electrolyzer, which uses electricity to turn the vanadium back into a form that can react with copper again. It’s basically the same process that vanadium flow batteries use to charge up. 
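
Still Bright hasn’t disclosed its exact chemistry, but the loop described above maps onto the V²⁺/V³⁺ couple that a vanadium flow battery cycles on its negative side. As a speculative sketch, writing the copper as Cu⁺ for simplicity (the oxidation states and minerals here are assumptions for illustration, not company disclosures):

$$\mathrm{Reactor:}\quad \mathrm{V^{2+} \rightarrow V^{3+} + e^-} \qquad \mathrm{Cu^{+} + e^- \rightarrow Cu^0\ (solid\ metal)}$$

$$\mathrm{Electrolyzer:}\quad \mathrm{V^{3+} + e^- \rightarrow V^{2+}}$$

The electrolyzer step is the same half-reaction used to charge the negative electrolyte of a vanadium flow battery. In this reading, electricity rather than heat ultimately drives the copper out of the concentrate, with vanadium acting as a rechargeable shuttle between the two steps.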

Other chemical processes for copper refining need high temperatures or extremely acidic conditions to get the copper into solution, drive the reaction quickly, and ensure all the copper reacts. Still Bright’s process can run at ambient temperatures.

One of the major benefits of this approach is cutting the pollution from copper refining. Traditional smelting heats the target material to over 1,200 °C (about 2,200 °F), forming sulfur-containing gases that are released into the atmosphere.

Still Bright’s process produces hydrogen sulfide gas as a by-product instead. It’s still a dangerous material, but one that can be effectively captured and converted into useful side products, Allen says.

Another source of potential pollution is the sulfide minerals left over after the refining process, which can form sulfuric acid when exposed to air and water (this is called acid mine drainage, common in mining waste). Still Bright’s process will also produce that material, and the company plans to carefully track it, ensuring that it doesn’t leak into groundwater. 

The company is currently testing its process in the lab in New Jersey and designing a pilot facility in Colorado, which will have the capacity to make about two tons of copper per year. Next will be a demonstration-scale reactor, which will have a 500-ton annual capacity and should come online in 2027 or 2028 at a mine site, Allen says. Still Bright recently raised an $18.7 million seed round to help with the scale-up process.

How the scale-up goes will be a crucial test of the technology, and of whether the typically conservative mining industry will jump on board, UNR’s Jowitt says: “You want to see what happens on an industrial scale. And I think until that happens, people might be a little reluctant to get into this.”

The State of AI: Is China about to win the race? 

The State of AI is a collaboration between the Financial Times & MIT Technology Review examining the ways in which AI is reshaping global power. Every Monday for the next six weeks, writers from both publications will debate one aspect of the generative AI revolution.

In this conversation, the FT’s tech columnist and Innovation Editor John Thornhill and MIT Technology Review’s Caiwei Chen consider the battle between Silicon Valley and Beijing for technological supremacy.

John Thornhill writes:

Viewed from abroad, it seems only a matter of time before China emerges as the AI superpower of the 21st century. 

Here in the West, our initial instinct is to focus on America’s significant lead in semiconductor expertise, its cutting-edge AI research, and its vast investments in data centers. The legendary investor Warren Buffett once warned: “Never bet against America.” He is right that for more than two centuries, no other “incubator for unleashing human potential” has matched the US.

Today, however, China has the means, motive, and opportunity to commit the equivalent of technological murder. When it comes to mobilizing the whole-of-society resources needed to develop and deploy AI to maximum effect, it may be just as rash to bet against China.

The data highlights the trends. In AI publications and patents, China leads. By 2023, China accounted for 22.6% of all citations, compared with 20.9% from Europe and 13% from the US, according to Stanford University’s Artificial Intelligence Index Report 2025. As of 2023, China also accounted for 69.7% of all AI patents. True, the US maintains a strong lead in the top 100 most cited publications (50 versus 34 in 2023), but its share has been steadily declining. 

Similarly, the US outdoes China in top AI research talent, but the gap is narrowing. According to a report from the US Council of Economic Advisers, 59% of the world’s top AI researchers worked in the US in 2019, compared with 11% in China. But by 2022 those figures were 42% and 28%. 

The Trump administration’s tightening of restrictions for foreign H-1B visa holders may well lead more Chinese AI researchers in the US to return home. The talent ratio could move further in China’s favor.

Regarding the technology itself, US-based institutions produced 40 of the world’s most notable AI models in 2024, compared with 15 from China. But Chinese researchers have learned to do more with less, and their strongest large language models—including the open-source DeepSeek-V3 and Alibaba’s Qwen 2.5-Max—surpass the best US models in terms of algorithmic efficiency.

Where China is really likely to excel in future is in applying these open-source models. The latest report from Air Street Capital shows that China has now overtaken the US in terms of monthly downloads of AI models. In AI-enabled fintech, e-commerce, and logistics, China already outstrips the US. 

Perhaps the most intriguing—and potentially the most productive—applications of AI may yet come in hardware, particularly in drones and industrial robotics. With the research field evolving toward embodied AI, China’s advantage in advanced manufacturing will shine through.

Dan Wang, the tech analyst and author of Breakneck, has rightly highlighted the strengths of China’s engineering state in developing manufacturing process knowledge—even if he has also shown the damaging effects of applying that engineering mentality in the social sphere. “China has been growing technologically stronger and economically more dynamic in all sorts of ways,” he told me. “But repression is very real. And it is getting worse in all sorts of ways as well.”

I’d be fascinated to hear from you, Caiwei, about your take on the strengths and weaknesses of China’s AI dream. To what extent will China’s engineered social control hamper its technological ambitions? 

Caiwei Chen responds:

Hi, John!

You’re right that the US still holds a clear lead in frontier research and infrastructure. But “winning” AI can mean many different things. Jeffrey Ding, in his book Technology and the Rise of Great Powers, makes a counterintuitive point: For a general-purpose technology like AI, long-term advantage often comes down to how widely and deeply technologies spread across society. And China is in a good position to win that race (although “murder” might be pushing it a bit!).

Chips will remain China’s biggest bottleneck. Export restrictions have throttled access to top GPUs, pushing buyers into gray markets and forcing labs to recycle or repair banned Nvidia stock. Even as domestic chip programs expand, the performance gap at the very top still stands.

Yet those same constraints have pushed Chinese companies toward a different playbook: pooling compute, optimizing efficiency, and releasing open-weight models. DeepSeek-V3’s training run, for example, used just 2.6 million GPU-hours—far below the scale of US counterparts. Meanwhile, Alibaba’s Qwen models now rank among the most downloaded open-weight models globally, and companies like Zhipu and MiniMax are building competitive multimodal and video models.

China’s industrial policy means new models can move from lab to implementation fast. Local governments and major enterprises are already rolling out reasoning models in administration, logistics, and finance. 

Education is another advantage. Major Chinese universities are implementing AI literacy programs in their curricula, embedding skills before the labor market demands them. The Ministry of Education has also announced plans to integrate AI training for children of all school ages. I’m not sure the phrase “engineering state” fully captures China’s relationship with new technologies, but decades of infrastructure building and top-down coordination have made the system unusually effective at pushing large-scale adoption, often with far less social resistance than you’d see elsewhere. The use at scale, naturally, allows for faster iterative improvements.

Meanwhile, Stanford HAI’s 2025 AI Index found Chinese respondents to be the most optimistic in the world about AI’s future—far more optimistic than populations in the US or the UK. It’s striking, given that China’s economy has slowed since the pandemic, its first sustained slowdown in over two decades. Many in government and industry now see AI as a much-needed spark. Optimism can be powerful fuel, but whether it can persist through slower growth is still an open question.

Social control remains part of the picture, but a different kind of ambition is taking shape. The Chinese AI founders in this new generation are the most globally minded I’ve seen, moving fluidly between Silicon Valley hackathons and pitch meetings in Dubai. Many are fluent in English and in the rhythms of global venture capital. Having watched the last generation wrestle with the burden of a Chinese label, they now build companies that are quietly transnational from the start.

The US may still lead in speed and experimentation, but China could shape how AI becomes part of daily life, both at home and abroad. Speed matters, but speed isn’t the same thing as supremacy.

John Thornhill replies:

You’re right, Caiwei, that speed is not the same as supremacy (and “murder” may be too strong a word). And you’re also right to amplify the point about China’s strength in open-weight models and the US preference for proprietary models. This is not just a struggle between two different countries’ economic models but also between two different ways of deploying technology.  

Even OpenAI’s chief executive, Sam Altman, admitted earlier this year: “We have been on the wrong side of history here and need to figure out a different open-source strategy.” That’s going to be a very interesting subplot to follow. Who’s called that one right?

Further reading on the US-China competition

There’s been a lot of talk about how people may be using generative AI in their daily lives. This story from the FT’s visual story team explores the reality.

From China, FT reporters ask how long Nvidia can maintain its dominance over Chinese rivals.

When it comes to real-world uses, toy and companion devices are a novel but fast-emerging application of AI that is gaining traction in China—and also heading to the US. This MIT Technology Review story explored the trend.

The once-frantic data center buildout in China has hit walls, and as sanctions and AI demand shift, this MIT Technology Review story took an on-the-ground look at how stakeholders are figuring out what comes next.

Here’s why we don’t have a cold vaccine. Yet.

For those of us in the Northern Hemisphere, it’s the season of the sniffles. As the weather turns, we’re all spending more time indoors. The kids have been back at school for a couple of months. And cold germs are everywhere.

My youngest started school this year, and along with artwork and seedlings, she has also been bringing home lots of lovely bugs to share with the rest of her family. As she coughed directly into my face for what felt like the hundredth time, I started to wonder if there was anything I could do to stop this endless cycle of winter illnesses. We all got our flu jabs a month ago. Why couldn’t we get a vaccine to protect us against the common cold, too?

Scientists have been working on this for decades. It turns out that creating a cold vaccine is hard. Really hard.

But not impossible. There’s still hope. Let me explain.

Technically, colds are infections that affect your nose and throat, causing symptoms like sneezing, coughing, and generally feeling like garbage. Unlike some other infections—covid-19, for example—they aren’t defined by the specific virus that causes them.

That’s because there are a lot of viruses that cause colds, including rhinoviruses, adenoviruses, and even seasonal coronaviruses (they don’t all cause covid!). Within those virus families, there are many different variants.

Take rhinoviruses, for example. These viruses are thought to be behind most colds. They’re human viruses—over the course of evolution, they have become perfectly adapted to infecting us, rapidly multiplying in our noses and airways to make us sick. There are around 180 rhinovirus variants, says Gary McLean, a molecular immunologist at Imperial College London in the UK.

Once you factor in the other cold-causing viruses, there are around 280 variants all told. That’s 280 suspects behind the cough that my daughter sprayed into my face. It’s going to be really hard to make a vaccine that will offer protection against all of them.

The second challenge lies in the prevalence of those variants.

Scientists tailor flu and covid vaccines to whatever strain happens to be circulating. Months before flu season starts, the World Health Organization advises countries on which strains their vaccines should protect against. Early recommendations for the Northern Hemisphere can be based on which strains seem to be dominant in the Southern Hemisphere, and vice versa.

That approach wouldn’t work for the common cold, because all those hundreds of variants are circulating all the time, says McLean.

That’s not to say that people haven’t tried to make a cold vaccine. There was a flurry of interest in the 1960s and ’70s, when scientists made valiant efforts to develop vaccines for the common cold. Sadly, they all failed. And we haven’t made much progress since then.

In 2022, a team of researchers reviewed all the research that had been published up to that year. They only identified one clinical trial—and it was conducted back in 1965.

Interest has certainly died down since then, too. Some question whether a cold vaccine is even worth the effort. After all, most colds don’t require much in the way of treatment and don’t last more than a week or two. There are many, many more dangerous viruses out there we could be focusing on.

And while cold viruses do mutate and evolve, no one really expects them to cause the next pandemic, says McLean. They’ve evolved to cause mild disease in humans—something they’ve been doing successfully for a long, long time. Flu viruses—which can cause serious illness, disability, or even death—pose a much bigger risk, so they probably deserve more attention.

But colds are still irritating, disruptive, and potentially harmful. Rhinoviruses are considered to be the leading cause of human infectious disease. They can cause pneumonia in children and older adults. And once you add up doctor visits, medication, and missed work, the economic cost of colds is pretty hefty: a 2003 study put it at $40 billion per year for the US alone.

So it’s reassuring that we needn’t abandon all hope: Some scientists are making progress! McLean and his colleagues are working on ways to prepare the immune systems of people with asthma and lung diseases to potentially protect them from cold viruses. And a team at Emory University has developed a vaccine that appears to protect monkeys from around a third of rhinoviruses.

There’s still a long way to go. Don’t expect a cold vaccine to materialize in the next five years, at least. “We’re not quite there yet,” says Michael Boeckh, an infectious-disease researcher at Fred Hutch Cancer Center in Seattle, Washington. “But will it at some point happen? Possibly.”

At the end of our Zoom call, perhaps after reading the disappointed expression on my sniffling, cold-riddled face (yes, I did end up catching my daughter’s cold), McLean told me he hoped he was “positive enough.” He admitted that he used to be more optimistic about a cold vaccine. But he hasn’t given up hope. He’s even running a trial of a potential new vaccine in people, although he wouldn’t reveal the details.

“It could be done,” he said.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Here’s the latest company planning for gene-edited babies

A West Coast biotech entrepreneur says he’s secured $30 million to form a public-benefit company to study how to safely create genetically edited babies, marking the largest known investment into the taboo technology.  

The new company, called Preventive, is being formed to research so-called “heritable genome editing,” in which the DNA of embryos would be modified by correcting harmful mutations or installing beneficial genes. The goal would be to prevent disease.

Preventive was founded by the gene-editing scientist Lucas Harrington, who described his plans yesterday in a blog post announcing the venture. Preventive, he said, will not rush to try out the technique but instead will dedicate itself “to rigorously researching whether heritable genome editing can be done safely and responsibly.”

Creating genetically edited humans remains controversial, and the first scientist to do it, in China, was imprisoned for three years. The procedure remains illegal in many countries, including the US, and doubts surround its usefulness as a form of medicine.

Still, as gene-editing technology races forward, the temptation to shape the future of the species may prove irresistible, particularly to entrepreneurs keen to put their stamp on the human condition. In theory, even small genetic tweaks could create people who never get heart disease or Alzheimer’s, and who would pass those traits on to their own offspring.

According to Harrington, if the technique proves safe, it “could become one of the most important health technologies of our time.” He has estimated that editing an embryo would cost only about $5,000 and believes regulations could change in the future. 

Preventive is the third US startup this year to say it is pursuing technology to produce gene-edited babies. The first, Bootstrap Bio, based in California, is reportedly seeking seed funding and has an interest in enhancing intelligence. Another, Manhattan Genomics, is also in the formation stage but has not announced funding yet.

As of now, none of these companies have significant staff or facilities, and they largely lack any credibility among mainstream gene-editing scientists. Reached by email, Fyodor Urnov, an expert in gene editing at the University of California, Berkeley, where Harrington studied, said he believes such ventures should not move forward.

Urnov has been a pointed critic of the concept of heritable genome editing, calling it dangerous, misguided, and a distraction from the real benefits of gene editing to treat adults and children. 

In his email, Urnov said the launch of still another venture into the area made him want to “howl with pain.”  

Harrington’s venture was incorporated in Delaware in May 2025, under the name Preventive Medicine PBC. As a public-benefit corporation, it is organized to put its public mission above profits. “If our research shows [heritable genome editing] cannot be done safely, that conclusion is equally valuable to the scientific community and society,” Harrington wrote in his post.

Harrington is a cofounder of Mammoth Biosciences, a gene-editing company pursuing drugs for adults, and remains a board member there.

In recent months, Preventive has sought endorsements from leading figures in genome editing, but according to its post, it had secured only one—from Paula Amato, a fertility doctor at Oregon Health & Science University, who said she had agreed to act as an advisor to the company.

Amato is a member of a US team that has researched embryo editing in the country since 2017, and she has promoted the technology as a way to increase IVF success. That could be the case if editing could correct abnormal embryos, making more available for use in trying to create a pregnancy.

It remains unclear where Preventive’s funding is coming from. Harrington said the $30 million was gathered from “private funders who share our commitment to pursuing this research responsibly.” But he declined to identify those investors other than SciFounders, a venture firm he runs with his personal and business partner Matt Krisiloff, the CEO of the biotech company Conception, which aims to create human eggs from stem cells.

That’s yet another technology that could change reproduction, if it works. Krisiloff is listed as a member of Preventive’s founding team.

The idea of edited babies has received growing attention from figures in the cryptocurrency business. These include Brian Armstrong, the billionaire founder of Coinbase, who has held a series of off-the-record dinners to discuss the technology (which Harrington attended). Armstrong previously argued that the “time is right” for a startup venture in the area.

Will Harborne, a crypto entrepreneur and partner at LongGame Ventures, says he’s “thrilled” to see Preventive launch. If the technology proves safe, he argues, “widespread adoption is inevitable,” calling its use a “societal obligation.”

Harborne’s fund has invested in Herasight, a company that uses genetic tests to rank IVF embryos for future IQ and other traits. That’s another hotly debated technology, but one that has already reached the market, since such testing isn’t strictly regulated. Some have begun to use the term “human enhancement companies” to refer to such ventures.

What’s still lacking is evidence that leading gene-editing specialists support these ventures. Preventive was unsuccessful in establishing a collaboration with at least one key research group, and Urnov says he had harsh words for Manhattan Genomics when that company reached out to him about working together. “I encourage you to stop,” he wrote back. “You will cause zero good and formidable harm.”

Harrington thinks Preventive could change such attitudes, if it shows that it is serious about doing responsible research. “Most scientists I speak with either accept embryo editing as inevitable or are enthusiastic about the potential but hesitate to voice these opinions publicly,” he told MIT Technology Review earlier this year. “Part of being more public about this is to encourage others in the field to discuss this instead of ignoring it.”

It’s never been easier to be a conspiracy theorist

The timing was eerie.

On November 21, 1963, Richard Hofstadter delivered the annual Herbert Spencer Lecture at Oxford University. Hofstadter was a professor of American history at Columbia University who liked to use social psychology to explain political history, the better to defend liberalism from extremism on both sides. His new lecture was titled “The Paranoid Style in American Politics.” 

“I call it the paranoid style,” he began, “simply because no other word adequately evokes the qualities of heated exaggeration, suspiciousness, and conspiratorial fantasy that I have in mind.”

Then, barely 24 hours later, President John F. Kennedy was assassinated in Dallas. This single, shattering event, and subsequent efforts to explain it, popularized a term for something that is clearly the subject of Hofstadter’s talk though it never actually figures in the text: “conspiracy theory.”


This story is part of MIT Technology Review’s series “The New Conspiracy Age,” on how the present boom in conspiracy theories is reshaping science and technology.


Hofstadter’s lecture was later revised into what remains an essential essay, even after decades of scholarship on conspiracy theories, because it lays out, with both rigor and concision, a historical continuity of conspiracist politics. “The paranoid style is an old and recurrent phenomenon in our public life which has been frequently linked with movements of suspicious discontent,” he writes, tracing the phenomenon back to the early years of the republic. Though each upsurge in conspiracy theories feels alarmingly novel—new narratives disseminated through new technologies on a new scale—they all conform to a similar pattern. As Hofstadter demonstrated, the names may change, but the fundamental template remains the same.

His psychological reading of politics has been controversial, but it is psychology, rather than economics or other external circumstances, that best explains the flourishing of conspiracy theories. Subsequent research has indeed shown that we are prone to perceive intentionality and patterns where none exist—and that this helps us feel like a person of consequence. To identify and expose a secret plot is to feel heroic and gain the illusion of control over the bewildering mess of life. 

Like many pioneering theories exposed to the cold light of hindsight, Hofstadter’s has flaws and blind spots. His key oversight was to downplay the paranoid style’s role in mainstream politics up to that point and to underrate its potential to spread in the future.

In 1963, conspiracy theories were still a fringe phenomenon, not because they were inherently unusual but because they had limited reach and were stigmatized by people in power. Now that neither factor holds true, it is obvious how infectious they are. Hofstadter could not, of course, have imagined the information technologies that have become stitched into our lives, nor the fractured media ecosystem of the 21st century, both of which have allowed conspiracist thinking to reach more and more people—to morph, and to bloom like mold. And he could not have predicted that a serial conspiracy theorist would be elected president, twice, and that he would staff his second administration with fellow proponents of the paranoid style. 

But Hofstadter’s concept of the paranoid style remains useful—and ever relevant—because it also describes a way of reading the world. As he put it, “The distinguishing thing about the paranoid style is not that its exponents see conspiracies or plots here or there in history, but they regard a ‘vast’ or ‘gigantic’ conspiracy as the motive force in historical events. History is a conspiracy, set in motion by demonic forces of almost transcendent power, and what is felt to be needed to defeat it is not the usual methods of political give-and-take, but an all-out crusade.”

Needless to say, this mystically unified version of history is not just untrue but impossible. It doesn’t make sense on any level. So why has it proved so alluring for so long—and why does it seem to be getting more popular every day?

What is a conspiracy theory, anyway? 

The first person to define the “conspiracy theory” as a widespread phenomenon was the Austrian-British philosopher Karl Popper, in his 1948 lecture “Towards a Rational Theory of Tradition.” He was not referring to a theory about an individual conspiracy. He was interested in “the conspiracy theory of society”: a particular way of interpreting the course of events. 

He later defined it as “the view that an explanation of a social phenomenon consists in the discovery of the men or groups who are interested in the occurrence of this phenomenon (sometimes it is a hidden interest which has first to be revealed), and who have planned and conspired to bring it about.”

Take an unforeseen catastrophe that inspires fear, anger, and pain—a financial crash, a devastating fire, a terrorist attack, a war. The conventional historian will try to unpick a tangle of different factors, of which malice is only one, and one that may be less significant than dumb luck.

The conspiracist, however, will perceive only sinister calculation behind these terrible events—a fiendishly intricate plot conceived and executed to perfection. Intent is everything. Popper’s observation chimes with Hofstadter’s: “The paranoid’s interpretation of history is … distinctly personal: decisive events are not taken as part of the stream of history, but as the consequences of someone’s will.”

A Culture of Conspiracy
Michael Barkun
UNIVERSITY OF CALIFORNIA PRESS, 2013

According to Michael Barkun in the 2003 book A Culture of Conspiracy, the conspiracist interpretation of events rests on three assumptions: Everything is connected, everything is premeditated, and nothing is as it seems. Following that third assumption means that widely accepted and documented history is, by definition, suspect, and that alternative explanations, however outré, are more likely to be true. As Hannah Arendt wrote in The Origins of Totalitarianism, the purpose of conspiracy theories in 20th-century dictatorships “was always to reveal official history as a joke, to demonstrate a sphere of secret influences in which the visible, traceable, and known historical reality was only the outward façade erected explicitly to fool the people.” (Those dictators, of course, were conspirators themselves, projecting their own love of secret plots onto others.)

Still, it’s important to remember that “conspiracy theory” can mean different things. Barkun describes three varieties, nesting like Russian dolls. 

The “event conspiracy theory” concerns a specific, contained catastrophe, such as the Reichstag fire of 1933 or the origins of covid-19. These theories are relatively plausible, even if they cannot be proved. 

The “systemic conspiracy theory” is much more ambitious, purporting to explain numerous events as the poisonous fruit of a clandestine international plot. Far-fetched though they are, they do at least fixate on named groups, whether the Illuminati or the World Economic Forum. 

Finally, the “superconspiracy theory” is that impossible fantasy in which history itself is a conspiracy, orchestrated by unseen forces of almost supernatural power and malevolence, and encompassing and explaining nothing less than the entire world. The most extreme variants of QAnon posit such a universal conspiracy.

These are very different genres of storytelling. If the first resembles a detective story, then the other two are more akin to fables. Yet one can morph into the other. Take the theories surrounding the Kennedy assassination. The first wave of amateur investigators created event conspiracy theories—relatively self-contained plots with credible assassins such as Cubans or the Mafia. 

But over time, event conspiracy theories have come to seem parochial. By the time of Oliver Stone’s 1991 movie JFK, once-popular plots had been eclipsed by elaborate fictions of gigantic long-running conspiracies in which the murder of the president was just one component. One of Stone’s primary sources was the journalist Jim Marrs, who went on to write books about the Freemasons and UFOs. 

Why limit yourself to a laboriously researched hypothesis about a single event when one giant, dramatic plot can explain them all? 

The theory of everything 

In every systemic or superconspiracy theory, the world is corrupt and unjust and getting worse. An elite cabal of improbably powerful individuals, motivated by pure malignancy, is responsible for most of humanity’s misfortunes. Only through the revelation of hidden knowledge and the cracking of codes by a righteous minority can the malefactors be unmasked and defeated. The morality is as simplistic as the narrative is complex: It is a battle between good and evil.

Notice anything? This is not the language of democratic politics but that of myth and of religion. In fact, it is the fundamental message of the Book of Revelation. Conspiracist thinking can be seen as an offshoot, often but not always secularized, of apocalyptic Christianity, with its alluring web of prophecies, signs, and secrets and its promise of violent resolution. After studying several millenarian sects for his 1957 book The Pursuit of the Millennium, the historian Norman Cohn itemized some common traits, among them “the megalomaniac view of oneself as the Elect, wholly good, abominably persecuted yet assured of ultimate triumph; the attribution of gigantic and demonic powers to the adversary; the refusal to accept the ineluctable limitations and imperfections of human experience.”

Popper similarly considered the conspiracy theory of society “a typical result of the secularization of religious superstition,” adding: “The gods are abandoned. But their place is filled by powerful men or groups … whose wickedness is responsible for all the evils we suffer from.” 

QAnon’s mutation from a conspiracy theory on an internet message board into a movement with the characteristics of a cult makes explicit the kinship between conspiracy theories and apocalyptic religion.

This way of thinking facilitates the creation of dehumanized scapegoats—one of the oldest and most consistent features of a conspiracy theory. During the Middle Ages and beyond, political and religious leaders routinely flung the name “Antichrist” at their opponents. During the Crusades, Christians falsely accused Europe’s Jewish communities of collaborating with Islam or poisoning wells and put them to the sword. Witch-hunters implicated tens of thousands of innocent women in a supposed satanic conspiracy that was said to explain everything from illness to crop failure. “Conspiracy theories are, in the end, not so much an explanation of events as they are an effort to assign blame,” writes Anna Merlan in the 2019 book Republic of Lies.

cover of Republic of Lies
Republic of Lies: American Conspiracy Theorists and Their Surprising Rise to Power
Anna Merlan
METROPOLITAN BOOKS, 2019

But the systemic conspiracy theory as we know it—that is, the ostensibly secular variety—was established three centuries later, with remarkable speed. Some horrified opponents of the French Revolution could not accept that such an upheaval could be simply a popular revolt and needed to attribute it to sinister, unseen forces. They settled on the Illuminati, a Bavarian secret society of Enlightenment intellectuals influenced in part by the rituals and hierarchy of Freemasonry. 

The group was founded by a young law professor named Adam Weishaupt, who used the alias Brother Spartacus. In reality, the Illuminati were few in number, fractious, powerless, and, by the time of the revolution in 1789, defunct. But in the imaginations of two influential writers who published “exposés” of the Illuminati in 1797—Scotland’s John Robison and France’s Augustin Barruel—they were everywhere. Each man erected a wobbling tower of wild supposition and feverish nonsense on a platform of plausible claims and verifiable facts. Robison alleged that the revolution was merely part of “one great and wicked project” whose ultimate aim was to “abolish all religion, overturn every government, and make the world a general plunder and a wreck.”  

The Illuminati’s bogeyman status faded during the 19th century, but the core narrative persisted and proceeded to underpin the notorious hoax The Protocols of the Elders of Zion, first published in a Russian newspaper in 1903. The document’s anonymous author reinvented antisemitism by grafting it onto the story of the one big plot and positing Jews as the secret rulers of the world. In this account, the Elders orchestrate every war, recession, and so on in order to destabilize the world to the point where they can impose tyranny. 

You might ask why, if they have such world-bending power already, they would require a dictatorship. You might also wonder how one group could be responsible for both communism and monopoly capitalism, anarchism and democracy, the theory of evolution, and much more besides. But the vast, self-contradicting incoherence of the plot is what made it impossible to disprove. Nothing was ruled out, so every development could potentially be taken as evidence of the Elders at work.

In 1921, the Protocols were exposed as what the London Times called a “clumsy forgery,” plagiarized from two obscure 19th-century novels, yet they remained the key text of European antisemitism—essentially “true” despite being demonstrably false. “I believe in the inner, but not the factual, truth of the Protocols,” said Joseph Goebbels, who would become Hitler’s minister of propaganda. In Mein Kampf, Hitler claimed that efforts to debunk the Protocols were actually “evidence in favor of their authenticity.” He alleged that Jews, if not stopped, would “one day devour the other nations and become lords of the earth.” Popper and Hofstadter both used the Holocaust as an example of what happens when a conspiracy theorist gains power and makes the paranoid style a governing principle.

esoteric symbols and figures on torn paper, including a witchfinder, George Washington, and a Civil War-era soldier

STEPHANIE ARNETT/MIT TECHNOLOGY REVIEW | PUBLIC DOMAIN

The prominent role of Jewish Bolsheviks like Leon Trotsky and Grigory Zinoviev in the Russian Revolution of 1917 enabled a merger of antisemitism and anticommunism that survived the fascist era. Cold War red-baiters such as Senator Joseph McCarthy and the John Birch Society assigned to communists uncanny degrees of malice and ubiquity, far beyond the real threat of Soviet espionage. In fact, they presented this view as the only logical one. McCarthy claimed that a string of national security setbacks could be explained only if George C. Marshall, the secretary of defense and former secretary of state, was literally a Soviet agent. “How can we account for our present situation unless we believe that men high in this government are concerting to deliver us to disaster?” he asked in 1951. “This must be the product of a great conspiracy, a conspiracy on a scale so immense as to dwarf any previous such venture in the history of man.”

This continuity between antisemitism, anticommunism, and 18th-century paranoia about secret societies isn’t hard to see. General Francisco Franco, Spain’s right-wing dictator, claimed to be fighting a “Judeo-Masonic-Bolshevik” conspiracy. The Nazis persecuted Freemasons alongside Jews and communists. Nesta Webster, the British fascist sympathizer who laundered the Protocols through the British press, revived interest in Robison and Barruel’s books about the Illuminati, which the pro-Nazi Baptist preacher Gerald Winrod then promoted in the US. Even Winston Churchill was briefly persuaded by Webster’s work, citing it in his claims of a “world-wide conspiracy for the overthrow of civilization … from the days of Spartacus-Weishaupt to the days of Karl Marx.”

To follow the chain further, Webster and Winrod’s stew of anticommunism, antisemitism, and anti-Illuminati conspiracy theories influenced the John Birch Society, whose publications would light a fire decades later under the Infowars founder Alex Jones, perhaps the most consequential conspiracy theorist of the early 21st century. 

The villains behind the one big plot might be the Illuminati, the Elders of Zion, the communists, or the New World Order, but they are always essentially the same people, aspiring to officially dominate a world that they already secretly control. The names can be swapped around without much difficulty. While Winrod maintained that “the real conspirators behind the Illuminati were Jews,” the anticommunist William Guy Carr conversely argued that antisemitic paranoia “plays right into the hands of the Illuminati.” These days, it might be the World Economic Forum or George Soros; liberal internationalists with aspirations to change the world are easily cast as the new Illuminati, working toward establishing one world government.

Finding connection

The main reason conspiracy theorists have abandoned the relatively hard work of micro-conspiracies in favor of grander schemes is that it has become much easier to draw lines between objectively unrelated people and events. Information technology is, after all, also misinformation technology. That’s nothing new. 

The witch craze could not have traveled as far or lasted as long without the printing press. Malleus Maleficarum (Hammer of the Witches), a 1486 screed by the German witch-hunter Heinrich Kramer, became the best-selling witch-hunter’s handbook, going through 28 editions by 1600. Similarly, it was the books and pamphlets “exposing” the Illuminati that allowed those ideas to spread everywhere following the French Revolution. And in the early 20th century, the introduction of the radio facilitated fascist propaganda. During the 1930s, the Nazi-sympathizing Catholic priest and radio host Charles Coughlin broadcast his antisemitic conspiracy theories to tens of millions of Americans on dozens of stations. 

The internet has, of course, vastly accelerated and magnified the spread of conspiracy theories. It is hard to recall now, but in the early days it was sweetly assumed that the internet would improve the world by democratizing access to information. While this initial idealism survives in doughty enclaves such as Wikipedia, most of us vastly underestimated the human appetite for false information that confirms the consumer’s biases.

Politicians, too, were slow to recognize the corrosive power of free-flowing conspiracy theories. For a long time, the more fantastical assertions of McCarthy and the Birchers were kept at arm’s length from the political mainstream, but that distance began to diminish rapidly during the 1990s, as right-wing activists built a cottage industry of outrageous claims about Bill and Hillary Clinton to advance the idea that they were not just corrupt or dishonest but actively evil and even satanic. This became an article of faith in the information ecosystem of internet message boards and talk radio, which expanded over time to include Fox News, blogs, and social media. So when Democrats nominated Hillary Clinton in 2016, a significant portion of the American public saw a monster at the heart of an organized crime ring whose activities included human trafficking and murder.

Nobody could make the same mistake about misinformation today. One could hardly design a more fertile breeding ground for conspiracy theories than social media. The algorithms of YouTube, Facebook, TikTok, and X, which operate on the principle that rage is engaging, have turned into radicalization machines. When these platforms took off during the second half of the 2010s, they offered a seamless system in which people could come across exciting new information, share it, connect it to other strands of misinformation, and weave those strands into self-contained, self-affirming communities, all without leaving the house.

It’s not hard to see how the problem will continue to grow as AI burrows ever deeper into our everyday lives. Elon Musk has tinkered with the AI chatbot Grok to produce information that conforms to his personal beliefs rather than to actual facts. This outcome does not even have to be intentional. Chatbots have been shown to validate and intensify some users’ beliefs, even if they’re rooted in paranoia or hubris. If you believe that you’re the hero in an epic battle between good and evil, then your chatbot is inclined to agree with you.

It’s all this digital noise that has brought about the virtual collapse of the event conspiracy theory. The industry produced by the JFK assassination may have been pseudo-scholarship, but at least researchers went through the motions of scrutinizing documents, gathering evidence, and putting forward a somewhat consistent hypothesis. However misguided the conclusions, that kind of conspiracy theory required hard work and commitment. 

Commuters reading of John F. Kennedy's assassination in the newspaper

CARL MYDANS/THE LIFE PICTURE COLLECTION/SHUTTERSTOCK

Today’s online conspiracy theorists, by contrast, are shamelessly sloppy. Events such as the attack on Paul Pelosi, husband of former US House Speaker Nancy Pelosi, in October 2022, or the murders of former Minnesota House speaker Melissa Hortman and her husband, Mark, in June 2025, or even more recently the killing of Charlie Kirk have inspired theories overnight, which then evaporate just as quickly. The point of such theories, if they even merit that label, is not to seek the truth but to defame political opponents and turn victims into villains.

Before he even ran for office, Trump was notorious for promoting false stories about Barack Obama’s birthplace or vaccine safety. Heir to Joseph McCarthy, Barry Goldwater, and the John Birch Society, he is the lurid incarnation of the paranoid style. He routinely damns his opponents as “evil” or “very bad people” and speaks of America’s future in apocalyptic terms. It is no surprise, then, that every member of the administration must subscribe to Trump’s false claim that the 2020 election was stolen from him, or that celebrity conspiracy theorists are now in charge of national intelligence, public health, and the FBI. Former Democrats who hold such roles, like Tulsi Gabbard and Robert F. Kennedy Jr., have entered Trump’s orbit through the gateway of conspiracy theories. They illustrate how this mindset can create counterintuitive alliances that collapse conventional political distinctions and scramble traditional notions of right and left. 

The antidemocratic implications of what’s happening today are obvious. “Since what is at stake is always a conflict between absolute good and absolute evil, the quality needed is not a willingness to compromise but the will to fight things out to the finish,” Hofstadter wrote. “Nothing but complete victory will do.” 

Meeting the moment

It’s easy to feel helpless in the face of this epistemic chaos, because one other foundational feature of religious prophecy is that it can be disproved without being discredited: Perhaps the world does not come to an end on the predicted day, but that great day will still come. The prophet is never wrong—he is just not proven right yet.

The same flexibility is enjoyed by systemic conspiracy theories. The plotters never actually succeed, nor are they ever decisively exposed, yet the theory remains intact. Recently, claims that covid-19 was either exaggerated or wholly fabricated in order to crush civil liberties did not wither away once lockdown restrictions were lifted. By that logic, surely the so-called “plandemic” was a complete failure? No matter. This type of conspiracy theory does not have to make sense.

Scholars who have attempted to methodically repudiate conspiracy theories about the 9/11 attacks or the JFK assassination have found that even once all the supporting pillars have been knocked away, the edifice still stands. It is increasingly clear that “conspiracy theory” is a misnomer and what we are really dealing with is conspiracy belief—as Hofstadter suggested, a worldview buttressed with numerous cognitive biases and impregnable to refutation. As Goebbels implied, the “factual truth” pales in comparison to the “inner truth,” which is whatever somebody believes it to be.

But at the very least, we can identify the entirely different realities constructed by believers and learn to recognize their common roots, tropes, and motives. 

Those different realities, after all, have proved remarkably consistent in shape if not in their details. What we saw then, we see now. The Illuminati were Enlightenment idealists whose liberal agenda to “dispel the clouds of superstition and of prejudice,” in Weishaupt’s words, was demonized as wicked and destructive. If they could be shown to have fomented the French Revolution, then the whole revolution was a sham. Similarly, today’s radical right recasts every plank of progressive politics as an anti-American conspiracy. The far-right Great Replacement Theory, for instance, posits that immigration policy is a calculated effort by elites to supplant the native population with outsiders. This all flows directly from what thinkers such as Hofstadter, Popper, and Arendt diagnosed more than 60 years ago. 

What is dangerously novel, at least in democracies, is conspiracy theories’ ubiquity, reach, and power to affect the lives of ordinary citizens. So understanding the paranoid style better equips us to counteract it in our daily existence. At minimum, this knowledge empowers us to spot the flaws and biases in our own thinking and stop ourselves from tumbling down dangerous rabbit holes. 

cover of book
The Paranoid Style in American Politics and Other Essays
Richard Hofstadter
VINTAGE BOOKS, 1967

On November 18, 1961, President Kennedy—almost exactly two years before Hofstadter’s lecture and his own assassination—offered his own definition of the paranoid style in a speech to the Democratic Party of California. “There have always been those on the fringes of our society who have sought to escape their own responsibility by finding a simple solution, an appealing slogan, or a convenient scapegoat,” he said. “At times these fanatics have achieved a temporary success among those who lack the will or the wisdom to face unpleasant facts or unsolved problems. But in time the basic good sense and stability of the great American consensus has always prevailed.” 

We can only hope that the consensus begins to see the rolling chaos and naked aggression of Trump’s two administrations as weighty evidence against the conspiracy theory of society. The notion that any group could successfully direct the larger mess of this moment in the world, let alone the course of history for decades, undetected, is palpably absurd. The important thing is not that the details of this or that conspiracy theory are wrong; it is that the entire premise behind this worldview is false. 

Not everything is connected, not everything is premeditated, and many things are in fact just as they seem. 

Dorian Lynskey is the author of several books, including The Ministry of Truth: The Biography of George Orwell’s 1984 and Everything Must Go: The Stories We Tell About the End of the World. He cohosts the podcast Origin Story and co-writes the Origin Story books with Ian Dunt. 

Can “The Simpsons” really predict the future?

According to internet listicles, the animated sitcom The Simpsons has predicted the future anywhere from 17 to 55 times. 

“As you know, we’ve inherited quite a budget crunch from President Trump,” the newly sworn-in President Lisa Simpson declared way back in 2000, 17 years before the real estate mogul was inaugurated as the 45th president of the United States. Earlier, in 1993, an episode of the show featured the “Osaka flu,” which some felt was eerily prescient of the coronavirus pandemic. And—somehow!—Simpsons writers just knew that the US Olympic curling team would beat Sweden eight whole years before they did it.

still frame from The Simpson where Principal Skinner's mother stands next to him on the Olympic podium and leans to heckle the Swedish curling team
After Team USA wins, Principal Skinner’s mother gloats to the Swedish curling team, “Tell me how my ice tastes.”
THE SIMPSONS ™ & © 20TH TELEVISION

The 16th-century seer Nostradamus made 942 predictions. To date, there have been some 800 episodes of The Simpsons. How does it feel to be a showrunner turned soothsayer? What’s it like when the world combs your jokes for prophecies and thinks you knew about 9/11 four years before it happened? 



Al Jean has worked on The Simpsons on and off since 1989; he is the cartoon’s longest-serving showrunner. Here, he reflects on the conspiracy theories that have sprung from these apparent prophecies. 

When did you first start hearing rumblings about The Simpsons having predicted the future?

It definitely got huge when Donald Trump was elected president in 2016 after we “predicted” it in an episode from 2000. The original pitch for the line was Johnny Depp, and that was in for a while, but it was decided that it wasn’t as funny as Trump. 

What people don’t remember is that in the year 2000, it wasn’t such a crazy name to pick, because Trump was talking about running as a Reform Party candidate. So, like a lot of our “predictions,” it’s an educated guess. I won’t comment on whether it’s a good thing that it happened, but I will say that it’s not the most illogical person you could have picked for that joke. And we did say that following him was Lisa, and now that he’s been elected again, we could still have Lisa next time—that’s my hope! 

How did it make you feel that people thought you were a prophet? 

Again, apart from the election’s impact on the free world, I would say that we were amused that we had said something that came true. Then we made a short video called “Trumptastic Voyage” in 2015 that predicted he would run in 2016, 2020, 2024, and 2028, so we’re three-quarters of the way through that arduous prediction.

But I like people thinking that I know something about the future. It’s a good reputation to have. You only need half a dozen things that were either on target or even uncanny to be considered an oracle. Or maybe we’re from the future—I’ll let you decide! 

Why do you think people are so drawn to the idea that The Simpsons is prophetic? 

Maybe it slightly satisfies a yearning people have for meaning, especially now that life is so random.

Would you say that most of your predictions have logical explanations? 

It’s cherry-picking—there are 35 years of material. How many of the things we said came true, versus how many did not? 

In 2014, we predicted Germany would win the World Cup in Brazil. It’s because we wanted a joke where the Brazilians were sad and they were singing a sad version of the “Olé, olé” song. So we had to think about who would be likely to win if Brazil lost, and Germany was the number two, so they did win, but it wasn’t the craziest prediction. In the same episode, we predicted that FIFA would be corrupt, which is a very easy prediction! So a lot of them fall under that category. 

In one scene I wrote, Marge holds a book called Curious George and the Ebola Virus—people go, “Oh my God! He predicted that!” Well, Ebola existed when I wrote the joke. I’d seen a movie about it called Outbreak. It’s like predicting the Black Death. 

But have any of your so-called “predictions” made even you pause? 

There are a couple of really bizarre coincidences. There was a brochure in a New York episode [which aired in 1997] that said “New York, $9” next to a picture of the trade towers looking like an 11. That was nuts. It still sends chills down me. The writer of that episode, Ian Maxtone-Graham, was nonplussed. He really couldn’t believe it. 

THE SIMPSONS ™ & © 20TH TELEVISION

It’s not like we would’ve made that knowing what was going to come, which we didn’t. And people have advanced conspiracy theories that we’re all Ivy League writers who knew … it’s preposterous stuff that people say. There’s also a thing people do that we don’t really love, which is they fake predictions. So after something happens, they’ll concoct a Simpsons frame, and it’s not something that ever aired. [Editor’s note: People faked Simpsons screenshots seeming to predict the 2024 Baltimore bridge collapse and the 2019 Notre-Dame fire. Images from the real “Osaka flu” episode were also edited to include the word “coronavirus.”] 

How does that make you feel? Is it frustrating?

It shows you how you can really convince people of something that’s not the case. Our small denial doesn’t get as much attention. 

As far as internet conspiracies go, where would you rate the idea that The Simpsons can predict the future? 

I hope it’s harmless. I think it’s really lodged in the internet very well. I don’t think it’s disappearing anytime soon. I’m sure for the rest of my life I’ll be hearing about what a group of psychics and seers I was part of. If we really could predict that well, we’d all be retired from betting on football. Although, advice to readers: Don’t bet on football. 

THE SIMPSONS ™ & © 20TH TELEVISION

Still, it is a tiny part of an alarming trend: people being unable to distinguish fact from fiction. And I have that trouble too. You read something, and your natural inclination has always been, “Well, I read it—it’s true.” And you have to really be skeptical about that. 

Can I ask you to predict a solution to all of this?

I think my only solution is: Look at your phone less and read more books.

This interview has been edited for length and clarity. 

Amelia Tait is a London-based freelance features journalist who writes about culture, trends, and unusual phenomena.