It’s never been easier to be a conspiracy theorist

The timing was eerie.

On November 21, 1963, Richard Hofstadter delivered the annual Herbert Spencer Lecture at Oxford University. Hofstadter was a professor of American history at Columbia University who liked to use social psychology to explain political history, the better to defend liberalism from extremism on both sides. His new lecture was titled “The Paranoid Style in American Politics.” 

“I call it the paranoid style,” he began, “simply because no other word adequately evokes the qualities of heated exaggeration, suspiciousness, and conspiratorial fantasy that I have in mind.”

Then, barely 24 hours later, President John F. Kennedy was assassinated in Dallas. This single, shattering event, and subsequent efforts to explain it, popularized a term for something that is clearly the subject of Hofstadter’s talk, though it never actually figures in the text: “conspiracy theory.”


This story is part of MIT Technology Review’s series “The New Conspiracy Age,” on how the present boom in conspiracy theories is reshaping science and technology.


Hofstadter’s lecture was later revised into what remains an essential essay, even after decades of scholarship on conspiracy theories, because it lays out, with both rigor and concision, a historical continuity of conspiracist politics. “The paranoid style is an old and recurrent phenomenon in our public life which has been frequently linked with movements of suspicious discontent,” he writes, tracing the phenomenon back to the early years of the republic. Though each upsurge in conspiracy theories feels alarmingly novel—new narratives disseminated through new technologies on a new scale—they all conform to a similar pattern. As Hofstadter demonstrated, the names may change, but the fundamental template remains the same.

His psychological reading of politics has been controversial, but it is psychology, rather than economics or other external circumstances, that best explains the flourishing of conspiracy theories. Subsequent research has indeed shown that we are prone to perceive intentionality and patterns where none exist—and that this helps us feel like people of consequence. To identify and expose a secret plot is to feel heroic and gain the illusion of control over the bewildering mess of life.

Like many pioneering theories exposed to the cold light of hindsight, Hofstadter’s has flaws and blind spots. His key oversight was to downplay the paranoid style’s role in mainstream politics up to that point and underrate its potential to spread in the future.

In 1963, conspiracy theories were still a fringe phenomenon, not because they were inherently unusual but because they had limited reach and were stigmatized by people in power. Now that neither factor holds true, it is obvious how infectious they are. Hofstadter could not, of course, have imagined the information technologies that have become stitched into our lives, nor the fractured media ecosystem of the 21st century, both of which have allowed conspiracist thinking to reach more and more people—to morph, and to bloom like mold. And he could not have predicted that a serial conspiracy theorist would be elected president, twice, and that he would staff his second administration with fellow proponents of the paranoid style. 

But Hofstadter’s concept of the paranoid style remains useful—and ever relevant—because it also describes a way of reading the world. As he put it, “The distinguishing thing about the paranoid style is not that its exponents see conspiracies or plots here and there in history, but that they regard a ‘vast’ or ‘gigantic’ conspiracy as the motive force in historical events. History is a conspiracy, set in motion by demonic forces of almost transcendent power, and what is felt to be needed to defeat it is not the usual methods of political give-and-take, but an all-out crusade.”

Needless to say, this mystically unified version of history is not just untrue but impossible. It doesn’t make sense on any level. So why has it proved so alluring for so long—and why does it seem to be getting more popular every day?

What is a conspiracy theory, anyway? 

The first person to define the “conspiracy theory” as a widespread phenomenon was the Austrian-British philosopher Karl Popper, in his 1948 lecture “Towards a Rational Theory of Tradition.” He was not referring to a theory about an individual conspiracy. He was interested in “the conspiracy theory of society”: a particular way of interpreting the course of events. 

He later defined it as “the view that an explanation of a social phenomenon consists in the discovery of the men or groups who are interested in the occurrence of this phenomenon (sometimes it is a hidden interest which has first to be revealed), and who have planned and conspired to bring it about.”

Take an unforeseen catastrophe that inspires fear, anger, and pain—a financial crash, a devastating fire, a terrorist attack, a war. The conventional historian will try to unpick a tangle of different factors, of which malice is only one, and one that may be less significant than dumb luck.

The conspiracist, however, will perceive only sinister calculation behind these terrible events—a fiendishly intricate plot conceived and executed to perfection. Intent is everything. Popper’s observation chimes with Hofstadter’s: “The paranoid’s interpretation of history is … distinctly personal: decisive events are not taken as part of the stream of history, but as the consequences of someone’s will.”

A Culture of Conspiracy
Michael Barkun
UNIVERSITY OF CALIFORNIA PRESS, 2013

According to Michael Barkun in the 2003 book A Culture of Conspiracy, the conspiracist interpretation of events rests on three assumptions: Everything is connected, everything is premeditated, and nothing is as it seems. Following that third assumption means that widely accepted and documented history is, by definition, suspect, and alternative explanations, however outré, are more likely to be true. As Hannah Arendt wrote in The Origins of Totalitarianism, the purpose of conspiracy theories in 20th-century dictatorships “was always to reveal official history as a joke, to demonstrate a sphere of secret influences in which the visible, traceable, and known historical reality was only the outward façade erected explicitly to fool the people.” (Those dictators, of course, were conspirators themselves, projecting their own love of secret plots onto others.)

Still, it’s important to remember that “conspiracy theory” can mean different things. Barkun describes three varieties, nesting like Russian dolls. 

The “event conspiracy theory” concerns a specific, contained catastrophe, such as the Reichstag fire of 1933 or the origins of covid-19. These theories are relatively plausible, even if they cannot be proved.

The “systemic conspiracy theory” is much more ambitious, purporting to explain numerous events as the poisonous fruit of a single clandestine international plot. Far-fetched though such theories are, they do at least fixate on named groups, whether the Illuminati or the World Economic Forum.

Finally, the “superconspiracy theory” is that impossible fantasy in which history itself is a conspiracy, orchestrated by unseen forces of almost supernatural power and malevolence; it seeks to encompass and explain nothing less than the entire world. The most extreme variants of QAnon posit such a universal conspiracy.

These are very different genres of storytelling. If the first resembles a detective story, then the other two are more akin to fables. Yet one can morph into the other. Take the theories surrounding the Kennedy assassination. The first wave of amateur investigators created event conspiracy theories—relatively self-contained plots with plausible culprits such as Cubans or the Mafia.

But over time, event conspiracy theories have come to seem parochial. By the time of Oliver Stone’s 1991 movie JFK, once-popular plots had been eclipsed by elaborate fictions of gigantic long-running conspiracies in which the murder of the president was just one component. One of Stone’s primary sources was the journalist Jim Marrs, who went on to write books about the Freemasons and UFOs. 

Why limit yourself to a laboriously researched hypothesis about a single event when one giant, dramatic plot can explain them all? 

The theory of everything 

In every systemic or superconspiracy theory, the world is corrupt and unjust and getting worse. An elite cabal of improbably powerful individuals, motivated by pure malignancy, is responsible for most of humanity’s misfortunes. Only through the revelation of hidden knowledge and the cracking of codes by a righteous minority can the malefactors be unmasked and defeated. The morality is as simplistic as the narrative is complex: It is a battle between good and evil.

Notice anything? This is not the language of democratic politics but that of myth and of religion. In fact, it is the fundamental message of the Book of Revelation. Conspiracist thinking can be seen as an offshoot, often but not always secularized, of apocalyptic Christianity, with its alluring web of prophecies, signs, and secrets and its promise of violent resolution. After studying several millenarian sects for his 1957 book The Pursuit of the Millennium, the historian Norman Cohn itemized some common traits, among them “the megalomaniac view of oneself as the Elect, wholly good, abominably persecuted yet assured of ultimate triumph; the attribution of gigantic and demonic powers to the adversary; the refusal to accept the ineluctable limitations and imperfections of human experience.”

Popper similarly considered the conspiracy theory of society “a typical result of the secularization of religious superstition,” adding: “The gods are abandoned. But their place is filled by powerful men or groups … whose wickedness is responsible for all the evils we suffer from.” 

QAnon’s mutation from a conspiracy theory on an internet message board into a movement with the characteristics of a cult makes explicit the kinship between conspiracy theories and apocalyptic religion.

This way of thinking facilitates the creation of dehumanized scapegoats—one of the oldest and most consistent features of a conspiracy theory. During the Middle Ages and beyond, political and religious leaders routinely flung the name “Antichrist” at their opponents. During the Crusades, Christians falsely accused Europe’s Jewish communities of collaborating with Islam or poisoning wells and put them to the sword. Witch-hunters implicated tens of thousands of innocent women in a supposed satanic conspiracy that was said to explain everything from illness to crop failure. “Conspiracy theories are, in the end, not so much an explanation of events as they are an effort to assign blame,” writes Anna Merlan in the 2019 book Republic of Lies.

Republic of Lies: American Conspiracy Theorists and Their Surprising Rise to Power
Anna Merlan
METROPOLITAN BOOKS, 2019

But the systemic conspiracy theory as we know it—that is, the ostensibly secular variety—was established three centuries later, with remarkable speed. Some horrified opponents of the French Revolution could not accept that such an upheaval could be simply a popular revolt and needed to attribute it to sinister, unseen forces. They settled on the Illuminati, a Bavarian secret society of Enlightenment intellectuals influenced in part by the rituals and hierarchy of Freemasonry. 

The group was founded by a young law professor named Adam Weishaupt, who used the alias Brother Spartacus. In reality, the Illuminati were few in number, fractious, powerless, and, by the time of the revolution in 1789, defunct. But in the imaginations of two influential writers who published “exposés” of the Illuminati in 1797—Scotland’s John Robison and France’s Augustin Barruel—they were everywhere. Each man erected a wobbling tower of wild supposition and feverish nonsense on a platform of plausible claims and verifiable facts. Robison alleged that the revolution was merely part of “one great and wicked project” whose ultimate aim was to “abolish all religion, overturn every government, and make the world a general plunder and a wreck.”  

The Illuminati’s bogeyman status faded during the 19th century, but the core narrative persisted and proceeded to underpin the notorious hoax The Protocols of the Elders of Zion, first published in a Russian newspaper in 1903. The document’s anonymous author reinvented antisemitism by grafting it onto the story of the one big plot and positing Jews as the secret rulers of the world. In this account, the Elders orchestrate every war, recession, and so on in order to destabilize the world to the point where they can impose tyranny. 

You might ask why, if they have such world-bending power already, they would require a dictatorship. You might also wonder how one group could be responsible for both communism and monopoly capitalism, anarchism and democracy, the theory of evolution, and much more besides. But the vast, self-contradicting incoherence of the plot is what made it impossible to disprove. Nothing was ruled out, so every development could potentially be taken as evidence of the Elders at work.

In 1921, the Protocols were exposed as what the London Times called a “clumsy forgery,” plagiarized from two obscure 19th-century novels, yet they remained the key text of European antisemitism—essentially “true” despite being demonstrably false. “I believe in the inner, but not the factual, truth of the Protocols,” said Joseph Goebbels, who would become Hitler’s minister of propaganda. In Mein Kampf, Hitler claimed that efforts to debunk the Protocols were actually “evidence in favor of their authenticity.” He alleged that Jews, if not stopped, would “one day devour the other nations and become lords of the earth.” Popper and Hofstadter both used the Holocaust as an example of what happens when a conspiracy theorist gains power and makes the paranoid style a governing principle.


The prominent role of Jewish Bolsheviks like Leon Trotsky and Grigory Zinoviev in the Russian Revolution of 1917 enabled a merger of antisemitism and anticommunism that survived the fascist era. Cold War red-baiters such as Senator Joseph McCarthy and the John Birch Society assigned to communists uncanny degrees of malice and ubiquity, far beyond the real threat of Soviet espionage. In fact, they presented this view as the only logical one. McCarthy claimed that a string of national security setbacks could be explained only if George C. Marshall, the secretary of defense and former secretary of state, was literally a Soviet agent. “How can we account for our present situation unless we believe that men high in this government are concerting to deliver us to disaster?” he asked in 1951. “This must be the product of a great conspiracy so immense as to dwarf any previous such venture in the history of man.”

This continuity between antisemitism, anticommunism, and 18th-century paranoia about secret societies isn’t hard to see. General Francisco Franco, Spain’s right-wing dictator, claimed to be fighting a “Judeo-Masonic-Bolshevik” conspiracy. The Nazis persecuted Freemasons alongside Jews and communists. Nesta Webster, the British fascist sympathizer who laundered the Protocols through the British press, revived interest in Robison and Barruel’s books about the Illuminati, which the pro-Nazi Baptist preacher Gerald Winrod then promoted in the US. Even Winston Churchill was briefly persuaded by Webster’s work, citing it in his claims of a “world-wide conspiracy for the overthrow of civilization … from the days of Spartacus-Weishaupt to the days of Karl Marx.”

To follow the chain further, Webster and Winrod’s stew of anticommunism, antisemitism, and anti-Illuminati conspiracy theories influenced the John Birch Society, whose publications would light a fire decades later under the Infowars founder Alex Jones, perhaps the most consequential conspiracy theorist of the early 21st century. 

The villains behind the one big plot might be the Illuminati, the Elders of Zion, the communists, or the New World Order, but they are always essentially the same people, aspiring to officially dominate a world that they already secretly control. The names can be swapped around without much difficulty. While Winrod maintained that “the real conspirators behind the Illuminati were Jews,” the anticommunist William Guy Carr conversely argued that antisemitic paranoia “plays right into the hands of the Illuminati.” These days, it might be the World Economic Forum or George Soros; liberal internationalists with aspirations to change the world are easily cast as the new Illuminati, working toward establishing one world government.

Finding connection

The main reason that conspiracy theorists have lost interest in the relatively hard work of micro-conspiracies in favor of grander schemes is that it has become much easier to draw lines between objectively unrelated people and events. Information technology is, after all, also misinformation technology. That’s nothing new. 

The witch craze could not have traveled as far or lasted as long without the printing press. Malleus Maleficarum (Hammer of the Witches), a 1486 screed by the German witch-hunter Heinrich Kramer, became the best-selling witch-hunter’s handbook, going through 28 editions by 1600. Similarly, it was the books and pamphlets “exposing” the Illuminati that allowed those ideas to spread everywhere following the French Revolution. And in the early 20th century, the introduction of the radio facilitated fascist propaganda. During the 1930s, the Nazi-sympathizing Catholic priest and radio host Charles Coughlin broadcast his antisemitic conspiracy theories to tens of millions of Americans on dozens of stations. 

The internet has, of course, vastly accelerated and magnified the spread of conspiracy theories. It is hard to recall now, but in the early days it was sweetly assumed that the internet would improve the world by democratizing access to information. While this initial idealism survives in doughty enclaves such as Wikipedia, most of us vastly underestimated the human appetite for false information that confirms the consumer’s biases.

Politicians, too, were slow to recognize the corrosive power of free-flowing conspiracy theories. For a long time, the more fantastical assertions of McCarthy and the Birchers were kept at arm’s length from the political mainstream, but that distance began to diminish rapidly during the 1990s, as right-wing activists built a cottage industry of outrageous claims about Bill and Hillary Clinton to advance the idea that they were not just corrupt or dishonest but actively evil and even satanic. This became an article of faith in the information ecosystem of internet message boards and talk radio, which expanded over time to include Fox News, blogs, and social media. So when Democrats nominated Hillary Clinton in 2016, a significant portion of the American public saw a monster at the heart of an organized crime ring whose activities included human trafficking and murder.

Nobody could make the same mistake about misinformation today. One could hardly design a more fertile breeding ground for conspiracy theories than social media. The algorithms of YouTube, Facebook, TikTok, and X, which operate on the principle that rage is engaging, have turned into radicalization machines. When these platforms took off during the second half of the 2010s, they offered a seamless system in which people were able to come across exciting new information, share it, connect it to other strands of misinformation, and weave them into self-contained, self-affirming communities, all without leaving the house.

It’s not hard to see how the problem will continue to grow as AI burrows ever deeper into our everyday lives. Elon Musk has tinkered with the AI chatbot Grok to produce information that conforms to his personal beliefs rather than to actual facts. This outcome does not even have to be intentional. Chatbots have been shown to validate and intensify some users’ beliefs, even if they’re rooted in paranoia or hubris. If you believe that you’re the hero in an epic battle between good and evil, then your chatbot is inclined to agree with you.

It’s all this digital noise that has brought about the virtual collapse of the event conspiracy theory. The industry produced by the JFK assassination may have been pseudo-scholarship, but at least researchers went through the motions of scrutinizing documents, gathering evidence, and putting forward a somewhat consistent hypothesis. However misguided the conclusions, that kind of conspiracy theory required hard work and commitment. 


Today’s online conspiracy theorists, by contrast, are shamelessly sloppy. Events such as the attack on Paul Pelosi, husband of former US House Speaker Nancy Pelosi, in October 2022, or the murders of former Minnesota House speaker Melissa Hortman and her husband, Mark, in June 2025, or even more recently the killing of Charlie Kirk, have inspired theories overnight, which then evaporate just as quickly. The point of such theories, if they even merit that label, is not to seek the truth but to defame political opponents and turn victims into villains.

Before he even ran for office, Trump was notorious for promoting false stories about Barack Obama’s birthplace or vaccine safety. Heir to Joseph McCarthy, Barry Goldwater, and the John Birch Society, he is the lurid incarnation of the paranoid style. He routinely damns his opponents as “evil” or “very bad people” and speaks of America’s future in apocalyptic terms. It is no surprise, then, that every member of the administration must subscribe to Trump’s false claim that the 2020 election was stolen from him, or that celebrity conspiracy theorists are now in charge of national intelligence, public health, and the FBI. Former Democrats who hold such roles, like Tulsi Gabbard and Robert F. Kennedy Jr., have entered Trump’s orbit through the gateway of conspiracy theories. They illustrate how this mindset can create counterintuitive alliances that collapse conventional political distinctions and scramble traditional notions of right and left. 

The antidemocratic implications of what’s happening today are obvious. “Since what is at stake is always a conflict between absolute good and absolute evil, the quality needed is not a willingness to compromise but the will to fight things out to the finish,” Hofstadter wrote. “Nothing but complete victory will do.” 

Meeting the moment

It’s easy to feel helpless in the face of this epistemic chaos. One foundational feature of religious prophecy is that it can be disproved without being discredited: Perhaps the world does not come to an end on the predicted day, but that great day will still come. The prophet is never wrong; he is just not yet proven right.

The same flexibility is enjoyed by systemic conspiracy theories. The plotters never actually succeed, nor are they ever decisively exposed, yet the theory remains intact. Claims that covid-19 was either exaggerated or wholly fabricated in order to crush civil liberties did not wither away once lockdown restrictions were lifted. By that logic, surely the so-called “plandemic” was a complete failure? No matter. This type of conspiracy theory does not have to make sense.

Scholars who have attempted to methodically repudiate conspiracy theories about the 9/11 attacks or the JFK assassination have found that even once all the supporting pillars have been knocked away, the edifice still stands. It is increasingly clear that “conspiracy theory” is a misnomer and what we are really dealing with is conspiracy belief—as Hofstadter suggested, a worldview buttressed with numerous cognitive biases and impregnable to refutation. As Goebbels implied, the “factual truth” pales in comparison to the “inner truth,” which is whatever somebody believes it to be.

But at the very least, what we can do is identify the entirely different realities constructed by believers and recognize and internalize their common roots, tropes, and motives. 

Those different realities, after all, have proved remarkably consistent in shape if not in their details. What we saw then, we see now. The Illuminati were Enlightenment idealists whose liberal agenda to “dispel the clouds of superstition and of prejudice,” in Weishaupt’s words, was demonized as wicked and destructive. If they could be shown to have fomented the French Revolution, then the whole revolution was a sham. Similarly, today’s radical right recasts every plank of progressive politics as an anti-American conspiracy. The far-right Great Replacement Theory, for instance, posits that immigration policy is a calculated effort by elites to supplant the native population with outsiders. This all flows directly from what thinkers such as Hofstadter, Popper, and Arendt diagnosed more than 60 years ago. 

What is dangerously novel, at least in democracies, is conspiracy theories’ ubiquity, reach, and power to affect the lives of ordinary citizens. So understanding the paranoid style better equips us to counteract it in our daily existence. At minimum, this knowledge empowers us to spot the flaws and biases in our own thinking and stop ourselves from tumbling down dangerous rabbit holes. 

The Paranoid Style in American Politics and Other Essays
Richard Hofstadter
VINTAGE BOOKS, 1967

On November 18, 1961, President Kennedy—almost exactly two years before Hofstadter’s lecture and his own assassination—offered his own definition of the paranoid style in a speech to the Democratic Party of California. “There have always been those on the fringes of our society who have sought to escape their own responsibility by finding a simple solution, an appealing slogan, or a convenient scapegoat,” he said. “At times these fanatics have achieved a temporary success among those who lack the will or the wisdom to face unpleasant facts or unsolved problems. But in time the basic good sense and stability of the great American consensus has always prevailed.” 

We can only hope that the consensus begins to see the rolling chaos and naked aggression of Trump’s two administrations as weighty evidence against the conspiracy theory of society. The notion that any group could successfully direct the larger mess of this moment in the world, let alone the course of history for decades, undetected, is palpably absurd. The important thing is not that the details of this or that conspiracy theory are wrong; it is that the entire premise behind this worldview is false. 

Not everything is connected, not everything is premeditated, and many things are in fact just as they seem. 

Dorian Lynskey is the author of several books, including The Ministry of Truth: The Biography of George Orwell’s 1984 and Everything Must Go: The Stories We Tell About the End of the World. He cohosts the podcast Origin Story and co-writes the Origin Story books with Ian Dunt. 

Can “The Simpsons” really predict the future?

According to internet listicles, the animated sitcom The Simpsons has predicted the future anywhere from 17 to 55 times. 

“As you know, we’ve inherited quite a budget crunch from President Trump,” the newly sworn-in President Lisa Simpson declared way back in 2000, 17 years before the real estate mogul was inaugurated as the 45th leader of the United States. Earlier, in 1993, an episode of the show featured the “Osaka flu,” which some felt was eerily prescient of the coronavirus pandemic. And—somehow!—Simpsons writers just knew that the US Olympic curling team would beat Sweden eight whole years before they did it.

After Team USA wins, Principal Skinner’s mother gloats to the Swedish curling team, “Tell me how my ice tastes.”
THE SIMPSONS ™ & © 20TH TELEVISION

The 16th-century seer Nostradamus made 942 predictions. To date, there have been some 800 episodes of The Simpsons. How does it feel to be a showrunner turned soothsayer? What’s it like when the world combs your jokes for prophecies and thinks you knew about 9/11 four years before it happened? 




Al Jean has worked on The Simpsons on and off since 1989; he is the cartoon’s longest-serving showrunner. Here, he reflects on the conspiracy theories that have sprung from these apparent prophecies. 

When did you first start hearing rumblings about The Simpsons having predicted the future?

It definitely got huge when Donald Trump was elected president in 2016 after we “predicted” it in an episode from 2000. The original pitch for the line was Johnny Depp, and that was in for a while, but it was decided that it wasn’t as funny as Trump.

What people don’t remember is that in the year 2000, it wasn’t such a crazy name to pick, because Trump was talking about running as a Reform Party candidate. So, like a lot of our “predictions,” it’s an educated guess. I won’t comment on whether it’s a good thing that it happened, but I will say that it’s not the most illogical person you could have picked for that joke. And we did say that following him was Lisa, and now that he’s been elected again, we could still have Lisa next time—that’s my hope! 

How did it make you feel that people thought you were a prophet? 

Again, apart from the election’s impact on the free world, I would say that we were amused that we had said something that came true. Then we made a short video called “Trumptastic Voyage” in 2015 that predicted he would run in 2016, 2020, 2024, and 2028, so we’re three-quarters of the way through that arduous prediction.

But I like people thinking that I know something about the future. It’s a good reputation to have. You only need half a dozen things that were either on target or even uncanny to be considered an oracle. Or maybe we’re from the future—I’ll let you decide! 

Why do you think people are so drawn to the idea that The Simpsons is prophetic? 

Maybe it slightly satisfies a yearning people have for meaning, especially now that life feels so random.

Would you say that most of your predictions have logical explanations? 

It’s cherry-picking—there are 35 years of material. How many of the things we said came true, versus how many did not?

In 2014, we predicted Germany would win the World Cup in Brazil. It’s because we wanted a joke where the Brazilians were sad and they were singing a sad version of the “Olé, olé” song. So we had to think about who would be likely to win if Brazil lost, and Germany was the number two, so they did win, but it wasn’t the craziest prediction. In the same episode, we predicted that FIFA would be corrupt, which is a very easy prediction! So a lot of them fall under that category. 

In one scene I wrote, Marge holds a book called Curious George and the Ebola Virus—people go, “Oh my God! He predicted that!” Well, Ebola existed when I wrote the joke. I’d seen a movie about it called Outbreak. It’s like predicting the Black Death. 

But have any of your so-called “predictions” made even you pause? 

There are a couple of really bizarre coincidences. There was a brochure in a New York episode [which aired in 1997] that said “New York, $9” next to a picture of the trade towers looking like an 11. That was nuts. It still sends chills down me. The writer of that episode, Ian Maxtone-Graham, was nonplussed. He really couldn’t believe it. 


It’s not like we would’ve made that knowing what was going to come, which we didn’t. And people have advanced conspiracy theories that we’re all Ivy League writers who knew … it’s preposterous stuff that people say. There’s also a thing people do that we don’t really love, which is they fake predictions. So after something happens, they’ll concoct a Simpsons frame, and it’s not something that ever aired. [Editor’s note: People faked Simpsons screenshots seeming to predict the 2024 Baltimore bridge collapse and the 2019 Notre-Dame fire. Images from the real “Osaka flu” episode were also edited to include the word “coronavirus.”] 

How does that make you feel? Is it frustrating?

It shows you how you can really convince people of something that’s not the case. Our small denial doesn’t get as much attention. 

As far as internet conspiracies go, where would you rate the idea that The Simpsons can predict the future? 

I hope it’s harmless. I think it’s really lodged in the internet very well. I don’t think it’s disappearing anytime soon. I’m sure for the rest of my life I’ll be hearing about what a group of psychics and seers I was part of. If we really could predict that well, we’d all be retired from betting on football. Although, advice to readers: Don’t bet on football. 


Still, it is a tiny part of a trend that is alarming, which is people being unable to distinguish fact from fiction. And I have that trouble too. You read something, and your natural inclination has always been, “Well, I read it—it’s true.” And you have to really be skeptical about that. 

Can I ask you to predict a solution to all of this?

I think my only solution is: Look at your phone less and read more books.

This interview has been edited for length and clarity. 

Amelia Tait is a London-based freelance features journalist who writes about culture, trends, and unusual phenomena. 

How conspiracy theories infiltrated the doctor’s office

As anyone who has googled their symptoms and convinced themselves that they’ve got a brain tumor will attest, the internet makes it very easy to self-(mis)diagnose your health problems. And although social media and other digital forums can be a lifeline for some people looking for a diagnosis or community, when that information is wrong, it can put their well-being and even lives in danger.

Unfortunately, this modern impulse to “do your own research” became even more pronounced during the coronavirus pandemic.




We asked a number of health-care professionals about how this shifting landscape is changing their profession. They told us that they are being forced to adapt how they treat patients. It’s a wide range of experiences: Some say patients tell them they just want more information about certain treatments because they’re concerned about how effective they are. Others hear that their patients just don’t trust the powers that be. Still others say patients are rejecting evidence-based medicine altogether in favor of alternative theories they’ve come across online. 

These are their stories, in their own words.

Interviews have been edited for length and clarity.


The physician trying to set shared goals 

David Scales

Internal medicine hospitalist and assistant professor of medicine,
Weill Cornell Medical College
New York City

Every one of my colleagues has stories about patients who have rejected care, or had very peculiar perspectives on what their care should be. Sometimes that’s driven by religion. But I think what has changed is people, not necessarily coming from a religious standpoint, having very fixed beliefs that are sometimes—based on all the evidence that we have—in contradiction with their health goals. And that is a very challenging situation.

I once treated a patient with a connective tissue disease called Ehlers-Danlos syndrome. While there’s no doubt that the illness exists, there’s a lot of doubt and uncertainty over which symptoms can be attributed to Ehlers-Danlos. This means it can fall into the category of what social scientists call a “contested illness.”

Contested illnesses used to be the province of arguably fringe movements, but they have become much more prominent since the rise of social media in the mid-2010s. Patients often search for information that resonates with their experience.

This patient was very hesitant about various treatments, and it was clear she was getting her information from, I would say, suspect sources. She’d been following people online who were not necessarily trustworthy, so I sat down with her and we looked them up on Quackwatch, a site that lists health myths and misconduct. 

“She was extremely knowledgeable, and had done a lot of her own research, but she struggled to tell the difference between good and bad sources.”

She was still accepting of treatment, and she was extremely knowledgeable and had done a lot of her own research. But she struggled to tell the difference between good and bad sources, and she held fixed beliefs that overemphasized particular things—like which symptoms might be attributable to other causes.

Physicians have the tools to work with patients who are struggling with these challenges. The first is motivational interviewing, a counseling technique that was developed for people with substance-use disorders. It’s a nonjudgmental approach that uses open-ended questions to draw out people’s motivations, and to find where there’s a mismatch between their behaviors and their beliefs. It’s highly effective in treating people who are vaccine-hesitant.

Another is an approach called shared decision-making. First we work out what the patient’s goals are and then figure out a way to align those with what we know about the evidence-based way to treat them. It’s something we use for end-of-life care, too.

What’s concerning to me is that there seems to be a dynamic of patients coming in with fixed beliefs about how their illness should be diagnosed and how their symptoms should be treated, in a way that’s completely divorced from the kinds of medicine you’d find in textbooks—and that the same dynamic is starting to extend to other illnesses, too.


The therapist committed to being there when the conspiracy fever breaks 

Damien Stewart

Psychologist
Warsaw, Poland

Before covid, I hadn’t really had any clients bring conspiracy theories into my practice. But once the pandemic began, those theories went from being fun or harmless to something dangerous.

In my experience, vaccines were the topic where I first really started to see some militancy—people who were looking down the barrel of losing their jobs because they wouldn’t get vaccinated. At one point, I had an out-and-out conspiracy theorist say to me, “I might as well wear a yellow star like the Jews during the Holocaust, because I won’t get vaccinated.” 

I felt pure anger, and I reached a point in my therapeutic journey I didn’t know would ever occur—I’d found that I had a line that could be crossed by a client that I could not tolerate. I spoke in a very direct manner he probably wasn’t used to and challenged his conspiracy theory. He got very angry and hung up the call.  

It made me figure out how I was going to deal with this in the future, and to develop an approach—which was not to challenge the conspiracy theory, but to gently talk through it, to provide alternative points of view and ask questions. I try to find the therapeutic value in the information, in the conversations we’re having. My belief is—and evidence seems to show—that people believe in conspiracy theories because there’s something wrong in their life that is inexplicable, and they need something to explain what’s happening to them. And even if I have no belief in or agreement with what they’re saying, I think I need to sit here and have this conversation, because one day this person might snap out of it, and I need to be here when that happens.

As a psychologist, you have to remember that these people who believe in these things are extremely vulnerable. So my anger around these conspiracy theories has changed from being directed toward the deliverer—the person sitting in front of me saying these things—to the people driving the theories.


The emergency room doctor trying to get patients to reconnect with the evidence

Luis Aguilar Montalvan

Attending emergency medicine physician 
Queens, New York

The emergency department is essentially the pulse of what is happening in society. That’s what really attracted me to it. And I think the job of the emergency doctor, particularly amid shifting political views or beliefs about Western medicine, is to try to reconnect with someone—to create the kind of experience that primes someone to hopefully reconsider their relationship with evidence-based medicine.

When I was working in the pediatrics emergency department a few years ago, we saw a resurgence of diseases we thought we had eradicated, like measles. I typically framed it by saying to the child’s caregiver: “This is a disease we typically use vaccines for, and vaccination can prevent it in the majority of people.”

“The doctor is now more like a consultant or a customer service provider than the authority. … The power dynamic has changed.”

The sentiment among my adult patients who are reluctant to get vaccinated or take certain medications seems to stem from a mistrust of the government or “The System” rather than from anything Robert F. Kennedy Jr. says directly, for example. I’m definitely seeing more patients these days asking me what they can take, other than medication, to manage a condition or pain. I tell them that the knowledge I have is based on science, and explain the medications I’d typically give other people in their situation. I try to give them autonomy while reintroducing the idea of sticking with the evidence, and for the most part they’re appreciative and courteous.

The role of doctor has changed in recent years—there’s been a cultural change. My understanding is that back in the day, what the doctor said, the patient did. Some doctors used to shame parents who hadn’t vaccinated their kids. Now we’re shifting away from that, and the doctor is now more like a consultant or a customer service provider than the authority. I think that could be because we’ve seen a lot of bad actors in medicine, so the power dynamic has changed.  

I think if we had a more unified approach at a national level, if the government had an actual unified and transparent relationship with the population, that would set us up right. But I’m not sure we’ve ever had it.


The psychologist who supported severely mentally ill patients through the pandemic 

Michelle Sallee

Psychologist, board certified in serious mental illness psychology
Oakland, California

I’m a clinical psychologist who only works with people who have been in the hospital three or more times in the last 12 months. I do both individual therapy and a lot of group work, and several years ago during the pandemic, I wrote a 10-week program for patients about how to cope with sheltering in place, following safety guidelines, and their concerns about vaccines.

My groups were very structured around evidence-based practice, and I had rules for the groups. First, I would tell people that the goal was not to talk them out of their conspiracy theory; my goal was not to talk them into a vaccination. My goal was to provide a safe place for them to be able to talk about things that were terrifying to them. We wanted to reduce anxiety, depression, thoughts of suicide, and the need for psychiatric hospitalizations. 

Half of the group was pro–public health requirements, and their paranoia and fear for their safety centered on people who don’t get vaccinated; the other half might have been strongly opposed to anyone other than themselves deciding they needed a vaccination or a mask. Both sides were fearing for their lives—but from each other.

I wanted to make sure everybody felt heard, and it was really important to be able to talk about what they believed—like, some people felt like the government was trying to track us and even kill us—without any judgment from other people. My theory is that if you allow people to talk freely about what’s on their mind without blocking them with your own opinions or judgment, they will find their way eventually. And a lot of times that works. 

People have been stuck on their conspiracy theory, or their paranoia has been stuck on it, for a long time because they’re always fighting with people about it; everyone’s telling them that this is not true. So we would just have an open discussion about these things.

“People have been stuck on their conspiracy theory for a long time because they’re always fighting with people about it; everyone’s telling them that this is not true.”

I ran the program four times for a total of 27 people, and the thing that I remember the most was how respectful and tolerant and empathic, but still honest about their feelings and opinions, everybody was. At the end of the program, most participants reported a decrease in pandemic-related stress. Half reported a decrease in general perceived stress, and half reported no change.

I’d say vaccines come up significantly less often now, and covid doesn’t really come up anymore. But other medical illnesses do come up—patients saying, “My doctor said I need to get this surgery, but I know who they’re working for.” Everybody has their concerns, but when a person with psychosis has concerns, it becomes delusional, paranoid, and psychotic.

I’d like to see providers be given more training around severe mental illness. These are not people who just need to go to the hospital to get remedicated for a couple of days. There’s a whole life that needs to get looked at here, and they deserve that. I’d like to see more group settings with a combination of psychoeducation, evidence-based research, skills training, and process, because the research says that’s the combination that’s really important.

Editor’s note: Sallee works for a large HMO psychiatry department, and her account here is not on behalf of, endorsed by, or speaking for any larger organization.


The epidemiologist rethinking how to bridge differences in culture and community 

John Wright

Clinician and epidemiologist
Bradford, United Kingdom

I work in Bradford, the fifth-biggest city in the UK. It has a big South Asian population and high levels of deprivation. Before covid, I’d say there was growing awareness about conspiracies. But during the pandemic, I think that lockdown, isolation, fear of this unknown virus, and then the uncertainty about the future came together in a perfect storm to highlight people’s latent attraction to alternative hypotheses and conspiracies—it was fertile ground. I’ve been a National Health Service doctor for almost 40 years, and until recently, the NHS had a great reputation, with great trust, and great public support. The pandemic was the first time that I started seeing that erode.

It wasn’t just conspiracies about vaccines or new drugs, either—it was also an undermining of trust in public institutions. I remember an older woman who had come into the emergency department with covid. She was very unwell, but she just wouldn’t go into hospital despite all our efforts, because there were conspiracies going around that we were killing patients in hospital. So she went home, and I don’t know what happened to her.

The other big change in recent years has been social media and social networks, which have obviously amplified and accelerated alternative theories and conspiracies. That’s been the tinder that’s allowed these sorts of conspiracy theories to spread like wildfire. In Bradford, particularly among ethnic minority communities, there have been stronger links between people—allowing this to spread quicker—but also a more structural distrust.

Vaccination rates have fallen since the pandemic, and we’re seeing lower uptake of the meningitis and HPV vaccines in schools among South Asian families. Ultimately, this needs a bigger societal approach than individual clinicians putting needles in arms. We started a project called Born in Bradford in 2007 that’s following more than 13,000 families, including around 20,000 teenagers as they grow up. One of the biggest focuses for us is how they use social media and how it links to their mental health, so we’re asking them to donate their digital media to us so we can examine it in confidence. We’re hoping it could allow us to explore conspiracies and influences.

The challenge for the next generation of resident doctors and clinicians is: How do we encourage health literacy in young people about what’s right and what’s wrong without being paternalistic? We also need to get better at engaging with people as health advocates to counter some of the online narratives. The NHS website can’t compete with how engaging content on TikTok is.


The pediatrician who worries about the confusing public narrative on vaccines

Jessica Weisz

Pediatrician
Washington, DC

I’m an outpatient pediatrician, so I do a lot of preventative care, checkups, and sick visits, treating coughs and colds—those sorts of things. I’ve had specific training in how to support families in clinical decision-making related to vaccines. Every family wants what’s best for their child, and so supporting them is part of my job.

I don’t see specific articulation of conspiracy theories, but I do think there are more questions about vaccines, in conversations I’ve not typically had to have before. I’ve found that parents and caregivers do ask general questions about the risks and benefits of vaccines. We just try to reiterate that vaccines have been studied, that they are intentionally scheduled to protect an immature immune system when it’s the most vulnerable, and that we want everyone to be safe, healthy, and strong. That’s how we can provide protection.

“I think what’s confusing is that distress is being sown in headlines when most patients, families, and caregivers are motivated and want to be vaccinated.”

I feel that the narrative in the public space is unfairly confusing to families when over 90% of families still want their kids to be vaccinated. For the families who are not as interested, or who have questions, it typically takes multiple conversations to support their decision-making. It’s very rarely one conversation.

I think what’s confusing is that distress is being sown in headlines when most patients, families, and caregivers are motivated and want to be vaccinated. For example, some of the headlines around recent changes at the CDC make it sound like the agency is making a huge clinical change, when it’s actually not a huge departure from what people are typically doing. In my standard clinical practice, we don’t give the combined MMRV vaccine to children under four years old, and that’s been standard practice in all of the places I’ve worked on the Eastern Seaboard. [Editor’s note: In early October, the CDC updated its recommendation that young children receive the varicella vaccine separately from the combined vaccine for measles, mumps, and rubella. Many practitioners, including Weisz, already offer the shots separately.]

If you look at public surveys, pediatricians are still the most trusted [among health-care providers], and I do live in a jurisdiction with pretty strong policy about school-based vaccination. I think that people are getting information from multiple sources, but at the end of the day, in terms of both the national rates and also what I see in clinical practice, we really are seeing most families wanting vaccines.

Why it’s so hard to bust the weather control conspiracy theory

It was October 2024, and Hurricane Helene had just devastated the US Southeast. Representative Marjorie Taylor Greene of Georgia found an abstract target on which to pin the blame: “Yes they can control the weather,” she posted on X. “It’s ridiculous for anyone to lie and say it can’t be done.” 

There was no word on who “they” were, but maybe it was better that way. 

She was repeating what’s by now a pretty familiar and popular conspiracy theory: that shadowy forces are out there, wielding unknown technology to control the weather and wreak havoc on their supposed enemies. This claim, fundamentally preposterous from a scientific standpoint, has grown louder and more common in recent years. It pops up over and over when extreme weather strikes: in Dubai in April 2024, in Australia in July 2022, in the US after California floods and hurricanes like Helene and Milton. In the UK, conspiracy theorists claimed that the government had fixed the weather to be sunny and rain-free during the first covid lockdown in March 2020. Most recently, the theories spread again when disastrous floods hit central Texas this past July. The idea has even inspired some antigovernment extremists to threaten and try to destroy weather radar towers. 




But here’s the thing: While Greene and other believers are not correct, this conspiracy theory—like so many others—holds a kernel of much more modest truth behind the grandiose claims. 

Sure, there is no current way for humans to control the weather. We can’t cause major floods or redirect hurricanes or other powerful storm systems, simply because the energy involved is far too great for humans to alter significantly. 

But there are ways we can modify the weather. The key difference is the scale of what is possible. 

The most common weather modification practice is called cloud seeding, and it involves injecting small amounts of salts or other materials into clouds with the goal of juicing levels of rain or snow. This is typically done in dry areas that lack regular precipitation. Research shows that it can in fact work, though advances in technology reveal that its impact is modest—coaxing maybe 5% to 10% more moisture out of otherwise stubborn clouds.

But the fact that humans can influence weather at all gives conspiracy theorists a foothold in the truth. Add to this a spotty history of actual efforts by governments and militaries to control major storms, as well as other emerging but not-yet-deployed-at-any-scale technologies that aim to address climate change … and you can see where things get confusing. 

So while more sweeping claims of weather control are ultimately ridiculous from a scientific standpoint, they can’t be dismissed as entirely stupid.

This all helped make the conspiracy theories swirling after the recent Texas floods particularly loud and powerful. Just days earlier, 100 miles away from the epicenter of the floods, in a town called Runge, the cloud-seeding company Rainmaker had flown a single-engine plane and released about 70 grams of silver iodide into some clouds; a modest drizzle of less than half a centimeter of rain followed. But once the company saw a storm front in the forecast, it suspended its work; there was no need to seed with rain already on the way.

“We conducted an operation on July 2, totally within the scope of what we were regulatorily permitted to do,” Augustus Doricko, Rainmaker’s founder and CEO, recently told me. Still, when as much as 20 inches of rain fell soon afterward not too far away, and more than 100 people died, the conspiracy theory machine whirred into action. 

As Doricko told the Washington Post in the tragedy’s aftermath, he and his company faced “nonstop pandemonium” on social media; eventually someone even posted photos from outside Rainmaker’s office, along with its address. Doricko told me a few factors played into the pile-on, including a lack of familiarity with the specifics of cloud seeding, as well as what he called “deliberately inflammatory messaging from politicians.” Indeed, theories about Rainmaker and cloud seeding spread online via prominent figures including Greene and former national security advisor Mike Flynn.

Unfortunately, all this is happening at the same time as the warming climate is making heavy rainfall and the floods that accompany it more and more likely. “These events will become more frequent,” says Emily Yeh, a professor of geography at the University of Colorado who has examined approaches and reactions to weather modification around the world. “There is a large, vocal group of people who are willing to believe anything but climate change as the reason for Texas floods, or hurricanes.”

Worsening extremes, increasing weather modification activity, improving technology, a sometimes shady track record—the conditions are perfect for an otherwise niche conspiracy theory to spread to anyone desperate for tidy explanations of increasingly disastrous events.

Here, we break down just what’s possible and what isn’t—and address some of the more colorful reasons why people may believe things that go far beyond the facts. 

What we can do with the weather—and who is doing it

The basic concepts behind cloud seeding have been around for about 80 years, and government interest in the topic goes back even longer than that.

The primary practice involves using planes, drones, or generators on the ground to inject tiny particles of stuff, usually silver iodide, into existing clouds. The particles act as nuclei around which moisture can build up, forming ice crystals that can get heavy enough to fall out of the cloud as snow or rain.

“Weather modification is an old field; starting in the 1940s there was a lot of excitement,” says David Delene, a research professor of atmospheric sciences at the University of North Dakota and an expert on cloud seeding. A 1952 US Senate report proposing a committee to study weather modification noted that a small amount of extra rain could “produce electric power worth hundreds of thousands of dollars” and “greatly increase crop yields.” It also cited potential uses like “reducing soil erosion,” “breaking up hurricanes,” and even “cutting holes in clouds so that aircraft can operate.”

But, as Delene adds, “that excitement … was not realized.”

Through the 1980s, extensive research, often funded or conducted by Washington, yielded a much better understanding of atmospheric science and cloud physics, though it proved extremely difficult to actually demonstrate the efficacy of the technology itself. In other words, scientists learned the basic principles behind cloud seeding and understood on a theoretical level that it should work—but it was hard to tell how big an impact it was having on rainfall.

There is huge variability between one cloud and another, one storm system and another, one mountain or valley and another; for decades, the tools available to researchers did not really allow for firm conclusions on exactly how much extra moisture, if any, they were getting out of any given operation. Interest in the practice died down to a low hum by the 1990s.

But over the past couple of decades, the early excitement has returned.

Cloud seeding can enhance levels of rain and snow 

While the core technology has largely stayed the same, several projects launched in the US and abroad starting in the 2000s have combined statistical modeling with new and improved aircraft-based measurements, ground-based radar, and more to provide better answers on what results are actually achievable when seeding clouds.

“I think we’ve identified unequivocally that we can indeed modify the cloud,” says Jeff French, an associate professor and head of the University of Wyoming’s Department of Atmospheric Science, who has worked for years on the topic. But even as scientists have come to largely agree that the practice can have an impact on precipitation, they also largely recognize that the impact probably has some fairly modest upper limits—far short of massive water surges. 

“There is absolutely no evidence that cloud seeding can modify a cloud to the extent that would be needed to cause a flood,” French says. Floods require a few factors, he adds—a system with plenty of moisture available that stays localized to a certain spot for an extended period. “All of these things which cloud seeding has zero effect on,” he says. 

The technology simply operates on a different level. “Cloud seeding really is looking at making an inefficient system a little bit more efficient,” French says. 

As Delene puts it: “Originally [researchers] thought, well, we could, you know, do 50%, 100% increases in precipitation,” but “I think if you do a good program you’re not going to get more than a 10% increase.” 

Asked for his take on a theoretical limit, French was hesitant—“I don’t know if I’m ready to stick my neck out”—but agreed on “maybe 10-ish percent” as a reasonable guess.

Another cloud seeding expert, Katja Friedrich from the University of Colorado–Boulder, says that any grander potential would be obvious by this point: We wouldn’t have “spent the last 100 years debating—within the scientific community—if cloud seeding works,” she writes in an email. “It would have been easy to separate the signal (from cloud seeding) from the noise (natural precipitation).”

It can also (probably) suppress precipitation

Sometimes cloud seeding is used not to boost rain and snow but rather to try to reduce its severity—or, more specifically, to change the size of individual rain droplets or hailstones. 

One of the most prominent examples has been in parts of Canada, where hailstorms can be devastating; a 2024 event in Calgary, for instance, was the country’s second-most-expensive disaster ever, with over $2 billion in damages. 

Insurance companies in Alberta have been working together for nearly three decades on a cloud seeding program that’s aimed at reducing some of that damage. In these cases, the silver iodide or other particles are meant to act essentially as competition for other “embryos” inside the cloud, increasing the total number of hailstones and thus reducing each individual stone’s average size. 

Smaller hailstones mean less damage when they reach the ground. The insurance companies—which continue to pay for the program—say losses have been cut by 50% since the program started, though scientists aren’t quite as confident in its overall success. A 2023 study published in Atmospheric Research examined 10 years of cloud seeding efforts in the province and found that the practice did appear to reduce the potential for damage in about 60% of seeded storms—while in others, it had no effect or was even associated with increased hail (though the authors said this could have been due to natural variation).

Similar techniques are also sometimes deployed to try to improve the daily forecast just a bit. During the 2008 Olympics, for instance, China engaged in a form of cloud seeding aimed at reducing rainfall. As MIT Technology Review detailed back then, officials with the Beijing Weather Modification Office planned to use a liquid-nitrogen-based coolant that could increase the number of water droplets in a cloud while reducing their size; this can get droplets to stay aloft a little longer instead of falling out of the cloud. Though it is tough to prove that it definitively would have rained without the effort, the targeted opening ceremony did stay dry.

So, where is this happening? 

The United Nations’ World Meteorological Organization says that some form of weather modification is taking place in “more than 50 countries” and that “demand for these weather modification activities is increasing steadily due to the incidence of droughts and other calamities.”

The biggest user of cloud-seeding tech is arguably China. Following the work around the Olympics, the country announced a huge expansion of its weather modification program in 2020, claiming it would eventually run operations for agricultural relief and other functions, including hail suppression, over an area about the size of India and Algeria combined. Since then, China has occasionally announced bits of progress—including updates to weather modification aircraft and the first use of drones for artificial snow enhancement. Overall, it spends billions on the practice, with more to come.

Elsewhere, desert countries have taken an interest. In 2024, Saudi Arabia announced an expanded research program on cloud seeding—Delene, of the University of North Dakota, was part of a team that conducted experiments in various parts of that country in late 2023. Its neighbor the United Arab Emirates began “rain enhancement” activities back in 1990; this program too has faced outcry, especially after more than a typical year’s worth of rain fell in a single day in 2024, causing massive flooding. (Bloomberg recently published a story about persistent questions regarding the country’s cloud seeding program; in response to the story, French wrote in an email that the “best scientific understanding is still that cloud seeding CANNOT lead to these types of events.” Other experts we asked agreed.) 

In the US, a 2024 Government Accountability Office report on cloud seeding said that at least nine states have active programs. These are sometimes run directly by the state and sometimes contracted out through nonprofits like the South Texas Weather Modification Association to private companies, including Doricko’s Rainmaker and North Dakota–based Weather Modification. In August, Doricko told me that Rainmaker had grown to 76 employees since it launched in 2023. It now runs cloud seeding operations in Utah, Idaho, Oregon, California, and Texas, as well as forecasting services in New Mexico and Arizona. And in an answer that may further fuel the conspiracy fire, he added they are also operating in one Middle Eastern country; when I asked which one, he’d only say, “Can’t tell you.”

What we cannot do

The versions of weather modification that the conspiracy theorists envision most often—significantly altering monsoons or hurricanes or making the skies clear and sunny for weeks at a time—have so far proved impossible to carry out. But that’s not necessarily for lack of trying.

The US government attempted to alter a hurricane in 1947 as part of a program dubbed Project Cirrus. In collaboration with GE, government scientists seeded clouds with pellets of dry ice, the idea being that the falling pellets could induce supercooled liquid in the clouds to crystallize into ice. After they did this, the storm took a sharp left turn and struck the area around Savannah, Georgia. This was a significant moment for budding conspiracy theories, since a GE scientist who had been working with the government said he was “99% sure” the cyclone swerved because of their work. Other experts disagreed and showed that such storm trajectories are, in reality, perfectly possible without intervention. Perhaps unsurprisingly, public outrage and threats of lawsuits followed.

It took some time for the hubbub to die down, after which several US government agencies continued—unsuccessfully—trying to alter and weaken hurricanes with a long-running cloud seeding program called Project Stormfury. Around the same time, the US military joined the fray with Operation Popeye, essentially trying to harness weather as a weapon in the Vietnam War—engaging in cloud seeding efforts over Vietnam, Cambodia, and Laos in the late 1960s and early 1970s, with an eye toward increasing monsoon rains and bogging down the enemy. Though it was never really clear whether these efforts worked, the Nixon administration tried to deny them, going so far as to lie to the public and even to congressional committees.

More recently and less menacingly, there have been experiments with Dyn-O-Gel—a Florida company’s super-absorbent powder, intended to be dropped into storm clouds to sop up their moisture. In the early 2000s, the company carried out experiments with the stuff in thunderstorms, and it had grand plans to use it to weaken tropical cyclones. But according to one former NOAA scientist, you would need to drop almost 38,000 tons of it, requiring nearly 380 individual plane trips, in and around even a relatively small cyclone’s eyewall to really affect the storm’s strength. And then you would have to do that again an hour and a half later, and so on. Reality tends to get in the way of the biggest weather modification ideas.

Beyond trying to control storms, there are some other potential weather modification technologies out there that are either just getting started or have never taken off. Swiss researchers have tried to use powerful lasers to induce cloud formation, for example; in Australia, where climate change is imperiling the Great Barrier Reef, artificial clouds created when ship-based nozzles spray moisture into the sky have been used to try to protect the vital ecosystem. In each case, the efforts remain small, localized, and not remotely close to achieving the kinds of control the conspiracy theorists allege.

What is not weather modification—but gets lumped in with it

Further worsening weather control conspiracies is the tendency to conflate cloud seeding and other promising weather modification research with concepts such as chemtrails—a full-on conspiracist fever dream about innocuous condensation trails left by jets—and solar geoengineering, a theoretical stopgap to cool the planet that has been subject to much discussion and modeling research but has never been deployed in any large-scale way.

One controversial form of solar geoengineering, known as stratospheric aerosol injection, would involve having high-altitude jets drop tiny aerosol particles—sulfur dioxide, most likely—into the stratosphere to act essentially as tiny mirrors. They would reflect a small amount of sunlight back into space, leaving less energy to reach the ground and contribute to warming. To date, attempts to launch physical experiments in this space have been shouted down, and only tiny—though still controversial—commercial efforts have taken place. 

One can see why it gets lumped in with cloud seeding: bits of stuff, dumped into the sky, with the aim of altering what happens down below. But the aims are entirely separate; geoengineering would alter the global average temperature rather than having measurable effects on momentary cloudbursts or hailstorms. Some research has suggested that the practice could alter monsoon patterns, a significant issue given their importance to much of the world’s agriculture, but it remains a fundamentally different practice from cloud seeding.

Still, the political conversation around supposed weather control often reflects this confusion. Greene, for instance, introduced a bill in July called the Clear Skies Act, which would ban all weather modification and geoengineering activities. (Greene’s congressional office did not respond to a request for comment.) And last year, Tennessee became the first state to enact a law to prohibit the “intentional injection, release, or dispersion, by any means, of chemicals, chemical compounds, substances, or apparatus … into the atmosphere with the express purpose of affecting temperature, weather, or the intensity of the sunlight.” Florida followed suit, with Governor Ron DeSantis signing SB 56 into law in June of this year for the same stated purpose.

Also this year, lawmakers in more than 20 other states have proposed some version of a ban on weather modification, often lumping it in with geoengineering, even though caution on the latter is more widely accepted or endorsed. “It’s not a conspiracy theory,” one Pennsylvania lawmaker who cosponsored a similar bill told NBC News. “All you have to do is look up.”

Oddly enough, as Yeh of the University of Colorado points out, the places where bans have passed are states where weather modification isn’t really happening. “In a way, it’s easy for them to ban it, because, you know, nothing actually has to be done,” she says. In general, neither Florida nor Tennessee—nor any other part of the Southeast—needs any help finding rain. Basically, all weather modification activity in the US happens in the drier areas west of the Mississippi. 

Finding a culprit

Doricko told me that in the wake of the Texas disaster, he has seen more people become willing to learn about the true capabilities of cloud seeding and move past the more sinister theories about it. 

I asked him, though, about some of his company’s flashier branding: Until recently, visitors to the Rainmaker website were greeted right up top with the slogan “Making Earth Habitable.” Might this level of hype contribute to public misunderstanding or fear? 

He said he is indeed aware that Earth is, currently, habitable, and called the slogan a “tongue-in-cheek, deliberately provocative statement.” Still, in contrast to the academics who seem more comfortable acknowledging weather modification’s limits, he has continued to tout its revolutionary potential. “If we don’t produce more water, then a lot of the Earth will become less habitable,” he said. “By producing more water via cloud seeding, we’re helping to conserve the ecosystems that do currently exist, that are at risk of collapse.” 

While other experts cited that 10% figure as a likely upper limit of cloud seeding’s effectiveness, Doricko said they could eventually approach 20%, though that might be years away. “Is it literally magic? Like, can I snap my fingers and turn the Sahara green? No,” he said. “But can it help make a greener, verdant, and abundant world? Yeah, absolutely.” 

It’s not all that hard to see why people still cling to magical thinking here. The changing climate is, after all, offering up what’s essentially weaponized weather, only with a much broader, longer-term mechanism behind it. There is no single sinister agency or company with its finger on the trigger, though it can be tempting to look for one; rather, we just have an atmosphere capable of holding more moisture and dropping it onto ill-prepared communities, and many of the people in power are doing little to mitigate the impacts.

“Governments are not doing a good job of responding to the climate crisis; they are often captured by fossil-fuel interests, which drive policy, and they can be slow and ineffective when responding to disasters,” Naomi Smith, a lecturer in sociology at the University of the Sunshine Coast in Australia who has written about conspiracy theories and weather events, writes in an email. “It’s hard to hold all this complexity, and conspiracy theorizing is one way of making it intelligible and understandable.”  

“Conspiracy theories give us a ‘big bad’ to point the finger at, someone to blame and a place to put our feelings of anger, despair, and grief,” she writes. “It’s much less satisfying to yell at the weather, or to engage in the sustained collective action we actually need to tackle climate change.”

The sinister “they” in Greene’s accusations is, in other words, a far easier target than the real culprit. 

Dave Levitan is an independent journalist focused on science, politics, and policy. Find his work at davelevitan.com and subscribe to his newsletter at gravityisgone.com.

Chatbots are surprisingly effective at debunking conspiracy theories

It’s become a truism that facts alone don’t change people’s minds. Perhaps nowhere is this more clear than when it comes to conspiracy theories: Many people believe that you can’t talk conspiracists out of their beliefs. 

But that’s not necessarily true. It turns out that many conspiracy believers do respond to evidence and arguments—information that is now easy to deliver in the form of a tailored conversation with an AI chatbot.

In research we published in the journal Science this year, we had over 2,000 conspiracy believers engage in a roughly eight-minute conversation with DebunkBot, a model we built on top of OpenAI’s GPT-4 Turbo (the most up-to-date GPT model at that time). Participants began by writing out, in their own words, a conspiracy theory that they believed and the evidence that made the theory compelling to them. Then we instructed the AI model to persuade the user to stop believing in that conspiracy and adopt a less conspiratorial view of the world. A three-round back-and-forth text chat with the AI model (lasting 8.4 minutes on average) led to a 20% decrease in participants’ confidence in the belief, and about one in four participants—all of whom believed the conspiracy theory beforehand—indicated that they did not believe it after the conversation. This effect held true for both classic conspiracies (think the JFK assassination or the moon landing hoax) and more contemporary politically charged ones (like those related to the 2020 election and covid-19).


This is good news, given the outsize role that unfounded conspiracy theories play in today’s political landscape. So while there are widespread and legitimate concerns that generative AI is a potent tool for spreading disinformation, our work shows that it can also be part of the solution. 

Even people who began the conversation absolutely certain that their conspiracy was true, or who indicated that it was highly important to their personal worldview, showed marked decreases in belief. Remarkably, the effects were very durable; we followed up with participants two months later and saw just as big a reduction in conspiracy belief as we did immediately after the conversations. 

Our experiments indicate that many believers are relatively rational but misinformed, and getting them timely, accurate facts can have a big impact. Conspiracy theories can make sense to reasonable people who have simply never heard clear, non-conspiratorial explanations for the events they’re fixated on. This may seem surprising. But many conspiratorial claims, while wrong, seem reasonable on the surface and require specialized, esoteric knowledge to evaluate and debunk. 

For example, 9/11 deniers often point to the claim that jet fuel doesn’t burn hot enough to melt steel as evidence that airplanes were not responsible for bringing down the Twin Towers—but the chatbot responds by pointing out that although this is true, the American Institute of Steel Construction says jet fuel does burn hot enough to reduce the strength of steel by over 50%, which is more than enough to cause such towers to collapse. 

Although we have greater access to factual information than ever before, it is extremely difficult to search that vast corpus of knowledge efficiently. Finding the truth that way requires knowing what to google—or who to listen to—and being sufficiently motivated to seek out conflicting information. There are large time and skill barriers to conducting such a search every time we hear a new claim, and so it’s easy to take conspiratorial content you stumble upon at face value. And most would-be debunkers at the Thanksgiving table make elementary mistakes that AI avoids: Do you know the melting point and tensile strength of steel offhand? And when your relative calls you an idiot while trying to correct you, are you able to maintain your composure? 

With enough effort, humans would almost certainly be able to research and deliver facts like the AI in our experiments. And in a follow-up experiment, we found that the AI debunking was just as effective if we told participants they were talking to an expert rather than an AI. So it’s not that the debunking effect is AI-specific. Generally speaking, facts and evidence delivered by humans would also work. But it would require a lot of time and concentration for a human to come up with those facts. Generative AI can do the cognitive labor of fact-checking and rebutting conspiracy claims much more efficiently. 

In another large follow-up experiment, we found that what drove the debunking effect was specifically the facts and evidence the model provided: Factors like letting people know the chatbot was going to try to talk them out of their beliefs didn’t reduce its efficacy, whereas telling the model to try to persuade its chat partner without using facts and evidence totally eliminated the effect. 

Although the foibles and hallucinations of these models are well documented, our results suggest that debunking efforts are widespread enough on the internet to keep the conspiracy-focused conversations roughly accurate. When we hired a professional fact-checker to evaluate GPT-4’s claims, they found that over 99% of the claims were rated as true (and not politically biased). Also, in the few cases where participants named conspiracies that turned out to be true (like MK Ultra, the CIA’s human experimentation program from the 1950s), the AI chatbot confirmed their accurate belief rather than erroneously talking them out of it.

To date, largely by necessity, interventions to combat conspiracy theorizing have been mainly prophylactic—aiming to prevent people from going down the rabbit hole rather than trying to pull them back out. Now, thanks to advances in generative AI, we have a tool that can change conspiracists’ minds using evidence. 

Bots prompted to debunk conspiracy theories could be deployed on social media platforms to engage with those who share conspiratorial content—including other AI chatbots that spread conspiracies. Google could also link debunking AI models to search engines to provide factual answers to conspiracy-related queries. And instead of arguing with your conspiratorial uncle over the dinner table, you could just pass him your phone and have him talk to AI. 

Of course, there are much deeper implications here for how we as humans make sense of the world around us. It is widely argued that we now live in a “post-truth” world, where polarization and politics have eclipsed facts and evidence. By that account, our passions trump truth, logic-based reasoning is passé, and the only way to effectively change people’s minds is via psychological tactics like presenting compelling personal narratives or changing perceptions of the social norm. If so, the typical, discourse-based work of living together in a democracy is fruitless.

But facts aren’t dead. Our findings about conspiracy theories are the latest—and perhaps most extreme—in an emerging body of research demonstrating the persuasive power of facts and evidence. For example, while it was once believed that correcting falsehoods that align with one’s politics would just cause people to dig in and believe them even more, this idea of a “backfire” has itself been debunked: Many studies consistently find that corrections and warning labels reduce belief in, and sharing of, falsehoods—even among those who most distrust the fact-checkers making the corrections. Similarly, evidence-based arguments can change partisans’ minds on political issues, even when they are actively reminded that the argument goes against their party leader’s position. And simply reminding people to think about whether content is accurate before they share it can substantially reduce the spread of misinformation. 

And if facts aren’t dead, then there’s hope for democracy—though this arguably requires a consensus set of facts from which rival factions can work. There is indeed widespread partisan disagreement on basic facts, and a disturbing level of belief in conspiracy theories. Yet this doesn’t necessarily mean our minds are inescapably warped by our politics and identities. When faced with evidence—even inconvenient or uncomfortable evidence—many people do shift their thinking in response. And so if it’s possible to disseminate accurate information widely enough, perhaps with the help of AI, we may be able to reestablish the factual common ground that is missing from society today.

You can try our debunking bot yourself at debunkbot.com.

Thomas Costello is an assistant professor in social and decision sciences at Carnegie Mellon University. His research integrates psychology, political science, and human-computer interaction to examine where our viewpoints come from, how they differ from person to person, and why they change—as well as the sweeping impacts of artificial intelligence on these processes.

Gordon Pennycook is the Dorothy and Ariz Mehta Faculty Leadership Fellow and associate professor of psychology at Cornell University. He examines the causes and consequences of analytic reasoning, exploring how intuitive versus deliberative thinking shapes decision-making to understand errors underlying issues such as climate inaction, health behaviors, and political polarization.

David Rand is a professor of information science, marketing and management communication, and psychology at Cornell University. He uses approaches from computational social science and cognitive science to explore how human-AI dialogue can correct inaccurate beliefs, why people share falsehoods, and how to reduce political polarization and promote cooperation.

Why do so many people think the Fruit of the Loom logo had a cornucopia?

There is a shirt currently listed on eBay for $2,128.79. It was not designed by Versace or Dior, nor spun from the world’s finest silk. In fact, a tag proudly declares, “100% cotton made in Myanmar”—but it’s a second tag, just below that one, that makes this blue button-down so expensive. 

“I looked at it and I was like, Wow, this is cool,” says Brooke Hermann, the 30-year-old Kentucky-based reseller who bought the top for $1 at a secondhand sale in 2024. “This doesn’t look like any other Fruit of the Loom tag I’ve ever seen.”

Quick question: Does the Fruit of the Loom logo feature a cornucopia? 

Many of us have been wearing the casualwear company’s T-shirts and underpants for decades, and yet the question of whether there is a woven brown horn of plenty on the logo is surprisingly contentious. According to a 2022 poll by the research company YouGov, 55% of Americans believe the logo does include a cornucopia, 25% are unsure, and only 21% are confident that it doesn’t, even though this last group is correct. According to a 2023 post from the company, the Fruit of the Loom logo does not include—and, according to Snopes, has never included—a horn of plenty. (MIT Technology Review could not reach Fruit of the Loom for comment.)


Maybe you’ve come across this fact before, via an internet meme that made you gasp, shrug, or scratch your head. There’s a specific name for what’s happening here: Those who believe the logo used to include the cornucopia are experiencing the “Mandela effect,” or collective false memory, so called because a number of people misremember that Nelson Mandela died in prison. I helped popularize the phenomenon in a viral 2016 New Statesman article about a movie that doesn’t actually exist, and in the time since it’s become something of a household term; TV shows from Saturday Night Live to Black Mirror to The X-Files have explored the Mandela effect.

But whether you remember the brown horn, incorrectly recall Darth Vader saying, “Luke, I am your father,” or believe that a popular children’s book was spelled The Berenstein Bears, you’ve probably moved on with your life. Google searches for “Mandela effect” have plummeted from 2016 highs, and Hermann has had zero bids on the shirt she posted last year—even though, at least to her eyes, it features a cornucopia on the tag. “No one’s really offered anything, and no one’s said anything about it,” she says, “which to me is kind of crazy.” 

And yet while many find it easy to let their unconfirmable beliefs go, others have spent the better part of a decade seeking answers—and vindication. The Mandela effect subreddit regularly draws more than 170,000 visitors a week and averages over 1,000 comments a day. While a fair share of these commenters are skeptics, plenty more are dedicated believers who are not satisfied with the prevailing explanation that human memory is fallible and instead invest their time in bringing the truth—whatever exactly it may be—to light. 

“I’ve been a bit ostracized from my family ever since I started pushing this thing nine years ago,” says a 51-year-old Massachusetts-based Fruit of the Loom truther who asked to go by the name AJ Booras. “I’m not inclined to simply let this phenomenon fall by the proverbial wayside, even if I’m the last one standing.” 

Some online believe in a fairly straightforward conspiracy: They want Fruit of the Loom to confess that it’s “gaslighting” customers and used to have a cornucopia on its tags. Others speculate that the answer lies in quantum physics: If—as the astrophysicist Neil deGrasse Tyson has said—there’s “better than 50-50 odds” that we’re living in a simulation, then might there be some sort of glitch, lag, or failed software update that means some people see and remember the world differently from others?

“The scientific community isn’t really looking that hard at it—and if they are, they’re always framing it as a memory thing,” says AJ. “It’s a hard barrier to make any headway on.” This is why, AJ says, he’s become “addicted” to researching the phenomenon: “It’s a personal quest for vindication.” 

Will anyone ever believe these believers? There are two options for those who think the Fruit of the Loom logo once had a cornucopia: accept that your memory is wrong, or think that the world is. What makes some people happy with the simple explanation and others determined to seek the more complicated one? 

“The bridge between perception and memory” 

There’s nothing quite as disconcerting as when memory and reality conflict. After all, what is reality—or at least your reality—if not your memory? This is why it can be so satisfying to find concrete evidence that you are irrefutably correct: Here’s an old photo that proves Dad did come on the ’09 trip to Florida and your sister is foolish and wrong. 

In the Mandela effect community, evidence implying that the world used to be different is called “residue.” There is abundant residue suggesting that the Fruit of the Loom logo once had a cornucopia. 

In the 2006 animated film The Ant Bully, a pair of parodical “Fruit of the Loin” underwear is drawn with a cornucopia on the tag. A similar gag in a 2012 episode of South Park sees a fake clothing brand named “Cornucopia.” In 1973, when the jazz flutist Frank Wess released an album called Flute of the Loom, the cover showed fruit pouring out of a cornucopia-shaped flute. When allegedly tracked down by Redditors, the illustrator reportedly said the clothing logo had inspired the design: “Why the hell else would I have used a cornucopia?” 

On top of that, numerous newspaper and magazine articles written from the 1970s to the early 2000s reference the horn of plenty, as do a short play first performed in 1968 and a novel, The Brothers K, published in 1992. New residue is still being discovered: In April 2025, a TikToker shared an old ’90s trivia game in which clues about brands are listed on cards. The card for Fruit of the Loom includes the words “underwear,” “apples and grapes,” and “cornucopia.” 

How can all these people—animators, illustrators, journalists, and writers—have made the same mistake? When I reached out to the author of The Brothers K, David James Duncan, he was adamant that there was no mistake: “My inspiration was the Fruit of the Loom boxer shorts I owned at the time,” he said via email. “I changed nothing in describing the boxers, and yes, they did have a Fruit of the Loom cornucopia on the label in the back of the shorts.” 

Conversely, when I spoke with Billy Cox, a journalist who referenced the cornucopia in a 1994 article in Florida Today, he was less confident. “I have no idea what fueled my initial assumption about the cornucopia. Zero. Zilch-o. Nada,” said Cox, also via email. But he’s prepared to admit that he may have been careless in his reporting: “Even if the internet had been available back then, I doubt I would’ve double-checked the logo’s history.”

It’s an interesting thought: Most of the articles referencing the cornucopia are from a period—the ’70s through the ’90s—when journalists wouldn’t have been able to quickly google the logo. But why would they all misremember it the exact same way?

Wilma Bainbridge is an associate psychology professor at the University of Chicago who researches what she calls “the bridge between perception and memory”; she got her PhD in brain and cognitive sciences from MIT in 2016. 

Bainbridge herself first came across the Mandela effect on social media—she was “wowed” when confronted with the true spelling of the Berenstain Bears in the famous American children’s books. In 2022, she published a scientific study on visual Mandela effects and ultimately found that there is consistency in what people misremember. “People’s memories are surprisingly predictable,” she says. 

The husband-and-wife team of Stanley and Janice Berenstain wrote and illustrated the popular children’s books. More than 300 titles bear the family name.

In one experiment in the study, she found that people who aren’t very familiar with an image can share the same false memories as those who claim to be highly familiar. For example, some Mandela effect experiencers believe that the Monopoly man wore a monocle. In Bainbridge’s study, even people who didn’t know the character well sometimes drew the monocle when they were shown the Monopoly man and were later asked to draw him; this means the mistake was based on recall, not recognition, and could suggest that there’s something intrinsic to certain images that encourages memory errors. 

Scientists have long demonstrated that human memory is inherently fallible. In 1996, psychologists asked people whether they had watched news footage of the 1992 Bijlmer plane crash in Amsterdam, and more than 60% of the participants said yes—even though no recording of the crash exists. Other studies have shown that our memories can be corrupted by our peers and that false memories can be contagious. Arguably, the internet has caused memory contagion when it comes to the Mandela effect: Comparatively few people googled “Fruit of the Loom cornucopia” between 2004 and 2017; searches grew more common after a Redditor pointed out what was believed to be the first piece of “residue” in 2016, and spiked dramatically when a TikTok video on the phenomenon was posted in 2023 (it has since earned over 5 million views). 

“Some people make things go viral because they want to believe it,” says Don, a 61-year-old American who has been moderating the Mandela effect subreddit since 2017. (He asked to be identified only by his first name to protect his privacy.) “People want to be part of the experience.”

Still, Bainbridge’s study didn’t land on one definitive reason for the Mandela effect. “I was surprised to find there was no singular explanation,” she says. 

Using a method of monitoring cursor movement that’s analogous to eye-tracking technology, the academic tested whether people made memory errors because they didn’t pay attention to an image or looked at only certain parts of it. She found this wasn’t the case. 

Could it be, then, that people simply fill in the blanks of their memory with archetypes—we remember the Monopoly man with a monocle because we associate the eyepiece with rich old men? 

Bainbridge has found that this explanation—known as the “schema theory”—cannot fully explain the Mandela effect either. In one of her experiments, participants were asked to select the correct Fruit of the Loom logo from three images: one without a cornucopia, one with a cornucopia, and one with a plate. Even though we see fruit on plates far more often than we see it inside cornucopias, more participants selected the horn of plenty than the crockery. 

Bainbridge is drawn to the idea that some images simply cause more false memories than others. “We think the underlying cause will not likely be a single feature—e.g., attractiveness, color—but how these features work together in relation to things already stored in our memory,” she says. “But this work is still in its early stages, so we don’t know exactly what that combination is like.” 

Believers like AJ just aren’t convinced.

“In simulation, anything can happen”

“A lot of people remember looking at this unfamiliar object on their underwear tag,” says AJ, “and asking a parent, ‘Is that a loom?’ and the parents saying, ‘No! That’s what we call a cornucopia.’” 

When he was growing up in the ’70s and ’80s, AJ wore Fruit of the Loom underwear and regularly folded laundry with his mother. “You stack up enough underwear, and you’re seeing that logo over and over and over again,” he says.

As a newly fledged adult around the late ’90s, AJ had to go to the store and buy his own underwear for the first time. “I noticed,” he says, “that the logo had changed, and it was just a pile of fruit.” 

Unperturbed, AJ assumed the company had just rebranded—he didn’t worry about it too much until almost two decades later, when he came across the Mandela effect online and realized the consensus was that there had never been a cornucopia. “We call it the wave of 2016 in the Mandela effect community—it was this huge rush of many, many effects that were being noticed,” he says. 

The first time he heard about the Mandela effect, AJ says, he “actually swooned” because of an “overwhelming existential dread that something was dreadfully wrong with reality itself.” 

Today, AJ believes in numerous instances of the Mandela effect, all of which have been shown to be incorrect but nonetheless have robust believer communities online: that the pizza roll brand used to be called Tostino’s, not Totino’s; that Australia has moved on the world map; that the show Sex and the City was Sex in the City; that Froot Loops cereal was formerly Fruit Loops; that human organs have shifted positions; that the sun changed color from yellow to white …

For him, each is just one part of a larger problem he can’t explain about the universe. At first, AJ says, he sought answers by researching memory science and psychology, but he was left unsatisfied. So instead, he looked to quantum mechanics and metaphysics, specifically ontology—the study of reality. 

In 2003, the philosopher Nick Bostrom—famous for his theories on the threat of AI “superintelligence”—posited that humanity may be living inside a simulation. Almost two decades later, the astronomer David Kipping performed some calculations and put the odds at 50-50. “In simulation, anything can happen,” AJ says. “You could have different servers—one server hasn’t been updated, some people are seeing one version, some people are seeing the other.” It is also possible, AJ argues, that we exist in a multiverse—an idea first floated by the physicist Hugh Everett III in the 1950s. If people are somehow traveling between these parallel universes, then they may have memories from different worlds. Both of these theories are recurrent in the Mandela effect community online.  

And yet AJ doesn’t find these explanations entirely fulfilling: “If we jumped universes, why would there be residue?” Instead, he’s been diving into a combination of the theoretical physicist John Archibald Wheeler’s concept of the Participatory Anthropic Principle (PAP)—which suggests, controversially, that the act of observation creates reality—and the Nobel Prize–winning physicist Eugene Wigner’s “friend” experiment, which theorized that two observers can experience two different realities. AJ believes that physicists’ own work may be affecting the universe: “It’s almost like you’re changing the parameters of reality itself by digging deeper.” 

There is still so much that the experts themselves can’t explain about quantum physics, so it’s no wonder that laypeople get confused. The internet offers myriad rabbit holes to go down, some of them legitimate and some of them less so. Things are complicated further when YouTubers and internet commenters who aren’t well versed in the science take specific, highly complex theories and experiments and try to apply them to other phenomena, even if there is no concrete evidence they’re related. So I set about emailing physicists, simply to see whether they believe it might be remotely possible that quantum physics could, in fact, explain the Mandela effect.

Numerous academics replied telling me they had nothing to say on the topic; Bostrom’s office said he was unavailable. I asked the theoretical physicist Carlo Rovelli—who has been labeled one of the world’s 50 top thinkers—whether he has any thoughts about Mandela effect believers’ quantum-physics-related theories. “Yes, definitely,” he replied. “They are all total bullshit! There are few things about which I am totally convinced. This is one.” 

I contacted the University of Oxford physicist David Deutsch—often called the “father of quantum computing”—and listed the theories believers think may explain the Mandela effect, including parallel universes, simulations, the holographic principle, PAP, and Wigner’s friend idea. “Considered as explanations of the Mandela effect, none of those follow from quantum theory, and none of them constitute a rational speculation beyond it,” he said. Johns Hopkins University professor and physicist Sean M. Carroll concurs: “I cannot imagine how any of those phenomena could be in any way related to the Mandela effect.” 

Melvin Vopson, an associate professor of physics at the University of Portsmouth who has conducted research on the simulation theory, admits he has experienced the Mandela effect himself but doesn’t attribute the phenomenon to glitches in the simulation: That’d be a “cheap explanation,” he says.

Nevertheless, scientists waving away these explanations could have a detrimental effect: In the absence of expert engagement, there are plenty of people online who can bolster believers’ views. AJ is not surprised by these responses: “I just don’t think that physicists have given it a real hard look,” he says, “because they’re already certain that it’s explainable otherwise.” 

Bainbridge, for her part, thinks her study at least disproves the theory that we’ve been jumping between different universes. When she took those study participants who weren’t familiar with certain logos and mascots and showed them the correct version for the first time, she tested their memory by asking them to redraw the image only moments later, and still some drew the Mandela effect version. 

“It’s unlikely we jumped dimensions during that short time span,” she says, “so it seems like the Mandela effect is something more about the shortcuts our memories take, rather than something about parallel worlds.” She hopes her future work will help further elucidate these “shortcuts,” and she is even planning to see whether she can create her own Mandela effects. 

Leaving it all behind

One of the most well-documented memory phenomena is the “misinformation effect.” Since the 1970s, scientists have demonstrated that exposing people to misinformation after an event can alter their memories. If people are asked leading questions—say, “Did you see the broken headlight?” rather than “Did you see a broken headlight?” after witnessing a crash—they are more likely to report seeing something they didn’t. But on the flip side, warning witnesses about the threat of misinformation before they recount an event can increase the accuracy of their memory. 

In short, the way information is presented to us is crucial. This is why it was pretty poor form for YouGov to poll Americans about the Fruit of the Loom logo with a question that was easily open to misinterpretation: “Does the logo for the clothing company Fruit of the Loom have a cornucopia of fruit in it, or not?” It is unclear here which part is in question—the cornucopia or the fruit. But it was also poor form that I didn’t mention this until now—nor did I mention that Neil deGrasse Tyson later changed his mind about simulation theory and is now “convinced” that we do not live in a simulation. 

It was also probably pretty misleading of me to start this article with a link-free reference to Brooke Hermann’s eBay-listed shirt, which she believes features a cornucopia but to my eyes clearly features brown leaves. From the ’60s to the early ’00s, the Fruit of the Loom logo did include brown leaves behind the fruit; they were recolored green in 2003. When I started writing this article, I was certain that my Fruit of the Loom childhood PE kit had a cornucopia on the tag. I’m now convinced that 10-year-old me simply wasn’t looking that closely and thus I’ve misremembered the leaves as a horn. After all, even when I look at the current logo on shirts listed on the Fruit of the Loom website, my eyes still seem to want to make this mistake: From far away, I interpret the crowded cluster as a cornucopia.


The current Fruit of the Loom logo (left) and the version submitted in the company’s 1973 trademark application. Neither contains a cornucopia.

It’s as easy as that to convince me my memory was wrong—whether that’s a good thing or a bad thing, I’ll let you decide. I’m clearly at one end of some sort of spectrum here. Other Mandela effect experiencers may believe something stranger is going on but are still prepared to happily get on with their lives. Larry Jung is a thirtysomething musician who was living in New Jersey when he spent hours hunting for Fruit of the Loom residue; in 2019, he even purchased a copy of a 1969 book for around $20 so he could see the cornucopia reference within it. “I did obsess about it for a while in the beginning,” says Jung, who remembers the cornucopia because he mistook it for a croissant as a child. “But then—I don’t know, I just came to this acceptance phase. I just didn’t want it to affect my life in a big way. I didn’t want to bring it up in every discussion.” 

Or, as another erstwhile Mandela effect researcher puts it: “If I just so happened to be living in a computer simulation, and that was my entire reality, what can I do about that?”

Don, the Mandela effect subreddit moderator, has seen waves of people move on while he has stayed active in the community for the better part of a decade (I first spoke to him for my New Statesman article in 2016). “I’ve recruited a lot of moderators, and they come and go pretty quick,” he says. 

Don says he experiences “more than average but less than all” examples of the Mandela effect (he too remembers the cornucopia on his childhood underpants). “They find solutions that they find acceptable,” he says of some people who leave the sub. For others, the cognitive dissonance becomes “too much,” he adds. “It interferes with their ability to function.” Don theorizes that the people who stay are people who experienced the Mandela effect organically, “in the wild,” he says, “before it was a well-known phenomenon”—arguably the antithesis of people jumping on an internet bandwagon.

“I compare it to someone who saw Bigfoot. If you were in the woods and Bigfoot walks into your campground and he scares you and your kids, eight feet tall, smells terrible—you’re not going to forget it.” 

AJ concurs that “anchor memories” like these are key. And yet Jung has a croissant-based anchor memory, and I myself have similar anecdotes about false memories I’ve found easy to let go. Psychologically, why does the Mandela effect affect people in such vastly different ways? Why do some people hold onto their memories while others don’t? 

“We know that most people’s intuitions about memory are wrong; they think of it as an accurate recording device when in fact memory is a reconstructive apparatus that is presenting us with recollections based on very fragmented snippets,” says Stephan Lewandowsky, a cognitive psychologist at the University of Bristol who writes computer simulations of memory to better understand how the mind works. “So most people will have an exaggerated sense of the accuracy of their own memories and will refuse to accept that they could be completely false.”

In recent years, Lewandowsky has studied misinformation and has coauthored The Conspiracy Theory Handbook, and he says that while some people move on from their conspiracy theories, others turn them into their identity. “They will enter a state,” he says, “in which they are extremely difficult to extract from their rabbit hole.” People who become highly committed to conspiracies “tend to be disgruntled and feel left behind by society and are extremely distrustful,” Lewandowsky adds. “Those people also tend to be high in narcissism and often exhibit paranoid thoughts.” 

Shauna Bowes is an assistant psychology professor at the University of Alabama who researches conspiratorial ideation, misinformation, and intellectual humility. Her work has found that people with this last quality—the tendency to acknowledge the limits of your own views—are less likely to believe misinformation. 

“Belief perseverance is when you double down on your beliefs, even if evidence contradicts them,” Bowes says. “There are many reasons why some are willing to change their minds while others do not. Personality traits, childhood experiences, social networks, cognitive styles, and more determine these processes. What we do know is that people who tend to be more cognitively flexible, humble, and generally open-minded also tend to change their minds more in response to evidence.” 

And yet when it comes to the Mandela effect, the question of “evidence” is a complicated one—after all, there’s plenty of cornucopia residue. Part of the trouble with understanding people’s responses to the Mandela effect is that the phenomenon can’t neatly be categorized as misinformation or conspiracy theory. 

Lewandowsky believes the Mandela effect is primarily a social phenomenon. “My take on it is that if many people believe that an event has happened, that becomes a social norm that other people can support by sharing that belief. Social norms are very powerful,” he says, adding that the internet “provides a great amplification machine.”

Creating reality

AJ tells me that even though skeptics have called him “so many derogatory names over the last nine years,” he remains passionate about spreading word of the Mandela effect. He wants to “push a dialogue” so that believers don’t feel afraid to speak out. Mostly, AJ wants scientists to look at the qualitative side of things: the hundreds of autobiographical accounts by people with very specific memories of things that are now officially said not to have happened the way they recall. He wishes scientists would speak to experiencers directly, the same way the once-skeptical astronomer Josef Allen Hynek spoke to UFO eyewitnesses in the mid-20th century. 

“Once upon a time, the UFO phenomenon was considered to be fringe. And now we have multiple world governments that have acknowledged that there is stuff flying around that we don’t know what it is,” AJ says. Overall, “the goal is to get the scientific establishment to at least consider the other side.” 

Of course, AJ is not alone, even if Mandela effect believers do exist on a spectrum. The community holds space for people having fun with the phenomenon, for those committed to just a single example of the effect, for others who dive in on a short-term basis before moving on—and for those who have run the International Mandela Effect Conference in locations across the US since 2019. 

And there’s Don, who is still moderating the subreddit after all these years and has seen believers of every stripe. “The thing that keeps me going is I want to make sure that it’s still here,” he says. “There’s a lot of history here, and I want to keep it around for that reason.” 

Personally, he believes there may be different explanations for different examples of the Mandela effect. It could be as simple as people confusing Fruit of the Loom with a knockoff brand, he says, or as complicated as Fruit of the Loom lying about the cornucopia as free advertising (though he also notes that no one has ever actually discovered an old Fruit of the Loom label with a cornucopia on it). 

Don also wonders whether some people might be guinea pigs caught up in longitudinal studies in which psychologists play with subjects’ memories. He’s considered, too, that nefarious tech bros could be digitally manipulating and deleting data on the internet as a form of social engineering, a possibility he compares to the Cambridge Analytica affair. “It’s something that’s possible. I’m not saying that’s what’s happening,” he says. “But this is the kind of thing that could be being done.” (To be clear: Don shared no evidence that this is being done.) 

Perhaps Don’s most interesting belief is that the Mandela effect is not a phenomenon but an “event”—one that may now in fact be over. In his opinion, there have been no significant or “persuasive” examples discovered since 2019 (when Redditors found that the character Baloo in Disney’s 1967 The Jungle Book never wore a coconut bra). Don believes the community peaked between 2015 and 2018, when people were making new discoveries regularly. “There was a period of time where it was an actual event, like this was an ongoing event,” he says. He compares the whole thing to medieval manias in which people danced themselves to death: “I think the fervor with which the Mandela effect spread will likely be compared to dancing plagues by future generations.”

Toward the end of my second of three calls with AJ, he asked me if I’d also experienced the Mandela effect. I explained that historically I experienced it with Fruit of the Loom, but I’m prepared to believe it was just a false memory. “Yeah, that’s fair,” he said. But I started to wonder if it is. When I wrote about the Mandela effect in 2016, I wanted to write an exciting story with twists and turns, which arguably played up the mystery. Am I responsible for making some people question reality? What are the consequences of writing another article, the one you’re currently reading? How much am I creating reality by observing it?

To be “fair” to AJ, should I tell you that one of memory science’s most famous studies has recently come under fire, and some academics now believe that people aren’t as susceptible to false memories as we once thought? 

Or to be “fair” to you, the reader, should I stress that despite my own desire to believe in the mysteries of the universe, I’ve come away thinking that the biggest mystery of all is the human mind? 

Amelia Tait is a London-based freelance features journalist who writes about culture, trends, and unusual phenomena. 

What it’s like to be in the middle of a conspiracy theory (according to a conspiracy theory expert)

On a gloomy Saturday morning this past May, a few months after entire blocks of Altadena, California, were destroyed by wildfires, several dozen survivors met at a local church to vent their built-up frustration, anger, blame, and anguish. As I sat there listening to one horror story after another, I almost felt sorry for the very polite consultants who were being paid to sit there, and who couldn’t do a thing about what they were hearing.

Hosted by a third-party arbiter at the behest of Los Angeles County, the gathering was a listening session in which survivors could “share their experiences with emergency alerts and evacuations” for a report on how the response to the Eaton Fire months earlier had succeeded and failed. 

It didn’t take long to see just how much failure there had been.


After a small fire started in the bone-dry brush of Pasadena’s Eaton Canyon early in the evening of Tuesday, January 7, 2025, the raging Santa Ana winds blew its embers into nearby Altadena, the historically Black and middle-class town just to the north. By Wednesday morning, much of it was burning. Its residents spent the night making frantic, desperate scrambles to grab whatever they could and get to safety. 

In the aftermath, many claimed that they received no warning to evacuate, saw no first responders battling the blazes, and had little interaction with official personnel. Most were simply left to fend for themselves. 

Making matters worse, while no place is “good” for a wildfire, Altadena was especially vulnerable. It was densely packed with 100-year-old wooden homes, many of which were decades behind on the code upgrades that would have better protected them. It was full of trees and other plants that had dried out during the rain-free winter. Few residents or officials were prepared for the seemingly remote possibility that the fires that often broke out in the mountains nearby would jump into town. As a result, resources were strained to the breaking point, and many homes simply burned freely.

So the people packed into the room that morning had a lot to be angry about. They unloaded their own personal ordeals, the traumas their community had experienced, and even catastrophes they’d heard about secondhand. Each was like a dagger to the heart, met with head-nods and “uh-huhs” from people all going through the same thing.

LA County left us to die because we couldn’t get alerts!

I’m sleeping in my car because I was a renter and have no insurance coverage!

Millions of dollars in aid were raised for us, and we haven’t gotten anything!

Developers are buying up Altadena and pricing out the Black families who made this place!

The firefighting planes were grounded on purpose by Joe Biden so he could fly around LA!

One of these things was definitely not like the others. And I knew why.

Two trains collide

It’s something of a familiar cycle by now: Tragedy hits; rampant misinformation and conspiracy theories follow. Think of the deluge of “false flag” and “staged gun grab” conspiracy theories after mass shootings, or the rampant disinformation around covid-19 and the 2020 election. It’s often even more acute in the case of a natural disaster, when conspiracy theories about what “really” caused the calamity run right into culture-war-driven climate change denialism. Put together, these theories obscure real causes while elevating fake ones, with both sides battling it out on social media and TV. 

I’ve studied these ideas extensively, having spent the last 10 years writing about conspiracy theories and disinformation as a journalist and researcher. I’ve covered everything from the rise of QAnon to whether Donald Trump faked his assassination attempt to the alarming rises in antisemitism, antivaccine conspiracism, and obsession with human trafficking. I’ve written three books, testified to Congress, and even written a report for the January 6th Committee. So this has been my life for quite a while. 

Still, I’d never lived it. Not until the Eaton Fire.

My house, a cottage built in 1925, was one of those that burned back in January. Our only official notification to flee had come at 3:25 a.m., nine hours after the fires started. We grabbed what we could in 10 minutes, I locked our front door, and six hours later, it was all gone. We could have died. Eighteen Altadena residents did die—and all but one were in the area that was warned too late.

Previously in my professional life, I’d always been able to look at the survivors of a tragedy, crying on TV about how they’d lost everything, and think sympathetically but distantly, Oh, those poor people. And soon enough, the conspiracy theories I was following about the incident for work would die down, and then it was no longer in my official purview—I could move on to the next disaster and whatever mess came with it. 

Now I was one of those poor people. The Eaton Fire had changed everything about my life. Would it change everything about my work as well? It felt as though two trains I’d managed to keep on parallel tracks had collided.

For a long time, I’d been able to talk about the conspiracy theories without letting them in. Now the disinformation was in the room with me, and it was about my life. And I wondered: Did I have a duty to journalism to push back on the wild thinking—or on this particular idea that Biden was responsible? 

Or did I have a duty to myself and my sanity to just stay quiet?

Just true enough

In the days following the Eaton Fire, which coincided with another devastating fire in Los Angeles’ Pacific Palisades neighborhood, the Biden plane storyline was just one of countless rumors, false claims, hoaxes, and accusations about what had happened and who was behind the fires.

Most were culture-war nonsense or political fodder. I also saw clearly fake AI slop (no, the Hollywood sign was not on fire) and bits of TikTok ephemera that could largely be ignored. 

They were from something like an alternate world, one where forest floors hadn’t been “raked” and where incompetent “DEI firefighters” let houses burn while water waited in a giant spigot that California’s governor, Gavin Newsom, refused to “turn on” because he preferred to protect an endangered fish. There were claims that the fires were set on purpose to clear land for the Olympics, or to cover up evidence of human trafficking. Rumors flew that LA had donated all its firefighting money and gear to Ukraine. Some speculated that the fires were started by undocumented immigrants (one was suspected of causing one of the fires but never charged) or “antifa” or Black Lives Matter activists—never mind that one of the most demographically Black areas in the city was wiped out. Or, as always, it was the Jews. In this case, blame fell on a “wealthy Jewish couple” who supposedly owned most of LA’s water and wouldn’t let it go.

These claims originated from the same “just asking questions” influencers who run the same playbook for every disaster. And they spread rapidly through X, a platform where breaking news had been drowned out by hysterical conspiracism. 

But many did have elements of truth to them, surrounded by layers of lies and accusations. A few were just true enough to be impossible to dismiss out of hand, but also not actually true.

So, for the record: Biden did not ground firefighting aircraft in Los Angeles. 

According to fact-checking by both USA Today and Reuters, Biden flew into Los Angeles the day before the Eaton Fire broke out (which was also the same day that the Palisades Fire started, roughly 30 miles to the west), to dedicate two new national monuments. He left two days later. And while there were security measures in place, including flight restrictions over the area where he was staying, firefighting planes simply had to coordinate with air traffic controllers to cross into the closed-off space. 

But when my sort-of neighbor brought up this particular theory that day in May, I wasn’t able to debunk it. For one thing, this was my first time hearing the rumor. But more than that, what could I say that would assuage this man’s anger? And if he wanted to blame Biden for his house burning down, was it really my place to tell him he was wrong—even if he was? 

It’s common for survivors of a disaster to be aware of only parts of the story, struggle to understand the full picture, or fail to fully recollect what happened to them in the moment of survival. Once the trauma ebbs, we’re left looking for answers and clarity and someone who knows what’s going on, because we certainly don’t have a clue. Hoaxes and misinformation stem from anger, confusion, and a lack of clear answers to rapidly evolving questions.  

I can confirm that it was dizzying. Rumors and hoaxes were going around in my personal circles too, even if they weren’t so lurid and even if we didn’t really believe them. Bits of half-heard news circulated constantly in our group texts, WhatsApp chains, Facebook groups, and in-person gatherings. 

There was confusion over who was responsible for the extent of the devastation, genuine anger about purported LA Fire Department budget cuts (though those had not actually happened to the extent conspiracists claimed they did), and fears that a Trump-controlled federal government would abandon California. 

Many of the homes and businesses that we heard had burned down hadn’t, and others that we heard had survived were gone. In an especially heartbreaking early bit of misinformation, a local child-care facility shared a Facebook post stating that FEMA was handing out vouchers to pay 90% of your rent for the next three years—except FEMA doesn’t hand out rent vouchers without an application process. I quietly reached out to the source, who took it down. 

In this information vacuum, and given my work, friends started asking me questions, and answering them took energy and time I didn’t have. Honestly, the “disinformation researcher” was largely just as clueless as everyone else. 

Some of the questions were harmless enough. At one point a friend texted me about a picture from Facebook of a burned Bible page that survived the fire when everything else had turned to ash. It looked too corny and convenient to be real. But I had also found a burned page of Psalms that had survived. I kept it in a ziplock bag because it seemed like the right thing to do. So I told my friend I didn’t know if it was real. I still don’t—but I also still have that ziplock somewhere.

Under attack

As weeks passed, we began to deal with another major issue where truth and misinformation walked together: the reasonable worry that a new president who constantly belittled California would not be willing to provide relief funds.

Recovery depended on FEMA to distribute grants, on the EPA to clear toxic debris, on the Small Business Administration to make loans for rebuilding or repairing homes, on the Army Corps of Engineers to remove the detritus of burned structures, and so much more. How would this square with the new “government efficiency” mandate touting the trillions of dollars and tens of thousands of jobs to be cut from the federal budget? 

Nobody knew—including the many kind government employees who spent months in Altadena helping us recover while silently wondering if they were about to be fired.

Many Altadena residents grew wary of accepting government assistance, particularly in the town’s Black community, which already had a deep and well-earned distrust of the federal government. Many Black residents felt that their needs and stories were being left behind in the recovery, and feared they would be the first to be priced out of whatever Altadena would become.

Outreach in person became critical. I happened to meet the two-star general in charge of the Army Corps’ effort at lunch one day, as he and his team tried to find outside-the-box ways to engage with exhausted and wary residents. He told me they had tried to use technology—texts, emails, clips designed to go viral—but it was too much information, all apparently delivered in the wrong way. Many of the people they needed to reach, particularly older residents, didn’t use social media, weren’t able to communicate well via text, and were easy prey for sophisticated scammers. It was also easy for the real information to get lost as we got bombarded with communications, including many from hoaxers and frauds.

This, too, wasn’t new to me. Many of the movements I’ve covered are awash in grift and worthless wellness products. I know the signs of a scam and a snake-oil salesman. Still, I watched helplessly as my friends and my community, desperate for help, were turned into chum for cash-hungry sharks opening their jaws wide. 

The community was hammered by dodgy contractors and fly-by-night debris removal companies, relief scams and phony grants, and spam calls from “repair companies” and builders. We dealt with scammers, grifters, squatters, thieves, and even tow truck companies that simply stole cars parked outside burned lots and held them for ransom. We were also victimized by looting: Abandoned wires on our lot were stripped for copper, and our neighbor’s unlocked garage was ransacked. After a decade of helping people recognize scams and frauds, there was little I could do when they came for us.

The fear of being conned was contagious, even for me. After hearing of friends who couldn't get a FEMA grant because a previous owner of their home had fraudulently filed an application, we delayed our own appointment with FEMA for weeks. The agency's call had come so out of the blue that we were convinced it was fake. Maybe my job made me overcautious, or maybe we were just paralyzed by the sheer tonnage of decisions and calls that needed handling. Whatever the reason, the fear meant we later had to make multiple calls just to get our meeting rescheduled. It's a small thing, but when you're as exhausted and dispirited as we were, there are no small things.

Contractors for the US Army Corps of Engineers remove hazardous materials from a home destroyed in the Eaton Fire, near a burned-out car.
STEPHANIE ARNETT/MIT TECHNOLOGY REVIEW | GETTY IMAGES

Making all this even more frustrating was that the scammers, the people spinning tales of lasers and endangered fish and antifa, were ignoring the reality: Our planet is trying to kill us. While federal officials recently made an arrest in the Palisades Fire, the direct causes of that fire and the nearby Eaton Fire may take years of investigation and litigation to be fully known. But even now, it can't reasonably be denied that climate change worsened the winds that made the fires spread so quickly.

The Santa Ana winds bombarding Southern California were among the worst ever to hit the region. Their ferocity drove the embers well beyond the nominal fire danger line, particularly in Altadena. Many landed in brush left brittle and dead by the decades-long drought plaguing California. And there was even more of that fuel to burn because the previous two winters had been among the wettest in the region's recent history, spurring a burst of vegetation that later dried out. Such rapid swings between wet and dry or cold and hot have become so common around the world that they even have a name: climate whiplash.

Then there are the conspiracy theory gurus who see all this and make money off it, peddling disinformation on their podcasts and livestreams while blaming everyone and everything but the real causes. Many of these figures have spent decades railing against the very idea that the climate could change. And if it is changing, they claimed, human consumption and urbanization have nothing to do with it. When faced with a disaster that undeniably reflected climate change at work, their business models, which rely on sales of subscriptions and merchandise, demanded that they simply keep denying it.

As more cities and countries deal with “once in a century” climate disasters, I have no doubt that these figures will continue to deflect attention away from human activity. They will use crackpot science, conspiracy theories, politics, and—increasingly—fake videos depicting whatever AI can generate. They will prey on their audiences’ limited understanding of basic science, their inability to perceive how climate and weather differ, and their fears that globalist power brokers will somehow use the weather against them. And their message will spread with little pushback from social media platforms more concerned with virality and shareholder value than truth.

Resisting the temptation

When you cover disinformation and then live through an event that generates a massive volume of it, it's like floating outside your own body on the operating table, watching your heart being worked on, while also being the heart surgeon. I knew I should be trying to help. But I did not have the mental capacity, the time, or, to be honest, the interest in covering what the worst people on the internet were saying about the worst time of my life. I had very real questions about where my family would live. Thinking about my career was not a priority.

But of course, these experiences cannot now be excised from my career. I've spent a lot of time talking about how trauma fuels conspiracism; consider how the isolation and boredom of the covid pandemic created a new generation of conspiracy theory believers. Now I had my own trauma, and avoiding the pit of despair has been a test of my abilities as a journalist and a thinker.

At the same time, I have a much deeper understanding of the psychology at work in conspiracy belief. One of the biggest reasons conspiracy theories take off after a disaster is that they serve to make sense out of something that makes no sense. Neighborhoods aren’t supposed to burn down in an era of highly trained firefighters and seemingly fireproof materials. They especially aren’t supposed to burn down in Los Angeles, one of the wealthiest cities on the planet. These were seven- and eight-figure homes going up like matches. There must be a reason, people figured. Someone, or something, must be responsible.

So, as I emerge from the haze to something resembling “normal,” I feel more compassion and understanding for trauma victims who turn to conspiracy theories. Having faced the literal burning down of my life, I get the urge to assign meaning to such a calamity and point a finger at whoever we think did it to us. 

Meanwhile, the people of Altadena and Pacific Palisades continue to slowly put our lives and communities back together. The effects of both our warming planet and our disinformation crisis continue to assert themselves every day. It’s still alluring to look for easy answers in outrageous conspiracy theories, but such answers are not real and offer no actual help—only the illusion of help.

It’s equally tempting for someone who researches and debunks conspiracy theories to mock or belittle the people who believe these ideas. How could anyone be so dumb as to think Joe Biden caused the fire that burned down my home?

I kept my mouth shut that day at the meeting in the church, though, again, I can now sympathize much more deeply with something I’d otherwise think completely inane. 

But even a journalist who lost his house is still a journalist. So I decided early on that what I really needed to do was keep Altadena in the news. I went on TV and radio, blogged, and happily told our story to anyone who asked. I focused on the community, the impact, the people who would be working to recover long after the national spotlight moved to the next shiny object.

If there is a professional lesson to be taken from this nightmare, it might be that the people caught up in tragedies are exactly that: caught up. And those who believe this nonsense find something of value in it. They find hope and comfort and the reassurance that whoever did this to them will get what they deserve. 

I could have done it too, throwing away years of experience to embrace conspiracist nihilism in the face of unspeakable trauma. After all, those poor people going through this weren’t just on my TV. 

They were my friends. They were me. They could be anyone.

Mike Rothschild is a journalist and an expert on the growth and impact of conspiracy theories and disinformation. He has written three books, including The Storm Is Upon Us, about the QAnon conspiracy movement, and Jewish Space Lasers, about the myths around the Rothschild banking family. He also is a frequent expert witness in legal cases involving conspiracy theories and has spoken at colleges and conferences around the country. He lives in Southern California.