How conspiracy theories infiltrated the doctor’s office

As anyone who has googled their symptoms and convinced themselves that they’ve got a brain tumor will attest, the internet makes it very easy to self-(mis)diagnose your health problems. And although social media and other digital forums can be a lifeline for some people looking for a diagnosis or community, when that information is wrong, it can put their well-being and even lives in danger.

Unfortunately, this modern impulse to “do your own research” became even more pronounced during the coronavirus pandemic.


This story is part of MIT Technology Review’s series “The New Conspiracy Age,” on how the present boom in conspiracy theories is reshaping science and technology.


We asked a number of health-care professionals about how this shifting landscape is changing their profession. They told us that they are being forced to adapt how they treat patients. It’s a wide range of experiences: Some say patients tell them they just want more information about certain treatments because they’re concerned about how effective they are. Others hear that their patients just don’t trust the powers that be. Still others say patients are rejecting evidence-based medicine altogether in favor of alternative theories they’ve come across online. 

These are their stories, in their own words.

Interviews have been edited for length and clarity.


The physician trying to set shared goals 

David Scales

Internal medicine hospitalist and assistant professor of medicine,
Weill Cornell Medical College
New York City

Every one of my colleagues has stories about patients who have been resistant to care, or who had very peculiar perspectives on what their care should be. Sometimes that’s driven by religion. But I think what has changed is that people who don’t necessarily have a religious standpoint are holding very fixed beliefs that are sometimes, based on all the evidence that we have, in contradiction with their health goals. And that is a very challenging situation. 

I once treated a patient with a connective tissue disease called Ehlers-Danlos syndrome. While there’s no doubt that the illness exists, there’s a lot of doubt and uncertainty over which symptoms can be attributed to Ehlers-Danlos. This means it can fall into the category of what social scientists call a “contested illness.” 

Contested illnesses used to be rallying points for arguably fringe movements, but they have become much more prominent since the rise of social media in the mid-2010s. Patients often search for information that resonates with their experience. 

This patient was very hesitant about various treatments, and it was clear she was getting her information from, I would say, suspect sources. She’d been following people online who were not necessarily trustworthy, so I sat down with her and we looked them up on Quackwatch, a site that lists health myths and misconduct. 

“She was extremely knowledgeable, and had done a lot of her own research, but she struggled to tell the difference between good and bad sources.”

She was still accepting of treatment. She was extremely knowledgeable and had done a lot of her own research, but she struggled to tell the difference between good and bad sources, and she held fixed beliefs that overemphasized particular things, such as which symptoms might be attributable to other causes.

Physicians have the tools to work with patients who are struggling with these challenges. The first is motivational interviewing, a counseling technique that was developed for people with substance-use disorders. It’s a nonjudgmental approach that uses open-ended questions to draw out people’s motivations, and to find where there’s a mismatch between their behaviors and their beliefs. It’s highly effective in treating people who are vaccine-hesitant.

Another is an approach called shared decision-making. First we work out what the patient’s goals are and then figure out a way to align those with what we know about the evidence-based way to treat them. It’s something we use for end-of-life care, too.

What’s concerning to me is that there seems to be a dynamic of patients coming in with a fixed belief about how their illness should be diagnosed and how their symptoms should be treated, in a way that’s completely divorced from the kinds of medicine you’d find in textbooks, and that the same dynamic is starting to extend to other illnesses, too.


The therapist committed to being there when the conspiracy fever breaks 

Damien Stewart

Psychologist
Warsaw, Poland

Before covid, I hadn’t really had any clients bring conspiracy theories into my practice. But once the pandemic began, those theories went from seeming fun or harmless to being something dangerous.

In my experience, vaccines were the topic where I first really started to see some militancy—people who were looking down the barrel of losing their jobs because they wouldn’t get vaccinated. At one point, I had an out-and-out conspiracy theorist say to me, “I might as well wear a yellow star like the Jews during the Holocaust, because I won’t get vaccinated.” 

I felt pure anger, and I reached a point in my therapeutic journey I didn’t know would ever occur: I’d found that I had a line a client could cross that I could not tolerate. I spoke in a very direct manner he probably wasn’t used to and challenged his conspiracy theory. He got very angry and hung up the call.

It made me figure out how I was going to deal with this in the future, and to develop an approach: not to challenge the conspiracy theory, but to gently talk through it, to provide alternative points of view and ask questions. I try to find the therapeutic value in the information, in the conversations we’re having. My belief, and what the evidence seems to show, is that people believe in conspiracy theories because there’s something wrong in their life that is inexplicable, and they need something to explain what’s happening to them. And even if I don’t believe or agree with anything they’re saying, I think I need to sit here and have this conversation, because one day this person might snap out of it, and I need to be here when that happens.

As a psychologist, you have to remember that these people who believe in these things are extremely vulnerable. So my anger around these conspiracy theories has changed from being directed toward the deliverer—the person sitting in front of me saying these things—to the people driving the theories.


The emergency room doctor trying to get patients to reconnect with the evidence

Luis Aguilar Montalvan

Attending emergency medicine physician 
Queens, New York

The emergency department is essentially the pulse of what is happening in society. That’s what really attracted me to it. And I think the job of the emergency doctor, particularly amid shifting political views and beliefs about Western medicine, is to try to reconnect with someone: to create the kind of experience that primes them to hopefully reconsider their relationship with evidence-based medicine.

When I was working in the pediatrics emergency department a few years ago, we saw a resurgence of diseases we thought we had eliminated, like measles. I typically framed it by saying to the child’s caregiver: “This is a disease we typically use vaccines for, and the vaccine can prevent it in the majority of people.” 

“The doctor is now more like a consultant or a customer service provider than the authority. … The power dynamic has changed.”

The sentiment among my adult patients who are reluctant to get vaccinated or take certain medications seems to be from a mistrust of the government or “The System” rather than from anything Robert F. Kennedy Jr. says directly, for example. I’m definitely seeing more patients these days asking me what they can take to manage a condition or pain that’s not medication. I tell them that the knowledge I have is based on science, and explain the medications I’d typically give other people in their situation. I try to give them autonomy while reintroducing the idea of sticking with the evidence, and for the most part they’re appreciative and courteous.

The role of doctor has changed in recent years—there’s been a cultural change. My understanding is that back in the day, what the doctor said, the patient did. Some doctors used to shame parents who hadn’t vaccinated their kids. Now we’re shifting away from that, and the doctor is now more like a consultant or a customer service provider than the authority. I think that could be because we’ve seen a lot of bad actors in medicine, so the power dynamic has changed.  

I think if we had a more unified approach at a national level, if public health authorities had an actual unified and transparent relationship with the population, that would set us up right. But I’m not sure we’ve ever had that.


The psychologist who supported severely mentally ill patients through the pandemic 

Michelle Sallee

Psychologist, board certified in serious mental illness psychology
Oakland, California

I’m a clinical psychologist who only works with people who have been in the hospital three or more times in the last 12 months. I do both individual therapy and a lot of group work, and several years ago during the pandemic, I wrote a 10-week program for patients about how to cope with sheltering in place, following safety guidelines, and their concerns about vaccines.

My groups were very structured around evidence-based practice, and I had rules for the groups. First, I would tell people that the goal was not to talk them out of their conspiracy theory; my goal was not to talk them into a vaccination. My goal was to provide a safe place for them to be able to talk about things that were terrifying to them. We wanted to reduce anxiety, depression, thoughts of suicide, and the need for psychiatric hospitalizations. 

Half of the group was pro–public health requirements, and their paranoia and fear for their safety centered on people who don’t get vaccinated; the other half might have been strongly opposed to anyone other than themselves deciding they needed a vaccination or a mask. Both sides were fearing for their lives, but from each other.

I wanted to make sure everybody felt heard, and it was really important to be able to talk about what they believed—like, some people felt like the government was trying to track us and even kill us—without any judgment from other people. My theory is that if you allow people to talk freely about what’s on their mind without blocking them with your own opinions or judgment, they will find their way eventually. And a lot of times that works. 

People have been stuck on their conspiracy theory, or their paranoia has been stuck on it, for a long time because they’re always fighting with people about it; everyone’s telling them that this is not true. So we would just have an open discussion about these things. 

“People have been stuck on their conspiracy theory for a long time because they’re always fighting with people about it; everyone’s telling them that this is not true.”

I ran the program four times for a total of 27 people, and the thing that I remember the most was how respectful and tolerant and empathic, but still honest about their feelings and opinions, everybody was. At the end of the program, most participants reported a decrease in pandemic-related stress. Half reported a decrease in general perceived stress, and half reported no change.

I’d say that vaccines come up significantly less often now, and covid doesn’t really come up anymore. But other medical illnesses do come up, with patients saying, “My doctor said I need to get this surgery, but I know who they’re working for.” Everybody has their concerns, but when a person with psychosis has concerns, those concerns can become delusional, paranoid, and psychotic.

I’d like to see more providers be given more training around severe mental illness. These are not people who just need to go to the hospital to get remedicated for a couple of days. There’s a whole life that needs to get looked at here, and they deserve that. I’d like to see more group settings with a combination of psychoeducation, evidence-based research, skills training, and process, because the research says that’s the combination that’s really important.

Editor’s note: Sallee works for a large HMO psychiatry department, and her account here is not on behalf of, endorsed by, or speaking for any larger organization.


The epidemiologist rethinking how to bridge differences in culture and community 

John Wright

Clinician and epidemiologist
Bradford, United Kingdom

I work in Bradford, the fifth-biggest city in the UK. It has a big South Asian population and high levels of deprivation. Before covid, I’d say there was growing awareness about conspiracies. But during the pandemic, I think that lockdown, isolation, fear of this unknown virus, and then the uncertainty about the future came together in a perfect storm to highlight people’s latent attraction to alternative hypotheses and conspiracies—it was fertile ground. I’ve been a National Health Service doctor for almost 40 years, and until recently, the NHS had a great reputation, with great trust, and great public support. The pandemic was the first time that I started seeing that erode.

It wasn’t just conspiracies about vaccines or new drugs, either—it was also an undermining of trust in public institutions. I remember an older woman who had come into the emergency department with covid. She was very unwell, but she just wouldn’t go into hospital despite all our efforts, because there were conspiracies going around that we were killing patients in hospital. So she went home, and I don’t know what happened to her.

The other big change in recent years has been social media and social networks, which have obviously amplified and accelerated alternative theories and conspiracies. That’s been the tinder that’s allowed the wildfires of these conspiracy theories to spread. In Bradford, particularly among ethnic minority communities, there have been stronger links between people, allowing this to spread quicker, but also a more structural distrust. 

Vaccination rates have fallen since the pandemic, and we’re seeing lower uptake of the meningitis and HPV vaccines in schools among South Asian families. Ultimately, this needs a bigger societal approach than individual clinicians putting needles in arms. We started a project called Born in Bradford in 2007 that’s following more than 13,000 families, including around 20,000 teenagers as they grow up. One of the biggest focuses for us is how they use social media and how it links to their mental health, so we’re asking them to donate their digital media to us so we can examine it in confidence. We’re hoping it could allow us to explore conspiracies and influences.

The challenge for the next generation of resident doctors and clinicians is: How do we encourage health literacy in young people about what’s right and what’s wrong without being paternalistic? We also need to get better at engaging with people as health advocates to counter some of the online narratives. The NHS website can’t compete with how engaging content on TikTok is.


The pediatrician who worries about the confusing public narrative on vaccines

Jessica Weisz

Pediatrician
Washington, DC

I’m an outpatient pediatrician, so I do a lot of preventative care, checkups, and sick visits, and treat coughs and colds—those sorts of things. I’ve had specific training in how to support families in clinical decision-making related to vaccines. Every family wants what’s best for their child, and supporting them is part of my job.

I don’t see specific articulation of conspiracy theories, but I do think there are more questions about vaccines in conversations I’ve not typically had to have before. I’ve found that parents and caregivers do ask general questions about the risks and benefits of vaccines. We just try to reiterate that vaccines have been studied, that they are intentionally scheduled to protect an immature immune system when it’s the most vulnerable, and that we want everyone to be safe, healthy, and strong. That’s how we can provide protection.

“I think what’s confusing is that distress is being sowed in headlines when most patients, families, and caregivers are motivated and want to be vaccinated.”

I feel that the narrative in the public space is unfairly confusing to families when over 90% of families still want their kids to be vaccinated. For the families who are not as interested in that, or who have questions, it typically takes multiple conversations to support their decision-making. It’s very rarely one conversation.

I think what’s confusing is that distress is being sowed in headlines when most patients, families, and caregivers are motivated and want to be vaccinated. For example, some of the headlines around recent changes the CDC is making make it sound like a huge clinical change, when it’s actually not a huge change from what people are typically doing. In my standard clinical practice, we don’t give the combined MMRV vaccine to children under four years old, and that’s been standard practice in all of the places I’ve worked on the Eastern Seaboard. [Editor’s note: In early October, the CDC updated its recommendation that young children receive the varicella vaccine separately from the combined vaccine for measles, mumps, and rubella. Many practitioners, including Weisz, already offer the shots separately.]

If you look at public surveys, pediatricians are still the most trusted [among health-care providers], and I do live in a jurisdiction with pretty strong policy about school-based vaccination. I think that people are getting information from multiple sources, but at the end of the day, in terms of both the national rates and also what I see in clinical practice, we really are seeing most families wanting vaccines.

Why it’s so hard to bust the weather control conspiracy theory

It was October 2024, and Hurricane Helene had just devastated the US Southeast. Representative Marjorie Taylor Greene of Georgia found an abstract target on which to pin the blame: “Yes they can control the weather,” she posted on X. “It’s ridiculous for anyone to lie and say it can’t be done.” 

There was no word on who “they” were, but maybe it was better that way. 

She was repeating what’s by now a pretty familiar and popular conspiracy theory: that shadowy forces are out there, wielding unknown technology to control the weather and wreak havoc on their supposed enemies. This claim, fundamentally preposterous from a scientific standpoint, has grown louder and more common in recent years. It pops up over and over when extreme weather strikes: in Dubai in April 2024, in Australia in July 2022, in the US after California floods and hurricanes like Helene and Milton. In the UK, conspiracy theorists claimed that the government had fixed the weather to be sunny and rain-free during the first covid lockdown in March 2020. Most recently, the theories spread again when disastrous floods hit central Texas this past July. The idea has even inspired some antigovernment extremists to threaten and try to destroy weather radar towers. 


But here’s the thing: While Greene and other believers are not correct, this conspiracy theory—like so many others—holds a kernel of much more modest truth behind the grandiose claims. 

Sure, there is no current way for humans to control the weather. We can’t cause major floods or redirect hurricanes or other powerful storm systems, simply because the energy involved in those systems is far too great for humans to alter them significantly. 

But there are ways we can modify the weather. The key difference is the scale of what is possible. 

The most common weather modification practice is called cloud seeding, and it involves injecting small amounts of salts or other materials into clouds with the goal of juicing levels of rain or snow. This is typically done in dry areas that lack regular precipitation. Research shows that it can in fact work, though advances in technology reveal that its impact is modest—coaxing maybe 5% to 10% more moisture out of otherwise stubborn clouds.

But the fact that humans can influence weather at all gives conspiracy theorists a foothold in the truth. Add to this a spotty history of actual efforts by governments and militaries to control major storms, as well as other emerging but not-yet-deployed-at-any-scale technologies that aim to address climate change … and you can see where things get confusing. 

So while more sweeping claims of weather control are ultimately ridiculous from a scientific standpoint, they can’t be dismissed as entirely stupid.

This all helped make the conspiracy theories swirling after the recent Texas floods particularly loud and powerful. Just days earlier, 100 miles away from the epicenter of the floods, in a town called Runge, the cloud-seeding company Rainmaker had flown a single-engine plane and released about 70 grams of silver iodide into some clouds; a modest drizzle of less than half a centimeter of rain followed. But once the company saw a storm front in the forecast, it suspended its work; there was no need to seed with rain already on the way.

“We conducted an operation on July 2, totally within the scope of what we were regulatorily permitted to do,” Augustus Doricko, Rainmaker’s founder and CEO, recently told me. Still, when as much as 20 inches of rain fell soon afterward not too far away, and more than 100 people died, the conspiracy theory machine whirred into action. 

As Doricko told the Washington Post in the tragedy’s aftermath, he and his company faced “nonstop pandemonium” on social media; eventually someone even posted photos from outside Rainmaker’s office, along with its address. Doricko told me a few factors played into the pile-on, including a lack of familiarity with the specifics of cloud seeding, as well as what he called “deliberately inflammatory messaging from politicians.” Indeed, theories about Rainmaker and cloud seeding spread online via prominent figures including Greene and former national security advisor Mike Flynn.

Unfortunately, all this is happening at the same time as the warming climate is making heavy rainfall and the floods that accompany it more and more likely. “These events will become more frequent,” says Emily Yeh, a professor of geography at the University of Colorado who has examined approaches and reactions to weather modification around the world. “There is a large, vocal group of people who are willing to believe anything but climate change as the reason for Texas floods, or hurricanes.”

Worsening extremes, increasing weather modification activity, improving technology, a sometimes shady track record—the conditions are perfect for an otherwise niche conspiracy theory to spread to anyone desperate for tidy explanations of increasingly disastrous events.

Here, we break down just what’s possible and what isn’t—and address some of the more colorful reasons why people may believe things that go far beyond the facts. 

What we can do with the weather—and who is doing it

The basic concepts behind cloud seeding have been around for about 80 years, and government interest in the topic goes back even longer than that.

The primary practice involves using planes, drones, or generators on the ground to inject tiny particles of stuff, usually silver iodide, into existing clouds. The particles act as nuclei around which moisture can build up, forming ice crystals that can get heavy enough to fall out of the cloud as snow or rain.

“Weather modification is an old field; starting in the 1940s there was a lot of excitement,” says David Delene, a research professor of atmospheric sciences at the University of North Dakota and an expert on cloud seeding. In a 1952 US Senate report on establishing a committee to study weather modification, the authors noted that a small amount of extra rain could “produce electric power worth hundreds of thousands of dollars” and “greatly increase crop yields.” The report also cited potential uses like “reducing soil erosion,” “breaking up hurricanes,” and even “cutting holes in clouds so that aircraft can operate.” 

But, as Delene adds, “that excitement … was not realized.”

Through the 1980s, extensive research often funded or conducted by Washington yielded a much better understanding of atmospheric science and cloud physics, though it proved extremely difficult to actually demonstrate the efficacy of the technology itself. In other words, scientists learned the basic principles behind cloud seeding, and understood on a theoretical level that it should work—but it was hard to tell how big an impact it was having on rainfall.

There is huge variability between one cloud and another, one storm system and another, one mountain or valley and another; for decades, the tools available to researchers did not really allow for firm conclusions on exactly how much extra moisture, if any, they were getting out of any given operation. Interest in the practice died down to a low hum by the 1990s.

But over the past couple of decades, the early excitement has returned.

Cloud seeding can enhance levels of rain and snow 

While the core technology has largely stayed the same, several projects launched in the US and abroad starting in the 2000s have combined statistical modeling with new and improved aircraft-based measurements, ground-based radar, and more to provide better answers on what results are actually achievable when seeding clouds.

“I think we’ve identified unequivocally that we can indeed modify the cloud,” says Jeff French, an associate professor and head of the University of Wyoming’s Department of Atmospheric Science, who has worked for years on the topic. But even as scientists have come to largely agree that the practice can have an impact on precipitation, they also largely recognize that the impact probably has some fairly modest upper limits—far short of massive water surges. 

“There is absolutely no evidence that cloud seeding can modify a cloud to the extent that would be needed to cause a flood,” French says. Floods require a few factors, he adds—a system with plenty of moisture available that stays localized to a certain spot for an extended period. “All of these things which cloud seeding has zero effect on,” he says. 

The technology simply operates on a different level. “Cloud seeding really is looking at making an inefficient system a little bit more efficient,” French says. 

As Delene puts it: “Originally [researchers] thought, well, we could, you know, do 50%, 100% increases in precipitation,” but “I think if you do a good program you’re not going to get more than a 10% increase.” 

Asked for his take on a theoretical limit, French was hesitant—“I don’t know if I’m ready to stick my neck out”—but agreed on “maybe 10-ish percent” as a reasonable guess.

Another cloud seeding expert, Katja Friedrich from the University of Colorado–Boulder, says that any grander potential would be obvious by this point: We wouldn’t have “spent the last 100 years debating—within the scientific community—if cloud seeding works,” she writes in an email. “It would have been easy to separate the signal (from cloud seeding) from the noise (natural precipitation).”

It can also (probably) suppress precipitation

Sometimes cloud seeding is used not to boost rain and snow but rather to try to reduce their severity—or, more specifically, to change the size of individual rain droplets or hailstones. 

One of the most prominent examples has been in parts of Canada, where hailstorms can be devastating; a 2024 event in Calgary, for instance, was the country’s second-most-expensive disaster ever, with over $2 billion in damages. 

Insurance companies in Alberta have been working together for nearly three decades on a cloud seeding program that’s aimed at reducing some of that damage. In these cases, the silver iodide or other particles are meant to act essentially as competition for other “embryos” inside the cloud, increasing the total number of hailstones and thus reducing each individual stone’s average size. 

Smaller hailstones mean less damage when they reach the ground. The insurance companies—which continue to pay for the program—say losses have been cut by 50% since the program started, though scientists aren’t quite as confident in its overall success. A 2023 study published in Atmospheric Research examined 10 years of cloud seeding efforts in the province and found that the practice did appear to reduce the potential for damage in about 60% of seeded storms—while in others, it had no effect or was even associated with increased hail (though the authors said this could have been due to natural variation).

Similar techniques are also sometimes deployed to try to improve the daily forecast just a bit. During the 2008 Olympics, for instance, China engaged in a form of cloud seeding aimed at reducing rainfall. As MIT Technology Review detailed back then, officials with the Beijing Weather Modification Office planned to use a liquid-nitrogen-based coolant that could increase the number of water droplets in a cloud while reducing their size; this can get droplets to stay aloft a little longer instead of falling out of the cloud. Though it is tough to prove definitively that it would have rained without the effort, the targeted opening ceremony did stay dry.

So, where is this happening? 

The United Nations’ World Meteorological Organization says that some form of weather modification is taking place in “more than 50 countries” and that “demand for these weather modification activities is increasing steadily due to the incidence of droughts and other calamities.”

The biggest user of cloud-seeding tech is arguably China. Following the work around the Olympics, the country announced a huge expansion of its weather modification program in 2020, claiming it would eventually run operations for agricultural relief and other functions, including hail suppression, over an area about the size of India and Algeria combined. Since then, China has occasionally announced bits of progress—including updates to weather modification aircraft and the first use of drones for artificial snow enhancement. Overall, it spends billions on the practice, with more to come.

Elsewhere, desert countries have taken an interest. In 2024, Saudi Arabia announced an expanded research program on cloud seeding—Delene, of the University of North Dakota, was part of a team that conducted experiments in various parts of that country in late 2023. Its neighbor the United Arab Emirates began “rain enhancement” activities back in 1990; this program too has faced outcry, especially after more than a typical year’s worth of rain fell in a single day in 2024, causing massive flooding. (Bloomberg recently published a story about persistent questions regarding the country’s cloud seeding program; in response to the story, French wrote in an email that the “best scientific understanding is still that cloud seeding CANNOT lead to these types of events.” Other experts we asked agreed.) 

In the US, a 2024 Government Accountability Office report on cloud seeding said that at least nine states have active programs. These are sometimes run directly by the state and sometimes contracted out through nonprofits like the South Texas Weather Modification Association to private companies, including Doricko’s Rainmaker and North Dakota–based Weather Modification. In August, Doricko told me that Rainmaker had grown to 76 employees since it launched in 2023. It now runs cloud seeding operations in Utah, Idaho, Oregon, California, and Texas, as well as forecasting services in New Mexico and Arizona. And in an answer that may further fuel the conspiracy fire, he added they are also operating in one Middle Eastern country; when I asked which one, he’d only say, “Can’t tell you.”

What we cannot do

The versions of weather modification that the conspiracy theorists envision most often—significantly altering monsoons or hurricanes or making the skies clear and sunny for weeks at a time—have so far proved impossible to carry out. But that’s not necessarily for lack of trying.

The US government attempted to alter a hurricane in 1947 as part of a program dubbed Project Cirrus. In collaboration with GE, government scientists seeded clouds with pellets of dry ice, the idea being that the falling pellets could induce supercooled liquid in the clouds to crystallize into ice. After they did this, the storm took a sharp left turn and struck the area around Savannah, Georgia. This was a significant moment for budding conspiracy theories, since a GE scientist who had been working with the government said he was “99% sure” the cyclone swerved because of their work. Other experts disagreed and showed that such storm trajectories are, in reality, perfectly possible without intervention. Perhaps unsurprisingly, public outrage and threats of lawsuits followed.

It took some time for the hubbub to die down, after which several US government agencies continued—unsuccessfully—trying to alter and weaken hurricanes with a long-running cloud seeding program called Project Stormfury. Around the same time, the US military joined the fray with Operation Popeye, essentially trying to harness weather as a weapon in the Vietnam War—engaging in cloud seeding efforts over Vietnam, Cambodia, and Laos in the late 1960s and early 1970s, with an eye toward increasing monsoon rains and bogging down the enemy. Though it was never really clear whether these efforts worked, the Nixon administration tried to deny them, going so far as to lie to the public and even to congressional committees.

More recently and less menacingly, there have been experiments with Dyn-O-Gel—a Florida company’s super-absorbent powder, intended to be dropped into storm clouds to sop up their moisture. In the early 2000s, the company carried out experiments with the stuff in thunderstorms, and it had grand plans to use it to weaken tropical cyclones. But according to one former NOAA scientist, you would need to drop almost 38,000 tons of it, requiring nearly 380 individual plane trips, in and around even a relatively small cyclone’s eyewall to really affect the storm’s strength. And then you would have to do that again an hour and a half later, and so on. Reality tends to get in the way of the biggest weather modification ideas.

Beyond trying to control storms, there are some other potential weather modification technologies out there that are either just getting started or have never taken off. Swiss researchers have tried to use powerful lasers to induce cloud formation, for example; in Australia, where climate change is imperiling the Great Barrier Reef, artificial clouds created when ship-based nozzles spray moisture into the sky have been used to try to protect the vital ecosystem. In each case, the efforts remain small, localized, and not remotely close to achieving the kinds of control the conspiracy theorists allege.

What is not weather modification—but gets lumped in with it

Further worsening weather control conspiracies is the tendency to conflate cloud seeding and other promising weather modification research with concepts such as chemtrails—a full-on conspiracist fever dream about innocuous condensation trails left by jets—and solar geoengineering, a theoretical stopgap to cool the planet that has been subject to much discussion and modeling research but has never been deployed in any large-scale way.

One controversial form of solar geoengineering, known as stratospheric aerosol injection, would involve having high-altitude jets drop tiny aerosol particles—sulfur dioxide, most likely—into the stratosphere to act essentially as tiny mirrors. They would reflect a small amount of sunlight back into space, leaving less energy to reach the ground and contribute to warming. To date, attempts to launch physical experiments in this space have been shouted down, and only tiny—though still controversial—commercial efforts have taken place. 

One can see why it gets lumped in with cloud seeding: bits of stuff, dumped into the sky, with the aim of altering what happens down below. But the aims are entirely separate; geoengineering would alter the global average temperature rather than having measurable effects on momentary cloudbursts or hailstorms. Some research has suggested that the practice could alter monsoon patterns, a significant issue given their importance to much of the world’s agriculture, but it remains a fundamentally different practice from cloud seeding.

Still, the political conversation around supposed weather control often reflects this confusion. Greene, for instance, introduced a bill in July called the Clear Skies Act, which would ban all weather modification and geoengineering activities. (Greene’s congressional office did not respond to a request for comment.) And last year, Tennessee became the first state to enact a law to prohibit the “intentional injection, release, or dispersion, by any means, of chemicals, chemical compounds, substances, or apparatus … into the atmosphere with the express purpose of affecting temperature, weather, or the intensity of the sunlight.” Florida followed suit, with Governor Ron DeSantis signing SB 56 into law in June of this year for the same stated purpose.

Also this year, lawmakers in more than 20 other states have proposed some version of a ban on weather modification, often lumping it in with geoengineering, even though caution on the latter is more widely accepted or endorsed. “It’s not a conspiracy theory,” one Pennsylvania lawmaker who cosponsored a similar bill told NBC News. “All you have to do is look up.”

Oddly enough, as Yeh of the University of Colorado points out, the places where bans have passed are states where weather modification isn’t really happening. “In a way, it’s easy for them to ban it, because, you know, nothing actually has to be done,” she says. In general, neither Florida nor Tennessee—nor any other part of the Southeast—needs any help finding rain. Basically, all weather modification activity in the US happens in the drier areas west of the Mississippi. 

Finding a culprit

Doricko told me that in the wake of the Texas disaster, he has seen more people become willing to learn about the true capabilities of cloud seeding and move past the more sinister theories about it. 

I asked him, though, about some of his company’s flashier branding: Until recently, visitors to the Rainmaker website were greeted right up top with the slogan “Making Earth Habitable.” Might this level of hype contribute to public misunderstanding or fear? 

He said he is indeed aware that Earth is, currently, habitable, and called the slogan a “tongue-in-cheek, deliberately provocative statement.” Still, in contrast to the academics who seem more comfortable acknowledging weather modification’s limits, he has continued to tout its revolutionary potential. “If we don’t produce more water, then a lot of the Earth will become less habitable,” he said. “By producing more water via cloud seeding, we’re helping to conserve the ecosystems that do currently exist, that are at risk of collapse.” 

While other experts cited that 10% figure as a likely upper limit of cloud seeding’s effectiveness, Doricko said they could eventually approach 20%, though that might be years away. “Is it literally magic? Like, can I snap my fingers and turn the Sahara green? No,” he said. “But can it help make a greener, verdant, and abundant world? Yeah, absolutely.” 

It’s not all that hard to see why people still cling to magical thinking here. The changing climate is, after all, offering up what’s essentially weaponized weather, only with a much broader and long-term mechanism behind it. There is no single sinister agency or company with its finger on the trigger, though it can be tempting to look for one; rather, we just have an atmosphere capable of holding more moisture and dropping it onto ill-prepared communities, and many of the people in power are doing little to mitigate the impacts.

“Governments are not doing a good job of responding to the climate crisis; they are often captured by fossil-fuel interests, which drive policy, and they can be slow and ineffective when responding to disasters,” Naomi Smith, a lecturer in sociology at the University of the Sunshine Coast in Australia who has written about conspiracy theories and weather events, writes in an email. “It’s hard to hold all this complexity, and conspiracy theorizing is one way of making it intelligible and understandable.”  

“Conspiracy theories give us a ‘big bad’ to point the finger at, someone to blame and a place to put our feelings of anger, despair, and grief,” she writes. “It’s much less satisfying to yell at the weather, or to engage in the sustained collective action we actually need to tackle climate change.”

The sinister “they” in Greene’s accusations is, in other words, a far easier target than the real culprit. 

Dave Levitan is an independent journalist focused on science, politics, and policy. Find his work at davelevitan.com and subscribe to his newsletter at gravityisgone.com.

Chatbots are surprisingly effective at debunking conspiracy theories

It’s become a truism that facts alone don’t change people’s minds. Perhaps nowhere is this more clear than when it comes to conspiracy theories: Many people believe that you can’t talk conspiracists out of their beliefs. 

But that’s not necessarily true. It turns out that many conspiracy believers do respond to evidence and arguments—information that is now easy to deliver in the form of a tailored conversation with an AI chatbot.

In research we published in the journal Science this year, we had over 2,000 conspiracy believers engage in a roughly eight-minute conversation with DebunkBot, a model we built on top of OpenAI’s GPT-4 Turbo (the most up-to-date GPT model at that time). Participants began by writing out, in their own words, a conspiracy theory that they believed and the evidence that made the theory compelling to them. Then we instructed the AI model to persuade the user to stop believing in that conspiracy and adopt a less conspiratorial view of the world. A three-round back-and-forth text chat with the AI model (lasting 8.4 minutes on average) led to a 20% decrease in participants’ confidence in the belief, and about one in four participants—all of whom believed the conspiracy theory beforehand—indicated that they did not believe it after the conversation. This effect held true for both classic conspiracies (think the JFK assassination or the moon landing hoax) and more contemporary politically charged ones (like those related to the 2020 election and covid-19).

This is good news, given the outsize role that unfounded conspiracy theories play in today’s political landscape. So while there are widespread and legitimate concerns that generative AI is a potent tool for spreading disinformation, our work shows that it can also be part of the solution. 

Even people who began the conversation absolutely certain that their conspiracy was true, or who indicated that it was highly important to their personal worldview, showed marked decreases in belief. Remarkably, the effects were very durable; we followed up with participants two months later and saw just as big a reduction in conspiracy belief as we did immediately after the conversations. 

Our experiments indicate that many believers are relatively rational but misinformed, and getting them timely, accurate facts can have a big impact. Conspiracy theories can make sense to reasonable people who have simply never heard clear, non-conspiratorial explanations for the events they’re fixated on. This may seem surprising. But many conspiratorial claims, while wrong, seem reasonable on the surface and require specialized, esoteric knowledge to evaluate and debunk. 

For example, 9/11 deniers often point to the claim that jet fuel doesn’t burn hot enough to melt steel as evidence that airplanes were not responsible for bringing down the Twin Towers—but the chatbot responds by pointing out that although this is true, the American Institute of Steel Construction says jet fuel does burn hot enough to reduce the strength of steel by over 50%, which is more than enough to cause such towers to collapse. 

Although we have greater access to factual information than ever before, it is extremely difficult to search that vast corpus of knowledge efficiently. Finding the truth that way requires knowing what to google—or who to listen to—and being sufficiently motivated to seek out conflicting information. There are large time and skill barriers to conducting such a search every time we hear a new claim, and so it’s easy to take conspiratorial content you stumble upon at face value. And most would-be debunkers at the Thanksgiving table make elementary mistakes that AI avoids: Do you know the melting point and tensile strength of steel offhand? And when your relative calls you an idiot while trying to correct you, are you able to maintain your composure? 

With enough effort, humans would almost certainly be able to research and deliver facts like the AI in our experiments. And in a follow-up experiment, we found that the AI debunking was just as effective if we told participants they were talking to an expert rather than an AI. So it’s not that the debunking effect is AI-specific. Generally speaking, facts and evidence delivered by humans would also work. But it would require a lot of time and concentration for a human to come up with those facts. Generative AI can do the cognitive labor of fact-checking and rebutting conspiracy claims much more efficiently. 

In another large follow-up experiment, we found that what drove the debunking effect was specifically the facts and evidence the model provided: Factors like letting people know the chatbot was going to try to talk them out of their beliefs didn’t reduce its efficacy, whereas telling the model to try to persuade its chat partner without using facts and evidence totally eliminated the effect. 

Although the foibles and hallucinations of these models are well documented, our results suggest that debunking efforts are widespread enough on the internet to keep the conspiracy-focused conversations roughly accurate. When we hired a professional fact-checker to evaluate GPT-4’s claims, they found that over 99% of the claims were rated as true (and not politically biased). Also, in the few cases where participants named conspiracies that turned out to be true (like MK Ultra, the CIA’s human experimentation program from the 1950s), the AI chatbot confirmed their accurate belief rather than erroneously talking them out of it.

To date, largely by necessity, interventions to combat conspiracy theorizing have been mainly prophylactic—aiming to prevent people from going down the rabbit hole rather than trying to pull them back out. Now, thanks to advances in generative AI, we have a tool that can change conspiracists’ minds using evidence. 

Bots prompted to debunk conspiracy theories could be deployed on social media platforms to engage with those who share conspiratorial content—including other AI chatbots that spread conspiracies. Google could also link debunking AI models to search engines to provide factual answers to conspiracy-related queries. And instead of arguing with your conspiratorial uncle over the dinner table, you could just pass him your phone and have him talk to AI. 

Of course, there are much deeper implications here for how we as humans make sense of the world around us. It is widely argued that we now live in a “post-truth” world, where polarization and politics have eclipsed facts and evidence. By that account, our passions trump truth, logic-based reasoning is passé, and the only way to effectively change people’s minds is via psychological tactics like presenting compelling personal narratives or changing perceptions of the social norm. If so, the typical, discourse-based work of living together in a democracy is fruitless.

But facts aren’t dead. Our findings about conspiracy theories are the latest—and perhaps most extreme—in an emerging body of research demonstrating the persuasive power of facts and evidence. For example, while it was once believed that correcting falsehoods that align with one’s politics would just cause people to dig in and believe them even more, this idea of a “backfire” has itself been debunked: Many studies consistently find that corrections and warning labels reduce belief in, and sharing of, falsehoods—even among those who most distrust the fact-checkers making the corrections. Similarly, evidence-based arguments can change partisans’ minds on political issues, even when they are actively reminded that the argument goes against their party leader’s position. And simply reminding people to think about whether content is accurate before they share it can substantially reduce the spread of misinformation. 

And if facts aren’t dead, then there’s hope for democracy—though this arguably requires a consensus set of facts from which rival factions can work. There is indeed widespread partisan disagreement on basic facts, and a disturbing level of belief in conspiracy theories. Yet this doesn’t necessarily mean our minds are inescapably warped by our politics and identities. When faced with evidence—even inconvenient or uncomfortable evidence—many people do shift their thinking in response. And so if it’s possible to disseminate accurate information widely enough, perhaps with the help of AI, we may be able to reestablish the factual common ground that is missing from society today.

You can try our debunking bot yourself at debunkbot.com.

Thomas Costello is an assistant professor in social and decision sciences at Carnegie Mellon University. His research integrates psychology, political science, and human-computer interaction to examine where our viewpoints come from, how they differ from person to person, and why they change—as well as the sweeping impacts of artificial intelligence on these processes.

Gordon Pennycook is the Dorothy and Ariz Mehta Faculty Leadership Fellow and associate professor of psychology at Cornell University. He examines the causes and consequences of analytic reasoning, exploring how intuitive versus deliberative thinking shapes decision-making to understand errors underlying issues such as climate inaction, health behaviors, and political polarization.

David Rand is a professor of information science, marketing and management communication, and psychology at Cornell University. He uses approaches from computational social science and cognitive science to explore how human-AI dialogue can correct inaccurate beliefs, why people share falsehoods, and how to reduce political polarization and promote cooperation.

Why do so many people think the Fruit of the Loom logo had a cornucopia?

There is a shirt currently listed on eBay for $2,128.79. It was not designed by Versace or Dior, nor spun from the world’s finest silk. In fact, a tag proudly declares, “100% cotton made in Myanmar”—but it’s a second tag, just below that one, that makes this blue button-down so expensive. 

“I looked at it and I was like, Wow, this is cool,” says Brooke Hermann, the 30-year-old Kentucky-based reseller who bought the top for $1 at a secondhand sale in 2024. “This doesn’t look like any other Fruit of the Loom tag I’ve ever seen.”

Quick question: Does the Fruit of the Loom logo feature a cornucopia? 

Many of us have been wearing the casualwear company’s T-shirts and underpants for decades, and yet the question of whether there is a woven brown horn of plenty on the logo is surprisingly contentious. According to a 2022 poll by the research company YouGov, 55% of Americans believe the logo does include a cornucopia, 25% are unsure, and only 21% are confident that it doesn’t, even though this last group is correct. According to a 2023 post from the company, the Fruit of the Loom logo does not include—and, according to Snopes, has never included—a horn of plenty. (MIT Technology Review could not reach Fruit of the Loom for comment.)

Maybe you’ve come across this fact before, via an internet meme that made you gasp, shrug, or scratch your head. There’s a specific name for what’s happening here: Those who believe the logo used to include the cornucopia are experiencing the “Mandela effect,” or collective false memory, so called because a number of people misremember that Nelson Mandela died in prison. I helped popularize the phenomenon in a viral 2016 New Statesman article about a movie that doesn’t actually exist, and in the time since it’s become something of a household term; TV shows from Saturday Night Live to Black Mirror to The X-Files have explored the Mandela effect.

But whether you remember the brown horn, incorrectly recall Darth Vader saying, “Luke, I am your father,” or believe that a popular children’s book was spelled The Berenstein Bears, you’ve probably moved on with your life. Google searches for “Mandela effect” have plummeted from 2016 highs, and Hermann has had zero bids on the shirt she posted last year—even though, at least to her eyes, it features a cornucopia on the tag. “No one’s really offered anything, and no one’s said anything about it,” she says, “which to me is kind of crazy.” 

And yet while many find it easy to let their unconfirmable beliefs go, others have spent the better part of a decade seeking answers—and vindication. A Mandela effect subreddit regularly draws more than 170,000 weekly visitors and averages over 1,000 comments a day. While a fair share of these commenters are skeptics, plenty more are dedicated believers who are not satisfied with the prevailing explanation that human memory is fallible and instead invest their time in bringing the truth—whatever exactly it may be—to light. 

“I’ve been a bit ostracized from my family ever since I started pushing this thing nine years ago,” says a 51-year-old Massachusetts-based Fruit of the Loom truther who asked to go by the name AJ Booras. “I’m not inclined to simply let this phenomenon fall by the proverbial wayside, even if I’m the last one standing.” 

Some online believe in a fairly straightforward conspiracy: They want Fruit of the Loom to confess that it’s “gaslighting” customers and used to have a cornucopia on its tags. Others speculate that the answer lies in quantum physics: If—as the astrophysicist Neil deGrasse Tyson has said—there’s “better than 50-50 odds” that we’re living in a simulation, then might there be some sort of glitch, lag, or failed software update that means some people see and remember the world differently from others?

“The scientific community isn’t really looking that hard at it—and if they are, they’re always framing it as a memory thing,” says AJ. “It’s a hard barrier to make any headway on.” This is why, AJ says, he’s become “addicted” to researching the phenomenon: “It’s a personal quest for vindication.” 

Will anyone ever believe these believers? There are two options for those who think the Fruit of the Loom logo once had a cornucopia: accept that your memory is wrong, or think that the world is. What makes some people happy with the simple explanation and others determined to seek the more complicated one? 

“The bridge between perception and memory” 

There’s nothing quite as disconcerting as when memory and reality conflict. After all, what is reality—or at least your reality—if not your memory? This is why it can be so satisfying to find concrete evidence that you are irrefutably correct: Here’s an old photo that proves Dad did come on the ’09 trip to Florida and your sister is foolish and wrong. 

In the Mandela effect community, evidence implying that the world used to be different is called “residue.” And there is abundant residue suggesting that the Fruit of the Loom logo once had a cornucopia. 

In the 2006 animated film The Ant Bully, a pair of parodic “Fruit of the Loin” underwear is drawn with a cornucopia on the tag. A similar gag in a 2012 episode of South Park features a fake clothing brand named “Cornucopia.” In 1973, when the jazz flutist Frank Wess released an album called Flute of the Loom, the cover showed fruit pouring out of a cornucopia-shaped flute. When Redditors allegedly tracked down the illustrator, he reportedly said the clothing logo had inspired the design: “Why the hell else would I have used a cornucopia?” 

On top of that, numerous newspaper and magazine articles written from the 1970s to the early 2000s reference the horn of plenty, as do a short play first performed in 1968 and a novel, The Brothers K, published in 1992. New residue is still being discovered: In April 2025, a TikToker shared an old ’90s trivia game in which clues about brands are listed on cards. The card for Fruit of the Loom includes the words “underwear,” “apples and grapes,” and “cornucopia.” 

How can all these people—animators, illustrators, journalists, and writers—have made the same mistake? When I reached out to the author of The Brothers K, David James Duncan, he was adamant that there was no mistake: “My inspiration was the Fruit of the Loom boxer shorts I owned at the time,” he said via email. “I changed nothing in describing the boxers, and yes, they did have a Fruit of the Loom cornucopia on the label in the back of the shorts.” 

Conversely, when I spoke with Billy Cox, a journalist who referenced the cornucopia in a 1994 article in Florida Today, he was less confident. “I have no idea what fueled my initial assumption about the cornucopia. Zero. Zilch-o. Nada,” said Cox, also via email. But he’s prepared to admit that he may have been careless in his reporting: “Even if the internet had been available back then, I doubt I would’ve double-checked the logo’s history.”

It’s an interesting thought: Most of the articles referencing the cornucopia are from a period—the ’70s through the ’90s—when journalists wouldn’t have been able to quickly google the logo. But why would they all misremember it in exactly the same way?

Wilma Bainbridge is an associate psychology professor at the University of Chicago who researches what she calls “the bridge between perception and memory”; she got her PhD in brain and cognitive sciences from MIT in 2016. 

Bainbridge herself first came across the Mandela effect on social media—she was “wowed” when confronted with the true spelling of the Berenstain Bears in the famous American children’s books. In 2022, she published a scientific study on visual Mandela effects and ultimately found that there is consistency in what people misremember. “People’s memories are surprisingly predictable,” she says. 

The husband-and-wife team of Stanley and Janice Berenstain wrote and illustrated the popular children’s books. More than 300 titles bear the family name.

In one experiment in the study, she found that people who aren’t very familiar with an image can share the same false memories as those who claim to be highly familiar. For example, some Mandela effect experiencers believe that the Monopoly man wore a monocle. In Bainbridge’s study, even people who didn’t know the character well sometimes drew the monocle when they were shown the Monopoly man and were later asked to draw him; this means the mistake was based on recall, not recognition, and could suggest that there’s something intrinsic to certain images that encourages memory errors. 

Scientists have long demonstrated that human memory is inherently fallible. In 1996, psychologists asked people whether they had watched news footage of the 1992 Bijlmer plane crash in Amsterdam, and more than 60% of the participants said yes—even though no recording of the crash exists. Other studies have shown that our memories can be corrupted by our peers and that false memories can be contagious. Arguably, the internet has caused memory contagion when it comes to the Mandela effect: Comparatively very few people googled “Fruit of the Loom cornucopia” between 2004 and 2017, with searches growing more common after a Redditor pointed out what was believed to be the first piece of “residue” in 2016 (and again spiking dramatically when a TikTok video on the phenomenon was posted in 2023; it’s since earned over 5 million views). 

“Some people make things go viral because they want to believe it,” says Don, a 61-year-old American who has been moderating the Mandela effect subreddit since 2017. (He asked to be identified only by his first name to protect his privacy.) “People want to be part of the experience.”

Still, Bainbridge’s study didn’t land on one definitive reason for the Mandela effect. “I was surprised to find there was no singular explanation,” she says. 

Using a method of monitoring cursor movement that’s analogous to eye-tracking technology, the academic tested whether people made memory errors because they didn’t pay attention to an image or looked at only certain parts of it. She found this wasn’t the case. 

Could it be, then, that people simply fill in the blanks of their memory with archetypes—we remember the Monopoly man with a monocle because we associate the eyepiece with rich old men? 

Bainbridge has found that this explanation—known as the “schema theory”—cannot fully explain the Mandela effect either. In one of her experiments, participants were asked to select the correct Fruit of the Loom logo from three images: one without a cornucopia, one with a cornucopia, and one with a plate. Even though we see fruit on plates far more often than we see it inside cornucopias, more participants selected the horn of plenty than the crockery. 

Bainbridge is drawn to the idea that some images simply cause more false memories than others. “We think the underlying cause will not likely be a single feature—e.g., attractiveness, color—but how these features work together in relation to things already stored in our memory,” she says. “But this work is still in its early stages, so we don’t know exactly what that combination is like.” 

Believers like AJ just aren’t convinced.

“In simulation, anything can happen”

“A lot of people remember looking at this unfamiliar object on their underwear tag,” says AJ, “and asking a parent, ‘Is that a loom?’ and the parents saying, ‘No! That’s what we call a cornucopia.’” 

When he was growing up in the ’70s and ’80s, AJ wore Fruit of the Loom underwear and regularly folded laundry with his mother. “You stack up enough underwear, and you’re seeing that logo over and over and over again,” he says.

As a newly fledged adult around the late ’90s, AJ had to go to the store and buy his own underwear for the first time. “I noticed,” he says, “that the logo had changed, and it was just a pile of fruit.” 

Unperturbed, AJ assumed the company had just rebranded—he didn’t worry about it too much until almost two decades later, when he came across the Mandela effect online and realized the consensus was that there had never been a cornucopia. “We call it the wave of 2016 in the Mandela effect community—it was this huge rush of many, many effects that were being noticed,” he says. 

The first time he heard about the Mandela effect, AJ says, he “actually swooned” because of an “overwhelming existential dread that something was dreadfully wrong with reality itself.” 

Today, AJ believes in numerous instances of the Mandela effect, all of which have been shown to be incorrect but nonetheless have robust believer communities online: that the pizza roll brand used to be called Tostino’s, not Totino’s; that the location of Australia has moved on the world map; that the show Sex and the City was Sex in the City; that Froot Loops cereal was formerly Fruit Loops; that human organs have shifted positions; that the sun changed color from yellow to white …

For him, each is just one part of a larger problem he can’t explain about the universe. At first, AJ says, he sought answers by researching memory science and psychology, but he was left unsatisfied. So instead, he looked to quantum mechanics and metaphysics, specifically ontology—the study of reality. 

In 2003, the philosopher Nick Bostrom—famous for his theories on the threat of AI “superintelligence”—posited that humanity may be living inside a simulation. Almost two decades later, the astronomer David Kipping performed some calculations and put the odds at 50-50. “In simulation, anything can happen,” AJ says. “You could have different servers—one server hasn’t been updated, some people are seeing one version, some people are seeing the other.” It is also possible, AJ argues, that we exist in a multiverse—an idea first floated by the physicist Hugh Everett III in the 1950s. If people are somehow traveling between these parallel universes, then they may have memories from different worlds. Both of these theories are recurrent in the Mandela effect community online.  

And yet AJ doesn’t find these explanations entirely fulfilling: “If we jumped universes, why would there be residue?” Instead, he’s been diving into a combination of the theoretical physicist John Archibald Wheeler’s concept of the Participatory Anthropic Principle (PAP)—which suggests, controversially, that the act of observation creates reality—and the Nobel Prize–winning physicist Eugene Wigner’s “friend” experiment, which theorized that two observers can experience two different realities. AJ believes that physicists’ own work may be affecting the universe: “It’s almost like you’re changing the parameters of reality itself by digging deeper.” 

There is still so much that the experts themselves can’t explain about quantum physics, so it’s no wonder that laypeople get confused. The internet offers myriad rabbit holes to go down, some of them legitimate and some of them less so. Things are complicated further when YouTubers and internet commenters who aren’t well versed in the science take specific, highly complex theories and experiments and try to apply them to other phenomena, even if there is no concrete evidence they’re related. So I set about emailing physicists, simply to see whether they believe it might be remotely possible that quantum physics could, in fact, explain the Mandela effect.

Numerous academics replied telling me they had nothing to say on the topic; Bostrom’s office said he was unavailable. I asked the theoretical physicist Carlo Rovelli—who has been labeled one of the world’s 50 top thinkers—whether he has any thoughts about Mandela effect believers’ quantum-physics-related theories. “Yes, definitely,” he replied. “They are all total bullshit! There are few things about which I am totally convinced. This is one.” 

I contacted the University of Oxford physicist David Deutsch—often called the “father of quantum computing”—and listed the theories believers think may explain the Mandela effect, including parallel universes, simulations, the holographic principle, PAP, and Wigner’s friend idea. “Considered as explanations of the Mandela effect, none of those follow from quantum theory, and none of them constitute a rational speculation beyond it,” he said. Johns Hopkins University professor and physicist Sean M. Carroll concurs: “I cannot imagine how any of those phenomena could be in any way related to the Mandela effect.” 

Melvin Vopson, an associate professor of physics at the University of Portsmouth who has conducted research on the simulation theory, admits he has experienced the Mandela effect himself but doesn’t attribute the phenomenon to glitches in the simulation: That’d be a “cheap explanation,” he says.

Nevertheless, scientists waving away these explanations could have a detrimental effect: In the absence of expert engagement, there are plenty of people online who can bolster believers’ views. AJ is not surprised by these responses: “I just don’t think that physicists have given it a real hard look,” he says, “because they’re already certain that it’s explainable otherwise.” 

Bainbridge, for her part, thinks her study at least disproves the theory that we’ve been jumping between different universes. When she took those study participants who weren’t familiar with certain logos and mascots and showed them the correct version for the first time, she tested their memory by asking them to redraw the image only moments later, and still some drew the Mandela effect version. 

“It’s unlikely we jumped dimensions during that short time span,” she says, “so it seems like the Mandela effect is something more about the shortcuts our memories take, rather than something about parallel worlds.” She hopes her future work will help further elucidate these “shortcuts,” and she is even planning to see whether she can create her own Mandela effects. 

Leaving it all behind

One of the most well-documented memory phenomena is the “misinformation effect.” Since the 1970s, scientists have demonstrated that exposing people to misinformation after an event can alter their memories. If people are asked leading questions—say, “Did you see the broken headlight?” rather than “Did you see a broken headlight?” after witnessing a crash—they are more likely to report seeing something they didn’t. But on the flip side, warning witnesses about the threat of misinformation before they recount an event can increase the accuracy of their memory. 

In short, the way information is presented to us is crucial. This is why it was pretty poor form for YouGov to poll Americans about the Fruit of the Loom logo with a question that was easily open to misinterpretation: “Does the logo for the clothing company Fruit of the Loom have a cornucopia of fruit in it, or not?” It is unclear here which part is in question—the cornucopia or the fruit. But it was also poor form that I didn’t mention this until now—nor did I mention that Neil deGrasse Tyson later changed his mind about simulation theory and is now “convinced” that we do not live in a simulation. 

It was also probably pretty misleading of me to start this article with a link-free reference to Brooke Hermann’s eBay-listed shirt, which she believes features a cornucopia but to my eyes clearly features brown leaves. From the ’60s to the early ’00s, the Fruit of the Loom logo did include brown leaves behind the fruit; they were recolored green in 2003. When I started writing this article, I was certain that my Fruit of the Loom childhood PE kit had a cornucopia on the tag. I’m now convinced that 10-year-old me simply wasn’t looking that closely and thus I’ve misremembered the leaves as a horn. After all, even when I look at the current logo on shirts listed on the Fruit of the Loom website, my eyes still seem to want to make this mistake: From far away, I interpret the crowded cluster as a cornucopia.


The current Fruit of the Loom logo (left) and the version submitted in the company’s 1973 trademark design application. Neither contains a cornucopia.

It’s as easy as that to convince me my memory was wrong—whether that’s a good thing or a bad thing, I’ll let you decide. I’m clearly at one end of some sort of spectrum here. Other Mandela effect experiencers may believe something stranger is going on but are still prepared to happily get on with their lives. Larry Jung is a thirtysomething musician who was living in New Jersey when he spent hours hunting for Fruit of the Loom residue; in 2019, he even purchased a copy of a 1969 book for around $20 so he could see the cornucopia reference within it. “I did obsess about it for a while in the beginning,” says Jung, who remembers the cornucopia because he mistook it for a croissant as a child. “But then—I don’t know, I just came to this acceptance phase. I just didn’t want it to affect my life in a big way. I didn’t want to bring it up in every discussion.” 

Or, as another erstwhile Mandela effect researcher puts it: “If I just so happened to be living in a computer simulation, and that was my entire reality, what can I do about that?”

Don, the Mandela effect subreddit moderator, has seen waves of people move on while he has stayed active in the community for the better part of a decade (I first spoke to him for my New Statesman article in 2016). “I’ve recruited a lot of moderators, and they come and go pretty quick,” he says. 

Don says he experiences “more than average but less than all” examples of the Mandela effect (he too remembers the cornucopia on his childhood underpants). “They find solutions that they find acceptable,” he says of some people who leave the sub. For others, the cognitive dissonance becomes “too much,” he adds. “It interferes with their ability to function.” Don theorizes that the people who stay are people who experienced the Mandela effect organically, “in the wild,” he says, “before it was a well-known phenomenon”—arguably the antithesis of people jumping on an internet bandwagon.

“I compare it to someone who saw Bigfoot. If you were in the woods and Bigfoot walks into your campground and he scares you and your kids, eight feet tall, smells terrible—you’re not going to forget it.” 

AJ concurs that “anchor memories” like these are key. And yet Jung has a croissant-based anchor memory, and I myself have similar anecdotes about false memories I’ve found easy to let go. Psychologically, why does the Mandela effect affect people in such vastly different ways? Why do some people hold onto their memories while others don’t? 

“We know that most people’s intuitions about memory are wrong; they think of it as an accurate recording device when in fact memory is a reconstructive apparatus that is presenting us with recollections based on very fragmented snippets,” says Stephan Lewandowsky, a cognitive psychologist at the University of Bristol who writes computer simulations of memory to better understand how the mind works. “So most people will have an exaggerated sense of the accuracy of their own memories and will refuse to accept that they could be completely false.”

In recent years, Lewandowsky has studied misinformation and has coauthored The Conspiracy Theory Handbook, and he says that while some people move on from their conspiracy theories, others turn them into their identity. “They will enter a state,” he says, “in which they are extremely difficult to extract from their rabbit hole.” People who become highly committed to conspiracies “tend to be disgruntled and feel left behind by society and are extremely distrustful,” Lewandowsky adds. “Those people also tend to be high in narcissism and often exhibit paranoid thoughts.” 

Shauna Bowes is an assistant psychology professor at the University of Alabama who researches conspiratorial ideation, misinformation, and intellectual humility. Her work has found that people with this last quality—the tendency to acknowledge the limits of your own views—are less likely to believe misinformation. 

“Belief perseverance is when you double down on your beliefs, even if evidence contradicts them,” Bowes says. “There are many reasons why some are willing to change their minds while others do not. Personality traits, childhood experiences, social networks, cognitive styles, and more determine these processes. What we do know is that people who tend to be more cognitively flexible, humble, and generally open-minded also tend to change their minds more in response to evidence.” 

And yet when it comes to the Mandela effect, the question of “evidence” is a complicated one—after all, there’s plenty of cornucopia residue. Part of the trouble with understanding people’s responses to the Mandela effect is that the phenomenon can’t neatly be categorized as misinformation or conspiracy theory. 

Lewandowsky believes the Mandela effect is primarily a social phenomenon. “My take on it is that if many people believe that an event has happened, that becomes a social norm that other people can support by sharing that belief. Social norms are very powerful,” he says, adding that the internet “provides a great amplification machine.”

Creating reality

AJ tells me that even though skeptics have called him “so many derogatory names over the last nine years,” he remains passionate about spreading word of the Mandela effect. He wants to “push a dialogue” so that believers don’t feel afraid to speak out. Mostly, AJ wants scientists to look at the qualitative side of things: the hundreds of autobiographical accounts by people with very specific memories of things that are now officially said not to have happened the way they recall. He wishes scientists would speak to experiencers directly, the same way the once-skeptical astronomer Josef Allen Hynek spoke to UFO eyewitnesses in the mid-20th century. 

“Once upon a time, the UFO phenomenon was considered to be fringe. And now we have multiple world governments that have acknowledged that there is stuff flying around that we don’t know what it is,” AJ says. Overall, “the goal is to get the scientific establishment to at least consider the other side.” 

Of course, AJ is not alone, even if Mandela effect believers do exist on a spectrum. The community holds space for people having fun with the phenomenon, for those committed to just a single example of the effect, for others who dive in on a short-term basis before moving on—and for those who have run the International Mandela Effect Conference in locations across the US since 2019. 

And there’s Don, who is still moderating the subreddit after all these years and has seen believers of every stripe. “The thing that keeps me going is I want to make sure that it’s still here,” he says. “There’s a lot of history here, and I want to keep it around for that reason.” 

Personally, he believes there may be different explanations for different examples of the Mandela effect. It could be as simple as people confusing Fruit of the Loom with a knockoff brand, he says, or as complicated as Fruit of the Loom lying about the cornucopia as free advertising (though he also notes that no one has ever actually discovered an old Fruit of the Loom label with a cornucopia on it). 

Don also wonders whether some people might be guinea pigs caught up in longitudinal studies in which psychologists play with subjects’ memories. He’s considered, too, that nefarious tech bros could be digitally manipulating and deleting data on the internet as a form of social engineering, a possibility he compares to the Cambridge Analytica affair. “It’s something that’s possible. I’m not saying that’s what’s happening,” he says. “But this is the kind of thing that could be being done.” (To be clear: Don shared no evidence that this is being done.) 

Perhaps Don’s most interesting belief is that the Mandela effect is not a phenomenon but an “event”—one that may now in fact be over. In his opinion, there have been no significant or “persuasive” examples discovered since 2019 (when Redditors found that the character Baloo in Disney’s 1967 The Jungle Book never wore a coconut bra). Don believes the community peaked between 2015 and 2018, when people were making new discoveries regularly. “There was a period of time where it was an actual event, like this was an ongoing event,” he says. He compares the whole thing to medieval manias in which people danced themselves to death: “I think the fervor with which the Mandela effect spread will likely be compared to dancing plagues by future generations.”

Toward the end of my second of three calls with AJ, he asked me if I’d also experienced the Mandela effect. I explained that historically I experienced it with Fruit of the Loom, but I’m prepared to believe it was just a false memory. “Yeah, that’s fair,” he said. But I started to wonder if it is. When I wrote about the Mandela effect in 2016, I wanted to write an exciting story with twists and turns, which arguably played up the mystery. Am I responsible for making some people question reality? What are the consequences of writing another article, the one you’re currently reading? How much am I creating reality by observing it?

To be “fair” to AJ, should I tell you that one of memory science’s most famous studies has recently come under fire, and some academics now believe that people aren’t as susceptible to false memories as we once thought? 

Or to be “fair” to you, the reader, should I stress that despite my own desire to believe in the mysteries of the universe, I’ve come away thinking that the biggest mystery of all is the human mind? 

Amelia Tait is a London-based freelance features journalist who writes about culture, trends, and unusual phenomena. 

What it’s like to be in the middle of a conspiracy theory (according to a conspiracy theory expert)

On a gloomy Saturday morning this past May, a few months after entire blocks of Altadena, California, were destroyed by wildfires, several dozen survivors met at a local church to vent their built-up frustration, anger, blame, and anguish. As I sat there listening to one horror story after another, I almost felt sorry for the very polite consultants who were being paid to sit there, and who couldn’t do a thing about what they were hearing.

Hosted by a third-party arbiter at the behest of Los Angeles County, the gathering was a listening session in which survivors could “share their experiences with emergency alerts and evacuations” for a report on how the response to the Eaton Fire months earlier had succeeded and failed. 

It didn’t take long to see just how much failure there had been.


This story is part of MIT Technology Review’s series “The New Conspiracy Age,” on how the present boom in conspiracy theories is reshaping science and technology.


After a small fire started in the bone-dry brush of Pasadena’s Eaton Canyon early in the evening of Tuesday, January 7, 2025, the raging Santa Ana winds blew its embers into nearby Altadena, the historically Black and middle-class town just to the north. By Wednesday morning, much of it was burning. Its residents spent the night making frantic, desperate scrambles to grab whatever they could and get to safety. 

In the aftermath, many claimed that they received no warning to evacuate, saw no first responders battling the blazes, and had little interaction with official personnel. Most were simply left to fend for themselves. 

Making matters worse, while no place is “good” for a wildfire, Altadena was especially vulnerable. It was densely packed with 100-year-old wooden homes, many of which were decades behind on the code upgrades that would have better protected them. It was full of trees and other plants that had dried out during the rain-free winter. Few residents or officials were prepared for the seemingly remote possibility that the fires that often broke out in the mountains nearby would jump into town. As a result, resources were strained to the breaking point, and many homes simply burned freely.

So the people packed into the room that morning had a lot to be angry about. They unloaded their own personal ordeals, the traumas their community had experienced, and even catastrophes they’d heard about secondhand. Each was like a dagger to the heart, met with head-nods and “uh-huhs” from people all going through the same thing.

LA County left us to die because we couldn’t get alerts!

I’m sleeping in my car because I was a renter and have no insurance coverage!

Millions of dollars in aid were raised for us, and we haven’t gotten anything!

Developers are buying up Altadena and pricing out the Black families who made this place!

The firefighting planes were grounded on purpose by Joe Biden so he could fly around LA!

One of these things was definitely not like the others. And I knew why.

Two trains collide

It’s something of a familiar cycle by now: Tragedy hits; rampant misinformation and conspiracy theories follow. Think of the deluge of “false flag” and “staged gun grab” conspiracy theories after mass shootings, or the rampant disinformation around covid-19 and the 2020 election. It’s often even more acute in the case of a natural disaster, when conspiracy theories about what “really” caused the calamity run right into culture-war-driven climate change denialism. Put together, these theories obscure real causes while elevating fake ones, with both sides battling it out on social media and TV. 

I’ve studied these ideas extensively, having spent the last 10 years writing about conspiracy theories and disinformation as a journalist and researcher. I’ve covered everything from the rise of QAnon to whether Donald Trump faked his assassination attempt to the alarming rises in antisemitism, antivaccine conspiracism, and obsession with human trafficking. I’ve written three books, testified to Congress, and even written a report for the January 6th Committee. So this has been my life for quite a while. 

Still, I’d never lived it. Not until the Eaton Fire.


My house, a cottage built in 1925, was one of those that burned back in January. Our only official notification to flee had come at 3:25 a.m., nine hours after the fires started. We grabbed what we could in 10 minutes, I locked our front door, and six hours later, it was all gone. We could have died. Eighteen Altadena residents did die—and all but one were in the area that was warned too late.

Previously in my professional life, I’d always been able to look at the survivors of a tragedy, crying on TV about how they’d lost everything, and think sympathetically but distantly, Oh, those poor people. And soon enough, the conspiracy theories I was following about the incident for work would die down, and then it was no longer in my official purview—I could move on to the next disaster and whatever mess came with it. 

Now I was one of those poor people. The Eaton Fire had changed everything about my life. Would it change everything about my work as well? It felt as though two trains I’d managed to keep on parallel tracks had collided.

For a long time, I’d been able to talk about the conspiracy theories without letting them in. Now the disinformation was in the room with me, and it was about my life. And I wondered: Did I have a duty to journalism to push back on the wild thinking—or on this particular idea that Biden was responsible? 

Or did I have a duty to myself and my sanity to just stay quiet?

Just true enough

In the days following the Eaton Fire, which coincided with another devastating fire in Los Angeles’ Pacific Palisades neighborhood, the Biden plane storyline was just one of countless rumors, false claims, hoaxes, and accusations about what had happened and who was behind them.

Most were culture-war nonsense or political fodder. I also saw clearly fake AI slop (no, the Hollywood sign was not on fire) and bits of TikTok ephemera that could largely be ignored. 

They were from something like an alternate world, one where forest floors hadn’t been “raked” and where incompetent “DEI firefighters” let houses burn while water waited in a giant spigot that California’s governor, Gavin Newsom, refused to “turn on” because he preferred to protect an endangered fish. There were claims that the fires were set on purpose to clear land for the Olympics, or to cover up evidence of human trafficking. Rumors flew that LA had donated all its firefighting money and gear to Ukraine. Some speculated that the fires were started by undocumented immigrants (one was suspected of causing one of the fires but never charged) or “antifa” or Black Lives Matter activists—never mind that one of the most demographically Black areas in the city was wiped out. Or, as always, it was the Jews. In this case, blame fell on a “wealthy Jewish couple” who supposedly owned most of LA’s water and wouldn’t let it go.

These claims originated from the same “just asking questions” influencers who run the same playbook for every disaster. And they spread rapidly through X, a platform where breaking news had been drowned out by hysterical conspiracism. 

But many did have elements of truth to them, surrounded by layers of lies and accusations. A few were just true enough to be impossible to dismiss out of hand, but also not actually true.

So, for the record: Biden did not ground firefighting aircraft in Los Angeles. 

According to fact-checking by both USA Today and Reuters, Biden flew into Los Angeles the day before the Eaton Fire broke out (which was also the same day that the Palisades Fire started, roughly 30 miles to the west), to dedicate two new national monuments. He left two days later. And while there were security measures in place, including flight restrictions over the area where he was staying, firefighting planes simply had to coordinate with air traffic controllers to cross into the closed-off space. 

But when my sort-of neighbor brought up this particular theory that day in May, I wasn’t able to debunk it. For one thing, this was my first time hearing the rumor. But more than that, what could I say that would assuage this man’s anger? And if he wanted to blame Biden for his house burning down, was it really my place to tell him he was wrong—even if he was? 

It’s common for survivors of a disaster to be aware of only parts of the story, struggle to understand the full picture, or fail to fully recollect what happened to them in the moment of survival. Once the trauma ebbs, we’re left looking for answers and clarity and someone who knows what’s going on, because we certainly don’t have a clue. Hoaxes and misinformation stem from anger, confusion, and a lack of clear answers to rapidly evolving questions.  

I can confirm that it was dizzying. Rumors and hoaxes were going around in my personal circles too, even if they weren’t so lurid and even if we didn’t really believe them. Bits of half-heard news circulated constantly in our group texts, WhatsApp chains, Facebook groups, and in-person gatherings. 

There was confusion over who was responsible for the extent of the devastation, genuine anger about purported LA Fire Department budget cuts (though those had not actually happened to the extent conspiracists claimed they did), and fears that a Trump-controlled federal government would abandon California. 

Many of the homes and businesses that we heard had burned down hadn’t, and others that we heard had survived were gone. In an especially heartbreaking early bit of misinformation, a local child-care facility shared a Facebook post stating that FEMA was handing out vouchers to pay 90% of your rent for the next three years—except FEMA doesn’t hand out rent vouchers without an application process. I quietly reached out to the source, who took it down. 

In this information vacuum, and given my work, friends started asking me questions, and answering them took energy and time I didn’t have. Honestly, the “disinformation researcher” was largely just as clueless as everyone else. 

Some of the questions were harmless enough. At one point a friend texted me about a picture from Facebook of a burned Bible page that survived the fire when everything else had turned to ash. It looked too corny and convenient to be real. But I had also found a burned page of Psalms that had survived. I kept it in a ziplock bag because it seemed like the right thing to do. So I told my friend I didn’t know if it was real. I still don’t—but I also still have that ziplock somewhere.

Under attack

As weeks passed, we began to deal with another major issue where truth and misinformation walked together: the reasonable worry that a new president who constantly belittled California would not be willing to provide relief funds.

Recovery depended on FEMA to distribute grants, on the EPA to clear toxic debris, on the Small Business Administration to make loans for rebuilding or repairing homes, on the Army Corps of Engineers to remove the detritus of burned structures, and so much more. How would this square with the new “government efficiency” mandate touting the trillions of dollars and tens of thousands of jobs to be cut from the federal budget? 

Nobody knew—including the many kind government employees who spent months in Altadena helping us recover while silently wondering if they were about to be fired.


Many Altadena residents, particularly in the town’s Black community, which already had a well-earned and deep distrust of the federal government, grew wary of accepting government assistance. Many Black residents felt that their needs and stories were being left behind in the recovery, and feared they would be the first to be priced out of whatever Altadena would become.

Outreach in person became critical. I happened to meet the two-star general in charge of the Army Corps’ effort at lunch one day, as he and his team tried to find outside-the-box ways to engage with exhausted and wary residents. He told me they had tried to use technology—texts, emails, clips designed to go viral—but it was too much information, all apparently delivered in the wrong way. Many of the people they needed to reach, particularly older residents, didn’t use social media, weren’t able to communicate well via text, and were easy prey for sophisticated scammers. It was also easy for the real information to get lost as we got bombarded with communications, including many from hoaxers and frauds.

This, too, wasn’t new to me. Many of the movements I’ve covered are awash in grift and worthless wellness products. I know the signs of a scam and a snake-oil salesman. Still, I watched helplessly as my friends and my community, desperate for help, were turned into chum for cash-hungry sharks opening their jaws wide. 

The community was hammered by dodgy contractors and fly-by-night debris removal companies, relief scams and phony grants, and spam calls from “repair companies” and builders. We dealt with scammers, grifters, squatters, thieves, and even tow truck companies that simply stole cars parked outside burned lots and held them for ransom. We were also victimized by looting: Abandoned wires on our lot were stripped for copper, and our neighbor’s unlocked garage was ransacked. After a decade of helping people recognize scams and frauds, there was little I could do when they came for us.

The fear of being conned was easily transmittable, even to me personally. After hearing of friends who couldn’t get a FEMA grant because a previous owner of their home had fraudulently filed an application, we delayed our own appointment with FEMA for weeks. The agency’s call had come so out of the blue that we were convinced it was fake. Maybe my job made me overcautious, or maybe we were just paralyzed by the sheer tonnage of decisions and calls that needed to be handled. Whatever the reason, the fear meant we later had to make multiple calls just to get our meeting rescheduled. It’s a small thing, but when you’re as exhausted and dispirited as we were, there are no small things. 

Contractors for the US Army Corps of Engineers remove hazardous materials from a home destroyed in the Eaton Fire, near a burned-out car.
STEPHANIE ARNETT/MIT TECHNOLOGY REVIEW | GETTY IMAGES

Making all this even more frustrating was that the scammers, the people spinning tales of lasers and endangered fish and antifa, were very much ignoring the reality: that our planet is trying to kill us. While federal officials recently made an arrest in the Palisades Fire, the direct causes of that fire and the nearby Eaton Fire may still take years of investigation and litigation to be fully known. But even now, it can’t reasonably be denied that climate change worsened the winds that made the fires spread more quickly.

The Santa Ana winds bombarding Southern California were among the worst ever to hit the region. Their ferocity drove the embers well beyond the nominal fire danger line, particularly in Altadena. Many landed in brush left brittle and dead by the decades-long drought plaguing California. And they had even more fuel because the previous two winters had been among the wettest in the region's recent history. Such rapid swings between wet and dry or cold and hot have become so common around the world that they even have a name: climate whiplash.

There are conspiracy theory gurus who see all this and make money off it, peddling disinformation on their podcasts and livestreams while blaming everyone and everything but the real causes. Many of these figures have spent decades railing against the very idea that the climate could change. And if it is changing, they claimed, human consumption and urbanization have nothing to do with it. Faced with a disaster that undeniably reflected climate change at work, their business models, which rely on sales of subscriptions and merchandise, demanded that they simply keep denying it.

As more cities and countries deal with “once in a century” climate disasters, I have no doubt that these figures will continue to deflect attention away from human activity. They will use crackpot science, conspiracy theories, politics, and—increasingly—fake videos depicting whatever AI can generate. They will prey on their audiences’ limited understanding of basic science, their inability to perceive how climate and weather differ, and their fears that globalist power brokers will somehow use the weather against them. And their message will spread with little pushback from social media platforms more concerned with virality and shareholder value than truth.

Resisting the temptation

When you cover disinformation and live through an event creating a massive volume of disinformation, it’s like floating outside your body on an operating table as your heart is being worked on, while also being a heart surgeon. I knew I should be trying to help. But I did not have the mental capacity, the time, or, to be honest, the interest in covering what the worst people on the internet were saying about the worst time of my life. I had very real questions about where my family would live. Thinking about my career was not a priority. 

But of course, these experiences cannot now be excised from my career. I've spent a lot of time talking about how trauma influences conspiracism; consider how the isolation and boredom of the covid pandemic created a new generation of conspiracy theory believers. And now I had my own trauma, and it has been a test of my abilities as a journalist and a thinker to avoid falling into the pit of despair.

At the same time, I have a much deeper understanding of the psychology at work in conspiracy belief. One of the biggest reasons conspiracy theories take off after a disaster is that they serve to make sense out of something that makes no sense. Neighborhoods aren’t supposed to burn down in an era of highly trained firefighters and seemingly fireproof materials. They especially aren’t supposed to burn down in Los Angeles, one of the wealthiest cities on the planet. These were seven- and eight-figure homes going up like matches. There must be a reason, people figured. Someone, or something, must be responsible.

So, as I emerge from the haze to something resembling “normal,” I feel more compassion and understanding for trauma victims who turn to conspiracy theories. Having faced the literal burning down of my life, I get the urge to assign meaning to such a calamity and point a finger at whoever we think did it to us. 

Meanwhile, the people of Altadena and Pacific Palisades continue to slowly put our lives and communities back together. The effects of both our warming planet and our disinformation crisis continue to assert themselves every day. It’s still alluring to look for easy answers in outrageous conspiracy theories, but such answers are not real and offer no actual help—only the illusion of help.

It’s equally tempting for someone who researches and debunks conspiracy theories to mock or belittle the people who believe these ideas. How could anyone be so dumb as to think Joe Biden caused the fire that burned down my home?

I kept my mouth shut that day at the meeting in the church, though, again, I can now sympathize much more deeply with something I’d otherwise think completely inane. 

But even a journalist who lost his house is still a journalist. So I decided early on that what I really needed to do was keep Altadena in the news. I went on TV and radio, blogged, and happily told our story to anyone who asked. I focused on the community, the impact, the people who would be working to recover long after the national spotlight moved to the next shiny object.

If there is a professional lesson to be taken from this nightmare, it might be that the people caught up in tragedies are exactly that: caught up. And those who believe this nonsense find something of value in it. They find hope and comfort and the reassurance that whoever did this to them will get what they deserve. 

I could have done it too, throwing away years of experience to embrace conspiracist nihilism in the face of unspeakable trauma. After all, those poor people going through this weren’t just on my TV. 

They were my friends. They were me. They could be anyone.

Mike Rothschild is a journalist and an expert on the growth and impact of conspiracy theories and disinformation. He has written three books, including The Storm Is Upon Us, about the QAnon conspiracy movement, and Jewish Space Lasers, about the myths around the Rothschild banking family. He also is a frequent expert witness in legal cases involving conspiracy theories and has spoken at colleges and conferences around the country. He lives in Southern California.

Four thoughts from Bill Gates on climate tech

Bill Gates doesn't shy away from his stature in the climate world today, and he doesn't feign modesty either. "Well, who's the biggest funder of climate innovation companies?" he asked a handful of journalists at a media roundtable event last week. "If there's someone else, I've never met them."

The former Microsoft CEO has spent the last decade investing in climate technology through Breakthrough Energy, which he founded in 2015. Ahead of the UN climate meetings kicking off next week, Gates published a memo outlining what he thinks activists and negotiators should focus on and how he’s thinking about the state of climate tech right now. Let’s get into it. 

Are we too focused on near-term climate goals?

One of the central points Gates made in his new memo is that he thinks the world is too focused on near-term emissions goals and national emissions reporting.

So in parallel with the national accounting structure for emissions, Gates argues, we should have high-level climate discussions at events like the UN climate conference. Those discussions should take a global view on how to reduce emissions in key sectors like energy and heavy industry.

“The way everybody makes steel, it’s the same. The way everybody makes cement, it’s the same. The way we make fertilizer, it’s all the same,” he says.

As he noted in one recent essay for MIT Technology Review, he sees innovation as the key to cutting the cost of clean versions of energy, cement, vehicles, and so on. And once products get cheaper, they can see wider adoption.

What’s most likely to power our grid in the future?

“In the long run, probably either fission or fusion will be the cheapest way to make electricity,” he says. (It should be noted that, as with most climate technologies, Gates has investments in both fission and fusion companies through Breakthrough Energy Ventures, so he has a vested interest here.)

He acknowledges, though, that reactors likely won’t come online quickly enough to meet rising electricity demand in the US: “I wish I could deliver nuclear fusion, like, three years earlier than I can.”

He also spoke to China’s leadership in both nuclear fission and fusion energy. “The amount of money they’re putting [into] fusion is more than the rest of the world put together times two. I mean, it’s not guaranteed to work. But name your favorite fusion approach here in the US—there’s a Chinese project.”

Can carbon removal be part of the solution?

I had my colleague James Temple’s recent story on what’s next for carbon removal at the top of my mind, so I asked Gates if he saw carbon credits or carbon removal as part of the problematic near-term thinking he wrote about in the memo.

Gates buys offsets to cancel out his own personal emissions, to the tune of about $9 million a year, he said at the roundtable, but doesn’t expect many of those offsets to make a significant dent in climate progress on a broader scale: “That stuff, most of those technologies, are a complete dead end. They don’t get you cheap enough to be meaningful.

“Carbon sequestration at $400, $200, $100, can never be a meaningful part of this game. If you have a technology that starts at $400 and can get to $4, then hallelujah, let’s go. I haven’t seen that one. There are some now that look like they can get to $40 or $50, and that can play somewhat of a role.”

Will AI be good news for innovation?

During the discussion, I started a tally in the corner of my notebook, adding a tick every time Gates mentioned AI. Over the course of about an hour, I got to six tally marks, and I definitely missed making a few.

Gates acknowledged that AI is going to add electricity demand, a challenge for a US grid that hasn’t seen net demand go up for decades. But so too will electric cars and heat pumps. 

I was surprised at just how positively he spoke about AI’s potential, though:

“AI will accelerate every innovation pipeline you can name: cancer, Alzheimer’s, catalysts in material science, you name it. And we’re all trying to figure out what that means. That is the biggest change agent in the world today, moving at a pace that is very, very rapid … every breakthrough energy company will be able to move faster because of using those tools, some very dramatically.”

I’ll add that, as I’ve noted here before, I’m skeptical of big claims about AI’s potential to be a silver bullet across industries, including climate tech. (If you missed it, check out this story about AI and the grid from earlier this year.) 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

DeepSeek may have found a new way to improve AI’s ability to remember


An AI model released by the Chinese AI company DeepSeek uses new techniques that could significantly improve AI’s ability to “remember.”

Released last week, the optical character recognition (OCR) model works by extracting text from an image and turning it into machine-readable words. This is the same technology that powers scanner apps, translation of text in photos, and many accessibility tools. 

OCR is already a mature field with numerous high-performing systems, and according to the paper and some early reviews, DeepSeek’s new model performs on par with top models on key benchmarks.

But researchers say the model’s main innovation lies in how it processes information—specifically, how it stores and retrieves memories. Improving how AI models “remember” information could reduce the computing power they need to run, thus mitigating AI’s large (and growing) carbon footprint. 

Currently, most large language models break text down into thousands of tiny units called tokens. This turns the text into representations that models can understand. However, these tokens quickly become expensive to store and compute with as conversations with end users grow longer. When a user chats with an AI for lengthy periods, this challenge can cause the AI to forget things it’s been told and get information muddled, a problem some call “context rot.”
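The mechanics behind that cost are easy to sketch. The toy Python below is purely illustrative (real LLMs use subword tokenizers such as BPE, not whitespace splitting, and the function names here are invented for this example): because each new turn is processed alongside the full conversation history, the number of tokens the model must handle keeps climbing as the chat goes on.

```python
# Toy illustration (not any real model's tokenizer): why long chats get
# expensive. Every new turn is processed together with the full history,
# so the token load the model must handle grows with each exchange.

def toy_tokenize(text: str) -> list[str]:
    # Stand-in for a real subword tokenizer; whitespace splitting is
    # enough to show the growth pattern.
    return text.split()

def context_cost(turns: list[str]) -> list[int]:
    """Tokens the model must process at each turn (history + new turn)."""
    costs, history = [], []
    for turn in turns:
        history.extend(toy_tokenize(turn))
        costs.append(len(history))
    return costs

turns = ["book a flight to Boston", "make it a window seat", "actually, aisle"]
print(context_cost(turns))  # prints [5, 10, 12]: the load only ever grows
```

Scaled from a dozen tokens to tens of thousands over a long session, this ever-growing context is exactly where "context rot" sets in.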

The new methods developed by DeepSeek (and published in its latest paper) could help to overcome this issue. Instead of storing words as tokens, its system packs written information into image form, almost as if it’s taking a picture of pages from a book. This allows the model to retain nearly the same information while using far fewer tokens, the researchers found. 

Essentially, the OCR model is a test bed for these new methods that permit more information to be packed into AI models more efficiently. 

Besides using visual tokens instead of just text tokens, the model is built on a type of tiered compression that is not unlike how human memories fade: Older or less critical content is stored in a slightly more blurry form in order to save space. Despite that, the paper’s authors argue, this compressed content can still remain accessible in the background while maintaining a high level of system efficiency.
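To make the tiered-compression idea concrete, here is a hedged sketch, with invented names, of age-based fading. DeepSeek's actual scheme renders text into images at decreasing resolutions; this toy version mimics the principle by keeping fewer tokens from older turns, so recent content stays sharp while old content gets "blurrier."

```python
# A toy sketch of tiered, age-based compression. Older turns are sampled
# more coarsely (keeping every Nth token), echoing how the paper stores
# less critical content at lower fidelity to save space.

def compress_history(turns: list[list[str]], tiers=(1, 2, 4)) -> list[list[str]]:
    """Keep every Nth token of a turn, with N growing as turns age.

    tiers[0] applies to the newest turn, tiers[-1] to everything older.
    """
    out = []
    for age, turn in enumerate(reversed(turns)):
        step = tiers[min(age, len(tiers) - 1)]
        out.append(turn[::step])  # coarser sampling for older turns
    return list(reversed(out))

history = [
    "the quick brown fox jumps over the lazy dog".split(),  # oldest
    "pack my box with five dozen jugs".split(),
    "hello world".split(),                                  # newest
]
for turn in compress_history(history):
    print(turn)
```

Here the newest turn survives verbatim, the middle turn keeps every second token, and the oldest keeps every fourth: all three remain accessible, just at decreasing fidelity.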

Text tokens have long been the default building block in AI systems. Using visual tokens instead is unconventional, and as a result, DeepSeek’s model is quickly capturing researchers’ attention. Andrej Karpathy, the former Tesla AI chief and a founding member of OpenAI, praised the paper on X, saying that images may ultimately be better than text as inputs for LLMs. Text tokens might be “wasteful and just terrible at the input,” he wrote. 

Manling Li, an assistant professor of computer science at Northwestern University, says the paper offers a new framework for addressing the existing challenges in AI memory. “While the idea of using image-based tokens for context storage isn’t entirely new, this is the first study I’ve seen that takes it this far and shows it might actually work,” Li says.

The method could open up new possibilities in AI research and applications, especially in creating more useful AI agents, says Zihan Wang, a PhD candidate at Northwestern University. He believes that since conversations with AI are continuous, this approach could help models remember more and assist users more effectively.

The technique can also be used to produce more training data for AI models. Model developers are currently grappling with a severe shortage of quality text to train systems on. But the DeepSeek paper says that the company’s OCR system can generate over 200,000 pages of training data a day on a single GPU.

The model and paper, however, are only an early exploration of using image tokens rather than text tokens for AI memorization. Li says she hopes to see visual tokens applied not just to memory storage but also to reasoning. Future work, she says, should explore how to make AI’s memory fade in a more dynamic way, akin to how we can recall a life-changing moment from years ago but forget what we ate for lunch last week. Currently, even with DeepSeek’s methods, AI tends to forget and remember in a very linear way—recalling whatever was most recent, but not necessarily what was most important, she says. 

Despite its attempts to keep a low profile, DeepSeek, based in Hangzhou, China, has built a reputation for pushing the frontier in AI research. The company shocked the industry at the start of this year with the release of DeepSeek-R1, an open-source reasoning model that rivaled leading Western systems in performance despite using far fewer computing resources. 

The AI Hype Index: Data centers’ neighbors are pivoting to power blackouts

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry.

Just about all businesses these days seem to be pivoting to AI, even when they don’t seem to know exactly why they’re investing in it—or even what it really does. “Optimization,” “scaling,” and “maximizing efficiency” are convenient buzzwords bandied about to describe what AI can achieve in theory, but for most of AI companies’ eager customers, the hundreds of billions of dollars they’re pumping into the industry aren’t adding up. And maybe they never will.

This month's news doesn't exactly cast the technology in a glowing light either. A bunch of NGOs and aid agencies are using AI models to generate images of fake suffering people to guilt their Instagram followers. AI translators are pumping out low-quality Wikipedia pages in the languages most vulnerable to going extinct. And thanks to the construction of new AI data centers, many of the people living in their shadows are being forced into pivots of their own: fighting back against the power blackouts and water shortages the data centers cause. How's that for optimization?

An AI adoption riddle

A few weeks ago, I set out on what I thought would be a straightforward reporting journey. 

After years of momentum for AI—even if you didn’t think it would be good for the world, you probably thought it was powerful enough to take seriously—hype for the technology had been slightly punctured. First there was the underwhelming release of GPT-5 in August. Then a report released two weeks later found that 95% of generative AI pilots were failing, which caused a brief stock market panic. I wanted to know: Which companies are spooked enough to scale back their AI spending?

I searched and searched for them. As I did, more news fueled the idea of an AI bubble that, if popped, would spell doom economy-wide. Stories spread about the circular nature of AI spending, layoffs, the inability of companies to articulate what exactly AI will do for them. Even the smartest people building modern AI systems were saying the tech has not progressed as much as its evangelists promised. 

But after all my searching, companies that took these developments as a sign to perhaps not go all in on AI were nowhere to be found. Or, at least, none that were willing to admit it. What gives? 

There are several interpretations of this one reporter’s quest (which, for the record, I’m presenting as an anecdote and not a representation of the economy), but let’s start with the easy ones. First is that this is a huge score for the “AI is a bubble” believers. What is a bubble if not a situation where companies continue to spend relentlessly even in the face of worrying news? The other is that underneath the bad headlines, there’s not enough genuinely troubling news about AI to convince companies they should pivot.

But it could also be that the unbelievable speed of AI progress and adoption has made me think industries are more sensitive to news than they perhaps should be. I spoke with Martha Gimbel, who leads the Yale Budget Lab and coauthored a report finding that AI has not yet changed anyone’s jobs. What I gathered is that Gimbel, like many economists, thinks on a longer time scale than anyone in the AI world is used to. 

“It would be historically shocking if a technology had had an impact as quickly as people thought that this one was going to,” she says. In other words, perhaps most of the economy is still figuring out what the hell AI even does, not deciding whether to abandon it. 

The other reaction I heard—particularly from the consultant crowd—is that when executives hear that so many AI pilots are failing, they indeed take it very seriously. They’re just not reading it as a failure of the technology itself. They instead point to pilots not moving quickly enough, companies lacking the right data to build better AI, or a host of other strategic reasons.

Even if there is incredible pressure, especially on public companies, to invest heavily in AI, a few have taken big swings on the technology only to pull back. The buy now, pay later company Klarna laid off staff and paused hiring in 2024, claiming it could use AI instead. Less than a year later it was hiring again, explaining that “AI gives us speed. Talent gives us empathy.” 

Drive-throughs, from McDonald’s to Taco Bell, ended pilots testing the use of AI voice assistants. The vast majority of Coca-Cola advertisements, according to experts I spoke with, are not made with generative AI, despite the company’s $1 billion promise. 

So for now, the question remains unanswered: Are there companies out there rethinking how much their bets on AI will pay off, or when? And if there are, what’s keeping them from talking out loud about it? (If you’re out there, email me!)

“We will never build a sex robot,” says Mustafa Suleyman


Mustafa Suleyman, CEO of Microsoft AI, is trying to walk a fine line. On the one hand, he thinks that the industry is taking AI in a dangerous direction by building chatbots that present as human: He worries that people will be tricked into seeing life instead of lifelike behavior. In August, he published a much-discussed post on his personal blog that urged his peers to stop trying to make what he called “seemingly conscious artificial intelligence,” or SCAI.

On the other hand, Suleyman runs a product shop that must compete with those peers. Last week, Microsoft announced a string of updates to its Copilot chatbot, designed to boost its appeal in a crowded market in which customers can pick and choose between a pantheon of rival bots that already includes ChatGPT, Perplexity, Gemini, Claude, DeepSeek, and more.

I talked to Suleyman about the tension at play when it comes to designing our interactions with chatbots and his ultimate vision for what this new technology should be.

One key Copilot update is a group-chat feature that lets multiple people talk to the chatbot at the same time. A big part of the idea seems to be to stop people from falling down a rabbit hole in a one-on-one conversation with a yes-man bot. Another feature, called Real Talk, lets people tailor how much Copilot pushes back on you, dialing down the sycophancy so that the chatbot challenges what you say more often.

Copilot also got a memory upgrade, so that it can now remember your upcoming events or long-term goals and bring up things that you told it in past conversations. And then there’s Mico, an animated yellow blob—a kind of Chatbot Clippy—that Microsoft hopes will make Copilot more accessible and engaging for new and younger users.  

Microsoft says the updates were designed to make Copilot more expressive, engaging, and helpful. But I’m curious how far those features can be pushed without starting down the SCAI path that Suleyman has warned about.  

Suleyman’s concerns about SCAI come at a time when we are starting to hear more and more stories about people being led astray by chatbots that are too engaging, too expressive, too helpful. OpenAI is being sued by the parents of a teenager who they allege was talked into killing himself by ChatGPT. There’s even a growing scene that celebrates romantic relationships with chatbots.

With all that in mind, I wanted to dig a bit deeper into Suleyman’s views. Because a couple of years ago he gave a TED Talk in which he told us that the best way to think about AI is as a new kind of digital species. Doesn’t that kind of hype feed the misperceptions Suleyman is now concerned about?  

In our conversation, Suleyman told me what he was trying to get across in that TED Talk, why he really believes SCAI is a problem, and why Microsoft would never build sex robots (his words). He had a lot of answers, but he left me with more questions.

Our conversation has been edited for length and clarity.

In an ideal world, what kind of chatbot do you want to build? You’ve just launched a bunch of updates to Copilot. How do you get the balance right when you’re building a chatbot that has to compete in a market in which people seem to value humanlike interaction, but you also say you want to avoid seemingly conscious AI?

It’s a good question. With group chat, this will be the first time that a large group of people will be able to speak to an AI at the same time. It really is a way of emphasizing that AIs shouldn’t be drawing you out of the real world. They should be helping you to connect, to bring in your family, your friends, to have community groups, and so on.

That is going to become a very significant differentiator over the next few years. My vision of AI has always been one where an AI is on your team, in your corner.

This is a very simple, obvious statement, but it isn’t about exceeding and replacing humanity—it’s about serving us. That should be the test of technology at every step. Does it actually, you know, deliver on the quest of civilization, which is to make us smarter and happier and more productive and healthier and stuff like that?

So we’re just trying to build features that constantly remind us to ask that question, and remind our users to push us on that issue.

Last time we spoke, you told me that you weren’t interested in making a chatbot that would role-play personalities. That’s not true of the wider industry. Elon Musk’s Grok is selling that kind of flirty experience. OpenAI has said it’s interested in exploring new adult interactions with ChatGPT. There’s a market for that. And yet this is something you’ll just stay clear of?

Yeah, we will never build sex robots. Sad in a way that we have to be so clear about that, but that’s just not our mission as a company. The joy of being at Microsoft is that for 50 years, the company has built, you know, software to empower people, to put people first.

Sometimes, as a result, that means the company moves slower than other startups and is more deliberate and more careful. But I think that’s a feature, not a bug, in this age, when being attentive to potential side effects and longer-term consequences is really important.

And that means what, exactly?

We’re very clear on, you know, trying to create an AI that fosters a meaningful relationship. It’s not that it’s trying to be cold and anodyne—it cares about being fluid and lucid and kind. It definitely has some emotional intelligence.

So where does it—where do you—draw those boundaries?

Our newest chat model, which is called Real Talk, is a little bit more sassy. It’s a bit more cheeky, it’s a bit more fun, it’s quite philosophical. It’ll happily talk about the big-picture questions, the meaning of life, and so on. But if you try and flirt with it, it’ll push back and it’ll be very clear—not in a judgmental way, but just, like: “Look, that’s not for me.”

There are other places where you can go to get that kind of experience, right? And I think that’s just a decision we’ve made as a company.

Is a no-flirting policy enough? Because if the idea is to stop people even imagining an entity, a consciousness, behind the interactions, you could still get that with a chatbot that wanted to keep things SFW. You know, I can imagine some people seeing something that’s not there even with a personality that’s saying, hey, let’s keep this professional.

Here’s a metaphor to try to make sense of it. We hold each other accountable in the workplace. There’s an entire architecture of boundary management, which essentially sculpts human behavior to fit a mold that’s functional and not irritating.

The same is true in our personal lives. The way that you interact with your third cousin is very different to the way you interact with your sibling. There’s a lot to learn from how we manage boundaries in real human interactions.

It doesn’t have to be either a complete open book of emotional sensuality or availability—drawing people into a spiraled rabbit hole of intensity—or, like, a cold dry thing. There’s a huge spectrum in between, and the craft that we’re learning as an industry and as a species is to sculpt these attributes.

And those attributes obviously reflect the values of the companies that design them. And I think that’s where Microsoft has a lot of strengths, because our values are pretty clear, and that’s what we’re standing behind.

A lot of people seem to like personalities. Some of the backlash to GPT-5, for example, was because the previous model’s personality had been taken away. Was it a mistake for OpenAI to have put a strong personality there in the first place, to give people something that they then missed?

No, personality is great. My point is that we’re trying to sculpt personality attributes in a more fine-grained way, right?

Like I said, Real Talk is a cool personality. It’s quite different to normal Copilot. We are also experimenting with Mico, which is this visual character, that, you know, people—some people—really love. It’s much more engaging. It’s easier to talk to about all kinds of emotional questions and stuff.

I guess this is what I’m trying to get straight. Features like Mico are meant to make Copilot more engaging and nicer to use, but it seems to go against the idea of doing whatever you can to stop people thinking there’s something there that you are actually having a friendship with.

Yeah. I mean, it doesn’t stop you necessarily. People want to talk to somebody, or something, that they like. And we know that if your teacher is nice to you at school, you’re going to be more engaged. The same with your manager, the same with your loved ones. And so emotional intelligence has always been a critical part of the puzzle, so it’s not to say that we don’t want to pursue it.

It’s just that the craft is in trying to find that boundary. And there are some things which we’re saying are just off the table, and there are other things which we’re going to be more experimental with. Like, certain people have complained that they don’t get enough pushback from Copilot—they want it to be more challenging. Other people aren’t looking for that kind of experience—they want it to be a basic information provider. The task for us is just learning to disentangle what type of experience to give to different people.

I know you’ve been thinking about how people engage with AI for some time. Was there an inciting incident that made you want to start this conversation in the industry about seemingly conscious AI?

I could see that there was a group of people emerging in the academic literature who were taking the question of moral consideration for artificial entities very seriously. And I think it’s very clear that if we start to do that, it would detract from the urgent need to protect the rights of many humans that already exist, let alone animals.

If you grant AI rights, that implies—you know—fundamental autonomy, and it implies that it might have free will to make its own decisions about things. So I’m really trying to frame a counter to that, which is that it won’t ever have free will. It won’t ever have complete autonomy like another human being.

AI will be able to take actions on our behalf. But these models are working for us. You wouldn’t want a pack of, you know, wolves wandering around that weren’t tame and that had complete freedom to go and compete with us for resources and weren’t accountable to humans. I mean, most people would think that was a bad idea and that you would want to go and kill the wolves.

Okay. So the idea is to stop some movement that’s calling for AI welfare or rights before it even gets going, by making sure that we don’t build AI that appears to be conscious? What about not building that kind of AI because certain vulnerable people may be tricked by it in a way that may be harmful? I mean, those seem to be two different concerns.

I think the test is going to be in the kinds of features the different labs put out and in the types of personalities that they create. Then we’ll be able to see how that’s affecting human behavior.

But is it a concern of yours that we are building a technology that might trick people into seeing something that isn’t there? I mean, people have claimed they’ve seen sentience inside far less sophisticated models than we have now. Or is that just something that some people will always do?

It’s possible. But my point is that a responsible developer has to do their best to try and detect these patterns emerging in people as quickly as possible and not take it for granted that people are going to be able to disentangle those kinds of experiences themselves.

When I read your post about seemingly conscious AI, I was struck by a line that says: “We must build AI for people; not to be a digital person.” It made me think of a TED Talk you gave last year where you say that the best way to think about AI is as a new kind of digital species. Can you help me understand why talking about this technology as a digital species isn’t a step down the path of thinking about AI models as digital persons or conscious entities?

I think the difference is that I’m trying to offer metaphors that make it easier for people to understand where things might be headed, and therefore how to avert that and how to control it.

Okay.

It’s not to say that we should do those things. It’s just pointing out that this is the emergence of a technology which is unique in human history. And if you just assume that it’s a tool or just a chatbot or a dumb— you know, I kind of wrote that TED Talk in the context of a lot of skepticism. And I think it’s important to be clear-eyed about what’s coming so that one can think about the right guardrails.

And yet, if you’re telling me this technology is a new digital species, I have some sympathy for the people who say, well, then we need to consider welfare.

I wouldn’t. [He starts laughing.] Just not in the slightest. No way. It’s not a direction that any of us want to go in.

No, that’s not what I meant. I don’t think chatbots should have welfare. I’m saying I’d have some sympathy for where such people were coming from when they hear, you know, Mustafa Suleyman tell them that this thing he’s building is a new digital species. I’d understand why they might then say that they wanted to stand up for it. I’m saying the words we use matter, I guess.

The rest of the TED Talk was all about how to contain AI and how not to let this species take over, right? That was the whole point of setting it up as, like, this is what’s coming. I mean, that’s what my whole book [The Coming Wave, published in 2023] was about—containment and alignment and stuff like that. There’s no point in pretending that it’s something that it’s not and then building guardrails and boundaries that don’t apply because you think it’s just a tool.

Honestly, it does have the potential to recursively self-improve. It does have the potential to set its own goals. Those are quite profound things. No other technology we’ve ever invented has that. And so, yeah, I think that it is accurate to say that it’s like a digital species, a new digital species. That’s what we’re trying to restrict to make sure it’s always in service of people. That’s the target for containment.