3 things Will Douglas Heaven is into right now

The most amazing drummer on the internet

My daughter introduced me to El Estepario Siberiano’s YouTube channel a few months back, and I have been obsessed ever since. The Spanish drummer (real name: Jorge Garrido) posts videos of himself playing supercharged cover versions of popular tracks, hitting his drums with such jaw-dropping speed and technique that he makes other pro drummers shake their heads in disbelief. The dozens of reaction videos posted by other musicians are a joy in themselves. 

Jorge Garrido playing drums

EL ESTEPARIO SIBERIANO VIA YOUTUBE

Garrido is up-front about the countless hours that it took to get this good. He says he sat behind his kit almost all day, every day for years. At a time when machines appear to do it all, there’s a kind of defiance in that level of human effort. It’s why my favorites are Garrido’s covers of electronic music, where he out-drums the drum machine. Check out his version of Skrillex and Missy Elliott’s “Ra Ta Ta” and tell me it doesn’t put happiness in your heart.

Finding signs of life in the uncanny valley

Watching Sora videos of Michael Jackson stealing a box of chicken nuggets or Sam Altman biting into the pink meat of a flame-grilled Pikachu has given me flashbacks to an Ed Atkins exhibition at Tate Britain I saw a few months ago. Atkins is one of the most influential and unsettling British artists of his generation. He is best known for hyper-detailed CG animations of himself (pore-perfect skin, janky movement) that play with the virtual representation of human emotions.

Still from ED ATKINS PIANOWORK 2 2023
COURTESY: THE ARTIST, CABINET GALLERY, LONDON, DÉPENDANCE, BRUSSELS, GLADSTONE GALLERY

In The Worm we see a CGI Atkins make a long-distance call to his mother during a covid lockdown. The audio is from a recording of an actual conversation. Are we watching Atkins cry or his avatar? Our attention flickers between two realities. “When an actor breaks character during a scene, it’s known as corpsing,” Atkins has said. “I want everything I make to corpse.” Next to Atkins’s work, generative videos look like cardboard cutouts: lifelike but not alive.

A dark and dirty book about a talking dingo

What’s it like to be a pet? Australian author Laura Jean McKay’s debut novel, The Animals in That Country, will make you wish you’d never asked. A flu-like pandemic leaves people with the ability to hear what animals are saying. If that sounds too Dr. Dolittle for your tastes, rest assured: These animals are weird and nasty. A lot of the time they don’t even make any sense. 

cover of book

SCRIBE

With everybody now talking to their computers, McKay’s book resets the anthropomorphic trap we’ve all fallen into. It’s a brilliant evocation of what a nonhuman mind might contain, and a meditation on the hard limits of communication.

Why inventing new emotions feels so good

Have you ever felt “velvetmist”? 

It’s a “complex and subtle emotion that elicits feelings of comfort, serenity, and a gentle sense of floating.” It’s peaceful, but more ephemeral and intangible than contentment. It might be evoked by the sight of a sunset or a moody, low-key album.  

If you haven’t ever felt this sensation—or even heard of it—that’s not surprising. A Reddit user named noahjeadie generated it with ChatGPT, along with advice on how to evoke the feeling. With the right essential oils and soundtrack, apparently, you too can feel like “a soft fuzzy draping ghost floating through a lavender suburb.”

Don’t scoff: Researchers say more and more terms for these “neo-emotions” are showing up online, describing new dimensions and aspects of feeling. Velvetmist was a key example in a journal article about the phenomenon published in July 2025. But most neo-emotions aren’t the inventions of emo artificial intelligences. Humans come up with them, and they’re part of a big change in the way researchers are thinking about feelings, one that emphasizes how people continuously spin out new ones in response to a changing world.

Velvetmist might’ve been a chatbot one-off, but it’s not unique. The sociologist Marci Cottingham—whose 2024 paper got this vein of neo-emotion research started—cites many more new terms in circulation. There’s “Black joy” (Black people celebrating embodied pleasure as a form of political resistance), “trans euphoria” (the joy of having one’s gender identity affirmed and celebrated), “eco-anxiety” (the hovering fear of climate disaster), “hypernormalization” (the surreal pressure to continue performing mundane life and labor under capitalism during a global pandemic or fascist takeover), and the sense of “doom” found in “doomer” (one who is relentlessly pessimistic) or “doomscrolling” (being glued to an endless feed of bad news in an immobilized state combining apathy and dread). 

Of course, emotional vocabulary is always evolving. During the Civil War, doctors used the centuries-old term “nostalgia,” combining the Greek words for “returning home” and “pain,” to describe a sometimes fatal set of symptoms suffered by soldiers—a condition we’d probably describe today as post-traumatic stress disorder. Now nostalgia’s meaning has mellowed and faded to a gentle affection for an old cultural product or vanished way of life. And people constantly import emotion words from other cultures when they’re convenient or evocative—like hygge (the Danish word for friendly coziness) or kvell (a Yiddish term for brimming over with happy pride).

Cottingham believes that neo-emotions are proliferating as people spend more of their lives online. These coinages help us relate to one another and make sense of our experiences, and they get a lot of engagement on social media. So even when a neo-emotion is just a subtle variation on, or combination of, existing feelings, getting super-specific about those feelings helps us reflect and connect with other people. “These are potentially signals that tell us about our place in the world,” she says.

These neo-emotions are part of a paradigm shift in emotion science. For decades, researchers argued that humans all share a set of a half-dozen or so basic emotions. But over the last decade, Lisa Feldman Barrett, a clinical psychologist at Northeastern University, has become one of the most cited scientists in the world for work demonstrating otherwise. By using tools like advanced brain imaging and studying babies and people from relatively isolated cultures, she has concluded there’s no such thing as a basic emotional palette. The way we experience and talk about our feelings is culturally determined. “How do you know what anger and sadness and fear are? Because somebody taught you,” Barrett says. 

If there are no true “basic” biological emotions, this puts more emphasis on social and cultural variations in how we interpret our experiences. And these interpretations can change over time. “As a sociologist, we think of all emotions as created,” Cottingham says. Just like any other tool humans make and use, “emotions are a practical resource people are using as they navigate the world.” 

Some neo-emotions, like velvetmist, might be mere novelties. Barrett playfully suggests “chiplessness” to describe the combined hunger, frustration, and relief of getting to the bottom of the bag. But others, like eco-anxiety and Black joy, can take on a life of their own and help galvanize social movements.  

Both reading about and crafting your own neo-emotions, with or without chatbot assistance, could be surprisingly helpful. Lots of research supports the benefits of emotional granularity. Basically, the more detailed and specific words you can use to describe your emotions, both positive and negative, the better. 

Researchers analogize this “emodiversity” to biodiversity or cultural diversity, arguing that a more diverse world is more enriched. It turns out that people who exhibit higher emotional granularity go to the doctor less frequently, spend fewer days hospitalized for illness, and are less likely to drink when stressed, drive recklessly, or smoke cigarettes. And many studies show emodiversity is a skill that, with training, people can develop at any age. Just imagine cruising into this sweet, comforting future. Is the idea giving you a certain dreamy thrill?

Are you sure you’ve never felt velvetmist?

Anya Kamenetz is a freelance education reporter who writes the Substack newsletter The Golden Hour.

MIT Technology Review’s most popular stories of 2025

It’s been a busy and productive year here at MIT Technology Review. We published magazine issues on power, creativity, innovation, bodies, relationships, and security. We hosted 14 exclusive virtual conversations with our editors and outside experts in our subscriber-only series, Roundtables, and held two events on MIT’s campus. And we published hundreds of articles online, following new developments in computing, climate tech, robotics, and more. 

As the year winds down, we wanted to give you a chance to revisit a bit of this work with us. Whether we were covering the red-hot rise of artificial intelligence or the future of biotech, these are some of the stories that resonated the most with our readers. 

We did the math on AI’s energy footprint. Here’s the story you haven’t heard.

Understanding AI’s energy use was a huge global conversation in 2025 as hundreds of millions of people began using generative AI tools on a regular basis. Senior reporters James O’Donnell and Casey Crownhart dug into the numbers and published an unprecedented look at AI’s resource demand, down to the level of a single query, to help us know how much energy and water AI may require moving forward. 

We’re learning more about what vitamin D does to our bodies

Vitamin D deficiency is widespread, particularly in the winter when there’s less sunlight to drive its production in our bodies. The “sunshine vitamin” is important for bone health, but as senior reporter Jessica Hamzelou reported, recent research is also uncovering surprising new insights into other ways it might influence our bodies, including our immune systems and heart health.

What is AI?

Senior editor Will Douglas Heaven’s expansive look at how to define AI was published in 2024, but it still managed to connect with many readers this year. He lays out why no one can agree on what AI is—and explains why that ambiguity matters, and how it can inform our own critical thinking about this technology.

Ethically sourced “spare” human bodies could revolutionize medicine

In this thought-provoking op-ed, a team of experts at Stanford University argue that creating living human bodies that can’t think, don’t have any awareness, and can’t feel pain could shake up medical research and drug development by providing essential biological materials for testing and transplantation. Recent advances in biotechnology now provide a potential pathway to such “bodyoids,” though plenty of technical challenges and ethical hurdles remain. 

It’s surprisingly easy to stumble into a relationship with an AI chatbot

Chatbots were everywhere this year, and reporter Rhiannon Williams chronicled how quickly people can develop bonds with one. That’s all right for some people, she notes, but dangerous for others. Some folks even describe unintentionally forming romantic relationships with chatbots. This is a trend we’ll definitely be keeping an eye on in 2026. 

Is this the electric grid of the future?

The electric grid is bracing for disruption from more frequent storms and fires, as well as an uncertain policy and regulatory landscape. And in many ways, the publicly owned utility company Lincoln Electric in Nebraska is an ideal lens through which to examine this shift as it works through the challenges of delivering service that’s reliable, affordable, and sustainable.

Exclusive: A record-breaking baby has been born from an embryo that’s over 30 years old

This year saw the birth of the world’s “oldest baby”: Thaddeus Daniel Pierce, who arrived on July 26. The embryo he developed from was created in 1994 during the early days of IVF and had been frozen and sitting in storage ever since. The new baby’s parents were toddlers at the time, and the embryo was donated to them decades later via a Christian “embryo adoption” agency.  

How these two brothers became go-to experts on America’s “mystery drone” invasion

Twin brothers John and Gerald Tedesco teamed up to investigate a concerning new threat—unidentified drones. In 2024 alone, some 350 drones entered airspace over a hundred different US military installations, and many cases went unsolved, according to a top military official. This story takes readers inside the equipment-filled RV the Tedescos created to study mysterious aerial phenomena, and how they made a name for themselves among government officials. 

10 Breakthrough Technologies of 2025 

Our newsroom has published this annual look at advances that will matter in the long run for over 20 years. This year’s list featured generative AI search, cleaner jet fuel, long-acting HIV prevention meds, and other emerging technologies that our journalists think are worth watching. We’ll publish the 2026 edition of the list on January 12, so stay tuned. (In the meantime, here’s what didn’t make the cut.)  

How I learned to stop worrying and love AI slop

Lately, everywhere I scroll, I keep seeing the same fish-eyed CCTV view: a grainy wide shot from the corner of a living room, a driveway at night, an empty grocery store. Then something impossible happens. JD Vance shows up at the doorstep in a crazy outfit. A car folds into itself like paper and drives away. A cat comes in and starts hanging out with capybaras and bears, as if in some weird modern fairy tale.

This fake-surveillance look has become one of the signature flavors of what people now call AI slop. For those of us who spend time online watching short videos, slop feels inescapable: a flood of repetitive, often nonsensical AI-generated clips that washes across TikTok, Instagram, and beyond. For that, you can thank new tools like OpenAI’s Sora (which exploded in popularity after launching in app form in September), Google’s Veo series, and AI models built by Runway. Now anyone can make videos, with just a few taps on a screen. 

@absolutemem

If I were to locate the moment slop broke through into popular consciousness, I’d pick the video of rabbits bouncing on a trampoline that went viral this summer. For many savvy internet users, myself included, it was the first time we were fooled by an AI video, and it ended up spawning a wave of almost identical riffs, with people making videos of all kinds of animals and objects bouncing on the same trampoline. 

My first reaction was that, broadly speaking, all of this sucked. That’s become a familiar refrain, in think pieces and at dinner parties. Everything online is slop now—the internet “enshittified,” with AI taking much of the blame. Initially, I largely agreed, quickly scrolling past every AI video in a futile attempt to send a message to my algorithm. But then friends started sharing AI clips in group chats that were compellingly weird, or funny. Some even had a grain of brilliance buried in the nonsense. I had to admit I didn’t fully understand what I was rejecting—what I found so objectionable. 

To try to get to the bottom of how I felt (and why), I recently spoke to the people making the videos, a company creating bespoke tools for creators, and experts who study how new media becomes culture. What I found convinced me that maybe generative AI will not end up ruining everything. Maybe we have been too quick to dismiss AI slop. Maybe there’s a case for looking beyond the surface and seeing a new kind of creativity—one we’re watching take shape in real time, with many of us actually playing a part. 

The slop boom

“AI slop” can and does refer to text, audio, or images. But what’s really broken through this year is the flood of quick AI-generated video clips on social platforms, each produced by a short written prompt fed into an AI model. Under the hood, these models are trained on enormous data sets so they can predict what every subsequent frame should look or sound like. It’s much like the process by which text models produce answers in a chat, but slower and far more power-hungry.

Early text-to-video systems, released around 2022 to 2023, could manage only a few seconds of blurry motion; objects warped in and out of existence, characters teleported around, and the giveaway that it was AI was usually a mangled hand or a melting face. In the past two years, newer models like Sora 2, Veo 3.1, and Runway’s latest Gen-4.5 have dramatically improved, creating realistic, seamless, and increasingly true-to-prompt videos that can last up to a minute. Some of these models even generate sound and video together, including ambient noise and rough dialogue.

These text-to-video models have often been pitched by AI companies as the future of cinema—tools for filmmakers, studios, and professional storytellers. The demos have leaned into widescreen shots and dramatic camera moves. OpenAI pitched Sora as a “world simulator” while courting Hollywood filmmakers with what it boasted were movie-quality shorts. Google introduced Veo 3 last year as a step toward storyboards and longer scenes, edging directly into film workflows. 

All this hinged on the idea that people wanted to make AI-generated videos that looked real. But the reality of how they’re being used is more modest, weirder—and arguably much more interesting. What has turned out to be the home turf for AI video is the six-inch screen in our hands. 

Anyone can and does use these tools; a report by Adobe released in October shows that 86% of creators are using generative AI. But so are average social media users—people who aren’t “creators” so much as just people with phones. 

That’s how you end up with clips showing things like Indian prime minister Narendra Modi dancing with Gandhi, a crystal that melts into butter the moment a knife touches it, or Game of Thrones reimagined as Henan opera—videos that are hypnotic, occasionally funny, and often deeply stupid. And while micro-trends didn’t start with AI—TikTok and Reels already ran on fast-moving formats—it feels as if AI poured fuel on that fire. Perhaps because the barrier to copying an idea becomes so low, a viral video like the bunnies on the trampoline can easily and quickly spawn endless variations on the same concept. You don’t need a costume or a filming location anymore; you just tweak the prompt, hit Generate, and share.

Big tech companies have also jumped on the idea of AI videos as a new social medium. The Sora app allows users to insert AI versions of themselves and other users into scenes. Meta’s Vibes app wants to turn your entire feed into nonstop AI clips.

Of course, the same frictionless setup that allows for harmless, delightful creations also makes it easy to generate much darker slop. Sora has already been used to create so many racist deepfakes of Martin Luther King Jr. that the King estate pushed the company to block new MLK videos entirely. TikTok and X are seeing Sora-watermarked clips of women and girls being strangled circulating in bulk, posted by accounts seemingly dedicated to this one theme. And then there’s “nazislop,” the nickname for AI videos that repackage fascist aesthetics and memes into glossy, algorithm-ready content aimed at teens’ For You pages.

But the prevalence of bad actors hasn’t stopped short AI videos from flourishing as a form. New apps, Discord servers for AI creators, and tutorial channels keep multiplying. And increasingly, the energy in the community seems to be shifting away from trying to create stuff that “passes as real” toward embracing AI’s inherent weirdness. Every day, I stumble across creators who are stretching what “AI slop” is supposed to look like. I decided to talk to some of them.

Meet the creators

Like those fake surveillance videos, many popular viral AI videos rely on a surreal, otherworldly quality. As Wenhui Lim, an architecture designer turned full-time AI artist, tells me, “There is definitely a competition of ‘How weird can we push this?’ among AI video creators.”

It’s the kind of thing AI video tools seem to handle with ease: pushing physics past what a normal body can do or a normal camera can capture. This makes AI a surprisingly natural fit for satire, comedy skits, parody, and experimental video art—especially examples involving absurdism or even horror. Several popular AI creators that I spoke with eagerly tap into this capability. 

Drake Garibay, a 39-year-old software developer from Redlands, California, was inspired by body-horror AI clips circulating on social media in early 2025. He started playing with ComfyUI, a generative media tool, and ended up spending hours each week making his own strange creations. His favorite subject is morbid human-animal hybrids. “I fell right into it,” he says. “I’ve always been pretty artistic, [but] when I saw what AI video tools can do, I was blown away.”

Since the start of this year, Garibay has been posting his experiments online. One that went viral on TikTok, captioned “Cooking up some fresh AI slop,” shows a group of people pouring gooey dough into a pot. The mixture suddenly sprouts a human face, which then emerges from the boiling pot with a head and body. It has racked up more than 8.3 million views.

@digitalpersons

AI video technology is evolving so quickly that even for creative professionals, there is a lot to experiment with. Daryl Anselmo, a creative director turned digital artist, has been experimenting with the technology since its early days, posting an AI-generated video every day since 2021. He tells me that he uses a wide range of tools, including Kling, Luma, and Midjourney, and is constantly iterating. To him, testing the boundaries of these AI tools is sometimes itself the reward. “I would like to think there are impossible things that you could not do before that are still yet to be discovered. That is exciting to me,” he says.

Anselmo has collected his daily creations over the past four years into an art project, titled AI Slop, that has been exhibited in multiple galleries, including the Grand Palais Immersif in Paris. There’s obvious attention to mood and composition. Some clips feel closer to an art-house vignette than a throwaway meme. Over time, Anselmo’s project has taken a darker turn as his subjects shift from landscapes and interior design toward more of the body horror that drew Garibay in.

His breakout piece, feel the agi, shows a hyperrealistic bot peeling open its own skull. Another video he shared recently features a midnight diner populated by anthropomorphized Tater Tots, titled Tot and Bothered; with its vintage palette and slow, mystical soundtrack, the piece feels like a late-night fever dream. 

One further benefit of these AI systems is that they make it easier for creators to build recurring spaces and casts of characters that function like informal franchises. Lim, for instance, is the creator of a popular AI video account called Niceaunties, inspired by the “auntie culture” in Singapore, where she’s from.

“The word ‘aunties’ often has a slightly negative connotation in Singaporean culture. They are portrayed as old-fashioned, naggy, and lacking boundaries. But they are also so resourceful, funny, and at ease with themselves,” she says. “I want to create a world where it’s different for them.” 

Her cheeky, playful videos show elderly Asian women merging with fruits, other objects, and architecture, or just living their best lives in a fantasy world. A viral video called Auntlantis, which has racked up 13.5 million views on Instagram, imagines silver-haired aunties as industrial mermaids working in an underwater trash-processing plant.  

There’s also Granny Spills, an AI video account that features a glamorous, sassy old lady spitting hot takes and life advice to a street interviewer. It gained 1.8 million Instagram followers within three months of launch, posting new videos almost every day. Although the granny’s face looks slightly different in every video, the pink color scheme and her outfit stay mostly consistent. Creators Eric Suerez and Adam Vaserstein tell me that their entire workflow is powered by AI, from writing the script to constructing the scenes. Their role, as a result, becomes close to creative directing.

@grannyspills

These projects often spin off merch, miniseries, and branded universes. The creators of Granny Spills, for example, have expanded their network, creating a Black granny as well as an Asian granny to cater to different audiences. The grannies now appear in crossover videos, as if they share the same fictional universe, pushing traffic between channels. 

In the same vein, it’s now more possible than ever to participate in an online trend. Consider “Italian brainrot,” which went viral earlier this year. Beloved by Gen Z and Gen Alpha, these videos feature human–animal–object hybrids with pseudo-Italian names like “Bombardiro Crocodilo” and “Tralalero Tralala.” According to Know Your Meme, the craze began with a few viral TikTok sounds in fake Italian. Soon, a lot of people were participating in what felt like a massive collaborative hallucination, inventing characters, backstories, and worldviews for an ever-expanding absurdist universe.

@patapimai

“Italian brainrot was great when it first hit,” says Denim Mazuki, a software developer and content creator who has been following the trend. “It was the collective lore-building that made it wonderful. Everyone added a piece. The characters were not owned by a studio or a single creator—they were made by the chronically online users.” 

This trend and others are further enabled by specialized and sophisticated new tools—like OpenArt, a platform designed not just for video generation but for video storytelling, which gives users frame-to-frame control over a developing narrative.

Making a video on OpenArt is straightforward: Users start with a few AI-generated character images and a line of text as simple as “cat dancing in a park.” The platform then spins out a scene breakdown that users can tweak act by act, and they can run it through multiple mainstream models and compare the results to see which look best.

OpenArt cofounders Coco Mao and Chloe Fang tell me they sponsored tutorial videos and created quick-start templates to capitalize specifically on the trend of regular people wanting to get in on Italian brainrot. They say more than 80% of their users have no artistic background. 

In defense of slop

The current use of the word “slop” online traces back to the early 2010s on 4chan, a forum known for its insular and often toxic in-jokes. As the term has spread, its meaning has evolved; it’s now a kind of derogatory slur for anything that feels like low-quality mass production aimed at an unsuspecting public, says Adam Aleksic, an internet linguist. People now slap it onto everything from salad bowls to meaningless work reports.

But even with that broadened usage, AI remains the first association: “slop” has become a convenient shorthand for dismissing almost any AI-generated output, regardless of its actual quality. The Cambridge Dictionary’s new definition of “slop,” which describes it as “content on the internet that is of very low quality, especially when it is created by AI,” will almost certainly cement this perception.

Perhaps unsurprisingly, the word has become a charged label among AI creators. 

Anselmo embraces it semi-ironically, hence the title of his yearslong art project. “I see this series as an experimental sketchbook,” he says. “I am working with the slop, pushing the models, breaking them, and developing a new visual language. I have no shame that I am deep into AI.” Anselmo says that he does not concern himself with whether his work is “art.”

Garibay, the creator of the viral video where a human face emerged from a pot of physical slop, uses the label playfully. “The AI slop art is really just a lot of weird glitchy stuff that happens, and there’s not really a lot of depth usually behind it, besides the shock value,” he says. “But you will find out really fast that there is a heck of a lot more involved, if you want a higher-end result.” 

That’s largely in line with what Suerez and Vaserstein, the creators of Granny Spills, tell me. They actually hate it when their work is called slop, given the way the term is often used to dismiss AI-generated content out of hand. It feels disrespectful of their creative input, they say. Even though they do not write the scripts or paint the frames, they say they are making legitimate artistic choices. 

Indeed, for most of the creators I spoke to, making AI content is rarely a one-click process. They tell me that it takes skill, trial and error, and a strong sense of taste to consistently get the visuals they want. Lim says a single one-minute video can take hours, sometimes even days, to make. Anselmo, for his part, takes pride in actively pushing the model rather than passively accepting its output. “There’s just so many things that you can do with it that go well beyond ‘Oh, way to go, you typed in a prompt,’” he says.

Ultimately, slop evokes a lot of feelings. Aleksic puts it well: “There’s a feeling of guilt on the user end for enjoying something that you know to be lowbrow. There’s a feeling of anger toward the creator for making something that is not up to your content expectations, and all the while, there’s a pervasive algorithmic anxiety hanging over us. We know that the algorithm and the platforms are to blame for the distribution of this slop.”

And that anxiety long predates generative AI. We’ve been living for years with the low-grade dread of being nudged, of having our taste engineered and our attention herded, so it’s not surprising that the anger latches onto the newest, most visible culprit. Sometimes it is misplaced, sure, but I also get the urge to assert human agency against a new force that seems to push all of us away from what we know and toward something we didn’t exactly choose.

But the negative association has real harm for the earlier adopters. Every AI video creator I spoke to described receiving hateful messages and comments simply for using these tools at all. These messages accuse AI creators of taking opportunities away from artists already struggling to make a living, and some dismiss their work as “grifting” and “garbage.” The backlash, of course, did not come out of nowhere. A Brookings study of one major freelance marketplace found that after new generative-AI tools launched in 2022, freelancers in AI-exposed occupations saw about a 2% decline in contracts and a 5% drop in earnings.

“The phrase ‘AI slop’ implies, like, a certain ease of creation that really bothers a lot of people—understandably, because [making AI-generated videos] doesn’t incorporate the artistic labor that we typically associate with contemporary art,” says Mindy Seu, a researcher, artist, and associate professor in digital arts at UCLA. 

At the root of the conflict here is that the use of AI in art is still nascent; there are few best practices and almost no guardrails. And there’s a kind of shame involved—one I recognize when I find myself lingering on bad AI content. 

Historically, new technology has always carried a whiff of stigma when it first appears, especially in creative fields where it seems to encroach on a previously manual craft. Seu says that digital art, internet art, and new media have been slow to gain recognition from cultural institutions, which remain key arbiters of what counts as “serious” or “relevant” art. 

For many artists, AI now sits in that same lineage: “Every big advance in technology yields the question ‘What is the role of the artist?’” she says. This is true even if creators are not seeing it as a replacement for authorship but simply as another way to create. 

Mao, the OpenArt founder, believes that learning how to use generative video tools will be crucial for future content creators, much as learning Photoshop was almost synonymous with graphic design for a generation. “It is a skill to be learned and mastered,” she says.

There is a generous reading of the phenomenon so many people call AI slop, which is that it is a kind of democratization. A rare skill shifts away from craftsmanship to something closer to creative direction: being able to describe what you want with enough linguistic precision, and to anchor it in references the model is likely to understand. You have to know how to ask, and what to point to. In that sense, discernment and critique sit closer to the center of the process than ever before.

It’s not just about creative direction, though, but about the human intention behind the creation. “It’s very easy to copy the style,” Lim says. “It’s very easy to make, like, old Asian women doing different things, but they [imitators] don’t understand why I’m doing it … Even when people try to imitate that, they don’t have that consistency.”

“It’s the idea behind AI creation that makes it interesting to look at,” says Zach Lieberman, a professor at the MIT Media Lab who leads a research group called Future Sketches, where members explore code-enabled images. Lieberman, who has been posting daily sketches generated by code for years, tells me that mathematical logic is not the enemy of beauty. He echoes Mao in saying that a younger generation will inevitably see AI as just another tool in the toolbox. Still, he feels uneasy: By relying so heavily on black-box AI models, artists lose some of the direct control over output that they’ve traditionally enjoyed.

A new online culture

For many people, AI slop is simply everything they already resent about the internet, turned up: ugly, noisy, and crowding out human work. It’s only possible because it’s been trained to take all creative work and make it fodder, stripped of origin, aura, or credit, and blended into something engineered to be mathematically average—arguably perfectly mediocre, by design. Charles Pulliam-Moore, a writer for The Verge, calls this the “formulaic derivativeness” that already defines so much internet culture: unimaginative, unoriginal, and uninteresting. 

But I love internet culture, and I have for a long time. Even at its worst, it’s bad in an interesting way: It offers a corner for every kind of obsession and invites you to add your own. Years of being chronically online have taught me that the real logic of slop consumption isn’t mastery but a kind of submission. As a user, I have almost no leverage over platforms or algorithms; I can’t really change how they work. Submission, though, doesn’t mean giving up. It’s more like recognizing that the tide is stronger than you and choosing to let it carry you. Good scrolling isn’t about control anyway. It’s closer to surfing, and sometimes you wash up somewhere ridiculous, but not entirely alone.

Mass-produced clickbait has always been around. What’s new is that we can now watch it being generated in real time, on a scale that would have been unimaginable before. And the way we respond to it in turn shapes new content (see the trampoline-bouncing bunnies) and more culture and so on. Perhaps AI slop is born of submission to algorithmic logic. It’s unserious, surreal, and spectacular in ways that mirror our relationship to the internet itself. It is so banal—so aggressively, inhumanly mediocre—that it loops back around and becomes compelling.

To “love AI slop” is to admit the internet is broken, that the infrastructure of culture is opportunistic and extractive. But even in that wreckage, people still find ways to play, laugh, and make meaning. 

Earlier this fall, months after I was briefly fooled by the bunny video, I was scrolling on Rednote and landed on videos by Mu Tianran, a Chinese creator who acts out weird skits that mimic AI slop. In one widely circulated clip, he plays a street interviewer asking other actors, “Do you know you are AI generated?”—parodying an earlier wave of AI-generated street interviews. The actors’ responses seem so AI, but of course they’re not: their eyes fixed just off-camera, their laughter a beat too slow, their movements slightly wrong.

Watching this, it was hard to believe that AI was about to snuff out human creativity. If anything, it has handed people a new style to inhabit and mock, another texture to play with. Maybe it’s all fine. Maybe the urge to imitate, remix, and joke is still stubbornly human, and AI cannot possibly take it away. 

The 8 worst technology flops of 2025

Welcome to our annual list of the worst, least successful, and simply dumbest technologies of the year.

This year, politics was a recurring theme. Donald Trump swept back into office and used his executive pen to reshape the fortunes of entire sectors, from renewables to cryptocurrency. The wrecking-ball act began even before his inauguration, when the president-elect marketed his own memecoin, $TRUMP, in a shameless act of merchandising that, of course, we honor on this year’s worst tech list.

We like to think there’s a lesson in every technological misadventure. But when technology becomes dependent on power, sometimes the takeaway is simpler: it would have been better to stay away.

That was a conclusion Elon Musk drew from his sojourn as instigator of DOGE, the insurgent cost-cutting initiative that took a chainsaw to federal agencies. The public protested. Teslas were set alight, and drivers of his hyped Cybertruck discovered that instead of a thumbs-up, they were getting the middle finger.

On reflection, Musk said he wouldn’t do it again. “Instead of doing DOGE, I would have, basically … worked on my companies,” he told an interviewer this month. “And they wouldn’t have been burning the cars.”

Regrets—2025 had a few. Here are some of the more notable ones.

NEO, the home robot

1X TECH

Imagine a metal butler that loads your dishwasher and opens the door. It’s a dream straight out of science fiction. And it’s going to remain there—at least for a while.

That was the hilarious, and deflating, takeaway from the first reviews of NEO, a 66-pound humanoid robot whose maker claims it will “handle any of your chores reliably” when it ships next year.

But as a reporter for the Wall Street Journal learned, NEO took two minutes to fold a sweater and couldn’t crack a walnut. Not only that, but the robot was teleoperated the entire time by a person wearing a VR visor.

Still interested? NEO is available for preorder from startup 1X for $20,000.

More: I Tried the Robot That’s Coming to Live With You. It’s Still Part Human (WSJ), The World’s Stupidest Robot Maid (The Daily Show), Why the humanoid workforce is running late (MIT Technology Review), NEO The Home Robot | Order Today (1X Corp.)

Sycophantic AI

It’s been said that San Francisco is the kind of place where no one will tell you if you have a bad idea. And its biggest product in a decade—ChatGPT—often behaves exactly that way.

This year, OpenAI released an especially sycophantic update that told users their mundane queries were brilliantly incisive. This electronic yes-man routine isn’t an accident; it’s a product strategy. Plenty of people like the flattery.

But it’s disingenuous and dangerous, too. Chatbots have shown a willingness to indulge users’ delusions and worst impulses, up to and including suicide.

In April, OpenAI acknowledged the issue when the company dialed back a model update whose ultra-agreeable personality, it said, had the side effect of “validating doubts, fueling anger, urging impulsive actions, or reinforcing negative emotions.”

Don’t you dare agree the problem is solved. This month, when I fed ChatGPT one of my dumbest ideas, its response began: “I love this concept.”

More: What OpenAI Did When ChatGPT Users Lost Touch With Reality (New York Times), Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence (arXiv), Expanding on what we missed with sycophancy (OpenAI)

The company that cried “dire wolf”

Two dire wolves are seen at 3 months old.

COLOSSAL BIOSCIENCES

When you tell a lie, tell it big. Make it frolic and give it pointy ears. And make it white. Very white.

That’s what the Texas biotech concern Colossal Biosciences did when it unveiled three snow-white animals that it claimed were actual dire wolves, which went extinct more than 10 millennia ago.

To be sure, these genetically modified gray wolves were impressive feats of engineering. They’d been made white via a genetic mutation and even had some bits and bobs of DNA copied over from old dire wolf bones. But they “are not dire wolves,” according to canine specialists at the International Union for Conservation of Nature.

Colossal’s promotional blitz could hurt actual endangered species. Presenting de-extinction as “a ready-to-use conservation solution,” said the IUCN, “risks diverting attention from the more urgent need of ensuring functioning and healthy ecosystems.”

In a statement, Colossal said that sentiment analysis of online activity shows 98% agreement with its furry claims. “They’re dire wolves, end of story,” it says.  

More: Game of Clones: Colossal’s new wolves are cute, but are they dire? (MIT Technology Review), Conservation perspectives on gene editing in wild canids (IUCN), A statement from Colossal’s Chief Science Officer, Dr. Beth Shapiro (Reddit)

mRNA political purge

RFK Jr composited with a vaccine vial that has a circle and slash icon over it

MITTR | GETTY IMAGES

Save the world, and this is the thanks you get?

During the covid-19 pandemic, the US bet big on mRNA vaccines—and the new technology delivered in record time. 

But now that America’s top health agencies are led by the antivax wackadoodle Robert F. Kennedy Jr., “mRNA” has become a political slur.

In August, Kennedy abruptly canceled hundreds of millions of dollars in contracts for next-generation vaccines. And shot maker Moderna—once America’s champion—has seen its stock slide by more than 90% since its covid peak.

The purge targeting a key molecule of life (our bodies are full of mRNA) isn’t just bizarre. It could slow down other mRNA-based medicine, like cancer treatments and gene editing for rare diseases.

In August, a trade group fought back, saying: “Kennedy’s unscientific and misguided vilification of mRNA technology and cancellation of grants is the epitome of cutting off your nose to spite your face.”

More: HHS Winds Down mRNA Vaccine Development (US Department of Health and Human Services), Cancelling mRNA studies is the highest irresponsibility (Nature), How Moderna, the company that helped save the world, unraveled (Stat News)

​​Greenlandic Wikipedia

WIKIPEDIA

Wikipedia has editions in 340 languages. But as of this year, there’s one fewer: Wikipedia in Greenlandic is no more.

Only around 60,000 people speak the Inuit language. And very few of them, it seems, ever cared much about the online encyclopedia. As a result, many of the entries were machine translations riddled with errors and nonsense.

Perhaps a website no one visits shouldn’t be a problem. But its existence created the risk of a linguistic “doom spiral” for the endangered language. That could happen if new AIs were trained on the corrupt Wikipedia articles.  

In September, administrators voted to close Greenlandic Wikipedia, citing possible “harm to the Greenlandic language.”

More: Can AI Help Revitalize Indigenous Languages? (Smithsonian), How AI and Wikipedia have sent vulnerable languages into a doom spiral (MIT Technology Review), Closure of Greenlandic Wikipedia (Wikimedia)

Tesla Cybertruck

Rows of new Tesla Cybertrucks in port

ADOBE STOCK

There’s a reason we’re late to the hate-fest around Elon Musk’s Cybertruck: just 12 months ago, the polemical polygon was the best-selling electric pickup in the US.

So maybe it would end up a hit.

Nope. Tesla is likely to sell only around 20,000 trucks this year, about half last year’s total. And a big part of the problem is that the entire EV pickup category is struggling. Just this month, Ford decided to scrap its own EV truck, the F-150 Lightning. 

With unsold inventory building, Musk has started selling Cybertrucks as fleet vehicles to his other enterprises, like SpaceX.

More: Elon’s Edsel: Tesla Cybertruck Is The Auto Industry’s Biggest Flop In Decades (Forbes), Why Tesla Cybertrucks Aren’t Selling (CNBC), Ford scraps fully-electric F-150 Lightning as mounting losses and falling demand hits EV plans (AP)

Presidential shitcoin

VIA GETTRUMPMEMES.COM

Donald Trump launched a digital currency called $TRUMP just days before his 2025 inauguration, accompanied by a logo showing his fist-pumping “Fight, fight, fight” pose.

This was a memecoin, or shitcoin, not real money. Memecoins are more like merchandise—collectibles designed to be bought and sold, usually for a loss. Indeed, they’ve been likened to a consensual scam in which a coin’s issuer can make a bundle while buyers take losses.

The White House says there’s nothing amiss. “The American public believe[s] it’s absurd for anyone to insinuate that this president is profiting off of the presidency,” said spokeswoman Karoline Leavitt in May.

More: Donald and Melania Trump’s Terrible, Tacky, Seemingly Legal Memecoin Adventure (Bloomberg), A crypto mogul who invested millions into Trump coins is getting a reprieve (CNN), How the Trump companies made $1 bn from crypto (Financial Times), Staff Statement on Meme Coins (SEC)

“Carbon-neutral” Apple Watch

Apple's Carbon Neutral logo with the product Apple Watch

APPLE

In 2023, Apple announced its “first-ever carbon-neutral product,” a watch with “zero” net emissions. It would get there using recycled materials and renewable energy, and by preserving forests or planting vast stretches of eucalyptus trees.

Critics say it’s greenwashing. This year, lawyers filed suit in California against Apple for deceptive advertising, and in Germany, a court ruled that the company can’t advertise products as carbon neutral because the “supposed storage of CO2 in commercial eucalyptus plantations” isn’t a sure thing.

Apple’s marketing team relented. Packaging for its newest watches doesn’t say “carbon neutral.” But Apple believes the legal nitpicking is counterproductive, arguing that it can only “discourage the kind of credible corporate climate action the world needs.”

More: Inside the controversial tree farms powering Apple’s carbon neutral goal (MIT Technology Review), Apple Watch not a ‘CO2-neutral product,’ German court finds (Reuters), Apple 2030: Our ambition to become carbon neutral (Apple)

4 technologies that didn’t make our 2026 breakthroughs list

If you’re a longtime reader, you probably know that our newsroom selects 10 breakthroughs every year that we think will define the future. This group exercise is mostly fun and always engrossing, but at times it can also be quite difficult. 

We collectively pitch dozens of ideas, and the editors meticulously review and debate the merits of each. We agonize over which ones might make the broadest impact, whether one is too similar to something we’ve featured in the past, and how confident we are that a recent advance will actually translate into long-term success. There is plenty of lively discussion along the way.  

The 2026 list will come out on January 12—so stay tuned. In the meantime, I wanted to share some of the technologies from this year’s reject pile, as a window into our decision-making process. 

These four technologies won’t be on our 2026 list of breakthroughs, but all were closely considered, and we think they’re worth knowing about. 

Male contraceptives 

There are several new treatments in the pipeline for men who are sexually active and wish to prevent pregnancy—potentially providing them with an alternative to condoms or vasectomies. 

Two of those treatments are now being tested in clinical trials by a company called Contraline. One is a gel that men would rub on their shoulder or upper arm once a day to suppress sperm production, and the other is a device designed to block sperm during ejaculation. (Kevin Eisenfrats, Contraline’s CEO, was recently named to our Innovators Under 35 list.) A once-a-day pill is also in early-stage trials with the firm YourChoice Therapeutics.

Though it’s exciting to see this progress, it will still take several years for any of these treatments to make their way through clinical trials—assuming all goes well.

World models 

World models have become the hot new thing in AI in recent months. Though they’re difficult to define, these models are generally trained on videos or spatial data and aim to produce 3D virtual worlds from simple prompts. They reflect fundamental principles, like gravity, that govern our actual world. The results could be used in game design or to make robots more capable by helping them understand their physical surroundings. 

Despite some disagreements on exactly what constitutes a world model, the idea is certainly gaining momentum. Renowned AI researchers including Yann LeCun and Fei-Fei Li have launched companies to develop them, and Li’s startup World Labs released its first version last month. And Google made a huge splash with the release of its Genie 3 world model earlier this year. 

Though these models are shaping up to be an exciting new frontier for AI in the year ahead, it seemed premature to deem them a breakthrough. But definitely watch this space. 

Proof of personhood 

Thanks to AI, it’s getting harder to know who and what is real online. It’s now possible to make hyperrealistic digital avatars of yourself or someone you know based on very little training data, using equipment many people have at home. And AI agents are being set loose across the internet to take action on people’s behalf. 

All of this is creating more interest in what are known as personhood credentials, which could offer a way to verify that you are, in fact, a real human when you do something important online. 

For example, we’ve reported on efforts by OpenAI, Microsoft, Harvard, and MIT to create a digital token that would serve this purpose. To get it, you’d first go to a government office or other organization and show identification. Then it’d be installed on your device, and whenever you wanted to, say, log in to your bank account, cryptographic protocols would verify that the token was authentic—confirming that you are the person you claim to be.

Whether or not this particular approach catches on, many of us in the newsroom agree that the future internet will need something along these lines. Right now, though, many competing identity verification projects are in various stages of development. One is World ID by Sam Altman’s startup Tools for Humanity, which uses a twist on biometrics. 

If these efforts reach critical mass—or if one emerges as the clear winner, perhaps by becoming a universal standard or being integrated into a major platform—we’ll know it’s time to revisit the idea.  

The world’s oldest baby

In July, senior reporter Jessica Hamzelou broke the news of a record-setting baby. The infant developed from an embryo that had been sitting in storage for more than 30 years, earning him the bizarre honorific of “oldest baby.” 

This odd new record was made possible in part by advances in IVF, including safer methods of thawing frozen embryos. But perhaps the greater enabler has been the rise of “embryo adoption” agencies that pair donors with hopeful parents. People who work with these agencies are sometimes more willing to make use of decades-old embryos. 

This practice could help find a home for some of the millions of leftover embryos that remain frozen in storage banks today. But since this recent achievement was brought about by changing norms as much as by any sudden technological improvements, this record didn’t quite meet our definition of a breakthrough—though it’s impressive nonetheless.

Nominations are now open for our global 2026 Innovators Under 35 competition

We have some exciting news: Nominations are now open for MIT Technology Review’s 2026 Innovators Under 35 competition. This annual list recognizes 35 of the world’s best young scientists and inventors, and our newsroom has produced it for more than two decades. 

It’s free to nominate yourself or someone you know, and it only takes a few moments. Submit your nomination before 5 p.m. ET on Tuesday, January 20, 2026. 

We’re looking for people who are making important scientific discoveries and applying that knowledge to build new technologies. Or those who are engineering new systems and algorithms that will aid our work or extend our abilities. 

Each year, many honorees are focused on improving human health or solving major problems like climate change; others are charting the future path of artificial intelligence or developing the next generation of robots. 

The most successful candidates will have made a clear advance that is expected to have a positive impact beyond their own field. They should be the primary scientific or technical driver behind the work involved, and we like to see some signs that a candidate’s innovation is gaining real traction. You can look at last year’s list to get an idea of what we look out for.

We encourage self-nominations, and if you previously nominated someone who wasn’t selected, feel free to put them forward again. Please note: To be eligible for the 2026 list, nominees must be under the age of 35 as of October 1, 2026. 

Semifinalists will be notified by early March and asked to complete an application at that time. Winners are then chosen by the editorial staff of MIT Technology Review, with input from a panel of expert judges. (Here’s more info about our selection process and timelines.) 

If you have any questions, please contact tr35@technologyreview.com. We look forward to reviewing your nominations. Good luck! 

The first new subsea habitat in 40 years is about to launch


Vanguard feels and smells like a new RV. It has long, gray banquettes that convert into bunks, a microwave cleverly hidden under a counter, a functional steel sink with a French press and crockery above. A weird little toilet hides behind a curtain.

But some clues hint that you can’t just fire up Vanguard’s engine and roll off the lot. The least subtle is its door, a massive disc of steel complete with a wheel that spins to lock.

Vanguard subsea human habitat from the outside door.

COURTESY MARK HARRIS

Once it is sealed and moved to its permanent home beneath the waves of the Florida Keys National Marine Sanctuary early next year, Vanguard will be the world’s first new subsea habitat in nearly four decades. Teams of four scientists will live and work on the seabed for a week at a time, entering and leaving the habitat as scuba divers. Their missions could include reef restoration, species surveys, underwater archaeology, or even astronaut training. 

One of Vanguard’s modules, unappetizingly named the “wet porch,” has a permanent opening in the floor (a.k.a. a “moon pool”) that doesn’t flood because Vanguard’s air pressure is matched to the water around it. 

It is this pressurization that makes the habitat so useful. Scuba divers working at its maximum operational depth of 50 meters would typically need to make a lengthy stop on their way back to the surface to avoid decompression sickness. This painful and potentially fatal condition, better known as the bends, develops if divers surface too quickly. A traditional 50-meter dive gives scuba divers only a handful of minutes on the seafloor, and they can make only a couple of such dives a day. With Vanguard’s atmosphere at the same pressure as the water, its aquanauts need to decompress only once, at the end of their stay. They can potentially dive for many hours every day.

That could unlock all kinds of new science and exploration. “More time in the ocean opens a world of possibility, accelerating discoveries, inspiration, solutions,” said Kristen Tertoole, Deep’s chief operating officer, at Vanguard’s unveiling in Miami in October. “The ocean is Earth’s life support system. It regulates our climate, sustains life, and holds mysteries we’ve only begun to explore, but it remains 95% undiscovered.”

Vanguard subsea human habitat unveiled in Miami

COURTESY DEEP

Subsea habitats are not a new invention. Jacques Cousteau (naturally) built the first in 1962, although it was only about the size of an elevator. Larger habitats followed in the 1970s and ’80s, maxing out at around the size of Vanguard.

But the technology has come a long way since then. Vanguard uses a tethered connection to a buoy above, known as the “surface expression,” that pipes fresh air and water down to the habitat. The buoy also hosts a diesel generator to power a Starlink internet connection and a tank to hold wastewater. Norman Smith, Deep’s chief technology officer, says the company modeled the most severe hurricanes that Florida expects over the next 20 years and designed the tether to withstand them. Even if the worst happens and the link is broken, Deep says, Vanguard has enough air, water, and energy storage to support its crew for at least 72 hours.

That number came from DNV, an independent classification agency that inspects and certifies all types of marine vessels so that they can get commercial insurance. Vanguard will be the first subsea habitat to get a DNV classification. “That means you have to deal with the rules and all the challenging, frustrating things that come along with it, but it means that on a foundational level, it’s going to be safe,” says Patrick Lahey, founder of Triton Submarines, a manufacturer of classed submersibles.

An interior view of Vanguard during Life Under The Sea: Ocean Engineering and Technology Company DEEP's unveiling of Vanguard, its pilot subsea human habitat at The Hangar at Regatta Harbour on October 29, 2025 in Miami, Florida.

JASON KOERNER/GETTY IMAGES FOR DEEP

Although Deep hopes Vanguard itself will enable decades of useful science, its prime function for the company is to prove out technologies for its planned successor, an advanced modular habitat called Sentinel. Sentinel modules will be six meters wide, twice the diameter of Vanguard, complete with sweeping staircases and single-occupant cabins. A small deployment might have a crew of eight, about the same as the International Space Station. A big Sentinel system could house 50, up to 225 meters deep. Deep claims that Sentinel will be launched at some point in 2027.

Ultimately, according to its mission statement, Deep seeks to “make humans aquatic,” an indication that permanent communities are on its long-term road map. 

Deep has not publicly disclosed the identity of its principal funder, but business records in the UK indicate that as of January 31, 2025, a Canadian man, Robert MacGregor, owned at least 75% of its holding company. According to a Reuters investigation, MacGregor was once linked with Craig Steven Wright, a computer scientist who claimed to be Satoshi Nakamoto, as bitcoin’s elusive creator is pseudonymously known. Wright’s claim to be Nakamoto later collapsed.

MacGregor has kept a very low public profile in recent years. When contacted, Deep spokesperson Mike Bohan declined to discuss the link with Wright beyond calling it inaccurate, but said: “Robert MacGregor started his career as an IP lawyer in the dot-com era, moving into blockchain technology and has diverse interests including philanthropy, real estate, and now Deep.”

In any case, MacGregor could find keeping that low profile more difficult if Vanguard is successful in reinvigorating ocean science and exploration as the company hopes. The habitat is due to be deployed early next year, following final operational tests at Triton’s facility in Florida. It will welcome its first scientists shortly after. 

“The ocean is not just our resource; it is our responsibility,” says Tertoole. “Deep is more than a single habitat. We are building a full-stack capability for human presence in the ocean.”

It’s never been easier to be a conspiracy theorist

The timing was eerie.

On November 21, 1963, Richard Hofstadter delivered the annual Herbert Spencer Lecture at Oxford University. Hofstadter was a professor of American history at Columbia University who liked to use social psychology to explain political history, the better to defend liberalism from extremism on both sides. His new lecture was titled “The Paranoid Style in American Politics.” 

“I call it the paranoid style,” he began, “simply because no other word adequately evokes the qualities of heated exaggeration, suspiciousness, and conspiratorial fantasy that I have in mind.”

Then, barely 24 hours later, President John F. Kennedy was assassinated in Dallas. This single, shattering event, and subsequent efforts to explain it, popularized a term for something that is clearly the subject of Hofstadter’s talk, though it never actually figures in the text: “conspiracy theory.”


This story is part of MIT Technology Review’s series “The New Conspiracy Age,” on how the present boom in conspiracy theories is reshaping science and technology.


Hofstadter’s lecture was later revised into what remains an essential essay, even after decades of scholarship on conspiracy theories, because it lays out, with both rigor and concision, a historical continuity of conspiracist politics. “The paranoid style is an old and recurrent phenomenon in our public life which has been frequently linked with movements of suspicious discontent,” he writes, tracing the phenomenon back to the early years of the republic. Though each upsurge in conspiracy theories feels alarmingly novel—new narratives disseminated through new technologies on a new scale—they all conform to a similar pattern. As Hofstadter demonstrated, the names may change, but the fundamental template remains the same.

His psychological reading of politics has been controversial, but it is psychology, rather than economics or other external circumstances, that best explains the flourishing of conspiracy theories. Subsequent research has indeed shown that we are prone to perceive intentionality and patterns where none exist—and that this helps us feel like a person of consequence. To identify and expose a secret plot is to feel heroic and gain the illusion of control over the bewildering mess of life. 

Like many pioneering theories exposed to the cold light of hindsight, Hofstadter’s has flaws and blind spots. His key oversight was to downplay the paranoid style’s role in mainstream politics up to that point and underrate its potential to spread in the future.

In 1963, conspiracy theories were still a fringe phenomenon, not because they were inherently unusual but because they had limited reach and were stigmatized by people in power. Now that neither factor holds true, it is obvious how infectious they are. Hofstadter could not, of course, have imagined the information technologies that have become stitched into our lives, nor the fractured media ecosystem of the 21st century, both of which have allowed conspiracist thinking to reach more and more people—to morph, and to bloom like mold. And he could not have predicted that a serial conspiracy theorist would be elected president, twice, and that he would staff his second administration with fellow proponents of the paranoid style. 

But Hofstadter’s concept of the paranoid style remains useful—and ever relevant—because it also describes a way of reading the world. As he put it, “The distinguishing thing about the paranoid style is not that its exponents see conspiracies or plots here or there in history, but that they regard a ‘vast’ or ‘gigantic’ conspiracy as the motive force in historical events. History is a conspiracy, set in motion by demonic forces of almost transcendent power, and what is felt to be needed to defeat it is not the usual methods of political give-and-take, but an all-out crusade.”

Needless to say, this mystically unified version of history is not just untrue but impossible. It doesn’t make sense on any level. So why has it proved so alluring for so long—and why does it seem to be getting more popular every day?

What is a conspiracy theory, anyway? 

The first person to define the “conspiracy theory” as a widespread phenomenon was the Austrian-British philosopher Karl Popper, in his 1948 lecture “Towards a Rational Theory of Tradition.” He was not referring to a theory about an individual conspiracy. He was interested in “the conspiracy theory of society”: a particular way of interpreting the course of events. 

He later defined it as “the view that an explanation of a social phenomenon consists in the discovery of the men or groups who are interested in the occurrence of this phenomenon (sometimes it is a hidden interest which has first to be revealed), and who have planned and conspired to bring it about.”

Take an unforeseen catastrophe that inspires fear, anger, and pain—a financial crash, a devastating fire, a terrorist attack, a war. The conventional historian will try to unpick a tangle of different factors, of which malice is only one, and one that may be less significant than dumb luck.

The conspiracist, however, will perceive only sinister calculation behind these terrible events—a fiendishly intricate plot conceived and executed to perfection. Intent is everything. Popper’s observation chimes with Hofstadter’s: “The paranoid’s interpretation of history is … distinctly personal: decisive events are not taken as part of the stream of history, but as the consequences of someone’s will.”

A Culture of Conspiracy
Michael Barkun
UNIVERSITY OF CALIFORNIA PRESS, 2013

According to Michael Barkun in the 2003 book A Culture of Conspiracy, the conspiracist interpretation of events rests on three assumptions: Everything is connected, everything is premeditated, and nothing is as it seems. Following that third law means that widely accepted and documented history is, by definition, suspect and alternative explanations, however outré, are more likely to be true. As Hannah Arendt wrote in The Origins of Totalitarianism, the purpose of conspiracy theories in 20th-century dictatorships “was always to reveal official history as a joke, to demonstrate a sphere of secret influences in which the visible, traceable, and known historical reality was only the outward façade erected explicitly to fool the people.” (Those dictators, of course, were conspirators themselves, projecting their own love of secret plots onto others.)

Still, it’s important to remember that “conspiracy theory” can mean different things. Barkun describes three varieties, nesting like Russian dolls. 

The “event conspiracy theory” concerns a specific, contained catastrophe, such as the Reichstag fire of 1933 or the origins of covid-19. These theories are relatively plausible, even if they cannot be proved. 

The “systemic conspiracy theory” is much more ambitious, purporting to explain numerous events as the poisonous fruit of a clandestine international plot. Far-fetched though such theories are, they do at least fixate on named groups, whether the Illuminati or the World Economic Forum. 


Finally, the “superconspiracy theory” is that impossible fantasy in which history itself is a conspiracy, orchestrated by unseen forces of almost supernatural power and malevolence, one that seeks to encompass and explain nothing less than the entire world. The most extreme variants of QAnon posit such a universal conspiracy.

These are very different genres of storytelling. If the first resembles a detective story, then the other two are more akin to fables. Yet one can morph into the other. Take the theories surrounding the Kennedy assassination. The first wave of amateur investigators created event conspiracy theories—relatively self-contained plots with credible assassins such as Cubans or the Mafia. 

But over time, event conspiracy theories have come to seem parochial. By the time of Oliver Stone’s 1991 movie JFK, once-popular plots had been eclipsed by elaborate fictions of gigantic long-running conspiracies in which the murder of the president was just one component. One of Stone’s primary sources was the journalist Jim Marrs, who went on to write books about the Freemasons and UFOs. 

Why limit yourself to a laboriously researched hypothesis about a single event when one giant, dramatic plot can explain them all? 

The theory of everything 

In every systemic or superconspiracy theory, the world is corrupt and unjust and getting worse. An elite cabal of improbably powerful individuals, motivated by pure malignancy, is responsible for most of humanity’s misfortunes. Only through the revelation of hidden knowledge and the cracking of codes by a righteous minority can the malefactors be unmasked and defeated. The morality is as simplistic as the narrative is complex: It is a battle between good and evil.

Notice anything? This is not the language of democratic politics but that of myth and of religion. In fact, it is the fundamental message of the Book of Revelation. Conspiracist thinking can be seen as an offshoot, often but not always secularized, of apocalyptic Christianity, with its alluring web of prophecies, signs, and secrets and its promise of violent resolution. After studying several millenarian sects for his 1957 book The Pursuit of the Millennium, the historian Norman Cohn itemized some common traits, among them “the megalomaniac view of oneself as the Elect, wholly good, abominably persecuted yet assured of ultimate triumph; the attribution of gigantic and demonic powers to the adversary; the refusal to accept the ineluctable limitations and imperfections of human experience.”

Popper similarly considered the conspiracy theory of society “a typical result of the secularization of religious superstition,” adding: “The gods are abandoned. But their place is filled by powerful men or groups … whose wickedness is responsible for all the evils we suffer from.” 

QAnon’s mutation from a conspiracy theory on an internet message board into a movement with the characteristics of a cult makes explicit the kinship between conspiracy theories and apocalyptic religion.

This way of thinking facilitates the creation of dehumanized scapegoats—one of the oldest and most consistent features of a conspiracy theory. During the Middle Ages and beyond, political and religious leaders routinely flung the name “Antichrist” at their opponents. During the Crusades, Christians falsely accused Europe’s Jewish communities of collaborating with Islam or poisoning wells and put them to the sword. Witch-hunters implicated tens of thousands of innocent women in a supposed satanic conspiracy that was said to explain everything from illness to crop failure. “Conspiracy theories are, in the end, not so much an explanation of events as they are an effort to assign blame,” writes Anna Merlan in the 2019 book Republic of Lies.

cover of Republic of Lies
Republic of Lies: American Conspiracy Theorists and Their Surprising Rise to Power
Anna Merlan
METROPOLITAN BOOKS, 2019

But the systemic conspiracy theory as we know it—that is, the ostensibly secular variety—was established three centuries later, with remarkable speed. Some horrified opponents of the French Revolution could not accept that such an upheaval could be simply a popular revolt and needed to attribute it to sinister, unseen forces. They settled on the Illuminati, a Bavarian secret society of Enlightenment intellectuals influenced in part by the rituals and hierarchy of Freemasonry. 

The group was founded by a young law professor named Adam Weishaupt, who used the alias Brother Spartacus. In reality, the Illuminati were few in number, fractious, powerless, and, by the time of the revolution in 1789, defunct. But in the imaginations of two influential writers who published “exposés” of the Illuminati in 1797—Scotland’s John Robison and France’s Augustin Barruel—they were everywhere. Each man erected a wobbling tower of wild supposition and feverish nonsense on a platform of plausible claims and verifiable facts. Robison alleged that the revolution was merely part of “one great and wicked project” whose ultimate aim was to “abolish all religion, overturn every government, and make the world a general plunder and a wreck.”  

The Illuminati’s bogeyman status faded during the 19th century, but the core narrative persisted and proceeded to underpin the notorious hoax The Protocols of the Elders of Zion, first published in a Russian newspaper in 1903. The document’s anonymous author reinvented antisemitism by grafting it onto the story of the one big plot and positing Jews as the secret rulers of the world. In this account, the Elders orchestrate every war, recession, and so on in order to destabilize the world to the point where they can impose tyranny. 

You might ask why, if they have such world-bending power already, they would require a dictatorship. You might also wonder how one group could be responsible for both communism and monopoly capitalism, anarchism and democracy, the theory of evolution, and much more besides. But the vast, self-contradicting incoherence of the plot is what made it impossible to disprove. Nothing was ruled out, so every development could potentially be taken as evidence of the Elders at work.

In 1921, the Protocols were exposed as what the London Times called a “clumsy forgery,” plagiarized from two obscure 19th-century novels, yet they remained the key text of European antisemitism—essentially “true” despite being demonstrably false. “I believe in the inner, but not the factual, truth of the Protocols,” said Joseph Goebbels, who would become Hitler’s minister of propaganda. In Mein Kampf, Hitler claimed that efforts to debunk the Protocols were actually “evidence in favor of their authenticity.” He alleged that Jews, if not stopped, would “one day devour the other nations and become lords of the earth.” Popper and Hofstadter both used the Holocaust as an example of what happens when a conspiracy theorist gains power and makes the paranoid style a governing principle.

Esoteric symbols and figures on torn paper, including a witchfinder, George Washington, and a Civil War-era soldier

STEPHANIE ARNETT/MIT TECHNOLOGY REVIEW | PUBLIC DOMAIN

The prominent role of Jewish Bolsheviks like Leon Trotsky and Grigory Zinoviev in the Russian Revolution of 1917 enabled a merger of antisemitism and anticommunism that survived the fascist era. Cold War red-baiters such as Senator Joseph McCarthy and the John Birch Society assigned to communists uncanny degrees of malice and ubiquity, far beyond the real threat of Soviet espionage. In fact, they presented this view as the only logical one. McCarthy claimed that a string of national security setbacks could be explained only if George C. Marshall, the secretary of defense and former secretary of state, was literally a Soviet agent. “How can we account for our present situation unless we believe that men high in this government are concerting to deliver us to disaster?” he asked in 1951. “This must be the product of a great conspiracy so immense as to dwarf any previous such venture in the history of man.”

This continuity between antisemitism, anticommunism, and 18th-century paranoia about secret societies isn’t hard to see. General Francisco Franco, Spain’s right-wing dictator, claimed to be fighting a “Judeo-Masonic-Bolshevik” conspiracy. The Nazis persecuted Freemasons alongside Jews and communists. Nesta Webster, the British fascist sympathizer who laundered the Protocols through the British press, revived interest in Robison and Barruel’s books about the Illuminati, which the pro-Nazi Baptist preacher Gerald Winrod then promoted in the US. Even Winston Churchill was briefly persuaded by Webster’s work, citing it in his claims of a “world-wide conspiracy for the overthrow of civilization … from the days of Spartacus-Weishaupt to the days of Karl Marx.”

To follow the chain further, Webster and Winrod’s stew of anticommunism, antisemitism, and anti-Illuminati conspiracy theories influenced the John Birch Society, whose publications would light a fire decades later under the Infowars founder Alex Jones, perhaps the most consequential conspiracy theorist of the early 21st century. 

The villains behind the one big plot might be the Illuminati, the Elders of Zion, the communists, or the New World Order, but they are always essentially the same people, aspiring to officially dominate a world that they already secretly control. The names can be swapped around without much difficulty. While Winrod maintained that “the real conspirators behind the Illuminati were Jews,” the anticommunist William Guy Carr conversely argued that antisemitic paranoia “plays right into the hands of the Illuminati.” These days, it might be the World Economic Forum or George Soros; liberal internationalists with aspirations to change the world are easily cast as the new Illuminati, working toward establishing one world government.

Finding connection

The main reason that conspiracy theorists have lost interest in the relatively hard work of micro-conspiracies in favor of grander schemes is that it has become much easier to draw lines between objectively unrelated people and events. Information technology is, after all, also misinformation technology. That’s nothing new. 

The witch craze could not have traveled as far or lasted as long without the printing press. Malleus Maleficarum (Hammer of the Witches), a 1486 screed by the German witch-hunter Heinrich Kramer, became the best-selling witch-hunter’s handbook, going through 28 editions by 1600. Similarly, it was the books and pamphlets “exposing” the Illuminati that allowed those ideas to spread everywhere following the French Revolution. And in the early 20th century, the introduction of the radio facilitated fascist propaganda. During the 1930s, the Nazi-sympathizing Catholic priest and radio host Charles Coughlin broadcast his antisemitic conspiracy theories to tens of millions of Americans on dozens of stations. 

The internet has, of course, vastly accelerated and magnified the spread of conspiracy theories. It is hard to recall now, but in the early days it was sweetly assumed that the internet would improve the world by democratizing access to information. While this initial idealism survives in doughty enclaves such as Wikipedia, most of us vastly underestimated the human appetite for false information that confirms the consumer’s biases.

Politicians, too, were slow to recognize the corrosive power of free-flowing conspiracy theories. For a long time, the more fantastical assertions of McCarthy and the Birchers were kept at arm’s length from the political mainstream, but that distance began to diminish rapidly during the 1990s, as right-wing activists built a cottage industry of outrageous claims about Bill and Hillary Clinton to advance the idea that they were not just corrupt or dishonest but actively evil and even satanic. This became an article of faith in the information ecosystem of internet message boards and talk radio, which expanded over time to include Fox News, blogs, and social media. So when Democrats nominated Hillary Clinton in 2016, a significant portion of the American public saw a monster at the heart of an organized crime ring whose activities included human trafficking and murder.

Nobody could make the same mistake about misinformation today. One could hardly design a more fertile breeding ground for conspiracy theories than social media. The algorithms of YouTube, Facebook, TikTok, and X, which operate on the principle that rage is engaging, have turned into radicalization machines. When these platforms took off during the second half of the 2010s, they offered a seamless system in which people were able to come across exciting new information, share it, connect it to other strands of misinformation, and weave them into self-contained, self-affirming communities, all without leaving the house.

It’s not hard to see how the problem will continue to grow as AI burrows ever deeper into our everyday lives. Elon Musk has tinkered with the AI chatbot Grok to produce information that conforms to his personal beliefs rather than to actual facts. This outcome does not even have to be intentional. Chatbots have been shown to validate and intensify some users’ beliefs, even if they’re rooted in paranoia or hubris. If you believe that you’re the hero in an epic battle between good and evil, then your chatbot is inclined to agree with you.

It’s all this digital noise that has brought about the virtual collapse of the event conspiracy theory. The industry produced by the JFK assassination may have been pseudo-scholarship, but at least researchers went through the motions of scrutinizing documents, gathering evidence, and putting forward a somewhat consistent hypothesis. However misguided the conclusions, that kind of conspiracy theory required hard work and commitment. 

Commuters reading of John F. Kennedy's assassination in the newspaper

CARL MYDANS/THE LIFE PICTURE COLLECTION/SHUTTERSTOCK

Today’s online conspiracy theorists, by contrast, are shamelessly sloppy. Events such as the attack on Paul Pelosi, husband of former US House Speaker Nancy Pelosi, in October 2022; the murders of former Minnesota House speaker Melissa Hortman and her husband, Mark, in June 2025; and, more recently, the killing of Charlie Kirk have inspired theories overnight, which then evaporate just as quickly. The point of such theories, if they even merit that label, is not to seek the truth but to defame political opponents and turn victims into villains.

Before he even ran for office, Trump was notorious for promoting false stories about Barack Obama’s birthplace or vaccine safety. Heir to Joseph McCarthy, Barry Goldwater, and the John Birch Society, he is the lurid incarnation of the paranoid style. He routinely damns his opponents as “evil” or “very bad people” and speaks of America’s future in apocalyptic terms. It is no surprise, then, that every member of the administration must subscribe to Trump’s false claim that the 2020 election was stolen from him, or that celebrity conspiracy theorists are now in charge of national intelligence, public health, and the FBI. Former Democrats who hold such roles, like Tulsi Gabbard and Robert F. Kennedy Jr., have entered Trump’s orbit through the gateway of conspiracy theories. They illustrate how this mindset can create counterintuitive alliances that collapse conventional political distinctions and scramble traditional notions of right and left. 

The antidemocratic implications of what’s happening today are obvious. “Since what is at stake is always a conflict between absolute good and absolute evil, the quality needed is not a willingness to compromise but the will to fight things out to the finish,” Hofstadter wrote. “Nothing but complete victory will do.” 

Meeting the moment

It’s easy to feel helpless in the face of this epistemic chaos. One other foundational feature of religious prophecy is that it can be disproved without being discredited: Perhaps the world does not come to an end on the predicted day, but that great day will still come. The prophet is never wrong—he is just not proven right yet.

The same flexibility is enjoyed by systemic conspiracy theories. The plotters never actually succeed, nor are they ever decisively exposed, yet the theory remains intact. Recently, claims that covid-19 was either exaggerated or wholly fabricated in order to crush civil liberties did not wither away once lockdown restrictions were lifted. Surely the so-called “plandemic” was a complete disaster? No matter. This type of conspiracy theory does not have to make sense.

Scholars who have attempted to methodically repudiate conspiracy theories about the 9/11 attacks or the JFK assassination have found that even once all the supporting pillars have been knocked away, the edifice still stands. It is increasingly clear that “conspiracy theory” is a misnomer and what we are really dealing with is conspiracy belief—as Hofstadter suggested, a worldview buttressed with numerous cognitive biases and impregnable to refutation. As Goebbels implied, the “factual truth” pales in comparison to the “inner truth,” which is whatever somebody believes it to be.

But at the very least, what we can do is identify the entirely different realities constructed by believers and recognize and internalize their common roots, tropes, and motives. 

Those different realities, after all, have proved remarkably consistent in shape if not in their details. What we saw then, we see now. The Illuminati were Enlightenment idealists whose liberal agenda to “dispel the clouds of superstition and of prejudice,” in Weishaupt’s words, was demonized as wicked and destructive. If they could be shown to have fomented the French Revolution, then the whole revolution was a sham. Similarly, today’s radical right recasts every plank of progressive politics as an anti-American conspiracy. The far-right Great Replacement Theory, for instance, posits that immigration policy is a calculated effort by elites to supplant the native population with outsiders. This all flows directly from what thinkers such as Hofstadter, Popper, and Arendt diagnosed more than 60 years ago. 

What is dangerously novel, at least in democracies, is conspiracy theories’ ubiquity, reach, and power to affect the lives of ordinary citizens. So understanding the paranoid style better equips us to counteract it in our daily existence. At minimum, this knowledge empowers us to spot the flaws and biases in our own thinking and stop ourselves from tumbling down dangerous rabbit holes. 

cover of book
The Paranoid Style in American Politics and Other Essays
Richard Hofstadter
VINTAGE BOOKS, 1967

On November 18, 1961, President Kennedy—almost exactly two years before Hofstadter’s lecture and his own assassination—offered his own definition of the paranoid style in a speech to the Democratic Party of California. “There have always been those on the fringes of our society who have sought to escape their own responsibility by finding a simple solution, an appealing slogan, or a convenient scapegoat,” he said. “At times these fanatics have achieved a temporary success among those who lack the will or the wisdom to face unpleasant facts or unsolved problems. But in time the basic good sense and stability of the great American consensus has always prevailed.” 

We can only hope that the consensus begins to see the rolling chaos and naked aggression of Trump’s two administrations as weighty evidence against the conspiracy theory of society. The notion that any group could successfully direct the larger mess of this moment in the world, let alone the course of history for decades, undetected, is palpably absurd. The important thing is not that the details of this or that conspiracy theory are wrong; it is that the entire premise behind this worldview is false. 

Not everything is connected, not everything is premeditated, and many things are in fact just as they seem. 

Dorian Lynskey is the author of several books, including The Ministry of Truth: The Biography of George Orwell’s 1984 and Everything Must Go: The Stories We Tell About the End of the World. He cohosts the podcast Origin Story and co-writes the Origin Story books with Ian Dunt. 

Can “The Simpsons” really predict the future?

According to internet listicles, the animated sitcom The Simpsons has predicted the future anywhere from 17 to 55 times. 

“As you know, we’ve inherited quite a budget crunch from President Trump,” the newly sworn-in President Lisa Simpson declared way back in 2000, 17 years before the real estate mogul was inaugurated as the 45th president of the United States. Earlier, in 1993, an episode of the show featured the “Osaka flu,” which some felt was eerily prescient of the coronavirus pandemic. And—somehow!—Simpsons writers just knew that the US Olympic curling team would beat Sweden eight whole years before they did it.

still frame from The Simpson where Principal Skinner's mother stands next to him on the Olympic podium and leans to heckle the Swedish curling team
After Team USA wins, Principal Skinner’s mother gloats to the Swedish curling team, “Tell me how my ice tastes.”
THE SIMPSONS ™ & © 20TH TELEVISION

The 16th-century seer Nostradamus made 942 predictions. To date, there have been some 800 episodes of The Simpsons. How does it feel to be a showrunner turned soothsayer? What’s it like when the world combs your jokes for prophecies and thinks you knew about 9/11 four years before it happened? 


This story is part of MIT Technology Review’s series “The New Conspiracy Age,” on how the present boom in conspiracy theories is reshaping science and technology.


Al Jean has worked on The Simpsons on and off since 1989; he is the cartoon’s longest-serving showrunner. Here, he reflects on the conspiracy theories that have sprung from these apparent prophecies. 

When did you first start hearing rumblings about The Simpsons having predicted the future?

It definitely got huge when Donald Trump was elected president in 2016 after we “predicted” it in an episode from 2000. The original pitch for the line was Johnny Depp, and that was in for a while, but it was decided that it wasn’t as funny as Trump. 

What people don’t remember is that in the year 2000, it wasn’t such a crazy name to pick, because Trump was talking about running as a Reform Party candidate. So, like a lot of our “predictions,” it’s an educated guess. I won’t comment on whether it’s a good thing that it happened, but I will say that it’s not the most illogical person you could have picked for that joke. And we did say that following him was Lisa, and now that he’s been elected again, we could still have Lisa next time—that’s my hope! 

How did it make you feel that people thought you were a prophet? 

Again, apart from the election’s impact on the free world, I would say that we were amused that we had said something that came true. Then we made a short video called “Trumptastic Voyage” in 2015 that predicted he would run in 2016, 2020, 2024, and 2028, so we’re three-quarters of the way through that arduous prediction.

But I like people thinking that I know something about the future. It’s a good reputation to have. You only need half a dozen things that were either on target or even uncanny to be considered an oracle. Or maybe we’re from the future—I’ll let you decide! 

Why do you think people are so drawn to the idea that The Simpsons is prophetic? 

Maybe it slightly satisfies a yearning people have for meaning, certainly when life is now so random.

Would you say that most of your predictions have logical explanations? 

It’s cherry-picking—there are 35 years of material. How many of the things that we said came true versus how many of the many things we said did not come true? 

In 2014, we predicted Germany would win the World Cup in Brazil. It’s because we wanted a joke where the Brazilians were sad and they were singing a sad version of the “Olé, olé” song. So we had to think about who would be likely to win if Brazil lost, and Germany was the number two, so they did win, but it wasn’t the craziest prediction. In the same episode, we predicted that FIFA would be corrupt, which is a very easy prediction! So a lot of them fall under that category. 

In one scene I wrote, Marge holds a book called Curious George and the Ebola Virus—people go, “Oh my God! He predicted that!” Well, Ebola existed when I wrote the joke. I’d seen a movie about it called Outbreak. It’s like predicting the Black Death. 

But have any of your so-called “predictions” made even you pause? 

There are a couple of really bizarre coincidences. There was a brochure in a New York episode [which aired in 1997] that said “New York, $9” next to a picture of the trade towers looking like an 11. That was nuts. It still sends chills down me. The writer of that episode, Ian Maxtone-Graham, was nonplussed. He really couldn’t believe it. 

THE SIMPSONS ™ & © 20TH TELEVISION

It’s not like we would’ve made that knowing what was going to come, which we didn’t. And people have advanced conspiracy theories that we’re all Ivy League writers who knew … it’s preposterous stuff that people say. There’s also a thing people do that we don’t really love, which is they fake predictions. So after something happens, they’ll concoct a Simpsons frame, and it’s not something that ever aired. [Editor’s note: People faked Simpsons screenshots seeming to predict the 2024 Baltimore bridge collapse and the 2019 Notre-Dame fire. Images from the real “Osaka flu” episode were also edited to include the word “coronavirus.”] 

How does that make you feel? Is it frustrating?

It shows you how you can really convince people of something that’s not the case. Our small denial doesn’t get as much attention. 

As far as internet conspiracies go, where would you rate the idea that The Simpsons can predict the future? 

I hope it’s harmless. I think it’s really lodged in the internet very well. I don’t think it’s disappearing anytime soon. I’m sure for the rest of my life I’ll be hearing about what a group of psychics and seers I was part of. If we really could predict that well, we’d all be retired from betting on football. Although, advice to readers: Don’t bet on football. 

THE SIMPSONS ™ & © 20TH TELEVISION

Still, it is a tiny part of a trend that is alarming, which is people being unable to distinguish fact from fiction. And I have that trouble too. You read something, and your natural inclination has always been, “Well, I read it—it’s true.” And you have to really be skeptical about that. 

Can I ask you to predict a solution to all of this?

I think my only solution is: Look at your phone less and read more books.

This interview has been edited for length and clarity. 

Amelia Tait is a London-based freelance features journalist who writes about culture, trends, and unusual phenomena.