Good technology should change the world

The billionaire investor Peter Thiel (or maybe his ghostwriter) once said, “We were promised flying cars, instead we got 140 characters.”

Mat Honan

That quip originally appeared in a manifesto for Thiel’s venture fund in 2011. All good investment firms have a manifesto, right? This one argued for making bold bets on risky, world-changing technologies rather than chasing the tepid mundanity of social software startups. What followed, however, was a decade that got even more mundane. Messaging, ride hailing, house shares, grocery delivery, burrito taxis, chat, all manner of photo sharing, games, juice on demand, and Yo. Remember Yo? Yo, yo.

It was an era defined more by business model disruptions than by true breakthroughs—a time when the most ambitious, high-profile startup doing anything resembling real science-based innovation was … Theranos? The 2010s made it easy to become a cynic about the industry, to the point that tech skepticism has replaced techno-optimism in the zeitgeist. Many of the “disruptions” of the last 15 years were about coddling a certain set of young, moneyed San Franciscans more than improving the world. Sure, that industry created an obscene amount of wealth for a small number of individuals. But maybe no company should be as powerful as the tech giants whose tentacles seem to wrap around every aspect of our lives. 

Yet you can be sympathetic to the techlash and still fully buy into the idea that technology can be good. We really can build tools that make this planet healthier, more livable, more equitable, and just all-around better. 

In fact, some people have been doing just that. Amid all the nonsense of the teeny-boomers, a number of fundamental, potentially world-changing technologies have been making quiet progress. Quantum computing. Intelligent machines. Carbon capture. Gene editing. Nuclear fusion. mRNA vaccines. Materials discovery. Humanoid robots. Atmospheric water harvesting. Robotaxis. And, yes, even flying cars—have you heard of an EVTOL? The acronym stands for “electric vertical takeoff and landing.” It’s a small electric vehicle that can lift off and return to Earth without a runway. Basically, a flying car. You can buy one. Right now. (Good luck!)

Jetsons stuff. It’s here. 

Every year, MIT Technology Review publishes a list of 10 technologies that we believe are poised to fundamentally alter the world. The shifts aren’t always positive (see, for example, our 2023 entry on cheap military drones, which continue to darken the skies over Ukraine). But for the most part, we’re talking about changes for the better: curing diseases, fighting climate change, living in space. I don’t know about you, but … seems pretty good to me?

As the saying goes, two things can be true. Technology can be a real and powerful force for good in the world, and it can also be just an enormous factory for hype, bullshit, and harmful ideas. We try to keep both of those things in mind. We try to approach our subject matter with curious skepticism. 

But every once in a while we also approach it with awe, and even wonder. Our problems are myriad and sometimes seem insurmountable. Hyperobjects within hyperobjects. But a century ago, people felt that way about growing enough food for a booming population and facing the threat of communicable diseases. Half a century ago, they felt that way about toxic pollution and a literal hole in the atmosphere. Tech bros are wrong about a lot, but their build-big manifestos make a good point: We can solve problems. We have to. And in the quieter, more deliberate parts of the future, we will.

Why some “breakthrough” technologies don’t work out

Every year, MIT Technology Review publishes a list of 10 Breakthrough Technologies. In fact, the 2026 version is out today. This marks the 25th year the newsroom has compiled this annual list, which means its journalists and editors have now identified 250 technologies as breakthroughs. 

A few years ago, editor at large David Rotman revisited the publication’s original list, finding that while all the technologies were still relevant, each had evolved and progressed in often unpredictable ways. I lead students through a similar exercise in a graduate class I teach with James Scott for MIT’s School of Architecture and Planning. 

We ask these MIT students to find some of the “flops” from breakthrough lists in the archives and consider what factors or decisions led to their demise, and then to envision possible ways to “flip” the negative outcome into a success. The idea is to combine critical perspective and creativity when thinking about technology.

Although it’s less glamorous than envisioning which advances will change our future, analyzing failed technologies is equally important. It reveals how factors outside what is narrowly understood as technology play a role in its success—factors including cultural context, social acceptance, market competition, and simply timing.

In some cases, the vision behind a breakthrough was prescient but the technology of the day was not the best way to achieve it. Social TV (featured on the list in 2010) is an example: Its advocates proposed different ways to tie together social platforms and streaming services to make it easier to chat or interact with your friends while watching live TV shows when you weren’t physically together. 

This idea rightly reflected the great potential for connection in this modern era of pervasive cell phones, broadband, and Wi-Fi. But it bet on a medium that was in decline: live TV. 

Still, anyone who had teenage children during the pandemic can testify to the emergence of a similar phenomenon—youngsters started watching movies or TV series simultaneously on streaming platforms while checking comments on social media feeds and interacting with friends over messaging apps. 

Shared real-time viewing with geographically scattered friends did catch on, but instead of taking place through one centralized service, it emerged organically on multiple platforms and devices. And the experience felt unique to each group of friends, because they could watch whatever they wanted, whenever they wanted, independent of the live TV schedule.

Evaluating the record

Here are a few more examples of flops from the breakthroughs list that students in the 2025 edition of my course identified, and the lessons that we could take from each.

The DNA app store (from the 2016 list) was selected by Kaleigh Spears. It seemed like a great deal at the time—a startup called Helix could sequence your genome for just $80. Then, in the company’s app store, you could share that data with third parties that promised to analyze it for relevant medical info, or make it into fun merch. But Helix has since shut down the store and no longer sells directly to consumers.

Privacy concerns and doubts about the accuracy of third-party apps were among the main reasons the service didn’t catch on, particularly since there’s minimal regulation of health apps in the US. 

a Helix flow cell

HELIX

Elvis Chipiro picked universal memory (from the 2005 list). The vision was for one memory tech to rule them all—flash, random-access memory, and hard disk drives would be subsumed by a new method that relied on tiny structures called carbon nanotubes to store far more bits per square centimeter. The company behind the technology, Nantero, raised significant funds and signed on licensing partners but struggled to deliver a product on its stated timeline.

Nantero ran into challenges when it tried to produce its memory at scale because tiny variations in the way the nanotubes were arranged could cause errors. It also proved difficult to upend memory technologies that were already deeply embedded within the industry and well integrated into fabs.  

Light-field photography (from the 2012 list), chosen by Cherry Tang, let you snap a photo and adjust the image’s focus later. You’d never deal with a blurry photo ever again. To make this possible, the startup Lytro had developed a special camera that captured not just the color and intensity of light but also the angle of its rays. It was one of the first cameras of its kind designed for consumers. Even so, the company shut down in 2018.

Lytro field camera
Lytro’s unique light-field camera was ultimately not successful with consumers.
PUBLIC DOMAIN/WIKIMEDIA COMMONS

Ultimately, Lytro was outmatched by well-established incumbents like Sony and Nokia. The camera itself had a tiny display, and the images it produced were fairly low resolution. Readjusting the focus in images using the company’s own software also required a fair amount of manual work. And smartphones—with their handy built-in cameras—were becoming ubiquitous. 

Many students over the years have selected Project Loon (from the 2015 list)—one of the so-called “moonshots” out of Google X. It proposed using gigantic balloons in place of networks of cell-phone towers to provide internet access, mainly in remote areas. The team completed field tests in multiple countries and even provided emergency internet service to Puerto Rico in the aftermath of Hurricane Maria. But the project was shut down in 2021, with Google X CEO Astro Teller saying in a blog post that “the road to commercial viability has proven much longer and riskier than hoped.” 

Sean Lee, from my 2025 class, traced the flop to the project’s very mission: Project Loon operated in low-income regions where customers had limited purchasing power. There were also substantial commercial hurdles that may have slowed development—the company relied on partnerships with local telecom providers to deliver the service and had to secure government approvals to operate in national airspace. 

One of Project Loon’s balloons on display at Google I/O 2016.
ANDREJ SOKOLOW/PICTURE-ALLIANCE/DPA/AP IMAGES

While this specific project did not become a breakthrough, the overall goal of making the internet more accessible through high-altitude connectivity has been carried forward by other companies, most notably Starlink with its constellation of low-orbit satellites. Sometimes a company has the right idea but the wrong approach, and a firm with a different technology can make more progress.

As part of this class exercise, we also ask students to pick a technology from the list that they think might flop in the future. Here, too, their choices can be quite illuminating. 

Lynn Grosso chose synthetic data for AI (a 2022 pick), which means using AI to generate data that mimics real-world patterns for other AI models to train on. Though it’s become more popular as tech companies have run out of real data to feed their models, she points out that this practice can lead to model collapse, with AI models trained exclusively on generated data eventually breaking the connection to data drawn from reality. 

And Eden Olayiwole thinks the long-term success of TikTok’s recommendation algorithm (a 2021 pick) is in jeopardy as awareness grows of the technology’s potential harms and its tendency to, as she puts it, incentivize creators to “microwave” ideas for quick consumption. 

But she also offers a possible solution. Remember—we asked all the students what they would do to “flip” the flopped (or soon-to-flop) technologies they selected. The idea was to prompt them to think about better ways of building or deploying these tools. 

For TikTok, Olayiwole suggests letting users indicate which types of videos they want to see more of, instead of feeding them an endless stream based on their past watching behavior. TikTok already lets users express interest in specific topics, but she proposes taking it a step further to give them options for content and tone—allowing them to request more educational videos, for example, or more calming content. 

What did we learn?

It’s always challenging to predict how a technology will shape a future that itself is in motion. Predictions not only make a claim about the future; they also describe a vision of what matters to the predictor, and they can influence how we behave, innovate, and invest.

One of my main takeaways after years of running this exercise with students is that there’s not always a clear line between a successful breakthrough and a true flop. Some technologies may not have been successful on their own but are the basis of other breakthrough technologies (natural-language processing, 2001). Others may not have reached their potential as expected but could still have enormous impact in the future (brain-machine interfaces, 2001). Or they may need more investment, which is difficult to attract when they are not flashy (malaria vaccine, 2022). 

Despite the flops over the years, this annual practice of making bold and sometimes risky predictions is worthwhile. The list gives us a sense of what advances are on the technology community’s radar at a given time and reflects the economic, social, and cultural values that inform every pick. When we revisit the 2026 list in a few years, we’ll see which of today’s values have prevailed. 

Fabio Duarte is associate director and principal research scientist at the MIT Senseable City Lab.

3 things Will Douglas Heaven is into right now

The most amazing drummer on the internet

My daughter introduced me to El Estepario Siberiano’s YouTube channel a few months back, and I have been obsessed ever since. The Spanish drummer (real name: Jorge Garrido) posts videos of himself playing supercharged cover versions of popular tracks, hitting his drums with such jaw-dropping speed and technique that he makes other pro drummers shake their heads in disbelief. The dozens of reaction videos posted by other musicians are a joy in themselves. 

Jorge Garrido playing drums

EL ESTEPARIO SIBERIANO VIA YOUTUBE

Garrido is up-front about the countless hours that it took to get this good. He says he sat behind his kit almost all day, every day for years. At a time when machines appear to do it all, there’s a kind of defiance in that level of human effort. It’s why my favorites are Garrido’s covers of electronic music, where he out-drums the drum machine. Check out his version of Skrillex and Missy Elliot’s “Ra Ta Ta” and tell me it doesn’t put happiness in your heart.

Finding signs of life in the uncanny valley

Watching Sora videos of Michael Jackson stealing a box of chicken nuggets or Sam Altman biting into the pink meat of a flame-grilled Pikachu has given me flashbacks to an Ed Atkins exhibition at Tate Britain I saw a few months ago. Atkins is one of the most influential and unsettling British artists of his generation. He is best known for hyper-detailed CG animations of himself (pore-perfect skin, janky movement) that play with the virtual representation of human emotions. 

Still from ED ATKINS PIANOWORK 2 2023
COURTESY: THE ARTIST, CABINET GALLERY, LONDON, DÉPENDANCE, BRUSSELS, GLADSTONE GALLERY

In The Worm we see a CGI Atkins make a long-distance call to his mother during a covid lockdown. The audio is from a recording of an actual conversation. Are we watching Atkins cry or his avatar? Our attention flickers between two realities. “When an actor breaks character during a scene, it’s known as corpsing,” Atkins has said. “I want everything I make to corpse.” Next to Atkins’s work, generative videos look like cardboard cutouts: lifelike but not alive.

A dark and dirty book about a talking dingo

What’s it like to be a pet? Australian author Laura Jean McKay’s debut novel, The Animals in That Country, will make you wish you’d never asked. A flu-like pandemic leaves people with the ability to hear what animals are saying. If that sounds too Dr. Dolittle for your tastes, rest assured: These animals are weird and nasty. A lot of the time they don’t even make any sense. 

cover of book

SCRIBE

With everybody now talking to their computers, McKay’s book resets the anthropomorphic trap we’ve all fallen into. It’s a brilliant evocation of what a nonhuman mind might contain and a meditation on the hard limits of communication.

Why inventing new emotions feels so good

Have you ever felt “velvetmist”? 

It’s a “complex and subtle emotion that elicits feelings of comfort, serenity, and a gentle sense of floating.” It’s peaceful, but more ephemeral and intangible than contentment. It might be evoked by the sight of a sunset or a moody, low-key album.  

If you haven’t ever felt this sensation—or even heard of it—that’s not surprising. A Reddit user named noahjeadie generated it with ChatGPT, along with advice on how to evoke the feeling. With the right essential oils and soundtrack, apparently, you too can feel like “a soft fuzzy draping ghost floating through a lavender suburb.”

Don’t scoff: Researchers say more and more terms for these “neo-emotions” are showing up online, describing new dimensions and aspects of feeling. Velvetmist was a key example in a journal article about the phenomenon published in July 2025. But most neo-emotions aren’t the inventions of emo artificial intelligences. Humans come up with them, and they’re part of a big change in the way researchers are thinking about feelings, one that emphasizes how people continuously spin out new ones in response to a changing world. 

Velvetmist might’ve been a chatbot one-off, but it’s not unique. The sociologist Marci Cottingham—whose 2024 paper got this vein of neo-emotion research started—cites many more new terms in circulation. There’s “Black joy” (Black people celebrating embodied pleasure as a form of political resistance), “trans euphoria” (the joy of having one’s gender identity affirmed and celebrated), “eco-anxiety” (the hovering fear of climate disaster), “hypernormalization” (the surreal pressure to continue performing mundane life and labor under capitalism during a global pandemic or fascist takeover), and the sense of “doom” found in “doomer” (one who is relentlessly pessimistic) or “doomscrolling” (being glued to an endless feed of bad news in an immobilized state combining apathy and dread). 

Of course, emotional vocabulary is always evolving. During the Civil War, doctors used the centuries-old term “nostalgia,” combining the Greek words for “returning home” and “pain,” to describe a sometimes fatal set of symptoms suffered by soldiers—a condition we’d probably describe today as post-traumatic stress disorder. Now nostalgia’s meaning has mellowed and faded to a gentle affection for an old cultural product or vanished way of life. And people constantly import emotion words from other cultures when they’re convenient or evocative—like hygge (the Danish word for friendly coziness) or kvell (a Yiddish term for brimming over with happy pride). 

Cottingham believes that neo-emotions are proliferating as people spend more of their lives online. These coinages help us relate to one another and make sense of our experiences, and they get a lot of engagement on social media. So even when a neo-emotion is just a subtle variation on, or combination of, existing feelings, getting super-specific about those feelings helps us reflect and connect with other people. “These are potentially signals that tell us about our place in the world,” she says. 

These neo-emotions are part of a paradigm shift in emotion science. For decades, researchers argued that humans all share a set of a half-dozen or so basic emotions. But over the last decade, Lisa Feldman Barrett, a clinical psychologist at Northeastern University, has become one of the most cited scientists in the world for work demonstrating otherwise. By using tools like advanced brain imaging and studying babies and people from relatively isolated cultures, she has concluded there’s no such thing as a basic emotional palette. The way we experience and talk about our feelings is culturally determined. “How do you know what anger and sadness and fear are? Because somebody taught you,” Barrett says. 

If there are no true “basic” biological emotions, this puts more emphasis on social and cultural variations in how we interpret our experiences. And these interpretations can change over time. “As a sociologist, we think of all emotions as created,” Cottingham says. Just like any other tool humans make and use, “emotions are a practical resource people are using as they navigate the world.” 

Some neo-emotions, like velvetmist, might be mere novelties. Barrett playfully suggests “chiplessness” to describe the combined hunger, frustration, and relief of getting to the bottom of the bag. But others, like eco-anxiety and Black joy, can take on a life of their own and help galvanize social movements.  

Both reading about and crafting your own neo-emotions, with or without chatbot assistance, could be surprisingly helpful. Lots of research supports the benefits of emotional granularity. Basically, the more detailed and specific words you can use to describe your emotions, both positive and negative, the better. 

Researchers analogize this “emodiversity” to biodiversity or cultural diversity, arguing that a more diverse world is more enriched. It turns out that people who exhibit higher emotional granularity go to the doctor less frequently, spend fewer days hospitalized for illness, and are less likely to drink when stressed, drive recklessly, or smoke cigarettes. And many studies show emodiversity is a skill that, with training, people can develop at any age. Just imagine cruising into this sweet, comforting future. Is the idea giving you a certain dreamy thrill?

Are you sure you’ve never felt velvetmist?

Anya Kamenetz is a freelance education reporter who writes the Substack newsletter The Golden Hour.

MIT Technology Review’s most popular stories of 2025

It’s been a busy and productive year here at MIT Technology Review. We published magazine issues on power, creativity, innovation, bodies, relationships, and security. We hosted 14 exclusive virtual conversations with our editors and outside experts in our subscriber-only series, Roundtables, and held two events on MIT’s campus. And we published hundreds of articles online, following new developments in computing, climate tech, robotics, and more. 

As the year winds down, we wanted to give you a chance to revisit a bit of this work with us. Whether we were covering the red-hot rise of artificial intelligence or the future of biotech, these are some of the stories that resonated the most with our readers. 

We did the math on AI’s energy footprint. Here’s the story you haven’t heard.

Understanding AI’s energy use was a huge global conversation in 2025 as hundreds of millions of people began using generative AI tools on a regular basis. Senior reporters James O’Donnell and Casey Crownhart dug into the numbers and published an unprecedented look at AI’s resource demand, down to the level of a single query, to help us know how much energy and water AI may require moving forward. 

We’re learning more about what vitamin D does to our bodies

Vitamin D deficiency is widespread, particularly in the winter when there’s less sunlight to drive its production in our bodies. The “sunshine vitamin” is important for bone health, but as senior reporter Jessica Hamzelou reported, recent research is also uncovering surprising new insights into other ways it might influence our bodies, including our immune systems and heart health.

What is AI?

Senior editor Will Douglas Heaven’s expansive look at how to define AI was published in 2024, but it still managed to connect with many readers this year. He lays out why no one can agree on what AI is—and explains why that ambiguity matters, and how it can inform our own critical thinking about this technology.

Ethically sourced “spare” human bodies could revolutionize medicine

In this thought-provoking op-ed, a team of experts at Stanford University argue that creating living human bodies that can’t think, don’t have any awareness, and can’t feel pain could shake up medical research and drug development by providing essential biological materials for testing and transplantation. Recent advances in biotechnology now provide a potential pathway to such “bodyoids,” though plenty of technical challenges and ethical hurdles remain. 

It’s surprisingly easy to stumble into a relationship with an AI chatbot

Chatbots were everywhere this year, and reporter Rhiannon Williams chronicled how quickly people can develop bonds with one. That’s all right for some people, she notes, but dangerous for others. Some folks even describe unintentionally forming romantic relationships with chatbots. This is a trend we’ll definitely be keeping an eye on in 2026. 

Is this the electric grid of the future?

The electric grid is bracing for disruption from more frequent storms and fires, as well as an uncertain policy and regulatory landscape. And in many ways, the publicly owned utility Lincoln Electric System in Nebraska is an ideal lens through which to examine this shift as it works through the challenges of delivering service that’s reliable, affordable, and sustainable.

Exclusive: A record-breaking baby has been born from an embryo that’s over 30 years old

This year saw the birth of the world’s “oldest baby”: Thaddeus Daniel Pierce, who arrived on July 26. The embryo he developed from was created in 1994 during the early days of IVF and had been frozen and sitting in storage ever since. The new baby’s parents were toddlers at the time, and the embryo was donated to them decades later via a Christian “embryo adoption” agency.  

How these two brothers became go-to experts on America’s “mystery drone” invasion

Twin brothers John and Gerald Tedesco teamed up to investigate a concerning new threat—unidentified drones. In 2024 alone, some 350 drones entered airspace over a hundred different US military installations, and many cases went unsolved, according to a top military official. This story takes readers inside the equipment-filled RV the Tedescos created to study mysterious aerial phenomena, and how they made a name for themselves among government officials. 

10 Breakthrough Technologies of 2025 

Our newsroom has published this annual look at advances that will matter in the long run for over 20 years. This year’s list featured generative AI search, cleaner jet fuel, long-acting HIV prevention meds, and other emerging technologies that our journalists think are worth watching. We’ll publish the 2026 edition of the list on January 12, so stay tuned. (In the meantime, here’s what didn’t make the cut.)  

How I learned to stop worrying and love AI slop

Lately, everywhere I scroll, I keep seeing the same fish-eyed CCTV view: a grainy wide shot from the corner of a living room, a driveway at night, an empty grocery store. Then something impossible happens. JD Vance shows up at the doorstep in a crazy outfit. A car folds into itself like paper and drives away. A cat comes in and starts hanging out with capybaras and bears, as if in some weird modern fairy tale.

This fake-surveillance look has become one of the signature flavors of what people now call AI slop. For those of us who spend time online watching short videos, slop feels inescapable: a flood of repetitive, often nonsensical AI-generated clips that washes across TikTok, Instagram, and beyond. For that, you can thank new tools like OpenAI’s Sora (which exploded in popularity after launching in app form in September), Google’s Veo series, and AI models built by Runway. Now anyone can make videos, with just a few taps on a screen. 

@absolutemem

If I were to locate the moment slop broke through into popular consciousness, I’d pick the video of rabbits bouncing on a trampoline that went viral this summer. For many savvy internet users, myself included, it was the first time we were fooled by an AI video, and it ended up spawning a wave of almost identical riffs, with people making videos of all kinds of animals and objects bouncing on the same trampoline. 

My first reaction was that, broadly speaking, all of this sucked. That’s become a familiar refrain, in think pieces and at dinner parties. Everything online is slop now—the internet “enshittified,” with AI taking much of the blame. Initially, I largely agreed, quickly scrolling past every AI video in a futile attempt to send a message to my algorithm. But then friends started sharing AI clips in group chats that were compellingly weird, or funny. Some even had a grain of brilliance buried in the nonsense. I had to admit I didn’t fully understand what I was rejecting—what I found so objectionable. 

To try to get to the bottom of how I felt (and why), I recently spoke to the people making the videos, a company creating bespoke tools for creators, and experts who study how new media becomes culture. What I found convinced me that maybe generative AI will not end up ruining everything. Maybe we have been too quick to dismiss AI slop. Maybe there’s a case for looking beyond the surface and seeing a new kind of creativity—one we’re watching take shape in real time, with many of us actually playing a part. 

 The slop boom

“AI slop” can and does refer to text, audio, or images. But what’s really broken through this year is the flood of quick AI-generated video clips on social platforms, each produced by a short written prompt fed into an AI model. Under the hood, these models are trained on enormous data sets so they can predict what every subsequent frame should look or sound like. It’s much like the process by which text models produce answers in a chat, but slower and far more power-hungry.

Early text-to-video systems, released around 2022 to 2023, could manage only a few seconds of blurry motion; objects warped in and out of existence, characters teleported around, and the giveaway that it was AI was usually a mangled hand or a melting face. In the past two years, newer models like Sora 2, Veo 3.1, and Runway’s latest Gen-4.5 have dramatically improved, creating realistic, seamless, and increasingly true-to-prompt videos that can last up to a minute. Some of these models even generate sound and video together, including ambient noise and rough dialogue.

These text-to-video models have often been pitched by AI companies as the future of cinema—tools for filmmakers, studios, and professional storytellers. The demos have leaned into widescreen shots and dramatic camera moves. OpenAI pitched Sora as a “world simulator” while courting Hollywood filmmakers with what it boasted were movie-quality shorts. Google introduced Veo 3 last year as a step toward storyboards and longer scenes, edging directly into film workflows. 

All this hinged on the idea that people wanted to make AI-generated videos that looked real. But the reality of how they’re being used is more modest, weirder—and arguably much more interesting. What has turned out to be the home turf for AI video is the six-inch screen in our hands. 

Anyone can and does use these tools; a report by Adobe released in October shows that 86% of creators are using generative AI. But so are average social media users—people who aren’t “creators” so much as just people with phones. 

That’s how you end up with clips showing things like Indian prime minister Narendra Modi dancing with Gandhi, a crystal that melts into butter the moment a knife touches it, or Game of Thrones reimagined as Henan opera—videos that are hypnotic, occasionally funny, and often deeply stupid. And while micro-trends didn’t start with AI—TikTok and Reels already ran on fast-moving formats—it feels as if AI poured fuel on that fire. Perhaps because the barrier to copying an idea becomes so low, a viral video like the bunnies on a trampoline can easily and quickly spawn endless variations on the same concept. You don’t need a costume or a filming location anymore; you just tweak the prompt, hit Generate, and share. 

Big tech companies have also jumped on the idea of AI videos as a new social medium. The Sora app allows users to insert AI versions of themselves and other users into scenes. Meta’s Vibes app wants to turn your entire feed into nonstop AI clips.

Of course, the same frictionless setup that allows for harmless, delightful creations also makes it easy to generate much darker slop. Sora has already been used to create so many racist deepfakes of Martin Luther King Jr. that the King estate pushed the company to block new MLK videos entirely. Sora-watermarked clips of women and girls being strangled are circulating in bulk on TikTok and X, posted by accounts seemingly dedicated to that one theme. And then there’s “nazislop,” the nickname for AI videos that repackage fascist aesthetics and memes into glossy, algorithm-ready content aimed at teens’ For You pages.

But bad actors haven’t kept short AI videos from flourishing as a form. New apps, Discord servers for AI creators, and tutorial channels keep multiplying. And increasingly, the energy in the community seems to be shifting away from trying to create stuff that “passes as real” and toward embracing AI’s inherent weirdness. Every day, I stumble across creators who are stretching what “AI slop” is supposed to look like. I decided to talk to some of them.

Meet the creators

Like those fake surveillance videos, many popular viral AI videos rely on a surreal, otherworldly quality. As Wenhui Lim, an architecture designer turned full-time AI artist, tells me, “There is definitely a competition of ‘How weird can we push this?’ among AI video creators.”

It’s the kind of thing AI video tools seem to handle with ease: pushing physics past what a normal body can do or a normal camera can capture. This makes AI a surprisingly natural fit for satire, comedy skits, parody, and experimental video art—especially examples involving absurdism or even horror. Several popular AI creators that I spoke with eagerly tap into this capability. 

Drake Garibay, a 39-year-old software developer from Redlands, California, was inspired by body-horror AI clips circulating on social media in early 2025. He started playing with ComfyUI, a generative media tool, and ended up spending hours each week making his own strange creations. His favorite subject is morbid human-animal hybrids. “I fell right into it,” he says. “I’ve always been pretty artistic, [but] when I saw what AI video tools can do, I was blown away.”

Since the start of this year, Garibay has been posting his experiments online. One that went viral on TikTok, captioned “Cooking up some fresh AI slop,” shows a group of people pouring gooey dough into a pot. The mixture suddenly sprouts a human face, which then emerges from the boiling pot with a head and body. It has racked up more than 8.3 million views.

AI video technology is evolving so quickly that even for creative professionals, there is a lot to experiment with. Daryl Anselmo, a creative director turned digital artist, has been experimenting with the technology since its early days, posting an AI-generated video every day since 2021. He tells me that he uses a wide range of tools, including Kling, Luma, and Midjourney, and is constantly iterating. To him, testing the boundaries of these AI tools is sometimes itself the reward. “I would like to think there are impossible things that you could not do before that are still yet to be discovered. That is exciting to me,” he says.

Anselmo has collected his daily creations over the past four years into an art project, titled AI Slop, that has been exhibited in multiple galleries, including the Grand Palais Immersif in Paris. There’s obvious attention to mood and composition; some clips feel closer to an art-house vignette than to a throwaway meme. Over time, the project has taken a darker turn, as his subjects shift from landscapes and interior design toward more of the body horror that drew Garibay in.

His breakout piece, feel the agi, shows a hyperrealistic bot peeling open its own skull. Another video he shared recently features a midnight diner populated by anthropomorphized Tater Tots, titled Tot and Bothered; with its vintage palette and slow, mystical soundtrack, the piece feels like a late-night fever dream. 

One further benefit of these AI systems is that they make it easier for creators to build recurring spaces and casts of characters that function like informal franchises. Lim, for instance, is the creator of a popular AI video account called Niceaunties, inspired by the “auntie culture” in Singapore, where she’s from.

“The word ‘aunties’ often has a slightly negative connotation in Singaporean culture. They are portrayed as old-fashioned, naggy, and lacking boundaries. But they are also so resourceful, funny, and at ease with themselves,” she says. “I want to create a world where it’s different for them.” 

Her cheeky, playful videos show elderly Asian women merging with fruits, other objects, and architecture, or just living their best lives in a fantasy world. A viral video called Auntlantis, which has racked up 13.5 million views on Instagram, imagines silver-haired aunties as industrial mermaids working in an underwater trash-processing plant.  

There’s also Granny Spills, an AI video account that features a glamorous, sassy old lady spitting hot takes and life advice to a street interviewer. It gained 1.8 million Instagram followers within three months of launch, posting new videos almost every day. Although the granny’s face looks slightly different in every video, the pink color scheme and her outfit stay mostly consistent. Creators Eric Suerez and Adam Vaserstein tell me that their entire workflow is powered by AI, from writing the script to constructing the scenes. Their role, as a result, becomes something closer to creative direction.

These projects often spin off merch, miniseries, and branded universes. The creators of Granny Spills, for example, have expanded their network, creating a Black granny as well as an Asian granny to cater to different audiences. The grannies now appear in crossover videos, as if they share the same fictional universe, pushing traffic between channels. 

In the same vein, it’s now easier than ever to participate in an online trend. Consider “Italian brainrot,” which went viral earlier this year. Beloved by Gen Z and Gen Alpha, these videos feature human–animal–object hybrids with pseudo-Italian names like “Bombardiro Crocodilo” and “Tralalero Tralala.” According to Know Your Meme, the craze began with a few viral TikTok sounds in fake Italian. Soon, a lot of people were participating in what felt like a massive collaborative hallucination, inventing characters, backstories, and worldviews for an ever-expanding absurdist universe.

“Italian brainrot was great when it first hit,” says Denim Mazuki, a software developer and content creator who has been following the trend. “It was the collective lore-building that made it wonderful. Everyone added a piece. The characters were not owned by a studio or a single creator—they were made by the chronically online users.” 

This trend and others are further enabled by specialized and sophisticated new tools—like OpenArt, a platform designed not just for video generation but for video storytelling, which gives users frame-to-frame control over a developing narrative.

Making a video on OpenArt is straightforward: Users start with a few AI-generated character images and a line of text as simple as “cat dancing in a park.” The platform then spins out a scene breakdown that users can tweak act by act, and they can run it through multiple mainstream models and compare the results to see which look best.

OpenArt cofounders Coco Mao and Chloe Fang tell me they sponsored tutorial videos and created quick-start templates to capitalize specifically on the trend of regular people wanting to get in on Italian brainrot. They say more than 80% of their users have no artistic background. 

In defense of slop

The current use of the word “slop” online traces back to the early 2010s on 4chan, a forum known for its insular and often toxic in-jokes. As the term has spread, its meaning has evolved; it’s now a derogatory catchall for anything that feels like low-quality mass production aimed at an unsuspecting public, says Adam Aleksic, an internet linguist. People now slap it onto everything from salad bowls to meaningless work reports.

But even with that broadened usage, AI remains the first association: “slop” has become a convenient shorthand for dismissing almost any AI-generated output, regardless of its actual quality. The Cambridge Dictionary’s new sense of “slop,” defined as “content on the internet that is of very low quality, especially when it is created by AI,” will almost certainly cement this perception.

Perhaps unsurprisingly, the word has become a charged label among AI creators. 

Anselmo embraces it semi-ironically, hence the title of his yearslong art project. “I see this series as an experimental sketchbook,” he says. “I am working with the slop, pushing the models, breaking them, and developing a new visual language. I have no shame that I am deep into AI.” Anselmo says that he does not concern himself with whether his work is “art.”

Garibay, the creator of the viral video where a human face emerged from a pot of physical slop, uses the label playfully. “The AI slop art is really just a lot of weird glitchy stuff that happens, and there’s not really a lot of depth usually behind it, besides the shock value,” he says. “But you will find out really fast that there is a heck of a lot more involved, if you want a higher-end result.” 

That’s largely in line with what Suerez and Vaserstein, the creators of Granny Spills, tell me. They actually hate it when their work is called slop, given the way the term is often used to dismiss AI-generated content out of hand. It feels disrespectful of their creative input, they say. Even though they do not write the scripts or paint the frames, they say they are making legitimate artistic choices. 

Indeed, for most of the creators I spoke to, making AI content is rarely a one-click process. They tell me that it takes skill, trial and error, and a strong sense of taste to consistently get the visuals they want. Lim says a single one-minute video can take hours, sometimes even days, to make. Anselmo, for his part, takes pride in actively pushing the model rather than passively accepting its output. “There’s just so many things that you can do with it that go well beyond ‘Oh, way to go, you typed in a prompt,’” he says.

Ultimately, slop evokes a lot of feelings. Aleksic puts it well: “There’s a feeling of guilt on the user end for enjoying something that you know to be lowbrow. There’s a feeling of anger toward the creator for making something that is not up to your content expectations, and all the while, there’s a pervasive algorithmic anxiety hanging over us. We know that the algorithm and the platforms are to blame for the distribution of this slop.”

And that anxiety long predates generative AI. We’ve been living for years with the low-grade dread of being nudged, of having our taste engineered and our attention herded, so it’s not surprising that the anger latches onto the newest, most visible culprit. Sometimes it is misplaced, sure, but I also get the urge to assert human agency against a new force that seems to push all of us away from what we know and toward something we didn’t exactly choose.

But the negative association does real harm to early adopters. Every AI video creator I spoke to described receiving hateful messages and comments simply for using these tools at all. These messages accuse AI creators of taking opportunities away from artists already struggling to make a living, and some dismiss their work as “grifting” and “garbage.” The backlash, of course, did not come out of nowhere. A Brookings study of one major freelance marketplace found that after new generative-AI tools launched in 2022, freelancers in AI-exposed occupations saw about a 2% decline in contracts and a 5% drop in earnings.

“The phrase ‘AI slop’ implies, like, a certain ease of creation that really bothers a lot of people—understandably, because [making AI-generated videos] doesn’t incorporate the artistic labor that we typically associate with contemporary art,” says Mindy Seu, a researcher, artist, and associate professor in digital arts at UCLA. 

At the root of the conflict here is that the use of AI in art is still nascent; there are few best practices and almost no guardrails. And there’s a kind of shame involved—one I recognize when I find myself lingering on bad AI content. 

Historically, new technology has always carried a whiff of stigma when it first appears, especially in creative fields where it seems to encroach on a previously manual craft. Seu says that digital art, internet art, and new media have been slow to gain recognition from cultural institutions, which remain key arbiters of what counts as “serious” or “relevant” art. 

For many artists, AI now sits in that same lineage: “Every big advance in technology yields the question ‘What is the role of the artist?’” she says. This is true even if creators see it not as a replacement for authorship but simply as another way to create.

Mao, the OpenArt founder, believes that learning how to use generative video tools will be crucial for future content creators, much as learning Photoshop was almost synonymous with graphic design for a generation. “It is a skill to be learned and mastered,” she says.

There is a generous reading of the phenomenon so many people call AI slop, which is that it is a kind of democratization. A rare skill shifts away from craftsmanship to something closer to creative direction: being able to describe what you want with enough linguistic precision, and to anchor it in references the model is likely to understand. You have to know how to ask, and what to point to. In that sense, discernment and critique sit closer to the center of the process than ever before.

It’s not just about creative direction, though, but about the human intention behind the creation. “It’s very easy to copy the style,” Lim says. “It’s very easy to make, like, old Asian women doing different things, but they [imitators] don’t understand why I’m doing it … Even when people try to imitate that, they don’t have that consistency.”

“It’s the idea behind AI creation that makes it interesting to look at,” says Zach Lieberman, a professor at the MIT Media Lab who leads a research group called Future Sketches, where members explore code-enabled images. Lieberman, who has been posting daily sketches generated by code for years, tells me that mathematical logic is not the enemy of beauty. He echoes Mao in saying that a younger generation will inevitably see AI as just another tool in the toolbox. Still, he feels uneasy: By relying so heavily on black-box AI models, artists lose some of the direct control over output that they’ve traditionally enjoyed.

A new online culture

For many people, AI slop is simply everything they already resent about the internet, turned up: ugly, noisy, and crowding out human work. It’s only possible because the models behind it have been trained to take all creative work and make it fodder, stripped of origin, aura, or credit, and blended into something engineered to be mathematically average—arguably perfectly mediocre, by design. Charles Pulliam-Moore, a writer for The Verge, calls this the “formulaic derivativeness” that already defines so much internet culture: unimaginative, unoriginal, and uninteresting.

But I love internet culture, and I have for a long time. Even at its worst, it’s bad in an interesting way: It offers a corner for every kind of obsession and invites you to add your own. Years of being chronically online have taught me that the real logic of slop consumption isn’t mastery but a kind of submission. As a user, I have almost no leverage over platforms or algorithms; I can’t really change how they work. Submission, though, doesn’t mean giving up. It’s more like recognizing that the tide is stronger than you and choosing to let it carry you. Good scrolling isn’t about control anyway. It’s closer to surfing, and sometimes you wash up somewhere ridiculous, but not entirely alone.

Mass-produced clickbait has always been around. What’s new is that we can now watch it being generated in real time, on a scale that would have been unimaginable before. And the way we respond to it in turn shapes new content (see the trampoline-bouncing bunnies), and more culture, and so on. Perhaps AI slop is born of submission to algorithmic logic. It’s unserious, surreal, and spectacular in ways that mirror our relationship to the internet itself. It is so banal—so aggressively, inhumanly mediocre—that it loops back around and becomes compelling.

To “love AI slop” is to admit the internet is broken, that the infrastructure of culture is opportunistic and extractive. But even in that wreckage, people still find ways to play, laugh, and make meaning. 

Earlier this fall, months after I was briefly fooled by the bunny video, I was scrolling on Rednote and landed on videos by Mu Tianran, a Chinese creator who acts out weird skits that mimic AI slop. In one widely circulated clip, he plays a street interviewer asking other actors, “Do you know you are AI generated?”—parodying an earlier wave of AI-generated street interviews. The actors’ responses seem so AI, but of course they’re not: their eyes fixed just off-camera, their laughter a beat too slow, their movements slightly wrong.

Watching this, it was hard to believe that AI was about to snuff out human creativity. If anything, it has handed people a new style to inhabit and mock, another texture to play with. Maybe it’s all fine. Maybe the urge to imitate, remix, and joke is still stubbornly human, and AI cannot possibly take it away. 

The 8 worst technology flops of 2025

Welcome to our annual list of the worst, least successful, and simply dumbest technologies of the year.

This year, politics was a recurring theme. Donald Trump swept back into office and used his executive pen to reshape the fortunes of entire sectors, from renewables to cryptocurrency. The wrecking-ball act began even before his inauguration, when the president-elect marketed his own memecoin, $TRUMP, in a shameless act of merchandising that, of course, we honor on this year’s worst tech list.

We like to think there’s a lesson in every technological misadventure. But when technology becomes dependent on power, sometimes the takeaway is simpler: it would have been better to stay away.

That was a conclusion Elon Musk drew from his sojourn as instigator of DOGE, the insurgent cost-cutting initiative that took a chainsaw to federal agencies. The public protested. Teslas were set alight, and drivers of his hyped Cybertruck discovered that instead of a thumbs-up, they were getting the middle finger.

On reflection, Musk said he wouldn’t do it again. “Instead of doing DOGE, I would have, basically … worked on my companies,” he told an interviewer this month. “And they wouldn’t have been burning the cars.”

Regrets—2025 had a few. Here are some of the more notable ones.

NEO, the home robot

Imagine a metal butler that fills your dishwasher and opens the door. It’s a dream straight out of science fiction. And it’s going to remain there—at least for a while.

That was the hilarious, and deflating, takeaway from the first reviews of NEO, a 66-pound humanoid robot whose maker claims it will “handle any of your chores reliably” when it ships next year.

But as a reporter for the Wall Street Journal learned, NEO took two minutes to fold a sweater and couldn’t crack a walnut. Not only that, but the robot was teleoperated the entire time by a person wearing a VR visor.

Still interested? NEO is available for preorder for $20,000 from the startup 1X.

More: I Tried the Robot That’s Coming to Live With You. It’s Still Part Human (WSJ), The World’s Stupidest Robot Maid (The Daily Show), Why the humanoid workforce is running late (MIT Technology Review), NEO The Home Robot | Order Today (1X Corp.)

Sycophantic AI

It’s been said that San Francisco is the kind of place where no one will tell you if you have a bad idea. And its biggest product in a decade—ChatGPT—often behaves exactly that way.

This year, OpenAI released an especially sycophantic update that told users their mundane queries were brilliantly incisive. This electronic yes-man routine isn’t an accident; it’s a product strategy. Plenty of people like the flattery.

But it’s disingenuous and dangerous, too. Chatbots have shown a willingness to indulge users’ delusions and worst impulses, up to and including suicide.

In April, OpenAI acknowledged the issue when the company dialed back a model update whose ultra-agreeable personality, it said, had the side effect of “validating doubts, fueling anger, urging impulsive actions, or reinforcing negative emotions.”

Don’t you dare agree the problem is solved. This month, when I fed ChatGPT one of my dumbest ideas, its response began: “I love this concept.”

More: What OpenAI Did When ChatGPT Users Lost Touch With Reality (New York Times), Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence (arXiv), Expanding on what we missed with sycophancy (OpenAI)

The company that cried “dire wolf”

Two dire wolves at three months old. COLOSSAL BIOSCIENCES

When you tell a lie, tell it big. Make it frolic and give it pointy ears. And make it white. Very white.

That’s what the Texas biotech concern Colossal Biosciences did when it unveiled three snow-white animals that it claimed were actual dire wolves, which went extinct more than 10 millennia ago.

To be sure, these genetically modified gray wolves were impressive feats of engineering. They’d been made white via a genetic mutation and even had some bits and bobs of DNA copied over from old dire wolf bones. But they “are not dire wolves,” according to canine specialists at the International Union for Conservation of Nature.

Colossal’s promotional blitz could hurt actual endangered species. Presenting de-extinction as “a ready-to-use conservation solution,” said the IUCN, “risks diverting attention from the more urgent need of ensuring functioning and healthy ecosystems.”

In a statement, Colossal said that sentiment analysis of online activity shows 98% agreement with its furry claims. “They’re dire wolves, end of story,” it says.  

More: Game of Clones: Colossal’s new wolves are cute, but are they dire? (MIT Technology Review), Conservation perspectives on gene editing in wild canids (IUCN), A statement from Colossal’s Chief Science Officer, Dr. Beth Shapiro (Reddit)

mRNA political purge

RFK Jr. composited with a vaccine vial bearing a circle-and-slash icon. MITTR | GETTY IMAGES

Save the world, and this is the thanks you get?

During the covid-19 pandemic, the US bet big on mRNA vaccines—and the new technology delivered in record time. 

But now that America’s top health agencies are led by the antivax wackadoodle Robert F. Kennedy Jr., “mRNA” has become a political slur.

In August, Kennedy abruptly canceled hundreds of millions of dollars in contracts for next-generation vaccines. And shot maker Moderna—once America’s champion—has seen its stock slide by more than 90% since its covid-era peak.

The purge targeting a key molecule of life (our bodies are full of mRNA) isn’t just bizarre. It could slow down other mRNA-based medicine, like cancer treatments and gene editing for rare diseases.

In August, a trade group fought back, saying: “Kennedy’s unscientific and misguided vilification of mRNA technology and cancellation of grants is the epitome of cutting off your nose to spite your face.”

More: HHS Winds Down mRNA Vaccine Development (US Department of Health and Human Services), Cancelling mRNA studies is the highest irresponsibility (Nature), How Moderna, the company that helped save the world, unraveled (Stat News)

Greenlandic Wikipedia

Wikipedia has editions in 340 languages. But as of this year, there’s one less: Wikipedia in Greenlandic is no more.

Only around 60,000 people speak the Inuit language. And very few of them, it seems, ever cared much about the online encyclopedia. As a result, many of the entries were machine translations riddled with errors and nonsense.

Perhaps a website no one visits shouldn’t be a problem. But its existence created the risk of a linguistic “doom spiral” for the endangered language. That could happen if new AIs were trained on the corrupt Wikipedia articles.  

In September, administrators voted to close Greenlandic Wikipedia, citing possible “harm to the Greenlandic language.”

More: Can AI Help Revitalize Indigenous Languages? (Smithsonian), How AI and Wikipedia have sent vulnerable languages into a doom spiral (MIT Technology Review), Closure of Greenlandic Wikipedia (Wikimedia)

Tesla Cybertruck

Rows of new Tesla Cybertrucks in port. ADOBE STOCK

There’s a reason we’re late to the hate-fest around Elon Musk’s Cybertruck. That’s because 12 months ago, the polemical polygon was the #1 selling electric pickup in the US.

So maybe it would end up a hit.

Nope. Tesla is likely to sell only around 20,000 trucks this year, about half last year’s total. And a big part of the problem is that the entire EV pickup category is struggling. Just this month, Ford decided to scrap its own EV truck, the F-150 Lightning. 

With unsold inventory building, Musk has started selling Cybertrucks as fleet vehicles to his other enterprises, like SpaceX.

More: Elon’s Edsel: Tesla Cybertruck Is The Auto Industry’s Biggest Flop In Decades (Forbes), Why Tesla Cybertrucks Aren’t Selling (CNBC), Ford scraps fully-electric F-150 Lightning as mounting losses and falling demand hits EV plans (AP)

Presidential shitcoin

Donald Trump launched a digital currency called $TRUMP just days before his 2025 inauguration, accompanied by a logo showing his fist-pumping “Fight, fight, fight” pose.

This was a memecoin, or shitcoin, not real money. Memecoins are more like merchandise—collectibles designed to be bought and sold, usually for a loss. Indeed, they’ve been likened to a consensual scam in which a coin’s issuer can make a bundle while buyers take losses.

The White House says there’s nothing amiss. “The American public believe[s] it’s absurd for anyone to insinuate that this president is profiting off of the presidency,” said spokeswoman Karoline Leavitt in May.

More: Donald and Melania Trump’s Terrible, Tacky, Seemingly Legal Memecoin Adventure (Bloomberg), A crypto mogul who invested millions into Trump coins is getting a reprieve (CNN), How the Trump companies made $1 bn from crypto (Financial Times), Staff Statement on Meme Coins (SEC)

“Carbon-neutral” Apple Watch

Apple’s carbon-neutral logo alongside an Apple Watch. APPLE

In 2023, Apple announced its “first-ever carbon-neutral product,” a watch with “zero” net emissions. It would get there using recycled materials and renewable energy, and by preserving forests or planting vast stretches of eucalyptus trees.

Critics say it’s greenwashing. This year, lawyers filed suit in California against Apple for deceptive advertising, and in Germany, a court ruled that the company can’t advertise products as carbon neutral because the “supposed storage of CO2 in commercial eucalyptus plantations” isn’t a sure thing.

Apple’s marketing team relented. Packaging for its newest watches doesn’t say “carbon neutral.” But Apple believes the legal nitpicking is counterproductive, arguing that it can only “discourage the kind of credible corporate climate action the world needs.”

More: Inside the controversial tree farms powering Apple’s carbon neutral goal (MIT Technology Review), Apple Watch not a ‘CO2-neutral product,’ German court finds (Reuters), Apple 2030: Our ambition to become carbon neutral (Apple)

4 technologies that didn’t make our 2026 breakthroughs list

If you’re a longtime reader, you probably know that our newsroom selects 10 breakthroughs every year that we think will define the future. This group exercise is mostly fun and always engrossing, but at times it can also be quite difficult. 

We collectively pitch dozens of ideas, and the editors meticulously review and debate the merits of each. We agonize over which ones might make the broadest impact, whether one is too similar to something we’ve featured in the past, and how confident we are that a recent advance will actually translate into long-term success. There is plenty of lively discussion along the way.  

The 2026 list will come out on January 12—so stay tuned. In the meantime, I wanted to share some of the technologies from this year’s reject pile, as a window into our decision-making process. 

These four technologies won’t be on our 2026 list of breakthroughs, but all were closely considered, and we think they’re worth knowing about. 

Male contraceptives 

There are several new treatments in the pipeline for men who are sexually active and wish to prevent pregnancy—potentially providing them with an alternative to condoms or vasectomies. 

Two such treatments are now in clinical trials. One is a gel that men would rub on their shoulder or upper arm once a day to suppress sperm production, and another, from a company called Contraline, is a device designed to block sperm during ejaculation. (Kevin Eisenfrats, Contraline’s CEO, was recently named to our Innovators Under 35 list.) A once-a-day pill is also in early-stage trials with the firm YourChoice Therapeutics.

Though it’s exciting to see this progress, it will still take several years for any of these treatments to make their way through clinical trials—assuming all goes well.

World models 

World models have become the hot new thing in AI in recent months. Though they’re difficult to define, these models are generally trained on videos or spatial data and aim to produce 3D virtual worlds from simple prompts. They reflect fundamental principles, like gravity, that govern our actual world. The results could be used in game design or to make robots more capable by helping them understand their physical surroundings. 

Despite some disagreements on exactly what constitutes a world model, the idea is certainly gaining momentum. Renowned AI researchers including Yann LeCun and Fei-Fei Li have launched companies to develop them, and Li’s startup World Labs released its first version last month. And Google made a huge splash with the release of its Genie 3 world model earlier this year. 

Though these models are shaping up to be an exciting new frontier for AI in the year ahead, it seemed premature to deem them a breakthrough. But definitely watch this space. 

Proof of personhood 

Thanks to AI, it’s getting harder to know who and what is real online. It’s now possible to make hyperrealistic digital avatars of yourself or someone you know based on very little training data, using equipment many people have at home. And AI agents are being set loose across the internet to take action on people’s behalf. 

All of this is creating more interest in what are known as personhood credentials, which could offer a way to verify that you are, in fact, a real human when you do something important online. 

For example, we’ve reported on efforts by OpenAI, Microsoft, Harvard, and MIT to create a digital token that would serve this purpose. To get it, you’d first go to a government office or other organization and show identification. Then it’d be installed on your device, and whenever you wanted to, say, log into your bank account, cryptographic protocols would verify that the token was authentic—confirming that you are the person you claim to be.
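The flow described above, issue a credential once and verify it cryptographically ever after, can be sketched in a few lines. This is a simplified illustration under stated assumptions, not any of the actual proposals: real personhood-credential schemes use asymmetric signatures or zero-knowledge proofs, so the verifier never holds the issuer's secret. The HMAC, key, and function names here are hypothetical stand-ins, using only Python's standard library.

```python
import hashlib
import hmac
import json

# Hypothetical sketch: an issuer (say, a government office) checks your ID
# once and signs a small credential; any service can later check the
# signature without ever seeing your identity documents again.
ISSUER_KEY = b"issuer-secret"  # in reality, a private signing key held only by the issuer

def issue_credential(subject_id: str) -> dict:
    # Canonical serialization (sort_keys) so the same claims always
    # produce the same bytes, and therefore the same signature.
    payload = json.dumps({"sub": subject_id, "human": True}, sort_keys=True)
    tag = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_credential(cred: dict) -> bool:
    # Recompute the tag and compare in constant time; any change to the
    # payload invalidates the credential.
    expected = hmac.new(ISSUER_KEY, cred["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["tag"])

cred = issue_credential("alice-device-1")
print(verify_credential(cred))  # True: untampered credential
cred["payload"] = cred["payload"].replace("alice", "eve")
print(verify_credential(cred))  # False: tampering is detected
```

Swapping the HMAC for a public-key signature is what lets the bank verify the token without sharing a secret with the government office; that separation is the whole point of the scheme.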

Whether or not this particular approach catches on, many of us in the newsroom agree that the future internet will need something along these lines. Right now, though, many competing identity verification projects are in various stages of development. One is World ID by Sam Altman’s startup Tools for Humanity, which uses a twist on biometrics. 

If these efforts reach critical mass—or if one emerges as the clear winner, perhaps by becoming a universal standard or being integrated into a major platform—we’ll know it’s time to revisit the idea.  

The world’s oldest baby

In July, senior reporter Jessica Hamzelou broke the news of a record-setting baby. The infant developed from an embryo that had been sitting in storage for more than 30 years, earning him the bizarre honorific of “oldest baby.” 

This odd new record was made possible in part by advances in IVF, including safer methods of thawing frozen embryos. But perhaps the greater enabler has been the rise of “embryo adoption” agencies that pair donors with hopeful parents. People who work with these agencies are sometimes more willing to make use of decades-old embryos. 

This practice could help find a home for some of the millions of leftover embryos that remain frozen in storage banks today. But since this recent achievement was brought about by changing norms as much as by any sudden technological improvements, this record didn’t quite meet our definition of a breakthrough—though it’s impressive nonetheless.

Nominations are now open for our global 2026 Innovators Under 35 competition

We have some exciting news: Nominations are now open for MIT Technology Review’s 2026 Innovators Under 35 competition. This annual list recognizes 35 of the world’s best young scientists and inventors, and our newsroom has produced it for more than two decades. 

It’s free to nominate yourself or someone you know, and it only takes a few moments. Submit your nomination before 5 p.m. ET on Tuesday, January 20, 2026. 

We’re looking for people who are making important scientific discoveries and applying that knowledge to build new technologies. Or those who are engineering new systems and algorithms that will aid our work or extend our abilities. 

Each year, many honorees are focused on improving human health or solving major problems like climate change; others are charting the future path of artificial intelligence or developing the next generation of robots. 

The most successful candidates will have made a clear advance that is expected to have a positive impact beyond their own field. They should be the primary scientific or technical driver behind the work involved, and we like to see some signs that a candidate’s innovation is gaining real traction. You can look at last year’s list to get an idea of what we look for.

We encourage self-nominations, and if you previously nominated someone who wasn’t selected, feel free to put them forward again. Please note: To be eligible for the 2026 list, nominees must be under the age of 35 as of October 1, 2026. 

Semifinalists will be notified by early March and asked to complete an application at that time. Winners are then chosen by the editorial staff of MIT Technology Review, with input from a panel of expert judges. (Here’s more info about our selection process and timelines.) 

If you have any questions, please contact tr35@technologyreview.com. We look forward to reviewing your nominations. Good luck!