What’s next for AI in 2026

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

In an industry in constant flux, sticking your neck out to predict what’s coming next may seem reckless. (AI bubble? What AI bubble?) But for the last few years we’ve done just that—and we’re doing it again. 

How did we do last time? We picked five hot AI trends to look out for in 2025, including what we called generative virtual playgrounds, a.k.a. world models (check: From Google DeepMind’s Genie 3 to World Labs’s Marble, tech that can generate realistic virtual environments on the fly keeps getting better and better); so-called reasoning models (check: Need we say more? Reasoning models have fast become the new paradigm for best-in-class problem solving); a boom in AI for science (check: OpenAI is now following Google DeepMind by setting up a dedicated team to focus on just that); AI companies that are cozier with national security (check: OpenAI reversed position on the use of its technology for warfare to sign a deal with the defense-tech startup Anduril to help it take down battlefield drones); and legitimate competition for Nvidia (check, kind of: China is going all in on developing advanced AI chips, but Nvidia’s dominance still looks unassailable—for now at least). 

So what’s coming in 2026? Here are our big bets for the next 12 months. 

More Silicon Valley products will be built on Chinese LLMs

The last year shaped up as a big one for Chinese open-source models. In January, DeepSeek released R1, its open-source reasoning model, and shocked the world with what a relatively small firm in China could do with limited resources. By the end of the year, “DeepSeek moment” had become a phrase frequently tossed around by AI entrepreneurs, observers, and builders—an aspirational benchmark of sorts. 

It was the first time many people realized they could get a taste of top-tier AI performance without going through OpenAI, Anthropic, or Google.

Open-weight models like R1 allow anyone to download a model and run it on their own hardware. They are also more customizable, letting teams tweak models through techniques like distillation and pruning. This stands in stark contrast to the “closed” models released by major American firms, where core capabilities remain proprietary and access is often expensive.

As a result, Chinese models have become an easy choice. Reports by CNBC and Bloomberg suggest that startups in the US are increasingly building their products on Chinese open models.

One popular group of models is Qwen, created by Alibaba, the company behind China’s largest e-commerce platform, Taobao. Qwen2.5-1.5B-Instruct alone has 8.85 million downloads, making it one of the most widely used pretrained LLMs. The Qwen family spans a wide range of model sizes alongside specialized versions tuned for math, coding, vision, and instruction-following, a breadth that has helped it become an open-source powerhouse.

Other Chinese AI firms that were previously unsure about committing to open source are following DeepSeek’s playbook. Standouts include Zhipu’s GLM and Moonshot’s Kimi. The competition has also pushed American firms to open up, at least in part. In August, OpenAI released its first open-source model. In November, the Allen Institute for AI, a Seattle-based nonprofit, released its latest open-source model, Olmo 3. 

Even amid growing US-China antagonism, Chinese AI firms’ near-unanimous embrace of open source has earned them goodwill in the global AI community and a long-term trust advantage. In 2026, expect more Silicon Valley apps to quietly ship on top of Chinese open models, and look for the lag between Chinese releases and the Western frontier to keep shrinking—from months to weeks, and sometimes less.

Caiwei Chen

The US will face another year of regulatory tug-of-war

The battle over regulating artificial intelligence is heading for a showdown. On December 11, President Donald Trump signed an executive order aiming to neuter state AI laws, a move meant to stop states from keeping the growing industry in check. In 2026, expect more political warfare. The White House and states will spar over who gets to govern the booming technology, while AI companies wage a fierce lobbying campaign to crush regulations, armed with the narrative that a patchwork of state laws will smother innovation and hobble the US in the AI arms race against China.

Under Trump’s executive order, states may fear being sued or starved of federal funding if they clash with his vision for light-touch regulation. Big Democratic states like California—which just enacted the nation’s first frontier AI law requiring companies to publish safety testing for their AI models—will take the fight to court, arguing that only Congress can override state laws. But states that can’t afford to lose federal funding, or fear getting in Trump’s crosshairs, might fold. Still, expect to see more state lawmaking on hot-button issues, especially where Trump’s order gives states a green light to legislate. With chatbots accused of triggering teen suicides and data centers sucking up more and more energy, states will face mounting public pressure to push for guardrails. 

In place of state laws, Trump promises to work with Congress to establish a federal AI law. Don’t count on it. Congress failed to pass a moratorium on state legislation twice in 2025, and we aren’t holding out hope that it will deliver its own bill this year. 

AI companies like OpenAI and Meta will continue to deploy powerful super-PACs to support political candidates who back their agenda and target those who stand in their way. On the other side, super-PACs supporting AI regulation will build their own war chests to counter. Watch them duke it out at next year’s midterm elections.

The further AI advances, the more people will fight to steer its course, and 2026 will be another year of regulatory tug-of-war—with no end in sight.

Michelle Kim

Chatbots will change the way we shop

Imagine a world in which you have a personal shopper at your disposal 24-7—an expert who can instantly recommend a gift for even the trickiest-to-buy-for friend or relative, or trawl the web to draw up a list of the best bookcases available within your tight budget. Better yet, they can analyze a kitchen appliance’s strengths and weaknesses, compare it with its seemingly identical competition, and find you the best deal. Then once you’re happy with their suggestion, they’ll take care of the purchasing and delivery details too.

But this ultra-knowledgeable shopper isn’t a clued-up human at all—it’s a chatbot. This is no distant prediction, either. Salesforce recently said it anticipates that AI will drive $263 billion in online purchases this holiday season. That’s some 21% of all orders. And experts are betting on AI-enhanced shopping becoming even bigger business within the next few years. Agentic commerce could generate between $3 trillion and $5 trillion annually by 2030, according to research from the consulting firm McKinsey. 

Unsurprisingly, AI companies are already heavily invested in making purchasing through their platforms as frictionless as possible. Google’s Gemini app can now tap into the company’s powerful Shopping Graph data set of products and sellers, and can even use its agentic technology to call stores on your behalf. Meanwhile, back in November, OpenAI announced a ChatGPT shopping feature capable of rapidly compiling buyer’s guides, and the company has struck deals with Walmart, Target, and Etsy to allow shoppers to buy products directly within chatbot interactions. 

Expect plenty more of these kinds of deals to be struck within the next year as consumer time spent chatting with AI keeps on rising, and web traffic from search engines and social media continues to plummet. 

Rhiannon Williams

An LLM will make an important new discovery

I’m going to hedge here, right out of the gate. It’s no secret that large language models spit out a lot of nonsense. Short of monkeys-and-typewriters luck, LLMs won’t discover anything by themselves. But LLMs do still have the potential to extend the bounds of human knowledge.

We got a glimpse of how this could work in May, when Google DeepMind revealed AlphaEvolve, a system that used the firm’s Gemini LLM to come up with new algorithms for solving unsolved problems. The breakthrough was to combine Gemini with an evolutionary algorithm that checked its suggestions, picked the best ones, and fed them back into the LLM to make them even better.
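The loop described above can be sketched in a few lines. This is a toy illustration of the general recipe, not Google DeepMind’s code: the `propose` function stands in for Gemini suggesting a variation on the current best candidate, and the evaluator here simply scores how close a number is to a target value, so the whole loop runs end to end.

```python
import random

def propose(parent: float) -> float:
    """Stand-in for the LLM's 'suggest a variation' step.
    In AlphaEvolve this would be Gemini generating candidate programs;
    here we just nudge a number."""
    return parent + random.uniform(-1.0, 1.0)

def score(candidate: float) -> float:
    """Automatic checker: higher is better (closer to a target value)."""
    return -abs(candidate - 3.14159)

def evolve(generations: int = 200, population: int = 8) -> float:
    best = 0.0  # a deliberately poor starting solution
    for _ in range(generations):
        # The LLM proposes a batch of candidates...
        candidates = [propose(best) for _ in range(population)]
        # ...the evaluator checks them and keeps the winner...
        top = max(candidates, key=score)
        # ...and any improvement becomes the seed for the next round.
        if score(top) > score(best):
            best = top
    return best
```

The key design point is that the generator never needs to be reliable: the evaluator filters out the nonsense, and only verified improvements are fed back in.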

Google DeepMind used AlphaEvolve to come up with more efficient ways to manage power consumption by data centers and Google’s TPU chips. Those discoveries are significant but not game-changing. Yet. Researchers at Google DeepMind are now pushing their approach to see how far it will go.

And others have been quick to follow their lead. A week after AlphaEvolve came out, Asankhaya Sharma, an AI engineer in Singapore, shared OpenEvolve, an open-source version of Google DeepMind’s tool. In September, the Japanese firm Sakana AI released a version of the software called ShinkaEvolve. And in November, a team of US and Chinese researchers revealed AlphaResearch, which they claim improves on one of AlphaEvolve’s already better-than-human math solutions.

There are alternative approaches too. For example, researchers at the University of Colorado Denver are trying to make LLMs more inventive by tweaking the way so-called reasoning models work. They have drawn on what cognitive scientists know about creative thinking in humans to push reasoning models toward solutions that are more outside the box than their typical safe-bet suggestions.

Hundreds of companies are spending billions of dollars looking for ways to get AI to crack unsolved math problems, speed up computers, and come up with new drugs and materials. Now that AlphaEvolve has shown what’s possible with LLMs, expect activity on this front to ramp up fast.    

Will Douglas Heaven

Legal fights will heat up

For a while, lawsuits against AI companies were pretty predictable: Rights holders like authors or musicians would sue companies that trained AI models on their work, and the courts generally found in favor of the tech giants. AI’s upcoming legal battles will be far messier.

The fights center on thorny, unresolved questions: Can AI companies be held liable for what their chatbots encourage people to do, as when they help teens plan suicides? If a chatbot spreads patently false information about you, can its creator be sued for defamation? And if AI companies lose these cases, will insurers shun them as clients?

In 2026, we’ll start to see the answers to these questions, in part because some notable cases will go to trial (the family of a teen who died by suicide will bring OpenAI to court in November).

At the same time, the legal landscape will be further complicated by President Trump’s executive order from December—see Michelle’s item above for more details on the brewing regulatory storm.

No matter what, we’ll see a dizzying array of lawsuits in all directions (not to mention some judges even turning to AI amid the deluge).

James O’Donnell

3 things Will Douglas Heaven is into right now

The most amazing drummer on the internet

My daughter introduced me to El Estepario Siberiano’s YouTube channel a few months back, and I have been obsessed ever since. The Spanish drummer (real name: Jorge Garrido) posts videos of himself playing supercharged cover versions of popular tracks, hitting his drums with such jaw-dropping speed and technique that he makes other pro drummers shake their heads in disbelief. The dozens of reaction videos posted by other musicians are a joy in themselves. 


Garrido is up-front about the countless hours that it took to get this good. He says he sat behind his kit almost all day, every day for years. At a time when machines appear to do it all, there’s a kind of defiance in that level of human effort. It’s why my favorites are Garrido’s covers of electronic music, where he out-drums the drum machine. Check out his version of Skrillex and Missy Elliott’s “Ra Ta Ta” and tell me it doesn’t put happiness in your heart.

Finding signs of life in the uncanny valley

Watching Sora videos of Michael Jackson stealing a box of chicken nuggets or Sam Altman biting into the pink meat of a flame-grilled Pikachu has given me flashbacks to an Ed Atkins exhibition at Tate Britain I saw a few months ago. Atkins is one of the most influential and unsettling British artists of his generation. He is best known for hyper-detailed CG animations of himself (pore-perfect skin, janky movement) that play with the virtual representation of human emotions. 


In The Worm we see a CGI Atkins make a long-distance call to his mother during a covid lockdown. The audio is from a recording of an actual conversation. Are we watching Atkins cry or his avatar? Our attention flickers between two realities. “When an actor breaks character during a scene, it’s known as corpsing,” Atkins has said. “I want everything I make to corpse.” Next to Atkins’s work, generative videos look like cardboard cutouts: lifelike but not alive.

A dark and dirty book about a talking dingo

What’s it like to be a pet? Australian author Laura Jean McKay’s debut novel, The Animals in That Country, will make you wish you’d never asked. A flu-like pandemic leaves people with the ability to hear what animals are saying. If that sounds too Dr. Dolittle for your tastes, rest assured: These animals are weird and nasty. A lot of the time they don’t even make any sense. 


With everybody now talking to their computers, McKay’s book resets the anthropomorphic trap we’ve all fallen into. It’s a brilliant evocation of what a nonhuman mind might contain, and a meditation on the hard limits of communication.

Job titles of the future: Head-transplant surgeon

The Italian neurosurgeon Sergio Canavero has been preparing for a surgery that might never happen. His idea? Swap a sick person’s head—or perhaps just the brain—onto a younger, healthier body.

Canavero caused a stir in 2017 when he announced that a team he advised in China had exchanged heads between two corpses. But he never convinced skeptics that his technique could succeed—or that, as he claimed, a procedure on a live person was imminent. The Chicago Tribune labeled him the “P.T. Barnum of transplantation.”

Canavero withdrew from the spotlight. But the idea of head transplants isn’t going away. Instead, he says, the concept has recently been getting a fresh look from life-extension enthusiasts and stealth Silicon Valley startups.

Career path

It’s been rocky. After he began publishing his surgical ideas a decade ago, Canavero says, he got his “pink slip” from the Molinette Hospital in Turin, where he’d spent 22 years on staff. “I’m an out-of-the-establishment guy. So that has made things harder, I have to say,” he says.  

Why he persists

No other solution to aging is on the horizon. “It’s become absolutely clear over the past years that the idea of some incredible tech to rejuvenate elderly people—happening in some secret lab, like Google—is really going nowhere,” he says. “You have to go for the whole shebang.”

The whole shebang?

He means getting a new body, not just one new organ. Canavero has an easy mastery of English idioms and an unexpected Southern twang. He says that’s due to a fascination with American comics as a child. “For me, learning the language of my heroes was paramount,” he says. “So I can shoot the breeze.” 

Cloned bodies

Canavero is now an independent investigator and has advised entrepreneurs who want to create brainless human clones as a source of DNA-matched organs that wouldn’t get rejected by a recipient’s immune system. “I can tell you there are guys from top universities involved,” he says.

What’s next

Combining the necessary technologies, like reliably precise surgical robots and artificial wombs to grow the clones, is going to be complex and very, very expensive. Canavero lacks the funds to take his plans further, but he believes “the money is out there” for a commercial moonshot project: “What I say to the billionaires is ‘Come together.’ You will all have your own share, plus make yourselves immortal.”

Why inventing new emotions feels so good

Have you ever felt “velvetmist”? 

It’s a “complex and subtle emotion that elicits feelings of comfort, serenity, and a gentle sense of floating.” It’s peaceful, but more ephemeral and intangible than contentment. It might be evoked by the sight of a sunset or a moody, low-key album.  

If you haven’t ever felt this sensation—or even heard of it—that’s not surprising. A Reddit user named noahjeadie generated it with ChatGPT, along with advice on how to evoke the feeling. With the right essential oils and soundtrack, apparently, you too can feel like “a soft fuzzy draping ghost floating through a lavender suburb.”

Don’t scoff: Researchers say more and more terms for these “neo-emotions” are showing up online, describing new dimensions and aspects of feeling. Velvetmist was a key example in a journal article about the phenomenon published in July 2025. But most neo-emotions aren’t the inventions of emo artificial intelligences. Humans come up with them, and they’re part of a big change in the way researchers are thinking about feelings, one that emphasizes how people continuously spin out new ones in response to a changing world. 

Velvetmist might’ve been a chatbot one-off, but it’s not unique. The sociologist Marci Cottingham—whose 2024 paper got this vein of neo-emotion research started—cites many more new terms in circulation. There’s “Black joy” (Black people celebrating embodied pleasure as a form of political resistance), “trans euphoria” (the joy of having one’s gender identity affirmed and celebrated), “eco-anxiety” (the hovering fear of climate disaster), “hypernormalization” (the surreal pressure to continue performing mundane life and labor under capitalism during a global pandemic or fascist takeover), and the sense of “doom” found in “doomer” (one who is relentlessly pessimistic) or “doomscrolling” (being glued to an endless feed of bad news in an immobilized state combining apathy and dread). 

Of course, emotional vocabulary is always evolving. During the Civil War, doctors used the centuries-old term “nostalgia,” combining the Greek words for “returning home” and “pain,” to describe a sometimes fatal set of symptoms suffered by soldiers—a condition we’d probably describe today as post-traumatic stress disorder. Now nostalgia’s meaning has mellowed and faded to a gentle affection for an old cultural product or vanished way of life. And people constantly import emotion words from other cultures when they’re convenient or evocative—like hygge (the Danish word for friendly coziness) or kvell (a Yiddish term for brimming over with happy pride). 

Cottingham believes that neo-emotions are proliferating as people spend more of their lives online. These coinages help us relate to one another and make sense of our experiences, and they get a lot of engagement on social media. So even when a neo-emotion is just a subtle variation on, or combination of, existing feelings, getting super-specific about those feelings helps us reflect and connect with other people. “These are potentially signals that tell us about our place in the world,” she says. 

These neo-emotions are part of a paradigm shift in emotion science. For decades, researchers argued that humans all share a set of a half-dozen or so basic emotions. But over the last decade, Lisa Feldman Barrett, a clinical psychologist at Northeastern University, has become one of the most cited scientists in the world for work demonstrating otherwise. By using tools like advanced brain imaging and studying babies and people from relatively isolated cultures, she has concluded there’s no such thing as a basic emotional palette. The way we experience and talk about our feelings is culturally determined. “How do you know what anger and sadness and fear are? Because somebody taught you,” Barrett says. 

If there are no true “basic” biological emotions, this puts more emphasis on social and cultural variations in how we interpret our experiences. And these interpretations can change over time. “As a sociologist, we think of all emotions as created,” Cottingham says. Just like any other tool humans make and use, “emotions are a practical resource people are using as they navigate the world.” 

Some neo-emotions, like velvetmist, might be mere novelties. Barrett playfully suggests “chiplessness” to describe the combined hunger, frustration, and relief of getting to the bottom of the bag. But others, like eco-anxiety and Black joy, can take on a life of their own and help galvanize social movements.  

Both reading about and crafting your own neo-emotions, with or without chatbot assistance, could be surprisingly helpful. Lots of research supports the benefits of emotional granularity. Basically, the more detailed and specific words you can use to describe your emotions, both positive and negative, the better. 

Researchers analogize this “emodiversity” to biodiversity or cultural diversity, arguing that a more diverse inner world is a richer one. It turns out that people who exhibit higher emotional granularity go to the doctor less frequently, spend fewer days hospitalized for illness, and are less likely to drink when stressed, drive recklessly, or smoke cigarettes. And many studies show emodiversity is a skill that, with training, people can develop at any age. Just imagine cruising into this sweet, comforting future. Is the idea giving you a certain dreamy thrill?

Are you sure you’ve never felt velvetmist?

Anya Kamenetz is a freelance education reporter who writes the Substack newsletter The Golden Hour.

The ascent of the AI therapist

We’re in the midst of a global mental-health crisis. More than a billion people worldwide suffer from a mental-health condition, according to the World Health Organization. The prevalence of anxiety and depression is growing in many demographics, particularly young people, and suicide is claiming hundreds of thousands of lives globally each year.

Given the clear demand for accessible and affordable mental-health services, it’s no wonder that people have looked to artificial intelligence for possible relief. Millions are already actively seeking therapy from popular chatbots like OpenAI’s ChatGPT and Anthropic’s Claude, or from specialized psychology apps like Wysa and Woebot. On a broader scale, researchers are exploring AI’s potential to monitor and collect behavioral and biometric observations using wearables and smart devices, analyze vast volumes of clinical data for new insights, and assist human mental-health professionals to help prevent burnout. 

But so far this largely uncontrolled experiment has produced mixed results. Many people have found solace in chatbots based on large language models (LLMs), and some experts see promise in them as therapists, but other users have been sent into delusional spirals by AI’s hallucinatory whims and breathless sycophancy. Most tragically, multiple families have alleged that chatbots contributed to the suicides of their loved ones, sparking lawsuits against companies responsible for these tools. In October, OpenAI CEO Sam Altman revealed in a blog post that 0.15% of ChatGPT users “have conversations that include explicit indicators of potential suicidal planning or intent.” That’s roughly a million people sharing suicidal ideations with just one of these software systems every week.

The real-world consequences of AI therapy came to a head in unexpected ways in 2025 as we waded through a critical mass of stories about human-chatbot relationships, the flimsiness of guardrails on many LLMs, and the risks of sharing profoundly personal information with products made by corporations that have economic incentives to harvest and monetize such sensitive data. 

Several authors anticipated this inflection point. Their timely books are a reminder that while the present feels like a blur of breakthroughs, scandals, and confusion, this disorienting time is rooted in deeper histories of care, technology, and trust. 

LLMs have often been described as “black boxes” because nobody knows exactly how they produce their results. The inner workings that guide their outputs are opaque because their algorithms are so complex and their training data is so vast. In mental-health circles, people often describe the human brain as a “black box,” for analogous reasons. Psychology, psychiatry, and related fields must grapple with the impossibility of seeing clearly inside someone else’s head, let alone pinpointing the exact causes of their distress. 

These two types of black boxes are now interacting with each other, creating unpredictable feedback loops that may further impede clarity about the origins of people’s mental-health struggles and the solutions that may be possible. Anxiety about these developments has much to do with the explosive recent advances in AI, but it also revives decades-old warnings from pioneers such as the MIT computer scientist Joseph Weizenbaum, who argued against computerized therapy as early as the 1960s.  


Dr. Bot: Why Doctors Can Fail Us—and How AI Could Save Lives
Charlotte Blease
YALE UNIVERSITY PRESS, 2025

Charlotte Blease, a philosopher of medicine, makes the optimist’s case in Dr. Bot: Why Doctors Can Fail Us—and How AI Could Save Lives. Her book broadly explores the possible positive impacts of AI in a range of medical fields. While she remains clear-eyed about the risks, warning that readers who are expecting “a gushing love letter to technology” will be disappointed, she suggests that these models can help relieve patient suffering and medical burnout alike.

“Health systems are crumbling under patient pressure,” Blease writes. “Greater burdens on fewer doctors create the perfect petri dish for errors,” and “with palpable shortages of doctors and increasing waiting times for patients, many of us are profoundly frustrated.”

Blease believes that AI can not only ease medical professionals’ massive workloads but also relieve the tensions that have always existed between some patients and their caregivers. For example, people often don’t seek needed care because they are intimidated or fear judgment from medical professionals; this is especially true if they have mental-health challenges. AI could allow more people to share their concerns, she argues. 

But she’s aware that these putative upsides need to be weighed against major drawbacks. For instance, AI therapists can provide inconsistent and even dangerous responses to human users, according to a 2025 study, and they also raise privacy concerns, given that AI companies are currently not bound by the same confidentiality and HIPAA standards as licensed therapists. 

While Blease is an expert in this field, her motivation for writing the book is also personal: She has two siblings with an incurable form of muscular dystrophy, one of whom waited decades for a diagnosis. During the writing of her book, she also lost her partner to cancer and her father to dementia within a devastating six-month period. “I witnessed first-hand the sheer brilliance of doctors and the kindness of health professionals,” she writes. “But I also observed how things can go wrong with care.”


The Silicon Shrink: How Artificial Intelligence Made the World an Asylum
Daniel Oberhaus
MIT PRESS, 2025

A similar tension animates Daniel Oberhaus’s engrossing book The Silicon Shrink: How Artificial Intelligence Made the World an Asylum. Oberhaus starts from a point of tragedy: the loss of his younger sister to suicide. As Oberhaus carried out the “distinctly twenty-first-century mourning process” of sifting through her digital remains, he wondered if technology could have eased the burden of the psychiatric problems that had plagued her since childhood.

“It seemed possible that all of this personal data might have held important clues that her mental health providers could have used to provide more effective treatment,” he writes. “What if algorithms running on my sister’s smartphone or laptop had used that data to understand when she was in distress? Could it have led to a timely intervention that saved her life? Would she have wanted that even if it did?”

This concept of digital phenotyping—in which a person’s digital behavior could be mined for clues about distress or illness—seems elegant in theory. But it may also become problematic if integrated into the field of psychiatric artificial intelligence (PAI), which extends well beyond chatbot therapy.

Oberhaus emphasizes that digital clues could actually exacerbate the existing challenges of modern psychiatry, a discipline that remains fundamentally uncertain about the underlying causes of mental illnesses and disorders. The advent of PAI, he says, is “the logical equivalent of grafting physics onto astrology.” In other words, the data generated by digital phenotyping is as precise as physical measurements of planetary positions, but it is then integrated into a broader framework—in this case, psychiatry—that, like astrology, is based on unreliable assumptions.  

Oberhaus, who uses the phrase “swipe psychiatry” to describe the outsourcing of clinical decisions based on behavioral data to LLMs, thinks that this approach cannot escape the fundamental issues facing psychiatry. In fact, it could worsen the problem by causing the skills and judgment of human therapists to atrophy as they grow more dependent on AI systems. 

He also uses the asylums of the past—in which institutionalized patients lost their right to freedom, privacy, dignity, and agency over their lives—as a touchstone for a more insidious digital captivity that may spring from PAI. LLM users are already sacrificing privacy by telling chatbots sensitive personal information that companies then mine and monetize, contributing to a new surveillance economy. Freedom and dignity are at stake when complex inner lives are transformed into data streams tailored for AI analysis. 

AI therapists could flatten humanity into patterns of prediction, and so sacrifice the intimate, individualized care that is expected of traditional human therapists. “The logic of PAI leads to a future where we may all find ourselves patients in an algorithmic asylum administered by digital wardens,” Oberhaus writes. “In the algorithmic asylum there is no need for bars on the window or white padded rooms because there is no possibility of escape. The asylum is already everywhere—in your homes and offices, schools and hospitals, courtrooms and barracks. Wherever there’s an internet connection, the asylum is waiting.”


Chatbot Therapy: A Critical Analysis of AI Mental Health Treatment
Eoin Fullam
ROUTLEDGE, 2025

Eoin Fullam, a researcher who studies the intersection of technology and mental health, echoes some of the same concerns in Chatbot Therapy: A Critical Analysis of AI Mental Health Treatment. A heady academic primer, the book analyzes the assumptions underlying the automated treatments offered by AI chatbots and the way capitalist incentives could corrupt these kinds of tools.  

Fullam observes that the capitalist mentality behind new technologies “often leads to questionable, illegitimate, and illegal business practices in which the customers’ interests are secondary to strategies of market dominance.”

That doesn’t mean that therapy-bot makers “will inevitably conduct nefarious activities contrary to the users’ interests in the pursuit of market dominance,” Fullam writes. 

But he notes that the success of AI therapy depends on the inseparable impulses to make money and to heal people. In this logic, exploitation and therapy feed each other: Every digital therapy session generates data, and that data fuels the system that profits as unpaid users seek care. The more effective the therapy seems, the more the cycle entrenches itself, making it harder to distinguish between care and commodification. “The more the users benefit from the app in terms of its therapeutic or any other mental health intervention,” he writes, “the more they undergo exploitation.” 


This sense of an economic and psychological ouroboros—the snake that eats its own tail—serves as a central metaphor in Sike, the debut novel from Fred Lunzer, an author with a research background in AI. 

Described as a “story of boy meets girl meets AI psychotherapist,” Sike follows Adrian, a young Londoner who makes a living ghostwriting rap lyrics, in his romance with Maquie, a business professional with a knack for spotting lucrative technologies in the beta phase. 

cover of Sike
Sike
Fred Lunzer
CELADON BOOKS, 2025

The title refers to a splashy commercial AI therapist called Sike, uploaded into smart glasses, that Adrian uses to interrogate his myriad anxieties. “When I signed up to Sike, we set up my dashboard, a wide black panel like an airplane’s cockpit that showed my daily ‘vitals,’” Adrian narrates. “Sike can analyze the way you walk, the way you make eye contact, the stuff you talk about, the stuff you wear, how often you piss, shit, laugh, cry, kiss, lie, whine, and cough.”

In other words, Sike is the ultimate digital phenotyper, constantly and exhaustively analyzing everything in a user’s daily experiences. In a twist, Lunzer chooses to make Sike a luxury product, available only to subscribers who can foot the price tag of £2,000 per month. 

Flush with cash from his contributions to a hit song, Adrian comes to rely on Sike as a trusted mediator between his inner and outer worlds. The novel explores the impacts of the app on the wellness of the well-off, following rich people who voluntarily commit themselves to a boutique version of the digital asylum described by Oberhaus.

The only real sense of danger in Sike involves a Japanese torture egg (don’t ask). The novel strangely sidesteps the broader dystopian ripples of its subject matter in favor of drunken conversations at fancy restaurants and elite dinner parties. 

Sike’s creator is simply “a great guy” in Adrian’s estimation, despite his techno-messianic vision of training the app to soothe the ills of entire nations. It always seems as if the other shoe is about to drop, but in the end it never does, leaving the reader without resolution.

While Sike is set in the present day, something about the sudden ascent of the AI therapist—in real life as well as in fiction—seems startlingly futuristic, as if it should be unfolding in some later time when the streets scrub themselves and we travel the world through pneumatic tubes. But this convergence of mental health and artificial intelligence has been in the making for more than half a century. The beloved astronomer Carl Sagan, for example, once imagined a “network of computer psychotherapeutic terminals, something like arrays of large telephone booths” that could address the growing demand for mental-health services.

Oberhaus notes that one of the first incarnations of a trainable neural network, known as the Perceptron, was devised not by a mathematician but by a psychologist named Frank Rosenblatt, at the Cornell Aeronautical Laboratory in 1958. The potential utility of AI in mental health was widely recognized by the 1960s, inspiring early computerized psychotherapists such as the DOCTOR script that ran on the ELIZA chatbot developed by Joseph Weizenbaum, who shows up in all three of the nonfiction books in this article.

Weizenbaum, who died in 2008, was profoundly concerned about the possibility of computerized therapy. “Computers can make psychiatric judgments,” he wrote in his 1976 book Computer Power and Human Reason. “They can flip coins in much more sophisticated ways than can the most patient human being. The point is that they ought not to be given such tasks. They may even be able to arrive at ‘correct’ decisions in some cases—but always and necessarily on bases no human being should be willing to accept.”

It’s a caution worth keeping in mind. As AI therapists arrive at scale, we’re seeing them play out a familiar dynamic: Tools designed with superficially good intentions are enmeshed with systems that can exploit, surveil, and reshape human behavior. In a frenzied attempt to unlock new opportunities for patients in dire need of mental-health support, we may be locking other doors behind them.

Becky Ferreira is a science reporter based in upstate New York and author of First Contact: The Story of Our Obsession with Aliens.

Bangladesh’s garment-making industry is getting greener

Pollution from textile production—dyes, chemicals, and heavy metals like lead and cadmium—is common in the waters of the Buriganga River as it runs through Dhaka, Bangladesh. It’s among many harms posed by a garment sector that was once synonymous with tragedy: In 2013, the eight-story Rana Plaza factory building collapsed, killing 1,134 people and injuring some 2,500 others. 

colored water pouring out of a cement tunnel into a river with a city in the far distance
Wastewater from Bangladesh’s garment industry flows into the Buriganga River.
ZAKIR HOSSAIN CHOWDHURY

But things are starting to change. In recent years the country has quietly become an unlikely leader in “frugal” factories that use a combination of resource-efficient technologies to cut waste, conserve water, and build resilience against climate impacts and global supply disruptions. Bangladesh now boasts 268 LEED-certified garment factories—more than any other country. Dye plants are using safer chemicals, tanneries are adopting cleaner tanning methods and treating wastewater, workshops are switching to more efficient LED lighting, and solar panels glint from rooftops. The hundreds of factories along the Buriganga’s banks and elsewhere in Bangladesh are starting to stitch together a new story, woven from greener threads.

a single factory worker in the midst of many workstation tables under industrial lighting fixtures
These energy-efficient, automated template sewing machines at the Fakir Eco Knitwears factory near Bangladesh’s capital help workers reduce waste.
ZAKIR HOSSAIN CHOWDHURY

In Fakir Eco Knitwears’ LEED Gold–certified factory in Narayanganj, a city near Dhaka, skylights reduce energy consumption from electric lighting by 40%, and AI-driven cutters allow workers to recycle 95% of fabric scraps into new yarns. “We save energy by using daylight, solar power, and rainwater instead of heavy AC and boilers,” says Md. Anisuzzaman, an engineer at the company. “It shows how local resources can make production greener and more sustainable.” 

The shift to green factories in Bangladesh is financed through a combination of factory investments, loans from Bangladesh Bank’s Green Transformation Fund, and pressure from international buyers who reward compliance with ongoing orders. One prominent program is the Partnership for Cleaner Textile (PaCT), an initiative run by the World Bank Group’s International Finance Corporation. Launched in 2013, PaCT has worked with more than 450 factories on cleaner production methods. By its count, the effort now saves 35 billion liters of fresh water annually, enough to meet the needs of 1.9 million people.

solar panels on a factory roof
Solar panels on top of the factory help reduce its energy footprint.
ZAKIR HOSSAIN CHOWDHURY
An exhaust gas absorption chiller absorbs heat and helps maintain the factory floor’s temperature at around 28 °C (82 °F).
ZAKIR HOSSAIN CHOWDHURY

Water reclaimed at the factory’s sewage treatment plant is used in the facility’s restrooms.
ZAKIR HOSSAIN CHOWDHURY

It’s a good start, but Bangladesh’s $40 billion garment industry still has a long way to go. The shift to environmentalism at the factory level hasn’t translated to improved outcomes for the sector’s 4.4 million workers. 

Wage theft and delayed payments are widespread. The minimum wage, some 12,500 taka per month (about $113), is far below the $200 proposed by unions—which has meant frequent strikes and protests over pay, overtime, and job security. “Since Rana Plaza, building safety and factory conditions have improved, but the mindset remains unchanged,” says A.K.M. Ashraf Uddin, executive director of the Bangladesh Labour Foundation, a nonprofit labor rights group. “Profit still comes first, and workers’ freedom of speech is yet to be realized.”

The smaller factories that dominate the garment sector may struggle to invest in green upgrades.
ZAKIR HOSSAIN CHOWDHURY

In the worst case, greener industry practices could actually exacerbate inequality. Smaller factories dominate the sector, and they struggle to afford upgrades. But without those upgrades, businesses could find themselves excluded from certain markets. One of those is the European Union, which plans to require companies to address human rights and environmental problems in supply chains starting in 2027. A cleaner Buriganga River mends just a small corner of a vast tapestry of need. 

Zakir Hossain Chowdhury is a visual journalist based in Bangladesh.

The paints, coatings, and chemicals making the world a cooler place

It’s getting harder to beat the heat. During the summer of 2025, heat waves knocked out power grids in North America, Europe, and the Middle East. Global warming means more people need air-­conditioning, which requires more power and strains grids. But a millennia-old idea (plus 21st-century tech) might offer an answer: radiative cooling. Paints, coatings, and textiles can scatter sunlight and dissipate heat—no additional energy required.

“Radiative cooling is universal—it exists everywhere in our daily life,” says Qiaoqiang Gan, a professor of materials science and applied physics at King Abdullah University of Science and Technology in Saudi Arabia. Pretty much any object will absorb heat from the sun during the day and radiate some of it back at night. It’s why cars parked outside overnight are often covered with condensation, Gan says—their metal roofs dissipate heat into the sky, cooling the surfaces below the ambient air temperature. That’s how you get dew.

Humans have harnessed this basic natural process for thousands of years. Desert peoples in Iran, North Africa, and India manufactured ice by leaving pools of water exposed to clear desert skies overnight, when radiative cooling happens naturally; other cultures constructed “cool roofs” capped with reflective materials that scattered sunlight and lowered interior temperatures. “People have taken advantage of this effect, either knowingly or unknowingly, for a very long time,” says Aaswath Raman, a materials scientist at UCLA and cofounder of the radiative-cooling startup SkyCool Systems.

Modern approaches, as demonstrated everywhere from California supermarket rooftops to Japan’s Expo 2025 pavilion, go even further. Normally, if the sun is up and pumping in heat, surfaces can’t get cooler than the ambient temperature. But back in 2014, Raman and his colleagues achieved radiative cooling in the daytime. They customized photonic films to absorb and then radiate heat at infrared wavelengths between 8 and 13 micrometers—a range of electromagnetic wavelengths called an “atmospheric window,” because that radiation escapes to space rather than getting absorbed. Those films could dissipate heat even under full sun, cooling the inside of a building to 9 °F below ambient temperatures, with no AC or energy source required.
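To see why that particular band matters, here is a back-of-the-envelope sketch (my own illustration, not from the article; the constants are standard physics) that numerically integrates Planck’s law to estimate what share of a room-temperature surface’s thermal emission falls inside the 8-to-13-micrometer atmospheric window:

```python
import math

# Physical constants (CODATA values)
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck(lam, T):
    """Blackbody spectral radiance per unit wavelength at temperature T."""
    return (2 * H * C**2 / lam**5) / (math.exp(H * C / (lam * KB * T)) - 1)

def band_fraction(T, lam1, lam2, steps=20000):
    """Fraction of total thermal emission between wavelengths lam1 and lam2 (meters)."""
    dlam = (lam2 - lam1) / steps
    # Midpoint-rule integration of Planck's law over the band
    band = sum(planck(lam1 + (i + 0.5) * dlam, T) for i in range(steps)) * dlam
    # Closed-form integral over all wavelengths (Stefan-Boltzmann law / pi)
    total = (2 * math.pi**4 * KB**4 * T**4) / (15 * H**3 * C**2)
    return band / total

frac = band_fraction(300.0, 8e-6, 13e-6)
print(f"{frac:.0%}")  # roughly a third of a 300 K body's emission
```

At around 300 K, roughly a third of the emitted heat lands in the window band, which is why a surface tuned to emit there can dump a meaningful share of its heat straight to space.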

That was proof of concept; today, Raman says, the industry has mostly shifted away from advanced photonics that use the atmospheric-window effect to simpler sunlight-scattering materials. Ceramic cool roofs, nanostructure coatings, and reflective polymers all offer the possibility of diverting more sunlight across all wavelengths, and they’re more durable and scalable.

Now the race is on. Startups such as SkyCool, Planck Energies, Spacecool, and i2Cool are competing to commercially manufacture and sell coatings that reflect at least 94% of sunlight in most climates, and above 97% in humid tropical ones. Pilot projects have already provided significant cooling to residential buildings, reducing AC energy needs by 15% to 20% in some cases. 

This idea could go way beyond reflective rooftops and roads. Researchers are developing reflective textiles that can be worn by people most at risk of heat exposure. “This is personal thermal management,” says Gan. “We can realize passive cooling in T-shirts, sportswear, and garments.” 

thermal image of a person on a rooftop holding a stick in a bucket
A thermal image captured during a SkyCool installation shows treated areas (white, yellow) that are roughly 35 °C cooler than the surrounding rooftop.
COURTESY OF SKYCOOL SYSTEMS

Of course, these technologies and materials have limits. Like solar power grids, they’re vulnerable to weather. Clouds prevent reflected sunlight from bouncing into space. Dust and air pollution dim materials’ bright surfaces. Lots of coatings lose their reflectivity after a few years. And the cheapest and toughest materials used in radiative cooling tend to rely on Teflon and other fluoropolymers, “forever chemicals” that don’t biodegrade, posing an environmental risk. “They are the best class of products that tend to survive outdoors,” says Raman. “So for long-term scale-up, can you do it without materials like those fluoropolymers and still maintain the durability and hit this low cost point?”

As with any other solution to the problems of climate change, one size won’t fit all. “We cannot be overoptimistic and say that radiative cooling can address all our future needs,” Gan says. “We still need more efficient active air-conditioning.” A shiny roof isn’t a panacea, but it’s still pretty cool. 

Becky Ferreira is a science reporter based in upstate New York and author of First Contact: The Story of Our Obsession with Aliens.

MIT Technology Review’s most popular stories of 2025

It’s been a busy and productive year here at MIT Technology Review. We published magazine issues on power, creativity, innovation, bodies, relationships, and security. We hosted 14 exclusive virtual conversations with our editors and outside experts in our subscriber-only series, Roundtables, and held two events on MIT’s campus. And we published hundreds of articles online, following new developments in computing, climate tech, robotics, and more. 

As the year winds down, we wanted to give you a chance to revisit a bit of this work with us. Whether we were covering the red-hot rise of artificial intelligence or the future of biotech, these are some of the stories that resonated the most with our readers. 

We did the math on AI’s energy footprint. Here’s the story you haven’t heard.

Understanding AI’s energy use was a huge global conversation in 2025 as hundreds of millions of people began using generative AI tools on a regular basis. Senior reporters James O’Donnell and Casey Crownhart dug into the numbers and published an unprecedented look at AI’s resource demand, down to the level of a single query, to help us know how much energy and water AI may require moving forward. 

We’re learning more about what vitamin D does to our bodies

Vitamin D deficiency is widespread, particularly in the winter when there’s less sunlight to drive its production in our bodies. The “sunshine vitamin” is important for bone health, but as senior reporter Jessica Hamzelou reported, recent research is also uncovering surprising new insights into other ways it might influence our bodies, including our immune systems and heart health.

What is AI?

Senior editor Will Douglas Heaven’s expansive look at how to define AI was published in 2024, but it still managed to connect with many readers this year. He lays out why no one can agree on what AI is—and explains why that ambiguity matters, and how it can inform our own critical thinking about this technology.

Ethically sourced “spare” human bodies could revolutionize medicine

In this thought-provoking op-ed, a team of experts at Stanford University argue that creating living human bodies that can’t think, don’t have any awareness, and can’t feel pain could shake up medical research and drug development by providing essential biological materials for testing and transplantation. Recent advances in biotechnology now provide a potential pathway to such “bodyoids,” though plenty of technical challenges and ethical hurdles remain. 

It’s surprisingly easy to stumble into a relationship with an AI chatbot

Chatbots were everywhere this year, and reporter Rhiannon Williams chronicled how quickly people can develop bonds with one. That’s all right for some people, she notes, but dangerous for others. Some folks even describe unintentionally forming romantic relationships with chatbots. This is a trend we’ll definitely be keeping an eye on in 2026. 

Is this the electric grid of the future?

The electric grid is bracing for disruption from more frequent storms and fires, as well as an uncertain policy and regulatory landscape. And in many ways, the publicly owned utility Lincoln Electric System in Nebraska is an ideal lens through which to examine this shift as it works through the challenges of delivering service that’s reliable, affordable, and sustainable.

Exclusive: A record-breaking baby has been born from an embryo that’s over 30 years old

This year saw the birth of the world’s “oldest baby”: Thaddeus Daniel Pierce, who arrived on July 26. The embryo he developed from was created in 1994 during the early days of IVF and had been frozen and sitting in storage ever since. The new baby’s parents were toddlers at the time, and the embryo was donated to them decades later via a Christian “embryo adoption” agency.  

How these two brothers became go-to experts on America’s “mystery drone” invasion

Twin brothers John and Gerald Tedesco teamed up to investigate a concerning new threat—unidentified drones. In 2024 alone, some 350 drones entered airspace over a hundred different US military installations, and many cases went unsolved, according to a top military official. This story takes readers inside the equipment-filled RV the Tedescos created to study mysterious aerial phenomena, and how they made a name for themselves among government officials. 

10 Breakthrough Technologies of 2025 

For more than 20 years, our newsroom has published this annual look at advances that will matter in the long run. This year’s list featured generative AI search, cleaner jet fuel, long-acting HIV prevention meds, and other emerging technologies that our journalists think are worth watching. We’ll publish the 2026 edition of the list on January 12, so stay tuned. (In the meantime, here’s what didn’t make the cut.)

AI Wrapped: The 14 AI terms you couldn’t avoid in 2025

If the past 12 months have taught us anything, it’s that the AI hype train is showing no signs of slowing. It’s hard to believe that at the beginning of the year, DeepSeek had yet to turn the entire industry on its head, Meta was better known for trying (and failing) to make the metaverse cool than for its relentless quest to dominate superintelligence, and vibe coding wasn’t a thing.

If that’s left you feeling a little confused, fear not. As we near the end of 2025, our writers have taken a look back over the AI terms that dominated the year, for better or worse.

Make sure you take the time to brace yourself for what promises to be another bonkers year.

—Rhiannon Williams

1. Superintelligence

As long as people have been hyping AI, they have been coming up with names for a future, ultra-powerful form of the technology that could bring about utopian or dystopian consequences for humanity. “Superintelligence” is the latest hot term. Meta announced in July that it would form an AI team to pursue superintelligence, and it was reportedly offering nine-figure compensation packages to lure AI experts away from its competitors.

In December, Microsoft’s head of AI followed suit, saying the company would be spending big sums, perhaps hundreds of billions, on the pursuit of superintelligence. If you think superintelligence is as vaguely defined as artificial general intelligence, or AGI, you’d be right! While it’s conceivable that these sorts of technologies will be feasible in humanity’s long run, the question is really when, and whether today’s AI is good enough to be treated as a stepping stone toward something like superintelligence. Not that that will stop the hype kings. —James O’Donnell

2. Vibe coding

Thirty years ago, Steve Jobs said everyone in America should learn how to program a computer. Today, people with zero knowledge of how to code can knock up an app, game, or website in no time at all thanks to vibe coding—a catch-all phrase coined by OpenAI cofounder Andrej Karpathy. To vibe-code, you simply prompt a generative AI coding assistant to create the digital object of your desire and accept pretty much everything it spits out. Will the result work? Possibly not. Will it be secure? Almost definitely not. But the technique’s biggest champions aren’t letting those minor details stand in their way. Also—it sounds fun! —Rhiannon Williams

3. Chatbot psychosis

One of the biggest AI stories over the past year has been how prolonged interactions with chatbots can cause vulnerable people to experience delusions and, in some extreme cases, can either cause or worsen psychosis. Although “chatbot psychosis” is not a recognized medical term, researchers are paying close attention to the growing anecdotal evidence from users who say it’s happened to them or someone they know. Sadly, the increasing number of lawsuits filed against AI companies by the families of people who died following their conversations with chatbots demonstrates the technology’s potentially deadly consequences. —Rhiannon Williams

4. Reasoning

Few things kept the AI hype train going this year more than so-called reasoning models, LLMs that can break down a problem into multiple steps and work through them one by one. OpenAI released its first reasoning models, o1 and o3, a year ago.

A month later, the Chinese firm DeepSeek took everyone by surprise with a very fast follow, putting out R1, the first open-source reasoning model. In no time, reasoning models became the industry standard: All major mass-market chatbots now come in flavors backed by this tech. Reasoning models have pushed the envelope of what LLMs can do, matching top human performances in prestigious math and coding competitions. On the flip side, all the buzz about LLMs that could “reason” reignited old debates about how smart LLMs really are and how they really work. Like “artificial intelligence” itself, “reasoning” is technical jargon dressed up with marketing sparkle. Choo choo! —Will Douglas Heaven

5. World models 

For all their uncanny facility with language, LLMs have very little common sense. Put simply, they don’t have any grounding in how the world works. Book learners in the most literal sense, LLMs can wax lyrical about everything under the sun and then fall flat with a howler about how many elephants you could fit into an Olympic swimming pool (exactly one, according to one of Google DeepMind’s LLMs).

World models—a broad church encompassing various technologies—aim to give AI some basic common sense about how stuff in the world actually fits together. In their most vivid form, world models like Google DeepMind’s Genie 3 and Marble, the much-anticipated new tech from Fei-Fei Li’s startup World Labs, can generate detailed and realistic virtual worlds for robots to train in and more. Yann LeCun, Meta’s former chief scientist, is also working on world models. He has been trying to give AI a sense of how the world works for years, by training models to predict what happens next in videos. This year he quit Meta to focus on this approach at a new startup called Advanced Machine Intelligence Labs. If all goes well, world models could be the next big thing. —Will Douglas Heaven

6. Hyperscalers

Have you heard about all the people saying no thanks, we actually don’t want a giant data center plopped in our backyard? The data centers in question—which tech companies want to build everywhere, including in space—are typically referred to as hyperscalers: massive buildings purpose-built for AI operations and used by the likes of OpenAI and Google to build bigger and more powerful AI models. Inside such buildings, the world’s best chips hum away training and fine-tuning models, and they’re built to be modular so they can grow as needs do.

It’s been a big year for hyperscalers. OpenAI announced, alongside President Donald Trump, its Stargate project, a $500 billion joint venture to pepper the country with the largest data centers ever. But it leaves almost everyone else asking: What exactly do we get out of it? Consumers worry the new data centers will raise their power bills. Such buildings generally struggle to run on renewable energy. And they don’t tend to create all that many jobs. But hey, maybe these massive, windowless buildings could at least give a moody, sci-fi vibe to your community. —James O’Donnell

7. Bubble

The lofty promises of AI are levitating the economy. AI companies are raising eye-popping sums of money and watching their valuations soar into the stratosphere. They’re pouring hundreds of billions of dollars into chips and data centers, financed increasingly by debt and eyebrow-raising circular deals. Meanwhile, the companies leading the gold rush, like OpenAI and Anthropic, might not turn a profit for years, if ever. Investors are betting big that AI will usher in a new era of riches, yet no one knows how transformative the technology will actually be.

Most organizations using AI aren’t yet seeing the payoff, and AI work slop is everywhere. There’s scientific uncertainty about whether scaling LLMs will deliver superintelligence or whether new breakthroughs need to pave the way. But unlike their predecessors in the dot-com bubble, AI companies are showing strong revenue growth, and some are even deep-pocketed tech titans like Microsoft, Google, and Meta. Will the manic dream ever burst? —Michelle Kim

8. Agentic

This year, AI agents were everywhere. Every new feature announcement, model drop, or security report throughout 2025 was peppered with mentions of them, even though plenty of AI companies and experts disagree on exactly what counts as being truly “agentic,” a vague term if ever there was one. No matter that it’s virtually impossible to guarantee that an AI acting on your behalf out in the wide web will always do exactly what it’s supposed to do—it seems as though agentic AI is here to stay for the foreseeable future. Want to sell something? Call it agentic! —Rhiannon Williams

9. Distillation

Early this year, DeepSeek unveiled its new model DeepSeek R1, an open-source reasoning model that matches top Western models but costs a fraction of the price. Its launch freaked Silicon Valley out, as many suddenly realized for the first time that huge scale and resources were not necessarily the key to high-level AI models. Nvidia stock plunged by 17% the day after R1 was released.

The key to R1’s success was distillation, a technique that makes AI models more efficient. It works by having a bigger model tutor a smaller one: You run the teacher model on a lot of examples, record its answers, and train the student model to copy those responses as closely as possible, so that it ends up with a compressed version of the teacher’s knowledge. —Caiwei Chen

10. Sycophancy

As people across the world spend increasing amounts of time interacting with chatbots like ChatGPT, chatbot makers are struggling to work out the kind of tone and “personality” the models should adopt. Back in April, OpenAI admitted it’d struck the wrong balance between helpful and sniveling, saying a new update had rendered GPT-4o too sycophantic. Having it suck up to you isn’t just irritating—it can mislead users by reinforcing their incorrect beliefs and spreading misinformation. So consider this your reminder to take everything—yes, everything—LLMs produce with a pinch of salt. —Rhiannon Williams

11. Slop

If there is one AI-related term that has fully escaped the nerd enclosures and entered public consciousness, it’s “slop.” The word itself is old (think pig feed), but “slop” is now commonly used to refer to low-effort, mass-produced content generated by AI, often optimized for online traffic. A lot of people even use it as a shorthand for any AI-generated content. It has felt inescapable in the past year: We have been marinated in it, from fake biographies to shrimp Jesus images to surreal human-animal hybrid videos.

But people are also having fun with it. The term’s sardonic flexibility has made it easy for internet users to slap it on all kinds of words as a suffix to describe anything that lacks substance and is absurdly mediocre: think “work slop” or “friend slop.” As the hype cycle resets, “slop” marks a cultural reckoning about what we trust, what we value as creative labor, and what it means to be surrounded by stuff that was made for engagement rather than expression. —Caiwei Chen

12. Physical intelligence

Did you come across the hypnotizing video from earlier this year of a humanoid robot putting away dishes in a bleak, gray-scale kitchen? That pretty much embodies physical intelligence: the idea that advancements in AI can help robots better move around the physical world. 

It’s true that robots have been able to learn new tasks faster than ever before, everywhere from operating rooms to warehouses. Self-driving-car companies have seen improvements in how they simulate the roads, too. That said, it’s still wise to be skeptical that AI has revolutionized the field. Consider, for example, that many robots advertised as butlers in your home are doing the majority of their tasks thanks to remote operators in the Philippines.

The road ahead for physical intelligence is also sure to be weird. Large language models train on text, which is abundant on the internet, but robots learn more from videos of people doing things. That’s why the robot company Figure suggested in September that it would pay people to film themselves in their apartments doing chores. Would you sign up? —James O’Donnell

13. Fair use

AI models are trained by devouring millions of words and images across the internet, including copyrighted work by artists and writers. AI companies argue this is “fair use”—a legal doctrine that lets you use copyrighted material without permission if you transform it into something new that doesn’t compete with the original. Courts are starting to weigh in. In June, Anthropic’s training of its AI model Claude on a library of books was ruled fair use because the technology was “exceedingly transformative.”

That same month, Meta scored a similar win, but only because the authors couldn’t show that the company’s literary buffet cut into their paychecks. As copyright battles brew, some creators are cashing in on the feast. In December, Disney signed a splashy deal with OpenAI to let users of Sora, the AI video platform, generate videos featuring more than 200 characters from Disney’s franchises. Meanwhile, governments around the world are rewriting copyright rules for the content-guzzling machines. Is training AI on copyrighted work fair use? As with any billion-dollar legal question, it depends. —Michelle Kim

14. GEO

Just a few short years ago, an entire industry was built around helping websites rank highly in search results (okay, just in Google). Now search engine optimization (SEO) is giving way to GEO—generative engine optimization—as the AI boom forces brands and businesses to scramble to maximize their visibility in AI, whether that’s in AI-enhanced search results like Google’s AI Overviews or within responses from LLMs. It’s no wonder they’re freaked out. We already know that news companies have experienced a colossal drop in search-driven web traffic, and AI companies are working on ways to cut out the middleman and allow their users to visit sites from directly within their platforms. It’s time to adapt or die. —Rhiannon Williams

Meet the man hunting the spies in your smartphone

In April 2025, Ronald Deibert left all electronic devices at home in Toronto and boarded a plane. When he landed in Illinois, he took a taxi to a mall and headed directly to the Apple Store to purchase a new laptop and iPhone. He’d wanted to keep the risk of having his personal devices confiscated to a minimum, because he knew his work made him a prime target for surveillance. “I’m traveling under the assumption that I am being watched, right down to exactly where I am at any moment,” Deibert says.

Deibert directs the Citizen Lab, a research center he founded in 2001 to serve as “counterintelligence for civil society.” Housed at the University of Toronto, the lab operates independently of governments or corporate interests, relying instead on research grants and private philanthropy for financial support. It’s one of the few institutions that investigate cyberthreats exclusively in the public interest, and in doing so, it has exposed some of the most egregious digital abuses of the past two decades.

For many years, Deibert and his colleagues have held up the US as the standard for liberal democracy. But that’s changing, he says: “The pillars of democracy are under assault in the United States. For many decades, in spite of its flaws, it has upheld norms about what constitutional democracy looks like or should aspire to. [That] is now at risk.”

Even as some of his fellow Canadians avoided US travel after Donald Trump’s second election, Deibert relished the opportunity to visit. Alongside his meetings with human rights defenders, he also documented active surveillance at Columbia University during the height of its student protests. Deibert snapped photos of drones above campus and noted the exceptionally strict security protocols. “It was unorthodox to go to the United States,” he says. “But I really gravitate toward problems in the world.”


Deibert, 61, grew up in East Vancouver, British Columbia, a gritty area with a boisterous countercultural presence. In the ’70s, Vancouver brimmed with draft dodgers and hippies, but Deibert points to American investigative journalism—exposing the COINTELPRO surveillance program, the Pentagon Papers, Watergate—as the seed of his respect for antiestablishment sentiment. He didn’t imagine that this fascination would translate into a career, however.

“My horizons were pretty low because I came from a working-class family, and there weren’t many people in my family—in fact, none—who went on to university,” he says.

Deibert eventually entered a graduate program in international relations at the University of British Columbia. His doctoral research brought him to a field of inquiry that would soon explode: the geopolitical implications of the nascent internet.

“In my field, there were a handful of people beginning to talk about the internet, but it was very shallow, and that frustrated me,” he says. “And meanwhile, computer science was very technical, but not political—[politics] was almost like a dirty word.”

Deibert continued to explore these topics at the University of Toronto when he was appointed to a tenure-track professorship, but it wasn’t until after he founded the Citizen Lab in 2001 that his work rose to global prominence. 

What put the lab on the map, Deibert says, was its 2009 report “Tracking GhostNet,” which uncovered a digital espionage network in China that had breached offices of foreign embassies and diplomats in more than 100 countries, including the office of the Dalai Lama. The report and its follow-up in 2010 were among the first to publicly expose cybersurveillance in real time. In the years since, the lab has published over 180 such analyses, garnering praise from human rights advocates ranging from Margaret Atwood to Edward Snowden.

The lab has rigorously investigated authoritarian regimes around the world (Deibert says both Russia and China have his name on a “list” barring his entry). The group was the first to uncover the use of commercial spyware to surveil people close to the Saudi dissident and Washington Post journalist Jamal Khashoggi prior to his assassination, and its research has directly informed G7 and UN resolutions on digital repression and led to sanctions on spyware vendors. Even so, in 2025 US Immigration and Customs Enforcement reactivated a $2 million contract with the spyware vendor Paragon. The contract, which the Biden administration had previously placed under a stop-work order, resembles steps taken by governments in Europe and Israel that have also deployed domestic spyware to address security concerns. 

“It saves lives, quite literally,” Cindy Cohn, executive director of the Electronic Frontier Foundation, says of the lab’s work. “The Citizen Lab [researchers] were the first to really focus on technical attacks on human rights activists and democracy activists all around the world. And they’re still the best at it.”


When recruiting new Citizen Lab employees (or “Labbers,” as they refer to one another), Deibert forgoes stuffy, pencil-pushing academics in favor of brilliant, colorful personalities, many of whom personally experienced repression from some of the same regimes the lab now investigates.

Noura Aljizawi, a researcher on digital repression who survived torture at the hands of the al-Assad regime in Syria, researches the distinct threat that digital technologies pose to women and queer people, particularly when deployed against exiled nationals. She helped create Security Planner, a tool that gives personalized, expert-reviewed guidance to people looking to improve their digital hygiene, for which the University of Toronto awarded her an Excellence Through Innovation Award. 

Work for the lab is not without risk. Citizen Lab fellow Elies Campo, for example, was followed and photographed after the lab published a 2022 report that exposed the digital surveillance of dozens of Catalonian citizens and members of parliament, including four Catalonian presidents who were targeted during or after their terms.

Still, the lab’s reputation and mission make recruitment fairly easy, Deibert says. “This good work attracts a certain type of person,” he says. “But they’re usually also drawn to the sleuthing. It’s detective work, and that can be highly intoxicating—even addictive.”

Deibert frequently deflects the spotlight to his fellow Labbers. He rarely discusses the group’s accomplishments without referencing two senior researchers, Bill Marczak and John Scott-Railton, alongside other staffers. And on the occasion that someone decides to leave the Citizen Lab to pursue another position, this appreciation remains.

“We have a saying: Once a Labber, always a Labber,” Deibert says.


While in the US, Deibert taught a seminar on the Citizen Lab’s work to Northwestern University undergraduates and delivered talks on digital authoritarianism at the Columbia University Graduate School of Journalism. Universities in the US had been subjected to funding cuts and heightened scrutiny from the Trump administration, and Deibert wanted to be “in the mix” at such institutions to respond to what he sees as encroaching authoritarian practices by the US government. 

Since Deibert’s return to Canada, the lab has continued its work unearthing digital threats to civil society worldwide, but now Deibert must also contend with the US—a country that was once his benchmark for democracy but has become another subject of his scrutiny. “I do not believe that an institution like the Citizen Lab could exist right now in the United States,” he says. “The type of research that we pioneered is under threat like never before.”

He is particularly alarmed by the increasing pressures facing federal oversight bodies and academic institutions in the US. In September, for example, the Trump administration defunded the Council of the Inspectors General on Integrity and Efficiency, a government organization dedicated to preventing waste, fraud, and abuse within federal agencies, citing partisanship concerns. The White House has also threatened to freeze federal funding to universities that do not comply with administration directives related to gender, DEI, and campus speech. These sorts of actions, Deibert says, undermine the independence of watchdogs and research groups like the Citizen Lab. 

Cohn, the director of the EFF, says the lab’s location in Canada allows it to avoid many of these attacks on institutions that provide accountability. “Having the Citizen Lab based in Toronto and able to continue to do its work largely free of the things we’re seeing in the US,” she says, “could end up being tremendously important if we’re going to return to a place of the rule of law and protection of human rights and liberties.” 

Finian Hazen is a journalism and political science student at Northwestern University.