The ascent of the AI therapist

We’re in the midst of a global mental-health crisis. More than a billion people worldwide suffer from a mental-health condition, according to the World Health Organization. The prevalence of anxiety and depression is growing in many demographics, particularly young people, and suicide is claiming hundreds of thousands of lives globally each year.

Given the clear demand for accessible and affordable mental-health services, it’s no wonder that people have looked to artificial intelligence for possible relief. Millions are already actively seeking therapy from popular chatbots like OpenAI’s ChatGPT and Anthropic’s Claude, or from specialized psychology apps like Wysa and Woebot. On a broader scale, researchers are exploring AI’s potential to monitor and collect behavioral and biometric observations using wearables and smart devices, analyze vast volumes of clinical data for new insights, and assist human mental-health professionals to help prevent burnout. 

But so far this largely uncontrolled experiment has produced mixed results. Many people have found solace in chatbots based on large language models (LLMs), and some experts see promise in them as therapists, but other users have been sent into delusional spirals by AI’s hallucinatory whims and breathless sycophancy. Most tragically, multiple families have alleged that chatbots contributed to the suicides of their loved ones, sparking lawsuits against companies responsible for these tools. In October, OpenAI CEO Sam Altman revealed in a blog post that 0.15% of ChatGPT users “have conversations that include explicit indicators of potential suicidal planning or intent.” Against a reported base of some 800 million weekly users, that’s roughly a million people sharing suicidal ideations with just one of these software systems every week.

The real-world consequences of AI therapy came to a head in unexpected ways in 2025 as we waded through a critical mass of stories about human-chatbot relationships, the flimsiness of guardrails on many LLMs, and the risks of sharing profoundly personal information with products made by corporations that have economic incentives to harvest and monetize such sensitive data. 

Several authors anticipated this inflection point. Their timely books are a reminder that while the present feels like a blur of breakthroughs, scandals, and confusion, this disorienting time is rooted in deeper histories of care, technology, and trust. 

LLMs have often been described as “black boxes” because nobody knows exactly how they produce their results. The inner workings that guide their outputs are opaque because their algorithms are so complex and their training data is so vast. In mental-health circles, people often describe the human brain as a “black box,” for analogous reasons. Psychology, psychiatry, and related fields must grapple with the impossibility of seeing clearly inside someone else’s head, let alone pinpointing the exact causes of their distress. 

These two types of black boxes are now interacting with each other, creating unpredictable feedback loops that may further impede clarity about the origins of people’s mental-health struggles and the solutions that may be possible. Anxiety about these developments has much to do with the explosive recent advances in AI, but it also revives decades-old warnings from pioneers such as the MIT computer scientist Joseph Weizenbaum, who argued against computerized therapy as early as the 1960s.


Dr. Bot: Why Doctors Can Fail Us—and How AI Could Save Lives
Charlotte Blease
YALE UNIVERSITY PRESS, 2025

Charlotte Blease, a philosopher of medicine, makes the optimist’s case in Dr. Bot: Why Doctors Can Fail Us—and How AI Could Save Lives. Her book broadly explores the possible positive impacts of AI in a range of medical fields. While she remains clear-eyed about the risks, warning that readers who are expecting “a gushing love letter to technology” will be disappointed, she suggests that these models can help relieve patient suffering and medical burnout alike.

“Health systems are crumbling under patient pressure,” Blease writes. “Greater burdens on fewer doctors create the perfect petri dish for errors,” and “with palpable shortages of doctors and increasing waiting times for patients, many of us are profoundly frustrated.”

Blease believes that AI can not only ease medical professionals’ massive workloads but also relieve the tensions that have always existed between some patients and their caregivers. For example, people often don’t seek needed care because they are intimidated or fear judgment from medical professionals; this is especially true if they have mental-health challenges. AI could allow more people to share their concerns, she argues. 

But she’s aware that these putative upsides need to be weighed against major drawbacks. For instance, AI therapists can provide inconsistent and even dangerous responses to human users, according to a 2025 study, and they also raise privacy concerns, given that AI companies are currently not bound by the same confidentiality and HIPAA standards as licensed therapists. 

While Blease is an expert in this field, her motivation for writing the book is also personal: She has two siblings with an incurable form of muscular dystrophy, one of whom waited decades for a diagnosis. During the writing of her book, she also lost her partner to cancer and her father to dementia within a devastating six-month period. “I witnessed first-hand the sheer brilliance of doctors and the kindness of health professionals,” she writes. “But I also observed how things can go wrong with care.”


The Silicon Shrink: How Artificial Intelligence Made the World an Asylum
Daniel Oberhaus
MIT PRESS, 2025

A similar tension animates Daniel Oberhaus’s engrossing book The Silicon Shrink: How Artificial Intelligence Made the World an Asylum. Oberhaus starts from a point of tragedy: the loss of his younger sister to suicide. As Oberhaus carried out the “distinctly twenty-first-century mourning process” of sifting through her digital remains, he wondered if technology could have eased the burden of the psychiatric problems that had plagued her since childhood.

“It seemed possible that all of this personal data might have held important clues that her mental health providers could have used to provide more effective treatment,” he writes. “What if algorithms running on my sister’s smartphone or laptop had used that data to understand when she was in distress? Could it have led to a timely intervention that saved her life? Would she have wanted that even if it did?”

This concept of digital phenotyping—in which a person’s digital behavior could be mined for clues about distress or illness—seems elegant in theory. But it may also become problematic if integrated into the field of psychiatric artificial intelligence (PAI), which extends well beyond chatbot therapy.

Oberhaus emphasizes that digital clues could actually exacerbate the existing challenges of modern psychiatry, a discipline that remains fundamentally uncertain about the underlying causes of mental illnesses and disorders. The advent of PAI, he says, is “the logical equivalent of grafting physics onto astrology.” In other words, the data generated by digital phenotyping is as precise as physical measurements of planetary positions, but it is then integrated into a broader framework—in this case, psychiatry—that, like astrology, is based on unreliable assumptions.  

Oberhaus, who uses the phrase “swipe psychiatry” to describe the outsourcing to LLMs of clinical decisions based on behavioral data, thinks that this approach cannot escape the fundamental issues facing psychiatry. In fact, it could worsen the problem by causing the skills and judgment of human therapists to atrophy as they grow more dependent on AI systems.

He also uses the asylums of the past—in which institutionalized patients lost their right to freedom, privacy, dignity, and agency over their lives—as a touchstone for a more insidious digital captivity that may spring from PAI. LLM users are already sacrificing privacy by telling chatbots sensitive personal information that companies then mine and monetize, contributing to a new surveillance economy. Freedom and dignity are at stake when complex inner lives are transformed into data streams tailored for AI analysis. 

AI therapists could flatten humanity into patterns of prediction, and so sacrifice the intimate, individualized care that is expected of traditional human therapists. “The logic of PAI leads to a future where we may all find ourselves patients in an algorithmic asylum administered by digital wardens,” Oberhaus writes. “In the algorithmic asylum there is no need for bars on the window or white padded rooms because there is no possibility of escape. The asylum is already everywhere—in your homes and offices, schools and hospitals, courtrooms and barracks. Wherever there’s an internet connection, the asylum is waiting.”


Chatbot Therapy: A Critical Analysis of AI Mental Health Treatment
Eoin Fullam
ROUTLEDGE, 2025

Eoin Fullam, a researcher who studies the intersection of technology and mental health, echoes some of the same concerns in Chatbot Therapy: A Critical Analysis of AI Mental Health Treatment. A heady academic primer, the book analyzes the assumptions underlying the automated treatments offered by AI chatbots and the way capitalist incentives could corrupt these kinds of tools.  

Fullam observes that the capitalist mentality behind new technologies “often leads to questionable, illegitimate, and illegal business practices in which the customers’ interests are secondary to strategies of market dominance.”

That doesn’t mean that therapy-bot makers “will inevitably conduct nefarious activities contrary to the users’ interests in the pursuit of market dominance,” Fullam writes. 

But he notes that the success of AI therapy depends on the inseparable impulses to make money and to heal people. In this logic, exploitation and therapy feed each other: Every digital therapy session generates data, and that data fuels the system that profits as unpaid users seek care. The more effective the therapy seems, the more the cycle entrenches itself, making it harder to distinguish between care and commodification. “The more the users benefit from the app in terms of its therapeutic or any other mental health intervention,” he writes, “the more they undergo exploitation.” 


This sense of an economic and psychological ouroboros—the snake that eats its own tail—serves as a central metaphor in Sike, the debut novel from Fred Lunzer, an author with a research background in AI. 

Described as a “story of boy meets girl meets AI psychotherapist,” Sike follows Adrian, a young Londoner who makes a living ghostwriting rap lyrics, in his romance with Maquie, a business professional with a knack for spotting lucrative technologies in the beta phase. 

Sike
Fred Lunzer
CELADON BOOKS, 2025

The title refers to a splashy commercial AI therapist called Sike, uploaded into smart glasses, that Adrian uses to interrogate his myriad anxieties. “When I signed up to Sike, we set up my dashboard, a wide black panel like an airplane’s cockpit that showed my daily ‘vitals,’” Adrian narrates. “Sike can analyze the way you walk, the way you make eye contact, the stuff you talk about, the stuff you wear, how often you piss, shit, laugh, cry, kiss, lie, whine, and cough.”

In other words, Sike is the ultimate digital phenotyper, constantly and exhaustively analyzing everything in a user’s daily experiences. In a twist, Lunzer chooses to make Sike a luxury product, available only to subscribers who can foot the price tag of £2,000 per month. 

Flush with cash from his contributions to a hit song, Adrian comes to rely on Sike as a trusted mediator between his inner and outer worlds. The novel explores the impacts of the app on the wellness of the well-off, following rich people who voluntarily commit themselves to a boutique version of the digital asylum described by Oberhaus.

The only real sense of danger in Sike involves a Japanese torture egg (don’t ask). The novel strangely sidesteps the broader dystopian ripples of its subject matter in favor of drunken conversations at fancy restaurants and elite dinner parties. 

Sike’s creator is simply “a great guy” in Adrian’s estimation, despite his techno-messianic vision of training the app to soothe the ills of entire nations. It always seems as if the other shoe is about to drop, but in the end it never does, leaving the reader without resolution.

While Sike is set in the present day, something about the sudden ascent of the AI therapist—in real life as well as in fiction—seems startlingly futuristic, as if it should be unfolding in some later time when the streets scrub themselves and we travel the world through pneumatic tubes. But this convergence of mental health and artificial intelligence has been in the making for more than half a century. The beloved astronomer Carl Sagan, for example, once imagined a “network of computer psychotherapeutic terminals, something like arrays of large telephone booths” that could address the growing demand for mental-health services.

Oberhaus notes that one of the first incarnations of a trainable neural network, known as the Perceptron, was devised not by a mathematician but by a psychologist named Frank Rosenblatt, at the Cornell Aeronautical Laboratory in 1958. The potential utility of AI in mental health was widely recognized by the 1960s, inspiring early computerized psychotherapists such as the DOCTOR script that ran on the ELIZA chatbot developed by Joseph Weizenbaum, who shows up in all three of the nonfiction books in this article.

Weizenbaum, who died in 2008, was profoundly concerned about the possibility of computerized therapy. “Computers can make psychiatric judgments,” he wrote in his 1976 book Computer Power and Human Reason. “They can flip coins in much more sophisticated ways than can the most patient human being. The point is that they ought not to be given such tasks. They may even be able to arrive at ‘correct’ decisions in some cases—but always and necessarily on bases no human being should be willing to accept.”

It’s a caution worth keeping in mind. As AI therapists arrive at scale, we’re seeing them play out a familiar dynamic: Tools designed with ostensibly good intentions are enmeshed with systems that can exploit, surveil, and reshape human behavior. In a frenzied attempt to unlock new opportunities for patients in dire need of mental-health support, we may be locking other doors behind them.

Becky Ferreira is a science reporter based in upstate New York and author of First Contact: The Story of Our Obsession with Aliens.

Bangladesh’s garment-making industry is getting greener

Pollution from textile production—dyes, chemicals, and heavy metals like lead and cadmium—is common in the waters of the Buriganga River as it runs through Dhaka, Bangladesh. It’s among many harms posed by a garment sector that was once synonymous with tragedy: In 2013, the eight-story Rana Plaza factory building collapsed, killing 1,134 people and injuring some 2,500 others. 

Wastewater from Bangladesh’s garment industry flows into the Buriganga River.
ZAKIR HOSSAIN CHOWDHURY

But things are starting to change. In recent years the country has quietly become an unlikely leader in “frugal” factories that use a combination of resource-efficient technologies to cut waste, conserve water, and build resilience against climate impacts and global supply disruptions. Bangladesh now boasts 268 LEED-certified garment factories—more than any other country. Dye plants are using safer chemicals, tanneries are adopting cleaner tanning methods and treating wastewater, workshops are switching to more efficient LED lighting, and solar panels glint from rooftops. The hundreds of factories along the Buriganga’s banks and elsewhere in Bangladesh are starting to stitch together a new story, woven from greener threads.

These energy-efficient, automated template sewing machines at the Fakir Eco Knitwears factory near Bangladesh’s capital help workers reduce waste.
ZAKIR HOSSAIN CHOWDHURY

In Fakir Eco Knitwears’ LEED Gold–certified factory in Narayanganj, a city near Dhaka, skylights reduce energy consumption from electric lighting by 40%, and AI-driven cutters allow workers to recycle 95% of fabric scraps into new yarns. “We save energy by using daylight, solar power, and rainwater instead of heavy AC and boilers,” says Md. Anisuzzaman, an engineer at the company. “It shows how local resources can make production greener and more sustainable.” 

The shift to green factories in Bangladesh is financed through a combination of factory investments, loans from Bangladesh Bank’s Green Transformation Fund, and pressure from international buyers who reward compliance with ongoing orders. One prominent program is the Partnership for Cleaner Textile (PaCT), an initiative run by the World Bank Group’s International Finance Corporation. Launched in 2013, PaCT has worked with more than 450 factories on cleaner production methods. By its count, the effort now saves 35 billion liters of fresh water annually, enough to meet the needs of 1.9 million people.

Solar panels on top of the factory help reduce its energy footprint.
ZAKIR HOSSAIN CHOWDHURY
An exhaust gas absorption chiller absorbs heat and helps maintain the factory floor’s temperature at around 28 °C (82 °F).
ZAKIR HOSSAIN CHOWDHURY

Water reclaimed at the factory’s sewage treatment plant is used in the facility’s restrooms.
ZAKIR HOSSAIN CHOWDHURY

It’s a good start, but Bangladesh’s $40 billion garment industry still has a long way to go. The shift to environmentalism at the factory level hasn’t translated to improved outcomes for the sector’s 4.4 million workers. 

Wage theft and delayed payments are widespread. The minimum wage, some 12,500 taka per month (about $113), is far below the $200 proposed by unions—which has meant frequent strikes and protests over pay, overtime, and job security. “Since Rana Plaza, building safety and factory conditions have improved, but the mindset remains unchanged,” says A.K.M. Ashraf Uddin, executive director of the Bangladesh Labour Foundation, a nonprofit labor rights group. “Profit still comes first, and workers’ freedom of speech is yet to be realized.”

The smaller factories that dominate the garment sector may struggle to invest in green upgrades.
ZAKIR HOSSAIN CHOWDHURY

In the worst case, greener industry practices could actually exacerbate inequality. Smaller factories dominate the sector, and they struggle to afford upgrades. But without those upgrades, businesses could find themselves excluded from certain markets. One of those is the European Union, which plans to require companies to address human rights and environmental problems in supply chains starting in 2027. A cleaner Buriganga River mends just a small corner of a vast tapestry of need. 

Zakir Hossain Chowdhury is a visual journalist based in Bangladesh.

The paints, coatings, and chemicals making the world a cooler place

It’s getting harder to beat the heat. During the summer of 2025, heat waves knocked out power grids in North America, Europe, and the Middle East. Global warming means more people need air-conditioning, which requires more power and strains grids. But a millennia-old idea (plus 21st-century tech) might offer an answer: radiative cooling. Paints, coatings, and textiles can scatter sunlight and dissipate heat—no additional energy required.

“Radiative cooling is universal—it exists everywhere in our daily life,” says Qiaoqiang Gan, a professor of materials science and applied physics at King Abdullah University of Science and Technology in Saudi Arabia. Pretty much any object will absorb heat from the sun during the day and radiate some of it back at night. It’s why cars parked outside overnight are often covered with condensation, Gan says—their metal roofs dissipate heat into the sky, cooling the surfaces below the ambient air temperature. That’s how you get dew.

Humans have harnessed this basic natural process for thousands of years. Desert peoples in Iran, North Africa, and India manufactured ice by leaving pools of water exposed to clear desert skies overnight, when radiative cooling happens naturally; other cultures constructed “cool roofs” capped with reflective materials that scattered sunlight and lowered interior temperatures. “People have taken advantage of this effect, either knowingly or unknowingly, for a very long time,” says Aaswath Raman, a materials scientist at UCLA and cofounder of the radiative-cooling startup SkyCool Systems.

Modern approaches, as demonstrated everywhere from California supermarket rooftops to Japan’s Expo 2025 pavilion, go even further. Normally, if the sun is up and pumping in heat, surfaces can’t get cooler than the ambient temperature. But back in 2014, Raman and his colleagues achieved radiative cooling in the daytime. They customized photonic films to absorb and then radiate heat at infrared wavelengths between 8 and 13 micrometers—a range of electromagnetic wavelengths called an “atmospheric window,” because that radiation escapes to space rather than getting absorbed by the atmosphere. Those films could dissipate heat even under full sun, cooling surfaces to about 9 °F below the ambient air temperature, with no AC or energy source required.
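In energy-balance terms (a standard framing in the radiative-cooling literature, sketched here for orientation rather than quoted from the 2014 paper), a surface cools passively only when it radiates away more power than it takes in:

$$P_{\text{cool}}(T) = P_{\text{rad}}(T) - P_{\text{atm}}(T_{\text{amb}}) - P_{\text{sun}} - P_{\text{cond+conv}}$$

Here $P_{\text{rad}}$ is the thermal radiation the film emits, ideally concentrated in that 8-to-13-micrometer window; $P_{\text{atm}}$ is the downwelling atmospheric radiation it absorbs; $P_{\text{sun}}$ is the absorbed sunlight; and $P_{\text{cond+conv}}$ is heat conducted and convected in from the surrounding air. Daytime cooling means engineering the film so that $P_{\text{cool}}$ stays positive even while the sun term is nonzero.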

That was proof of concept; today, Raman says, the industry has mostly shifted away from advanced photonics that use the atmospheric-window effect to simpler sunlight-scattering materials. Ceramic cool roofs, nanostructured coatings, and reflective polymers all offer the possibility of diverting more sunlight across all wavelengths, and they’re more durable and scalable.

Now the race is on. Startups such as SkyCool, Planck Energies, Spacecool, and i2Cool are competing to commercially manufacture and sell coatings that reflect at least 94% of sunlight in most climates, and above 97% in humid tropical ones. Pilot projects have already provided significant cooling to residential buildings, reducing AC energy needs by 15% to 20% in some cases. 

This idea could go way beyond reflective rooftops and roads. Researchers are developing reflective textiles that can be worn by people most at risk of heat exposure. “This is personal thermal management,” says Gan. “We can realize passive cooling in T-shirts, sportswear, and garments.” 

A thermal image captured during a SkyCool installation shows treated areas (white, yellow) that are roughly 35 °C cooler than the surrounding rooftop.
COURTESY OF SKYCOOL SYSTEMS

Of course, these technologies and materials have limits. Like solar power, they’re vulnerable to weather. Clouds prevent reflected sunlight from bouncing into space. Dust and air pollution dim materials’ bright surfaces. Lots of coatings lose their reflectivity after a few years. And the cheapest and toughest materials used in radiative cooling tend to rely on Teflon and other fluoropolymers, “forever chemicals” that don’t biodegrade, posing an environmental risk. “They are the best class of products that tend to survive outdoors,” says Raman. “So for long-term scale-up, can you do it without materials like those fluoropolymers and still maintain the durability and hit this low cost point?”

As with any other solution to the problems of climate change, one size won’t fit all. “We cannot be overoptimistic and say that radiative cooling can address all our future needs,” Gan says. “We still need more efficient active air-conditioning.” A shiny roof isn’t a panacea, but it’s still pretty cool. 

Becky Ferreira is a science reporter based in upstate New York and author of First Contact: The Story of Our Obsession with Aliens.

MIT Technology Review’s most popular stories of 2025

It’s been a busy and productive year here at MIT Technology Review. We published magazine issues on power, creativity, innovation, bodies, relationships, and security. We hosted 14 exclusive virtual conversations with our editors and outside experts in our subscriber-only series, Roundtables, and held two events on MIT’s campus. And we published hundreds of articles online, following new developments in computing, climate tech, robotics, and more. 

As the year winds down, we wanted to give you a chance to revisit a bit of this work with us. Whether we were covering the red-hot rise of artificial intelligence or the future of biotech, these are some of the stories that resonated the most with our readers. 

We did the math on AI’s energy footprint. Here’s the story you haven’t heard.

Understanding AI’s energy use was a huge global conversation in 2025 as hundreds of millions of people began using generative AI tools on a regular basis. Senior reporters James O’Donnell and Casey Crownhart dug into the numbers and published an unprecedented look at AI’s resource demand, down to the level of a single query, to help us know how much energy and water AI may require moving forward. 

We’re learning more about what vitamin D does to our bodies

Vitamin D deficiency is widespread, particularly in the winter when there’s less sunlight to drive its production in our bodies. The “sunshine vitamin” is important for bone health, but as senior reporter Jessica Hamzelou reported, recent research is also uncovering surprising new insights into other ways it might influence our bodies, including our immune systems and heart health.

What is AI?

Senior editor Will Douglas Heaven’s expansive look at how to define AI was published in 2024, but it still managed to connect with many readers this year. He lays out why no one can agree on what AI is—and explains why that ambiguity matters, and how it can inform our own critical thinking about this technology.

Ethically sourced “spare” human bodies could revolutionize medicine

In this thought-provoking op-ed, a team of experts at Stanford University argue that creating living human bodies that can’t think, don’t have any awareness, and can’t feel pain could shake up medical research and drug development by providing essential biological materials for testing and transplantation. Recent advances in biotechnology now provide a potential pathway to such “bodyoids,” though plenty of technical challenges and ethical hurdles remain. 

It’s surprisingly easy to stumble into a relationship with an AI chatbot

Chatbots were everywhere this year, and reporter Rhiannon Williams chronicled how quickly people can develop bonds with one. That’s all right for some people, she notes, but dangerous for others. Some folks even describe unintentionally forming romantic relationships with chatbots. This is a trend we’ll definitely be keeping an eye on in 2026. 

Is this the electric grid of the future?

The electric grid is bracing for disruption from more frequent storms and fires, as well as an uncertain policy and regulatory landscape. And in many ways, the publicly owned utility Lincoln Electric System in Nebraska is an ideal lens through which to examine this shift as it works through the challenges of delivering service that’s reliable, affordable, and sustainable.

Exclusive: A record-breaking baby has been born from an embryo that’s over 30 years old

This year saw the birth of the world’s “oldest baby”: Thaddeus Daniel Pierce, who arrived on July 26. The embryo he developed from was created in 1994 during the early days of IVF and had been frozen and sitting in storage ever since. The new baby’s parents were toddlers at the time, and the embryo was donated to them decades later via a Christian “embryo adoption” agency.  

How these two brothers became go-to experts on America’s “mystery drone” invasion

Twin brothers John and Gerald Tedesco teamed up to investigate a concerning new threat—unidentified drones. In 2024 alone, some 350 drones entered airspace over a hundred different US military installations, and many cases went unsolved, according to a top military official. This story takes readers inside the equipment-filled RV the Tedescos created to study mysterious aerial phenomena, and how they made a name for themselves among government officials. 

10 Breakthrough Technologies of 2025 

For over 20 years, our newsroom has published this annual look at advances that will matter in the long run. This year’s list featured generative AI search, cleaner jet fuel, long-acting HIV prevention meds, and other emerging technologies that our journalists think are worth watching. We’ll publish the 2026 edition of the list on January 12, so stay tuned. (In the meantime, here’s what didn’t make the cut.)

AI Wrapped: The 14 AI terms you couldn’t avoid in 2025

If the past 12 months have taught us anything, it’s that the AI hype train is showing no signs of slowing. It’s hard to believe that at the beginning of the year, DeepSeek had yet to turn the entire industry on its head, Meta was better known for trying (and failing) to make the metaverse cool than for its relentless quest to dominate superintelligence, and vibe coding wasn’t a thing.

If that’s left you feeling a little confused, fear not. As we near the end of 2025, our writers have taken a look back over the AI terms that dominated the year, for better or worse.

And brace yourself for what promises to be another bonkers year.

—Rhiannon Williams

1. Superintelligence

As long as people have been hyping AI, they have been coming up with names for a future, ultra-powerful form of the technology that could bring about utopian or dystopian consequences for humanity. “Superintelligence” is the latest hot term. Meta announced in July that it would form an AI team to pursue superintelligence, and it was reportedly offering nine-figure compensation packages to lure AI experts away from its competitors.

In December, Microsoft’s head of AI followed suit, saying the company would be spending big sums, perhaps hundreds of billions, on the pursuit of superintelligence. If you think superintelligence is as vaguely defined as artificial general intelligence, or AGI, you’d be right! While it’s conceivable that these sorts of technologies will be feasible in humanity’s long run, the question is really when, and whether today’s AI is good enough to be treated as a stepping stone toward something like superintelligence. Not that that will stop the hype kings. —James O’Donnell

2. Vibe coding

Thirty years ago, Steve Jobs said everyone in America should learn how to program a computer. Today, people with zero knowledge of how to code can knock up an app, game, or website in no time at all thanks to vibe coding—a catch-all phrase coined by OpenAI cofounder Andrej Karpathy. To vibe-code, you simply prompt an AI coding assistant to create the digital object of your desire and accept pretty much everything it spits out. Will the result work? Possibly not. Will it be secure? Almost definitely not, but the technique’s biggest champions aren’t letting those minor details stand in their way. Also—it sounds fun! —Rhiannon Williams

3. Chatbot psychosis

One of the biggest AI stories over the past year has been how prolonged interactions with chatbots can cause vulnerable people to experience delusions and, in some extreme cases, can either cause or worsen psychosis. Although “chatbot psychosis” is not a recognized medical term, researchers are paying close attention to the growing anecdotal evidence from users who say it’s happened to them or someone they know. Sadly, the increasing number of lawsuits filed against AI companies by the families of people who died following their conversations with chatbots demonstrates the technology’s potentially deadly consequences. —Rhiannon Williams

4. Reasoning

Few things kept the AI hype train going this year more than so-called reasoning models, LLMs that can break down a problem into multiple steps and work through them one by one. OpenAI released its first reasoning models, o1 and o3, a year ago.

A month later, the Chinese firm DeepSeek took everyone by surprise with a very fast follow, putting out R1, the first open-source reasoning model. In no time, reasoning models became the industry standard: All major mass-market chatbots now come in flavors backed by this tech. Reasoning models have pushed the envelope of what LLMs can do, matching top human performances in prestigious math and coding competitions. On the flip side, all the buzz about LLMs that could “reason” reignited old debates about how smart LLMs really are and how they really work. Like “artificial intelligence” itself, “reasoning” is technical jargon dressed up with marketing sparkle. Choo choo! —Will Douglas Heaven

5. World models 

For all their uncanny facility with language, LLMs have very little common sense. Put simply, they don’t have any grounding in how the world works. Book learners in the most literal sense, LLMs can wax lyrical about everything under the sun and then fall flat with a howler about how many elephants you could fit into an Olympic swimming pool (exactly one, according to one of Google DeepMind’s LLMs).

World models—a broad church encompassing various technologies—aim to give AI some basic common sense about how stuff in the world actually fits together. In their most vivid form, world models like Google DeepMind’s Genie 3 and Marble, the much-anticipated new tech from Fei-Fei Li’s startup World Labs, can generate detailed and realistic virtual worlds for robots to train in and more. Yann LeCun, Meta’s former chief scientist, is also working on world models. He has been trying to give AI a sense of how the world works for years, by training models to predict what happens next in videos. This year he quit Meta to focus on this approach in a new startup called Advanced Machine Intelligence Labs. If all goes well, world models could be the next big thing. —Will Douglas Heaven

6. Hyperscalers

Have you heard about all the people saying no thanks, we actually don’t want a giant data center plopped in our backyard? The data centers in question—which tech companies want to build everywhere, including space—are typically referred to as hyperscalers: massive buildings purpose-built for AI operations and used by the likes of OpenAI and Google to build bigger and more powerful AI models. Inside such buildings, the world’s best chips hum away training and fine-tuning models, and they’re built to be modular and grow according to needs.

It’s been a big year for hyperscalers. OpenAI announced, alongside President Donald Trump, its Stargate project, a $500 billion joint venture to pepper the country with the largest data centers ever. But it leaves almost everyone else asking: What exactly do we get out of it? Consumers worry the new data centers will raise their power bills. Such buildings generally struggle to run on renewable energy. And they don’t tend to create all that many jobs. But hey, maybe these massive, windowless buildings could at least give a moody, sci-fi vibe to your community. —James O’Donnell

7. Bubble

The lofty promises of AI are levitating the economy. AI companies are raising eye-popping sums of money and watching their valuations soar into the stratosphere. They’re pouring hundreds of billions of dollars into chips and data centers, financed increasingly by debt and eyebrow-raising circular deals. Meanwhile, the companies leading the gold rush, like OpenAI and Anthropic, might not turn a profit for years, if ever. Investors are betting big that AI will usher in a new era of riches, yet no one knows how transformative the technology will actually be.

Most organizations using AI aren’t yet seeing the payoff, and AI work slop is everywhere. There’s scientific uncertainty about whether scaling LLMs will deliver superintelligence or whether new breakthroughs need to pave the way. But unlike their predecessors in the dot-com bubble, AI companies are showing strong revenue growth, and some are even deep-pocketed tech titans like Microsoft, Google, and Meta. Will the manic dream ever burst? —Michelle Kim

8. Agentic

This year, AI agents were everywhere. Every new feature announcement, model drop, or security report throughout 2025 was peppered with mentions of them, even though plenty of AI companies and experts disagree on exactly what counts as being truly “agentic,” a vague term if ever there was one. No matter that it’s virtually impossible to guarantee that an AI acting on your behalf out in the wide web will always do exactly what it’s supposed to do—it seems as though agentic AI is here to stay for the foreseeable. Want to sell something? Call it agentic! —Rhiannon Williams

9. Distillation

Early this year, DeepSeek unveiled its new model DeepSeek R1, an open-source reasoning model that matches top Western models but costs a fraction of the price. Its launch freaked Silicon Valley out, as many suddenly realized for the first time that huge scale and resources were not necessarily the key to high-level AI models. Nvidia stock plunged by 17% the day after R1 was released.

The key to R1’s success was distillation, a technique that makes AI models more efficient. It works by getting a bigger model to tutor a smaller model: You run the teacher model on a lot of examples and record the answers, and reward the student model as it copies those responses as closely as possible, so that it gains a compressed version of the teacher’s knowledge.  —Caiwei Chen
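For readers who want the gist in code, here is a minimal sketch of that teacher-student loss (a generic, Hinton-style distillation setup, not DeepSeek’s actual training recipe; all shapes and numbers are illustrative):

```python
# Knowledge distillation, minimal sketch: a student model is trained to match
# a frozen teacher's softened output distribution over the vocabulary.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    t = temperature
    # Softening with a temperature exposes the teacher's relative preferences
    # among wrong answers, not just its top pick.
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    # KL divergence pulls the student's distribution toward the teacher's;
    # the t*t factor keeps gradient scale comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t * t)

# Toy usage: random logits stand in for real model outputs.
teacher_logits = torch.randn(8, 50_000)                      # batch of 8, 50k-token vocab
student_logits = torch.randn(8, 50_000, requires_grad=True)  # the model being trained
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()  # gradients flow into the student only; the teacher stays frozen
```

In a real pipeline, the teacher’s answers are typically precomputed over a large corpus, and a loss like this one (often mixed with an ordinary next-token loss) trains the smaller model.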

10. Sycophancy

As people across the world spend increasing amounts of time interacting with chatbots like ChatGPT, chatbot makers are struggling to work out the kind of tone and “personality” the models should adopt. Back in April, OpenAI admitted it’d struck the wrong balance between helpful and sniveling, saying a new update had rendered GPT-4o too sycophantic. Having it suck up to you isn’t just irritating—it can mislead users by reinforcing their incorrect beliefs and spreading misinformation. So consider this your reminder to take everything—yes, everything—LLMs produce with a pinch of salt. —Rhiannon Williams

11. Slop

If there is one AI-related term that has fully escaped the nerd enclosures and entered public consciousness, it’s “slop.” The word itself is old (think pig feed), but “slop” is now commonly used to refer to low-effort, mass-produced content generated by AI, often optimized for online traffic. A lot of people even use it as a shorthand for any AI-generated content. It has felt inescapable in the past year: We have been marinated in it, from fake biographies to shrimp Jesus images to surreal human-animal hybrid videos.

But people are also having fun with it. The term’s sardonic flexibility has made it easy for internet users to slap it on all kinds of words as a suffix to describe anything that lacks substance and is absurdly mediocre: think “work slop” or “friend slop.” As the hype cycle resets, “slop” marks a cultural reckoning about what we trust, what we value as creative labor, and what it means to be surrounded by stuff that was made for engagement rather than expression. —Caiwei Chen

12. Physical intelligence

Did you come across the hypnotizing video from earlier this year of a humanoid robot putting away dishes in a bleak, gray-scale kitchen? That pretty much embodies physical intelligence: the idea that advances in AI can help robots better move around the physical world.

It’s true that robots have been able to learn new tasks faster than ever before, everywhere from operating rooms to warehouses. Self-driving-car companies have seen improvements in how they simulate the roads, too. That said, it’s still wise to be skeptical that AI has revolutionized the field. Consider, for example, that many robots advertised as butlers in your home are doing the majority of their tasks thanks to remote operators in the Philippines.

The road ahead for physical intelligence is also sure to be weird. Large language models train on text, which is abundant on the internet, but robots learn more from videos of people doing things. That’s why the robot company Figure suggested in September that it would pay people to film themselves in their apartments doing chores. Would you sign up? —James O’Donnell

13. Fair use

AI models are trained by devouring millions of words and images across the internet, including copyrighted work by artists and writers. AI companies argue this is “fair use”—a legal doctrine that lets you use copyrighted material without permission if you transform it into something new that doesn’t compete with the original. Courts are starting to weigh in. In June, Anthropic’s training of its AI model Claude on a library of books was ruled fair use because the technology was “exceedingly transformative.”

That same month, Meta scored a similar win, but only because the authors couldn’t show that the company’s literary buffet cut into their paychecks. As copyright battles brew, some creators are cashing in on the feast. In December, Disney signed a splashy deal with OpenAI to let users of Sora, the AI video platform, generate videos featuring more than 200 characters from Disney’s franchises. Meanwhile, governments around the world are rewriting copyright rules for the content-guzzling machines. Is training AI on copyrighted work fair use? As with any billion-dollar legal question, it depends. —Michelle Kim

14. GEO

Just a few short years ago, an entire industry was built around helping websites rank highly in search results (okay, just in Google). Now search engine optimization (SEO) is giving way to GEO—generative engine optimization—as the AI boom forces brands and businesses to scramble to maximize their visibility in AI, whether that’s in AI-enhanced search results like Google’s AI Overviews or within responses from LLMs. It’s no wonder they’re freaked out. We already know that news companies have experienced a colossal drop in search-driven web traffic, and AI companies are working on ways to cut out the middleman and allow their users to visit sites directly from within their platforms. It’s time to adapt or die. —Rhiannon Williams

Meet the man hunting the spies in your smartphone

In April 2025, Ronald Deibert left all electronic devices at home in Toronto and boarded a plane. When he landed in Illinois, he took a taxi to a mall and headed directly to the Apple Store to purchase a new laptop and iPhone. He’d wanted to keep the risk of having his personal devices confiscated to a minimum, because he knew his work made him a prime target for surveillance. “I’m traveling under the assumption that I am being watched, right down to exactly where I am at any moment,” Deibert says.

Deibert directs the Citizen Lab, a research center he founded in 2001 to serve as “counterintelligence for civil society.” Housed at the University of Toronto, the lab operates independently of governments or corporate interests, relying instead on research grants and private philanthropy for financial support. It’s one of the few institutions that investigate cyberthreats exclusively in the public interest, and in doing so, it has exposed some of the most egregious digital abuses of the past two decades.

For many years, Deibert and his colleagues have held up the US as the standard for liberal democracy. But that’s changing, he says: “The pillars of democracy are under assault in the United States. For many decades, in spite of its flaws, it has upheld norms about what constitutional democracy looks like or should aspire to. [That] is now at risk.”

Even as some of his fellow Canadians avoided US travel after Donald Trump’s second election, Deibert relished the opportunity to visit. Alongside his meetings with human rights defenders, he also documented active surveillance at Columbia University during the height of its student protests. Deibert snapped photos of drones above campus and noted the exceptionally strict security protocols. “It was unorthodox to go to the United States,” he says. “But I really gravitate toward problems in the world.”


Deibert, 61, grew up in East Vancouver, British Columbia, a gritty area with a boisterous countercultural presence. In the ’70s, Vancouver brimmed with draft dodgers and hippies, but Deibert points to American investigative journalism—exposing the COINTELPRO surveillance program, the Pentagon Papers, Watergate—as the seed of his respect for antiestablishment sentiment. He didn’t imagine that this fascination would translate into a career, however.

“My horizons were pretty low because I came from a working-class family, and there weren’t many people in my family—in fact, none—who went on to university,” he says.

Deibert eventually entered a graduate program in international relations at the University of British Columbia. His doctoral research brought him to a field of inquiry that would soon explode: the geopolitical implications of the nascent internet.

“In my field, there were a handful of people beginning to talk about the internet, but it was very shallow, and that frustrated me,” he says. “And meanwhile, computer science was very technical, but not political—[politics] was almost like a dirty word.”

Deibert continued to explore these topics at the University of Toronto when he was appointed to a tenure-track professorship, but it wasn’t until after he founded the Citizen Lab in 2001 that his work rose to global prominence. 

What put the lab on the map, Deibert says, was its 2009 report “Tracking GhostNet,” which uncovered a digital espionage network in China that had breached offices of foreign embassies and diplomats in more than 100 countries, including the office of the Dalai Lama. The report and its follow-up in 2010 were among the first to publicly expose cybersurveillance in real time. In the years since, the lab has published over 180 such analyses, garnering praise from human rights advocates ranging from Margaret Atwood to Edward Snowden.

The lab has rigorously investigated authoritarian regimes around the world (Deibert says both Russia and China have his name on a “list” barring his entry). The group was the first to uncover the use of commercial spyware to surveil people close to the Saudi dissident and Washington Post journalist Jamal Khashoggi prior to his assassination, and its research has directly informed G7 and UN resolutions on digital repression and led to sanctions on spyware vendors. Even so, in 2025 US Immigration and Customs Enforcement reactivated a $2 million contract with the spyware vendor Paragon. The contract, which the Biden administration had previously placed under a stop-work order, resembles steps taken by governments in Europe and Israel that have also deployed domestic spyware to address security concerns. 

“It saves lives, quite literally,” Cindy Cohn, executive director of the Electronic Frontier Foundation, says of the lab’s work. “The Citizen Lab [researchers] were the first to really focus on technical attacks on human rights activists and democracy activists all around the world. And they’re still the best at it.”


When recruiting new Citizen Lab employees (or “Labbers,” as they refer to one another), Deibert forgoes stuffy, pencil-pushing academics in favor of brilliant, colorful personalities, many of whom personally experienced repression from some of the same regimes the lab now investigates.

Noura Aljizawi, a researcher on digital repression who survived torture at the hands of the al-Assad regime in Syria, researches the distinct threat that digital technologies pose to women and queer people, particularly when deployed against exiled nationals. She helped create Security Planner, a tool that gives personalized, expert-reviewed guidance to people looking to improve their digital hygiene, for which the University of Toronto awarded her an Excellence Through Innovation Award. 

Work for the lab is not without risk. Citizen Lab fellow Elies Campo, for example, was followed and photographed after the lab published a 2022 report that exposed the digital surveillance of dozens of Catalonian citizens and members of parliament, including four Catalonian presidents who were targeted during or after their terms.

Still, the lab’s reputation and mission make recruitment fairly easy, Deibert says. “This good work attracts a certain type of person,” he says. “But they’re usually also drawn to the sleuthing. It’s detective work, and that can be highly intoxicating—even addictive.”

Deibert frequently deflects the spotlight to his fellow Labbers. He rarely discusses the group’s accomplishments without referencing two senior researchers, Bill Marczak and John Scott-Railton, alongside other staffers. And on the occasion that someone decides to leave the Citizen Lab to pursue another position, this appreciation remains.

“We have a saying: Once a Labber, always a Labber,” Deibert says.


While in the US, Deibert taught a seminar on the Citizen Lab’s work to Northwestern University undergraduates and delivered talks on digital authoritarianism at the Columbia University Graduate School of Journalism. Universities in the US had been subjected to funding cuts and heightened scrutiny from the Trump administration, and Deibert wanted to be “in the mix” at such institutions to respond to what he sees as encroaching authoritarian practices by the US government. 

Since Deibert’s return to Canada, the lab has continued its work unearthing digital threats to civil society worldwide, but now Deibert must also contend with the US—a country that was once his benchmark for democracy but has become another subject of his scrutiny. “I do not believe that an institution like the Citizen Lab could exist right now in the United States,” he says. “The type of research that we pioneered is under threat like never before.”

He is particularly alarmed by the increasing pressures facing federal oversight bodies and academic institutions in the US. In September, for example, the Trump administration defunded the Council of the Inspectors General on Integrity and Efficiency, a government organization dedicated to preventing waste, fraud, and abuse within federal agencies, citing partisanship concerns. The White House has also threatened to freeze federal funding to universities that do not comply with administration directives related to gender, DEI, and campus speech. These sorts of actions, Deibert says, undermine the independence of watchdogs and research groups like the Citizen Lab. 

Cohn, the director of the EFF, says the lab’s location in Canada allows it to avoid many of these attacks on institutions that provide accountability. “Having the Citizen Lab based in Toronto and able to continue to do its work largely free of the things we’re seeing in the US,” she says, “could end up being tremendously important if we’re going to return to a place of the rule of law and protection of human rights and liberties.” 

Finian Hazen is a journalism and political science student at Northwestern University.

Four bright spots in climate news in 2025

Climate news hasn’t been great in 2025. Global greenhouse-gas emissions hit record highs (again). This year is set to be either the second or third warmest on record. Climate-fueled disasters like wildfires in California and flooding in Indonesia and Pakistan devastated communities and caused billions in damage.

In addition to these worrying indicators of our continued contributions to climate change and their obvious effects, the world’s largest economy has made a sharp U-turn on climate policy this year. The US under the Trump administration withdrew from the Paris Agreement, cut funds for climate research, and scrapped billions of dollars in funding for climate tech projects.

We’re in a severe situation with climate change. But for those looking for bright spots, there was some good news in 2025. Here are a few of the positive stories our climate reporters noticed this year.

China’s flattening emissions


One of the most notable and encouraging signs of progress this year occurred in China. The world’s second-biggest economy and biggest climate polluter has managed to keep carbon dioxide emissions flat for the last year and a half, according to an analysis in Carbon Brief.

That’s happened before, but only when the nation’s economy was contracting, including in the midst of the covid-19 pandemic. But emissions are now falling even as China’s economy is on track to grow about 5% this year, and electricity demands continue to rise.

So what’s changed? China has now installed so much solar and wind, and put so many EVs on the road, that its economy can continue to expand without increasing the amount of carbon dioxide it’s pumping into the atmosphere, decoupling the traditional link between emissions and growth.

Specifically, China added an astounding 240 gigawatts of solar power capacity and 61 gigawatts of wind power in the first nine months of the year, the Carbon Brief analysis noted. In those three quarters alone, China installed nearly as much solar power as the US has built in total.

It’s too early to say China’s emissions have peaked, but the country has said it will officially reach that benchmark before 2030.

To be clear, China still isn’t moving fast enough to keep the world on track for meeting relatively safe temperature targets. (Indeed, very few countries are.) But it’s now both producing most of the world’s clean energy technologies and curbing its emissions growth, providing a model for cleaning up industrial economies without sacrificing economic prosperity—and setting the stage for faster climate progress in the coming years.

Batteries on the grid


It’s hard to articulate just how quickly batteries for grid storage are coming online. These massive arrays of cells can soak up electricity when sources like solar are available and prices are low, and then discharge power back to the grid when it’s needed most.
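To make that arbitrage concrete, here is a toy dispatch loop with made-up hourly prices and battery specs (an illustration of the buy-low, sell-high logic, not a model of any real market):

```python
# Toy battery arbitrage: charge when hourly power is cheap, discharge when
# prices spike. All numbers are hypothetical.
prices = [30, 25, 20, 18, 22, 40, 90, 120, 70, 35]  # $/MWh over ten hours

capacity_mwh = 100.0  # total storage
charge_mwh = 0.0      # current state of charge
rate_mwh = 25.0       # max energy moved per hour
net_dollars = 0.0

for price in prices:
    if price < 30 and charge_mwh < capacity_mwh:   # cheap hour: soak up power
        energy = min(rate_mwh, capacity_mwh - charge_mwh)
        charge_mwh += energy
        net_dollars -= energy * price              # pay to charge
    elif price > 60 and charge_mwh > 0:            # price spike: sell back
        energy = min(rate_mwh, charge_mwh)
        charge_mwh -= energy
        net_dollars += energy * price              # earn on discharge

print(f"Net arbitrage result: ${net_dollars:,.0f}")  # $4,875 on these numbers
```

Real operators solve this as an optimization problem with price forecasts, efficiency losses, and degradation costs, but the economic core is exactly this spread between cheap and expensive hours.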

Back in 2015, the battery storage industry had installed only a fraction of a gigawatt of capacity across the US. That year, it set a seemingly bold target of adding 35 gigawatts by 2035. The sector passed that goal a decade early this year and then hit 40 gigawatts a couple of months later.

Costs are still falling, which could help maintain the momentum for the technology’s deployment. This year, battery prices for EVs and stationary storage fell yet again, reaching a record low, according to data from BloombergNEF. Battery packs specifically used for grid storage saw prices fall even faster than the average; they cost 45% less than last year.

We’re starting to see what happens on grids with lots of battery capacity, too: in California and Texas, batteries are already helping meet demand in the evenings, reducing the need to run natural-gas plants. The result: a cleaner, more stable grid.

AI’s energy funding influx


The AI boom is complicated for our energy system, as we covered at length this year. Electricity demand is ticking up: the amount of power utilities supplied to US data centers jumped 22% this year and will more than double by 2030.

But at least one positive shift is coming out of AI’s influence on energy: It’s driving renewed interest and investment in next-generation energy technologies.

In the near term, much of the energy needed for data centers, including those that power AI, will likely come from fossil fuels, especially new natural-gas power plants. But tech giants like Google, Microsoft, and Meta all have goals on the books to reduce their greenhouse-gas emissions, so they’re looking for alternatives.

Meta signed a deal with XGS Energy in June to purchase up to 150 megawatts of electricity from a geothermal plant. In October, Google signed an agreement that will help reopen the Duane Arnold Energy Center in Iowa, a previously shuttered nuclear power plant.

Geothermal and nuclear could be key pieces of the grid of the future, as they can provide constant power in a way that wind and solar don’t. There’s a long way to go for many of the new versions of the tech, but more money and interest from big, powerful players can’t hurt.

Good news, bad news


Perhaps the strongest evidence of collective climate progress so far: We’ve already avoided the gravest dangers that scientists feared just a decade ago.

The world is on track for about 2.6 °C of warming over preindustrial conditions by 2100, according to Climate Action Tracker, an independent scientific effort to track the policy progress that nations have made toward their goals under the Paris climate agreement.

That’s a lot warmer than we want the planet to ever get. But it’s also a whole degree better than the 3.6 °C path that we were on a decade ago, just before nearly 200 countries signed the Paris deal.

That progress occurred because more and more nations passed emissions mandates, funded subsidies, and invested in research and development—and private industry got busy cranking out vast amounts of solar panels, wind turbines, batteries, and EVs. 

The bad news is that progress has stalled. Climate Action Tracker notes that its warming projections have remained stubbornly fixed for the last four years, as nations have largely failed to take the additional action needed to bend that curve closer to the 2 °C goal set out in the international agreement.

But having shaved off a degree of danger is still demonstrable proof that we can pull together in the face of a global threat and address a very, very hard problem. And it means we’ve done the difficult work of laying down the technical foundation for a society that can largely run without spewing ever more greenhouse gas into the atmosphere.

Hopefully, as cleantech continues to improve and climate change steadily worsens, the world will find the collective will to pick up the pace again soon.

How social media encourages the worst of AI boosterism

Demis Hassabis, CEO of Google DeepMind, summed it up in three words: “This is embarrassing.”  

Hassabis was replying on X to an overexcited post by Sébastien Bubeck, a research scientist at the rival firm OpenAI, announcing that two mathematicians had used OpenAI’s latest large language model, GPT-5, to find solutions to 10 unsolved problems in mathematics. “Science acceleration via AI has officially begun,” Bubeck crowed.

Put your math hats on for a minute, and let’s take a look at what this beef from mid-October was about. It’s a perfect example of what’s wrong with AI right now.

Bubeck was excited that GPT-5 seemed to have somehow solved a number of puzzles known as Erdős problems.

Paul Erdős, one of the most prolific mathematicians of the 20th century, left behind hundreds of puzzles when he died. To help keep track of which ones have been solved, Thomas Bloom, a mathematician at the University of Manchester, UK, set up erdosproblems.com, which lists more than 1,100 problems and notes that around 430 of them come with solutions. 

When Bubeck celebrated GPT-5’s breakthrough, Bloom was quick to call him out. “This is a dramatic misrepresentation,” he wrote on X. Bloom explained that a problem isn’t necessarily unsolved if this website does not list a solution. That simply means Bloom wasn’t aware of one. There are millions of mathematics papers out there, and nobody has read all of them. But GPT-5 probably has.

It turned out that instead of coming up with new solutions to 10 unsolved problems, GPT-5 had scoured the internet for 10 existing solutions that Bloom hadn’t seen before. Oops!

There are two takeaways here. One is that breathless claims about big breakthroughs shouldn’t be made via social media: less knee-jerk, more gut check.

The second is that GPT-5’s ability to find references to previous work that Bloom wasn’t aware of is also amazing. The hype overshadowed something that should have been pretty cool in itself.

Mathematicians are very interested in using LLMs to trawl through vast numbers of existing results, François Charton, a research scientist who studies the application of LLMs to mathematics at the AI startup Axiom Math, told me when I talked to him about this Erdős gotcha.

But literature search is dull compared with genuine discovery, especially to AI’s fervent boosters on social media. Bubeck’s blunder isn’t the only example.

In August, a pair of mathematicians showed that no LLM at the time was able to solve a math puzzle known as Yu Tsumura’s 554th Problem. Two months later, social media erupted with evidence that GPT-5 now could. “Lee Sedol moment is coming for many,” one observer commented, referring to the Go master who lost to DeepMind’s AI AlphaGo in 2016.

But Charton pointed out that solving Yu Tsumura’s 554th Problem isn’t a big deal to mathematicians. “It’s a question you would give an undergrad,” he said. “There is this tendency to overdo everything.”

Meanwhile, more sober assessments of what LLMs may or may not be good at are coming in. At the same time that mathematicians were fighting on the internet about GPT-5, two new studies came out that looked in depth at the use of LLMs in medicine and law (two fields that model makers have claimed their tech excels at). 

Researchers found that LLMs could make certain medical diagnoses, but they were flawed at recommending treatments. When it comes to law, researchers found that LLMs often give inconsistent and incorrect advice. “Evidence thus far spectacularly fails to meet the burden of proof,” the authors concluded.

But that’s not the kind of message that goes down well on X. “You’ve got that excitement because everybody is communicating like crazy—nobody wants to be left behind,” Charton said. X is where a lot of AI news drops first, it’s where new results are trumpeted, and it’s where key players like Sam Altman, Yann LeCun, and Gary Marcus slug it out in public. It’s hard to keep up—and harder to look away.

Bubeck’s post was only embarrassing because his mistake was caught. Not all errors are. Unless something changes, researchers, investors, and boosters of every stripe will keep teeing one another up. “Some of them are scientists, many are not, but they are all nerds,” Charton told me. “Huge claims work very well on these networks.”

*****

There’s a coda! I wrote everything you’ve just read above for the Algorithm column in the January/February 2026 issue of MIT Technology Review magazine (out very soon). Two days after that went to press, Axiom told me its own math model, AxiomProver, had solved two open Erdős problems (#124 and #481, for the math fans in the room). That’s impressive stuff for a small startup founded just a few months ago. Yup—AI moves fast!

But that’s not all. Five days later the company announced that AxiomProver had solved nine out of 12 problems in this year’s Putnam competition, a college-level math challenge that some people consider harder than the better-known International Math Olympiad (which LLMs from both Google DeepMind and OpenAI aced a few months back). 

The Putnam result was lauded on X by big names in the field, including Jeff Dean, chief scientist at Google DeepMind, and Thomas Wolf, cofounder at the AI firm Hugging Face. Once again familiar debates played out in the replies. A few researchers pointed out that while the International Math Olympiad demands more creative problem-solving, the Putnam competition tests math knowledge—which makes it notoriously hard for undergrads, but easier, in theory, for LLMs that have ingested the internet.

How should we judge Axiom’s achievements? Not on social media, at least. And the eye-catching competition wins are just a starting point. Determining just how good LLMs are at math will require a deeper dive into exactly what these models are doing when they solve hard (read: hard for humans) math problems.


How I learned to stop worrying and love AI slop

Lately, everywhere I scroll, I keep seeing the same fish-eyed CCTV view: a grainy wide shot from the corner of a living room, a driveway at night, an empty grocery store. Then something impossible happens. JD Vance shows up at the doorstep in a crazy outfit. A car folds into itself like paper and drives away. A cat comes in and starts hanging out with capybaras and bears, as if in some weird modern fairy tale.

This fake-surveillance look has become one of the signature flavors of what people now call AI slop. For those of us who spend time online watching short videos, slop feels inescapable: a flood of repetitive, often nonsensical AI-generated clips that washes across TikTok, Instagram, and beyond. For that, you can thank new tools like OpenAI’s Sora (which exploded in popularity after launching in app form in September), Google’s Veo series, and AI models built by Runway. Now anyone can make videos, with just a few taps on a screen. 


If I were to locate the moment slop broke through into popular consciousness, I’d pick the video of rabbits bouncing on a trampoline that went viral this summer. For many savvy internet users, myself included, it was the first time we were fooled by an AI video, and it ended up spawning a wave of almost identical riffs, with people making videos of all kinds of animals and objects bouncing on the same trampoline. 

My first reaction was that, broadly speaking, all of this sucked. That’s become a familiar refrain, in think pieces and at dinner parties. Everything online is slop now—the internet “enshittified,” with AI taking much of the blame. Initially, I largely agreed, quickly scrolling past every AI video in a futile attempt to send a message to my algorithm. But then friends started sharing AI clips in group chats that were compellingly weird, or funny. Some even had a grain of brilliance buried in the nonsense. I had to admit I didn’t fully understand what I was rejecting—what I found so objectionable. 

To try to get to the bottom of how I felt (and why), I recently spoke to the people making the videos, a company creating bespoke tools for creators, and experts who study how new media becomes culture. What I found convinced me that maybe generative AI will not end up ruining everything. Maybe we have been too quick to dismiss AI slop. Maybe there’s a case for looking beyond the surface and seeing a new kind of creativity—one we’re watching take shape in real time, with many of us actually playing a part. 

The slop boom

“AI slop” can and does refer to text, audio, or images. But what’s really broken through this year is the flood of quick AI-generated video clips on social platforms, each produced by a short written prompt fed into an AI model. Under the hood, these models are trained on enormous data sets so they can predict what every subsequent frame should look or sound like. It’s much like the process by which text models produce answers in a chat, but slower and far more power-hungry.
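For the curious, here’s a toy sketch in Python of that next-frame-prediction loop. It is emphatically not how Sora or Veo work internally; the `predict_next_frame` stand-in just perturbs the previous frame. But it shows the autoregressive shape the paragraph describes: each new frame is conditioned on the prompt and everything generated so far.

```python
# Toy sketch of autoregressive video generation. A real model would be a
# huge neural network; this stand-in just nudges the last frame with noise.
import numpy as np

rng = np.random.default_rng(0)

def predict_next_frame(prompt_embedding, previous_frames):
    """Stand-in for a trained model: returns one 64x64 grayscale frame."""
    last = previous_frames[-1] if previous_frames else np.zeros((64, 64))
    return np.clip(last + 0.05 * rng.standard_normal((64, 64)), 0.0, 1.0)

def generate_video(prompt, num_frames=24):
    prompt_embedding = hash(prompt) % 1000  # placeholder for a text encoder
    frames = []
    for _ in range(num_frames):
        frames.append(predict_next_frame(prompt_embedding, frames))
    return np.stack(frames)  # shape: (num_frames, 64, 64)

video = generate_video("a cat hanging out with capybaras")
print(video.shape)  # (24, 64, 64)
```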

Early text-to-video systems, released around 2022 to 2023, could manage only a few seconds of blurry motion; objects warped in and out of existence, characters teleported around, and the giveaway that it was AI was usually a mangled hand or a melting face. In the past two years, newer models like Sora 2, Veo 3.1, and Runway’s latest Gen-4.5 have dramatically improved, creating realistic, seamless, and increasingly true-to-prompt videos that can last up to a minute. Some of these models even generate sound and video together, including ambient noise and rough dialogue.

These text-to-video models have often been pitched by AI companies as the future of cinema—tools for filmmakers, studios, and professional storytellers. The demos have leaned into widescreen shots and dramatic camera moves. OpenAI pitched Sora as a “world simulator” while courting Hollywood filmmakers with what it boasted were movie-quality shorts. Google introduced Veo 3 last year as a step toward storyboards and longer scenes, edging directly into film workflows. 

All this hinged on the idea that people wanted to make AI-generated videos that looked real. But the reality of how they’re being used is more modest, weirder—and arguably much more interesting. What has turned out to be the home turf for AI video is the six-inch screen in our hands. 

Anyone can use these tools, and plenty of people do: a report Adobe released in October shows that 86% of creators are using generative AI. But so are average social media users—people who aren’t “creators” so much as just people with phones.

That’s how you end up with clips showing things like Indian prime minister Narendra Modi dancing with Gandhi, a crystal that melts into butter the moment a knife touches it, or Game of Thrones reimagined as Henan opera—videos that are hypnotic, occasionally funny, and often deeply stupid. And while micro-trends didn’t start with AI—TikTok and Reels already ran on fast-moving formats—it feels as if AI poured fuel on that fire. Perhaps because the barrier to copying an idea becomes so low, a viral video like the bunnies on a trampoline can easily and quickly spawn endless variations on the same concept. You don’t need a costume or a filming location anymore; you just tweak the prompt, hit Generate, and share.

Big tech companies have also jumped on the idea of AI videos as a new social medium. The Sora app allows users to insert AI versions of themselves and other users into scenes. Meta’s Vibes app wants to turn your entire feed into nonstop AI clips.

Of course, the same frictionless setup that allows for harmless, delightful creations also makes it easy to generate much darker slop. Sora has already been used to create so many racist deepfakes of Martin Luther King Jr. that the King estate pushed the company to block new MLK videos entirely. TikTok and X are seeing Sora-watermarked clips of women and girls being strangled circulating in bulk, posted by accounts seemingly dedicated to this one theme. And then there’s “nazislop,” the nickname for AI videos that repackage fascist aesthetics and memes into glossy, algorithm-ready content aimed at teens’ For You pages.

But the prevalence of bad actors hasn’t stopped short AI videos from flourishing as a form. New apps, Discord servers for AI creators, and tutorial channels keep multiplying. And increasingly, the energy in the community seems to be shifting away from trying to create stuff that “passes as real” toward embracing AI’s inherent weirdness. Every day, I stumble across creators who are stretching what “AI slop” is supposed to look like. I decided to talk to some of them.

Meet the creators

Like those fake surveillance videos, many popular viral AI videos rely on a surreal, otherworldly quality. As Wenhui Lim, an architecture designer turned full-time AI artist, tells me, “There is definitely a competition of ‘How weird can we push this?’ among AI video creators.”

It’s the kind of thing AI video tools seem to handle with ease: pushing physics past what a normal body can do or a normal camera can capture. This makes AI a surprisingly natural fit for satire, comedy skits, parody, and experimental video art—especially examples involving absurdism or even horror. Several popular AI creators that I spoke with eagerly tap into this capability. 

Drake Garibay, a 39-year-old software developer from Redlands, California, was inspired by body-horror AI clips circulating on social media in early 2025. He started playing with ComfyUI, a generative media tool, and ended up spending hours each week making his own strange creations. His favorite subject is morbid human-animal hybrids. “I fell right into it,” he says. “I’ve always been pretty artistic, [but] when I saw what AI video tools can do, I was blown away.”

Since the start of this year, Garibay has been posting his experiments online. One that went viral on TikTok, captioned “Cooking up some fresh AI slop,” shows a group of people pouring gooey dough into a pot. The mixture suddenly sprouts a human face, which then emerges from the boiling pot with a head and body. It has racked up more than 8.3 million views.


AI video technology is evolving so quickly that even for creative professionals, there is a lot to experiment with. Daryl Anselmo, a creative director turned digital artist, has been experimenting with the technology since its early days, posting an AI-generated video every day since 2021. He tells me that he uses a wide range of tools, including Kling, Luma, and Midjourney, and is constantly iterating. To him, testing the boundaries of these AI tools is sometimes itself the reward. “I would like to think there are impossible things that you could not do before that are still yet to be discovered. That is exciting to me,” he says.

Anselmo has collected his daily creations over the past four years into an art project, titled AI Slop, that has been exhibited in multiple galleries, including the Grand Palais Immersif in Paris. There’s obvious attention to mood and composition; some clips feel closer to an art-house vignette than a throwaway meme. Over time, Anselmo’s project has taken a darker turn as his subjects shift from landscapes and interior design toward more of the body horror that drew Garibay in.

His breakout piece, feel the agi, shows a hyperrealistic bot peeling open its own skull. Another video he shared recently features a midnight diner populated by anthropomorphized Tater Tots, titled Tot and Bothered; with its vintage palette and slow, mystical soundtrack, the piece feels like a late-night fever dream. 

One further benefit of these AI systems is that they make it easier for creators to build recurring spaces and casts of characters that function like informal franchises. Lim, for instance, is the creator of a popular AI video account called Niceaunties, inspired by the “auntie culture” in Singapore, where she’s from.

“The word ‘aunties’ often has a slightly negative connotation in Singaporean culture. They are portrayed as old-fashioned, naggy, and lacking boundaries. But they are also so resourceful, funny, and at ease with themselves,” she says. “I want to create a world where it’s different for them.” 

Her cheeky, playful videos show elderly Asian women merging with fruits, other objects, and architecture, or just living their best lives in a fantasy world. A viral video called Auntlantis, which has racked up 13.5 million views on Instagram, imagines silver-haired aunties as industrial mermaids working in an underwater trash-processing plant.  

There’s also Granny Spills, an AI video account that features a glamorous, sassy old lady spitting hot takes and life advice to a street interviewer. It gained 1.8 million Instagram followers within three months of launch, posting new videos almost every day. Although the granny’s face looks slightly different in every video, the pink color scheme and her outfit stay mostly consistent. Creators Eric Suerez and Adam Vaserstein tell me that their entire workflow is powered by AI, from writing the script to constructing the scenes. Their role, as a result, becomes close to creative directing.


These projects often spin off merch, miniseries, and branded universes. The creators of Granny Spills, for example, have expanded their network, creating a Black granny as well as an Asian granny to cater to different audiences. The grannies now appear in crossover videos, as if they share the same fictional universe, pushing traffic between channels. 

In the same vein, it’s now more possible than ever to participate in an online trend. Consider “Italian brainrot,” which went viral earlier this year. Beloved by Gen Z and Gen Alpha, these videos feature human–animal–object hybrids with pseudo-Italian names like “Bombardiro Crocodilo” and “Tralalero Tralala.” According to Know Your Meme, the craze began with a few viral TikTok sounds in fake Italian. Soon, a lot of people were participating in what felt like a massive collaborative hallucination, inventing characters, backstories, and worldviews for an ever-expanding absurdist universe.


“Italian brainrot was great when it first hit,” says Denim Mazuki, a software developer and content creator who has been following the trend. “It was the collective lore-building that made it wonderful. Everyone added a piece. The characters were not owned by a studio or a single creator—they were made by the chronically online users.” 

This trend and others are further enabled by specialized and sophisticated new tools—like OpenArt, a platform designed not just for video generation but for video storytelling, which gives users frame-to-frame control over a developing narrative.

Making a video on OpenArt is straightforward: Users start with a few AI-generated character images and a line of text as simple as “cat dancing in a park.” The platform then spins out a scene breakdown that users can tweak act by act, and they can run it through multiple mainstream models and compare the results to see which look best.
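As a rough illustration of that storyboard-style pipeline (not OpenArt’s actual API, which this story doesn’t document), here’s a hypothetical Python sketch. Every function and model name below is invented: one prompt becomes a scene breakdown, each scene is rendered by more than one model, and the results are compared side by side.

```python
# Hypothetical storyboard pipeline: prompt -> scenes -> render -> compare.
# Nothing here corresponds to a real OpenArt (or other) API.
from dataclasses import dataclass

@dataclass
class Scene:
    description: str
    duration_seconds: int

def break_into_scenes(prompt: str) -> list[Scene]:
    """Stand-in for the planning step: split one idea into acts."""
    return [
        Scene(f"{prompt} -- establishing shot", 4),
        Scene(f"{prompt} -- close-up on the action", 5),
        Scene(f"{prompt} -- wide shot to end the scene", 3),
    ]

def render_scene(scene: Scene, model: str) -> str:
    """Stand-in for a call to a video model; returns a fake clip ID."""
    return f"{model}:{abs(hash(scene.description)) % 10_000}"

prompt = "cat dancing in a park"
scenes = break_into_scenes(prompt)

# Run the same storyboard through several (imaginary) models and compare.
for model in ["model_a", "model_b"]:
    clips = [render_scene(s, model) for s in scenes]
    print(model, clips)
```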

OpenArt cofounders Coco Mao and Chloe Fang tell me they sponsored tutorial videos and created quick-start templates to capitalize specifically on the trend of regular people wanting to get in on Italian brainrot. They say more than 80% of their users have no artistic background. 

In defense of slop

The current use of the word “slop” online traces back to the early 2010s on 4chan, a forum known for its insular and often toxic in-jokes. As the term has spread, its meaning has evolved; it’s now a catchall pejorative for anything that feels like low-quality mass production aimed at an unsuspecting public, says Adam Aleksic, an internet linguist. People now slap it onto everything from salad bowls to meaningless work reports.

But even with that broadened usage, AI remains the first association: “slop” has become a convenient shorthand for dismissing almost any AI-generated output, regardless of its actual quality. The Cambridge Dictionary’s new entry for “slop” will almost certainly cement this perception: it defines the term as “content on the internet that is of very low quality, especially when it is created by AI.”

Perhaps unsurprisingly, the word has become a charged label among AI creators. 

Anselmo embraces it semi-ironically, hence the title of his yearslong art project. “I see this series as an experimental sketchbook,” he says. “I am working with the slop, pushing the models, breaking them, and developing a new visual language. I have no shame that I am deep into AI.” Anselmo says that he does not concern himself with whether his work is “art.”

Garibay, the creator of the viral video where a human face emerged from a pot of physical slop, uses the label playfully. “The AI slop art is really just a lot of weird glitchy stuff that happens, and there’s not really a lot of depth usually behind it, besides the shock value,” he says. “But you will find out really fast that there is a heck of a lot more involved, if you want a higher-end result.” 

That’s largely in line with what Suerez and Vaserstein, the creators of Granny Spills, tell me. They actually hate it when their work is called slop, given the way the term is often used to dismiss AI-generated content out of hand. It feels disrespectful of their creative input, they say. Even though they do not write the scripts or paint the frames, they say they are making legitimate artistic choices. 

Indeed, for most of the creators I spoke to, making AI content is rarely a one-click process. They tell me that it takes skill, trial and error, and a strong sense of taste to consistently get the visuals they want. Lim says a single one-minute video can take hours, sometimes even days, to make. Anselmo, for his part, takes pride in actively pushing the model rather than passively accepting its output. “There’s just so many things that you can do with it that go well beyond ‘Oh, way to go, you typed in a prompt,’” he says.

Ultimately, slop evokes a lot of feelings. Aleksic puts it well: “There’s a feeling of guilt on the user end for enjoying something that you know to be lowbrow. There’s a feeling of anger toward the creator for making something that is not up to your content expectations, and all the while there’s a pervasive algorithmic anxiety hanging over us. We know that the algorithm and the platforms are to blame for the distribution of this slop.”

And that anxiety long predates generative AI. We’ve been living for years with the low-grade dread of being nudged, of having our taste engineered and our attention herded, so it’s not surprising that the anger latches onto the newest, most visible culprit. Sometimes it is misplaced, sure, but I also get the urge to assert human agency against a new force that seems to push all of us away from what we know and toward something we didn’t exactly choose.

But the negative association does real harm to early adopters. Every AI video creator I spoke to described receiving hateful messages and comments simply for using these tools at all. These messages accuse AI creators of taking opportunities away from artists already struggling to make a living, and some dismiss their work as “grifting” and “garbage.” The backlash, of course, did not come out of nowhere. A Brookings study of one major freelance marketplace found that after new generative-AI tools launched in 2022, freelancers in AI-exposed occupations saw about a 2% decline in contracts and a 5% drop in earnings.

“The phrase ‘AI slop’ implies, like, a certain ease of creation that really bothers a lot of people—understandably, because [making AI-generated videos] doesn’t incorporate the artistic labor that we typically associate with contemporary art,” says Mindy Seu, a researcher, artist, and associate professor in digital arts at UCLA. 

At the root of the conflict here is that the use of AI in art is still nascent; there are few best practices and almost no guardrails. And there’s a kind of shame involved—one I recognize when I find myself lingering on bad AI content. 

Historically, new technology has always carried a whiff of stigma when it first appears, especially in creative fields where it seems to encroach on a previously manual craft. Seu says that digital art, internet art, and new media have been slow to gain recognition from cultural institutions, which remain key arbiters of what counts as “serious” or “relevant” art. 

For many artists, AI now sits in that same lineage: “Every big advance in technology yields the question ‘What is the role of the artist?’” she says. This is true even if creators are not seeing it as a replacement for authorship but simply as another way to create. 

Mao, the OpenArt founder, believes that learning how to use generative video tools will be crucial for future content creators, much as learning Photoshop was almost synonymous with graphic design for a generation. “It is a skill to be learned and mastered,” she says.

There is a generous reading of the phenomenon so many people call AI slop, which is that it is a kind of democratization. A rare skill shifts away from craftsmanship to something closer to creative direction: being able to describe what you want with enough linguistic precision, and to anchor it in references the model is likely to understand. You have to know how to ask, and what to point to. In that sense, discernment and critique sit closer to the center of the process than ever before.

It’s not just about creative direction, though, but about the human intention behind the creation. “It’s very easy to copy the style,” Lim says. “It’s very easy to make, like, old Asian women doing different things, but they [imitators] don’t understand why I’m doing it … Even when people try to imitate that, they don’t have that consistency.”

“It’s the idea behind AI creation that makes it interesting to look at,” says Zach Lieberman, a professor at the MIT Media Lab who leads a research group called Future Sketches, where members explore code-enabled images. Lieberman, who has been posting daily sketches generated by code for years, tells me that mathematical logic is not the enemy of beauty. He echoes Mao in saying that a younger generation will inevitably see AI as just another tool in the toolbox. Still, he feels uneasy: By relying so heavily on black-box AI models, artists lose some of the direct control over output that they’ve traditionally enjoyed.

A new online culture

For many people, AI slop is simply everything they already resent about the internet, turned up: ugly, noisy, and crowding out human work. It’s only possible because the models behind it have been trained to take all creative work and make it fodder, stripped of origin, aura, or credit, and blended into something engineered to be mathematically average—arguably perfectly mediocre, by design. Charles Pulliam-Moore, a writer for The Verge, calls this the “formulaic derivativeness” that already defines so much internet culture: unimaginative, unoriginal, and uninteresting.

But I love internet culture, and I have for a long time. Even at its worst, it’s bad in an interesting way: It offers a corner for every kind of obsession and invites you to add your own. Years of being chronically online have taught me that the real logic of slop consumption isn’t mastery but a kind of submission. As a user, I have almost no leverage over platforms or algorithms; I can’t really change how they work. Submission, though, doesn’t mean giving up. It’s more like recognizing that the tide is stronger than you and choosing to let it carry you. Good scrolling isn’t about control anyway. It’s closer to surfing, and sometimes you wash up somewhere ridiculous, but not entirely alone.

Mass-produced clickbait content has always been around. What’s new is that we can now watch it being generated in real time, on a scale that would have been unimaginable before. And the way we respond to it in turn shapes new content (see the trampoline-bouncing bunnies) and more culture and so on. Perhaps AI slop is born of submission to algorithmic logic. It’s unserious, surreal, and spectacular in ways that mirror our relationship to the internet itself. It is so banal—so aggressively, inhumanly mediocre—that it loops back around and becomes compelling.

To “love AI slop” is to admit the internet is broken, that the infrastructure of culture is opportunistic and extractive. But even in that wreckage, people still find ways to play, laugh, and make meaning. 

Earlier this fall, months after I was briefly fooled by the bunny video, I was scrolling on Rednote and landed on videos by Mu Tianran, a Chinese creator who acts out weird skits that mimic AI slop. In one widely circulated clip, he plays a street interviewer asking other actors, “Do you know you are AI generated?”—parodying an earlier wave of AI-generated street interviews. The actors’ responses seem so AI, but of course they’re not: their eyes fixed just off-camera, their laughter a beat too slow, their movements slightly wrong.

Watching this, it was hard to believe that AI was about to snuff out human creativity. If anything, it has handed people a new style to inhabit and mock, another texture to play with. Maybe it’s all fine. Maybe the urge to imitate, remix, and joke is still stubbornly human, and AI cannot possibly take it away. 

Researchers are getting organoids pregnant with human embryos

At first glance, it looks like the start of a human pregnancy: A ball-shaped embryo presses gently into the receptive lining of the uterus and then grips tight, burrowing in as the first tendrils of a future placenta appear. 

This is implantation—the moment that pregnancy officially begins.

Only none of it is happening inside a body. These images were captured in a Beijing laboratory, inside a microfluidic chip, as scientists watched the scene unfold.

This transparent microfluidic chip is used to grow an organoid that mimics the lining of a uterus. (Image courtesy of the researchers.)

In three papers published this week by Cell Press, scientists are reporting what they call the most accurate efforts yet to mimic the first moments of pregnancy in the lab. They’ve taken human embryos from IVF centers and let them merge with “organoids” made of endometrial cells, which form the lining of the uterus.

The reports—two from China and a third involving a collaboration among researchers in the United Kingdom, Spain, and the US—show how scientists are using engineered tissues to better understand early pregnancy and potentially improve IVF outcomes.

“You have an embryo and the endometrial organoid together,” says Jun Wu, a biologist at the University of Texas Southwestern Medical Center, in Dallas, who contributed to both Chinese reports. “That’s the overarching message of all three papers.”

According to the papers, these 3D combinations are the most complete re-creations yet of the first days of pregnancy and should be useful for studying why IVF treatments often fail.

In each case, the experiments were stopped when the embryos were two weeks old, if not sooner. That is due to legal and ethical rules that typically restrict scientists from going any further than 14 days.

In your basic IVF procedure, an egg is fertilized in the lab and allowed to develop into a spherical embryo called a blastocyst—a process that takes a few days. That blastocyst then gets put into a patient’s uterus in the hope it will establish itself there and ultimately become a baby.

Two blastoids, or artificial embryos (circles), grow inside an organoid. (Image courtesy of the researchers.)

But that’s a common failure point. Many patients will learn that their IVF procedure didn’t work because an embryo never attached.

In the new reports, it’s that initial bond between mother and embryo that is being reproduced in the lab. “IVF means in vitro fertilization, but now this is the stage of in vitro implantation,” says Matteo Molè, a biologist at Stanford University whose results with collaborators in Europe are among those published today. “Considering that implantation is a barrier [to pregnancy], we have the potential to increase the success rate if we can model it in the laboratory.”

Normally implantation is entirely hidden from view because it occurs in someone’s uterus, says Hongmei Wang, a developmental biologist at the Beijing Institute for Stem Cell and Regenerative Medicine, who co-led the effort there. Wang often studies monkeys because she can interrupt their pregnancies to collect the tissues she needs to see. “We’ve always hoped to understand human embryo implantation, but we have lacked a way to do so,” she says. “It’s all happening in the uterus.”

In the Beijing study, researchers tested about 50 donated IVF embryos, but they also ran a thousand more experiments using so-called blastoids. The latter are mimics of early-stage human embryos manufactured from stem cells. Blastoids are easy to make in large numbers and, since they aren’t true embryos, don’t have as many ethical rules on their use.

“The question was, if we have these blastoids, what can we use them for?” says Leqian Yu, the senior author of the report from the Beijing Institute. “The obvious next step was implantation. So how do you do that?”

For the Beijing team, the answer was to build a soft silicone chamber with tiny channels to add nutrients and a space to grow the uterine organoid. After that, blastoids—or real embryos—could be introduced through a window in the device, so the “pregnancy” could start.

“The key question we want to try to answer is what is the first cross-talk between embryo and mother,” says Yu. “I think this is maybe the first time we can see the entire process.”

Medical applications

This isn’t the first time researchers have tried using organoids for this kind of research. At least two startup companies have raised funds to commercialize similar systems—in some cases presenting the organoids as a tool to predict IVF success. In addition to Dawn Bio, a startup based in Vienna, there is Simbryo Technologies, in Houston, which last month said it would begin offering “personalized” predictions for IVF patients using blastoids and endometrial organoids.

To do that test, doctors will take a biopsy of a patient’s uterine lining and grow organoids from it. After that, blastoids will be added to the organoids to gauge whether a woman is likely to be able to support a pregnancy. If the blastoids don’t start to implant, it could mean the patient’s uterus isn’t receptive, which would help explain why IVF isn’t working.

The Beijing team thinks the pregnancy organoids could also be used to identify drugs that might help those patients. In their paper, they describe how they made organoids out of tissue taken from women who’ve had repeated IVF failures. Then they tested 1,119 approved drugs on those samples to see whether any improved the odds of implantation.

Several seemed to have helpful effects. One chemical, avobenzone, an ingredient in some types of sunblock, increased the chance that a blastoid would start implanting from just 5% to around 25%. Yu says his center hopes to eventually start a clinical trial if they can find the right drug to try.

Artificial womb?

The Beijing group is working on ways to improve the organoid system so that it’s even more realistic. Right now, it lacks important cell types, including immune cells and a blood supply. Yu says a next step he’s working on is to add blood vessels and tiny pumps to his chip device, so that he can give the organoids a kind of rudimentary circulation.

This means that in the near future, blastoids or embryos could likely be grown longer, raising questions about how far scientists will be able to take pregnancy in the lab. “I think this technology does raise the possibility of growing things longer,” says Wu, who says some view the research as an initial step toward creating babies entirely outside the body.

However, Wu says incubating a human to term in the laboratory remains impossible, for the time being. “This technology is certainly related to ectogenesis, or development outside the body,” he says. “But I don’t think it’s anywhere near an artificial womb. That’s still science fiction.”