AI and the future of sex

The power of pornography doesn’t lie in arousal but in questions. What is obscene? What is ethical or safe to watch? 

We don’t have to consume or even support it, but porn will still demand answers. The question now is: What is “real” porn? 

Anti-porn crusades have been at the heart of the US culture wars for generations, but by the start of the 2000s, the issue had lost its hold. Smartphones made porn too easy to spread and hard to muzzle. Porn became a politically sticky issue, too entangled with free speech and evolving tech. An uneasy truce was made: As long as the imagery was created by consenting adults and stayed on the other side of paywalls and age verification systems, it was to be left alone. 

But today, as AI porn infiltrates dinner tables, PTA meetings, and courtrooms, that truce may not endure much longer. The issue is already making its way back into the national discourse; Project 2025, the Heritage Foundation–backed policy plan for a future Republican administration, proposes the criminalization of porn and the arrest of its creators.

But what if porn is wholly created by an algorithm? In that case, whether it’s obscene, ethical, or safe becomes secondary to a different question: What does it mean for porn to be “real”—and what will the answer demand from all of us?

During my time as a filmmaker in adult entertainment, I witnessed seismic shifts: the evolution from tape to digital, the introduction of new HIV preventions, and the disruption of the industry by free streaming and social media. An early tech adopter, porn was an industry built on desires, greed, and fantasy, propped up by performances and pharmaceuticals. Its methods and media varied widely, but the one constant was its messy humanity. Until now.

When AI-generated pornography first emerged, it was easy to keep a forensic distance from the early images and dismiss them as a parlor trick. They were laughable and creepy: cheerleaders with seven fingers and dead, wonky eyes. Then, seemingly overnight, they reached uncanny photorealism. Synthetic erotica, like hentai and CGI, has existed for decades, but I had never seen porn like this. These were the hallucinations of a machine trained on a million pornographic images, both the creation of porn and a distillation of it. Femmes fatales with psychedelic genitalia, straight male celebrities in same-sex scenes, naked girls in crowded grocery stores—posted not in the dark corners of the internet but on social media. The images were glistening and warm, raising fresh questions about consent and privacy. What would these new images turn us into?

In September of 2023, the small Spanish town of Almendralejo was forced to confront this question. Twenty girls returned from summer break to find naked selfies they’d never taken being passed around at school. Boys had rendered the images using an AI “nudify” app with just a few euros and a yearbook photo. The girls were bullied and blackmailed, suffered panic attacks and depression. The youngest was 11. The school and parents were at a loss. The tools had arrived faster than the speed of conversation, and they did not discriminate. By the end of the school year, similar cases had spread to Australia, Quebec, London, and Mexico. Then explicit AI images of Taylor Swift flooded social media. If she couldn’t stop this, a 15-year-old from Michigan stood no chance.

The technology behind pornography never slows down, regardless of controversies. When students return to school this fall, it will be in the shadow of AI video engines like Sora and Runway 3, which produce realistic video from text prompts and photographs. If still images have caused so much global havoc, imagine what video could do and where the footage could end up. 

As porn becomes more personal, it’s also becoming more personalized. Users can now check boxes on a list of options as long as the Cheesecake Factory menu to create their ideal scenes: categories like male, female, and trans; ages from 18 to 90; breast and penis size; details like tan lines and underwear color; backdrops like grocery stores, churches, the Eiffel Tower, and Stonehenge; even weather, like tornadoes. It may be 1s and 0s, but AI holds no binary; it holds no judgment or beauty standards. It can render seldom-represented bodies, like those of mature, transgender, and disabled people, in all pairings. Hyper-customizable porn will no longer require performers—only selections and an answer to the question “What is it that I really like?” While Hollywood grapples with the ethics of AI, artificial porn films will become a reality. Celebrities may boost their careers by promoting their synthetic sex tapes on late-night shows.

The progress of AI porn may shift our memories, too. AI is already used to extend home movies and turn vintage photos into live-action scenes. What happens when we apply this to sex? Early sexual images etch themselves on us: glimpses of flesh from our first crush, a lost lover, a stranger on the bus. These erotic memories depend on the specific details for their power: a trail of hair, panties in a specific color, sunlight on wet lips, my PE teacher’s red gym shorts. They are ideal for AI prompts. 

Porn and real-life sex affect each other in a loop. If people become accustomed to getting exactly what they want from erotic media, this could further affect their expectations of relationships. A first date may have another layer of awkwardness if each party has already seen an idealized, naked digital doppelganger of the other. 

Despite (or because of) this blurring of lines, we may actually start to see a genre of “ethical porn.” Without the need for sets, shoots, or even performers, future porn studios might not deal with humans at all. This may be appealing for some viewers, who can be sure that new actors are not underage, trafficked, or under the influence.

A synergy has been brewing since the ’90s, when CD-ROM games, life-size silicone dolls, and websites introduced “interactivity” to adult entertainment. Thirty years later, AI chatbot “partners” and cheaper, lifelike sex dolls are more accessible than ever. Porn tends to merge all available tech toward complete erotic immersion. The realism of AI models has already broken the dam to the uncanny valley. Soon, these avatars will be powered by chatbots and embodied in three-dimensional prosthetics, all existing in virtual-reality worlds. What follows will be the fabled sex robot. 

So what happens when we’ve removed the “messy humanity” from sex itself? Porn is defined by the needs of its era. Ours has been marked by increasing isolation. The pandemic further conditioned us to digitize our most intimate moments, bringing us FaceTime hospital visits and weddings, and caused a deep discharge of our social batteries. Adult entertainment may step into that void. The rise of AI-generated porn may be a symptom of a new synthetic sexuality, not the cause. In the near future, we may find this porn arousing because of its artificiality, not in spite of it.

Leo Herrera is a writer and artist. He explores how tech intersects with sex and culture on Substack at Herrera Words.

Inside the long quest to advance Chinese writing technology

Every second of every day, someone is typing in Chinese. In a park in Hong Kong, at a desk in Taiwan, in the checkout line at a Family Mart in Shanghai, the automatic doors chiming a song each time they open. Though the mechanics look a little different from typing in English or French—people usually type the pronunciation of a character and then pick it out of a selection that pops up, autocomplete-style—it’s hard to think of anything more quotidian. The software that allows this exists beneath the awareness of pretty much everyone who uses it. It’s just there.

The Chinese Computer: A Global History of the Information Age
Thomas S. Mullaney
MIT PRESS, 2024

What’s largely been forgotten—and what most people outside Asia never even knew in the first place—is that a large cast of eccentrics and linguists, engineers and polymaths, spent much of the 20th century torturing themselves over how Chinese was ever going to move away from the ink brush to any other medium. This process has been the subject of two books published in the last two years: Thomas Mullaney’s scholarly work The Chinese Computer and Jing Tsu’s more accessible Kingdom of Characters. Mullaney’s book focuses on the invention of various input systems for Chinese starting in the 1940s, while Tsu’s covers more than a century of efforts to standardize Chinese and transmit it using the telegraph, typewriter, and computer. But both reveal a story that’s tumultuous and chaotic—and just a little unsettling in the futility it reflects.   

Kingdom of Characters: The Language Revolution That Made China Modern
Jing Tsu
RIVERHEAD BOOKS, 2022

Chinese characters are not as cryptic as they sometimes appear. The general rule is that they stand for a word, or sometimes part of a word, and learning to read is a process of memorization. Along the way, it becomes easier to guess how a character should be spoken, because often phonetic elements are tucked in among other symbols. The characters were traditionally written by hand with a brush, and part of becoming literate involves memorizing the order in which the strokes are made. Put them in the wrong order and the character doesn’t look right. Or rather, as I found some years ago as a second-language learner in Guangzhou, China, it looks childish. (My husband, a translator of Chinese literature, found it hilarious and adorable that at the age of 30, I wrote like a kindergartner.)

The trouble, however, is that there are a lot of characters. One needs to know at least a few thousand to be considered basically literate, and there are thousands more beyond that basic set. Many modern learners of Chinese devote themselves essentially full-time to learning to read, at least in the beginning. More than a century ago, this was such a monumental task that leading thinkers worried it was impairing China’s ability to survive the attentions of more aggressive powers.

In the 19th century, a huge proportion of Chinese people were illiterate. They had little access to schooling. Many were subsistence farmers. China, despite its immense population and vast territory, was perpetually finding itself on the losing end of deals with nimbler, more industrialized nations. The Opium Wars, in the mid-19th century, had led to a situation where foreign powers effectively colonized Chinese soil. What advanced infrastructure there was had been built and was owned by foreigners.  

Some felt these things were connected. Wang Zhao, for one, was a reformer who believed that a simpler way to write spoken Chinese was essential to the survival of the nation. Wang’s idea was to use a set of phonetic symbols, representing one specific dialect of Chinese. If people could sound out words, having memorized just a handful of shapes the way speakers of languages using an alphabet did, they could become literate more quickly. With literacy, they could learn technical skills, study science, and help China get ownership of its future back. 

Wang believed in this goal so strongly that though he’d been thrown out of China in 1898, he returned two years later in disguise. After arriving by boat from Japan, he traveled over land on foot in the costume of a Buddhist monk. His story forms the first chapter of Jing Tsu’s book, and it is thick with drama, including a shouting match and brawl on the grounds of a former palace, during a meeting to decide which dialect a national version of such a system should represent. Wang’s system for learning Mandarin was used by schools in Beijing for a few years, but ultimately it did not survive the rise of competing systems and the period of chaos that swallowed China not long after the Qing Dynasty’s fall in 1911. Decades of disorder and uneasy truces gave way to Japan’s invasion of Manchuria in northern China in 1931. For a long time, basic survival was all most people had time for.

However, strange inventions soon began to turn up in China. Chinese students and scientists abroad had started to work on a typewriter for the language, which they felt was lagging behind others. Texts in English and other tongues using Roman characters could be printed swiftly and cheaply with keyboard-controlled machines that injected liquid metal into type molds, but Chinese texts required thousands upon thousands of bits of type to be placed in a manual printing press. And while English correspondence could be whacked out on a typewriter, Chinese correspondence was still, after all this time, written by hand.      

Of all the technologies Mullaney and Tsu describe, these baroque metal monsters stick most in the mind. Equipped with cylinders and wheels, with type arrayed in starbursts or in a massive tray, they are simultaneously writing machines and incarnations of philosophies about how to organize a language. Because Chinese characters don’t have an inherent order (no A-B-C-D-E-F-G) and because there are so many (if you just glance at 4,000 of them, you’re not likely to spot the one you need quickly), people tried to arrange these bits of type according to predictable rules. The first article ever published by Lin Yutang, who would go on to become one of China’s most prominent writers in English, described a system of ordering characters according to the number of strokes it took to form them. He eventually designed a Chinese typewriter that consumed his life and finances, a lovely thing that failed its demo in front of potential investors.

woman using a large desk-sized terminal
Chinese keyboard designers considered many interfaces, including tabletop-size devices that included 2,000 or more commonly used characters.
PUBLIC DOMAIN/COURTESY OF THOMAS S. MULLANEY

Technology often seems to demand new ways of engaging with the physical, and the Chinese typewriter was no exception. When I first saw a functioning example, at a private museum in a basement in Switzerland, I was entranced by the gliding arm and slender rails of the sheet-cake-size device, its tray full of characters. “Operating the machine was a full-body exercise,” Tsu writes of a very early typewriter from the late 1890s, designed by an American missionary. Its inventor expected that with time, muscle memory would take over, and the typist would move smoothly around the machine, picking out characters and depressing keys. 

However, though Chinese typewriters eventually got off the ground (the first commercial typewriter was available in the 1920s), a few decades later it became clear that the next challenge was getting Chinese characters into the computer age. And there was still the problem of how to get more people reading. Through the 1930s, ’40s, ’50s, and ’60s, systems for ordering and typing Chinese continued to occupy the minds of intellectuals; particularly odd and memorable is the story of the librarian at Sun Yat-sen University in Guangzhou, who in the 1930s came up with a system of light and dark glyphs like semaphore flags to stand for characters. Mullaney and Tsu both linger on the case of Zhi Bingyi, an engineer imprisoned in solitary confinement during the Cultural Revolution in the late 1960s, who was inspired by the characters of a slogan written on his cell wall to devise his own code for inputting characters into a computer.

The tools for literacy were advancing over the same period, thanks to government-mandated reforms introduced after the Communist Revolution in 1949. To assist in learning to read, everyone in mainland China would now be taught pinyin, a system that uses Roman letters to indicate how Chinese characters are pronounced. Meanwhile, thousands of characters would be replaced with simplified versions, with fewer strokes to learn. This is still how it’s done today in the mainland, though in Taiwan and Hong Kong, the characters are not simplified, and Taiwan uses a different pronunciation guide, one based on 37 phonetic symbols and five tone marks.

Myriad ideas were thrown at the problem of getting these characters into computers. Images of a graveyard of failed designs—256-key keyboards and the enormous cylinder of the Ideo-Matic Encoder, a keyboard with more than 4,000 options—are scattered poignantly through Mullaney’s pages. 

In Tsu’s telling, perhaps the most consequential link between this awkward period of dedicated hardware and today’s wicked-quick mobile-phone typing came in 1988, with an idea hatched by engineers in California. “Unicode was envisioned as a master converter,” she writes. “It would bring all human script systems, Western, Chinese, or otherwise, under one umbrella standard and assign each character a single, standardized code for communicating with any machine.” Once Chinese characters had Unicode codes, they could be manipulated by software like any other glyph, letter, or symbol. Today’s input systems allow users to call up and select characters using pinyin or stroke order, among other options.
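To make the mechanics concrete, here is a minimal Python sketch, under loose assumptions, of how a pronunciation-led input method sits on top of Unicode: each character is identified by a standardized code point, and typing a pinyin syllable simply looks up candidate characters for the user to pick from. The tiny candidate table is invented for illustration and is not any real input method’s data.

```python
# Minimal sketch of a pinyin-style input method built on Unicode.
# The candidate table below is a toy example, not real IME data.

CANDIDATES = {
    "ma": ["妈", "马", "吗", "码"],   # several characters share the syllable "ma"
    "zhong": ["中", "种", "钟"],
    "wen": ["文", "问", "闻"],
}

def show_code_points(chars):
    """Print the Unicode code point that identifies each character."""
    for ch in chars:
        print(f"{ch} -> U+{ord(ch):04X}")

def pick(pinyin, choice):
    """Return the character the user selects from the candidate list."""
    options = CANDIDATES.get(pinyin, [])
    return options[choice] if 0 <= choice < len(options) else None

if __name__ == "__main__":
    show_code_points("中文")                   # e.g. 中 -> U+4E2D, 文 -> U+6587
    print(pick("zhong", 0) + pick("wen", 0))   # prints "中文"
```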

There is something curiously deflating, however, about the way both these books end. Mullaney’s careful documenting of the typing machines of the last century and Tsu’s collection of adventurous tales about language show the same thing: A simply unbelievable amount of time, energy, and cleverness was poured into making Chinese characters easier for both machines and the human mind to manipulate. But very few of these systems seem to have had any direct impact on the current solutions, like the pronunciation-led input systems that more than a billion people now use to type Chinese. 

This pattern of evolution isn’t unique to language. As the child of a futurist, I’ve seen firsthand that the path to where we are is littered with technological dead ends. The month after Google Glass, the glasses-borne computer, made headlines, my mother helped set up an exhibit of personal heads-up displays. In the obscurity of a warehouse space, ghostly white foam heads each bore a crown of metal, glass, and plastic, the attempts of various inventors to put a screen in front of our eyes. Augmented reality seemed as if it might finally be arriving in the hands of the people—or, rather, on their faces. 

That version of the future did not materialize, and if augmented-reality viewing ever does become part of everyday life, it won’t be through those objects. When historians write about these devices, in books like these, I don’t think they will be able to trace a chain of unbroken thought, a single arc from idea to fruition.

A charming moment, late in Mullaney’s book, speaks to this. He has been slipping letters in the mailboxes of people he’s found listed as inventors of input methods in the Chinese patent database, and now he’s meeting one such inventor, an elderly man, and his granddaughter in a Beijing Starbucks. The old fellow is pleased to talk about his approach, which involves the graphical shapes of Chinese characters. But his granddaughter drops a bomb on Mullaney when she leans in and whispers, “I think my input system is a bit easier to use.” It turns out both she and her father have built systems of their own. 

The story’s not over, in other words.    

People tinker with technology and systems of thought like those detailed in these two books not just because they have to, but because they want to. And though it’s human nature to want to make a trajectory out of what lies behind us so that the present becomes a grand culmination, what these books detail are episodes in the life of a language. There is no beginning, no middle, no satisfying end. There is only evolution—an endless unfurling of something always in the process of becoming a fuller version of itself. 

Veronique Greenwood is a science writer and essayist based in England. Her work has appeared in the New York Times, the Atlantic, and many other publications.

Move over, text: Video is the new medium of our lives

The other day I idly opened TikTok to find a video of a young woman refinishing an old hollow-bodied electric guitar.

It was a montage of close-up shots—looking over her shoulder as she sanded and scraped the wood, peeled away the frets, expertly patched the cracks with filler, and then spray-painted it a radiant purple. She compressed days of work into a tight 30-second clip. It was mesmerizing.

Of course, that wasn’t the only video I saw that day. In barely another five minutes of swiping around, I saw a historian discussing the songs Tolkien wrote in The Lord of the Rings; a sailor puzzling over a capsized boat he’d found deep at sea; a tearful mother talking about parenting a child with ADHD; a Latino man laconically describing a dustup with his racist neighbor; and a linguist discussing how Gen Z uses video-game metaphors in everyday life.

I could go on. I will! And so, probably, will you. This is what the internet looks like now. It used to be a preserve of text and photos—but increasingly, it is a forest of video.

This is one of the most profound technology shifts that will define our future: We are entering the age of the moving image.

For centuries, when everyday people had to communicate at a distance, they really had only two options. They could write something down; they could send a picture. The moving image was too expensive to shoot, edit, and disseminate. Only pros could wield it.

The smartphone, the internet, and social networks like TikTok have rapidly and utterly transformed this situation. It’s now common, when someone wants to hurl an idea into the world, not to pull out a keyboard and type but to turn on a camera and talk. For many young people, video might be the prime way to express ideas.

As media thinkers like Marshall McLuhan have intoned, a new medium changes us. It changes the way we learn, the way we think—and what we think about. When mass printing emerged, it helped create a culture of news, mass literacy, and bureaucracy, and—some argue—the very idea of scientific evidence. So how will mass video shift our culture?

For starters, I’d argue, it is helping us share knowledge that used to be damnably hard to capture in text. I’m a long-distance cyclist, for example, and if I need to fix my bike, I don’t bother reading a guide. I look for a video explainer. If you’re looking to express—or absorb—knowledge that’s visual, physical, or proprioceptive, the moving image nearly always wins. Athletes don’t read a textual description of what they did wrong in the last game; they watch the clips. Hence the wild popularity, on video platforms, of instructional video—makeup tutorials, cooking demonstrations. (Or even learn-to-code material: I learned Python by watching coders do it.)

Video also is no longer about mere broadcast, but about conversation—it’s a way to respond to others, notes Raven Maragh-Lloyd, the author of Black Networked Resistance and a professor of film and media studies at Washington University. “We’re seeing a rise of audience participation,” she notes, including people doing “duets” on TikTok or response videos on YouTube. Everyday creators see video platforms as ways to talk back to power.

“My students were like, ‘If there’s a video over seven seconds, we’re not watching it.’”

Brianna Wiens, University of Waterloo

There’s also an increasingly sophisticated lexicon of visual styles. Today’s video creators riff on older film aesthetics to make their points. Brianna Wiens, an assistant professor of digital media and rhetoric at the University of Waterloo, says she admired how a neuroscientist used stop-motion video, a technique from the early days of film, to produce TikTok discussions of vaccines during the height of the covid-19 pandemic. Or consider the animated GIF, which channels the “zoetrope” of the 1800s, looping a short moment in time to examine over and over.

Indeed, as video becomes more woven into the vernacular of daily life, it’s both expanding and contracting in size. There are streams on Twitch where you can watch someone for hours—and viral videos where someone compresses an idea into mere seconds. Those latter ones have a particular rhetorical power because they’re so ingestible. “I was teaching a class called Digital Lives, and my students were like, ‘If there’s a video over seven seconds, we’re not watching it,’” Wiens says, laughing.

Are there dangers ahead as use of the moving image grows? Possibly. Maybe it will too powerfully reward people with the right visual and physical charisma. (Not necessarily a novel danger: Text and radio had their own versions.) More subtly, video is technologically still adolescent. It’s not yet easy to search, or to clip and paste and annotate and collate—to use video for quietly organizing our thoughts, the way we do with text. Until those tool sets emerge (and you can see that beginning), its power will be limited. Lastly, maybe the moving image will become so common and go-to that it’ll kill off print culture.

Media scholars are not terribly stressed about this final danger. New forms of media rarely kill off older ones. Indeed, as the late priest and scholar Walter Ong pointed out, creating television and radio requires writing plenty of text—all those scripts. Today’s moving-media culture is possibly even more saturated with writing. Videos on Instagram and TikTok often include artfully arranged captions, “diegetic” text commenting on the action, or data visualizations. You read while you watch; write while you shoot.

“We’re getting into all kinds of interesting hybrids and relationships,” notes Lev Manovich, a professor at the City University of New York. The tool sets for sculpting and editing video will undoubtedly improve too, perhaps using AI to help auto-edit, redact, summarize. 

One firm, Reduct, already offers a clever trick: You alter a video by editing the transcript. Snip out a sentence, and it snips out the related visuals. Public defenders use it to parse and edit police videos. They’re often knee-deep in the stuff—the advent of body cameras worn by officers has produced an ocean of footage, as Reduct’s CEO, Robert Ochshorn, tells me. 
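The underlying idea is simple enough to sketch: if each transcript segment carries start and end timestamps, deleting a sentence tells the editor which spans of footage to keep. The short Python sketch below illustrates that mapping with invented segments and timings; it is not Reduct’s actual software or API.

```python
# Toy sketch of transcript-driven video editing: each transcript segment
# carries timestamps, so removing a sentence implies removing its footage.
# Segments and timings are invented for illustration.

segments = [
    {"text": "Officer approaches the vehicle.", "start": 0.0, "end": 4.2},
    {"text": "Small talk about the weather.",   "start": 4.2, "end": 9.8},
    {"text": "Officer requests license.",       "start": 9.8, "end": 13.5},
]

def keep_spans(segments, drop_phrases):
    """Return (start, end) spans to keep after dropping matching sentences."""
    return [(s["start"], s["end"]) for s in segments
            if not any(p in s["text"] for p in drop_phrases)]

print(keep_spans(segments, ["weather"]))
# [(0.0, 4.2), (9.8, 13.5)] -- a video editor would splice these spans together
```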

Meanwhile, generative AI will make it easier to create a film out of pure imagination. This means, of course, that we’ll see a new flood of visual misinformation. We’ll need to develop a sharper culture of finding the useful amid the garbage. It took print a couple of centuries to do that, as scholars of the book will tell you—centuries during which the printing press helped spark untold war and upheaval. We’ll be living through the same process with the moving image.

So strap yourselves in. Whatever else happens, it’ll be interesting. 

Clive Thompson is the author of Coders: The Making of a New Tribe and the Remaking of the World.

Beyond gene-edited babies: the possible paths for tinkering with human evolution

In 2016, I attended a large meeting of journalists in Washington, DC. The keynote speaker was Jennifer Doudna, who just a few years before had co-invented CRISPR, a revolutionary method of changing genes that was sweeping across biology labs because it was so easy to use. With its discovery, Doudna explained, humanity had achieved the ability to change its own fundamental molecular nature. And that capability came with both possibility and danger. One of her biggest fears, she said, was “waking up one morning and reading about the first CRISPR baby”—a child with deliberately altered genes baked in from the start.  

As a journalist specializing in genetic engineering—the weirder the better—I had a different fear. A CRISPR baby would be a story of the century, and I worried some other journalist would get the scoop. Gene editing had become the biggest subject on the biotech beat, and once a team in China had altered the DNA of a monkey to introduce customized mutations, it seemed obvious that further envelope-pushing wasn’t far off. 

If anyone did create an edited baby, it would raise moral and ethical issues, among the profoundest of which, Doudna had told me, was that doing so would be “changing human evolution.” Any gene alterations made to an embryo that successfully developed into a baby would get passed on to any children of its own, via what’s known as the germline. What kind of scientist would be bold enough to try that? 

Two years and nearly 8,000 miles in an airplane seat later, I found the answer. At a hotel in Guangzhou, China, I joined a documentary film crew for a meeting with a biophysicist named He Jiankui, who appeared with a retinue of advisors. During the meeting, He was immensely gregarious and spoke excitedly about his research on embryos of mice, monkeys, and humans, and about his eventual plans to improve human health by adding beneficial genes to people’s bodies from birth. Still imagining that such a step must lie at least some way off, I asked if the technology was truly ready for such an undertaking. 

“Ready,” He said. Then, after a laden pause: “Almost ready.”

Four weeks later, I learned that he’d already done it, when I found data that He had placed online describing the genetic profiles of two gene-edited human fetuses—that is, “CRISPR babies” in gestation—as well as an explanation of his plan, which was to create humans immune to HIV. He had targeted a gene called CCR5, which in some people has a variation known to protect against HIV infection. It’s rare for numbers in a spreadsheet to make the hair on your arms stand up, although maybe some climatologists feel the same way seeing the latest Arctic temperatures. It appeared that something historic—and frightening—had already happened. In our story breaking the news that same day, I ventured that the birth of genetically tailored humans would be something between a medical breakthrough and the start of a slippery slope of human enhancement.

For his actions, He was later sentenced to three years in prison, and his scientific practices were roundly excoriated. The edits he made, on what proved to be twin girls (and a third baby, revealed later), had in fact been carelessly imposed, almost in an out-of-control fashion, according to his own data. And I was among a flock of critics—in the media and academia—who would subject He and his circle of advisors to Promethean-level torment via a daily stream of articles and exposés. Just this spring, Fyodor Urnov, a gene-editing specialist at the University of California, Berkeley, lashed out on X, calling He a scientific “pyromaniac” and comparing him to a Balrog, a demon from J.R.R. Tolkien’s The Lord of the Rings. It could seem as if He’s crime wasn’t just medical wrongdoing but daring to take the wheel of the very processes that brought you, me, and him into being. 

Futurists who write about the destiny of humankind have imagined all sorts of changes. We’ll all be given auxiliary chromosomes loaded with genetic goodies, or maybe we’ll march through life as a member of a pod of identical clones. Perhaps sex will become outdated as we reproduce exclusively through our stem cells. Or human colonists on another planet will be isolated so long that they become their own species. The thing about He’s idea, though, is that he drew it from scientific realities close at hand. Just as some gene mutations cause awful, rare diseases, others are being discovered that lend a few people the ability to resist common ones, like diabetes, heart disease, Alzheimer’s—and HIV. Such beneficial, superpower-like traits might spread to the rest of humanity, given enough time. But why wait 100,000 years for natural selection to do its job? For a few hundred dollars in chemicals, you could try to install these changes in an embryo in 10 minutes. That is, in theory, the easiest way to go about making such changes—it’s just one cell to start with. 

Editing human embryos is restricted in much of the world—and making an edited baby is flatly illegal in most countries surveyed by legal scholars. But advancing technology could render the embryo issue moot. New ways of adding CRISPR to the bodies of people already born—children and adults—could let them easily receive changes as well. Indeed, if you are curious what the human genome could look like in 125 years, it’s possible that many people will be the beneficiaries of multiple rare, but useful, gene mutations currently found in only small segments of the population. These could protect us against common diseases and infections, but eventually they could also yield frank improvements in other traits, such as height, metabolism, or even cognition. These changes would not be passed on genetically to people’s offspring, but if they were widely distributed, they too would become a form of human-directed self-evolution—easily as big a deal as the emergence of computer intelligence or the engineering of the physical world around us.

I was surprised to learn that even as He’s critics take issue with his methods, they see the basic stratagem as inevitable. When I asked Urnov, who helped coin the term “genome editing” in 2005, what the human genome could be like in, say, a century, he readily agreed that improvements using superpower genes will probably be widely introduced into adults—and embryos—as the technology to do so improves. But he warned that he doesn’t necessarily trust humanity to do things the right way. Some groups will probably obtain the health benefits before others. And commercial interests could eventually take the trend in unhelpful directions—much as algorithms keep his students’ noses pasted, unnaturally, to the screens of their mobile phones. “I would say my enthusiasm for what the human genome is going to be in 100 years is tempered by our history of a lack of moderation and wisdom,” he said. “You don’t need to be Aldous Huxley to start writing dystopias.”

Editing early

At around 10 p.m. Beijing time, He’s face flicked into view over the Tencent videoconferencing app. It was May 2024, nearly six years after I had first interviewed him, and he appeared in a loftlike space with a soaring ceiling and a wide-screen TV on a wall. Urnov had warned me not to speak with He, since it would be like asking “Bernie Madoff to opine about ethical investing.” But I wanted to speak to him, because he’s still one of the few scientists willing to promote the idea of broad improvements to humanity’s genes. 

Of course, it’s his fault everyone is so down on the idea. After his experiment, China formally made “implantation” of gene-edited human embryos into the uterus a crime. Funding sources evaporated. “He created this blowback, and it brought to a halt many people’s research. And there were not many to begin with,” says Paula Amato, a fertility doctor at Oregon Health and Science University who co-leads one of only two US teams that have ever reported editing human embryos in a lab.  “And the publicity—nobody wants to be associated with something that is considered scandalous or eugenic.”

After leaving prison in 2022, the Chinese biophysicist surprised nearly everyone by seeking to make a scientific comeback. At first, he floated ideas for DNA-based data storage and “affordable” cures for children who have muscular dystrophy. But then, in summer 2023, he posted to social media that he intended to return to research on how to change embryos with gene editing, with the caveat that “no human embryo will be implanted for pregnancy.” His new interest was a gene called APP, or amyloid precursor protein. It’s known that people who possess a very rare version, or “allele,” of this gene almost never develop Alzheimer’s disease.

In our video call, He said the APP gene is the main focus of his research now and that he is determining how to change it. The work, he says, is not being conducted on human embryos, but rather on mice and on kidney cells, using an updated form of CRISPR called base editing, which can flip individual letters of DNA without breaking the molecule. 

“We just want to expand the protective allele from small amounts of lucky people to maybe most people,” He told me. And if you made the adjustment at the moment an egg is fertilized, you would only have to change one cell in order for the change to take hold in the embryo and, eventually, everywhere in a person’s brain. Trying to edit an individual’s brain after birth “is as hard as delivering a person to the moon,” He said. “But if you deliver gene editing to an embryo, it’s as easy as driving home.” 

In the future, He said, human embryos will “obviously” be corrected for all severe genetic diseases. But they will also receive “a panel” of “perhaps 20 or 30” edits to improve health. (If you’ve seen the sci-fi film Gattaca, it takes place in a world where such touch-ups are routine—leading to stigmatization of the movie’s hero, a would-be space pilot who lacks them.) One of these would be to install the APP variant, which involves changing a single letter of DNA. Others would protect against diabetes, and maybe cancer and heart disease. He calls these proposed edits “genetic vaccines” and believes people in the future “won’t have to worry” about many of the things most likely to kill them today.  

Is He the person who will bring about this future? Last year, in what seemed to be a step toward his rehabilitation, he got a job heading a gene center at Wuchang University of Technology, a third-tier institution in Wuhan. But He said during our call that he had already left the position. He didn’t say what had caused the split but mentioned that a flurry of press coverage had “made people feel pressured.” One item, in a French financial paper, Les Echos, was titled “GMO babies: The secrets of a Chinese Frankenstein.” Now he carries out research at his own private lab, he says, with funding from Chinese and American supporters. He has early plans for a startup company. Could he tell me names and locations? “Of course not,” he said with a chuckle. 

little girl holding a snake

MICHAEL BYERS

It could be there is no lab, just a concept. But it’s a concept that is hard to dismiss. Would you give your child a gene tweak—a swap of a single genetic letter among the 3 billion that run the length of the genome—to prevent Alzheimer’s, the mind thief that’s the seventh-leading cause of death in the US? Polls find that the American public is about evenly split on the ethics of adding disease resistance traits to embryos. A sizable minority, though, would go further. A 2023 survey published in Science found that nearly 30% of people would edit an embryo if it enhanced the resulting child’s chance of attending a top-ranked college. 

The benefits of the genetic variant He claims to be working with were discovered by the Icelandic gene-hunting company deCode Genetics. Twenty-six years ago, in 1998, its founder, a doctor named Kári Stefánsson, got the green light to obtain medical records and DNA from Iceland’s citizens, allowing deCode to amass one of the first large national gene databases. Several similar large biobanks now operate, including one in the United Kingdom, which recently finished sequencing the genomes of 500,000 volunteers. These biobanks make it possible to do computerized searches to find relationships between people’s genetic makeup and real-life differences like how long they live, what diseases they get, and even how much beer they drink. The result is a statistical index of how strongly every possible difference in human DNA affects every trait that can be measured. 
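At its core, that kind of search is a statistical comparison: for a given DNA variant, how does a measured trait differ between carriers and non-carriers? The Python sketch below shows the shape of such a comparison with invented numbers; real biobank analyses involve millions of variants, enormous cohorts, and regression models that control for other factors.

```python
# Rough sketch of the kind of search a biobank enables: for one DNA variant,
# compare a measured trait between carriers and non-carriers. All numbers
# here are invented for illustration only.

from statistics import mean

people = [
    {"carrier": True,  "memory_score": 29.1},
    {"carrier": True,  "memory_score": 28.4},
    {"carrier": False, "memory_score": 24.0},
    {"carrier": False, "memory_score": 22.7},
    {"carrier": False, "memory_score": 25.3},
]

def association(people, trait):
    """Crude effect-size estimate: mean trait difference by carrier status."""
    carriers = [p[trait] for p in people if p["carrier"]]
    others = [p[trait] for p in people if not p["carrier"]]
    return mean(carriers) - mean(others)

print(f"Carrier vs. non-carrier difference: {association(people, 'memory_score'):.2f}")
```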

In 2012, deCode’s geneticists used the technique to study a tiny change in the APP gene and determined that the individuals who had it rarely developed Alzheimer’s. They otherwise seemed healthy. In fact, they seemed particularly sharp in old age and appeared to live longer, too. Lab tests confirmed that the change reduces the production of brain plaques, the abnormal clumps of protein that are a hallmark of the disease. 

“This is beginning to be about the essence of who we are as a species.”

Kári Stefánsson, founder and CEO, deCode Genetics

One way evolution works is when a small change or error appears in one baby’s DNA. If the change helps that person survive and reproduce, it will tend to become more common in the species—eventually, over many generations, even universal. This process is slow, but it’s visible to science. In 2018, for example, researchers determined that the Bajau, a group indigenous to Indonesia whose members collect food by diving, possess genetic changes associated with bigger spleens. This allows them to store more oxygenated red blood cells—an advantage in their lives. 

Even though the variation in the APP gene seems hugely beneficial, it’s a change that benefits old people, way past their reproductive years. So it’s not the kind of advantage natural selection can readily act on. But we could act on it. That is what technology-assisted evolution would look like—seizing on a variation we think is useful and spreading it. “The way, probably, that enhancement will be done will be to look at the population, look at people who have enhanced capabilities—whatever those might be,” the Israeli medical geneticist Ephrat Levy-Lahad said during a gene-editing summit last year. “You are going to be using variations that already exist in the population that you already have information on.”

One advantage of zeroing in on advantageous DNA changes that already exist in the population is that their effects are pretested. The people located by deCode were in their 80s and 90s. There didn’t seem to be anything different about them—except their unusually clear minds. Their lives—as seen from the computer screens of deCode’s biobank—served as a kind of long-term natural experiment. Yet scientists could not be fully confident placing this variant into an embryo, since the benefits or downsides might differ depending on what other genetic factors are already present, especially other Alzheimer’s risk genes. And it would be difficult to run a study to see what happens. In the case of APP, it would take 70 years for the final evidence to emerge. By that time, the scientists involved would all be dead. 

When I spoke with Stefánsson last year, he made the case both for and against altering genomes with “rare variants of large effect,” like the change in APP. “All of us would like to keep our marbles until we die. There is no question about it. And if you could, by pushing a button, install the kind of protection people with this mutation have, that would be desirable,” he said. But even if the technology to make this edit before birth exists, he says, the risks of doing so seem almost impossible to gauge: “You are not just affecting the person, but all their descendants forever. These are mutations that would allow for further selection and further evolution, so this is beginning to be about the essence of who we are as a species.”

Editing everyone

Some genetic engineers believe that editing embryos, though in theory easy to do, will always be held back by these grave uncertainties. Instead, they say, DNA editing in living adults could become easy enough to be used not only to correct rare diseases but to add enhanced capabilities to those who seek them. If that happens, editing for improvement could spread just as quickly as any consumer technology or medical fad. “I don’t think it’s going to be germline,” says George Church, a Harvard geneticist often sought out for his prognostications. “The 8 billion of us who are alive kind of constitute the marketplace.” For several years, Church has been circulating what he calls “my famous, or infamous, table of enhancements.” It’s a tally of gene variants that lend people superpowers, including APP and another that leads to extra-hard bones, which was found in a family that complained of not being able to stay afloat in swimming pools. The table is infamous because some believe Church’s inclusion of the HIV-protective CCR5 variant inspired He’s effort to edit it into the CRISPR babies.

Church believes novel gene treatments for very serious diseases, once proven, will start leading the way toward enhancements and improvements to people already born. “You’d constantly be tweaking and getting feedback,” he says—something that’s hard to do with the germline, since humans take so long to grow up. Changes to adult bodies would not be passed down, but Church thinks they could easily count as a form of heredity. He notes that railroads, eyeglasses, cell phones—and the knowledge of how to make and use all these technologies—are already all transmitted between generations. “We’re clearly inheriting even things that are inorganic,” he says. 

The biotechnology industry is already finding ways to emulate the effects of rare, beneficial variants. A new category of heart drugs, for instance, mimics the effect of a rare variation in a gene, called PCSK9, that helps maintain cholesterol levels. The variation, initially discovered in a few people in the US and Zimbabwe, blocks the gene’s activity and gives them ultra-low cholesterol levels for life. The drugs, taken every few weeks or months, work by blocking the PCSK9 protein. One biotech company, though, has started trying to edit the DNA of people’s liver cells (the site of cholesterol metabolism) to introduce the same effect permanently. 

For now, gene editing of adult bodies is still challenging and is held back by the difficulty of “delivering” the CRISPR instructions to thousands, or even billions of cells—often using viruses to carry the payloads. Organs like the brain and muscles are hard to access, and the treatments can be ordeals. Fatalities in studies aren’t unheard-of. But biotech companies are pouring dollars into new, sleeker ways to deliver CRISPR to hard-to-reach places. Some are designing special viruses that can home in on specific types of cells. Others are adopting nanoparticles similar to those used in the covid-19 vaccines, with the idea of introducing editors easily, and cheaply, via a shot in the arm. 

At the Innovative Genomics Institute, a center established by Doudna in Berkeley, California, researchers anticipate that as delivery improves, they will be able to create a kind of CRISPR conveyor belt that, with a few clicks of a mouse, allows doctors to design gene-editing treatments for any serious inherited condition that afflicts children, including immune deficiencies so uncommon that no company will take them on. “This is the trend in my field. We can capitalize on human genetics quite quickly, and the scope of the editable human will rapidly expand,” says Urnov, who works at the institute. “We know that already, today—and forget 2124, this is in 2024—we can build enough CRISPR for the entire planet. I really, really think that [this idea of] gene editing in a syringe will grow. And as it does, we’re going to start to face very clearly the question of how we equitably distribute these resources.” 

For now, gene-editing interventions are so complex and costly that only people in wealthy countries are receiving them. The first such therapy to get FDA approval, a treatment for sickle-cell disease, is priced at over $2 million and requires a lengthy hospital stay. Because it’s so difficult to administer, it’s not yet being offered in most of Africa, even though that is where sickle-cell disease is most common. Such disparities are now propelling efforts to greatly simplify gene editing, including a project jointly paid for by the Gates Foundation and the National Institutes of Health that aims to design “shot in the arm” CRISPR, potentially making cures scalable and “accessible to all.” A gene editor built along the lines of the covid-19 vaccine might cost only $1,000. The Gates Foundation sees the technology as a way to widely cure both sickle-cell and HIV—an “unmet need” in Africa, it says. To do that, the foundation is considering introducing into people’s bone marrow the exact HIV-defeating genetic change that He tried to install in embryos. 

Then there’s the risk that gene terrorists, or governments, could change people’s DNA without their permission or knowledge.

Scientists can foresee great benefits ahead—even a “final frontier of molecular liberty,” as Christopher Mason, a “space geneticist” at Weill Cornell Medicine in New York, characterizes it. Mason works with newer types of gene editors that can turn genes on or off temporarily. He is using these in his lab to make cells resistant to radiation damage. The technology could be helpful to astronauts or, he says, for a weekend of “recreational genomics”—say, boosting your repair genes in preparation to visit the site of the Chernobyl power plant. The technique is “getting to be, I actually think it is, a euphoric application of genetic technologies,” says Mason. “We can say, hey, find a spot on the genome and flip a light switch on or off on any given gene to control its expression at a whim.”  

Easy delivery of gene editors to adult bodies could give rise to policy questions just as urgent as the ones raised by the CRISPR babies. Whether we encourage genetic enhancement—in particular, free-market genome upgrades—is one of them. Several online health influencers have already been touting an unsanctioned gene therapy, offered in Honduras, that its creators claim increases muscle mass. Another risk: If changing people’s DNA gets easy enough, gene terrorists or governments could do it without their permission or knowledge. One genetic treatment for a skin disease, approved in the US last year, is formulated as a cream—the first rub-on gene therapy (though not a gene editor). 

Some scientists believe new delivery tools should be kept purposefully complex and cumbersome, so that only experts can use them—a biological version of “security through obscurity.” But that’s not likely to happen. “Building a gene editor to make these changes is no longer, you know, the kind of technology that’s in the realm of 100 people who can do it. This is out there,” says Urnov. “And as delivery improves, I don’t know how we will be able to regulate that.”

man sitting and reading with man behind him

MICHAEL BYERS

In our conversation, Urnov frequently returned to that list of superpowers—genetic variants that make some people outliers in one way or another. There is a mutation that allows people to get by on five hours of sleep a night, with no ill effects. There is a woman in Scotland whose genetic peculiarity means she feels no pain and is perpetually happy, though also forgetful. Then there is Eero Mäntyranta, the cross-country ski champion who won three medals at the 1964 Winter Olympics and who turned out to have an inordinate number of red blood cells thanks to an alteration in a gene called the EPO receptor. It’s basically a blueprint for anyone seeking to join the Enhanced Games, the libertarian plan for a pro-doping international sports competition that critics call “borderline criminal” but which has the backing of billionaire Peter Thiel, among others. 

All these are possibilities for the future of the human genome, and we won’t even necessarily need to change embryos to get there. Some researchers even expect that with some yet-to-be-conceived technology, updating a person’s DNA could become as simple as sending a document via Wi-Fi, with today’s viruses or nanoparticles becoming anachronisms like floppy disks. I asked Church for his prediction about where gene-editing technology is going in the long term. “Eventually you’d get shot up with a whole bunch of things when you’re born, or it could even be introduced during pregnancy,” he said. “You’d have all the advantages without the disadvantages of being stuck with heritable changes.” 

And that will be evolution too.

Want to understand the future of technology? Take a look at this one obscure metal.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

On a sunny morning in late spring, I found myself carefully examining an array of somewhat unassuming-looking rocks at the American Museum of Natural History. 

I’ve gotten to see some cutting-edge technologies as a reporter, from high-tech water treatment plants to test nuclear reactors. Peering at samples of dusty reddish monazite and speckled bastnäsite, I saw the potential for innovation there, too. That’s because all the minerals spread out across the desk contain neodymium, a rare earth metal that’s used today in all sorts of devices, from speakers to wind turbines. And it’s likely going to become even more crucial in the future. 

By the time I came to the museum to see some neodymium for myself, I’d been thinking (or perhaps obsessing) about the metal for months—basically since I’d started reporting a story for our upcoming print issue that is finally out online. The story takes a look at what challenges we’ll face with materials for the next century, and neodymium is center stage. Let’s take a look at why I spent so long thinking about this obscure metal, and why I think it reveals so much about the future of technology. 

In the new issue of our print magazine, MIT Technology Review is celebrating its 125th anniversary. But rather than look back to our 1899 founding, the team decided to look forward to the next 125 years. 

I’ve been fascinated with topics like mining, recycling, and alternative technologies since I’ve been reporting on climate. So when I started thinking about the distant future, my mind immediately went to materials. What kind of stuff will we need? Will there be enough of it? How does tech advancement change the picture?

Zooming out to the 2100s and beyond changed the stakes and altered how I thought about some of the familiar topics I’ve been reporting on for years. 

For example, we have enough of the stuff we need to power our world with renewables. But in theory, there is some future point at which we could burn through our existing resources. What happens then? As it turns out, there’s more uncertainty about the amount of resources available than you might imagine. And we can learn a lot from previous efforts to project when the supply of fossil fuels will begin to run out, a concept known as peak oil. 

We can set up systems to reuse and recycle the metals that are most important for our future. These facilities could eventually help us mine less and make material supply steadier and even cheaper. But what happens when the technology these facilities are designed to recycle inevitably changes, possibly rendering old setups obsolete? Predicting what materials will be important, and adjusting efforts to make and reuse them, is complicated to say the least. 

To try to answer these massive questions, I took a careful look at one particular metal: neodymium. It’s a silvery-white rare earth metal, central to powerful magnets that are at the heart of many different technologies, both in the energy sector and beyond. 

Neodymium can stand in for many of the challenges and opportunities we face with materials in the coming century. We’re going to need a lot more of it in the near future, and we could run into some supply constraints as we race to mine enough to meet our needs. It’s possible to recycle the metal to cut down on the extraction needed in the future, and some companies are already trying to set up the infrastructure to do so. 

The world is well on its way to adapting to conditions that are a lot more neodymium-centric. But at the same time, efforts are already underway to build technologies that wouldn’t need neodymium at all. If companies are able to work out an alternative, it could totally flip all our problems, as well as efforts to solve them, upside down. 

Advances in technology can shift the materials we need, and our material demands can push technology to develop in turn. It’s a loop, one that we need to attempt to understand and untangle as we move forward. I hope you’ll read my attempt to start doing that in my feature story here.


Now read the rest of The Spark

Related reading

For a more immediate look at the race to produce rare earth metals, check out this feature story by Mureji Fatunde from January. 

I started thinking more deeply about material demand when I was reporting stories about recycling, including this 2023 feature on the battery recycling company Redwood Materials. 

For one example of how companies are trying to develop new technologies that’ll change the materials we need in the future, check out this story about rare-earth-free magnets from earlier this year. 

Another thing

“If we rely on hope, we give up agency. And that may be seductive, but it’s also surrender.”

So writes Lydia Millet, author of over a dozen books, in a new essay about the emotions behind fighting for a future beyond climate change. It was just published online this week. It’s also featured in our upcoming print issue, and I’d highly recommend it. 

Keeping up with climate  

For a look inside what it’s really like to drive a hydrogen car, this reporter rented one and took it on a road trip, speaking to drivers along the way. (The Verge)

→ Here’s why electric vehicles are beating out hydrogen-powered ones in the race to clean up transportation. (MIT Technology Review)

As temperatures climb, we’ve got a hot steel problem on our hands. Heat can make steel, as well as other materials like concrete, expand or warp, causing problems that range from slowing down trains to reducing the amount of electricity that power lines can carry. (The Atlantic)

Oakland is the first city in the US running all-electric school buses. And the vehicles aren’t only ferrying kids around; they’re also able to use their batteries to help the grid when it’s needed. (Electrek)

Form Energy plans to build the largest battery installation in the world in Maine. The system, which will use the company’s novel iron-air chemistry, will be capable of storing 8,500 megawatt-hours’ worth of energy. (Canary Media)

→ We named Form one of our 15 Climate Tech companies to watch in 2023. (MIT Technology Review)

In one of the more interesting uses I’ve seen for electric vehicles, Brussels has replaced horse-drawn carriages with battery-powered ones. They look a little like old-timey cars, and operators say business hasn’t slowed down since the switch. (New York Times)

Homeowners are cashing in on billions of dollars in tax credits in the US. The money, which rewards use of technologies that help make homes more energy efficient and cut emissions, is disproportionately going to wealthier households. (E&E News)

Airlines are making big promises about using new jet fuels that can help cut emissions. Much of the industry aims to reach 10% alternative fuel use by the end of the decade. Actual rates hit 0.17% in 2023. (Bloomberg)

Solar farms can’t get enough sheep—they’re great landscaping partners. Soon, 6,000 sheep will be helping keep the grass in check between panels in what will be the largest solar grazing project in the US. (Canary Media)

We finally have a definition for open-source AI

Open-source AI is everywhere right now. The problem is, no one agrees on what it actually is. Now we may finally have an answer. The Open Source Initiative (OSI), the self-appointed arbiter of what it means to be open source, has released a new definition, which it hopes will help lawmakers develop regulations to protect consumers from AI risks. 

Though OSI has published much about what constitutes open-source technology in other fields, this marks its first attempt to define the term for AI models. It asked a 70-person group of researchers, lawyers, policymakers, and activists, as well as representatives from big tech companies like Meta, Google, and Amazon, to come up with the working definition. 

According to the group, an open-source AI system can be used for any purpose without securing permission, and researchers should be able to inspect its components and study how the system works.

It should also be possible to modify the system for any purpose—including to change its output—and to share it with others to use, with or without modifications, for any purpose. In addition, the standard attempts to define a level of transparency for a given model’s training data, source code, and weights. 

The previous lack of an open-source standard presented a problem. We know that the decisions of OpenAI and Anthropic to keep their models, data sets, and algorithms secret make their AI closed source. But some experts argue that Meta’s and Google’s freely accessible models, which anyone can inspect and adapt, aren’t truly open source either, because licenses restrict what users can do with them and because the training data sets aren’t made public. Meta, Google, and OpenAI were contacted for their responses to the new definition but did not reply before publication.

“Companies have been known to misuse the term when marketing their models,” says Avijit Ghosh, an applied policy researcher at Hugging Face, a platform for building and sharing AI models. Describing models as open source may cause them to be perceived as more trustworthy, even if researchers aren’t able to independently investigate whether they really are open source.

Ayah Bdeir, a senior advisor to Mozilla and a participant in OSI’s process, says certain parts of the open-source definition were relatively easy to agree upon, including the need to reveal model weights (the parameters that help determine how an AI model generates an output). Other parts of the deliberations were more contentious, particularly the question of how public training data should be.

The lack of transparency about where training data comes from has led to innumerable lawsuits against big AI companies, from makers of large language models like OpenAI to music generators like Suno, which do not disclose much about their training sets beyond saying they contain “publicly accessible information.” In response, some advocates say that open-source models should disclose all their training sets, a standard that Bdeir says would be difficult to enforce because of issues like copyright and data ownership. 

Ultimately, the new definition requires that open-source models provide information about the training data to the extent that “a skilled person can recreate a substantially equivalent system using the same or similar data.” It’s not a blanket requirement to share all training data sets, but it also goes further than what many proprietary models or even ostensibly open-source models do today. It’s a compromise.

“Insisting on an ideologically pristine kind of gold standard that actually will not effectively be met by anybody ends up backfiring,” Bdeir says. She adds that OSI is planning some sort of enforcement mechanism, which will flag models that are described as open source but do not meet its definition. It also plans to release a list of AI models that do meet the new definition. Though none are confirmed, Bdeir told MIT Technology Review that the handful of models expected to land on the list are relatively small names, including Pythia by EleutherAI, OLMo by Ai2, and models from the open-source collective LLM360.

AI could be a game changer for people with disabilities

As a lifelong disabled person who constantly copes with multiple conditions, I have a natural tendency to view emerging technologies with skepticism. Most new things are built for the majority of people—in this case, people without disabilities—and the truth of the matter is there’s no guarantee I’ll have access to them.

There are certainly exceptions to the rule. A prime example is the iPhone. Although discrete accessibility software did not appear until the device’s third-generation model, in 2009, earlier generations were still revolutionary for me. After I’d spent years using flip phones with postage-stamp-size screens and hard-to-press buttons, the fact that the original iPhone had a relatively large screen and a touch-based UI was accessibility unto itself. 

AI could make these kinds of jumps in accessibility more common across a wide range of technologies. But you probably haven’t heard much about that possibility. While the New York Times sues OpenAI over ChatGPT’s scraping of its content and everyone ruminates over the ethics of AI tools, there seems to be less consideration of the good ChatGPT can do for people of various abilities. For someone with visual and motor delays, using ChatGPT to do research can be a lifesaver. Instead of trying to manage a dozen browser tabs with Google searches and other pertinent information, you can have ChatGPT collate everything into one space. Likewise, it’s highly plausible that artists who can’t draw in the conventional manner could use voice prompts to have Midjourney or Adobe Firefly create what they’re thinking of. That might be the only way for such a person to indulge an artistic passion. 


Of course, data needs to be vetted for accuracy and gathered with permission—there are ample reasons to be wary of AI’s potential to serve up wrong or potentially harmful, ableist information about the disabled community. Still, it feels unappreciated (and underreported) that AI-based software can truly be an assistive technology, enabling people to do things they otherwise would be excluded from. AI could give a disabled person agency and autonomy. That’s the whole point of accessibility—freeing people in a society not designed for their needs.

The ability to automatically generate video captions and image descriptions provides additional examples of how automation can make computers and productivity technology more accessible. And more broadly, it’s hard not to be enthused about ever-burgeoning technologies like autonomous vehicles. Most tech journalists and other industry watchers are interested in self-driving cars for the sheer novelty, but the reality is the AI software behind vehicles like Waymo’s fleet of Jaguar SUVs is quite literally enabling many in the disability community to exert more agency over their transport. For those who, like me, are blind or have low vision, the ability to summon a ride on demand and go anywhere without imposing on anyone else for help is a huge deal. It’s not hard to envision a future in which, as the technology matures, autonomous vehicles are normalized to the point where blind people could buy their own cars. 

At the same time, AI is enabling serious advances in technology for people with limb differences. How exciting will it be, decades from now, to have synthetic arms and legs, hands or feet, that more or less function like the real things? Similarly, the team at Boston-based Tatum Robotics is combining hardware with AI to make communication more accessible for deaf-blind people: A robotic hand forms hand signs, or words in American Sign Language that can be read tactilely against the palm. Like autonomous vehicles, these applications have enormous potential to positively influence the everyday lives of countless people. All this goes far beyond mere chatbots.

It should be noted that disabled people historically have been among the earliest adopters of new technologies. AI is no different, yet public discourse routinely fails to meaningfully account for this. After all, AI plays to a computer’s greatest strength: automation. As time marches on, the way AI grows and evolves will be unmistakably and indelibly shaped by disabled people and our myriad needs and tolerances. It will offer us more access to information, to productivity, and most important, to society writ large.

Steven Aquino is a freelance tech journalist covering accessibility and assistive technologies. He is based in San Francisco.

Tech that measures our brainwaves is 100 years old. How will we be using it 100 years from now?

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

This week, we’re acknowledging a special birthday. It’s 100 years since EEG (electroencephalography) was first used to measure electrical activity in a person’s brain. The finding was revolutionary. It helped people understand that epilepsy was a neurological disorder as opposed to a personality trait, for one thing (yes, really).

The fundamentals of EEG have not changed much over the last century—scientists and doctors still put electrodes on people’s heads to try to work out what’s going on inside their brains. But we’ve been able to do a lot more with the information that’s collected.

We’ve been able to use EEG to learn more about how we think, remember, and solve problems. EEG has been used to diagnose brain and hearing disorders, explore how conscious a person might be, and even allow people to control devices like computers, wheelchairs, and drones.

But an anniversary is a good time to think about the future. You might have noticed that my colleagues and I are currently celebrating 125 years of MIT Technology Review by pondering the technologies the next 125 years might bring. What will EEG allow us to do 100 years from now?

First, a quick overview of what EEG is and how it works. EEG involves placing electrodes on the top of someone’s head, collecting electrical signals from brainwaves, and feeding these to a computer for analysis. Today’s devices often resemble swimming caps. They’re very cheap compared with other brain imaging technologies, such as fMRI scanners, and they’re pretty small and portable.
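
To make that analysis step a bit more concrete, here’s a minimal, hypothetical sketch, in Python with NumPy and SciPy, of one of the simplest computations an EEG pipeline can run: estimating how much alpha-band (8 to 12 Hz) activity is present in a single channel. The sampling rate, the simulated signal, and the band limits are assumed values for illustration, not drawn from any study mentioned here.

```python
# Minimal illustrative sketch (assumed values throughout): estimate the
# alpha-band (8-12 Hz) power of a single, simulated EEG channel.
import numpy as np
from scipy.signal import welch

fs = 250                      # sampling rate in Hz (typical for EEG headsets)
t = np.arange(0, 10, 1 / fs)  # ten seconds of data

# Simulated channel: a 10 Hz "alpha" rhythm buried in noise (in volts)
rng = np.random.default_rng(0)
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 10e-6 * rng.standard_normal(t.size)

# Welch's method estimates the power spectral density of the signal
freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)

# Sum the PSD over the alpha band to get the band power
alpha = (freqs >= 8) & (freqs <= 12)
alpha_power = psd[alpha].sum() * (freqs[1] - freqs[0])
print(f"Alpha-band power: {alpha_power:.2e} V^2")
```

Real pipelines add artifact rejection, dozens of channels, and far more sophisticated statistics, but band power like this underlies a surprising amount of both clinical and consumer EEG.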

The first person to use EEG in people was Hans Berger, a German psychiatrist who was fascinated by the idea of telepathy. Berger developed EEG as a tool to measure “psychic energy,” and he carried out his early research—much of it on his teenage son—in secret, says Faisal Mushtaq, a cognitive neuroscientist at the University of Leeds in the UK. Berger was, and remains, a controversial figure owing to his unclear links with the Nazi regime, Mushtaq tells me.

But EEG went on to take the neuroscience world by storm. It has become a staple of neuroscience labs, where it can be used on people of all ages, even newborns. Neuroscientists use EEG to explore how babies learn and think, and even what makes them laugh. In my own reporting, I’ve covered the use of EEG to understand the phenomenon of lucid dreaming, to reveal how our memories are filed away during sleep, and to allow people to turn on the TV by thought alone.   

EEG can also serve as a portal into the minds of people who are otherwise unable to communicate. It has been used to find signs of consciousness in people with unresponsive wakefulness syndrome (previously called a “vegetative state”). The technology has also allowed people paralyzed with amyotrophic lateral sclerosis (ALS) to communicate by thought and tell their family members they are happy.

So where do we go from here? Mushtaq, along with Pedro Valdes-Sosa at the University of Electronic Science and Technology of China in Chengdu and their colleagues, put the question to 500 people who work with EEG, including neuroscientists, clinical neurophysiologists, and brain surgeons. Specifically, with the help of ChatGPT, the team generated a list of predictions, which ranged from the very likely to the somewhat fanciful. Each of the 500 survey respondents was asked to estimate when, if at all, each prediction was likely to pan out.  

Some of the soonest breakthroughs will be in sleep analysis, according to the respondents. EEG is already used to diagnose and monitor sleep disorders—but this is set to become routine practice in the next decade. Consumer EEG is also likely to take off in the near future, potentially giving many of us the opportunity to learn more about our own brain activity and how it corresponds with our well-being. “Perhaps it’s integrated into a sort of baseball cap that you wear as you walk around, and it’s connected to your smartphone,” says Mushtaq. EEG caps like these have already been trialed on employees in China and used to monitor fatigue in truck drivers and mining workers, for example.

For the time being, EEG communication is limited to the lab or hospital, where studies focus on the technology’s potential to help people who are paralyzed, or who have disorders of consciousness. But that is likely to change in the coming years, once more clinical trials have been completed. Survey respondents think that EEG could become a primary tool of communication for individuals like these in the next 20 years or so.

At the other end of the scale is what Mushtaq calls the “more fanciful” application—the idea of using EEG to read people’s thoughts, memories, and even dreams.

Mushtaq thinks this is a “relatively crazy” prediction—one that’s a long, long way from coming to pass considering we don’t yet have a clear picture of how and where our memories are formed. But it’s not completely science fiction, and some respondents predict the technology could be with us in around 60 years.

Artificial intelligence will probably help neuroscientists squeeze more information from EEG recordings by identifying hidden patterns in brain activity. And it is already being used to turn a person’s thoughts into written words, albeit with limited accuracy. “We’re on the precipice of this AI revolution,” says Mushtaq.

These kinds of advances will raise questions over our right to mental privacy and how we can protect our thoughts. I talked this over with Nita Farahany, a futurist and legal ethicist at Duke University in Durham, North Carolina, last year. She told me that while brain data itself is not thought, it can be used to make inferences about what a person is thinking or feeling. “The only person who has access to your brain data right now is you, and it is only analyzed in the internal software of your mind,” she said. “But once you put a device on your head … you’re immediately sharing that data with whoever the device manufacturer is, and whoever is offering the platform.”

Valdes-Sosa is optimistic about the future of EEG. Its low cost, portability, and ease of use make the technology a prime candidate for use in poor countries with limited resources, he says; he has been using it in his research since 1969. (You can see what his setup looked like in 1970 in the image below!) EEG should be used to monitor and improve brain health around the world, he says: “It’s difficult … but I think it could happen in the future.” 

[Photo: Two medical professionals with an EEG machine in the 1970s. Credit: Pedro Valdes-Sosa]

Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

You can read the full interview with Nita Farahany, in which she describes some decidedly creepy uses of brain data, here.

Ross Compton’s heart data was used against him when he was accused of burning down his home in Ohio in 2016. Brain data could be used in a similar way. One person has already had to hand over recordings from a brain implant to law enforcement officials after being accused of assaulting a police officer. (It turned out that person was actually having a seizure at the time.) I looked at some of the other ways your brain data could be used against you in a previous edition of The Checkup.

Teeny-tiny versions of EEG caps have been used to measure electrical activity in brain organoids (clumps of neurons that are meant to represent a full brain), as my colleague Rhiannon Williams reported a couple of years ago.

EEG has also been used to create a “brain-to-brain network” that allows three people to collaborate on a game of Tetris by thought alone.

Some neuroscientists are using EEG to search for signs of consciousness in people who seem completely unresponsive. One team found such signs in a 21-year-old woman who had experienced a traumatic brain injury. “Every clinical diagnostic test, experimental and established, showed no signs of consciousness,” her neurophysiologist told MIT Technology Review. After a test that involved EEG found signs of consciousness, the neurophysiologist told rehabilitation staff to “search everywhere and find her!” They did, about a month later. With physical and drug therapy, she learned to move her fingers to answer simple questions.

From around the web

Food waste is a problem. This Japanese company is fermenting it to create sustainable animal feed. In case you were wondering, the food processing plant smells like a smoothie, and the feed itself tastes like sour yogurt. (BBC Future)

The pharmaceutical company Gilead Sciences is accused of “patent hopping”—having dragged its feet to bring a safer HIV treatment to market while thousands of people took a harmful one. The company should be held accountable, argues a cofounder of PrEP4All, an advocacy organization promoting a national HIV prevention plan. (STAT)

Anti-suicide nets under San Francisco’s Golden Gate Bridge are already saving lives, perhaps by acting as a deterrent. (The San Francisco Standard)

Genetic screening of newborn babies could help identify treatable diseases early in life. Should every baby be screened as part of a national program? (Nature Medicine)

Is “race science”—which, it’s worth pointing out, is nothing but pseudoscience—on the rise, again? The far right’s references to race and IQ make it seem that way. (The Atlantic)

As part of our upcoming magazine issue celebrating 125 years of MIT Technology Review and looking ahead to the next 125, my colleague Antonio Regalado explores how the gene-editing tool CRISPR might influence the future of human evolution. (MIT Technology Review)

How we could turn plastic waste into food

In 2019, an agency within the U.S. Department of Defense released a call for research projects to help the military deal with the copious amount of plastic waste generated when troops are sent to work in remote locations or disaster zones. The agency wanted a system that could convert food wrappers and water bottles, among other things, into usable products, such as fuel and rations. The system needed to be small enough to fit in a Humvee and capable of running on little energy. It also needed to harness the power of plastic-eating microbes.

“When we started this project four years ago, the ideas were there. And in theory, it made sense,” said Stephen Techtmann, a microbiologist at Michigan Technological University, who leads one of the three research groups receiving funding. Nevertheless, he said, in the beginning, the effort “felt a lot more science-fiction than really something that would work.”

[Photo: In one reactor, shown at a recent MTU demonstration, deconstructed plastics are subjected to high heat in the absence of oxygen, a process called pyrolysis. Credit: Kaden Staley/Michigan Technological University]

That uncertainty was key. The Defense Advanced Research Projects Agency, or DARPA, supports high-risk, high-reward projects. This means there’s a good chance that any individual effort will end in failure. But when a project does succeed, it has the potential to be a true scientific breakthrough. “Our goal is to go from disbelief, like, ‘You’re kidding me. You want to do what?’ to ‘You know, that might be actually feasible,’” said Leonard Tender, a program manager at DARPA who is overseeing the plastic waste projects.

The problems with plastic production and disposal are well known. According to the United Nations Environment Program, the world creates about 440 million tons of plastic waste per year. Much of it ends up in landfills or in the ocean, where microplastics, plastic pellets, and plastic bags pose a threat to wildlife. Many governments and experts agree that solving the problem will require reducing production, and some countries and U.S. states have additionally introduced policies to encourage recycling.

For years, scientists have also been experimenting with various species of plastic-eating bacteria. But DARPA is taking a slightly different approach in seeking a compact and mobile solution that uses plastic to create something else entirely: food for humans.


The goal, Techtmann hastens to add, is not to feed people plastic. Rather, the hope is that the plastic-devouring microbes in his system will themselves prove fit for human consumption. While Techtmann believes most of the project will be ready in a year or two, it’s this food step that could take longer. His team is currently doing toxicity testing, and then they will submit their results to the Food and Drug Administration for review. Even if all that goes smoothly, an additional challenge awaits. There’s an ick factor, said Techtmann, “that I think would have to be overcome.”

The military isn’t the only entity working to turn microbes into nutrition. From Korea to Finland, a small number of researchers, as well as some companies, are exploring whether microorganisms might one day help feed the world’s growing population.


According to Tender, DARPA’s call for proposals was aimed at solving two problems at once. First, the agency hoped to reduce what he called supply-chain vulnerability: During war, the military needs to transport supplies to troops in remote locations, which creates a safety risk for people in the vehicle. Additionally, the agency wanted to stop using hazardous burn pits as a means of dealing with plastic waste. “Getting those waste products off of those sites responsibly is a huge lift,” Tender said.

[Photo: A research engineer working on the MTU project takes a raw sample from the pyrolysis reactor; the sample can be upcycled into fuels and lubricants. Credit: Kaden Staley/Michigan Technological University]

The Michigan Tech system begins with a mechanical shredder, which reduces the plastic to small shards that then move into a reactor, where they soak in ammonium hydroxide under high heat. Some plastics, such as PET, which is commonly used to make disposable water bottles, break down at this point. Other plastics used in military food packaging — namely polyethylene and polypropylene — are passed along to another reactor, where they are subject to much higher heat and an absence of oxygen.

Under these conditions, the polyethylene and polypropylene are converted into compounds that can be upcycled into fuels and lubricants. David Shonnard, a chemical engineer at Michigan Tech who oversaw this component of the project, has developed a startup company called Resurgent Innovation to commercialize some of the technology. (Other members of the research team, said Shonnard, are pursuing additional patents related to other parts of the system.)

After the PET has broken down in the ammonium hydroxide, the liquid is moved to another reactor, where it is consumed by a colony of microbes. Techtmann initially thought he would need to go to a highly contaminated environment to find bacteria capable of breaking down the deconstructed plastic. But as it turned out, bacteria from compost piles worked really well. This may be because the deconstructed plastic that enters the reactor has a similar molecular structure to some plant material compounds, he said. So the bacteria that would otherwise eat plants can perhaps instead draw their energy from the plastic.

[Photo: Materials for the MTU project at a recent demonstration. Before being placed in a reactor, plastic feedstocks (bottom row) are mechanically shredded into small pieces. Credit: Kaden Staley/Michigan Technological University]

After the bacteria consume the plastic, the microbes are then dried into a powder that smells a bit like nutritional yeast and has a balance of fats, carbohydrates, and proteins, said Techtmann.
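
To keep the flow straight, here is a purely illustrative sketch of the routing described above. The stage names and the code are my own shorthand, not MTU’s or DARPA’s actual control software: PET passes through the ammonium hydroxide step to the microbial reactor, while polyethylene and polypropylene are sent to pyrolysis and upcycled into fuels and lubricants.

```python
# Illustrative routing sketch only; stage names are hypothetical shorthand.
from collections import defaultdict

ROUTES = {
    "PET": ["shredder", "ammonium_hydroxide_reactor", "microbial_reactor", "dried_biomass"],
    "polyethylene": ["shredder", "pyrolysis_reactor", "fuels_and_lubricants"],
    "polypropylene": ["shredder", "pyrolysis_reactor", "fuels_and_lubricants"],
}

def route(waste_items):
    """Group incoming plastic items by the end product of their route."""
    outputs = defaultdict(list)
    for item, plastic_type in waste_items:
        path = ROUTES.get(plastic_type)
        if path is None:
            outputs["unprocessed"].append(item)  # e.g., PVC isn't handled here
        else:
            outputs[path[-1]].append(item)
    return dict(outputs)

print(route([("water bottle", "PET"), ("MRE pouch", "polypropylene")]))
```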

Research into edible microorganisms dates back at least 60 years, but the body of evidence is decidedly small. (One review estimated that since 1961, an average of seven papers have been published per year.) Still, researchers in the field say there are good reasons for countries to consider microbes as a food source. Among other things, they are rich in protein, wrote Sang Yup Lee, a bioengineer and senior vice president for research at Korea Advanced Institute of Science and Technology, in an email to Undark. Lee and others have noted that growing microbes requires less land and water than conventional agriculture. Therefore, they might prove to be a more sustainable source of nutrition, particularly as the human population grows.

[Photo: The product from the microbe reactor is collected in a glass jar; the microbes can be dried into a powder for human consumption once regulators deem them safe.]

Lee reviewed a paper describing the microbial portion of the Michigan Tech project and said that the group’s plans are feasible. But he pointed out a significant challenge: At the moment, only certain microorganisms are considered safe to eat, namely “those we have been eating through fermented food and beverages, such as lactic acid bacteria, bacillus, some yeasts.” But these don’t degrade plastics.


Before using the plastic-eating microbes as food for humans, the research team will submit evidence to regulators indicating that the substance is safe. Joshua Pearce, an electrical engineer at Western University in Ontario, Canada, performed the initial toxicology screening, breaking the microbes down into smaller pieces, which were compared against known toxins.

“We’re pretty sure there’s nothing bad in there,” said Pearce. He added that the microbes have also been fed to C. elegans roundworms without apparent ill effects, and the team is currently looking at how rats do when they consume the microbes over the longer term. If the rats do well, the next step would be to submit data to the Food and Drug Administration for review.


At least a handful of companies are in various stages of commercializing new varieties of edible microbes. A Finnish startup, Solar Foods, for example, has taken a bacterium found in nature and created a powdery product with a mustard brown hue that has been approved for use in Singapore. In an email to Undark, chief experience officer Laura Sinisalo said that the company has applied for approval in the E.U. and the U.K., as well as in the U.S., where it hopes to enter the market by the end of this year.

Even if the plastic-eating microbes turn out to be safe for human consumption, Techtmann said, the public might still balk at the prospect of eating something nourished on plastic waste. For this reason, he said, this particular group of microbes might prove most useful on remote military bases or during disaster relief, where it could be consumed short-term, to help people survive.

“I think there’s a bit less of a concern about the ick factor,” said Techtmann, “if it’s really just, ‘This is going to keep me alive for another day or two.’”

This article was originally published on Undark. Read the original article.

A new system lets robots sense human touch without artificial skin

Even the most capable robots aren’t great at sensing human touch; you typically need a computer science degree or at least a tablet to interact with them effectively. That may change, thanks to robots that can now sense and interpret touch without being covered in high-tech artificial skin. It’s a significant step toward robots that can interact more intuitively with humans. 

To understand the new approach, led by the German Aerospace Center and published today in Science Robotics, consider the two distinct ways our own bodies sense touch. If you hold your left palm facing up and press lightly on your left pinky finger, you may first recognize that touch through the skin of your fingertip. That makes sense: you have thousands of receptors on your hands and fingers alone. Roboticists often try to replicate that blanket of sensors for robots through artificial skins, but these can be expensive and ineffective at withstanding impacts or harsh environments.

But if you press harder, you may notice a second way of sensing the touch: through your knuckles and other joints. That sensation (a feeling of torque, to use the robotics jargon) is exactly what the researchers have re-created in their new system.

Their robotic arm contains six sensors, each of which can register even incredibly small amounts of pressure against any section of the device. After precisely measuring the amount and angle of that force, a series of algorithms can then map where a person is touching the robot and analyze what exactly they’re trying to communicate. For example, a person could draw letters or numbers anywhere on the robotic arm’s surface with a finger, and the robot could interpret directions from those movements. Any part of the robot could also be used as a virtual button.
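
The team’s algorithms aren’t spelled out in this article, but the relation they build on is standard robotics: a force applied somewhere on the arm shows up at the joints as torques through the transpose of the Jacobian at the contact point, tau = Jᵀ f. The sketch below, a hypothetical planar two-link arm with made-up link lengths, joint angles, and torque readings, inverts that relation with least squares to recover the force; localizing the contact and interpreting gestures, as the researchers do with six sensors and further algorithms, takes considerably more machinery.

```python
# Illustrative sketch only (not the authors' published method): recover an
# external force from joint-torque readings via tau = J(q)^T @ f.
import numpy as np

L1, L2 = 0.4, 0.3               # link lengths in meters (assumed values)
q1, q2 = np.deg2rad([30, 45])   # current joint angles (assumed values)

# Jacobian of the assumed contact point (here: the tip of link 2)
J = np.array([
    [-L1 * np.sin(q1) - L2 * np.sin(q1 + q2), -L2 * np.sin(q1 + q2)],
    [ L1 * np.cos(q1) + L2 * np.cos(q1 + q2),  L2 * np.cos(q1 + q2)],
])

# Joint torques "measured" by the arm's torque sensors (made-up numbers)
tau_measured = np.array([0.9, 0.35])

# Least-squares estimate of the external force that explains those torques
f_external, *_ = np.linalg.lstsq(J.T, tau_measured, rcond=None)
print("Estimated contact force (N):", f_external)
```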

All of this means that every square inch of the robot essentially becomes a touch screen, except without the cost, fragility, and wiring of one, says Maged Iskandar, a researcher at the German Aerospace Center and lead author of the study.

“Human-robot interaction, where a human can closely interact with and command a robot, is still not optimal, because the human needs an input device,” Iskandar says. “If you can use the robot itself as a device, the interactions will be more fluid.”

A system like this could offer a cheaper and simpler way to give robots a sense of touch, as well as a new way for people to communicate with them. That could be particularly significant for larger robots, like humanoids, which continue to receive billions in venture capital investment. 

Calogero Maria Oddo, a roboticist who leads the Neuro-Robotic Touch Laboratory at the BioRobotics Institute but was not involved in the work, says the development is significant, thanks to the way the research combines sensors, elegant use of mathematics to map out touch, and new AI methods to put it all together. Oddo says commercial adoption could be fairly quick, since the investment required is more in software than hardware, which is far more expensive.

There are caveats, though. For one, the new model cannot handle more than two points of contact at once. In a fairly controlled setting like a factory floor that might not be an issue, but in environments where human-robot interactions are less predictable, it could present limitations. And the sorts of sensors needed to communicate touch to a robot, though commercially available, can also cost tens of thousands of dollars.

Overall, though, Oddo envisions a future where skin-based sensors and joint-based ones are merged to give robots a more comprehensive sense of touch.

“We humans and other animals have integrated both solutions,” he says. “I expect robots working in the real world will use both, too, to interact safely and smoothly with the world and learn.”