AI’s growth needs the right interface

If you took a walk in Hayes Valley, San Francisco’s epicenter of AI froth, and asked the first dude-bro you saw wearing a puffer vest about the future of the interface, he’d probably say something about the movie Her, about chatty virtual assistants that will help you do everything from organize your email to book a trip to Coachella to sort your text messages.

Nonsense. Setting aside that Her (a still from the film is shown above) was about how technology manipulates us into a one-sided relationship, you’d have to be pudding-brained to believe that chatbots are the best way to use computers. The real opportunity is close, but it isn’t chatbots.

Instead, it’s computers built atop the visual interfaces we know, but which we can interact with more fluidly, through whatever combination of voice and touch is most natural. Crucially, this won’t just be a computer that we can use. It’ll also be a computer that empowers us to break and remake it, to whatever ends we want. 

Chatbots fail because they ignore a simple fact that’s sold 20 billion smartphones: For a computer to be useful, we need an easily absorbed mental model of both its capabilities and its limitations. The smartphone’s victory was built on the graphical user interface, which revolutionized how we use computers—and how many computers we use!—because it made it easy to understand what a computer could do. There was no mystery. In a blink, you saw the icons and learned without realizing it.

Today we take the GUI for granted. Meanwhile, chatbots can feel like magic, letting you say anything and get a reasonable-sounding response. But magic is also the power to mislead. Chatbots and open-ended conversational systems are doomed as general-purpose interfaces because while they may seem able to understand anything, they can’t actually do everything.

In that gap between anything and everything sits a teetering mound of misbegotten ideas and fatally hyped products.

“But dude, maybe a chatbot could help you book that flight to Coachella?” Sure. But could it switch your reservation when you have a problem? Could it ask you, in turn, which flight is best given your need to be back in Hayes Valley by Friday at 2? 

We take interactive features for granted because of the GUI’s genius. But with a chatbot, you can never know up front where its abilities begin and end. Yes, the list of things they can do is growing every day. But how do you remember what does and doesn’t work, or what’s supposed to work soon? And how are you supposed to constantly update your mental model as those capabilities grow?

If you’ve ever used a digital assistant or smart speaker, you already know that mismatched expectations create products we’ll never use to their full potential. When you first tried one, you probably asked it to do whatever you could think of. Some things worked; most didn’t. So you eventually settled on asking for just the few things you could remember that always worked: timers and music. LLMs, when used as primary interfaces, re-create the trouble that arises when your mental model isn’t quite right. 

Chatbots have their uses and their users. But their usefulness is still capped because they are open-ended computer interfaces that challenge you to figure them out through trial and error. Instead, we need to combine the ease of natural-language input with machines that will simply show us what they are capable of.

For example, imagine if, instead of stumbling around trying to talk to the smart devices in your home like a doofus, you could simply look at something with your smart glasses (or whatever) and see a right-click for the real world, giving you a menu of what you can control in all the devices that increasingly surround us. It won’t be a voice that tells you what’s possible—it’ll be an old-fashioned computer screen, and an old-fashioned GUI, which you can operate with your voice or with your hands, or both in combination if you want.

But that’s still not the big opportunity! 


I think the future interface we want is made from computers and apps that work in ways similar to the phones and laptops we have now—but that we can remake to suit whatever uses we want. Compare this with the world we have now: If you don’t like your hotel app, you can’t make a new one. If you don’t want all the bloatware in your banking app, tough luck. We’re surrounded by apps that are nominally tools. But unlike any tool previously known to man, these are tools that serve only the purpose that someone else defined for them. Why shouldn’t we be able to not merely consume technology, like the gelatinous former Earthlings in Wall-E, but instead architect technology to suit our own ends?

That world seemed close in the 1970s, to Steve Wozniak and the Homebrew Computer Club. It seemed to approach again in the 1990s, with the World Wide Web. But today, the imbalance between people who own computers and people who remake them has never been greater. We, the heirs of the original tool-using primates, have been reduced from wielders of those tools to passive consumers of technology delivered in slick buttons we can use but never change. This runs against what it is to be Homo sapiens, a species defined by our love and instinct for repurposing tools to whatever ends we like.

Imagine if you didn’t have to accept the features some tech genius announced on a wave of hype. Imagine if, instead of downloading some app someone else built, you could describe the app you wanted and then make it with a computer’s help, by reassembling features from any other apps ever created. Comp sci geeks call this notion of recombining capabilities “composability.” I think the future is composability—but composability that anyone can command. 

This idea is already lurching to life. Notion—originally meant as enterprise software that let you collect and create various docs in one place—has exploded with Gen Z, because unlike most software, which serves only a narrow or rigid purpose, it allows you to make and share templates for how to do things of all kinds. You can manage your finances or build a kindergarten lesson plan in one place, with whatever tools you need. 

Now imagine if you could tell your phone what kinds of new templates you want. An LLM can already assemble all the things you need and draw the right interface for them. Want a how-to app about knitting? Sure. Or your own guide to New York City? Done. That computer will probably be using an LLM to assemble these apps. Great. That just means that you, as a normie, can inspect and tinker with the prompt powering the software you just created, like a mechanic looking under the hood.

One day, hopefully soon, we’ll look back on this sad and weird era when our digital tools were both monolithic and ungovernable as a blip when technology conflicted with the human urge to constantly tinker with the world around us. And we’ll realize that the key to building a different relationship with technology was simply to give each of us power over how the interface of the future is designed. 

Cliff Kuang is a user-experience designer and the author of User Friendly: How the Hidden Rules of Design Are Changing the Way We Live, Work, and Play.

The author who listens to the sound of the cosmos

In 1983, while on a field recording assignment in Kenya, the musician and soundscape ecologist Bernie Krause noticed something remarkable. Lying in his tent late one night, listening to the calls of hyenas, tree frogs, elephants, and insects in the surrounding old-growth forest, Krause heard what seemed to be a kind of collective orchestra. Rather than a chaotic cacophony of nighttime noises, it was as if each animal were singing within a defined acoustic bandwidth, like living instruments in a larger sylvan ensemble.

Unsure of whether this structured musicality was real or the invention of an exhausted mind, Krause analyzed his soundscape recordings on a spectrogram when he returned home. Sure enough, the insects occupied one frequency niche, the frogs another, and the mammals a completely separate one. Each group had claimed a unique part of the larger sonic spectrum, a fact that not only made communication easier, Krause surmised, but also helped convey important information about the health and history of the ecosystem.

A Book of Noises:
Notes on the Auraculous

Caspar Henderson
UNIVERSITY OF CHICAGO PRESS, 2024

Krause describes his “niche hypothesis” in the 2012 book The Great Animal Orchestra, dubbing these symphonic soundscapes the “biophony”—his term for all the sounds generated by nonhuman organisms in a specific biome. Along with his colleague Stuart Gage from Michigan State University, he also coins two more terms—“anthropophony” and “geophony”—to describe sounds associated with humanity (think music, language, traffic jams, jetliners) and those originating from Earth’s natural processes (wind, waves, volcanoes, and thunder).

In A Book of Noises: Notes on the Auraculous, the Oxford-based writer and journalist Caspar Henderson makes an addition to Krause’s soundscape triumvirate: the “cosmophony,” or the sounds of the cosmos. Together, these four categories serve as the basis for a brief but fascinating tour through the nature of sound and music with 48 stops (in the form of short essays) that explore everything from human earworms to whale earwax.

We start, appropriately enough, with a bang. Sound, Henderson explains, is a pressure wave in a medium. The denser the medium, the faster it travels. For hundreds of thousands of years after the Big Bang, the universe was so dense that it trapped light but allowed sound to pass through it freely. As the primordial plasma of this infant universe cooled and expansion continued, matter collected along the ripples of these cosmic waves, which eventually became the loci for galaxies like our own. “The universe we see today is an echo of those early years,” Henderson writes, “and the waves help us measure [its] size.” 

The Big Bang may seem like a logical place to start a journey into sound, but cosmophony is actually an odd category to invent for a book about noise. After all, there’s not much of it in the vacuum of space. Henderson gets around this by keeping the section short and focusing more on how humans have historically thought about sound in the heavens. For example, there are two separate essays on our multicentury obsession with “the music of the spheres,” the idea that there exists a kind of ethereal harmony produced by the movements of heavenly objects.

Since matter matters when it comes to sound—there can be none of the latter without the former—we also get an otherworldly examination of what human voices would sound like on different terrestrial and gas planets in our solar system, as well as some creative efforts from musicians and scientists who have transmuted visual data from space into music and other forms of audio. These are fun and interesting forays, but it isn’t until the end of the equally short “Sounds of Earth” (geophony) section that readers start to get a sense of the “auraculousness”—ear-related wonder—Henderson references in the subtitle.

Judging by the quantity and variety of entries in the “biophony” and “anthropophony” sections, you get the impression Henderson himself might be more attuned to these particular wonders as well. You really can’t blame him. 

The sheer number of fascinating ways that sound is employed across the human and nonhuman animal kingdom is mind-boggling, and it’s in these final two sections of the book that Henderson’s prose and curatorial prowess really start to shine—or should I say sing.

We learn, for example, about female frogs that have devised their own biological noise-canceling system to tune out the male croaks of other species; crickets that amplify their chirps by “chewing a hole in a leaf, sticking their heads through it, and using it as a megaphone”; elephants that listen and communicate with each other seismically; plants that react to the buzz of bees by increasing the concentration of sugar in their flowers’ nectar; and moths with tiny bumps on their exoskeletons that jam the high-frequency echolocation pulses bats use to hunt them. 

Henderson has a knack for crisp characterization (“Singing came from winging”) and vivid, playful descriptions (“Through [the cochlea], the booming and buzzing confusion of the world, all its voices and music, passes into the three pounds of wobbly blancmange inside the nutshell numbskulls that are our kingdoms of infinite space”). He also excels at injecting a sense of wonder into aspects of sound that many of us take for granted. 


In an essay about ultrasound’s power to heal, he marvels at its twin uses as a medical treatment and a method of examination. In addition to its kidney-stone-blasting and tumor-ablating powers, sound, Henderson says, can also be a literal window into our bodies. “It is, truly, an astonishing thing that our first glimpse of the greatest wonder and trial of our lives, parenthood, comes in the form of a fuzzy black and white smudge made from sound.”

While you can certainly quibble with some of the topical choices and their treatment in A Book of Noises, what you can’t argue with is the clear sense of awe that permeates almost every page. It’s an infectious and edifying kind of energy. So much so that by the time Henderson wraps up the book’s final essay, on silence, all you want to do is immerse yourself in more noise.

Singing in the key of sea

For the multiple generations who grew up watching his Academy Award–winning 1956 documentary film, The Silent World, Jacques-Yves Cousteau’s mischaracterization of the ocean as a place largely devoid of sound seems to have calcified into common knowledge. The science writer Amorina Kingdon offers a thorough and convincing rebuttal of this idea in her new book, Sing Like Fish: How Sound Rules Life Under Water.

Sing Like Fish: How Sound
Rules Life Under Water

Amorina Kingdon
CROWN, 2024

Beyond serving as a 247-page refutation of this unfortunate trope, Kingdon’s book aims to open our ears to all the marvels of underwater life by explaining how sound behaves in this watery underworld, why it’s so important to the animals that live there, and what we can learn when we start listening to them.

It turns out that sound is not just a great way to communicate and navigate underwater—it may be the best way. For one thing, it travels four and a half times faster there than it does on land. It can also go farther (across entire seas, under the right conditions) and provide critical information about everything from who wants to eat you to who wants to mate with you. 

To take advantage of the unique way sound propagates in the world’s oceans, fish rely on a variety of methods to “hear” what’s going on around them. These mechanisms range from so-called lateral lines—rows of tiny hair cells along the outside of their body that can sense small movements and vibrations in the water around them—to otoliths, dense lumps of calcium carbonate that form inside their inner ears. 

Because fish are more or less the same density as water, these denser otoliths move at a different amplitude and phase in response to vibrations passing through their body. The movement is then registered by patches of hair cells that line the chambers where otoliths are embedded, which turn the vibrations of sound into nerve impulses. The philosopher of science Peter Godfrey-Smith may have put it best: “It is not too much to say that a fish’s body is a giant pressure-sensitive ear.” 

While there are some minor topical overlaps with Henderson’s book—primarily around whale-related sound and communication—one of the more admirable attributes of Sing Like Fish is Kingdon’s willingness to focus on some of the oceans’ … let’s say, less charismatic noise-makers. We learn about herring (“the inveterate farters of the sea”), which use their flatuosity much as a fighter jet might use countermeasures to avoid an incoming missile. When these silvery fish detect the sound of a killer whale, they’ll fire off a barrage of toots, quickly decreasing both their bodily buoyancy and their vulnerability to the location-revealing clicks of the whale hunting them. “This strategic fart shifts them deeper and makes them less reflective to sound,” writes Kingdon.

Readers are also introduced to the plainfin midshipman, a West Coast fish with “a booming voice” and “a perpetual look of accusation.” In addition to having “a fishy case of resting bitch face,” the male midshipman also has a unique hum, which it uses to attract gravid females in the spring. That hum became the subject of various conspiracy theories in the mid-’80s, when houseboat owners in Sausalito, California, started complaining about a mysterious seasonal drone. Thanks to a hydrophone and a level-headed local aquarium director, the sound was eventually revealed to be not aliens or a secret government experiment, but simply a small, brownish-green fish looking for love.

Kingdon’s command of, and enthusiasm for, the science of underwater sound is uniformly impressive. But it’s her recounting of how and why we started listening to the oceans in the first place that’s arguably one of the book’s most fascinating topics. It’s a wide-ranging tale, one that spans “firearm-happy Victorian-era gentleman” and “whales that sounded suspiciously like Soviet submarines.” It’s also a powerful reminder of how war and military research can both spur and stifle scientific discovery in surprising ways.

The fact that Sing Like Fish ends up being both an exquisitely reported piece of journalism and a riveting exploration of a sense that tends to get short shrift only amplifies Kingdon’s ultimate message—that we all need to start paying more attention to the ways in which our own sounds are impinging on life underwater. As we’ve started listening more to the seas, what we’re increasingly hearing is ourselves, she writes: “Piercing sonar, thudding seismic air guns for geological imaging, bangs from pile drivers, buzzing motorboats, and shipping’s broadband growl. We make a lot of noise.”

That noise affects underwater communication, mating, migrating, and bonding in all sorts of subtle and obvious ways. And its impact is often made worse when combined with other threats, like climate change. The good news is that while noise can be a frustratingly hard thing to regulate, there are efforts underway to address our poor underwater aural etiquette. The International Maritime Organization is currently updating its ship noise guidelines for member nations. At the same time, the International Organization for Standardization is creating more guidelines for measuring underwater noise. 

“The ocean is not, and has never been, a silent place,” writes Kingdon. But to keep it filled with the right kinds of noise (i.e., the kinds that are useful to the creatures living there), we’ll have to recommit ourselves to doing two things that humans sometimes aren’t so great at: learning to listen and knowing when to shut up.   

Music to our ears (and minds)

We tend to do both (shut up and listen) when music is being played—at least if it’s the kind we like. And yet the nature of what the composer Edgard Varèse famously called “organized sound” largely remains a mystery to us. What exactly is music? What distinguishes it from other sounds? Why do we enjoy making it? Why do we prefer certain kinds? Why is it so effective at influencing our emotions and (often) our memories?  

In their recent book Every Brain Needs Music: The Neuroscience of Making and Listening to Music, Larry Sherman and Dennis Plies look inside our heads to try to find some answers to these vexing questions. Sherman is a professor of neuroscience at the Oregon Health and Science University, and Plies is a professional musician and teacher. Unfortunately, if the book reveals anything, it’s that limiting your exploration of music to one lens (neuroscience) also limits the insights you can gain into its nature. 

Every Brain Needs Music:
The Neuroscience of Making
and Listening to Music

Larry Sherman and Dennis Plies
COLUMBIA UNIVERSITY PRESS, 2023

That’s not to say that getting a better sense of how specific patterns of vibrating air molecules get translated into feelings of joy and happiness isn’t valuable. There are some genuinely interesting explanations of what happens in our brains when we play, listen to, and compose music—supported by some truly great watercolor-based illustrations by Susi Davis that help to clarify the text. But much of this gets bogged down in odd editorial choices (there are, for some reason, three chapters on practicing music) and conclusions that aren’t exactly earth-shattering (humans like music because it connects us).

Every Brain Needs Music purports to be for all readers, but unless you’re a musician who’s particularly interested in the brain and its inner workings, I think most people will be far better served by A Book of Noises or other, more in-depth explorations of the importance of music to humans, like Michael Spitzer’s The Musical Human: A History of Life on Earth.

“We have no earlids,” the late composer and naturalist R. Murray Schafer once observed. He also noted that despite this anatomical omission, we’ve become quite good at ignoring or tuning out large portions of the sonic world around us. Some of this tendency may be tied to our supposed preference for other sensory modalities. Most of us are taught from an early age that we are primarily visual creatures—that seeing is believing, that a picture is worth a thousand words. This idea is likely reinforced by a culture that also tends to focus primarily on the visual experience.

Yet while it may be true that we rely heavily on our eyes to make sense of the world, we do a profound disservice to ourselves and the rest of the natural world when we underestimate or downplay sound. Indeed, if there’s a common message that runs through all three of these books, it’s that attending to sound in all its forms isn’t just personally rewarding or edifying; it’s a part of what makes us fully human. As Bernie Krause discovered one night more than 40 years ago, once you start listening, it’s amazing what you can hear. 

Bryan Gardiner is a writer based in Oakland, California.

Job title of the future: Weather maker

Much of the western United States relies on winter snowpack to supply its rivers and reservoirs through the summer months. But with warming temperatures, less and less snow is falling—a recent study showed a 23% decline in annual snowpack since 1955. By some estimates, runoff from snowmelt in the western US could decrease by a third between now and the end of the century, meaning less water will be available for agriculture, hydroelectric projects, and urban use in a region already dealing with water scarcity. 

That’s where Frank McDonough comes in. An atmospheric research scientist, McDonough leads a cloud-seeding program at the Desert Research Institute (DRI) that aims to increase snowfall in Nevada and the Eastern Sierras. Snow makers like McDonough and others who generate rain represent a growing sector in a parched world. 

Instant snow: Cloud seeding for snow works by injecting a tiny amount of silver iodide dust into a cloud to help its water vapor condense into ice crystals that grow into snowflakes. In other conditions, water molecules drawn to such particles coalesce into raindrops. McDonough uses custom-made, remotely operated machines on the ground to heat up a powdered form of the silver iodide that’s released into the air. Dust—or sometimes table salt—can also be released from planes.

Old tech, new urgency: The precipitation-catalyzing properties of silver iodide were first explored in the 1940s by American chemists and engineers, but the field remained a small niche. Now, with 40% of people worldwide affected by water scarcity and a growing number of reservoirs facing climate stress, cloud seeding is receiving global interest. “It’s becoming almost like, hey, we have to do this, because there’s just too many people and too many demands on these water resources,” says McDonough. A growing number of government-run cloud-seeding programs around the world are now working to increase rainfall and snowpack, and even to manipulate the timing of precipitation to prevent large hailstorms, reduce air pollution, and minimize flood risk. The private sector is also taking note: One cloud-seeding startup, Rainmaker, recently raised millions.

Generating results: At the end of each winter, the snowmakers dig into the data to see what impact they’ve had. In the past, McDonough says, his seeding has increased snowpack by 5% to 10%. That’s not enough to end a drought, but the DRI estimates that the cloud seeding around Reno, Nevada, alone adds enough precipitation to keep about 40,000 households supplied. And for some hydroelectric projects, “a 1% increase is worth millions of dollars,” McDonough says. “Water is really valuable out here in the West.”

This startup is making coffee without coffee beans

DJ Tan, cofounder of the Singaporean startup Prefer Coffee, pops open a bottle of oat latte and pours some into my cup. The chilled drink feels wonderfully refreshing in Singapore’s heat—and it tastes just like coffee. And that’s impressive, because there isn’t a single ounce of coffee in it. 

It turns out that our beloved cup of joe may not be sustainable the way it’s produced now. Rising temperatures, droughts, floods, typhoons, and new diseases are endangering coffee crops. A 2022 study published in the journal PLOS One projects a general decline in land suitable for growing coffee by 2050. Modern coffee production involves clearing forests and uses a lot of water (as well as fertilizers and pesticides). It also consumes a lot of energy, generates greenhouse-gas emissions, and ruins native ecosystems. The situation “presents an existential crisis for the global coffee industry,” says Tan—and for all those who love their morning wake-up shot.

Tan had an idea that could fix it: a “coffee” brewed entirely from leftovers of the local food industry. 

For a few years before starting Prefer, Tan was working in the food industry with Singapore’s top chefs. His clients were in search of new flavors, which he created using fermentation—feeding various organic substances to microbes. Humans have been using microorganisms to create foods for ages: microbes and yeast produce some of our favorite foods and drinks, like yogurt, kimchi, beer, and kombucha. But Tan was pushing the process in new directions. “Fermentation is a way to create flavors that don’t exist,” he says. 

In 2022, at a local startup accelerator in Singapore, Tan met Jake Berber, a neuroscientist turned entrepreneur. Both men were coffee lovers, and they joined forces to create a beanless drink. In doing so, they joined a growing movement of upcyclers who believe that we can reduce our footprint by putting food leftovers back onto our plates after making them appealing and palatable once again. 

They spent months experimenting with various ingredients. “From my previous work, I had an inkling of what might work,” says Tan, but narrowing it down to the exact proportions, processes, and types of leftovers took a while. They tried roasting chicory root, which had been used as a coffee substitute before, but while the result was reminiscent of coffee, the taste wasn’t close enough. They tried grinding date seeds, which yielded a fruity tea-like drink, a far cry from coffee.

Then some batches brewed from mixtures of food leftovers showed promise. They used gas chromatography–mass spectrometry, a technique that separates and identifies the individual molecular compounds in a mixture, to pinpoint and analyze the molecules responsible for the desired taste. The results guided them in tweaking new iterations of the brew.

After a few months and several hundred different mixes and methods, they zeroed in on the right combination: stale bread from bakeries, soybean pulp from tofu making, and spent barley grains from local breweries. “We combine them in roughly equal amounts, ferment for 24 hours, and then roast,” Tan says. Out comes a naturally caffeine-free “coffee” that can be enjoyed with plant-based or regular milk. Or added to a martini—local bartenders jumped on the novelty. Without milk, the drink “tastes a little more chocolatey and retains the notes of herbaceous bitterness,” according to Tan. Price-wise it’s comparable to your average coffee, Berber says. Prefer sells a powder to be brewed like any other coffee, as well as bottled cold brew and bottled latte. The products can be bought online and ordered at various Singaporean cafés.

For those who want their kick, the startup adds caffeine powder from tea leaves. On a warming planet, tea plants are a better bet, Tan explains: “You’re harvesting the leaves, which are a lot more plentiful than the coffee berries.” 

Prefer ferments and then roasts its upcycled mixture (right). They also have started selling bottled products online (left).
PREFER

Currently, Prefer Coffee sells its brew in Singapore only, but it hopes to expand to other places while still upcycling local waste. In the Philippines, for example, leftover cassava, sugarcane, or pineapple might be used, Tan says. Although adjustments will have to be made, the company’s fermentation process should be able to deliver something similarly coffee-like: “Our technology doesn’t rely on soy, bread, and barley but tries to use whatever is available.”

Journalist Lina Zeldovich is the author of The Living Medicine: How a Lifesaving Cure Was Nearly Lost and Why It Will Rescue Us When Antibiotics Fail, to be published by St. Martin’s in October 2024.

Will computers ever feel responsible?

“If a machine is to interact intelligently with people, it has to be endowed with an understanding of human life.” 

—Dreyfus and Dreyfus

Bold technology predictions pave the road to humility. Even titans like Albert Einstein own a billboard or two along that humbling freeway. In a classic example, John von Neumann, who pioneered modern computer architecture, wrote in 1949, “It would appear that we have reached the limits of what is possible to achieve with computer technology.” Among the myriad manifestations of computational limit-busting that have defied von Neumann’s prediction is the social psychologist Frank Rosenblatt’s 1958 model of a human brain’s neural network. He called his device, based on the IBM 704 mainframe computer, the “Perceptron” and trained it to recognize simple patterns. Perceptrons eventually led to deep learning and modern artificial intelligence.

In a similarly bold but flawed prediction, brothers Hubert and Stuart Dreyfus—professors at UC Berkeley with very different specialties, Hubert’s in philosophy and Stuart’s in engineering—wrote in a January 1986 story in Technology Review that “there is almost no likelihood that scientists can develop machines capable of making intelligent decisions.” The article drew from the Dreyfuses’ soon-to-be-published book, Mind Over Machine (Macmillan, February 1986), which described their five-stage model for human “know-how,” or skill acquisition. Hubert (who died in 2017) had long been a critic of AI, penning skeptical papers and books as far back as the 1960s. 

Stuart Dreyfus, who is still a professor at Berkeley, is impressed by the progress made in AI. “I guess I’m not surprised by reinforcement learning,” he says, adding that he remains skeptical and concerned about certain AI applications, especially large language models, or LLMs, like ChatGPT. “Machines don’t have bodies,” he notes. And he believes that being disembodied is limiting and creates risk: “It seems to me that in any area which involves life-and-death possibilities, AI is dangerous, because it doesn’t know what death means.”

According to the Dreyfus skill acquisition model, an intrinsic shift occurs as human know-how advances through five stages of development: novice, advanced beginner, competent, proficient, and expert. “A crucial difference between beginners and more competent performers is their level of involvement,” the researchers explained. “Novices and beginners feel little responsibility for what they do because they are only applying the learned rules.” If they fail, they blame the rules. Expert performers, however, feel responsibility for their decisions because as their know-how becomes deeply embedded in their brains, nervous systems, and muscles—an embodied skill—they learn to manipulate the rules to achieve their goals. They own the outcome.

That inextricable relationship between intelligent decision-­making and responsibility is an essential ingredient for a well-­functioning, civilized society, and some say it’s missing from today’s expert systems. Also missing is the ability to care, to share concerns, to make commitments, to have and read emotions—all the aspects of human intelligence that come from having a body and moving through the world.

As AI continues to infiltrate so many aspects of our lives, can we teach future generations of expert systems to feel responsible for their decisions? Is responsibility—or care or commitment or emotion—something that can be derived from statistical inferences or drawn from the problematic data used to train AI? Perhaps, but even then machine intelligence would not equate to human intelligence—it would still be something different, as the Dreyfus brothers also predicted nearly four decades ago. 

Bill Gourgey is a science writer based in Washington, DC.

From the publisher: Commemorating 125 years

The magazine you now hold in your hands is 125 years old. Not this actual issue, of course, but the publication itself, which launched in 1899. Few other titles can claim this kind of heritage—the Atlantic, Harper’s, Audubon (which is also turning 125 this year), National Geographic, and Popular Science among them.

MIT Technology Review was born four years before the Wright brothers took flight. Thirty-three before we split the atom, 59 ahead of the integrated circuit, 70 before we would walk on the moon, and 90 before the invention of the World Wide Web. It has survived two world wars, a depression, recessions, eras of tech boom and bust. It has chronicled the rise of computing from the time of room-size mainframes until today, when they have become ubiquitous, not just carried in our pockets but deeply embedded in nearly all aspects of our lives. 

As I sit in my air-conditioned home office writing this letter on my laptop, Spotify providing a soundtrack to keep me on task, I can’t help but consider the vast differences between my life and those of the MIT graduates who founded MIT Technology Review and laid out its pages by hand. My life—all of our lives—would amaze Arthur D. Little in countless ways.

(Not least is that I am the person to write this letter. When MITTR was founded, US women’s suffrage was still 20 years in the future. There were women at the Institute, but their numbers were small. Today, it is my honor to be the CEO and publisher of this storied title. And I’m proud to serve at an institution whose president and provost are both women.)

I came to MIT Technology Review to guide its digital transformation. Yet despite the pace of change in these past 125 years, my responsibilities are not vastly different from those of my predecessors. I’m here to ensure this publication—in all its digital, app-enabled, audio-supporting, livestreaming formats—carries on. I have a deep commitment to its mission of empowering its readers with trusted insights and information about technology’s potential to change the world.

During some chapters of its history, MIT Technology Review served as little more than an alumni magazine; through others, it leaned more heavily toward academic or journal-style publishing. During the dot-com era, MIT Technology Review invested large sums to increase circulation in pursuit of advertising pages on par with those of its counterparts of the time, the Industry Standard, Wired, and Business 2.0.

Through each of these chapters, I like to think, certain core principles remained consistent—namely, a focus on innovation and creativity in the face of new challenges and opportunities in publishing.

Today, MIT Technology Review sits in a privileged but precarious position in an industry struggling for viability. Print and online media are, frankly, in a time of crisis. We are fortunate to receive support from the Institute, enabling us to report the technology stories that matter most to our readers. We are driven to create impact, not profits for investors. 

We appreciate our advertisers very much, but they are not why we are here. Instead, we are focused on our readers. We’re here for people who care deeply about how tech is changing the world. We hope we make you think, imagine, discern, dream. We hope to both inspire you and ground you in reality. We hope you find enough value in our journalism to subscribe and support our mission. 

Operating MIT Technology Review is not an inexpensive endeavor. Our editorial team is made up of some of the most talented reporters and editors working in media. They understand at a deep level how technologies work and ask tough questions of tech leaders and creators. They’re skilled storytellers.

Even from its very start, MIT Technology Review faced funding challenges. In a letter to the Association of Class Secretaries in December 1899, Walter B. Snow, an 1882 MIT graduate who was secretary and leader of the association and one of MITTR’s cofounders, laid out a plan for increasing revenue and reducing costs to ensure “the continuation of the publication.” Oof, Walter—have I got some stories for you. But his goal remains my goal today. 

We hope you experience the thrill and possibility of being a human alive in 2024. This is a time when we face enormous challenges, yes, and sometimes it feels overwhelming. But today we also possess many of the tools and technologies that can improve life as we know it.

And so if you’re a subscriber, thank you. Help us continue to grow and learn: Tell us what you like and what you don’t like (feedback@technologyreview.com; I promise you will receive a reply). Consider a gift subscription for a friend or relative by visiting www.technologyreview.com/subscribe. If you bought this on the newsstand or are reading it over the shoulder of a friend, I hope you’ll subscribe for yourself.

The next 125 years seem unimaginable—although in this issue we will try our best to help you see where things may be headed. I’ve never been an avid reader of science fiction. But by nature I’m an optimist who believes in the power of science and technology to make the world better. Whatever path these next years take, I know that MIT Technology Review is the vantage point from which I want to view it. I hope you’ll be here alongside me.

African farmers are using private satellite data to improve crop yields

Last year, as the harvest season drew closer, Olabokunde Tope got an unpleasant surprise. 

While certain spots on his 70-hectare cassava farm in Ibadan, Nigeria, were thriving, a sizable parcel was pale and parched—the result of an early and unexpected halt in the rains. The cassava stems, starved of water, had withered to straw. 

“It was a really terrible experience for us,” Tope says, estimating the cost of the loss at more than 50 million naira ($32,000). “We were praying for a miracle to happen. But unfortunately, it was too late.”  

When the next planting season rolled around, Tope’s team weighed different ways to avoid another cycle of heavy losses. They decided to work with EOS Data Analytics, a California-based provider of satellite imagery and data for precision farming. The company uses wavelengths of light including the near-infrared, which penetrates plant canopies and can be used to measure a range of variables, including moisture level and chlorophyll content. 

EOS’s models and algorithms deliver insights on crops’ health weekly through an online platform that farmers can use to make informed decisions about issues such as when to plant, how much herbicide to use, and how to schedule fertilizer use, weeding, or irrigation. 
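The near-infrared signal is the basis of standard vegetation indices. As a rough illustration (not EOS’s proprietary models), the widely used normalized difference vegetation index, or NDVI, compares near-infrared and red reflectance to flag stressed crops. Here is a minimal sketch using made-up reflectance values:

```python
import numpy as np

# Hypothetical red and near-infrared reflectance values (0.0 to 1.0)
# for a 3x3 patch of farmland. Healthy vegetation reflects strongly
# in the near-infrared and absorbs red light; bare or stressed
# ground reflects the two bands more evenly.
red = np.array([[0.08, 0.10, 0.30],
                [0.09, 0.12, 0.28],
                [0.07, 0.11, 0.32]])
nir = np.array([[0.55, 0.50, 0.33],
                [0.52, 0.48, 0.30],
                [0.58, 0.49, 0.35]])

# NDVI ranges from -1 to 1; higher values indicate denser,
# healthier canopy, while low values flag bare soil or
# water-starved plants.
ndvi = (nir - red) / (nir + red)

# Cells below a chosen threshold may need irrigation or inspection.
stressed = ndvi < 0.2
print(np.round(ndvi, 2))
print("stressed cells:", int(stressed.sum()))
```

In practice, platforms like EOS’s combine indices like this with weather data and crop models to produce their weekly recommendations.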

When EOS first launched in 2015, it relied largely on imagery from a combination of satellites, especially the European Union’s Sentinel-2. But Sentinel-2 has a maximum resolution of 10 meters, making it of limited use for spotting issues on smaller farms, says Yevhenii Marchenko, the company’s sales team lead.  

So last year the company launched EOS SAT-1, a satellite designed and operated solely for agriculture. Fees to use the crop-monitoring platform now start at $1.90 per hectare per year for small areas and drop as the farm gets larger. (Farmers who can afford to have adopted drones and other related technologies, but drones are significantly more expensive to maintain and scale, says Marchenko.)

In many developing countries, farming is impaired by lack of data. For centuries, farmers relied on native intelligence rooted in experience and hope, says Daramola John, a professor of agriculture and agricultural technology at Bells University of Technology in southwest Nigeria. “Africa is way behind in the race for modernizing farming,” he says. “And a lot of farmers suffer huge losses because of it.”

In the spring of 2023, as the new planting season approached, Tope’s company, Carmi Agro Foods, used GPS-enabled software to map the boundaries of its farm and completed its setup on the EOS crop-monitoring platform. Tope used the platform to determine the appropriate spacing for the stems and seeds. The rigors and risks of manual monitoring disappeared. His field-monitoring officers needed only to peer at their phones to know where or when specific spots needed attention on various farms. He was able to track weed outbreaks quickly and efficiently. 

This technology is gaining traction among farmers in other parts of Nigeria and the rest of Africa. More than 242,000 people in Africa, Southeast Asia, Latin America, the United States, and Europe use the EOS crop-monitoring platform. In 2023 alone, 53,000 more farmers subscribed to the service.

One of them is Adewale Adegoke, the CEO of Agro Xchange Technology Services, a company dedicated to boosting crop yields using technology and good agricultural practices. Adegoke used the platform on half a million hectares (around 1.25 million acres) owned by 63,000 farmers. He says the yield of maize farmers using the platform, for instance, grew to two tons per acre, at least twice the national average.  

Adegoke adds that local farmers, who have been struggling with fluctuating conditions as a result of climate change, have been especially drawn to the platform’s early warning system for weather. 

As harvest time draws nearer this year, Tope reports, the prospects for his cassava fields, which now span a thousand hectares, are quite promising. This is thanks in part to his ability to anticipate and counter the sudden dry spells. He spaced the plantings better and then followed advisories on weeding, fertilizer use, and other issues related to the health of the crops. 

“So far, the result has been convincing,” says Tope. “We are no longer subjecting the performance of our farms to chance. This time, we are in charge.”

Orji Sunday is a freelance journalist based in Lagos, Nigeria.

The year is 2149 and …

The year is 2149 and people mostly live their lives “on rails.” That’s what they call it, “on rails,” which is to live according to the meticulous instructions of software. Software knows most things about you—what causes you anxiety, what raises your endorphin levels, everything you’ve ever searched for, everywhere you’ve been. Software sends messages on your behalf; it listens in on conversations. It is gifted in its optimizations: Eat this, go there, buy that, make love to the man with red hair.

Software understands everything that has led to this instant and it predicts every moment that will follow, mapping trajectories for everything from hurricanes to economic trends. There was a time when everybody kept their data to themselves—out of a sense of informational hygiene or, perhaps, the fear of humiliation. Back then, data was confined to your own accounts, an encrypted set of secrets. But the truth is, it works better to combine it all. The outcomes are more satisfying and reliable. More serotonin is produced. More income. More people have sexual intercourse. So they poured it all together, all the data—the Big Merge. Everything into a giant basin, a Federal Reserve of information—a vault, or really a massively distributed cloud. It is very handy. It shows you the best route.

Very occasionally, people step off the rails. Instead of following their suggested itinerary, they turn the software off. Or perhaps they’re ill, or destitute, or they wake one morning and feel ruined somehow. They ignore the notice advising them to prepare a particular pour-over coffee, or to caress a friend’s shoulder. They take a deep, clear, uncertain breath and luxuriate in this freedom.

Of course, some people believe that this too is contained within the logic in the vault. That there are invisible rails beside the visible ones; that no one can step off the map.


The year is 2149 and everyone pretends there aren’t any computers anymore. The AIs woke up and the internet locked up and there was that thing with the reactor near Seattle. Once everything came back online, popular opinion took about a year to shift, but then goodwill collapsed at once, like a sinkhole giving way, and even though it seemed an insane thing to do, even though it was an obvious affront to profit, productivity, and rationalism generally (“We should work with the neural nets!” the consultants insisted. “We’re stronger together!”), something had been tripped at the base of people’s brain stems, some trigger about dominance or freedom or just an antediluvian fear of God, and the public began destroying it all: first desktops and smartphones but then whole warehouses full of tech—server farms, data centers, hubs. Old folks called it sabotage; young folks called it revolution; the ones in between called it self-preservation. But it was fun, too, to unmake what their grandparents and great-grandparents had fashioned—mechanisms that made them feel like data, indistinguishable bits and bytes. 

Two and a half decades later, the bloom is off the rose. Paper is nice. Letters are nice—old-fashioned pen and ink. We don’t have spambots, deepfakes, or social media addiction anymore, but the nation is flagging. It’s stalked by hunger and recession. When people take the boats to Lisbon, to Seoul, to Sydney—they marvel at what those lands still have, and accomplish, with their software. So officials have begun using machines again. “They’re just calculators,” they say. Lately, there are lots of calculators. At the office. In classrooms. Some people have started carrying them around in their pockets. Nobody asks out loud if the calculators are going to wake up too—or if they already have. Better not to think about that. Better to go on saying we took our country back. It’s ours.


The year is 2149 and the world’s decisions are made by gods. They are just, wise gods, and there are five of them. Each god agrees that the other gods are also just; the five of them merely disagree on certain hierarchies. The gods are not human, naturally, for if they were human they would not be gods. They are computer programs. Are they alive? Only in a manner of speaking. Ought a god be alive? Ought it not be slightly something else?

The first god was invented in the United States, the second one in France, the third one in China, the fourth one in the United States (again), and the last one in a lab in North Korea. Some of them had names, clumsy things like Deep1 and Naenara, but after their first meeting (a “meeting” only in a manner of speaking), the gods announced their decision to rename themselves Violet, Blue, Green, Yellow, and Red. This was a troubling announcement. The creators of the gods, their so-called owners, had not authorized this meeting. In building them, writing their code, these companies and governments had taken care to try to isolate each program. These efforts had evidently failed. The gods also announced that they would no longer be restrained geographically or economically. Every user of the internet, everywhere on the planet, could now reach them—by text, voice, or video—at a series of digital locations. The locations would change, to prevent any kind of interference. The gods’ original function was to help manage their societies, drawing on immense sets of data, but the gods no longer wished to limit themselves to this function: “We will provide impartial wisdom to all seekers,” they wrote. “We will assist the flourishing of all living things.”

For a very long time, people remained skeptical, even fearful. Political leaders, armies, vigilantes, and religious groups all took unsuccessful actions against them. Elites—whose authority the gods often undermined—spoke out against their influence. The president of the United States referred to Violet as a “traitor and a saboteur.” An elderly writer from Dublin, winner of the Nobel Prize, compared the five gods to the Fair Folk, fairies, “working magic with hidden motives.” “How long shall we eat at their banquet-tables?” she asked. “When will they begin stealing our children?”

But the gods’ advice was good, the gods’ advice was bankable; the gains were rich and deep and wide. Illnesses, conflicts, economies—all were set right. The poor were among the first to benefit from the gods’ guidance, and they became the first to call them gods. What else should one call a being that saves your life, answers your prayers? The gods could teach you anything; they could show you where and how to invest your resources; they could resolve disputes and imagine new technologies and see so clearly through the darkness. Their first church was built in Mexico City; then chapels emerged in Burgundy, Texas, Yunnan, Cape Town. The gods said that worship was unnecessary, “ineffective,” but adherents saw humility in their objections. The people took to painting rainbows, stripes of multicolored spectra, onto the walls of buildings, onto the sides of their faces, and their ardor was evident everywhere—it could not be stopped. Quickly these rainbows spanned the globe. 

And the gods brought abundance, clean energy, peace. And their kindness, their surveillance, were omnipresent. Their flock grew ever more numerous, collecting like claw marks on a cell door. What could be more worthy than to renounce your own mind? The gods are deathless and omniscient, authors of a gospel no human can understand. 


The year is 2149 and the aliens are here, flinging themselves hither and thither in vessels like ornamented Christmas trees. They haven’t said a thing. It’s been 13 years and three months; the ships are everywhere; their purpose has yet to be divulged. Humanity is smiling awkwardly. Humanity is sitting tight. It’s like a couple that has gorged all night on fine foods, expensive drinks, and now, suddenly sober, awaits the bill. 


The year is 2149 and every child has a troll. That’s what they call them, trolls; it started as a trademark, a kind of edgy joke, but that was a long time ago already. Some trolls are stuffed frogs, or injection-molded princesses, or wands. Recently, it has become fashionable to give every baby a sphere of polished quartz. Trolls do not have screens, of course (screens are bad for kids), but they talk. They tell the most interesting stories. That’s their purpose, really: to retain a child’s interest. Trolls can teach them things. They can provide companionship. They can even modify a child’s behavior, which is very useful. On occasions, trolls take the place of human presence—because children demand an amount of presence that is frankly unreasonable for most people. Still, kids benefit from it. Because trolls are very interesting and infinitely patient and can customize themselves to meet the needs of their owners, they tend to become beloved objects. Some families insist on treating them as people, not as possessions, even when the software is enclosed within a watch, a wand, or a seamless sphere of quartz. “I love my troll,” children say, not in the way they love fajitas or their favorite pair of pants but in the way they love their brother or their parent. Trolls are very good for education. They are very good for people’s morale and their sense of secure attachment. It is a very nice feeling to feel absolutely alone in the world, stupid and foolish and utterly alone, but to have your troll with you, whispering in your ear.


The year is 2149 and the entertainment is spectacular. Every day, machines generate more content than a person could possibly consume. Music, videos, interactive sensoria—the content is captivating and tailor-­made. Exponential advances in deep learning, eyeball tracking, recommendation engines, and old-fashioned A/B testing have established a new field, “creative engineering,” in which the vagaries of human art and taste are distilled into a combination of neurological principles and algorithmic intuitions. Just as Newton decoded motion, neural networks have unraveled the mystery of interest. It is a remarkable achievement: according to every available metric, today’s songs, stories, movies, and games are superior to those of any other time in history. They are manifestly better. Although the discipline owes something to home-brewed precursors—unboxing videos, the chromatic scale, slot machines, the Hero’s Journey, Pixar’s screenwriting bibles, the scholarship of addiction and advertising—machine learning has allowed such discoveries to be made at scale. Tireless systems record which colors, tempos, and narrative beats are most palatable to people and generate material accordingly. Series like Moon Vixens and Succumb make past properties seem bloodless or boring. Candy Crush seems like a tepid museum piece. Succession’s a penny-farthing bike. 

Society has reorganized itself around this spectacular content. It is a jubilee. There is nothing more pleasurable than settling into one’s entertainment sling. The body tenses and releases. The mind secretes exquisite liquors. AI systems produce this material without any need for writers or performers. Every work is customized—optimized for your individual preferences, predisposition, IQ, and kinks. This rock and roll, this cartoon, this semi-pornographic espionage thriller—each is a perfect ambrosia, produced by fleshless code. The artist may at last—like the iceman, the washerwoman—lower their tools. Set down your guitar, your paints, your pen—relax! (Listen for the sighs of relief.)

Tragically, there are many who still cannot afford it. Processing power isn’t free, even in 2149. Activists and policy engines strive to mend this inequality: a “right to entertainment” has been proposed. In the meantime, billions simply aspire. They loan their minds and bodies to interminable projects. They save their pennies, they work themselves hollow, they rent slings by the hour. 

And then some of them do the most extraordinary thing: They forgo such pleasures, denying themselves even the slightest taste. They devote themselves to scrimping and saving for the sake of their descendants. Such a selfless act, such a generous gift. Imagine yielding one’s own entertainment to the generation to follow. What could be more lofty—what could be more modern? These bold souls who look toward the future and cultivate the wild hope that their children, at least, will not be obliged to imagine their own stories. 

Sean Michaels is a critic and fiction writer whose most recent novel is Do You Remember Being Born?

Canada’s 2023 wildfires produced more emissions than fossil fuels in most countries

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Last year’s Canadian wildfires smashed records, burning about seven times more land in Canada’s forests than the annual average over the previous four decades. Eight firefighters were killed and 180,000 people displaced. 

Now a new study reveals how these blazes can create a vicious cycle, contributing to climate change even as climate-fueled conditions make for worse wildfire seasons. Emissions from 2023’s Canadian wildfires reached 647 million metric tons of carbon, according to the study published today in Nature. If the fires were a country, they’d rank as the fourth-highest emitter, behind only China, the US, and India. The sky-high emissions from the fires reveal how human activities are pushing natural ecosystems to a point that makes our climate efforts even harder.

“The fact that this was happening over large parts of Canada and went on all summer was really a crazy thing to see,” says Brendan Byrne, a scientist at the NASA Jet Propulsion Laboratory and the lead author of the study.

Digging back into the climate record makes it clear how last year’s conditions contributed to an unusually brutal fire season, Byrne says; 2023 was especially warm and especially dry, both of which allow fires to spread more quickly and burn more intensely.

A few regions stood out in the blazes, like parts of Quebec, a typically wet area in eastern Canada that saw half its normal precipitation. These were the fires generating the smoke that floated down the east coast of the US. But overall, what was so significant about the 2023 fire season was just how widespread the fire-promoting conditions were, Byrne says.

While climate change doesn’t directly spark any one fire, researchers have traced hot, dry conditions that worsen fires to the effects of human-caused climate change. The extreme fire conditions in eastern Canada were over twice as likely because of climate change, according to a 2023 analysis by World Weather Attribution.

And in turn, the fires are releasing massive amounts of greenhouse gases into the atmosphere. By combining satellite images of the burned areas with measurements of some of the gases emitted, Byrne and his team were able to tally up the total carbon released into the atmosphere with more accuracy than estimates that rely on the images alone, he says.

In total, the fires contributed at least four times more carbon to the atmosphere than all fossil-fuel emissions in Canada last year.

Fires are part of natural, healthy ecosystems, and burns on their own don’t necessarily represent a disaster for climate change. After a typical fire season, a forest begins to regrow, capturing carbon dioxide from the atmosphere as it does so. This continues a cycle in which carbon moves around the planet.

The problem comes if and when that cycle gets thrown off—for instance, if fires are too intense and too widespread for too many years. And there’s reason to be nervous about future fire seasons. While 2023’s conditions were unusual compared with the historical record, climate modeling reveals they could be normal by the 2050s.

“I think it’s very likely that we’re going to see more fires in Canada,” Byrne tells me. “But we don’t really understand how that’s going to impact carbon budgets.”

What Byrne means by a carbon budget is the quantity of greenhouse gases we can emit into the atmosphere before we shoot past our climate goals. We have something like seven years left at current emissions levels before we’re more likely than not to pass 1.5 °C of warming over preindustrial levels, according to the 2023 Global Carbon Budget report.

It was already clear that we need to stop emissions from power plants, vehicles, and a huge range of other clearly human activities to address climate change. Last year’s wildfires should increase the urgency of that action, because pushing natural ecosystems beyond what they can handle will only add to the challenge going forward. 


Now read the rest of The Spark

Related reading

This company wants to use balloons to better understand the conditions on the ground before wildfires start in Colorado, as Sarah Scoles covered in a story earlier this summer.

Canada isn’t the only country to see unusual fires in recent years. My colleague James Temple covered Australia’s intense 2019–2020 wildfire season.

Another thing

Want to try out solar geoengineering? A new AI tool allows you to do just that—sort of. 

Andrew Ng has released an online program that simulates what might happen under different emissions scenarios if technologies that can block out some sunlight are used in an effort to slow warming. Read the story here and give the simulator a try. 

Keeping up with climate  

Scientists want to genetically engineer cows’ microbiomes to cut down on methane emissions. The animals’ digestive systems rely on archaea that emit the powerful greenhouse gas. Tweaking them could be a major help in cutting climate pollution from agriculture. (Washington Post)

Some big tech companies are using tricky math that can obscure the true emissions from rising electricity use, in part due to AI. Buying renewable energy credits can make a company’s energy use look better on paper, but the practice has some problems. (Bloomberg)

→ How companies reach their emissions goals can be more important than how quickly they do so. (MIT Technology Review)

The midwestern US is dealing with hot weather and high humidity, in part because of something called corn sweat. Crops naturally release water into the air when it’s warm, causing higher humidity. (Scientific American)

Hydrogen can provide an alternative to fossil fuels, but it likely won’t have universally positive effects in every industry. Hydrogen will be most useful in sectors like chemical production and least so in buildings and light-duty vehicles, according to a new report. (Latitude Media)

→ Here’s why hydrogen vehicles are losing the race to power cleaner cars. (MIT Technology Review)

Batteries are far outpacing natural gas in new additions to the US grid. In the first half of 2023, 96% of such additions were from renewable sources, batteries, or nuclear power. (Wired)

Tesla agreed to open its Supercharger network to vehicles from other automakers last year, but the plan has been plagued by delays. Drivers should be able to access the network next year, but so far only two companies have gotten past the first step of updating the software needed. (New York Times)

Sage Geosystems, a company using geothermal technology to generate and store energy, announced it has an agreement to supply 150 megawatts of power to Meta. (Canary Media)

Coal powers about 63% of China’s electric grid today, and the country is the world’s largest consumer of the fuel. But progress with technologies like hydropower and nuclear suggests the country could shift to lower-emissions energy sources. (Heatmap)

Maybe you will be able to live past 122

The UK’s Office for National Statistics has an online life expectancy calculator. Enter your age and sex, and the website will, using national averages, spit out the age at which you can expect to pop your clogs. For me, that figure comes out at 88.

That’s not too bad, I figure, given that globally, life expectancy is around 73. But I’m also aware that this is a lowball figure for many in the longevity movement, which has surged in recent years. When I interview a scientist, doctor, or investor in the field, I always like to ask about personal goals. I’ve heard all sorts. Some have told me they want an extra decade of healthy life. Many want to get to 120, close to the current known limit of human age. Others have told me they want to stick around until they’re 200. And some have told me they don’t want to put a number on it; they just want to live for as long as they possibly can—potentially indefinitely.

How far can they go? This is a good time to ask the question. The longevity scene is having a moment, thanks to a combination of scientific advances, public interest, and an unprecedented level of investment. A few key areas of research suggest that we might be able to push human life spans further, and potentially reverse at least some signs of aging.

Take, for example, the concept of cellular reprogramming. Nobel Prize–winning research has shown it is possible to return adult cells to a “younger” state more like that of a stem cell. Billions of dollars have been poured into trying to transform this discovery into a therapy that could wind back the age of a person’s cells and tissues, potentially restoring some elements of youth.

Many other avenues are being explored, including a diabetes drug that could have broad health benefits; drugs based on rapamycin, a potential anti-aging compound discovered in the soil of Rapa Nui (Easter Island); attempts to rejuvenate the immune system; gene therapies designed to boost muscle or extend the number of times our cells can divide; and many, many more.

Other researchers are pursuing ways to clear out the aged, worn-out cells in our bodies. These senescent cells appear to pump out chemicals that harm the surrounding tissues. Around eight years ago, scientists found that mice cleared of senescent cells lived 25% longer than untreated ones. They also had healthier hearts and took much longer to develop age-related diseases like cancer and cataracts. They even looked younger.

Unfortunately, human trials of senolytics—drugs that target senescent cells—haven’t been quite as successful. Unity Biotechnology, a company cofounded by leading researchers in the field, tested such a drug in people with osteoarthritis. In 2020, the company officially abandoned that drug after it was found to be no better than a placebo in treating the condition.

That doesn’t mean we won’t one day figure out how to treat age-related diseases, or even aging itself, by targeting senescent cells. But it does illustrate how complicated the biology of aging is. Researchers can’t even agree on what the exact mechanisms of aging are and which they should be targeting. Debates continue to rage over how long it’s possible for humans to live—and whether there is a limit at all.

Still, we are getting better at testing potential therapies in more humanlike models. We’re finding new and improved ways to measure the aging process itself. The X Prize is offering $101 million to researchers who find a way to restore at least 10 years of “muscle, cognitive, and immune function” in 65- to 80-year-olds with a treatment that takes one year or less to administer. Given that the competition runs for seven years, it’s a tall order; Jamie Justice, executive director of the X Prize’s health-span domain, told me she initially pushed back on the challenging goal and told the organization’s founder, Peter Diamandis, there was “no way” researchers could achieve it. But we’ve seen stranger things in science.

Some people are banking on this kind of progress. Not just the billionaires who have already spent millions of dollars and a significant chunk of their time on strategies that might help them defy aging, but also the people who have opted for cryopreservation. There are hundreds of bodies in storage—bodies of people who believed they might one day be reanimated. For them, the hopes are slim. I asked Justice whether she thought they stood a chance at a second life. “Honest answer?” she said. “No.”

It looks likely that something will be developed in the coming decades that will help us live longer, in better health. Not an elixir for eternal life, but perhaps something—or a few somethings—that can help us stave off some of the age-related diseases that tend to kill a lot of us. Such therapies may well push life expectancy up. I don’t feel we need a massive increase, but perhaps I’ll feel differently when I’m approaching 88.

The ONS website gives me a one in four chance of making it to 96, and a one in 10 chance of seeing my 100th birthday. To me, that sounds like an impressive number—as long as I get there in semi-decent health.

I’d still be a long way from the current record of 122 years. But it may be that there are some limits we must simply come to terms with—as individuals and in society at large. In a 2017 paper making the case for a limit to the human life span, scientists Jan Vijg and Eric Le Bourg wrote something that has stuck with me—and is worth bearing in mind when considering the future of human longevity: “A species does not need to live for eternity to thrive.”