The author who listens to the sound of the cosmos

In 1983, while on a field recording assignment in Kenya, the musician and soundscape ecologist Bernie Krause noticed something remarkable. Lying in his tent late one night, listening to the calls of hyenas, tree frogs, elephants, and insects in the surrounding old-growth forest, Krause heard what seemed to be a kind of collective orchestra. Rather than a chaotic cacophony of nighttime noises, it was as if each animal was singing within a defined acoustic bandwidth, like living instruments in a larger sylvan ensemble. 

Unsure of whether this structured musicality was real or the invention of an exhausted mind, Krause analyzed his soundscape recordings on a spectrogram when he returned home. Sure enough, the insects occupied one frequency niche, the frogs another, and the mammals a completely separate one. Each group had claimed a unique part of the larger sonic spectrum, a fact that not only made communication easier, Krause surmised, but also helped convey important information about the health and history of the ecosystem.

A Book of Noises:
Notes on the Auraculous

Caspar Henderson
UNIVERSITY OF CHICAGO PRESS, 2024

Krause describes his “niche hypothesis” in the 2012 book The Great Animal Orchestra, dubbing these symphonic soundscapes the “biophony”—his term for all the sounds generated by nonhuman organisms in a specific biome. Along with his colleague Stuart Gage from Michigan State University, he also coins two more terms—“anthropophony” and “geophony”—to describe sounds associated with humanity (think music, language, traffic jams, jetliners) and those originating from Earth’s natural processes (wind, waves, volcanoes, and thunder).

In A Book of Noises: Notes on the Auraculous, the Oxford-based writer and journalist Caspar Henderson makes an addition to Krause’s soundscape triumvirate: the “cosmophony,” or the sounds of the cosmos. Together, these four categories serve as the basis for a brief but fascinating tour through the nature of sound and music with 48 stops (in the form of short essays) that explore everything from human earworms to whale earwax.

We start, appropriately enough, with a bang. Sound, Henderson explains, is a pressure wave in a medium. The denser the medium, the faster it travels. For hundreds of thousands of years after the Big Bang, the universe was so dense that it trapped light but allowed sound to pass through it freely. As the primordial plasma of this infant universe cooled and expansion continued, matter collected along the ripples of these cosmic waves, which eventually became the loci for galaxies like our own. “The universe we see today is an echo of those early years,” Henderson writes, “and the waves help us measure [its] size.” 

The Big Bang may seem like a logical place to start a journey into sound, but cosmophony is actually an odd category to invent for a book about noise. After all, there’s not much of it in the vacuum of space. Henderson gets around this by keeping the section short and focusing more on how humans have historically thought about sound in the heavens. For example, there are two separate essays on our multicentury obsession with “the music of the spheres,” the idea that there exists a kind of ethereal harmony produced by the movements of heavenly objects.

Since matter matters when it comes to sound—there can be none of the latter without the former—we also get an otherworldly examination of what human voices would sound like on different terrestrial and gas planets in our solar system, as well as some creative efforts from musicians and scientists who have transmuted visual data from space into music and other forms of audio. These are fun and interesting forays, but it isn’t until the end of the equally short “Sounds of Earth” (geophony) section that readers start to get a sense of the “auraculousness”—ear-related wonder—Henderson references in the subtitle.

Judging by the quantity and variety of entries in the “biophony” and “anthropophony” sections, you get the impression Henderson himself might be more attuned to these particular wonders as well. You really can’t blame him. 

The sheer number of fascinating ways that sound is employed across the human and nonhuman animal kingdom is mind-boggling, and it’s in these final two sections of the book that Henderson’s prose and curatorial prowess really start to shine—or should I say sing.

We learn, for example, about female frogs that have devised their own biological noise-canceling system to tune out the male croaks of other species; crickets that amplify their chirps by “chewing a hole in a leaf, sticking their heads through it, and using it as a megaphone”; elephants that listen and communicate with each other seismically; plants that react to the buzz of bees by increasing the concentration of sugar in their flowers’ nectar; and moths with tiny bumps on their exoskeletons that jam the high-frequency echolocation pulses bats use to hunt them. 

Henderson has a knack for crisp characterization (“Singing came from winging”) and vivid, playful descriptions (“Through [the cochlea], the booming and buzzing confusion of the world, all its voices and music, passes into the three pounds of wobbly blancmange inside the nutshell numbskulls that are our kingdoms of infinite space”). He also excels at injecting a sense of wonder into aspects of sound that many of us take for granted. 

In an essay about sound’s power to heal, he marvels at ultrasound’s twin uses as a medical treatment and a method of examination. In addition to its kidney-stone-blasting and tumor-ablating powers, sound, Henderson says, can also be a literal window into our bodies. “It is, truly, an astonishing thing that our first glimpse of the greatest wonder and trial of our lives, parenthood, comes in the form of a fuzzy black and white smudge made from sound.”

While you can certainly quibble with some of the topical choices and their treatment in A Book of Noises, what you can’t argue with is the clear sense of awe that permeates almost every page. It’s an infectious and edifying kind of energy. So much so that by the time Henderson wraps up the book’s final essay, on silence, all you want to do is immerse yourself in more noise.

Singing in the key of sea

For the multiple generations who grew up watching Jacques-Yves Cousteau’s Academy Award–winning 1956 documentary film, The Silent World, his mischaracterization of the ocean as a place largely devoid of sound seems to have calcified into common knowledge. The science writer Amorina Kingdon offers a thorough and convincing rebuttal of this idea in her new book, Sing Like Fish: How Sound Rules Life Under Water.

Sing Like Fish: How Sound
Rules Life Under Water

Amorina Kingdon
CROWN, 2024

Beyond serving as a 247-page refutation of this unfortunate trope, Kingdon’s book aims to open our ears to all the marvels of underwater life by explaining how sound behaves in this watery underworld, why it’s so important to the animals that live there, and what we can learn when we start listening to them.

It turns out that sound is not just a great way to communicate and navigate underwater—it may be the best way. For one thing, it travels four and a half times faster in water than it does in air. It can also go farther (across entire seas, under the right conditions) and provide critical information about everything from who wants to eat you to who wants to mate with you.

To take advantage of the unique way sound propagates in the world’s oceans, fish rely on a variety of methods to “hear” what’s going on around them. These mechanisms range from so-called lateral lines—rows of tiny hair cells along the outside of their body that can sense small movements and vibrations in the water around them—to otoliths, dense lumps of calcium carbonate that form inside their inner ears. 

Because fish are more or less the same density as water, these denser otoliths move at a different amplitude and phase in response to vibrations passing through their body. The movement is then registered by patches of hair cells that line the chambers where otoliths are embedded, which turn the vibrations of sound into nerve impulses. The philosopher of science Peter Godfrey-Smith may have put it best: “It is not too much to say that a fish’s body is a giant pressure-sensitive ear.” 

While there are some minor topical overlaps with Henderson’s book—primarily around whale-related sound and communication—one of the more admirable attributes of Sing Like Fish is Kingdon’s willingness to focus on some of the oceans’ … let’s say, less charismatic noise-makers. We learn about herring (“the inveterate farters of the sea”), which use their flatuosity much as a fighter jet might use countermeasures to avoid an incoming missile. When these silvery fish detect the sound of a killer whale, they’ll fire off a barrage of toots, quickly decreasing both their bodily buoyancy and their vulnerability to the location-revealing clicks of the whale hunting them. “This strategic fart shifts them deeper and makes them less reflective to sound,” writes Kingdon.

Readers are also introduced to the plainfin midshipman, a West Coast fish with “a booming voice” and “a perpetual look of accusation.” In addition to having “a fishy case of resting bitch face,” the male midshipman also has a unique hum, which it uses to attract gravid females in the spring. That hum became the subject of various conspiracy theories in the mid-’80s, when houseboat owners in Sausalito, California, started complaining about a mysterious seasonal drone. Thanks to a hydrophone and a level-headed local aquarium director, the sound was eventually revealed to be not aliens or a secret government experiment, but simply a small, brownish-green fish looking for love.

Kingdon’s command of, and enthusiasm for, the science of underwater sound is uniformly impressive. But it’s her recounting of how and why we started listening to the oceans in the first place that’s arguably one of the book’s most fascinating topics. It’s a wide-ranging tale, one that spans “firearm-happy Victorian-era gentleman” and “whales that sounded suspiciously like Soviet submarines.” It’s also a powerful reminder of how war and military research can both spur and stifle scientific discovery in surprising ways.

The fact that Sing Like Fish ends up being both an exquisitely reported piece of journalism and a riveting exploration of a sense that tends to get short shrift only amplifies Kingdon’s ultimate message—that we all need to start paying more attention to the ways in which our own sounds are impinging on life underwater. As we’ve started listening more to the seas, what we’re increasingly hearing is ourselves, she writes: “Piercing sonar, thudding seismic air guns for geological imaging, bangs from pile drivers, buzzing motorboats, and shipping’s broadband growl. We make a lot of noise.”

That noise affects underwater communication, mating, migrating, and bonding in all sorts of subtle and obvious ways. And its impact is often made worse when combined with other threats, like climate change. The good news is that while noise can be a frustratingly hard thing to regulate, there are efforts underway to address our poor underwater aural etiquette. The International Maritime Organization is currently updating its ship noise guidelines for member nations. At the same time, the International Organization for Standardization is creating more guidelines for measuring underwater noise. 

“The ocean is not, and has never been, a silent place,” writes Kingdon. But to keep it filled with the right kinds of noise (i.e., the kinds that are useful to the creatures living there), we’ll have to recommit ourselves to doing two things that humans sometimes aren’t so great at: learning to listen and knowing when to shut up.   

Music to our ears (and minds)

We tend to do both (shut up and listen) when music is being played—at least if it’s the kind we like. And yet the nature of what the composer Edgard Varèse famously called “organized sound” largely remains a mystery to us. What exactly is music? What distinguishes it from other sounds? Why do we enjoy making it? Why do we prefer certain kinds? Why is it so effective at influencing our emotions and (often) our memories?  

In their recent book Every Brain Needs Music: The Neuroscience of Making and Listening to Music, Larry Sherman and Dennis Plies look inside our heads to try to find some answers to these vexing questions. Sherman is a professor of neuroscience at the Oregon Health and Science University, and Plies is a professional musician and teacher. Unfortunately, if the book reveals anything, it’s that limiting your exploration of music to one lens (neuroscience) also limits the insights you can gain into its nature. 

Every Brain Needs Music:
The Neuroscience of Making
and Listening to Music

Larry Sherman and Dennis Plies
COLUMBIA UNIVERSITY PRESS, 2023

That’s not to say that getting a better sense of how specific patterns of vibrating air molecules get translated into feelings of joy and happiness isn’t valuable. There are some genuinely interesting explanations of what happens in our brains when we play, listen to, and compose music—supported by some truly great watercolor-based illustrations by Susi Davis that help to clarify the text. But much of this gets bogged down in odd editorial choices (there are, for some reason, three chapters on practicing music) and conclusions that aren’t exactly earth-shattering (humans like music because it connects us).

Every Brain Needs Music purports to be for all readers, but unless you’re a musician who’s particularly interested in the brain and its inner workings, you’ll likely be far better served by A Book of Noises or by other, more in-depth explorations of the importance of music to humans, like Michael Spitzer’s The Musical Human: A History of Life on Earth.

“We have no earlids,” the late composer and naturalist R. Murray Schafer once observed. He also noted that despite this anatomical omission, we’ve become quite good at ignoring or tuning out large portions of the sonic world around us. Some of this tendency may be tied to our supposed preference for other sensory modalities. Most of us are taught from an early age that we are primarily visual creatures—that seeing is believing, that a picture is worth a thousand words. This idea is likely reinforced by a culture that also tends to focus primarily on the visual experience.

Yet while it may be true that we rely heavily on our eyes to make sense of the world, we do a profound disservice to ourselves and the rest of the natural world when we underestimate or downplay sound. Indeed, if there’s a common message that runs through all three of these books, it’s that attending to sound in all its forms isn’t just personally rewarding or edifying; it’s a part of what makes us fully human. As Bernie Krause discovered one night more than 40 years ago, once you start listening, it’s amazing what you can hear. 

Bryan Gardiner is a writer based in Oakland, California.

African farmers are using private satellite data to improve crop yields

Last year, as the harvest season drew closer, Olabokunde Tope came across an unpleasant surprise. 

While certain spots on his 70-hectare cassava farm in Ibadan, Nigeria, were thriving, a sizable parcel was pale and parched—the result of an early and unexpected halt in the rains. The cassava stems, starved of water, had withered to straw. 

“It was a really terrible experience for us,” Tope says, estimating the cost of the loss at more than 50 million naira ($32,000). “We were praying for a miracle to happen. But unfortunately, it was too late.”  

When the next planting season rolled around, Tope’s team weighed different ways to avoid another cycle of heavy losses. They decided to work with EOS Data Analytics, a California-based provider of satellite imagery and data for precision farming. The company uses wavelengths of light including the near-infrared, which penetrates plant canopies and can be used to measure a range of variables, including moisture level and chlorophyll content. 
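To give a rough sense of how reflectance in different bands becomes a crop-health signal, here is a minimal sketch of a standard vegetation index (NDVI) computed from red and near-infrared values. It illustrates the general principle only; the reflectance numbers are hypothetical, and EOS’s own models and algorithms are proprietary and not shown here.

```python
# Minimal sketch: the Normalized Difference Vegetation Index (NDVI), a common
# way to turn red and near-infrared reflectance into a crop-health signal.
# The reflectance values below are hypothetical.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red); higher values suggest denser, greener canopy."""
    return (nir - red) / (nir + red + 1e-9)  # small epsilon avoids division by zero

nir = np.array([0.45, 0.50, 0.20])  # near-infrared reflectance for three sample pixels
red = np.array([0.08, 0.10, 0.18])  # red reflectance for the same pixels
print(ndvi(nir, red))               # roughly [0.70, 0.67, 0.05]: two healthy pixels, one stressed
```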

EOS’s models and algorithms deliver insights on crops’ health weekly through an online platform that farmers can use to make informed decisions about issues such as when to plant, how much herbicide to use, and how to schedule fertilizer use, weeding, or irrigation. 

When EOS first launched in 2015, it relied largely on imagery from a combination of satellites, especially the European Union’s Sentinel-2. But Sentinel-2 has a maximum resolution of 10 meters, making it of limited use for spotting issues on smaller farms, says Yevhenii Marchenko, the company’s sales team lead.  

So last year the company launched EOS SAT-1, a satellite designed and operated solely for agriculture. Fees to use the crop-monitoring platform now start at $1.90 per hectare per year for small areas and drop as the farm gets larger. (Farmers who can afford it have adopted drones and other related technologies, but drones are significantly more expensive to maintain and scale, says Marchenko.)

In many developing countries, farming is impaired by lack of data. For centuries, farmers relied on native intelligence rooted in experience and hope, says Daramola John, a professor of agriculture and agricultural technology at Bells University of Technology in southwest Nigeria. “Africa is way behind in the race for modernizing farming,” he says. “And a lot of farmers suffer huge losses because of it.”

By the spring of 2023, when the new planting season was set to start, Tope’s company, Carmi Agro Foods, had used GPS-enabled software to map the boundaries of its farm and had completed its setup on the EOS crop-monitoring platform. Tope used the platform to determine the appropriate spacing for the stems and seeds, and the rigors and risks of manual monitoring disappeared. His field-monitoring officers needed only to peer at their phones to know where or when specific spots needed attention on various farms, and he was able to track weed breakouts quickly and efficiently.

This technology is gaining traction among farmers in other parts of Nigeria and the rest of Africa. More than 242,000 people in Africa, Southeast Asia, Latin America, the United States, and Europe use the EOS crop-monitoring platform. In 2023 alone, 53,000 more farmers subscribed to the service.

One of them is Adewale Adegoke, the CEO of Agro Xchange Technology Services, a company dedicated to boosting crop yields using technology and good agricultural practices. Adegoke used the platform on half a million hectares (around 1.25 million acres) owned by 63,000 farmers. He says the yield of maize farmers using the platform, for instance, grew to two tons per acre, at least twice the national average.  

Adegoke adds that local farmers, who have been struggling with fluctuating conditions as a result of climate change, have been especially drawn to the platform’s early warning system for weather. 

As harvest time draws nearer this year, Tope reports, the prospects of his cassava field, which now spans a thousand hectares, are quite promising. This is thanks in part to his ability to anticipate and counter sudden dry spells. He spaced the plantings better and then followed advisories on weeding, fertilizer use, and other issues related to the health of the crops.

“So far, the result has been convincing,” says Tope. “We are no longer subjecting the performance of our farms to chance. This time, we are in charge.”

Orji Sunday is a freelance journalist based in Lagos, Nigeria.

Inside the long quest to advance Chinese writing technology

Every second of every day, someone is typing in Chinese. In a park in Hong Kong, at a desk in Taiwan, in the checkout line at a Family Mart in Shanghai, the automatic doors chiming a song each time they open. Though the mechanics look a little different from typing in English or French—people usually type the pronunciation of a character and then pick it out of a selection that pops up, autocomplete-style—it’s hard to think of anything more quotidian. The software that allows this exists beneath the awareness of pretty much everyone who uses it. It’s just there.

The Chinese Computer: A Global History of the Information Age
Thomas S. Mullaney
MIT PRESS, 2024

What’s largely been forgotten—and what most people outside Asia never even knew in the first place—is that a large cast of eccentrics and linguists, engineers and polymaths, spent much of the 20th century torturing themselves over how Chinese was ever going to move away from the ink brush to any other medium. This process has been the subject of two books published in the last two years: Thomas Mullaney’s scholarly work The Chinese Computer and Jing Tsu’s more accessible Kingdom of Characters. Mullaney’s book focuses on the invention of various input systems for Chinese starting in the 1940s, while Tsu’s covers more than a century of efforts to standardize Chinese and transmit it using the telegraph, typewriter, and computer. But both reveal a story that’s tumultuous and chaotic—and just a little unsettling in the futility it reflects.   

Kingdom of Characters: The Language Revolution That Made China Modern
Jing Tsu
RIVERHEAD BOOKS, 2022

Chinese characters are not as cryptic as they sometimes appear. The general rule is that they stand for a word, or sometimes part of a word, and learning to read is a process of memorization. Along the way, it becomes easier to guess how a character should be spoken, because often phonetic elements are tucked in among other symbols. The characters were traditionally written by hand with a brush, and part of becoming literate involves memorizing the order in which the strokes are made. Put them in the wrong order and the character doesn’t look right. Or rather, as I found some years ago as a second-language learner in Guangzhou, China, it looks childish. (My husband, a translator of Chinese literature, found it hilarious and adorable that at the age of 30, I wrote like a kindergartner.)

The trouble, however, is that there are a lot of characters. One needs to know at least a few thousand to be considered basically literate, and there are thousands more beyond that basic set. Many modern learners of Chinese devote themselves essentially full-time to learning to read, at least in the beginning. More than a century ago, this was such a monumental task that leading thinkers worried it was impairing China’s ability to survive the attentions of more aggressive powers.

In the 19th century, a huge proportion of Chinese people were illiterate. They had little access to schooling. Many were subsistence farmers. China, despite its immense population and vast territory, was perpetually finding itself on the losing end of deals with nimbler, more industrialized nations. The Opium Wars, in the mid-19th century, had led to a situation where foreign powers effectively colonized Chinese soil. What advanced infrastructure there was had been built and was owned by foreigners.  

Some felt these things were connected. Wang Zhao, for one, was a reformer who believed that a simpler way to write spoken Chinese was essential to the survival of the nation. Wang’s idea was to use a set of phonetic symbols, representing one specific dialect of Chinese. If people could sound out words, having memorized just a handful of shapes the way speakers of languages using an alphabet did, they could become literate more quickly. With literacy, they could learn technical skills, study science, and help China get ownership of its future back. 

Wang believed in this goal so strongly that though he’d been thrown out of China in 1898, he returned two years later in disguise. After arriving by boat from Japan, he traveled over land on foot in the costume of a Buddhist monk. His story forms the first chapter of Jing Tsu’s book, and it is thick with drama, including a shouting match and brawl on the grounds of a former palace, during a meeting to decide which dialect a national version of such a system should represent. Wang’s system for learning Mandarin was used by schools in Beijing for a few years, but ultimately it did not survive the rise of competing systems and the period of chaos that swallowed China not long after the Qing Dynasty’s fall in 1911. Decades of disorder and uneasy truces gave way to Japan’s invasion of Manchuria in northern China in 1931. For a long time, basic survival was all most people had time for.

However, strange inventions soon began to turn up in China. Chinese students and scientists abroad had started to work on a typewriter for the language, which they felt was lagging behind others. Texts in English and other tongues using Roman characters could be printed swiftly and cheaply with keyboard-controlled machines that injected liquid metal into type molds, but Chinese texts required thousands upon thousands of bits of type to be placed in a manual printing press. And while English correspondence could be whacked out on a typewriter, Chinese correspondence was still, after all this time, written by hand.      

Of all the technologies Mullaney and Tsu describe, these baroque metal monsters stick most in the mind. Equipped with cylinders and wheels, with type arrayed in starbursts or in a massive tray, they are simultaneously writing machines and incarnations of philosophies about how to organize a language. Because Chinese characters don’t have an inherent order (no A-B-C-D-E-F-G) and because there are so many (if you just glance at 4,000 of them, you’re not likely to spot the one you need quickly), people tried to arrange these bits of type according to predictable rules. The first article ever published by Lin Yutang, who would go on to become one of China’s most prominent writers in English, described a system of ordering characters according to the number of strokes it took to form them. He eventually designed a Chinese typewriter that consumed his life and finances, a lovely thing that failed its demo in front of potential investors.

Chinese keyboard designers considered many interfaces, including tabletop-size devices that included 2,000 or more commonly used characters.
PUBLIC DOMAIN/COURTESY OF THOMAS S. MULLANEY

Technology often seems to demand new ways of engaging with the physical, and the Chinese typewriter was no exception. When I first saw a functioning example, at a private museum in a basement in Switzerland, I was entranced by the gliding arm and slender rails of the sheet-cake-size device, its tray full of characters. “Operating the machine was a full-body exercise,” Tsu writes of a very early typewriter from the late 1890s, designed by an American missionary. Its inventor expected that with time, muscle memory would take over, and the typist would move smoothly around the machine, picking out characters and depressing keys. 

However, though Chinese typewriters eventually got off the ground (the first commercial typewriter was available in the 1920s), a few decades later it became clear that the next challenge was getting Chinese characters into the computer age. And there was still the problem of how to get more people reading. Through the 1930s, ’40s, ’50s, and ’60s, systems for ordering and typing Chinese continued to occupy the minds of intellectuals; particularly odd and memorable is the story of the librarian at Sun Yat-sen University in Guangzhou, who in the 1930s came up with a system of light and dark glyphs like semaphore flags to stand for characters. Mullaney and Tsu both linger on the case of Zhi Bingyi, an engineer imprisoned in solitary confinement during the Cultural Revolution in the late 1960s, who was inspired by the characters of a slogan written on his cell wall to devise his own code for inputting characters into a computer.

The tools for literacy were advancing over the same period, thanks to government-mandated reforms introduced after the Communist Revolution in 1949. To assist in learning to read, everyone in mainland China would now be taught pinyin, a system that uses Roman letters to indicate how Chinese characters are pronounced. Meanwhile, thousands of characters would be replaced with simplified versions, with fewer strokes to learn. This is still how it’s done today in the mainland, though in Taiwan and Hong Kong, the characters are not simplified, and Taiwan uses a different pronunciation guide, one based on 37 phonetic symbols and five tone marks.

Myriad ideas were thrown at the problem of getting these characters into computers. Images of a graveyard of failed designs—256-key keyboards and the enormous cylinder of the Ideo-Matic Encoder, a keyboard with more than 4,000 options—are scattered poignantly through Mullaney’s pages. 

In Tsu’s telling, perhaps the most consequential link between this awkward period of dedicated hardware and today’s wicked-quick mobile-phone typing came in 1988, with an idea hatched by engineers in California. “Unicode was envisioned as a master converter,” she writes. “It would bring all human script systems, Western, Chinese, or otherwise, under one umbrella standard and assign each character a single, standardized code for communicating with any machine.” Once Chinese characters had Unicode codes, they could be manipulated by software like any other glyph, letter, or symbol. Today’s input systems allow users to call up and select characters using pinyin or stroke order, among other options.
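As a tiny, concrete illustration of what that standardization means in practice (the character below is just an example):

```python
# Every Chinese character has one standardized Unicode code point, so modern
# software can store and manipulate it like any other symbol.
ch = "中"                      # the character zhōng, "middle"
print(hex(ord(ch)))            # 0x4e2d: its Unicode code point, U+4E2D
print(ch.encode("utf-8"))      # b'\xe4\xb8\xad': the bytes actually stored and transmitted
```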

There is something curiously deflating, however, about the way both these books end. Mullaney’s careful documenting of the typing machines of the last century and Tsu’s collection of adventurous tales about language show the same thing: A simply unbelievable amount of time, energy, and cleverness was poured into making Chinese characters easier for both machines and the human mind to manipulate. But very few of these systems seem to have had any direct impact on the current solutions, like the pronunciation-led input systems that more than a billion people now use to type Chinese. 

This pattern of evolution isn’t unique to language. As the child of a futurist, I’ve seen firsthand that the path to where we are is littered with technological dead ends. The month after Google Glass, the glasses-borne computer, made headlines, my mother helped set up an exhibit of personal heads-up displays. In the obscurity of a warehouse space, ghostly white foam heads each bore a crown of metal, glass, and plastic, the attempts of various inventors to put a screen in front of our eyes. Augmented reality seemed as if it might finally be arriving in the hands of the people—or, rather, on their faces. 

That version of the future did not materialize, and if augmented-reality viewing ever does become part of everyday life, it won’t be through those objects. When historians write about these devices, in books like these, I don’t think they will be able to trace a chain of unbroken thought, a single arc from idea to fruition.

A charming moment, late in Mullaney’s book, speaks to this. He has been slipping letters in the mailboxes of people he’s found listed as inventors of input methods in the Chinese patent database, and now he’s meeting one such inventor, an elderly man, and his granddaughter in a Beijing Starbucks. The old fellow is pleased to talk about his approach, which involves the graphical shapes of Chinese characters. But his granddaughter drops a bomb on Mullaney when she leans in and whispers, “I think my input system is a bit easier to use.” It turns out both she and her father have built systems of their own. 

The story’s not over, in other words.    

People tinker with technology and systems of thought like those detailed in these two books not just because they have to, but because they want to. And though it’s human nature to want to make a trajectory out of what lies behind us so that the present becomes a grand culmination, what these books detail are episodes in the life of a language. There is no beginning, no middle, no satisfying end. There is only evolution—an endless unfurling of something always in the process of becoming a fuller version of itself. 

Veronique Greenwood is a science writer and essayist based in England. Her work has appeared in the New York Times, the Atlantic, and many other publications.

Move over, text: Video is the new medium of our lives

The other day I idly opened TikTok to find a video of a young woman refinishing an old hollow-bodied electric guitar.

It was a montage of close-up shots—looking over her shoulder as she sanded and scraped the wood, peeled away the frets, expertly patched the cracks with filler, and then spray-painted it a radiant purple. She compressed days of work into a tight 30-second clip. It was mesmerizing.

Of course, that wasn’t the only video I saw that day. In barely another five minutes of swiping around, I saw a historian discussing the songs Tolkien wrote in The Lord of the Rings; a sailor puzzling over a capsized boat he’d found deep at sea; a tearful mother talking about parenting a child with ADHD; a Latino man laconically describing a dustup with his racist neighbor; and a linguist discussing how Gen Z uses video-game metaphors in everyday life.

I could go on. I will! And so, probably, will you. This is what the internet looks like now. It used to be a preserve of text and photos—but increasingly, it is a forest of video.

This is one of the most profound technology shifts that will define our future: We are entering the age of the moving image.

For centuries, when everyday people had to communicate at a distance, they really had only two options. They could write something down; they could send a picture. The moving image was too expensive to shoot, edit, and disseminate. Only pros could wield it.

The smartphone, the internet, and social networks like TikTok have rapidly and utterly transformed this situation. It’s now common, when someone wants to hurl an idea into the world, not to pull out a keyboard and type but to turn on a camera and talk. For many young people, video might be the prime way to express ideas.

As media thinkers like Marshall McLuhan have intoned, a new medium changes us. It changes the way we learn, the way we think—and what we think about. When mass printing emerged, it helped create a culture of news, mass literacy, and bureaucracy, and—some argue—the very idea of scientific evidence. So how will mass video shift our culture?

For starters, I’d argue, it is helping us share knowledge that used to be damnably hard to capture in text. I’m a long-distance cyclist, for example, and if I need to fix my bike, I don’t bother reading a guide. I look for a video explainer. If you’re looking to express—or absorb—knowledge that’s visual, physical, or proprioceptive, the moving image nearly always wins. Athletes don’t read a textual description of what they did wrong in the last game; they watch the clips. Hence the wild popularity, on video platforms, of instructional video—makeup tutorials, cooking demonstrations. (Or even learn-to-code material: I learned Python by watching coders do it.)

Video also is no longer about mere broadcast, but about conversation—it’s a way to respond to others, notes Raven Maragh-Lloyd, the author of Black Networked Resistance and a professor of film and media studies at Washington University. “We’re seeing a rise of audience participation,” she notes, including people doing “duets” on TikTok or response videos on YouTube. Everyday creators see video platforms as ways to talk back to power.

There’s also an increasingly sophisticated lexicon of visual styles. Today’s video creators riff on older film aesthetics to make their points. Brianna Wiens, an assistant professor of digital media and rhetoric at the University of Waterloo, says she admired how a neuroscientist used stop-motion video, a technique from the early days of film, to produce TikTok discussions of vaccines during the height of the covid-19 pandemic. Or consider the animated GIF, which channels the “zoetrope” of the 1800s, looping a short moment in time to examine over and over.

Indeed, as video becomes more woven into the vernacular of daily life, it’s both expanding and contracting in size. There are streams on Twitch where you can watch someone for hours—and viral videos where someone compresses an idea into mere seconds. Those latter ones have a particular rhetorical power because they’re so ingestible. “I was teaching a class called Digital Lives, and my students were like, ‘If there’s a video over seven seconds, we’re not watching it,’” Wiens says, laughing.

Are there dangers ahead as use of the moving image grows? Possibly. Maybe it will too powerfully reward people with the right visual and physical charisma. (Not necessarily a novel danger: Text and radio had their own versions.) More subtly, video is technologically still adolescent. It’s not yet easy to search, or to clip and paste and annotate and collate—to use video for quietly organizing our thoughts, the way we do with text. Until those tool sets emerge (and you can see that beginning), its power will be limited. Lastly, maybe the moving image will become so common and so go-to that it’ll kill off print culture.

Media scholars are not terribly stressed about this final danger. New forms of media rarely kill off older ones. Indeed, as the late priest and scholar Walter Ong pointed out, creating television and radio requires writing plenty of text—all those scripts. Today’s moving-media culture is possibly even more saturated with writing. Videos on Instagram and TikTok often include artfully arranged captions, “diegetic” text commenting on the action, or data visualizations. You read while you watch; write while you shoot.

“We’re getting into all kinds of interesting hybrids and relationships,” notes Lev Manovich, a professor at the City University of New York. The tool sets for sculpting and editing video will undoubtedly improve too, perhaps using AI to help auto-edit, redact, summarize. 

One firm, Reduct, already offers a clever trick: You alter a video by editing the transcript. Snip out a sentence, and it snips out the related visuals. Public defenders use it to parse and edit police videos. They’re often knee-deep in the stuff—the advent of body cameras worn by officers has produced an ocean of footage, as Reduct’s CEO, Robert Ochshorn, tells me. 
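The underlying idea is simple enough to sketch. The code below is a generic illustration of transcript-driven cutting, not Reduct’s actual software: each transcribed word carries timestamps, so deleting words tells the editor which spans of video to keep.

```python
# Generic sketch of transcript-driven video editing (hypothetical data, not Reduct's API):
# each word has start/end times, and deleting words yields the video spans to keep.
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float  # seconds
    end: float

transcript = [Word("The", 0.0, 0.2), Word("suspect", 0.2, 0.7),
              Word("then", 0.7, 0.9), Word("fled", 0.9, 1.3)]

def keep_segments(words, deleted):
    """Return (start, end) spans of video to keep after deleting the given word indices."""
    segments = []
    for i, w in enumerate(words):
        if i in deleted:
            continue
        if segments and abs(segments[-1][1] - w.start) < 1e-6:
            segments[-1] = (segments[-1][0], w.end)  # extend a contiguous span
        else:
            segments.append((w.start, w.end))
    return segments

print(keep_segments(transcript, deleted={2}))  # drop "then" -> [(0.0, 0.7), (0.9, 1.3)]
```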

Meanwhile, generative AI will make it easier to create a film out of pure imagination. This means, of course, that we’ll see a new flood of visual misinformation. We’ll need to develop a sharper culture of finding the useful amid the garbage. It took print a couple of centuries to do that, as scholars of the book will tell you—centuries during which the printing press helped spark untold war and upheaval. We’ll be living through the same process with the moving image.

So strap yourselves in. Whatever else happens, it’ll be interesting. 

Clive Thompson is the author of Coders: The Making of a New Tribe and the Remaking of the World.

The rise of the data platform for hybrid cloud

Whether they are pursuing digital transformation, exploring the potential of AI, or simply looking to simplify and optimize existing IT infrastructure, today’s organizations must do so in the context of increasingly complex multi-cloud environments. These complicated architectures are here to stay—2023 research by Enterprise Strategy Group, for example, found that 87% of organizations expect their applications to be distributed across still more locations in the next two years.

Scott Sinclair, practice director at Enterprise Strategy Group, outlines the problem: “Data is becoming more distributed. Apps are becoming more distributed. The typical organization has multiple data centers, multiple cloud providers, and umpteen edge locations. Data is all over the place and continues to be created at a very rapid rate.”

Finding a way to unify this disparate data is essential. In doing so, organizations must balance the explosive growth of enterprise data; the need for an on-premises, cloud-like consumption model to mitigate cyberattack risks; and continual pressure to cut costs and improve performance.

Sinclair summarizes: “What you want is something that can sit on top of this distributed data ecosystem and present something that is intuitive and consistent that I can use to leverage the data in the most impactful way, the most beneficial way to my business.”

For many, the solution is an overarching software-defined, virtualized data platform that delivers a common data plane and control plane across hybrid cloud environments. Ian Clatworthy, head of data platform product marketing at Hitachi Vantara, describes a data platform as “an integrated set of technologies that meets an organization’s data needs, enabling storage and delivery of data, the governance of data, and the security of data for a business.”

Gartner projects that these consolidated data storage platforms will constitute 70% of file and object storage by 2028, doubling from 35% in 2023. The research firm underscores that “Infrastructure and operations leaders must prioritize storage platforms to stay ahead of business demands.”

A transitional moment for enterprise data

Historically, organizations have stored their various types of data—file, block, object—in separate silos. Why change now? Because two main drivers are rendering traditional data storage schemes inadequate for today’s business needs: digital transformation and AI.

As digital transformation initiatives accelerate, organizations are discovering that having distinct storage solutions for each workload is inadequate for their escalating data volumes and changing business landscapes. The complexity of the modern data estate hinders many efforts toward change.

Clatworthy says that when organizations move to hybrid cloud environments, they may find, for example, that they have mainframe or data center data stored in one silo, block storage running on an appliance, apps running file storage, another silo for public cloud, and a separate VMware stack. The result is increased complexity and cost in their IT infrastructure, as well as reduced flexibility and efficiency.

Then, Clatworthy adds, “When we get to the world of generative AI that’s bubbling around the edges, and we’re going to have this mass explosion of data, we need to simplify how that data is managed so that applications can consume it. That’s where a platform comes in.”

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Advancing to adaptive cloud

For many years now, cloud solutions have helped organizations streamline their operations, increase their scalability, and reduce costs. Yet, enterprise cloud investment has been fragmented, often lacking a coherent organization-wide approach. In fact, it’s not uncommon for various teams across an organization to have spun up their own cloud projects, adopting a wide variety of cloud strategies and providers, from public and hybrid to multi-cloud and edge computing.

The problem with this approach is that it often leads to “a sprawling set of systems and disparate teams working on these cloud systems, making it difficult to keep up with the pace of innovation,” says Bernardo Caldas, corporate vice president of Azure Edge product management at Microsoft. In addition to being an IT headache, a fragmented cloud environment leads to technological and organizational repercussions.

A complex multi-cloud deployment can make it difficult for IT teams to perform mission-critical tasks, such as applying security patches, meeting regulatory requirements, managing costs, and accessing data for data analytics. Configuring and securing these types of environments is a challenging and time-consuming task. And ad hoc cloud deployments often culminate in systems incompatibility when one-off pilots are ready to scale or be combined with existing products.

Without a common IT operations and application development platform, teams can’t share lessons learned or pool important resources, which tends to cause them to become increasingly siloed. “People want to do more with their data, but if their data is trapped and isolated in these different systems, it can make it really hard to tap into the data for insights and to accelerate progress,” says Caldas.

As the pace of change accelerates, however, many organizations are adopting a new adaptive cloud approach—one that will enable them to respond quickly to evolving consumer demands and market fluctuations while simplifying the management of their complex cloud environments.

An adaptive strategy for success

Heralding a departure from yesteryear’s fragmented cloud environments, an adaptive cloud approach unites sprawling systems, disparate silos, and distributed sites into a single operations, development, security, application, and data model. This unified approach empowers organizations to glean value from cloud-native technologies, open source software such as Linux, and AI across hybrid, multi-cloud, edge, and IoT.

“You’ve got a lot of legacy software out there, and for the most part, you don’t want to change production environments,” says David Harmon, director of software engineering at AMD. “Nobody wants to change code. So while CTOs and developers really want to take advantage of all the hardware changes, they want to do nothing to their code base if possible, because that change is very, very expensive.”

An adaptive cloud approach answers this challenge by taking an agnostic approach to the environments it brings together on a single control plane. By seamlessly connecting disparate computing environments, including those that run outside of hyperscale data centers, the control plane creates greater visibility across thousands of assets, simplifies security enforcement, and allows for easier management.

An adaptive cloud approach enables unified management of disparate systems and resources, leading to improved oversight and control. It also supports scalability, allowing organizations to meet the fluctuating demands of the business without the risk of over-provisioning or under-provisioning resources.

There are also clear business advantages to embracing an adaptive cloud approach. Consider, for example, an operational technology team that deploys an automation system to accelerate a factory’s production capabilities. In a fragmented and distributed environment, systems often struggle to communicate. But in an adaptive cloud environment, a factory’s automation system can easily be connected to the organization’s customer relationship management system, providing sales teams with real-time insights into supply-demand fluctuations.

A united platform is not only capable of bringing together disparate systems but also of connecting employees from across functions, from sales to engineering. By sharing an interconnected web of cloud-native tools, a workforce’s collective skills and knowledge can be applied to initiatives across the organization—a valuable asset in today’s resource-strapped and talent-scarce business climate.

Using cloud-native technologies like Kubernetes and microservices can also expedite the development of applications across various environments, regardless of an application’s purpose. For example, IT teams can scale applications from massive cloud platforms to on-site production without complex rewrites. Together, these capabilities “propel innovation, simplify complexity, and enhance the ability to respond to business opportunities,” says Caldas.

The AI equation

From automating mundane processes to optimizing operations, AI is revolutionizing the way businesses work. In fact, the market for AI reached $184 billion in 2024—a staggering increase from nearly $50 billion in 2023, and it is expected to surpass $826 billion in 2030.

But AI applications and models require high-quality data to generate high-quality outputs. That’s a challenging feat when data sets are trapped in silos across distributed environments. Fortunately, an adaptive cloud approach can provide a unified data platform for AI initiatives.

“An adaptive cloud approach consolidates data from various locations in a way that’s more useful for companies and creates a robust foundation for AI applications,” says Caldas. “It creates a unified data platform that ensures that companies’ AI tools have access to high-quality data to make decisions.”

Another benefit of an adaptive cloud approach is the ability to tap into the capabilities of innovative tools such as Microsoft Copilot in Azure. Copilot in Azure is an AI companion that simplifies how IT teams operate and troubleshoot apps and infrastructure. By leveraging large language models to interact with an organization’s data, Copilot allows for deeper exploration and intelligent assessment of systems within a unified management framework.

Imagine, for example, the task of troubleshooting the root cause of a system anomaly. Typically, IT teams must sift through thousands of logs, exchange a series of emails with colleagues, and read documentation for answers. Copilot in Azure, however, can cut through this complexity by easing the detection of unanticipated system changes while, at the same time, providing recommendations for speedy resolution.

“Organizations can now interact with systems using chat capabilities, ask questions about environments, and gain real insights into what’s happening across the heterogeneous environments,” says Caldas.

An adaptive approach for the technology future

Today’s technology environments are only increasing in complexity. More systems, more data, more applications—together, they form a massive, sprawling infrastructure. But responding proactively to change, be it in market trends or customer needs, requires greater agility and integration across the organization. The answer: an adaptive approach. A unified platform for IT operations and management, applications, data, and security can consolidate the disparate parts of a fragmented environment in ways that not only ease IT management and application development but also deliver key business benefits, from faster time to market to AI efficiencies, at a time when organizations must move swiftly to succeed.

Microsoft Azure and AMD meet you where you are on your cloud journey. Learn more about an adaptive cloud approach with Azure.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

PsiQuantum plans to build the biggest quantum computing facility in the US

The quantum computing firm PsiQuantum is partnering with the state of Illinois to build the largest US-based quantum computing facility, the company announced today. 

The firm, which has headquarters in California, says it aims to house a quantum computer containing up to 1 million quantum bits, or qubits, within the next 10 years. At the moment, the largest quantum computers have around 1,000 qubits. 

Quantum computers promise to do a wide range of tasks, from drug discovery to cryptography, at record-breaking speeds. Companies are using different approaches to build the systems and working hard to scale them up. Both Google and IBM, for example, make the qubits out of superconducting material. IonQ makes qubits by trapping ions using electromagnetic fields. PsiQuantum is building qubits from photons.  

A major benefit of photonic quantum computing is the ability to operate at higher temperatures than superconducting systems. “Photons don’t feel heat and they don’t feel electromagnetic interference,” says Pete Shadbolt, PsiQuantum’s cofounder and chief scientific officer. This imperturbability makes the technology easier and cheaper to test in the lab, Shadbolt says. 

It also reduces the cooling requirements, which should make the technology more energy efficient and easier to scale up. PsiQuantum’s computer can’t be operated at room temperature, because it needs superconducting detectors to locate photons and perform error correction. But those sensors only need to be cooled to a few kelvins, or a little under -450 °F. While that’s an icy temperature, it is still easier to achieve than the far colder, millikelvin-level cooling that superconducting qubit systems demand.

The company has opted not to build small-scale quantum computers (such as IBM’s Condor, which uses a little over 1,100 qubits). Instead it is aiming to manufacture and test what it calls “intermediate systems.” These include chips, cabinets, and superconducting photon detectors. PsiQuantum says it is targeting these larger-scale systems in part because smaller devices are unable to adequately correct errors and operate at a realistic price point.  

Getting smaller-scale systems to do useful work has been an area of active research. But “just in the last few years, we’ve seen people waking up to the fact that small systems are not going to be useful,” says Shadbolt. In order to adequately correct the inevitable errors, he says, “you have to build a big system with about a million qubits.” The approach conserves resources, he says, because the company doesn’t spend time piecing together smaller systems. But skipping over them makes PsiQuantum’s technology difficult to compare to what’s already on the market. 
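For a rough sense of why the figure is about a million, the arithmetic below uses a commonly cited ballpark for error-correction overhead; the overhead number is an assumption for illustration, not a figure from PsiQuantum.

```python
# Back-of-envelope sketch: fault-tolerant schemes such as the surface code are
# often estimated to need on the order of 1,000 physical qubits for every
# error-corrected logical qubit (assumed ballpark, not PsiQuantum's own figure).
physical_qubits = 1_000_000
physical_per_logical = 1_000
logical_qubits = physical_qubits // physical_per_logical
print(logical_qubits)  # ~1,000 logical qubits available to run useful algorithms
```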

The company won’t share details about the exact timeline of the Illinois project, which will include a collaboration with the University of Chicago and several other Illinois universities. It does say it is hoping to break ground on a similar facility in Brisbane, Australia, next year and hopes that facility, which will house its own large-scale quantum computer, will be fully operational by 2027. “We expect Chicago to follow thereafter in terms of the site being operational,” the company said in a statement.

“It’s all or nothing [with PsiQuantum], which doesn’t mean it’s invalid,” says Christopher Monroe, a computer scientist at Duke University and ex-IonQ employee. “It’s just hard to measure progress along the way, so it’s a very risky kind of investment.”

Significant hurdles lie ahead. Building the infrastructure for this facility, particularly for the cooling system, will be the slowest and most expensive aspect of the construction. And when the facility is finally constructed, there will need to be improvements in the quantum algorithms run on the computers. Shadbolt says the current algorithms are far too expensive and resource intensive. 

The sheer complexity of the construction project might seem daunting. “This could be the most complex quantum optical electronic system humans have ever built, and that’s hard,” says Shadbolt. “We take comfort in the fact that it resembles a supercomputer or a data center, and we’re building it using the same fabs, the same contract manufacturers, and the same engineers.”

Correction: We have updated the story to reflect that the partnership is only with the state of Illinois and its universities, and not a national lab.

Update: We added comments from Christopher Monroe.

How to fix a Windows PC affected by the global outage

Windows PCs have crashed in a major IT outage around the world, bringing airlines, major banks, TV broadcasters, health-care providers, and other businesses to a standstill.

Airlines including United, Delta, and American have been forced to ground and delay flights, stranding passengers in airports, while the UK broadcaster Sky News was temporarily pulled off air. Meanwhile, banking customers in Europe, Australia, and India have been unable to access their online accounts. Doctors’ offices and hospitals in the UK have lost access to patient records and appointment scheduling systems.

The problem stems from a defect in a single content update for Windows machines from the cybersecurity provider CrowdStrike. George Kurtz, CrowdStrike’s CEO, says that the company is actively working with customers affected.

“This is not a security incident or cyberattack,” he said in a statement on X. “The issue has been identified, isolated and a fix has been deployed. We refer customers to the support portal for the latest updates and will continue to provide complete and continuous updates on our website.” CrowdStrike pointed MIT Technology Review to its blog with additional updates for customers.

What caused the issue?

The issue originates from a faulty update from CrowdStrike, which has knocked affected servers and PCs offline and caused some Windows workstations to display the “blue screen of death” when users attempt to boot them. Mac and Linux hosts are not affected.

The update was intended for CrowdStrike’s Falcon software, which is “endpoint detection and response” software designed to protect companies’ computer systems from cyberattacks and malware. But instead of working as expected, the update caused computers running Windows software to crash and fail to reboot. Home PCs running Windows are less likely to have been affected, because CrowdStrike is predominantly used by large organizations. Microsoft did not immediately respond to a request for comment.

“The CrowdStrike software works at the low-level operating system layer. Issues at this level make the OS not bootable,” says Lukasz Olejnik, an independent cybersecurity researcher and consultant, and author of Philosophy of Cybersecurity.

Not all computers running Windows were affected in the same way, he says, pointing out that a machine that was switched off when CrowdStrike pushed out the update (which has since been withdrawn) wouldn’t have received it.

For machines that did receive the mangled update and have been rebooted, an automated fix pushed from CrowdStrike’s server management infrastructure should suffice, he says.

“But in thousands or millions of cases, this may require manual human intervention,” he adds. “That means a really bad weekend ahead for plenty of IT staff.”

How to manually fix your affected computer

There is a known workaround for affected Windows computers, but it requires administrative access. If you’re affected and have that level of access, CrowdStrike recommends the following steps (a scripted sketch of the file-deletion step follows the list):

1. Boot Windows into safe mode or the Windows Recovery Environment.

2. Navigate to the C:\Windows\System32\drivers\CrowdStrike directory.

3. Locate the file matching “C-00000291*.sys” and delete it.

4. Boot the machine normally.
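
For IT teams scripting the cleanup across a fleet, the file-deletion step reduces to something like the sketch below. It assumes a Python interpreter is available in the recovery environment and that the path matches CrowdStrike’s guidance; in practice most admins would run the equivalent command in cmd or PowerShell.

```python
# Sketch of step 3: delete the faulty Falcon channel file(s).
# Assumes Python is available in the recovery environment; most admins would
# use an equivalent cmd or PowerShell one-liner instead.
from pathlib import Path

driver_dir = Path(r"C:\Windows\System32\drivers\CrowdStrike")

for channel_file in driver_dir.glob("C-00000291*.sys"):
    print(f"Deleting {channel_file}")
    channel_file.unlink()  # requires administrative privileges
```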

Sounds simple, right? But while the above fix is fairly easy to administer, it has to be applied physically on each machine, meaning IT teams will need to track down affected remote machines, says Andrew Dwyer of the Department of Information Security at Royal Holloway, University of London.

“We’ve been quite lucky that this is an outage and not an exploitation by a criminal gang or another state,” he says. “It also shows how easy it is to inflict quite significant global damage if you get into the right part of the IT supply chain.”

While fixing the problem is going to cause headaches for IT teams for the next week or so, it’s highly unlikely to cause significant long-term damage to the affected systems—which would not have been the case if it had been ransomware rather than a bungled update, he says.

“If this was a piece of ransomware, there could have been significant outages for months,” he adds. “Without endpoint detection software, many organizations would be in a much more vulnerable place. But they’re critical nodes in the system that have a lot of access to the computer systems that we use.”

Unlocking secure, private AI with confidential computing

All of a sudden, it seems that AI is everywhere, from executive assistant chatbots to AI code assistants.

But despite the proliferation of AI in the zeitgeist, many organizations are proceeding with caution because of the security quagmires AI is perceived to present. For the emerging technology to reach its full potential, data must be secured through every stage of the AI lifecycle, including model training, fine-tuning, and inferencing.

This is where confidential computing comes into play. Vikas Bhatia, head of product for Azure Confidential Computing at Microsoft, explains the significance of this architectural innovation: “AI is being used to provide solutions for a lot of highly sensitive data, whether that’s personal data, company data, or multiparty data,” he says. “Confidential computing is an emerging technology that protects that data when it is in memory and in use. We see a future where model creators who need to protect their IP will leverage confidential computing to safeguard their models and to protect their customer data.”

Understanding confidential computing

“The tech industry has done a great job in ensuring that data stays protected at rest and in transit using encryption,” Bhatia says. “Bad actors can steal a laptop and remove its hard drive but won’t be able to get anything out of it if the data is encrypted by security features like BitLocker. Similarly, nobody can run away with data in the cloud. And data in transit is secure thanks to HTTPS and TLS, which have long been industry standards.”

But data in use, when data is in memory and being operated upon, has typically been harder to secure. Confidential computing addresses this critical gap—what Bhatia calls the “missing third leg of the three-legged data protection stool”—via a hardware-based root of trust.

Essentially, confidential computing ensures that the only things customers need to trust are the code and data running inside a trusted execution environment (TEE) and the underlying hardware. “The concept of a TEE is basically an enclave, or I like to use the word ‘box.’ Everything inside that box is trusted, anything outside it is not,” explains Bhatia.

Until recently, confidential computing worked only on central processing units (CPUs). Now, however, NVIDIA has brought confidential computing capabilities to the H100 Tensor Core GPU, and Microsoft has made the technology available in Azure. This has the potential to protect the entire confidential AI lifecycle, including model weights, training data, and inference workloads.

“Historically, devices such as GPUs were controlled by the host operating system, which, in turn, was controlled by the cloud service provider,” notes Krishnaprasad Hande, Technical Program Manager at Microsoft. “So, in order to meet confidential computing requirements, we needed technological improvements to reduce trust in the host operating system, i.e., its ability to observe or tamper with application workloads when the GPU is assigned to a confidential virtual machine, while retaining sufficient control to monitor and manage the device. NVIDIA and Microsoft have worked together to achieve this.”

Attestation mechanisms are another key component of confidential computing. Attestation allows users to verify the integrity and authenticity of the TEE, and the user code within it, ensuring the environment hasn’t been tampered with. “Customers can validate that trust by running an attestation report themselves against the CPU and the GPU to validate the state of their environment,” says Bhatia.
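
As a rough sketch of what such a check might look like, the snippet below decodes a signed attestation token and inspects a few claims before trusting the environment. It assumes the token is a JWT and uses hypothetical claim names and keys; it is not Microsoft’s attestation API or the actual Azure Attestation claim schema.

```python
# Illustrative attestation check (hypothetical claim names, not the real
# Azure Attestation schema): verify the token's signature and a few claims
# before deciding to trust the environment.
import jwt  # PyJWT

EXPECTED_MEASUREMENT = "..."  # known-good launch measurement published by the workload owner

def environment_is_trusted(token: str, verifier_public_key: str) -> bool:
    claims = jwt.decode(token, verifier_public_key, algorithms=["RS256"])
    return (
        claims.get("tee_type") == "confidential-gpu-vm"        # hypothetical claim
        and claims.get("secure_boot") is True                  # hypothetical claim
        and claims.get("launch_measurement") == EXPECTED_MEASUREMENT
    )
```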

Additionally, secure key management systems play a critical role in confidential computing ecosystems. “We’ve extended our Azure Key Vault with Managed HSM service which runs inside a TEE,” says Bhatia. “The keys get securely released inside that TEE such that the data can be decrypted.”
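
Conceptually, the flow Bhatia describes looks something like the sketch below: the key is released only to an environment that has passed attestation, and decryption happens inside the TEE. The release_key helper is a hypothetical stand-in for the Managed HSM secure-key-release step, not the actual Key Vault API.

```python
# Conceptual secure-key-release flow (release_key is a hypothetical stand-in
# for the HSM's secure key release; decryption then happens inside the TEE).
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def release_key(attestation_token: str) -> bytes:
    # Hypothetical: the HSM validates the attestation token and, only if the
    # environment checks out, returns the data-encryption key to the TEE.
    raise NotImplementedError("stand-in for Managed HSM secure key release")

def decrypt_inside_tee(attestation_token: str, nonce: bytes, ciphertext: bytes) -> bytes:
    key = release_key(attestation_token)
    return AESGCM(key).decrypt(nonce, ciphertext, None)  # plaintext never leaves the TEE
```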

Confidential computing use cases and benefits

GPU-accelerated confidential computing has far-reaching implications for AI in enterprise contexts. It also addresses privacy issues that apply to any analysis of sensitive data in the public cloud. This is of particular concern to organizations trying to gain insights from multiparty data while maintaining utmost privacy.

Another key advantage of Microsoft’s confidential computing offering is that it requires no code changes on the part of the customer, facilitating seamless adoption. “The confidential computing environment we’re building does not require customers to change a single line of code,” notes Bhatia. “They can redeploy from a non-confidential environment to a confidential environment. It’s as simple as choosing a particular VM size that supports confidential computing capabilities.”

Some industries and use cases that stand to benefit from confidential computing advancements include:

  • Governments and sovereign entities dealing with sensitive data and intellectual property.
  • Healthcare organizations using AI for drug discovery while preserving doctor-patient confidentiality.
  • Banks and financial firms using AI to detect fraud and money laundering through shared analysis without revealing sensitive customer information.
  • Manufacturers optimizing supply chains by securely sharing data with partners.

Further, Bhatia says confidential computing helps facilitate data “clean rooms” for secure analysis in contexts like advertising. “We see a lot of sensitivity around use cases such as advertising and the way customers’ data is being handled and shared with third parties,” he says. “So, in these multiparty computation scenarios, or ‘data clean rooms,’ multiple parties can merge in their data sets, and no single party gets access to the combined data set. Only the code that is authorized will get access.”
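
To make the “no single party sees the combined data” idea concrete, here is a toy illustration using additive secret sharing, a classic multiparty-computation building block. It shows the principle only; it is not how Azure’s confidential clean rooms are implemented.

```python
# Toy multiparty aggregation via additive secret sharing: each party splits its
# private value into random shares, so no individual share reveals anything,
# yet the shares sum to the true total. Illustrates the clean-room principle only.
import secrets

MOD = 2**61 - 1  # arithmetic is done modulo a large prime

def make_shares(value: int, n_parties: int) -> list[int]:
    shares = [secrets.randbelow(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

# Three parties with private values they never reveal directly.
private_values = [120, 340, 95]
all_shares = [make_shares(v, n_parties=3) for v in private_values]

# Each party sums the shares it holds; only the aggregate is ever reconstructed.
partial_sums = [sum(column) % MOD for column in zip(*all_shares)]
print(sum(partial_sums) % MOD)  # 555 -- the total, with no party disclosing its own value
```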

The current state—and expected future—of confidential computing

Although large language models (LLMs) have captured attention in recent months, enterprises have found early success with a scaled-down approach: small language models (SLMs), which are more efficient and less resource-intensive for many use cases. “We can see some targeted SLM models that can run in early confidential GPUs,” notes Bhatia.

This is just the start. Microsoft envisions a future that will support larger models and expanded AI scenarios—a progression that could see AI in the enterprise become less of a boardroom buzzword and more of an everyday reality driving business outcomes. “We’re starting with SLMs and adding in capabilities that allow larger models to run using multiple GPUs and multi-node communication. Over time, [the goal is that] the largest models that the world might come up with could run in a confidential environment,” says Bhatia.

Bringing this to fruition will be a collaborative effort. Partnerships among major players like Microsoft and NVIDIA have already propelled significant advancements, and more are on the horizon. Organizations like the Confidential Computing Consortium will also be instrumental in advancing the underpinning technologies needed to make widespread and secure use of enterprise AI a reality.

“We’re seeing a lot of the critical pieces fall into place right now,” says Bhatia. “We don’t question today why something is HTTPS. That’s the world we’re moving toward [with confidential computing], but it’s not going to happen overnight. It’s certainly a journey, and one that NVIDIA and Microsoft are committed to.”

Microsoft Azure customers can start on this journey today with Azure confidential VMs featuring NVIDIA H100 GPUs.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Housetraining robot dogs: How generative AI might change consumer IoT

As technology goes, the internet of things (IoT) is old: internet-connected devices outnumbered people on Earth around 2008 or 2009, according to a contemporary Cisco report. Since then, IoT has grown rapidly. By the early 2020s, researchers’ estimates of the number of connected devices ranged anywhere from the low tens of billions to over 50 billion.

Currently, though, IoT is seeing unusually intense new interest for a long-established technology, even one still experiencing market growth. A sure sign of this buzz is the appearance of acronyms, such as AIoT and GenAIoT, or “artificial intelligence of things” and “generative artificial intelligence of things.”

What is going on? Why now? Examining potential changes to consumer IoT could provide some answers. Consumer IoT spans a vast range of home and personal uses, from smart home controls through smartwatches and other wearables to VR gaming, to name just a handful. The underlying technological changes sparking interest in this area mirror those in IoT as a whole.

Rapid advances converging at the edge

IoT is much more than a huge collection of “things,” such as automated sensing devices and the attached actuators that take limited actions. These devices, of course, play a key role. A recent IDC report estimated that edge devices—many of them IoT devices—account for 20% of the world’s current data generation.

IoT, however, goes well beyond the devices: it is a huge technological ecosystem that encompasses and empowers them. This ecosystem is multi-layered, although no single agreed taxonomy exists.

Most analyses include among the strata:

  • The physical devices themselves (sensors, actuators, and the other machines with which they immediately interact).
  • The data generated by these devices.
  • The networking and communication technology used to gather the generated data, send it to other devices or central data stores, and receive information back.
  • The software applications that draw on such information and other possible inputs, often to suggest or make decisions.

The inherent value of IoT lies not in the data itself but in the capacity to use it: to understand what is happening in and around the devices and, where necessary, to recommend that humans take action or to direct connected devices to act.
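
A minimal sketch in plain Python makes that layered picture concrete: a device produces a reading, a (simulated) transport layer delivers it, and an application layer turns it into a decision. The device names and thresholds are invented for illustration.

```python
# Minimal illustration of the IoT layers described above: device -> data ->
# transport -> application decision. Names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class SensorReading:                     # device layer: what a "thing" emits
    device_id: str
    temperature_c: float

def transport(reading: SensorReading) -> dict:
    # Networking layer: serialize and deliver the reading (simulated here).
    return {"device_id": reading.device_id, "temperature_c": reading.temperature_c}

def application(message: dict) -> str:
    # Application layer: turn data into an insight and, if needed, an action.
    if message["temperature_c"] > 30.0:
        return f"Command {message['device_id']}: switch on cooling"
    return "No action needed"

reading = SensorReading(device_id="greenhouse-7", temperature_c=33.5)
print(application(transport(reading)))   # the value lies in the decision, not the raw data
```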


This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.