African farmers are using private satellite data to improve crop yields

Last year, as the harvest season drew closer, Olabokunde Tope came across an unpleasant surprise. 

While certain spots on his 70-hectare cassava farm in Ibadan, Nigeria, were thriving, a sizable parcel was pale and parched—the result of an early and unexpected halt in the rains. The cassava stems, starved of water, had withered to straw. 

“It was a really terrible experience for us,” Tope says, estimating the cost of the loss at more than 50 million naira ($32,000). “We were praying for a miracle to happen. But unfortunately, it was too late.”  

When the next planting season rolled around, Tope’s team weighed different ways to avoid another cycle of heavy losses. They decided to work with EOS Data Analytics, a California-based provider of satellite imagery and data for precision farming. The company uses wavelengths of light including the near-infrared, which penetrates plant canopies and can be used to measure a range of variables, including moisture level and chlorophyll content. 
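
EOS does not publish the exact models behind its platform, but a common starting point for turning red and near-infrared reflectance into a crop-health signal is the normalized difference vegetation index (NDVI). The sketch below is illustrative only; the band values are invented, and real pipelines add calibration, cloud masking, and much more.

```python
# Illustrative NDVI calculation: healthy, leafy vegetation reflects strongly in
# the near-infrared and absorbs red light, so the contrast between the two bands
# is a rough proxy for canopy health. Band values here are made up.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Per-pixel NDVI: values near 1 suggest dense, healthy canopy;
    values near 0 suggest bare, dry, or stressed ground."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-9)  # small epsilon avoids division by zero

# Toy 2x2 "images" of surface reflectance for one field.
nir_band = np.array([[0.60, 0.55], [0.20, 0.58]])
red_band = np.array([[0.10, 0.12], [0.18, 0.11]])
print(ndvi(nir_band, red_band))  # the low-NDVI pixel flags a possible problem spot
```

A platform like EOS’s would compute indices of this kind for every pixel of every new satellite pass and alert the farmer when a patch of field starts trending downward.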

EOS’s models and algorithms deliver insights on crops’ health weekly through an online platform that farmers can use to make informed decisions about issues such as when to plant, how much herbicide to use, and how to schedule fertilizer use, weeding, or irrigation. 

When EOS first launched in 2015, it relied largely on imagery from a combination of satellites, especially the European Union’s Sentinel-2. But Sentinel-2 has a maximum resolution of 10 meters, making it of limited use for spotting issues on smaller farms, says Yevhenii Marchenko, the company’s sales team lead.  

So last year the company launched EOS SAT-1, a satellite designed and operated solely for agriculture. Fees to use the crop-monitoring platform now start at $1.90 per hectare per year for small areas and drop as the farm gets larger. (Farmers who can afford them have adopted drones and other related technologies, but drones are significantly more expensive to maintain and scale, says Marchenko.)

In many developing countries, farming is impaired by lack of data. For centuries, farmers relied on native intelligence rooted in experience and hope, says Daramola John, a professor of agriculture and agricultural technology at Bells University of Technology in southwest Nigeria. “Africa is way behind in the race for modernizing farming,” he says. “And a lot of farmers suffer huge losses because of it.”

In the spring of 2023, ahead of the new planting season, Tope’s company, Carmi Agro Foods, used GPS-enabled software to map the boundaries of its farms and completed its setup on the EOS crop-monitoring platform. Tope used the platform to determine the appropriate spacing for the stems and seeds. The rigors and risks of manual monitoring disappeared: his field-monitoring officers needed only to peer at their phones to know where and when specific spots on the various farms needed attention, and he could track weed outbreaks quickly and efficiently. 

This technology is gaining traction among farmers in other parts of Nigeria and the rest of Africa. More than 242,000 people in Africa, Southeast Asia, Latin America, the United States, and Europe use the EOS crop-monitoring platform. In 2023 alone, 53,000 more farmers subscribed to the service.

One of them is Adewale Adegoke, the CEO of Agro Xchange Technology Services, a company dedicated to boosting crop yields using technology and good agricultural practices. Adegoke used the platform on half a million hectares (around 1.25 million acres) owned by 63,000 farmers. He says maize farmers using the platform, for instance, saw yields grow to two tons per acre, at least twice the national average.  

Adegoke adds that local farmers, who have been struggling with fluctuating conditions as a result of climate change, have been especially drawn to the platform’s early warning system for weather. 

As harvest time draws nearer this year, Tope reports, the prospects for his cassava fields, which now span a thousand hectares, are quite promising. This is thanks in part to his ability to anticipate and counter the sudden dry spells. He spaced the plantings better and then followed advisories on weeding, fertilizer use, and other issues related to the health of the crops. 

“So far, the result has been convincing,” says Tope. “We are no longer subjecting the performance of our farms to chance. This time, we are in charge.”

Orji Sunday is a freelance journalist based in Lagos, Nigeria.

Inside the long quest to advance Chinese writing technology

Every second of every day, someone is typing in Chinese. In a park in Hong Kong, at a desk in Taiwan, in the checkout line at a Family Mart in Shanghai, the automatic doors chiming a song each time they open. Though the mechanics look a little different from typing in English or French—people usually type the pronunciation of a character and then pick it out of a selection that pops up, autocomplete-style—it’s hard to think of anything more quotidian. The software that allows this exists beneath the awareness of pretty much everyone who uses it. It’s just there.

cover of The Chinese Computer by Tom Mullaney
The Chinese Computer: A Global History of the Information Age
Thomas S. Mullaney
MIT PRESS, 2024

What’s largely been forgotten—and what most people outside Asia never even knew in the first place—is that a large cast of eccentrics and linguists, engineers and polymaths, spent much of the 20th century torturing themselves over how Chinese was ever going to move away from the ink brush to any other medium. This process has been the subject of two books published in the last two years: Thomas Mullaney’s scholarly work The Chinese Computer and Jing Tsu’s more accessible Kingdom of Characters. Mullaney’s book focuses on the invention of various input systems for Chinese starting in the 1940s, while Tsu’s covers more than a century of efforts to standardize Chinese and transmit it using the telegraph, typewriter, and computer. But both reveal a story that’s tumultuous and chaotic—and just a little unsettling in the futility it reflects.   

cover of Kingdom of Characters
Kingdom of Characters: The Language Revolution That Made China Modern
Jing Tsu
RIVERHEAD BOOKS, 2022

Chinese characters are not as cryptic as they sometimes appear. The general rule is that they stand for a word, or sometimes part of a word, and learning to read is a process of memorization. Along the way, it becomes easier to guess how a character should be spoken, because often phonetic elements are tucked in among other symbols. The characters were traditionally written by hand with a brush, and part of becoming literate involves memorizing the order in which the strokes are made. Put them in the wrong order and the character doesn’t look right. Or rather, as I found some years ago as a second-language learner in Guangzhou, China, it looks childish. (My husband, a translator of Chinese literature, found it hilarious and adorable that at the age of 30, I wrote like a kindergartner.)

The trouble, however, is that there are a lot of characters. One needs to know at least a few thousand to be considered basically literate, and there are thousands more beyond that basic set. Many modern learners of Chinese devote themselves essentially full-time to learning to read, at least in the beginning. More than a century ago, this was such a monumental task that leading thinkers worried it was impairing China’s ability to survive the attentions of more aggressive powers.

In the 19th century, a huge proportion of Chinese people were illiterate. They had little access to schooling. Many were subsistence farmers. China, despite its immense population and vast territory, was perpetually finding itself on the losing end of deals with nimbler, more industrialized nations. The Opium Wars, in the mid-19th century, had led to a situation where foreign powers effectively colonized Chinese soil. What advanced infrastructure there was had been built and was owned by foreigners.  

Some felt these things were connected. Wang Zhao, for one, was a reformer who believed that a simpler way to write spoken Chinese was essential to the survival of the nation. Wang’s idea was to use a set of phonetic symbols, representing one specific dialect of Chinese. If people could sound out words, having memorized just a handful of shapes the way speakers of languages using an alphabet did, they could become literate more quickly. With literacy, they could learn technical skills, study science, and help China get ownership of its future back. 

Wang believed in this goal so strongly that though he’d been thrown out of China in 1898, he returned two years later in disguise. After arriving by boat from Japan, he traveled over land on foot in the costume of a Buddhist monk. His story forms the first chapter of Jing Tsu’s book, and it is thick with drama, including a shouting match and brawl on the grounds of a former palace, during a meeting to decide which dialect a national version of such a system should represent. Wang’s system for learning Mandarin was used by schools in Beijing for a few years, but ultimately it did not survive the rise of competing systems and the period of chaos that swallowed China not long after the Qing Dynasty’s fall in 1911. Decades of disorder and uneasy truces gave way to Japan’s invasion of Manchuria in northern China in 1931. For a long time, basic survival was all most people had time for.

However, strange inventions soon began to turn up in China. Chinese students and scientists abroad had started to work on a typewriter for the language, which they felt was lagging behind others. Texts in English and other tongues using Roman characters could be printed swiftly and cheaply with keyboard-controlled machines that injected liquid metal into type molds, but Chinese texts required thousands upon thousands of bits of type to be placed in a manual printing press. And while English correspondence could be whacked out on a typewriter, Chinese correspondence was still, after all this time, written by hand.      

Of all the technologies Mullaney and Tsu describe, these baroque metal monsters stick most in the mind. Equipped with cylinders and wheels, with type arrayed in starbursts or in a massive tray, they are simultaneously writing machines and incarnations of philosophies about how to organize a language. Because Chinese characters don’t have an inherent order (no A-B-C-D-E-F-G) and because there are so many (if you just glance at 4,000 of them, you’re not likely to spot the one you need quickly), people tried to arrange these bits of type according to predictable rules. The first article ever published by Lin Yutang, who would go on to become one of China’s most prominent writers in English, described a system of ordering characters according to the number of strokes it took to form them. He eventually designed a Chinese typewriter that consumed his life and finances, a lovely thing that failed its demo in front of potential investors.

woman using a large desk-sized terminal
Chinese keyboard designers considered many interfaces, including tabletop-size devices that included 2,000 or more commonly used characters.
PUBLIC DOMAIN/COURTESY OF THOMAS S. MULLANEY

Technology often seems to demand new ways of engaging with the physical, and the Chinese typewriter was no exception. When I first saw a functioning example, at a private museum in a basement in Switzerland, I was entranced by the gliding arm and slender rails of the sheet-cake-size device, its tray full of characters. “Operating the machine was a full-body exercise,” Tsu writes of a very early typewriter from the late 1890s, designed by an American missionary. Its inventor expected that with time, muscle memory would take over, and the typist would move smoothly around the machine, picking out characters and depressing keys. 

However, though Chinese typewriters eventually got off the ground (the first commercial typewriter was available in the 1920s), a few decades later it became clear that the next challenge was getting Chinese characters into the computer age. And there was still the problem of how to get more people reading. Through the 1930s, ’40s, ’50s, and ’60s, systems for ordering and typing Chinese continued to occupy the minds of intellectuals; particularly odd and memorable is the story of the librarian at Sun Yat-sen University in Guangzhou, who in the 1930s came up with a system of light and dark glyphs like semaphore flags to stand for characters. Mullaney and Tsu both linger on the case of Zhi Bingyi, an engineer imprisoned in solitary confinement during the Cultural Revolution in the late 1960s, who was inspired by the characters of a slogan written on his cell wall to devise his own code for inputting characters into a computer.

As the child of a futurist, I’ve seen firsthand that the path to where we are is littered with technological dead ends.

The tools for literacy were advancing over the same period, thanks to government-mandated reforms introduced after the Communist Revolution in 1949. To assist in learning to read, everyone in mainland China would now be taught pinyin, a system that uses Roman letters to indicate how Chinese characters are pronounced. Meanwhile, thousands of characters would be replaced with simplified versions, with fewer strokes to learn. This is still how it’s done today in the mainland, though in Taiwan and Hong Kong, the characters are not simplified, and Taiwan uses a different pronunciation guide, one based on 37 phonetic symbols and five tone marks. 

Myriad ideas were thrown at the problem of getting these characters into computers. Images of a graveyard of failed designs—256-key keyboards and the enormous cylinder of the Ideo-Matic Encoder, a keyboard with more than 4,000 options—are scattered poignantly through Mullaney’s pages. 

In Tsu’s telling, perhaps the most consequential link between this awkward period of dedicated hardware and today’s wicked-quick mobile-phone typing came in 1988, with an idea hatched by engineers in California. “Unicode was envisioned as a master converter,” she writes. “It would bring all human script systems, Western, Chinese, or otherwise, under one umbrella standard and assign each character a single, standardized code for communicating with any machine.” Once Chinese characters had Unicode codes, they could be manipulated by software like any other glyph, letter, or symbol. Today’s input systems allow users to call up and select characters using pinyin or stroke order, among other options.
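
To make the “master converter” idea concrete: once every character has a standardized code point, a pronunciation-led input system is, at its core, a lookup from a typed syllable to a list of candidate characters the user picks from. The sketch below is a toy illustration with an invented, tiny dictionary, not a real input-method editor, which would rank thousands of candidates by context and frequency.

```python
# Toy pinyin-style lookup: map a typed syllable to candidate Chinese characters,
# each of which is just an ordinary Unicode code point as far as the software
# is concerned.
candidates = {
    "ma": ["妈", "马", "吗", "麻"],     # several characters share the sound "ma"
    "zhong": ["中", "种", "重"],
}

def lookup(pinyin: str) -> list[str]:
    """Return candidate characters for a typed syllable, with their code points."""
    return [f"{ch} (U+{ord(ch):04X})" for ch in candidates.get(pinyin, [])]

print(lookup("ma"))      # the user picks one from the pop-up list, autocomplete-style
print(lookup("zhong"))
```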

There is something curiously deflating, however, about the way both these books end. Mullaney’s careful documenting of the typing machines of the last century and Tsu’s collection of adventurous tales about language show the same thing: A simply unbelievable amount of time, energy, and cleverness was poured into making Chinese characters easier for both machines and the human mind to manipulate. But very few of these systems seem to have had any direct impact on the current solutions, like the pronunciation-led input systems that more than a billion people now use to type Chinese. 

This pattern of evolution isn’t unique to language. As the child of a futurist, I’ve seen firsthand that the path to where we are is littered with technological dead ends. The month after Google Glass, the glasses-borne computer, made headlines, my mother helped set up an exhibit of personal heads-up displays. In the obscurity of a warehouse space, ghostly white foam heads each bore a crown of metal, glass, and plastic, the attempts of various inventors to put a screen in front of our eyes. Augmented reality seemed as if it might finally be arriving in the hands of the people—or, rather, on their faces. 

That version of the future did not materialize, and if augmented-reality viewing ever does become part of everyday life, it won’t be through those objects. When historians write about these devices, in books like these, I don’t think they will be able to trace a chain of unbroken thought, a single arc from idea to fruition.

A charming moment, late in Mullaney’s book, speaks to this. He has been slipping letters in the mailboxes of people he’s found listed as inventors of input methods in the Chinese patent database, and now he’s meeting one such inventor, an elderly man, and his granddaughter in a Beijing Starbucks. The old fellow is pleased to talk about his approach, which involves the graphical shapes of Chinese characters. But his granddaughter drops a bomb on Mullaney when she leans in and whispers, “I think my input system is a bit easier to use.” It turns out both she and her father have built systems of their own. 

The story’s not over, in other words.    

People tinker with technology and systems of thought like those detailed in these two books not just because they have to, but because they want to. And though it’s human nature to want to make a trajectory out of what lies behind us so that the present becomes a grand culmination, what these books detail are episodes in the life of a language. There is no beginning, no middle, no satisfying end. There is only evolution—an endless unfurling of something always in the process of becoming a fuller version of itself. 

Veronique Greenwood is a science writer and essayist based in England. Her work has appeared in the New York Times, the Atlantic, and many other publications.

Move over, text: Video is the new medium of our lives

The other day I idly opened TikTok to find a video of a young woman refinishing an old hollow-bodied electric guitar.

It was a montage of close-up shots—looking over her shoulder as she sanded and scraped the wood, peeled away the frets, expertly patched the cracks with filler, and then spray-painted it a radiant purple. She compressed days of work into a tight 30-second clip. It was mesmerizing.

Of course, that wasn’t the only video I saw that day. In barely another five minutes of swiping around, I saw a historian discussing the songs Tolkien wrote in The Lord of the Rings; a sailor puzzling over a capsized boat he’d found deep at sea; a tearful mother talking about parenting a child with ADHD; a Latino man laconically describing a dustup with his racist neighbor; and a linguist discussing how Gen Z uses video-game metaphors in everyday life.

I could go on. I will! And so, probably, will you. This is what the internet looks like now. It used to be a preserve of text and photos—but increasingly, it is a forest of video.

This is one of the most profound technology shifts that will define our future: We are entering the age of the moving image.

For centuries, when everyday people had to communicate at a distance, they really had only two options. They could write something down; they could send a picture. The moving image was too expensive to shoot, edit, and disseminate. Only pros could wield it.

The smartphone, the internet, and social networks like TikTok have rapidly and utterly transformed this situation. It’s now common, when someone wants to hurl an idea into the world, not to pull out a keyboard and type but to turn on a camera and talk. For many young people, video might be the prime way to express ideas.

As media thinkers like Marshall McLuhan have intoned, a new medium changes us. It changes the way we learn, the way we think—and what we think about. When mass printing emerged, it helped create a culture of news, mass literacy, and bureaucracy, and—some argue—the very idea of scientific evidence. So how will mass video shift our culture?

For starters, I’d argue, it is helping us share knowledge that used to be damnably hard to capture in text. I’m a long-distance cyclist, for example, and if I need to fix my bike, I don’t bother reading a guide. I look for a video explainer. If you’re looking to express—or absorb—knowledge that’s visual, physical, or proprioceptive, the moving image nearly always wins. Athletes don’t read a textual description of what they did wrong in the last game; they watch the clips. Hence the wild popularity, on video platforms, of instructional video—makeup tutorials, cooking demonstrations. (Or even learn-to-code material: I learned Python by watching coders do it.)

Video also is no longer about mere broadcast, but about conversation—it’s a way to respond to others, notes Raven Maragh-Lloyd, the author of Black Networked Resistance and a professor of film and media studies at Washington University. “We’re seeing a rise of audience participation,” she notes, including people doing “duets” on TikTok or response videos on YouTube. Everyday creators see video platforms as ways to talk back to power.

“My students were like, ‘If there’s a video over seven seconds, we’re not watching it.’”

Brianna Wiens, University of Waterloo

There’s also an increasingly sophisticated lexicon of visual styles. Today’s video creators riff on older film aesthetics to make their points. Brianna Wiens, an assistant professor of digital media and rhetoric at the University of Waterloo, says she admired how a neuroscientist used stop-motion video, a technique from the early days of film, to produce TikTok discussions of vaccines during the height of the covid-19 pandemic. Or consider the animated GIF, which channels the “zoetrope” of the 1800s, looping a short moment in time to examine over and over.

Indeed, as video becomes more woven into the vernacular of daily life, it’s both expanding and contracting in size. There are streams on Twitch where you can watch someone for hours—and viral videos where someone compresses an idea into mere seconds. Those latter ones have a particular rhetorical power because they’re so ingestible. “I was teaching a class called Digital Lives, and my students were like, ‘If there’s a video over seven seconds, we’re not watching it,’” Wiens says, laughing.

Are there dangers ahead as use of the moving image grows? Possibly. Maybe it will too powerfully reward people with the right visual and physical charisma. (Not necessarily a novel danger: Text and radio had their own versions.) More subtly, video is technologically still adolescent. It’s not yet easy to search, or to clip and paste and annotate and collate—to use video for quietly organizing our thoughts, the way we do with text. Until those tool sets emerge (and you can see that beginning), its power will be limited. Lastly, maybe the moving image will become so common and go-to that it’ll kill off print culture.

Media scholars are not terribly stressed about this final danger. New forms of media rarely kill off older ones. Indeed, as the late priest and scholar Walter Ong pointed out, creating television and radio requires writing plenty of text—all those scripts. Today’s moving-media culture is possibly even more saturated with writing. Videos on Instagram and TikTok often include artfully arranged captions, “diegetic” text commenting on the action, or data visualizations. You read while you watch; write while you shoot.

“We’re getting into all kinds of interesting hybrids and relationships,” notes Lev Manovich, a professor at the City University of New York. The tool sets for sculpting and editing video will undoubtedly improve too, perhaps using AI to help auto-edit, redact, summarize. 

One firm, Reduct, already offers a clever trick: You alter a video by editing the transcript. Snip out a sentence, and it snips out the related visuals. Public defenders use it to parse and edit police videos. They’re often knee-deep in the stuff—the advent of body cameras worn by officers has produced an ocean of footage, as Reduct’s CEO, Robert Ochshorn, tells me. 
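
Reduct doesn’t publish the internals of this feature, but the underlying idea is easy to see: a speech-to-text transcript gives every word a start and end timestamp, so deleting words from the text yields the list of video spans to keep. Here is a minimal conceptual sketch with invented data, not the company’s actual code.

```python
# Conceptual sketch of transcript-driven video editing: cut the text, and the
# corresponding stretches of footage fall out of the edit automatically.
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float  # seconds into the footage
    end: float

transcript = [
    Word("The", 0.0, 0.2), Word("officer", 0.2, 0.7), Word("arrives.", 0.7, 1.1),
    Word("Unrelated", 1.2, 1.9), Word("chatter.", 1.9, 2.4),
    Word("The", 2.5, 2.7), Word("stop", 2.7, 3.1), Word("begins.", 3.1, 3.6),
]

def keep_segments(words, deleted):
    """Return (start, end) spans of footage that survive the text edit."""
    spans = []
    for i, w in enumerate(words):
        if i in deleted:
            continue
        if spans and w.start - spans[-1][1] < 0.15:  # merge near-contiguous words
            spans[-1] = (spans[-1][0], w.end)
        else:
            spans.append((w.start, w.end))
    return spans

print(keep_segments(transcript, deleted={3, 4}))
# -> [(0.0, 1.1), (2.5, 3.6)]: snipping the sentence snips the matching footage
```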

Meanwhile, generative AI will make it easier to create a film out of pure imagination. This means, of course, that we’ll see a new flood of visual misinformation. We’ll need to develop a sharper culture of finding the useful amid the garbage. It took print a couple of centuries to do that, as scholars of the book will tell you—centuries during which the printing press helped spark untold war and upheaval. We’ll be living through the same process with the moving image.

So strap yourselves in. Whatever else happens, it’ll be interesting. 

Clive Thompson is the author of Coders: The Making of a New Tribe and the Remaking of the World.

The rise of the data platform for hybrid cloud

Whether pursuing digital transformation, exploring the potential of AI, or simply looking to simplify and optimize existing IT infrastructure, today’s organizations must do this in the context of increasingly complex multi-cloud environments. These complicated architectures are here to stay—2023 research by Enterprise Strategy Group, for example, found that 87% of organizations expect their applications to be distributed across still more locations in the next two years.

Scott Sinclair, practice director at Enterprise Strategy Group, outlines the problem: “Data is becoming more distributed. Apps are becoming more distributed. The typical organization has multiple data centers, multiple cloud providers, and umpteen edge locations. Data is all over the place and continues to be created at a very rapid rate.”

Finding a way to unify this disparate data is essential. In doing so, organizations must balance the explosive growth of enterprise data; the need for an on-premises, cloud-like consumption model to mitigate cyberattack risks; and continual pressure to cut costs and improve performance.

Sinclair summarizes: “What you want is something that can sit on top of this distributed data ecosystem and present something that is intuitive and consistent that I can use to leverage the data in the most impactful way, the most beneficial way to my business.”

For many, the solution is an overarching software-defined, virtualized data platform that delivers a common data plane and control plane across hybrid cloud environments. Ian Clatworthy, head of data platform product marketing at Hitachi Vantara, describes a data platform as “an integrated set of technologies that meets an organization’s data needs, enabling storage and delivery of data, the governance of data, and the security of data for a business.”

Gartner projects that these consolidated data storage platforms will constitute 70% of file and object storage by 2028, doubling from 35% in 2023. The research firm underscores that “Infrastructure and operations leaders must prioritize storage platforms to stay ahead of business demands.”

A transitional moment for enterprise data

Historically, organizations have stored their various types of data—file, block, object—in separate silos. Why change now? Because two main drivers are rendering traditional data storage schemes inadequate for today’s business needs: digital transformation and AI.

As digital transformation initiatives accelerate, organizations are discovering that having distinct storage solutions for each workload is inadequate for their escalating data volumes and changing business landscapes. The complexity of the modern data estate hinders many efforts toward change.

Clatworthy says that when organizations move to hybrid cloud environments, they may find, for example, that they have mainframe or data center data stored in one silo, block storage running on an appliance, apps running file storage, another silo for public cloud, and a separate VMware stack. The result is increased complexity and cost in their IT infrastructure, as well as reduced flexibility and efficiency.

Then, Clatworthy adds, “When we get to the world of generative AI that’s bubbling around the edges, and we’re going to have this mass explosion of data, we need to simplify how that data is managed so that applications can consume it. That’s where a platform comes in.”

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Advancing to adaptive cloud

For many years now, cloud solutions have helped organizations streamline their operations, increase their scalability, and reduce costs. Yet, enterprise cloud investment has been fragmented, often lacking a coherent organization-wide approach. In fact, it’s not uncommon for various teams across an organization to have spun up their own cloud projects, adopting a wide variety of cloud strategies and providers, from public and hybrid to multi-cloud and edge computing.

The problem with this approach is that it often leads to “a sprawling set of systems and disparate teams working on these cloud systems, making it difficult to keep up with the pace of innovation,” says Bernardo Caldas, corporate vice president of Azure Edge product management at Microsoft. In addition to being an IT headache, a fragmented cloud environment leads to technological and organizational repercussions.

A complex multi-cloud deployment can make it difficult for IT teams to perform mission-critical tasks, such as applying security patches, meeting regulatory requirements, managing costs, and accessing data for data analytics. Configuring and securing these types of environments is a challenging and time-consuming task. And ad hoc cloud deployments often culminate in systems incompatibility when one-off pilots are ready to scale or be combined with existing products.

Without a common IT operations and application development platform, teams can’t share lessons learned or pool important resources, which tends to cause them to become increasingly siloed. “People want to do more with their data, but if their data is trapped and isolated in these different systems, it can make it really hard to tap into the data for insights and to accelerate progress,” says Caldas.

As the pace of change accelerates, however, many organizations are adopting a new adaptive cloud approach—one that will enable them to respond quickly to evolving consumer demands and market fluctuations while simplifying the management of their complex cloud environments.

An adaptive strategy for success

Heralding a departure from yesteryear’s fragmented cloud environments, an adaptive cloud approach unites sprawling systems, disparate silos, and distributed sites into a single operations, development, security, application, and data model. This unified approach empowers organizations to glean value from cloud-native technologies, open source software such as Linux, and AI across hybrid, multi-cloud, edge, and IoT.

“You’ve got a lot of legacy software out there, and for the most part, you don’t want to change production environments,” says David Harmon, director of software engineering at AMD. “Nobody wants to change code. So while CTOs and developers really want to take advantage of all the hardware changes, they want to do nothing to their code base if possible, because that change is very, very expensive.”

An adaptive cloud approach answers this challenge by taking an agnostic approach to the environments it brings together on a single control plane. By seamlessly connecting disparate computing environments, including those that run outside of hyperscale data centers, the control plane creates greater visibility across thousands of assets, simplifies security enforcement, and allows for easier management.

An adaptive cloud approach enables unified management of disparate systems and resources, leading to improved oversight and control. An adaptive approach also creates scalability, as it allows organizations to meet the fluctuating demands of a business without the risk of over-provisioning or under-provisioning resources.

There are also clear business advantages to embracing an adaptive cloud approach. Consider, for example, an operational technology team that deploys an automation system to accelerate a factory’s production capabilities. In a fragmented and distributed environment, systems often struggle to communicate. But in an adaptive cloud environment, a factory’s automation system can easily be connected to the organization’s customer relationship management system, providing sales teams with real-time insights into supply-demand fluctuations.

A united platform is not only capable of bringing together disparate systems but also of connecting employees from across functions, from sales to engineering. By sharing an interconnected web of cloud-native tools, a workforce’s collective skills and knowledge can be applied to initiatives across the organization—a valuable asset in today’s resource-strapped and talent-scarce business climate.

Using cloud-native technologies like Kubernetes and microservices can also expedite the development of applications across various environments, regardless of an application’s purpose. For example, IT teams can scale applications from massive cloud platforms to on-site production without complex rewrites. Together, these capabilities “propel innovation, simplify complexity, and enhance the ability to respond to business opportunities,” says Caldas.

The AI equation

From automating mundane processes to optimizing operations, AI is revolutionizing the way businesses work. In fact, the market for AI reached $184 billion in 2024, a staggering increase of nearly $50 billion over 2023, and it is expected to surpass $826 billion in 2030.

But AI applications and models require high-quality data to generate high-quality outputs. That’s a challenging feat when data sets are trapped in silos across distributed environments. Fortunately, an adaptive cloud approach can provide a unified data platform for AI initiatives.

“An adaptive cloud approach consolidates data from various locations in a way that’s more useful for companies and creates a robust foundation for AI applications,” says Caldas. “It creates a unified data platform that ensures that companies’ AI tools have access to high-quality data to make decisions.”

Another benefit of an adaptive cloud approach is the ability to tap into the capabilities of innovative tools such as Microsoft Copilot in Azure. Copilot in Azure is an AI companion that simplifies how IT teams operate and troubleshoot apps and infrastructure. By leveraging large language models to interact with an organization’s data, Copilot allows for deeper exploration and intelligent assessment of systems within a unified management framework.

Imagine, for example, the task of troubleshooting the root cause of a system anomaly. Typically, IT teams must sift through thousands of logs, exchange a series of emails with colleagues, and read documentation for answers. Copilot in Azure, however, can cut through this complexity by easing the detection of unanticipated system changes while, at the same time, providing recommendations for speedy resolution.

“Organizations can now interact with systems using chat capabilities, ask questions about environments, and gain real insights into what’s happening across the heterogeneous environments,” says Caldas.

An adaptive approach for the technology future

Today’s technology environments are only increasing in complexity. More systems, more data, more applications—together, they form a massive sprawling infrastructure. But responding proactively to change, be it in market trends or customer needs, requires greater agility and integration across the organization. The answer: an adaptive approach. A unified platform for IT operations and management, applications, data, and security can consolidate the disparate parts of a fragmented environment in ways that not only ease IT management and application development but also deliver key business benefits, from faster time to market to AI efficiencies, at a time when organizations must move swiftly to succeed.

Microsoft Azure and AMD meet you where you are on your cloud journey. Learn more about an adaptive cloud approach with Azure.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

PsiQuantum plans to build the biggest quantum computing facility in the US

The quantum computing firm PsiQuantum is partnering with the state of Illinois to build the largest US-based quantum computing facility, the company announced today. 

The firm, which has headquarters in California, says it aims to house a quantum computer containing up to 1 million quantum bits, or qubits, within the next 10 years. At the moment, the largest quantum computers have around 1,000 qubits. 

Quantum computers promise to do a wide range of tasks, from drug discovery to cryptography, at record-breaking speeds. Companies are using different approaches to build the systems and working hard to scale them up. Both Google and IBM, for example, make the qubits out of superconducting material. IonQ makes qubits by trapping ions using electromagnetic fields. PsiQuantum is building qubits from photons.  

A major benefit of photonic quantum computing is the ability to operate at higher temperatures than superconducting systems. “Photons don’t feel heat and they don’t feel electromagnetic interference,” says Pete Shadbolt, PsiQuantum’s cofounder and chief scientific officer. This imperturbability makes the technology easier and cheaper to test in the lab, Shadbolt says. 

It also reduces the cooling requirements, which should make the technology more energy efficient and easier to scale up. PsiQuantum’s computer can’t be operated at room temperature, because it needs superconducting detectors to locate photons and perform error correction. But those sensors need to be cooled only to a few kelvins, or a little under -450 °F. While that’s an icy temperature, it is still far easier to achieve than the millikelvin temperatures, just thousandths of a degree above absolute zero, that superconducting qubit systems demand. 

The company has opted not to build small-scale quantum computers (such as IBM’s Condor, which uses a little over 1,100 qubits). Instead it is aiming to manufacture and test what it calls “intermediate systems.” These include chips, cabinets, and superconducting photon detectors. PsiQuantum says it is targeting these larger-scale systems in part because smaller devices are unable to adequately correct errors and operate at a realistic price point.  

Getting smaller-scale systems to do useful work has been an area of active research. But “just in the last few years, we’ve seen people waking up to the fact that small systems are not going to be useful,” says Shadbolt. In order to adequately correct the inevitable errors, he says, “you have to build a big system with about a million qubits.” The approach conserves resources, he says, because the company doesn’t spend time piecing together smaller systems. But skipping over them makes PsiQuantum’s technology difficult to compare to what’s already on the market. 
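
The scale follows from rough error-correction arithmetic. Published overheads vary with the error-correcting code and the hardware’s error rates, neither of which is fully public for PsiQuantum’s machines, but a commonly cited rule of thumb for surface-code-style schemes is on the order of a thousand physical qubits for each fault-tolerant logical qubit:

\[
\frac{10^{6}\ \text{physical qubits}}{\sim 10^{3}\ \text{physical qubits per logical qubit}} \approx 10^{3}\ \text{logical qubits}
\]

A thousand or so error-corrected qubits is roughly the scale many algorithm estimates suggest is needed before applications such as quantum chemistry become practical, which squares with Shadbolt’s point that much smaller systems are not useful.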

The company won’t share details about the exact timeline of the Illinois project, which will include a collaboration with the University of Chicago and several other Illinois universities. It does say it hopes to break ground on a similar facility in Brisbane, Australia, next year, and that the Brisbane facility, which will house its own large-scale quantum computer, will be fully operational by 2027. “We expect Chicago to follow thereafter in terms of the site being operational,” the company said in a statement. 

“It’s all or nothing [with PsiQuantum], which doesn’t mean it’s invalid,” says Christopher Monroe, a computer scientist at Duke University and ex-IonQ employee. “It’s just hard to measure progress along the way, so it’s a very risky kind of investment.”

Significant hurdles lie ahead. Building the infrastructure for this facility, particularly for the cooling system, will be the slowest and most expensive aspect of the construction. And when the facility is finally constructed, there will need to be improvements in the quantum algorithms run on the computers. Shadbolt says the current algorithms are far too expensive and resource intensive. 

The sheer complexity of the construction project might seem daunting. “This could be the most complex quantum optical electronic system humans have ever built, and that’s hard,” says Shadbolt. “We take comfort in the fact that it resembles a supercomputer or a data center, and we’re building it using the same fabs, the same contract manufacturers, and the same engineers.”

Correction: we have updated the story to reflect that the partnership is only with the state of Illinois and its universities, and not a national lab

Update: we added comments from Christopher Monroe

How to fix a Windows PC affected by the global outage

Windows PCs have crashed in a major IT outage around the world, bringing airlines, major banks, TV broadcasters, health-care providers, and other businesses to a standstill.

Airlines including United, Delta, and American have been forced to ground and delay flights, stranding passengers in airports, while the UK broadcaster Sky News was temporarily pulled off air. Meanwhile, banking customers in Europe, Australia, and India have been unable to access their online accounts. Doctors’ offices and hospitals in the UK have lost access to patient records and appointment scheduling systems. 

The problem stems from a defect in a single content update for Windows machines from the cybersecurity provider CrowdStrike. George Kurtz, CrowdStrike’s CEO, says that the company is actively working with customers affected.

“This is not a security incident or cyberattack,” he said in a statement on X. “The issue has been identified, isolated and a fix has been deployed. We refer customers to the support portal for the latest updates and will continue to provide complete and continuous updates on our website.” CrowdStrike pointed MIT Technology Review to its blog with additional updates for customers.

What caused the issue?

The issue originates from a faulty update from CrowdStrike, which has knocked affected servers and PCs offline and caused some Windows workstations to display the “blue screen of death” when users attempt to boot them. Mac and Linux hosts are not affected.

The update was intended for CrowdStrike’s Falcon software, which is “endpoint detection and response” software designed to protect companies’ computer systems from cyberattacks and malware. But instead of working as expected, the update caused computers running Windows software to crash and fail to reboot. Home PCs running Windows are less likely to have been affected, because CrowdStrike is predominantly used by large organizations. Microsoft did not immediately respond to a request for comment.

“The CrowdStrike software works at the low-level operating system layer. Issues at this level make the OS not bootable,” says Lukasz Olejnik, an independent cybersecurity researcher and consultant, and author of Philosophy of Cybersecurity.

Not all computers running Windows were affected in the same way, he says, pointing out that if a machine’s systems had been turned off at the time CrowdStrike pushed out the update (which has since been withdrawn), it wouldn’t have received it.

For the machines running systems that received the mangled update and were rebooted, an automated update from CrowdStrike’s server management infrastructure should suffice, he says.

“But in thousands or millions of cases, this may require manual human intervention,” he adds. “That means a really bad weekend ahead for plenty of IT staff.”

How to manually fix your affected computer

There is a known workaround for Windows computers that requires administrative access to its systems. If you’re affected and have that high level of access, CrowdStrike has recommended the following steps:

1. Boot Windows into safe mode or the Windows Recovery Environment.

2. Navigate to the C:\Windows\System32\drivers\CrowdStrike directory.

3. Locate the file matching “C-00000291*.sys” and delete it.

4. Boot the machine normally.
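
For IT teams comfortable with scripting, the same procedure can be expressed programmatically. The sketch below is illustrative only and not an official CrowdStrike tool; it assumes administrative rights on a machine that has already been booted into safe mode or the recovery environment, per the steps above.

```python
# Hypothetical remediation sketch: locate and delete the faulty CrowdStrike
# channel file described in the manual steps above. Run with admin rights.
import glob
import os

DRIVER_DIR = r"C:\Windows\System32\drivers\CrowdStrike"

def remove_bad_channel_file() -> None:
    matches = glob.glob(os.path.join(DRIVER_DIR, "C-00000291*.sys"))
    if not matches:
        print("No matching channel file found; nothing to do.")
        return
    for path in matches:
        print(f"Deleting {path}")
        os.remove(path)  # fails without administrative privileges

if __name__ == "__main__":
    remove_bad_channel_file()
```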

Sounds simple, right? But while the above fix is fairly easy to administer, it requires someone to apply it physically at each machine, meaning IT teams will need to track down remote machines that have been affected, says Andrew Dwyer of the Department of Information Security at Royal Holloway, University of London.

“We’ve been quite lucky that this is an outage and not an exploitation by a criminal gang or another state,” he says. “It also shows how easy it is to inflict quite significant global damage if you get into the right part of the IT supply chain.”

While fixing the problem is going to cause headaches for IT teams for the next week or so, it’s highly unlikely to cause significant long-term damage to the affected systems—which would not have been the case if it had been ransomware rather than a bungled update, he says.

“If this was a piece of ransomware, there could have been significant outages for months,” he adds. “Without endpoint detection software, many organizations would be in a much more vulnerable place. But they’re critical nodes in the system that have a lot of access to the computer systems that we use.”

Unlocking secure, private AI with confidential computing

All of a sudden, it seems that AI is everywhere, from executive assistant chatbots to AI code assistants.

But despite the proliferation of AI in the zeitgeist, many organizations are proceeding with caution. This is due to the security quagmires AI is perceived to present. For the emerging technology to reach its full potential, data must be secured through every stage of the AI lifecycle, including model training, fine-tuning, and inferencing.

This is where confidential computing comes into play. Vikas Bhatia, head of product for Azure Confidential Computing at Microsoft, explains the significance of this architectural innovation: “AI is being used to provide solutions for a lot of highly sensitive data, whether that’s personal data, company data, or multiparty data,” he says. “Confidential computing is an emerging technology that protects that data when it is in memory and in use. We see a future where model creators who need to protect their IP will leverage confidential computing to safeguard their models and to protect their customer data.”

Understanding confidential computing

“The tech industry has done a great job in ensuring that data stays protected at rest and in transit using encryption,” Bhatia says. “Bad actors can steal a laptop and remove its hard drive but won’t be able to get anything out of it if the data is encrypted by security features like BitLocker. Similarly, nobody can run away with data in the cloud. And data in transit is secure thanks to HTTPS and TLS, which have long been industry standards.”

But data in use, when data is in memory and being operated upon, has typically been harder to secure. Confidential computing addresses this critical gap—what Bhatia calls the “missing third leg of the three-legged data protection stool”—via a hardware-based root of trust.

Essentially, confidential computing ensures the only thing customers need to trust is the data running inside of a trusted execution environment (TEE) and the underlying hardware. “The concept of a TEE is basically an enclave, or I like to use the word ‘box.’ Everything inside that box is trusted, anything outside it is not,” explains Bhatia.

Until recently, confidential computing only worked on central processing units (CPUs). However, NVIDIA has recently brought confidential computing capabilities to the H100 Tensor Core GPU and Microsoft has made this technology available in Azure. This has the potential to protect the entire confidential AI lifecycle—including model weights, training data, and inference workloads.

“Historically, devices such as GPUs were controlled by the host operating system, which, in turn, was controlled by the cloud service provider,” notes Krishnaprasad Hande, Technical Program Manager at Microsoft. “So, in order to meet confidential computing requirements, we needed technological improvements to reduce trust in the host operating system, i.e., its ability to observe or tamper with application workloads when the GPU is assigned to a confidential virtual machine, while retaining sufficient control to monitor and manage the device. NVIDIA and Microsoft have worked together to achieve this.”

Attestation mechanisms are another key component of confidential computing. Attestation allows users to verify the integrity and authenticity of the TEE, and the user code within it, ensuring the environment hasn’t been tampered with. “Customers can validate that trust by running an attestation report themselves against the CPU and the GPU to validate the state of their environment,” says Bhatia.

Additionally, secure key management systems play a critical role in confidential computing ecosystems. “We’ve extended our Azure Key Vault with Managed HSM service which runs inside a TEE,” says Bhatia. “The keys get securely released inside that TEE such that the data can be decrypted.”

Confidential computing use cases and benefits

GPU-accelerated confidential computing has far-reaching implications for AI in enterprise contexts. It also addresses privacy issues that apply to any analysis of sensitive data in the public cloud. This is of particular concern to organizations trying to gain insights from multiparty data while maintaining utmost privacy.

Another of the key advantages of Microsoft’s confidential computing offering is that it requires no code changes on the part of the customer, facilitating seamless adoption. “The confidential computing environment we’re building does not require customers to change a single line of code,” notes Bhatia. “They can redeploy from a non-confidential environment to a confidential environment. It’s as simple as choosing a particular VM size that supports confidential computing capabilities.”

Some industries and use cases that stand to benefit from confidential computing advancements include:

  • Governments and sovereign entities dealing with sensitive data and intellectual property.
  • Healthcare organizations using AI for drug discovery and doctor-patient confidentiality.
  • Banks and financial firms using AI to detect fraud and money laundering through shared analysis without revealing sensitive customer information.
  • Manufacturers optimizing supply chains by securely sharing data with partners.

Further, Bhatia says confidential computing helps facilitate data “clean rooms” for secure analysis in contexts like advertising. “We see a lot of sensitivity around use cases such as advertising and the way customers’ data is being handled and shared with third parties,” he says. “So, in these multiparty computation scenarios, or ‘data clean rooms,’ multiple parties can merge in their data sets, and no single party gets access to the combined data set. Only the code that is authorized will get access.”

The current state—and expected future—of confidential computing

Although large language models (LLMs) have captured attention in recent months, enterprises have found early success with a more scaled-down approach: small language models (SLMs), which are more efficient and less resource-intensive for many use cases. “We can see some targeted SLM models that can run in early confidential GPUs,” notes Bhatia.

This is just the start. Microsoft envisions a future that will support larger models and expanded AI scenarios—a progression that could see AI in the enterprise become less of a boardroom buzzword and more of an everyday reality driving business outcomes. “We’re starting with SLMs and adding in capabilities that allow larger models to run using multiple GPUs and multi-node communication. Over time, [the goal is that] the largest models the world might come up with could run in a confidential environment,” says Bhatia.

Bringing this to fruition will be a collaborative effort. Partnerships among major players like Microsoft and NVIDIA have already propelled significant advancements, and more are on the horizon. Organizations like the Confidential Computing Consortium will also be instrumental in advancing the underpinning technologies needed to make widespread and secure use of enterprise AI a reality.

“We’re seeing a lot of the critical pieces fall into place right now,” says Bhatia. “We don’t question today why something is HTTPS. That’s the world we’re moving toward [with confidential computing], but it’s not going to happen overnight. It’s certainly a journey, and one that NVIDIA and Microsoft are committed to.”

Microsoft Azure customers can start on this journey today with Azure confidential VMs with NVIDIA H100 GPUs. Learn more here.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Housetraining robot dogs: How generative AI might change consumer IoT

As technology goes, the internet of things (IoT) is old: internet-connected devices outnumbered people on Earth around 2008 or 2009, according to a contemporary Cisco report. Since then, IoT has grown rapidly. Researchers say that by the early 2020s, estimates of the number of devices ranged anywhere from the low tens of billions to over 50 billion.

Currently, though, IoT is seeing unusually intense new interest for a long-established technology, even one still experiencing market growth. A sure sign of this buzz is the appearance of acronyms, such as AIoT and GenAIoT, or “artificial intelligence of things” and “generative artificial intelligence of things.”

What is going on? Why now? Examining potential changes to consumer IoT could provide some answers. Consider, specifically, the vast range of areas where the technology finds home and personal uses, from smart home controls through smart watches and other wearables to VR gaming—to name just a handful. The underlying technological changes sparking interest in this specific area mirror those in IoT as a whole.

Rapid advances converging at the edge

IoT is much more than a huge collection of “things,” such as automated sensing devices and the attached actuators that take limited actions. These devices, of course, play a key role. A recent IDC report estimated that all edge devices—many of them IoT ones—account for 20% of the world’s current data generation.

IoT, however, is much more. It is a huge technological ecosystem that encompasses and empowers these devices. This ecosystem is multi-layered, although no single agreed taxonomy exists.

Most analyses will include among the strata the physical devices themselves (sensors, actuators, and other machines with which these immediately interact); the data generated by these devices; the networking and communication technology used to gather and send the generated data to, and to receive information from, other devices or central data stores; and the software applications that draw on such information and other possible inputs, often to suggest or make decisions.

The inherent value from IoT is not the data itself, but the capacity to use it in order to understand what is happening in and around the devices and, in turn, to use these insights, where necessary, to recommend that humans take action or to direct connected devices to do so.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

How gamification took over the world

It’s a thought that occurs to every video-game player at some point: What if the weird, hyper-focused state I enter when playing in virtual worlds could somehow be applied to the real one? 

Often pondered during especially challenging or tedious tasks in meatspace (writing essays, say, or doing your taxes), it’s an eminently reasonable question to ask. Life, after all, is hard. And while video games are too, there’s something almost magical about the way they can promote sustained bouts of superhuman concentration and resolve.

For some, this phenomenon leads to an interest in flow states and immersion. For others, it’s simply a reason to play more games. For a handful of consultants, startup gurus, and game designers in the late 2000s, it became the key to unlocking our true human potential.

In her 2010 TED Talk, “Gaming Can Make a Better World,” the game designer Jane McGonigal called this engaged state “blissful productivity.” “There’s a reason why the average World of Warcraft gamer plays for 22 hours a week,” she said. “It’s because we know when we’re playing a game that we’re actually happier working hard than we are relaxing or hanging out. We know that we are optimized as human beings to do hard and meaningful work. And gamers are willing to work hard all the time.”

McGonigal’s basic pitch was this: By making the real world more like a video game, we could harness the blissful productivity of millions of people and direct it at some of humanity’s thorniest problems—things like poverty, obesity, and climate change. The exact details of how to accomplish this were a bit vague (play more games?), but her objective was clear: “My goal for the next decade is to try to make it as easy to save the world in real life as it is to save the world in online games.”

While the word “gamification” never came up during her talk, by that time anyone following the big-ideas circuit (TED, South by Southwest, DICE, etc.) or using the new Foursquare app would have been familiar with the basic idea. Broadly defined as the application of game design elements and principles to non-game activities—think points, levels, missions, badges, leaderboards, reinforcement loops, and so on—gamification was already being hawked as a revolutionary new tool for transforming education, work, health and fitness, and countless other parts of life. 
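For readers who have never seen that machinery up close, here is a toy sketch in Python of the generic layer of points, badges, and leaderboards described above. It is purely illustrative: the names (GamificationLayer, award_points, BADGE_THRESHOLDS) are hypothetical and do not describe any real product’s system.

```python
# Toy illustration of generic gamification mechanics: points accumulate,
# badges unlock at thresholds, and a leaderboard ranks users against each other.

from collections import defaultdict

BADGE_THRESHOLDS = {100: "Bronze", 500: "Silver", 1000: "Gold"}


class GamificationLayer:
    def __init__(self):
        self.points = defaultdict(int)
        self.badges = defaultdict(set)

    def award_points(self, user: str, amount: int) -> None:
        """Reinforcement loop: every tracked activity earns points and may unlock badges."""
        self.points[user] += amount
        for threshold, badge in BADGE_THRESHOLDS.items():
            if self.points[user] >= threshold:
                self.badges[user].add(badge)

    def leaderboard(self, top_n: int = 3):
        """Competition: rank users by accumulated points."""
        return sorted(self.points.items(), key=lambda kv: kv[1], reverse=True)[:top_n]


if __name__ == "__main__":
    layer = GamificationLayer()
    layer.award_points("ada", 120)   # e.g. daily steps
    layer.award_points("bob", 90)    # e.g. a flossing streak
    print(layer.leaderboard())       # [('ada', 120), ('bob', 90)]
    print(layer.badges["ada"])       # {'Bronze'}
```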

Adding “world-saving” to the list of potential benefits was perhaps inevitable, given the prevalence of that theme in video-game storylines. But it also spoke to gamification’s foundational premise: the idea that reality is somehow broken. According to McGonigal and other gamification boosters, the real world is insufficiently engaging and motivating, and too often it fails to make us happy. Gamification promises to remedy this design flaw by engineering a new reality, one that transforms the dull, difficult, and depressing parts of life into something fun and inspiring. Studying for exams, doing household chores, flossing, exercising, learning a new language—there was no limit to the tasks that could be turned into games, making everything IRL better.

Today, we live in an undeniably gamified world. We stand up and move around to close colorful rings and earn achievement badges on our smartwatches; we meditate and sleep to recharge our body batteries; we plant virtual trees to be more productive; we chase “likes” and “karma” on social media sites and try to swipe our way toward social connection. And yet for all the crude gamelike elements that have been grafted onto our lives, the more hopeful and collaborative world that gamification promised more than a decade ago seems as far away as ever. Instead of liberating us from drudgery and maximizing our potential, gamification turned out to be just another tool for coercion, distraction, and control. 

Con game

This was not an unforeseeable outcome. From the start, a small but vocal group of journalists and game designers warned against the fairy-tale thinking and facile view of video games that they saw in the concept of gamification. Adrian Hon, author of You’ve Been Played, a recent book that chronicles its dangers, was one of them. 

“As someone who was building so-called ‘serious games’ at the time the concept was taking off, I knew that a lot of the claims being made around the possibility of games to transform people’s behaviors and change the world were completely overblown,” he says. 

Hon isn’t some knee-jerk polemicist. A trained neuroscientist who switched to a career in game design and development, he’s the co-creator of Zombies, Run!—one of the most popular gamified fitness apps in the world. While he still believes games can benefit and enrich aspects of our nongaming lives, Hon says a one-size-fits-all approach is bound to fail. For this reason, he’s firmly against both the superficial layering of generic points, leaderboards, and missions atop everyday activities and the more coercive forms of gamification that have invaded the workplace.

Illustration: three snakes in concentric circles (Selman Design)

Ironically, it’s these broad and varied uses that make criticizing the practice so difficult. As Hon notes in his book, gamification has always been a fast-moving target, varying dramatically in scale, scope, and technology over the years. As the concept has evolved, so too have its applications, whether you think of the gambling mechanics that now encourage users of dating apps to keep swiping, the “quests” that compel exhausted Uber drivers to complete just a few more trips, or the utopian ambition of using gamification to save the world.

In the same way that AI’s lack of a fixed definition today makes it easy to dismiss any one critique for not addressing some other potential definition of it, gamification’s varied interpretations have let its defenders wave off criticism. “I remember giving talks critical of gamification at gamification conferences, and people would come up to me afterwards and be like, ‘Yeah, bad gamification is bad, right? But we’re doing good gamification,’” says Hon. (They weren’t.)

For some critics, the very idea of “good gamification” was anathema. Their main gripe with the term and practice was, and remains, that it has little to nothing to do with actual games.

“A game is about play and disruption and creativity and ambiguity and surprise,” wrote the late Jeff Watson, a game designer, writer, and educator who taught at the University of Southern California’s School of Cinematic Arts. Gamification is about the opposite—the known, the badgeable, the quantifiable. “It’s about ‘checking in,’ being tracked … [and] becoming more regimented. It’s a surveillance and discipline system—a wolf in sheep’s clothing. Beware its lure.”

Another game designer, Margaret Robertson, has argued that gamification should really be called “pointsification,” writing: “What we’re currently terming gamification is in fact the process of taking the thing that is least essential to games and representing it as the core of the experience. Points and badges have no closer a relationship to games than they do to websites and fitness apps and loyalty cards.”

For the author and game designer Ian Bogost, the entire concept amounted to a marketing gimmick. In a now-famous essay published in the Atlantic in 2011, he likened gamification to the moral philosopher Harry Frankfurt’s definition of bullshit—that is, a strategy intended to persuade or coerce without regard for actual truth. 

“The idea of learning or borrowing lessons from game design and applying them to other areas was never the issue for me,” Bogost told me. “Rather, it was not doing that—acknowledging that there’s something mysterious, powerful, and compelling about games, but rather than doing the hard work, doing no work at all and absconding with the spirit of the form.” 

Gaming the system

So how did a misleading term for a misunderstood process that’s probably just bullshit come to infiltrate virtually every part of our lives? There’s no one simple answer. But gamification’s meteoric rise starts to make a lot more sense when you look at the period that gave birth to the idea. 

The late 2000s and early 2010s were, as many have noted, a kind of high-water mark for techno-optimism. For people both inside the tech industry and out, there was a sense that humanity had finally wrapped its arms around a difficult set of problems, and that technology was going to help us squeeze out some solutions. The Arab Spring bloomed in 2011 with the help of platforms like Facebook and Twitter, money was more or less free, and “____ can save the world” articles were legion (with ____ being everything from “eating bugs” to “design thinking”).

This was also the era that produced the 10,000-hours rule of success, the long tail, the four-hour workweek, the wisdom of crowds, nudge theory, and a number of other highly simplistic (or, often, flat-out wrong) theories about the way humans, the internet, and the world work. 

Adding video games to this heady stew of optimism gave the game industry something it had long sought but never achieved: legitimacy. Even with games ascendant in popular culture—and on track to eclipse both the film and music industries in terms of revenue—they still were largely seen as a frivolous, productivity-squandering, violence-encouraging form of entertainment. Seemingly overnight, gamification changed all that.

“There was definitely this black-sheep mentality in the game development community—the sense that what we had been doing for decades was just a joke to people,” says Bogost. “All of a sudden you had VC money and all sorts of important, high-net-worth people showing up at game developer conferences, and it was like, ‘Finally someone’s noticing. They realize that we have something to offer.’”

This wasn’t just flattering; it was intoxicating. Gamification took a derided pursuit and recast it as a force for positive change, a way to make the real world better. While enthusiastic calls to “build a game layer on top of reality” may sound dystopian to many of us today, the sentiment didn’t necessarily have the same ominous undertones at the end of the aughts.

Combine the cultural recasting of games with an array of cheaper and faster technologies—GPS, ubiquitous and reliable mobile internet, powerful smartphones, Web 2.0 tools and services—and you arguably had all the ingredients needed for gamification’s rise. In a very real sense, reality in 2010 was ready to be gamified. Or to put it a slightly different way: Gamification was an idea perfectly suited for its moment. 

Gaming behavior

Fine, you might be asking at this point, but does it work? Surely, companies like Apple, Uber, Strava, Microsoft, Garmin, and others wouldn’t bother gamifying their products and services if there were no evidence of the strategy’s efficacy. The answer to the question, unfortunately, is super annoying: Define work.

Because gamification is so pervasive and varied, it’s hard to address its effectiveness in any direct or comprehensive way. But one can confidently say this: Gamification did not save the world. Climate change still exists. As do obesity, poverty, and war. Much of generic gamification’s power supposedly resides in its ability to nudge or steer us toward, or away from, certain behaviors using competition (challenges and leaderboards), rewards (points and achievement badges), and other sources of positive and negative feedback. 

On that front, the results are mixed. Nudge theory lost much of its shine with academics in 2022 after a meta-analysis of previous studies concluded that, after correcting for publication bias, there wasn’t much evidence it worked to change behavior at all. Still, there are a lot of ways to nudge and a lot of behaviors to modify. The fact remains that plenty of people claim to be highly motivated to close their rings, earn their sleep crowns, or hit or exceed some increasingly ridiculous number of steps on their Fitbits (see humorist David Sedaris). 

Sebastian Deterding, a leading researcher in the field, argues that gamification can work, but its successes tend to be really hard to replicate. Not only do academics not know what works, when, and how, according to Deterding, but “we mostly have just-so stories without data or empirical testing.” 

Illustration: an 8-bit carrot dangling from a stick (Selman Design)

In truth, gamification acolytes were always pulling from an old playbook—one that dates back to the early 20th century. Then, behaviorists like John Watson and B.F. Skinner saw human behaviors (a category that for Skinner included thoughts, actions, feelings, and emotions) not as the products of internal mental states or cognitive processes but, rather, as the result of external forces—forces that could conveniently be manipulated. 

If Skinner’s theory of operant conditioning, which doled out rewards to positively reinforce certain behaviors, sounds a lot like Amazon’s “Fulfillment Center Games,” which dole out rewards to compel workers to work harder, faster, and longer—well, that’s not a coincidence. Gamification is, and has always been, a way to induce specific behaviors in people using virtual carrots and sticks. 

Sometimes this may work; other times not. But ultimately, as Hon points out, the question of efficacy may be beside the point. “There is no before or after to compare against if your life is always being gamified,” he writes. “There isn’t even a static form of gamification that can be measured, since the design of coercive gamification is always changing, a moving target that only goes toward greater and more granular intrusion.” 

The game of life

Like any other art form, video games offer a staggering array of possibilities. They can educate, entertain, foster social connection, inspire, and encourage us to see the world in different ways. Some of the best ones manage to do all of this at once.

Yet for many of us, there’s the sense today that we’re stuck playing an exhausting game that we didn’t opt into. This one assumes that our behaviors can be changed with shiny digital baubles, constant artificial competition, and meaningless prizes. Even more insulting, the game acts as if it exists for our benefit—promising to make us fitter, happier, and more productive—when in truth it’s really serving the commercial and business interests of its makers. 

Metaphors can be an imperfect but necessary way to make sense of the world. Today, it’s not uncommon to hear talk of leveling up, having a God Mode mindset, gaining XP, and turning life’s difficulty settings up (or down). But the metaphor that resonates most for me—the one that seems to neatly capture our current predicament—is that of the NPC, or non-player character.  

NPCs are the “Sisyphean machines” of video games, programmed to follow a defined script forever and never question or deviate. They’re background players in someone else’s story, typically tasked with furthering a specific plotline or performing some manual labor. To call someone an NPC in real life is to accuse them of just going through the motions, not thinking for themselves, not being able to make their own decisions. This, for me, is gamification’s real end result. It’s acquiescence pretending to be empowerment. It strips away the very thing that makes games unique—a sense of agency—and then tries to mask that with crude stand-ins for accomplishment.

So what can we do? Given the reach and pervasiveness of gamification, critiquing it at this point can feel a little pointless, like railing against capitalism. And yet its own failed promises may point the way to a possible respite. If gamifying the world has turned our lives into a bad version of a video game, perhaps this is the perfect moment to reacquaint ourselves with why actual video games are great in the first place. Maybe, to borrow an idea from McGonigal, we should all start playing better games. 

Bryan Gardiner is a writer based in Oakland, California.