How gamification took over the world

It’s a thought that occurs to every video-game player at some point: What if the weird, hyper-focused state I enter when playing in virtual worlds could somehow be applied to the real one? 

Often pondered during especially challenging or tedious tasks in meatspace (writing essays, say, or doing your taxes), it’s an eminently reasonable question to ask. Life, after all, is hard. And while video games are too, there’s something almost magical about the way they can promote sustained bouts of superhuman concentration and resolve.

For some, this phenomenon leads to an interest in flow states and immersion. For others, it’s simply a reason to play more games. For a handful of consultants, startup gurus, and game designers in the late 2000s, it became the key to unlocking our true human potential.

In her 2010 TED Talk, “Gaming Can Make a Better World,” the game designer Jane McGonigal called this engaged state “blissful productivity.” “There’s a reason why the average World of Warcraft gamer plays for 22 hours a week,” she said. “It’s because we know when we’re playing a game that we’re actually happier working hard than we are relaxing or hanging out. We know that we are optimized as human beings to do hard and meaningful work. And gamers are willing to work hard all the time.”

McGonigal’s basic pitch was this: By making the real world more like a video game, we could harness the blissful productivity of millions of people and direct it at some of humanity’s thorniest problems—things like poverty, obesity, and climate change. The exact details of how to accomplish this were a bit vague (play more games?), but her objective was clear: “My goal for the next decade is to try to make it as easy to save the world in real life as it is to save the world in online games.”

While the word “gamification” never came up during her talk, by that time anyone following the big-ideas circuit (TED, South by Southwest, DICE, etc.) or using the new Foursquare app would have been familiar with the basic idea. Broadly defined as the application of game design elements and principles to non-game activities—think points, levels, missions, badges, leaderboards, reinforcement loops, and so on—gamification was already being hawked as a revolutionary new tool for transforming education, work, health and fitness, and countless other parts of life. 


Adding “world-saving” to the list of potential benefits was perhaps inevitable, given the prevalence of that theme in video-game storylines. But it also spoke to gamification’s foundational premise: the idea that reality is somehow broken. According to McGonigal and other gamification boosters, the real world is insufficiently engaging and motivating, and too often it fails to make us happy. Gamification promises to remedy this design flaw by engineering a new reality, one that transforms the dull, difficult, and depressing parts of life into something fun and inspiring. Studying for exams, doing household chores, flossing, exercising, learning a new language—there was no limit to the tasks that could be turned into games, making everything IRL better.

Today, we live in an undeniably gamified world. We stand up and move around to close colorful rings and earn achievement badges on our smartwatches; we meditate and sleep to recharge our body batteries; we plant virtual trees to be more productive; we chase “likes” and “karma” on social media sites and try to swipe our way toward social connection. And yet for all the crude gamelike elements that have been grafted onto our lives, the more hopeful and collaborative world that gamification promised more than a decade ago seems as far away as ever. Instead of liberating us from drudgery and maximizing our potential, gamification turned out to be just another tool for coercion, distraction, and control. 

Con game

This was not an unforeseeable outcome. From the start, a small but vocal group of journalists and game designers warned against the fairy-tale thinking and facile view of video games that they saw in the concept of gamification. Adrian Hon, author of You’ve Been Played, a recent book that chronicles its dangers, was one of them. 

“As someone who was building so-called ‘serious games’ at the time the concept was taking off, I knew that a lot of the claims being made around the possibility of games to transform people’s behaviors and change the world were completely overblown,” he says. 

Hon isn’t some knee-jerk polemicist. A trained neuroscientist who switched to a career in game design and development, he’s the co-creator of Zombies, Run!—one of the most popular gamified fitness apps in the world. While he still believes games can benefit and enrich aspects of our nongaming lives, Hon says a one-size-fits-all approach is bound to fail. For this reason, he’s firmly against both the superficial layering of generic points, leaderboards, and missions atop everyday activities and the more coercive forms of gamification that have invaded the workplace.

[Illustration: three snakes in concentric circles. Credit: Selman Design]

Ironically, it’s these broad and varied uses that make criticizing the practice so difficult. As Hon notes in his book, gamification has always been a fast-moving target, varying dramatically in scale, scope, and technology over the years. As the concept has evolved, so too have its applications, whether you think of the gambling mechanics that now encourage users of dating apps to keep swiping, the “quests” that compel exhausted Uber drivers to complete just a few more trips, or the utopian ambition of using gamification to save the world.

In the same way that AI’s lack of a fixed definition today makes it easy to dismiss any one critique for not addressing some other potential definition of it, gamification’s varied interpretations let defenders wave away any single criticism. “I remember giving talks critical of gamification at gamification conferences, and people would come up to me afterwards and be like, ‘Yeah, bad gamification is bad, right? But we’re doing good gamification,’” says Hon. (They weren’t.)

For some critics, the very idea of “good gamification” was anathema. Their main gripe with the term and practice was, and remains, that it has little to nothing to do with actual games.

“A game is about play and disruption and creativity and ambiguity and surprise,” wrote the late Jeff Watson, a game designer, writer, and educator who taught at the University of Southern California’s School of Cinematic Arts. Gamification is about the opposite—the known, the badgeable, the quantifiable. “It’s about ‘checking in,’ being tracked … [and] becoming more regimented. It’s a surveillance and discipline system—a wolf in sheep’s clothing. Beware its lure.”

Another game designer, Margaret Robertson, has argued that gamification should really be called “pointsification,” writing: “What we’re currently terming gamification is in fact the process of taking the thing that is least essential to games and representing it as the core of the experience. Points and badges have no closer a relationship to games than they do to websites and fitness apps and loyalty cards.”

For the author and game designer Ian Bogost, the entire concept amounted to a marketing gimmick. In a now-famous essay published in the Atlantic in 2011, he likened gamification to the moral philosopher Harry Frankfurt’s definition of bullshit—that is, a strategy intended to persuade or coerce without regard for actual truth. 

“The idea of learning or borrowing lessons from game design and applying them to other areas was never the issue for me,” Bogost told me. “Rather, it was not doing that—acknowledging that there’s something mysterious, powerful, and compelling about games, but rather than doing the hard work, doing no work at all and absconding with the spirit of the form.” 

Gaming the system

So how did a misleading term for a misunderstood process that’s probably just bullshit come to infiltrate virtually every part of our lives? There’s no one simple answer. But gamification’s meteoric rise starts to make a lot more sense when you look at the period that gave birth to the idea. 

The late 2000s and early 2010s were, as many have noted, a kind of high-water mark for techno-optimism. For people both inside the tech industry and out, there was a sense that humanity had finally wrapped its arms around a difficult set of problems, and that technology was going to help us squeeze out some solutions. The Arab Spring bloomed in 2011 with the help of platforms like Facebook and Twitter, money was more or less free, and “____ can save the world” articles were legion (with ____ being everything from “eating bugs” to “design thinking”).

This was also the era that produced the 10,000-hours rule of success, the long tail, the four-hour workweek, the wisdom of crowds, nudge theory, and a number of other highly simplistic (or, often, flat-out wrong) theories about the way humans, the internet, and the world work. 


Adding video games to this heady stew of optimism gave the game industry something it had long sought but never achieved: legitimacy. Even with games ascendant in popular culture—and on track to eclipse both the film and music industries in terms of revenue—they were still largely seen as a frivolous, productivity-squandering, violence-encouraging form of entertainment. Seemingly overnight, gamification changed all that.

“There was definitely this black-sheep mentality in the game development community—the sense that what we had been doing for decades was just a joke to people,” says Bogost. “All of a sudden you had VC money and all sorts of important, high-net-worth people showing up at game developer conferences, and it was like, ‘Finally someone’s noticing. They realize that we have something to offer.’”

This wasn’t just flattering; it was intoxicating. Gamification took a derided pursuit and recast it as a force for positive change, a way to make the real world better. While enthusiastic calls to “build a game layer on top of reality” may sound dystopian to many of us today, the sentiment didn’t necessarily have the same ominous undertones at the end of the aughts.

Combine the cultural recasting of games with an array of cheaper and faster technologies—GPS, ubiquitous and reliable mobile internet, powerful smartphones, Web 2.0 tools and services—and you arguably had all the ingredients needed for gamification’s rise. In a very real sense, reality in 2010 was ready to be gamified. Or to put it a slightly different way: Gamification was an idea perfectly suited for its moment. 

Gaming behavior

Fine, you might be asking at this point, but does it work? Surely, companies like Apple, Uber, Strava, Microsoft, Garmin, and others wouldn’t bother gamifying their products and services if there were no evidence of the strategy’s efficacy. The answer to the question, unfortunately, is super annoying: Define work.

Because gamification is so pervasive and varied, it’s hard to address its effectiveness in any direct or comprehensive way. But one can confidently say this: Gamification did not save the world. Climate change still exists. As do obesity, poverty, and war. Much of generic gamification’s power supposedly resides in its ability to nudge or steer us toward, or away from, certain behaviors using competition (challenges and leaderboards), rewards (points and achievement badges), and other sources of positive and negative feedback. 


On that front, the results are mixed. Nudge theory lost much of its shine with academics in 2022 after a meta-analysis of previous studies concluded that, after correcting for publication bias, there wasn’t much evidence it worked to change behavior at all. Still, there are a lot of ways to nudge and a lot of behaviors to modify. The fact remains that plenty of people claim to be highly motivated to close their rings, earn their sleep crowns, or hit or exceed some increasingly ridiculous number of steps on their Fitbits (see humorist David Sedaris). 

Sebastian Deterding, a leading researcher in the field, argues that gamification can work, but its successes tend to be really hard to replicate. Not only do academics not know what works, when, and how, according to Deterding, but “we mostly have just-so stories without data or empirical testing.” 

[Illustration: an 8-bit carrot dangling from a stick. Credit: Selman Design]

In truth, gamification acolytes were always pulling from an old playbook—one that dates back to the early 20th century. Then, behaviorists like John Watson and B.F. Skinner saw human behaviors (a category that for Skinner included thoughts, actions, feelings, and emotions) not as the products of internal mental states or cognitive processes but, rather, as the result of external forces—forces that could conveniently be manipulated. 

If Skinner’s theory of operant conditioning, which doled out rewards to positively reinforce certain behaviors, sounds a lot like Amazon’s “Fulfillment Center Games,” which dole out rewards to compel workers to work harder, faster, and longer—well, that’s not a coincidence. Gamification is, and has always been, a way to induce specific behaviors in people using virtual carrots and sticks. 

Sometimes this may work; other times not. But ultimately, as Hon points out, the question of efficacy may be beside the point. “There is no before or after to compare against if your life is always being gamified,” he writes. “There isn’t even a static form of gamification that can be measured, since the design of coercive gamification is always changing, a moving target that only goes toward greater and more granular intrusion.” 

The game of life

Like any other art form, video games offer a staggering array of possibilities. They can educate, entertain, foster social connection, inspire, and encourage us to see the world in different ways. Some of the best ones manage to do all of this at once.

Yet for many of us, there’s the sense today that we’re stuck playing an exhausting game that we didn’t opt into. This one assumes that our behaviors can be changed with shiny digital baubles, constant artificial competition, and meaningless prizes. Even more insulting, the game acts as if it exists for our benefit—promising to make us fitter, happier, and more productive—when in truth it’s really serving the commercial and business interests of its makers. 

Metaphors can be an imperfect but necessary way to make sense of the world. Today, it’s not uncommon to hear talk of leveling up, having a God Mode mindset, gaining XP, and turning life’s difficulty settings up (or down). But the metaphor that resonates most for me—the one that seems to neatly capture our current predicament—is that of the NPC, or non-player character.  

NPCs are the “Sisyphean machines” of video games, programmed to follow a defined script forever and never question or deviate. They’re background players in someone else’s story, typically tasked with furthering a specific plotline or performing some manual labor. To call someone an NPC in real life is to accuse them of just going through the motions, not thinking for themselves, not being able to make their own decisions. This, for me, is gamification’s real end result. It’s acquiescence pretending to be empowerment. It strips away the very thing that makes games unique—a sense of agency—and then tries to mask that with crude stand-ins for accomplishment.

So what can we do? Given the reach and pervasiveness of gamification, critiquing it at this point can feel a little pointless, like railing against capitalism. And yet its own failed promises may point the way to a possible respite. If gamifying the world has turned our lives into a bad version of a video game, perhaps this is the perfect moment to reacquaint ourselves with why actual video games are great in the first place. Maybe, to borrow an idea from McGonigal, we should all start playing better games. 

Bryan Gardiner is a writer based in Oakland, California. 

How a simple circuit could offer an alternative to energy-intensive GPUs

On a table in his lab at the University of Pennsylvania, physicist Sam Dillavou has connected an array of breadboards via a web of brightly colored wires. The setup looks like a DIY home electronics project—and not a particularly elegant one. But this unassuming assembly, which contains 32 variable resistors, can learn to sort data like a machine-learning model.

While its current capability is rudimentary, the hope is that the prototype will offer a low-power alternative to the energy-guzzling graphics processing unit (GPU) chips widely used in machine learning. 

“Each resistor is simple and kind of meaningless on its own,” says Dillavou. “But when you put them in a network, you can train them to do a variety of things.”

[Photo: Sam Dillavou’s laboratory at the University of Pennsylvania uses circuits composed of resistors to perform simple machine-learning classification tasks. Credit: Felice Macera]

One task the circuit has already performed is classifying flowers by properties such as petal length and width. Given these measurements, the circuit could sort flowers into three species of iris. This kind of activity is known as a “linear” classification problem, because when the iris data are plotted on a graph, they can be cleanly divided into the correct categories using straight lines. In practice, the researchers represented the flower measurements as voltages, which they fed as input into the circuit. The circuit then produced an output voltage, which corresponded to one of the three species. 

This is a fundamentally different way of encoding data from the approach used in GPUs, which represent information as binary 1s and 0s. In this circuit, information can take on a maximum or minimum voltage or anything in between. The circuit classified 120 irises with 95% accuracy. 
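To make the iris task concrete, here is a minimal software analogue written in Python with scikit-learn. It is only an illustration of what a linear classifier does with this classic data set; the lab’s circuit solves the same kind of problem with voltages and resistors, not with the logistic-regression model used here.

```python
# A minimal software stand-in for the iris task (illustrative only;
# the lab's circuit does this in analog hardware, not in code).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)      # 150 flowers, 4 measurements each
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

clf = LogisticRegression(max_iter=1000)  # a linear decision boundary
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")  # typically ~0.95 or better
```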

Now the team has managed to make the circuit perform a more complex problem. In a preprint currently under review, the researchers have shown that it can perform a logic operation known as XOR, in which the circuit takes in two binary numbers and determines whether the two inputs differ. This is a “nonlinear” classification task, says Dillavou, and “nonlinearities are the secret sauce behind all machine learning.” 
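To see why XOR counts as nonlinear, it helps to write out the truth table. The sketch below (our illustration, not the team’s circuit) shows that no single linear rule on the two raw inputs can reproduce XOR, but adding one nonlinear feature—the product of the inputs—lets a simple weighted sum get every case right.

```python
import numpy as np

# XOR truth table: the output is 1 exactly when the two inputs differ.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# No straight line through the four input points can separate the two
# classes, which is what makes XOR a "nonlinear" problem. Add a single
# nonlinear feature (x1 * x2) and a linear rule with hand-picked
# weights suffices:
x1, x2 = X[:, 0], X[:, 1]
score = 2 * x1 + 2 * x2 - 4 * (x1 * x2) - 1
pred = (score > 0).astype(int)
print(pred)   # [0 1 1 0] -- matches y
```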

Their demonstrations are a walk in the park for the devices you use every day. But that’s not the point: Dillavou and his colleagues built this circuit as an exploratory effort to find better computing designs. The computing industry faces an existential challenge as it strives to deliver ever more powerful machines. Between 2012 and 2018, the computing power required for cutting-edge AI models increased 300,000-fold. Now, training a large language model takes the same amount of energy as the annual consumption of more than a hundred US homes. Dillavou hopes that his design offers an alternative, more energy-efficient approach to building faster AI.

Training in pairs

To perform its various tasks correctly, the circuitry requires training, just like contemporary machine-learning models that run on conventional computing chips. ChatGPT, for example, learned to generate human-sounding text after being shown many instances of real human text; the circuit learned to predict which measurements corresponded to which type of iris after being shown flower measurements labeled with their species. 

Training the device involves using a second, identical circuit to “instruct” the first device. Both circuits start with the same resistance values for each of their 32 variable resistors. Dillavou feeds both circuits the same inputs—a voltage corresponding to, say, petal width—and adjusts the output voltage of the second circuit to correspond to the correct species. The first circuit receives feedback from that second circuit, and both circuits adjust their resistances so they converge on the same values. The cycle starts again with a new input, until the circuits have settled on a set of resistance levels that produce the correct output for the training examples. In essence, the team trains the device via a method known as supervised learning, where an AI model learns from labeled data to predict the labels for new examples.
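The loop below is a toy, purely software stand-in for that last idea—supervised learning from labeled examples—using a textbook delta-rule update on a vector of 32 adjustable parameters. It is not the lab’s physical learning rule or a simulation of the paired circuits; it only illustrates the rhythm of “show an example, compare the output to the target, nudge every parameter, repeat.”

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised-learning loop: 32 adjustable parameters stand in
# loosely for the 32 variable resistors. This is a textbook delta-rule
# update, NOT the physical rule the circuits use.
n_params = 32
weights = rng.normal(scale=0.1, size=n_params)   # the "student"
teacher = rng.normal(size=n_params)              # defines the correct answers

def output(x, w):
    return x @ w   # stand-in for "apply input voltages, read the output voltage"

for _ in range(5000):
    x = rng.normal(size=n_params)        # a training input
    target = output(x, teacher)          # the label for that input
    error = target - output(x, weights)  # feedback signal
    weights += 0.01 * error * x          # every parameter adjusts locally

print(np.max(np.abs(weights - teacher)))  # tiny: the student has matched the teacher
```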

It can help, Dillavou says, to think of the electric current in the circuit as water flowing through a network of pipes. The equations governing fluid flow are analogous to those governing electron flow and voltage. Voltage corresponds to fluid pressure, while electrical resistance corresponds to the pipe diameter. During training, the “pipes” in different parts of the network adjust their diameters in order to achieve the desired output pressure. In fact, early on, the team considered building the circuit out of water pipes rather than electronics. 

For Dillavou, one fascinating aspect of the circuit is what he calls its “emergent learning.” In a human, “every neuron is doing its own thing,” he says. “And then as an emergent phenomenon, you learn. You have behaviors. You ride a bike.” It’s similar in the circuit. Each resistor adjusts itself according to a simple rule, but collectively they “find” the answer to a more complicated question without any explicit instructions. 

A potential energy advantage

Dillavou’s prototype qualifies as a type of analog computer—one that encodes information along a continuum of values instead of the discrete 1s and 0s used in digital circuitry. The first computers were analog, but their digital counterparts superseded them after engineers developed fabrication techniques to squeeze more transistors onto digital chips to boost their speed. Still, experts have long known that as they increase in computational power, analog computers offer better energy efficiency than digital computers, says Aatmesh Shrivastava, an electrical engineer at Northeastern University. “The power efficiency benefits are not up for debate,” he says. However, he adds, analog signals are much noisier than digital ones, which makes them ill-suited for any computing tasks that require high precision.

In practice, Dillavou’s circuit hasn’t yet surpassed digital chips in energy efficiency. His team estimates that their design uses about 5 to 20 picojoules per resistor to generate a single output, where each resistor represents a single parameter in a neural network. Dillavou says this is about a tenth as efficient as state-of-the-art AI chips. But he says that the promise of the analog approach lies in scaling the circuit up, to increase its number of resistors and thus its computing power.

He explains the potential energy savings this way: Digital chips like GPUs expend energy per operation, so making a chip that can perform more operations per second just means a chip that uses more energy per second. In contrast, the energy usage of his analog computer is based on how long it is on. Should they make their computer twice as fast, it would also become twice as energy efficient. 
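A back-of-envelope sketch of that scaling argument, with all numbers invented purely for illustration: in the digital case the energy bill is pinned to the number of operations, while in the analog case it is pinned to how long the circuit is switched on.

```python
# Illustrative numbers only -- what matters is the shape of each formula.
ops_per_task = 1e9                 # a fixed workload

# Digital: energy scales with operations, so speeding up the chip
# doesn't change the energy spent on this task.
energy_per_op_joules = 1e-12
digital_energy = ops_per_task * energy_per_op_joules
print(f"digital: {digital_energy:.1e} J at any speed")

# Analog: energy scales with time switched on, so a faster run is a
# cheaper run for the same task.
analog_power_watts = 1e-3
for runtime_seconds in (2.0, 1.0, 0.5):
    print(f"analog, {runtime_seconds} s run: {analog_power_watts * runtime_seconds:.1e} J")
```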

Dillavou’s circuit is also a type of neuromorphic computer, meaning one inspired by the brain. Like other neuromorphic schemes, the researchers’ circuitry doesn’t operate according to top-down instruction the way a conventional computer does. Instead, the resistors adjust their values in response to external feedback in a bottom-up approach, similar to how neurons respond to stimuli. In addition, the device does not have a dedicated component for memory. This could offer another energy efficiency advantage, since a conventional computer expends a significant amount of energy shuttling data between processor and memory. 

While researchers have already built a variety of neuromorphic machines based on different materials and designs, the most technologically mature designs are built on semiconducting chips. One example is Intel’s neuromorphic computer Loihi 2, to which the company began providing access for government, academic, and industry researchers in 2021. DeepSouth, a chip-based neuromorphic machine at Western Sydney University that is designed to be able to simulate the synapses of the human brain at scale, is scheduled to come online this year.

The machine-learning industry has shown interest in chip-based neuromorphic computing as well, with a San Francisco–based startup called Rain Neuromorphics raising $25 million in February. However, researchers still haven’t found a commercial application where neuromorphic computing definitively demonstrates an advantage over conventional computers. In the meantime, researchers like Dillavou’s team are putting forth new schemes to push the field forward. A few people in industry have expressed interest in his circuit. “People are most interested in the energy efficiency angle,” says Dillavou. 

But their design is still a prototype, with its energy savings unconfirmed. For their demonstrations, the team kept the circuit on breadboards because it’s “the easiest to work with and the quickest to change things,” says Dillavou, but the format suffers from all sorts of inefficiencies. They are testing their device on printed circuit boards to improve its energy efficiency, and they plan to scale up the design so it can perform more complicated tasks. It remains to be seen whether their clever idea can take hold out of the lab.

Industry- and AI-focused cloud transformation

For years, cloud technology has demonstrated its ability to cut costs, improve efficiencies, and boost productivity. But today’s organizations are looking to cloud for more than simply operational gains. Faced with an ever-evolving regulatory landscape, a complex business environment, and rapid technological change, organizations are increasingly recognizing cloud’s potential to catalyze business transformation.

Cloud can transform business by making it ready for AI and other emerging technologies. The global consultancy McKinsey projects that a staggering $3 trillion in value could be created by cloud transformations by 2030. Key value drivers range from innovation-driven growth to accelerated product development.

“As applications move to the cloud, more and more opportunities are getting unlocked,” says Vinod Mamtani, vice president and general manager of generative AI services for Oracle Cloud Infrastructure. “For example, the application of AI and generative AI are transforming businesses in deep ways.”

No longer simply a software and infrastructure upgrade, cloud is now a powerful technology capable of accelerating innovation, improving agility, and supporting emerging tools. In order to capitalize on cloud’s competitive advantages, however, businesses must ask for more from their cloud transformations.

Every business operates in its own context, and so a strong cloud solution should have built-in support for industry-specific best practices. And because emerging technology increasingly drives all businesses, an effective cloud platform must be ready for AI and the immense impacts it will have on the way organizations operate and employees work.

An industry-specific approach

The imperative for cloud transformation is evident: In today’s fast-paced business environment, cloud can help organizations enhance innovation, scalability, agility, and speed while simultaneously alleviating the burden on time-strapped IT teams. Yet most organizations have not fully made the leap to cloud. McKinsey, for example, reports a broad mismatch between leading companies’ cloud aspirations and realities—though nearly all organizations say they aspire to run the majority of their applications in the cloud within the decade, the average organization has currently relocated only 15–20% of them.

Cloud solutions that take an industry-specific approach can help companies meet their business needs more easily, making cloud adoption faster, smoother, and more immediately useful. “Cloud requirements can vary significantly across vertical industries due to differences in compliance requirements, data sensitivity, scalability, and specific business objectives,” says Deviprasad Rambhatla, senior vice president and sector head of retail services and transportation at Wipro.

Health-care organizations, for instance, need to manage sensitive patient data while complying with strict regulations such as HIPAA. As a result, cloud solutions for that industry must ensure features such as high availability, disaster recovery capabilities, and continuous access to critical patient information.

Retailers, on the other hand, are more likely to experience seasonal business fluctuations, requiring cloud solutions that allow for greater flexibility. “Cloud solutions allow retailers to scale infrastructure on an up-and-down basis,” says Rambhatla. “Moreover, they’re able to do it on demand, ensuring optimal performance and cost efficiency.”

Cloud-based applications can also be tailored to meet the precise requirements of a particular industry. For retailers, these might include analytics tools that ingest vast volumes of data and generate insights that help the business better understand consumer behavior and anticipate market trends.


This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Optimizing the supply chain with a data lakehouse

When a commercial ship travels from the port of Ras Tanura in Saudi Arabia to Tokyo Bay, it’s not only carrying cargo; it’s also transporting millions of data points across a wide array of partners and complex technology systems.

Consider, for example, Maersk. The global shipping container and logistics company has more than 100,000 employees and offices in 120 countries, and it operates about 800 container ships, each able to hold 18,000 tractor-trailer containers. From manufacture to delivery, the items within these containers carry hundreds or thousands of data points, highlighting the amount of supply chain data organizations manage on a daily basis.

Until recently, access to the bulk of an organization’s supply chain data has been limited to specialists, distributed across myriad data systems. Constrained by the limitations of traditional data warehouses, maintaining that data requires considerable engineering effort, heavy oversight, and substantial financial commitment. Today, a huge amount of data—generated by an increasingly digital supply chain—languishes in data lakes without ever being made available to the business.

A 2023 Boston Consulting Group survey notes that 56% of managers say that although investment in modernizing data architectures continues, managing data operating costs remains a major pain point. The consultancy also expects data deluge issues to worsen as the volume of data generated grows at an annual rate of 21% from 2021 to 2024, to 149 zettabytes globally.

“Data is everywhere,” says Mark Sear, director of AI, data, and integration at Maersk. “Just consider the life of a product and what goes into transporting a computer mouse from China to the United Kingdom. You have to work out how you get it from the factory to the port, the port to the next port, the port to the warehouse, and the warehouse to the consumer. There are vast amounts of data points throughout that journey.”

Sear says organizations that manage to integrate these rich sets of data are poised to reap valuable business benefits. “Every single data point is an opportunity for improvement—to improve profitability, knowledge, our ability to price correctly, our ability to staff correctly, and to satisfy the customer,” he says.

Organizations like Maersk are increasingly turning to a data lakehouse architecture. By combining the cost-effective scale of a data lake with the capability and performance of a data warehouse, a data lakehouse promises to help companies unify disparate supply chain data and provide a larger group of users with access to data, including structured, semi-structured, and unstructured data. Building analytics on top of the lakehouse not only allows this new architectural approach to advance supply chain efficiency with better performance and governance, but it can also support easy and immediate data analysis and help reduce operational costs.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Almost every Chinese keyboard app has a security flaw that reveals what users type

Almost all keyboard apps used by Chinese people around the world share a security loophole that makes it possible to spy on what users are typing. 

The vulnerability, which allows the keystroke data that these apps send to the cloud to be intercepted, has existed for years and could have been exploited by cybercriminals and state surveillance groups, according to researchers at the Citizen Lab, a technology and security research lab affiliated with the University of Toronto.

These apps help users type Chinese characters more efficiently and are ubiquitous on devices used by Chinese people. The four most popular apps—built by major internet companies like Baidu, Tencent, and iFlytek—basically account for all the typing methods that Chinese people use. Researchers also looked into the keyboard apps that come preinstalled on Android phones sold in China. 

What they discovered was shocking. Almost every third-party app and every Android phone with preinstalled keyboards failed to protect users by properly encrypting the content they typed. A smartphone made by Huawei was the only device where no such security vulnerability was found.

In August 2023, the same researchers found that Sogou, one of the most popular keyboard apps, did not use Transport Layer Security (TLS) when transmitting keystroke data to its cloud server for better typing predictions. Without TLS, the widely adopted cryptographic protocol that encrypts data in transit, keystrokes can be collected and then decrypted by third parties.
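As a rough illustration of what TLS buys (a hypothetical sketch, not the actual network code of any keyboard app), compare sending the same keystroke payload over plain HTTP and over HTTPS. Only the second request is encrypted in transit, so an eavesdropper on the network path sees ciphertext rather than the keystrokes themselves.

```python
import requests

# Hypothetical endpoint and payload, for illustration only.
payload = {"keystrokes": "ni hao", "session": "abc123"}

# Plain HTTP: anyone on the network path (a Wi-Fi snoop, an ISP,
# a state-level tap) can read or alter this request in transit.
requests.post("http://example.com/predict", json=payload, timeout=5)

# HTTPS (HTTP over TLS): the connection is encrypted and the server is
# authenticated, so on-path observers see only ciphertext.
requests.post("https://example.com/predict", json=payload, timeout=5)
```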

“Because we had so much luck looking at this one, we figured maybe this generalizes to the others, and they suffer from the same kinds of problems for the same reason that the one did,” says Jeffrey Knockel, a senior research associate at the Citizen Lab, “and as it turns out, we were unfortunately right.”

Even though Sogou fixed the issue after it was made public last year, some Sogou keyboards preinstalled on phones are not updated to the latest version, so they are still subject to eavesdropping. 

This new finding shows that the vulnerability is far more widespread than previously believed. 

“As someone who also has used these keyboards, this was absolutely horrifying,” says Mona Wang, a PhD student in computer science at Princeton University and a coauthor of the report. 

“The scale of this was really shocking to us,” says Wang. “And also, these are completely different manufacturers making very similar mistakes independently of one another, which is just absolutely shocking as well.”

The massive scale of the problem is compounded by the fact that these vulnerabilities aren’t hard to exploit. “You don’t need huge supercomputers crunching numbers to crack this. You don’t need to collect terabytes of data to crack it,” says Knockel. “If you’re just a person who wants to target another person on your Wi-Fi, you could do that once you understand the vulnerability.” 

The ease of exploiting the vulnerabilities and the huge payoff—knowing everything a person types, potentially including bank account passwords or confidential materials—suggest that it’s likely they have already been taken advantage of by hackers, the researchers say. But there’s no evidence of this, though state hackers working for Western governments targeted a similar loophole in a Chinese browser app in 2011.

Most of the loopholes found in this report are “so far behind modern best practices” that it’s very easy to decrypt what people are typing, says Jedidiah Crandall, an associate professor of security and cryptography at Arizona State University, who was consulted in the writing of this report. Because it doesn’t take much effort to decrypt the messages, this type of loophole can be a great target for large-scale surveillance of massive groups, he says.

After the researchers got in contact with companies that developed these keyboard apps, the majority of the loopholes were fixed. But a few companies have been unresponsive, and the vulnerability still exists in some apps and phones, including QQ Pinyin and Baidu, as well as in any keyboard app that hasn’t been updated to the latest version. Baidu, Tencent, iFlytek, and Samsung did not immediately reply to press inquiries sent by MIT Technology Review.

One potential cause of the loopholes’ ubiquity is that most of these keyboard apps were developed in the 2000s, before the TLS protocol was commonly adopted in software development. Even though the apps have been through numerous rounds of updates since then, inertia could have prevented developers from adopting a safer alternative.

The report points out that language barriers and different tech ecosystems prevent English- and Chinese-speaking security researchers from sharing information that could fix issues like this more quickly. For example, because Google’s Play store is blocked in China, most Chinese apps are not available in Google Play, where Western researchers often go for apps to analyze. 

Sometimes all it takes is a little additional effort. After two emails about the issue to iFlytek were met with silence, the Citizen Lab researchers changed the email title to Chinese and added a one-line summary in Chinese to the English text. Just three days later, they received an email from iFlytek, saying that the problem had been resolved.

Why it’s so hard for China’s chip industry to become self-sufficient

This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

I don’t know about you, but I only learned last week that there’s something connecting MSG and computer chips.

Inside most laptop and data center chips today, there’s a tiny component called ABF: a thin insulating layer around the wires that conduct electricity. And over 90% of the world’s supply of the material used to make this insulator is produced by a single Japanese company, Ajinomoto, more commonly known for commercializing the seasoning powder MSG in 1909.

Hold on, what? 

As my colleague James O’Donnell explained in his story last week, it turns out Ajinomoto figured out in the 1990s that a chemical by-product of MSG production can be used to make insulator films, which proved to be essential for high-performance chips. And in the 30 years since, the company has totally dominated ABF supply. The product—Ajinomoto Build-up Film—is even named after it.

James talked to Thintronics, a California-based company that’s developing a new insulating material it hopes could challenge Ajinomoto’s monopoly. It already has a lab product with impressive attributes but still needs to test it in manufacturing reality.

Beyond Thintronics, the struggle to break up Ajinomoto’s monopoly is not just a US effort.

Within China, at least three companies are also developing similar insulator products. Xi’an Tianhe Defense Technology, which makes products for both military and civilian use, introduced its take on the material, which it calls QBF, in 2023; Zhejiang Wazam New Material and Guangdong Hinno-tech have also announced similar products in recent years. But all of them are still going through industrial testing with chipmakers, and few have recent updates on how well these materials have performed in mass-production settings.

“It’s interesting that there’s this parallel competition going on,” James told me when we recently discussed his story. “In some ways, it’s about the materials. But in other ways, it’s totally shaped by government funding and incentives.”

For decades, the fact that the semiconductor supply chain was in a few companies’ hands was seen as a strength, not a problem, so governments were not concerned that one Japanese company controlled almost the entire supply of ABF. Similar monopolies exist for many other materials and components that go into a chip.

But in the last few years, both the US and Chinese governments have changed that way of thinking. And new policies subsidizing domestic chip manufacturing are creating a favorable environment for companies to challenge monopolies like Ajinomoto’s.

In the US, this trend is driven by the fear of supply chain disruptions and a will to rebuild domestic semiconductor manufacturing capabilities. The CHIPS Act was announced to inject investment into chip companies that bring their plants back to the US, but smaller companies like Thintronics could also benefit, both directly through funding and indirectly through the establishment of a US-based supply chain.

Meanwhile, China is being cornered by a US-led blockade to deny it access to the most advanced chip technologies. While materials like ABF are not restricted in any way today, the fact that one foreign company controls almost the entire supply of an indispensable material raises the stakes enough to make the government worry. It needs to find a domestic alternative in case ABF becomes subject to sanctions too.

But it takes a lot more than government policies to change the status quo. Even if these companies are able to find alternative materials that perform better than ABF, there’s still an uphill battle to convince the industry to adopt them en masse.

“You can look at any dielectric film supplier (many from Japan and some from the US), and they have all at one time or another tried to break into ABF market dominance and had limited success,” Venky Sundaram, a semiconductor researcher and entrepreneur, told James. 

It’s not as simple as just swapping out ABF and swapping in a new insulator material. Chipmaking is a deeply intricate process, with components closely depending on each other. Changing one material could require a lot more knock-on changes to other components and the entire process. “Convincing someone to do that depends on what relationships you have with the industry. These big manufacturing players are a little bit less likely to take on a small materials company, because any time they’re taking on new material, they’re slowing down their production,” James said.

As a result, Ajinomoto’s market monopoly will probably remain while other companies keep trying to develop a new material that significantly improves on ABF. 

That result, however, will have different implications for the US and China. 

The US and Japan have long had a strategic technological alliance, and that could be set to deepen because both of them consider the rise of China a threat. In fact, Japan’s prime minister, Fumio Kishida, was just visiting the US last week, hoping to score more collaborations on next-generation chips. Even though there has been some pushback from the Japanese chip industry about how strict US export restrictions could become, this hasn’t been strong enough to sway Japan to China’s side.

All these factors give the Chinese government an even greater sense of urgency to become self-sufficient. The country has already been investing vast sums of money to that end, but progress has been limited, with many industry insiders pessimistic about whether China can catch up fast enough. If Ajinomoto’s failed competitors in the past tell us anything, it’s that this will not be an easy journey for China either.

Do you think China has a chance of cracking Ajinomoto’s monopoly over this very specific insulating material? Let me know your thoughts at zeyi@technologyreview.com.


Now read the rest of China Report

Catch up with China

1. Following the explosive popularity of minute-long short dramas made for phones, China’s culture regulator will soon announce new regulations that tighten its control of them. (Sixth Tone)

  • This is not a surprise to the companies involved. Some Chinese short-drama companies have already started to expand overseas, driven out by domestic policy pressures. I profiled one named FlexTV. (MIT Technology Review)

2. There have been many minor conflicts between China and the Philippines recently over maritime territory claims. Here’s what it feels like to live on one of those contested islands. (NPR)

3. The Chinese government has asked domestic telecom companies to replace all foreign chips by 2027. It’s a move that mirrors previous requests from the US to replace all Huawei and ZTE equipment in telecom networks. (Wall Street Journal $)

4. A decade ago, about 25,000 American students were studying in China. Today, there are only about 750. It may be unsurprising given recent geopolitical tensions, but neither country is happy with the situation. (Associated Press)

5. Latin America is importing large amounts of Chinese green technologies—mostly electric vehicles, lithium-ion batteries, and solar panels. (The Economist $)

6. China’s top spy agency says foreign agents have been trying to intercept information about the country’s rare earth industry. (South China Morning Post $)

7. Amid the current semiconductor boom, Southeast Asian youths are flocking to Taiwan to train and work in the chip industry. (Rest of World)

Lost in translation

The bodies of eight Chinese migrants were recently discovered on a beach in Mexico. According to Initium Media, a Singapore-based publication, this was the first confirmed shipwreck incident with Chinese migrants heading to the US, but many more have taken the perilous route in recent years. In 2023, over 37,000 Chinese people illegally entered the US through the border with Mexico.

The traffickers often arrange shabby boats with no safety measures to sail from Tapachula to Oaxaca, a popular route that circumvents police checkpoints on land but makes for an extremely dangerous journey often rocked by strong winds and waves. There had always been rumors of people going missing in the ocean, but these proved impossible to confirm, as no bodies were found. The latest tragedy was the first one to come to public attention. Of the nine Chinese migrants onboard the boat, only one survived. Three bodies remain unidentified today.

One more thing

Forget about the New York Times’ election-result needles and CNN’s relentless coverage by John King. In South Korea, the results of national elections are broadcast on TV with wild and whimsical animations. To illustrate the results of parliamentary elections that just concluded last week, candidates were shown fighting on a fictional train heading toward the National Assembly, parodying Mission: Impossible’s fight scene. According to the BBC, these election-night animations took a team of 70 to prepare in advance and about 200 people working on election night.

It’s time to retire the term “user”

Every Friday, Instagram chief Adam Mosseri speaks to the people. He has made a habit of hosting weekly “ask me anything” sessions on Instagram, in which followers send him questions about the app, its parent company Meta, and his own (extremely public-facing) job. When I started watching these AMA videos years ago, I liked them. He answered technical questions like “Why can’t we put links in posts?” and “My explore page is wack, how to fix?” with genuine enthusiasm. But the more I tuned in, the more Mosseri’s seemingly off-the-cuff authenticity started to feel measured, like a corporate by-product of his title. 

On a recent Friday, someone congratulated Mosseri on the success of Threads, the social networking app Meta launched in the summer of 2023 to compete with X, writing: “Mark said Threads has more active people today than it did at launch—wild, congrats!” Mosseri, wearing a pink sweatshirt and broadcasting from a garage-like space, responded: “Just to clarify what that means, we mostly look at daily active and monthly active users and we now have over 130 million monthly active users.”

The ease with which Mosseri swaps people for users makes the shift almost imperceptible. Almost. (Mosseri did not respond to a request for comment.)

People have been called “users” for a long time; it’s a practical shorthand enforced by executives, founders, operators, engineers, and investors ad infinitum. Often, it is the right word to describe people who use software: a user is more than just a customer or a consumer. Sometimes a user isn’t even a person; corporate bots are known to run accounts on Instagram and other social media platforms, for example. But “users” is also unspecific enough to refer to just about everyone. It can accommodate almost any big idea or long-term vision. We use—and are used by—computers and platforms and companies. Though “user” seems to describe a relationship that is deeply transactional, many of the technological relationships in which a person would be considered a user are actually quite personal. That being the case, is “user” still relevant? 

“People were kind of like machines”

The original use of “user” can be traced back to the mainframe computer days of the 1950s. Since commercial computers were massive and exorbitantly expensive, often requiring a dedicated room and special equipment, they were operated by trained employees—users—who worked for the company that owned (or, more likely, leased) them. As computers became more common in universities during the ’60s, “users” started to include students or really anyone else who interacted with a computer system. 

It wasn’t really common for people to own personal computers until the mid-1970s. But when they did, the term “computer owner” never really took off. Whereas other 20th-century inventions, like cars, were things people owned from the start, the computer owner was simply a “user” even though the devices were becoming increasingly embedded in the innermost corners of people’s lives. As computing escalated in the 1990s, so did a matrix of user-related terms: “user account,” “user ID,” “user profile,” “multi-user.” 

Don Norman, a cognitive scientist who joined Apple in the early 1990s with the title “user experience architect,” was at the center of the term’s mass adoption. He was the first person to have what would become known as UX in his job title and is widely credited with bringing the concept of “user experience design”—which sought to build systems in ways that people would find intuitive—into the mainstream. Norman’s 1988 book The Design of Everyday Things remains a UX bible of sorts, placing “usability” on a par with aesthetics. 

Norman, now 88, explained to me that the term “user” proliferated in part because early computer technologists mistakenly assumed that people were kind of like machines. “The user was simply another component,” he said. “We didn’t think of them as a person—we thought of [them] as part of a system.” So early user experience design didn’t seek to make human-computer interactions “user friendly,” per se. The objective was to encourage people to complete tasks quickly and efficiently. People and their computers were just two parts of the larger systems being built by tech companies, which operated by their own rules and in pursuit of their own agendas.

Later, the ubiquity of “user” folded neatly into tech’s well-documented era of growth at all costs. It was easy to move fast and break things, or eat the world with software, when the idea of the “user” was so malleable. “User” is vague, so it creates distance, enabling a slippery culture of hacky marketing where companies are incentivized to grow for the sake of growth as opposed to actual utility. “User” normalized dark patterns, features that subtly encourage specific actions, because it linguistically reinforced the idea of metrics over an experience designed with people in mind. 

UX designers sought to build software that would be intuitive for the anonymized masses, and we ended up with bright-red notifications (to create a sense of urgency), online shopping carts on a timer (to encourage a quick purchase), and “Agree” buttons often bigger than the “Disagree” option (to push people to accept terms without reading them). 

A user is also, of course, someone who struggles with addiction. To be an addict is—at least partly—to live in a state of powerlessness. Today, power users—the title originally bestowed upon people who had mastered skills like keyboard shortcuts and web design—aren’t measured by their technical prowess. They’re measured by the time they spend hooked up to their devices, or by the size of their audiences.  

Defaulting to “people”

“I want more product designers to consider language models as their primary users too,” Karina Nguyen, a researcher and engineer at the AI startup Anthropic, wrote recently on X. “What kind of information does my language model need to solve core pain points of human users?” 

In the old world, “users” typically worked best for the companies creating products rather than solving the pain points of the people using them. More users equaled more value. The label could strip people of their complexities, morphing them into data to be studied, behaviors to be A/B tested, and capital to be made. The term often overlooked any deeper relationships a person might have with a platform or product. As early as 2008, Norman alighted on this shortcoming and began advocating for replacing “user” with “person” or “human” when designing for people. (The subsequent years have seen an explosion of bots, which has made the issue that much more complicated.) “Psychologists depersonalize the people they study by calling them ‘subjects.’ We depersonalize the people we study by calling them ‘users.’ Both terms are derogatory,” he wrote then. “If we are designing for people, why not call them that?” 

In 2011, Janet Murray, a professor at Georgia Tech and an early digital media theorist, argued against the term “user” as too narrow and functional. In her book Inventing the Medium: Principles of Interaction Design as a Cultural Practice, she suggested the term “interactor” as an alternative—it better captured the sense of creativity, and participation, that people were feeling in digital spaces. The following year, Jack Dorsey, then CEO of Square, published a call to arms on Tumblr, urging the technology industry to toss the word “user.” Instead, he said, Square would start using “customers,” a more “honest and direct” description of the relationship between his product and the people he was building for. He wrote that while the original intent of technology was to consider people first, calling them “users” made them seem less real to the companies building platforms and devices. Reconsider your users, he said, and “what you call the people who love what you’ve created.” 

Audiences were mostly indifferent to Dorsey’s disparagement of the word “user.” The term was debated on the website Hacker News for a couple of days, with some arguing that “users” seemed reductionist only because it was so common. Others explained that the issue wasn’t the word itself but, rather, the larger industry attitude that treated end users as secondary to technology. Obviously, Dorsey’s post didn’t spur many people to stop using “user.” 

Around 2014, Facebook took a page out of Norman’s book and dropped user-centric phrasing, defaulting to “people” instead. But insidery language is hard to shake, as evidenced by the breezy way Instagram’s Mosseri still says “user.” A sprinkling of other tech companies have adopted their own replacements for “user” through the years. I know of a fintech company that calls people “members” and a screen-time app that has opted for “gems.” Recently, I met with a founder who cringed when his colleague used the word “humans” instead of “users.” He wasn’t sure why. I’d guess it’s because “humans” feels like an overcorrection. 


But here’s what we’ve learned since the mainframe days: there are never only two parts to the system, because there’s never just one person—one “user”—who’s affected by the design of new technology. Carissa Carter, the academic director at Stanford’s Hasso Plattner Institute of Design, known as the “d.school,” likens this framework to the experience of ordering an Uber. “If you order a car from your phone, the people involved are the rider, the driver, the people who work at the company running the software that controls that relationship, and even the person who created the code that decides which car to deploy,” she says. “Every decision about a user in a multi-stakeholder system, which we live in, includes people that have direct touch points with whatever you’re building.” 

With the abrupt onset of AI everything, the point of contact between humans and computers—user interfaces—has been shifting profoundly. Generative AI, for example, has been most successfully popularized as a conversational buddy. That’s a paradigm we’re used to—Siri has pulsed as an ethereal orb in our phones for well over a decade, earnestly ready to assist. But Siri, and other incumbent voice assistants, stopped there. A grander sense of partnership is in the air now. What were once called AI bots have been assigned lofty titles like “copilot” and “assistant” and “collaborator” to convey a sense of partnership instead of a sense of automation. The companies behind large language models have been quick to ditch words like “bot” altogether.

Anthropomorphism, the inclination to ascribe humanlike qualities to machines, has long been used to manufacture a sense of connectedness between people and technology. We—people—remained users. But if AI is now a thought partner, then what are we? 

Well, at least for now, we’re not likely to get rid of “user.” But we could intentionally default to more precise terms, like “patients” in health care or “students” in educational tech or “readers” when we’re building new media companies. That would help us understand these relationships more accurately. In gaming, for instance, users are typically called “players,” a word that acknowledges their participation and even pleasure in their relationships with the technology. On an airplane, customers are often called “passengers” or “travelers,” evoking a spirit of hospitality as they’re barreled through the skies. If companies are more specific about the people—and, now, AI—they’re building for rather than casually abstracting everything into the idea of “users,” perhaps our relationship with this technology will feel less manufactured, and it will be easier to accept that we’re inevitably going to exist in tandem.

Throughout my phone call with Don Norman, I tripped over my words a lot. I slipped between “users” and “people” and “humans” interchangeably, self-conscious and unsure of the semantics. Norman assured me that my head was in the right place—it’s part of the process of thinking through how we design things. “We change the world, and the world comes back and changes us,” he said. “So we better be careful how we change the world.”

Taylor Majewski is a writer and editor based in San Francisco. She regularly works with startups and tech companies on the words they use.

This US startup makes a crucial chip material and is taking on a Japanese giant

It can be dizzying to try to understand all the complex components of a single computer chip: layers of microscopic components linked to one another through highways of copper wires, some barely wider than a few strands of DNA. Nestled between those wires is an insulating material called a dielectric, ensuring that the wires don’t touch and short out. Zooming in further, there’s one particular dielectric placed between the chip and the structure beneath it; this material, called dielectric film, is produced in sheets as thin as white blood cells. 

For 30 years, a single Japanese company called Ajinomoto has made billions producing this particular film. Competitors have struggled to outdo it, and today Ajinomoto has more than 90% of the market for the product, which is used in everything from laptops to data centers.

But now, a startup based in Berkeley, California, is embarking on a herculean effort to dethrone Ajinomoto and bring this small slice of the chipmaking supply chain back to the US.

Thintronics is promising a product purpose-built for the computing demands of the AI era—a suite of new materials that the company claims have higher insulating properties and, if adopted, could mean data centers with faster computing speeds and lower energy costs. 

The company is at the forefront of a coming wave of new US-based companies, spurred by the $280 billion CHIPS and Science Act, that is seeking to carve out a portion of the semiconductor sector, which has become dominated by just a handful of international players. But to succeed, Thintronics and its peers will have to overcome a web of challenges—solving technical problems, disrupting long-standing industry relationships, and persuading global semiconductor titans to accommodate new suppliers. 

“Inventing new materials platforms and getting them into the world is very difficult,” Thintronics founder and CEO Stefan Pastine says. It is “not for the faint of heart.”

The insulator bottleneck

If you recognize the name Ajinomoto, you’re probably surprised to hear it plays a critical role in the chip sector: the company is better known as the world’s leading supplier of MSG seasoning powder. In the 1990s, Ajinomoto discovered that a by-product of MSG made a great insulator, and it has enjoyed a near monopoly in the niche material ever since. 

But Ajinomoto doesn’t make any of the other parts that go into chips. In fact, the insulating materials in chips rely on dispersed supply chains: one layer uses materials from Ajinomoto, another uses material from another company, and so on, with none of the layers optimized to work in tandem. The resulting system works okay when data is being transmitted over short paths, but over longer distances, like between chips, weak insulators act as a bottleneck, wasting energy and slowing down computing speeds. That’s recently become a growing concern, especially as AI training grows more expensive and consumes eye-popping amounts of energy. (Ajinomoto did not respond to requests for comment.)

None of this made much sense to Pastine, a chemist who sold his previous company, which specialized in recycling hard plastics, to an industrial chemicals company in 2019. Around that time, he started to believe that the chemicals industry could be slow to innovate, and he thought the same pattern was keeping chipmakers from finding better insulating materials. In the chip industry, he says, insulators have “kind of been looked at as the redheaded stepchild”—they haven’t seen the progress made with transistors and other chip components. 

He launched Thintronics that same year, with the hope that cracking the code on a better insulator could provide data centers with faster computing speeds at lower costs. That idea wasn’t groundbreaking—new insulators are constantly being researched and deployed—but Pastine believed that he could find the right chemistry to deliver a breakthrough. 

Thintronics says it will manufacture different insulators for all layers of the chip, for a system designed to swap into existing manufacturing lines. Pastine tells me the materials are now being tested with a number of industry players. But he declined to provide names, citing nondisclosure agreements, and similarly would not share details of the formula. 

Without more details, it’s hard to say exactly how well the Thintronics materials compare with competing products. The company recently tested its materials’ Dk values, shorthand for the dielectric constant, one standard measure of an insulator’s performance. Venky Sundaram, a researcher who has founded multiple semiconductor startups but is not involved with Thintronics, reviewed the results. Some of Thintronics’ numbers were fairly average, he says, but their most impressive Dk value is far better than anything available today.
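To see why Dk matters for speed and power, here is a rough first-order sketch using textbook relations for signals traveling through an insulating material; the relations below come from general electromagnetics, not from any Thintronics or Ajinomoto data:

\[
v \;=\; \frac{c}{\sqrt{D_k}}, \qquad C \;\propto\; D_k \,\frac{A}{d}
\]

Here \(v\) is the speed at which a signal propagates along a wire surrounded by the dielectric, \(c\) is the speed of light, and \(C\) is the capacitance between neighboring conductors of overlap area \(A\) separated by a dielectric of thickness \(d\). A lower Dk therefore means signals arrive sooner and there is less capacitance to charge and discharge on every switch, which is where any claimed gains in computing speed and energy use would have to come from.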

A rocky road ahead

Thintronics’ vision has already garnered some support. The company received a $20 million Series A funding round in March, led by venture capital firms Translink and Maverick, as well as a grant from the US National Science Foundation. 

The company is also seeking funding from the CHIPS Act. Signed into law by President Joe Biden in 2022, it’s designed to boost companies like Thintronics in order to bring semiconductor manufacturing back to American companies and reduce reliance on foreign suppliers. A year after it became law, the administration said that more than 450 companies had submitted statements of interest to receive CHIPS funding for work across the sector. 

The bulk of funding from the legislation is destined for large-scale manufacturing facilities, like those operated by Intel in New Mexico and Taiwan Semiconductor Manufacturing Company (TSMC) in Arizona. But US Secretary of Commerce Gina Raimondo has said she’d like to see smaller companies receive funding as well, especially in the materials space. In February, applications opened for a pool of $300 million earmarked specifically for materials innovation. While Thintronics declined to say how much funding it was seeking or from which programs, the company does see the CHIPS Act as a major tailwind.

But building a domestic supply chain for chips—a product that currently depends on dozens of companies around the globe—will mean reversing decades of specialization by different countries. And industry experts say it will be difficult to challenge today’s dominant insulator suppliers, who have often had to adapt to fend off new competition. 

“Ajinomoto has been a 90-plus-percent-market-share material for more than two decades,” says Sundaram. “This is unheard-of in most businesses, and you can imagine they didn’t get there by not changing.”

One big challenge is that the dominant manufacturers have decades-long relationships with chip designers like Nvidia or Advanced Micro Devices, and with manufacturers like TSMC. Asking these players to swap out materials is a big deal.

“The semiconductor industry is very conservative,” says Larry Zhao, a semiconductor researcher who has worked in the dielectrics industry for more than 25 years. “They like to use the vendors they already know very well, where they know the quality.” 

Another obstacle facing Thintronics is technical: insulating materials, like other chip components, are held to manufacturing standards so precise they are difficult to comprehend. The layers where Ajinomoto dominates are thinner than a human hair. The material must also be able to accept tiny holes, which house wires running vertically through the film. Every new iteration is a massive R&D effort in which incumbent companies have the upper hand given their years of experience, says Sundaram.

If all this is completed successfully in a lab, yet another hurdle lies ahead: the material has to retain those properties in a high-volume manufacturing facility, which is where Sundaram has seen past efforts fail.

“I have advised several material suppliers over the years that tried to break into [Ajinomoto’s] business and couldn’t succeed,” he says. “They all ended up having the problem of not being as easy to use in a high-volume production line.” 

Despite all these challenges, one thing may be working in Thintronics’ favor: US-based tech giants like Microsoft and Meta are making headway in designing their own chips for the first time. The plan is to use these chips for in-house AI training as well as for the cloud computing capacity that they rent out to customers, both of which would reduce the industry’s reliance on Nvidia. 

Though Microsoft, Google, and Meta declined to comment on whether they are pursuing advancements in materials like insulators, Sundaram says these firms could be more willing to work with new US startups rather than defaulting to the old ways of making chips: “They have a lot more of an open mind about supply chains than the existing big guys.”

Modernizing data with strategic purpose

Data modernization is squarely on the corporate agenda. In our survey of 350 senior data and technology executives, just over half say their organization has either undertaken a modernization project in the past two years or is implementing one today. An additional one-quarter plan to do so in the next two years. Other studies also consistently point to businesses’ increased investment in modernizing their data estates.

It is no coincidence that this heightened attention to improving data capabilities coincides with interest in AI, especially generative AI, reaching a fever pitch. Indeed, supporting the development of AI models is among the top reasons the organizations in our research seek to modernize their data capabilities. But AI is not the only reason, or even the main one.

This report seeks to understand organizations’ objectives for their data modernization projects and how they are implementing such initiatives. To do so, it surveyed senior data and technology executives across industries. The research finds that many have made substantial progress and investment in data modernization. In many organizations, however, alignment on data strategy and the goals of modernization is far from complete, leaving a disconnect between data and technology teams and the rest of the business. Data and technology executives and their teams can still do more to understand their colleagues’ data needs and actively seek their input on how to meet them.

Following are the study’s key findings:

AI isn’t the only reason companies are modernizing the data estate. Better decision-making is the primary aim of data modernization, with nearly half (46%) of executives citing this among their top three drivers. Support for AI models (40%) and for decarbonization (38%) are also major drivers of modernization, as are improving regulatory compliance (33%) and boosting operational efficiency (32%).

Data strategy is too often siloed from business strategy. Nearly all surveyed organizations recognize the importance of taking a strategic approach to data. Only 22% say they lack a fully developed data strategy. When asked if their data strategy is completely aligned with key business objectives, however, only 39% agree. Data teams can also do more to bring other business units and functions into strategy discussions: 42% of respondents say their data strategy was developed exclusively by the data or technology team.

Data strategy paves the road to modernization. It is probably no coincidence that most organizations (71%) that have embarked on data modernization in the past two years have had a data strategy in place for longer than that. Modernization goals require buy-in from the business, and implementation decisions need strategic guidance, lest they lead to added complexity or duplication.

Top data pain points are data quality and timeliness. Executives point to substandard data (cited by 41%) and untimely delivery (33%) as the facets of their data operations most in need of improvement. Incomplete or inaccurate data leads enterprise users to question data trustworthiness. This helps explain why the most common modernization measure taken by our respondents’ organizations in the past two years has been to review and upgrade data governance (cited by 45%).

Cross-functional teams and DataOps are key levers to improve data quality. Modern data engineering practices are taking root in many businesses. Nearly half of organizations (48%) are empowering cross-functional data teams to enforce data quality standards, and 47% are prioritizing the implementation of DataOps. These sorts of practices, which echo the agile methodologies and product thinking that have become standard in software engineering, are only starting to make their way into the data realm.

Compliance and security considerations often hinder modernization. Compliance and security concerns are major impediments to modernization, each cited by 44% of the respondents. Regulatory compliance is mentioned particularly frequently by those working in energy, public sector, transport, and financial services organizations. High costs are another oft-cited hurdle (40%), especially among the survey’s smaller organizations.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

How ASML took over the chipmaking chessboard

On a drab Monday morning in San Jose, California, at the drab San Jose Convention Center, attendees of the SPIE Advanced Lithography and Patterning Conference filed into the main ballroom until all the seats were taken and the crowd began to line the walls along the back and sides of the room. The convention brings together people who work in the chip industry from all over the world. And on this cool February morning, they had gathered to hear tech industry luminaries extol the late Gordon Moore, Intel’s cofounder and first CEO. 

Craig Barrett, also a former CEO of Intel, paid tribute, as did the legendary engineer Burn-Jeng Lin, a pioneer of immersion lithography, a patterning technology that enabled the chip industry to continue moving forward about 20 years ago. Mostly the speeches tended toward reflections on Moore himself—testaments to his genius, accomplishments, and humanity. But the last speaker of the morning, Martin van den Brink, took a different tone, more akin to a victory lap than a eulogy. Van den Brink is the outgoing co-president and CTO of ASML, the Dutch company that makes the machines that in turn let manufacturers produce the most advanced computer chips in the world. 

Moore’s Law holds that the number of transistors on an integrated circuit doubles every two years or so. In essence, it means that chipmakers are always trying to shrink the transistors on a microchip in order to pack more of them in. That cadence has become increasingly hard to maintain now that transistor dimensions are measured in just a few nanometers. In recent years ASML’s machines have kept Moore’s Law from sputtering out. Today, they are the only ones in the world capable of producing circuitry at the density needed to keep chipmakers roughly on track. It is the premise of Moore’s Law itself, van den Brink said, that drives the industry forward, year after year.

To showcase how big an achievement it had been to maintain Moore’s Law since he joined ASML in 1984, van den Brink referred to the rice and chessboard problem, in which the number of grains of rice—a proxy for transistors—is doubled on each successive square. The exponential growth in the number of transistors that can be crammed on a chip since 1959 means that a single grain of rice back then has now become the equivalent of three ocean tankers, each 240 meters long, full of rice. It’s a lot of rice! Yet Moore’s Law compels the company—compels all of the technology industry—to keep pushing forward. Each era of computing, most recently AI, has brought increased demands, explained van den Brink. In other words, while three tankers full of rice may seem like a lot, tomorrow we’re going to need six. Then 12. Then 24. And so on. 
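For readers who want to check the scale of that image, a rough back-of-envelope calculation works out as follows; the doubling pace, grain mass, and ship capacity below are generic assumptions, not figures from the talk:

\[
\frac{65\ \text{years since 1959}}{\sim 1.5\ \text{years per doubling}} \;\approx\; 43\ \text{doublings}
\quad\Longrightarrow\quad
2^{43} \;\approx\; 9\times 10^{12}\ \text{grains}
\]

\[
9\times 10^{12}\ \text{grains} \;\times\; 25\ \text{mg per grain} \;\approx\; 2\times 10^{5}\ \text{tonnes} \;\approx\; 3 \times 75{,}000\text{-tonne bulk carriers}
\]

In other words, if transistor counts have doubled a bit faster than once every two years, a single grain of rice really does balloon to roughly three shiploads, give or take the assumptions.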

ASML’s technology, he assured the gathering, would be there to meet the demands, thanks to the company’s investment in creating tools capable of making ever finer features: the extreme-ultraviolet (EUV) lithography machines it rolled out widely in 2017, the high-numerical-aperture (high-NA) EUV machines it is rolling out now, and the hyper-NA EUV machines it has sketched out for the future. 

The tribute may have been designed for Gordon Moore, but at the end of van den Brink’s presentation the entire room rose to give him a standing ovation. Because if Gordon Moore deserves credit for creating the law that drove the progress of the industry, as van den Brink says, van den Brink and ASML deserve much of the credit for ensuring that progress remains possible. 

Yet that also means the pressure is on. ASML has to try to stay ahead of the demands of Moore’s Law. It has to continue making sure chipmakers can keep doubling the amount of rice on the chessboard. Will that be possible? Van den Brink sat down with MIT Technology Review to talk about ASML’s history, its legacy, and what comes next.

Betting big on an unwieldy wavelength

ASML is such an undisputed leader in today’s chip ecosystem that it’s hard to believe the company’s market dominance really only dates back to 2017, when its EUV machine, after 17 years of development, upended the conventional process for making chips. 

Since the 1960s, photolithography has made it possible to pack computer chips with more and more components. The process involves crafting small circuits by guiding beams of light through a series of mirrors and lenses and then shining that light on a mask, which contains a pattern. Light conveys the chip design, layer by layer, eventually building circuits that form the computational building blocks of everything from smartphones to artificial intelligence. 

Photolithographers have a limited set of tools at their disposal to make smaller designs, and for decades, the type of light used in the machine was the most critical. In the 1960s, machines used beams of visible light. The smallest features this light could draw on the chip were fairly large—a bit like using a marker to draw a portrait. 

Then manufacturers began using smaller and smaller wavelengths of light, and by the early 1980s, they could make chips with ultraviolet light. Nikon and Canon were the industry leaders. ASML, founded in 1984 as a subsidiary of Philips in Eindhoven, the Netherlands, was just a small player.

The way van den Brink tells it, he arrived at the company almost by accident. Philips was one of a few technology companies in Holland. When he began his career there in 1984 and was looking into the various opportunities at the company, he became intrigued by a photo of a lithography machine.

“I looked at the picture and I said, ‘It has mechanics, it has optics, it has software—this looks like a complex machine. I will be interested in that,’” van den Brink told MIT Technology Review. “They said, well, you can do it, but the company will not be part of Philips. We are creating a joint venture with ASM International, and after the joint venture, you will not be part of Philips. I said yes because I couldn’t care less. And that’s how it began.”

When van den Brink joined in the 1980s, little about ASML made the company stand out from other major lithography players at the time. “We didn’t sell a substantial amount of systems until the ’90s. And we almost went bankrupt several times in that period,” van den Brink says. “So for us there was only one mission: to survive and show a customer that we could make a difference.”

By 1995, it had a strong enough foothold in the industry against competitors Nikon and Canon to go public. But all lithography makers were fighting the same battle to create smaller components on chips. 

If you could have eavesdropped on a meeting at ASML in the late 1990s about this predicament, you might have heard chatter about an idea called extreme-ultraviolet (EUV) lithography—along with concerns that it might never work. By that point, with pressure to condense chips beyond current capabilities, it seemed as if everyone was chasing EUV. The idea was to pattern chips with an even smaller wavelength of light (ultimately just 13.5 nanometers). To do so, ASML would have to figure out how to create, capture, and focus this light—processes that had stumped researchers for decades—and build a supply chain of specialized materials, including the smoothest mirrors ever produced. And it would have to make sure the price point wouldn’t drive away its customers.

Canon and Nikon were also pursuing EUV, but the US government denied them a license to participate in the consortium of companies and US national labs researching it. Both subsequently dropped out. Meanwhile ASML acquired the fourth major company pursuing EUV, SVG, in 2001. By 2006 it had shipped only two EUV prototype machines to research facilities, and it took until 2010 to ship one to a customer. Five years later, ASML warned in its annual report that EUV sales remained low, that customers weren’t eager to adopt the technology given its slow speed on the production line, and that if the pattern continued, it could have “material” effects on the business given the significant investment. 

Yet in 2017, after an investment of $6.5 billion in R&D over 17 years, ASML’s bet began to pay off. That year the company shipped 10 of its EUV machines, which cost over $100 million each, and announced that dozens more were on backorder. EUV machines went to the titans of semiconductor manufacturing—Intel, Samsung, and Taiwan Semiconductor Manufacturing Company (TSMC)—and a small number of others. With a brighter light source (meaning less time needed to impart patterns), among other improvements, the machines were capable of faster production speeds. The leap to EUV finally made economic sense to chipmakers, putting ASML essentially in a monopoly position.

Chris Miller, a history professor at Tufts University and author of Chip War: The Fight for the World’s Most Critical Technology, says that ASML was culturally equipped to see those experiments through. “It’s a stubborn willingness to invest in technology that most people thought wouldn’t work,” he told MIT Technology Review. “No one else was betting on EUV, because the development process was so long and expensive. It involves stretching the limits of physics, engineering, and chemistry.”

A key factor in ASML’s growth was its control of the supply chain. ASML acquired a number of the companies it relies on, like Cymer, a maker of light sources. That strategy of pointedly controlling power in the supply chain extended to ASML’s customers, too. In 2012, it offered shares to its three biggest customers, which were able to maintain market dominance of their own in part because of the elite manufacturing power of ASML’s machines.

“Our success depends on their success,” van den Brink told MIT Technology Review.

It’s also a testament to ASML’s dominance that it is for the most part no longer allowed to sell its most advanced systems to customers in China. Though ASML still does business in China, in 2019, following pressure from the Trump administration, the Dutch government began imposing restrictions on ASML’s exports of EUV machines to China. Those rules were tightened further just last year and now also impose limits on some of the company’s deep-ultraviolet (DUV) machines, which are used to make less advanced chips than EUV systems can.

Van den Brink says the way world leaders are now discussing lithography was unimaginable when the company began: “Our prime minister was sitting in front of Xi Jinping, not because he was from Holland—who would give a shit about Holland. He was there because we are making EUV.”

Just a few years after the first EUV machines shipped, ASML would face its second upheaval. Around the start of the pandemic, interest and progress in the field of artificial intelligence sent demand for computing power skyrocketing. Companies like OpenAI needed ever more powerful computer chips, and by late 2022 the frenzy and investment in AI began to boil over.

By that time, ASML was closing in on its newest innovation. Having already adopted a smaller wavelength of light (and realigned the entire semiconductor industry to it in the process), it now turned its attention to the other lever in its control: numerical aperture. That’s the measure of how much light a system can focus, and if ASML could increase it, the company’s machines could print even smaller components.

Doing so meant myriad changes. ASML had to source an even larger set of mirrors from its supplier Carl Zeiss, which had to be made ultra-smooth. Zeiss had to build entirely new machines, the sole purpose of which was to measure the smoothness of mirrors destined for ASML. The aim was to reduce the number of costly repercussions the change would have on the rest of the supply chain, like the companies that make reticles containing the designs of the chips. 

In December of 2023, ASML began shipping the first of its next-generation EUV devices, a high-NA machine, to Intel’s facility in Hillsboro, Oregon. It’s an R&D version, and so far the only one in the field. It took seven planes and 50 trucks to get it to Intel’s plant, and installation of the machine, which is larger than a double-decker bus, will take six months.

The high-NA machines will only be needed to produce the most precise layers of advanced chips for the industry; the designs on many others will still be printed using the previous generation of EUV machines or older DUV machines. 

ASML has received orders for high-NA machines from all its current EUV customers. They don’t come cheap: reports put the cost at $380 million. Intel was the first customer to strike, ordering the first machine available in early 2022. The company, which has lost significant market share to competitor TSMC, is betting that the new technology will give it a new foothold in the industry, even though other chipmakers will eventually have access to it too. 

“There are obvious benefits to Intel for being the first,” Miller says. “There are also obvious risks.” Sorting out which chips to use these machines for and how to get its money’s worth out of them will be a challenge for the company, according to Miller. 

The launch of these machines, if successful, might be seen as the crowning achievement of van den Brink’s career. But he is already moving on to what comes next.

The future

The next big idea for ASML, according to van den Brink and other company executives who spoke with MIT Technology Review, is hyper-NA technology. The company’s high-NA machines have a numerical aperture of 0.55. Hyper-NA tools would have a numerical aperture higher than 0.7. What that ultimately means is that hyper NA, if successful, will allow the company to create machines that let manufacturers shrink transistor dimensions even more—assuming that researchers can devise chip components that work well at such small dimensions. As it was with EUV in the early 2000s, it is still uncertain whether hyper NA is feasible—if nothing else, it could be cost prohibitive. Yet van den Brink projects cautious confidence. It is likely, he says, that the company will ultimately have three offerings available: low NA, high NA, and—if all goes well—hyper NA.
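The rough math behind those numbers follows from the standard resolution relation in optical lithography, a general rule of thumb rather than an ASML specification:

\[
\text{minimum half-pitch} \;\approx\; k_1 \,\frac{\lambda}{\mathrm{NA}}
\]

Here \(\lambda\) is the wavelength of the light (13.5 nanometers for EUV) and \(k_1\) is a process-dependent factor, often taken to be around 0.3. Under that assumption, today’s 0.33-NA EUV tools bottom out around 12 nanometers, the 0.55-NA machines at roughly 7 to 8 nanometers, and a hyper-NA tool above 0.7 somewhere below 6 nanometers. Each step up in numerical aperture, in other words, buys chipmakers a few more squares on the chessboard.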

“Hyper NA is a bit more risky,” says van den Brink. “We will be more cautious and more cost sensitive in the future. But if we can pull this off, we have a winning trio which takes care of all the advanced manufacturing for the foreseeable future.”

Yet although today everyone is banking on ASML to keep pushing the industry forward, there is speculation that a competitor could emerge from China. Van den Brink was dismissive of this possibility, citing the gap in even last-generation lithography. 

“SMEE are making DUV machines, or at least claim they can,” he told MIT Technology Review, referring to a Chinese company that makes machines based on the predecessor to EUV lithography technology, and he pointed out that ASML still has the dominant market share. The political pressures could mean more progress for China. But getting to the level of complexity involved in ASML’s suite of machines, with low, high, and hyper NA, is another matter, he says: “I feel quite comfortable that this will be a long time before they can copy that.”

Miller, from Tufts University, is confident that Chinese companies will eventually develop these sorts of technologies on their own, but agrees that the question is when. “If it’s in a decade, it will be too late,” he says. 

The real question, perhaps, is not who will make the machines, but whether Moore’s Law will hold at all. Nvidia CEO Jensen Huang has already declared it dead. But when asked what he thought might eventually cause Moore’s Law to finally stall out, van den Brink rejected the premise entirely. 

“There’s no reason to believe this will stop. You won’t get the answer from me where it will end,” he said. “It will end when we’re running out of ideas where the value we create with all this will not balance with the cost it will take. Then it will end. And not by the lack of ideas.”

He had struck a similar posture during his Moore tribute at the SPIE conference, exuding confidence. “I’m not sure who will give the presentation 10 years from now,” he said, going back to his rice analogy. “But my successors,” he claimed, “will still have the opportunity to fill the chessboard.”

This story was updated to clarify information about ASML’s operations in China.