Want less mining? Switch to clean energy.

Political fights over mining and minerals are heating up, and there are growing environmental and sociological concerns about how to source the materials the world needs to build new energy technologies. 

But low-emissions energy sources, including wind, solar, and nuclear power, have a smaller mining footprint than coal and natural gas, according to a new report from the Breakthrough Institute released today.

The report’s findings add to a growing body of evidence that technologies used to address climate change will likely lead to a future with less mining than a world powered by fossil fuels. However, experts point out that oversight will be necessary to minimize harm from the mining needed to transition to lower-emission energy sources. 

“In many ways, we talk so much about the mining of clean energy technologies, and we forget about the dirtiness of our current system,” says Seaver Wang, an author of the report and co-director of Climate and Energy at the Breakthrough Institute, an environmental research center.  

In the new analysis, Wang and his colleagues considered the total mining footprint of different energy technologies, including the amount of material needed for these energy sources and the total amount of rock that needs to be moved to extract that material.

Many minerals appear in small concentrations in source rock, so the process of extracting them has a large footprint relative to the amount of final product. A mining operation would need to move about seven kilograms of rock to get one kilogram of aluminum, for instance. For copper, the ratio is much higher, at over 500 to one. Taking these ratios into account allows for a more direct comparison of the total mining required for different energy sources. 

With this adjustment, it becomes clear that the energy source with the highest mining burden is coal. Generating one gigawatt-hour of electricity with coal requires 20 times the mining footprint of generating the same electricity with low-carbon power sources like wind and solar. Producing that electricity with natural gas requires moving about twice as much rock as the low-carbon sources.
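To make the rock-moved accounting concrete, here’s a toy calculation in Python. The ore-to-metal ratios come from the figures above; the amounts of metal per gigawatt-hour are invented placeholders, not inputs from the Breakthrough Institute report.

```python
# Illustrative sketch of the rock-moved accounting described above.
# The 7:1 (aluminum) and 500:1 (copper) ratios are from the article;
# the per-GWh material amounts below are made-up placeholders.
ROCK_PER_KG = {"aluminum": 7, "copper": 500}  # kg of rock per kg of metal

def rock_moved(materials_kg: dict[str, float]) -> float:
    """Total kilograms of rock moved to supply the given metals."""
    return sum(ROCK_PER_KG[metal] * kg for metal, kg in materials_kg.items())

# Hypothetical material needs for one gigawatt-hour of solar generation:
solar_per_gwh = {"aluminum": 500, "copper": 30}
print(rock_moved(solar_per_gwh))  # 7*500 + 500*30 = 18,500 kg of rock
```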

Tallying up the amount of rock moved is an imperfect approximation of the potential environmental and sociological impact of mining related to different technologies, Wang says, but the report’s results allow researchers to draw some broad conclusions. One is that we’re on track for less mining in the future. 

Other researchers have projected a decrease in mining accompanying a move to low-emissions energy sources. “We mine so many fossil fuels today that the sum of mining activities decreases even when we assume an incredibly rapid expansion of clean energy technologies,” Joey Nijnens, a consultant at Monitor Deloitte and author of another recent study on mining demand, said in an email.

That being said, potentially moving less rock around in the future “hardly means that society shouldn’t look for further opportunities to reduce mining impacts throughout the energy transition,” Wang says.

There’s already been progress in cutting down on the material required for technologies like wind and solar. Solar modules have gotten more efficient, so the same amount of material can yield more electricity generation. Recycling can help further cut material demand in the future, and it will be especially crucial to reduce the mining needed to build batteries.  

Resource extraction may decrease overall, but it’s also likely to increase in some places as our demands change, researchers pointed out in a 2021 study. Between 32% and 40% of the mining increase in the future could occur in countries with weak, poor, or failing resource governance, where mining is more likely to harm the environment and may fail to benefit people living near the mining projects. 

“We need to ensure that the energy transition is accompanied by responsible mining that benefits local communities,” Takuma Watari, a researcher at the National Institute for Environmental Studies and an author of the study, said via email. Otherwise, the shift to lower-emissions energy sources could lead to a reduction of carbon emissions in the Global North “at the expense of increasing socio-environmental risks in local mining areas, often in the Global South.” 

Strong oversight and accountability are crucial to make sure that we can source minerals in a responsible way, Wang says: “We want a rapid energy transition, but we also want an energy transition that’s equitable.”

My biotech plants are dead

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. 

Six weeks ago, I pre-ordered the “Firefly Petunia,” a houseplant engineered with genes from bioluminescent fungi so that it glows in the dark. 

After years of writing about anti-GMO sentiment in the US and elsewhere, I felt it was time to have some fun with biotech. These plants are among the first direct-to-consumer GM organisms you can buy, and they certainly seem like the coolest.

But when I unboxed my two petunias this week, they were in bad shape, with rotted leaves. And in a day, they were dead crisps. My first attempt to do biotech at home is a total bust, and it cost me $84, shipping included.

My plants did arrive in a handsome black box with neon lettering that alerted me to the living creature within. The petunias, about five inches tall, were each encased in a see-through plastic pod to keep them upright. Government warnings on the back of the box assured me they were free of Japanese beetles, sweet potato weevils, the snail Helix aspersa, and gypsy moths.

The problem began before I ever opened the box. As it turns out, I left for a week’s vacation in Florida the same day that Light Bio, the startup selling the petunia, sent me an email saying “Glowing plants headed your way,” with a UPS tracking number. I didn’t see the email, and even if I had, I wasn’t there to receive them.

That meant my petunias sat in darkness for seven days. The box became their final sarcophagus.

My fault? Perhaps. But I had no idea when Light Bio would ship my order. And others have had similar experiences. Mat Honan, the editor in chief of MIT Technology Review, told me his petunia arrived the day his family flew to Japan. Luckily, a house sitter feeding his lizard eventually opened the box, and Mat reports the plant is still clinging to life in his yard.

One of the ill-fated petunia plants and its sarcophagus. Credit: Antonio Regalado

But what about the glow? How strong is it? 

Mat says so far, he doesn’t notice any light coming from the plant, even after carrying it into a pitch-dark bathroom. But buyers may have to wait a bit to see anything. It’s the flowers that glow most brightly, and you may need to tend your petunia for a couple of weeks before you get blooms and see the mysterious effect.  

“I had two flowers when I opened mine, but sadly they dropped and I haven’t got to see the brightness yet. Hoping they will bloom again soon,” says Kelsey Wood, a postdoctoral researcher at the University of California, Davis. 

She would like to use the plants in classes she teaches at the university. “It’s been a dream of synthetic biologists for so many years to make a bioluminescent plant,” she says. “But they couldn’t get it bright enough to see with the naked eye.”

Others are having success right out of the box. That’s the case with Tharin White, publisher of EYNTK.info, a website about theme parks. “It had a lot of protection around it and a booklet to explain what you needed to do to help it,” says White. “The glow is strong, if you are [in] total darkness. Just being in a dark room, you can’t really see it. That being said, I didn’t expect a crazy glow, so [it] meets my expectations.”

That’s no small recommendation coming from White, who has been a “cast member” at Disney parks and an operator of the park’s Avatar ride, named after the movie whose action takes place on a planet where the flora glows. “I feel we are leaps closer to Pandora—The World of Avatar being reality,” White posted to his X account.

Chronobiologist Brian Hodge also found success by resettling his petunia immediately into a larger eight-inch pot, giving it flower food and a good soaking, and putting it in the sunlight. “After a week or so it really started growing fast, and the buds started to show up around day 10. Their glow is about what I expected. It is nothing like a neon light but more of a soft gentle glow,” says Hodge, a staff scientist at the University of California, San Francisco.

In his daily work, Hodge has handled bioluminescent beings before—bacteria mostly—and says he always needed photomultiplier tubes to see anything. “My experience with bioluminescent cells is that the light they would produce was pretty hard to see with the naked eye,” he says. “So I was happy with the amount of light I was seeing from the plants. You really need to turn off all the lights for them to really pop out at you.”

Hodge posted a nifty snapshot of his petunia, but only after setting his iPhone for a two-second exposure.

Light Bio’s CEO Keith Wood didn’t respond to an email about how my plants died, but in an interview last month he told me sales of the biotech plant had been “viral” and that the company would probably run out of its initial supply. To generate new ones, it hires commercial greenhouses to place clippings in water, where they’ll sprout new roots after a couple of weeks. According to Wood, the plant is “a rare example where the benefits of GM technology are easily recognized and experienced by the public.”

Hodge says he got interested in the plants after reading an article about combating light pollution by using bioluminescent flora instead of streetlamps. As a biologist who studies how day and night affect life, he’s worried that city lights and computer screens are messing with natural cycles.

“I just couldn’t pass up being one of the first to own one,” says Hodge. “Once you flip the lights off, the glow is really beautiful … and it sorta feels like you are witnessing something out of a futuristic sci-fi movie!” 

It makes me tempted to try again. 


Now read the rest of The Checkup

From the archives 

We’re not sure if rows of glowing plants can ever replace streetlights, but there’s no doubt light pollution is growing. Artificial light emissions on Earth grew by about 50% between 1992 and 2017—and as much as 400% in some regions. That’s according to Shel Evergreen, in his story on the switch to bright LED streetlights.

It’s taken a while for scientists to figure out how to make plants glow brightly enough to interest consumers. In 2016, I looked at a failed Kickstarter that promised glow-in-the-dark roses but couldn’t deliver.  

Another thing 

Cassandra Willyard is updating us on the case of Lisa Pisano, a 54-year-old woman who is feeling “fantastic” two weeks after surgeons gave her a kidney from a genetically modified pig. It’s the latest in a series of extraordinary animal-to-human organ transplants—a technology, known as xenotransplantation, that may end the organ shortage.

From around the web

Taiwan’s government is considering steps to ease restrictions on the use of IVF. The country has an ultra-low birth rate, but it bans surrogacy, limiting options for male couples. One Taiwanese pair spent $160,000 to have a child in the United States.  (CNN)

Communities in Appalachia are starting to get settlement payments from synthetic-opioid makers like Johnson & Johnson, which along with other drug vendors will pay out $50 billion over several years. But the money, spread over thousands of jurisdictions, is “a feeble match for the scale of the problem.” (Wall Street Journal)

A startup called Climax Foods claims it has used artificial intelligence to formulate vegan cheese that tastes “smooth, rich, and velvety,” according to writer Andrew Rosenblum. He relates the results of his taste test in the new “Build” issue of MIT Technology Review. But one expert Rosenblum spoke to warns that computer-generated cheese is “significantly” overhyped.

AI hype continued this week in medicine when a startup claimed it had used “generative AI” to quickly discover new versions of CRISPR, the powerful gene-editing tool. But new gene-editing tricks won’t conquer the main obstacle, which is how to deliver these molecules where they’re needed in the bodies of patients. (New York Times)

Here’s the defense tech at the center of US aid to Israel, Ukraine, and Taiwan

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

After weeks of drawn-out congressional debate over how much the United States should spend on conflicts abroad, President Joe Biden signed a $95.3 billion aid package into law on Wednesday.

The bill will send a significant quantity of supplies to Ukraine and Israel, while also supporting Taiwan with submarine technology to aid its defenses against China. It’s also sparked renewed calls for stronger crackdowns on Iranian-produced drones. 

Though much of the money will go toward replenishing fairly standard munitions and supplies, the spending bill provides a window into US strategies around four key defense technologies that continue to reshape how today’s major conflicts are being fought.

For a closer look at the military technology at the center of the aid package, I spoke with Andrew Metrick, a fellow with the defense program at the Center for a New American Security, a think tank.

Ukraine and the role of long-range missiles

Ukraine has long sought the Army Tactical Missile System (ATACMS), a long-range ballistic missile made by Lockheed Martin. First used in combat during Operation Desert Storm in Iraq in 1991, it’s 13 feet long, two feet wide, and over 3,600 pounds. It can use GPS to accurately hit targets 190 miles away.

Last year, President Biden was apprehensive about sending such missiles to Ukraine, as US stockpiles of the weapons were relatively low. In October, the administration changed tack. The US sent shipments of ATACMS, a move celebrated by President Volodymyr Zelensky of Ukraine, but they came with restrictions: the missiles were older models with a shorter range, and Ukraine was instructed not to fire them into Russian territory, only Ukrainian territory. 

This week, just hours before the new aid package was signed, multiple news outlets reported that the US had secretly sent more powerful long-range ATACMS to Ukraine several weeks before. They were used on Tuesday, April 23, to target a Russian airfield in Crimea and Russian troops in Berdiansk, 50 miles southwest of Mariupol.

The long range of the weapons has proved essential for Ukraine, says Metrick. “It allows the Ukrainians to strike Russian targets at ranges for which they have very few other options,” he says. That means being able to hit locations like supply depots, command centers, and airfields behind Russia’s front lines in Ukraine. This capacity has grown more important as Ukraine’s troop numbers have waned, Metrick says.

Replenishing Israel’s Iron Dome

On April 13, Iran launched its first-ever direct attack on Israeli soil. In the attack, which Iran says was retaliation for Israel’s airstrike on its embassy in Syria, hundreds of missiles were lobbed into Israeli airspace. Many of them were neutralized by the web of cutting-edge missile defense systems dispersed throughout Israel, which can automatically destroy incoming projectiles before they hit land.

One of those systems is Israel’s Iron Dome, in which radar systems detect projectiles and then signal units to launch defensive missiles that detonate the target high in the sky before it strikes populated areas. Israel’s other system, called David’s Sling, works in a similar way but can identify rockets coming from a greater distance, upwards of 180 miles.

Both systems are hugely costly to research and build, and the new US aid package allocates $15 billion to replenish their missile stockpile. The missiles can cost anywhere from $100,000 to $10 million each, and a system like Iron Dome might fire them daily during intense periods of conflict. 

The aid comes as funding for Israel has grown more contentious amid the dire conditions faced by displaced Palestinians in Gaza. While the spending bill worked its way through Congress, increasing numbers of Democrats sought to put conditions on the military aid to Israel, particularly after an Israeli air strike on April 1 killed seven aid workers from World Central Kitchen, an international food charity. The funding package does provide $9 billion in humanitarian assistance for the conflict, but the efforts to impose conditions for Israeli military aid failed. 

Taiwan and underwater defenses against China

A rising concern for the US defense community—and a subject of “wargaming” simulations that Metrick has carried out—is an amphibious invasion of Taiwan by China. The growing risk of that scenario has driven the US to build and deploy larger numbers of advanced submarines, Metrick says. A bigger fleet of these submarines would be more likely to keep attacks from China at bay, thereby protecting Taiwan.

The trouble is that the US shipbuilding effort, experts say, is too slow. It’s been hampered by budget cuts and labor shortages, but the new aid bill aims to jump-start it. It will provide $3.3 billion to do so, specifically for the production of Columbia-class submarines, which carry nuclear weapons, and Virginia-class submarines, which carry conventional weapons. 

Though these funds aim to support Taiwan by building up the US supply of submarines, the package also includes more direct support, like $2 billion to help it purchase weapons and defense equipment from the US. 

The US’s Iranian drone problem 

Shahed drones are used almost daily on the Russia-Ukraine battlefield, and Iran launched more than 100 against Israel earlier this month. Produced by Iran and resembling model planes, the drones are fast, cheap, and lightweight, capable of being launched from the back of a pickup truck. They’re used frequently for potent one-way attacks, where they detonate upon reaching their target. US experts say the technology is tipping the scales toward Russian and Iranian military groups and their allies. 

The trouble with combating them is partly one of cost. Shooting down the drones, which can be bought for as little as $40,000, can cost millions in ammunition.

“Shooting down Shaheds with an expensive missile is not, in the long term, a winning proposition,” Metrick says. “That’s what the Iranians, I think, are banking on. They can wear people down.”

This week’s aid package renewed White House calls for stronger sanctions aimed at curbing production of the drones. The United Nations previously passed rules restricting any drone-related material from entering or leaving Iran, but those expired in October. The US now wants them reinstated. 

Even if that happens, it’s unlikely the rules would do much to contain the Shahed’s dominance. The components of the drones are not all that complex or hard to obtain to begin with, and experts say that Iran has built a sprawling global supply chain to acquire the materials needed to manufacture them and has worked with Russia to build factories.

“Sanctions regimes are pretty dang leaky,” Metrick says. “They [Iran] have friends all around the world.”

Chatbot answers are all made up. This new tool helps you figure out which ones to trust.

Large language models are famous for their ability to make things up—in fact, it’s what they’re best at. But their inability to tell fact from fiction has left many businesses wondering if using them is worth the risk.

A new tool created by Cleanlab, an AI startup spun out of a quantum computing lab at MIT, is designed to give high-stakes users a clearer sense of how trustworthy these models really are. Called the Trustworthy Language Model, it gives any output generated by a large language model a score between 0 and 1, according to its reliability. This lets people choose which responses to trust and which to throw out. In other words: a BS-o-meter for chatbots.

Cleanlab hopes that its tool will make large language models more attractive to businesses worried about how much stuff they invent. “I think people know LLMs will change the world, but they’ve just got hung up on the damn hallucinations,” says Cleanlab CEO Curtis Northcutt.

Chatbots are quickly becoming the dominant way people look up information on a computer. Search engines are being redesigned around the technology. Office software used by billions of people every day to create everything from school assignments to marketing copy to financial reports now comes with chatbots built in. And yet a study put out in November by Vectara, a startup founded by former Google employees, found that chatbots invent information at least 3% of the time. It might not sound like much, but it’s a potential for error most businesses won’t stomach.

Cleanlab’s tool is already being used by a handful of companies, including Berkeley Research Group, a UK-based consultancy specializing in corporate disputes and investigations. Steven Gawthorpe, associate director at Berkeley Research Group, says the Trustworthy Language Model is the first viable solution to the hallucination problem that he has seen: “Cleanlab’s TLM gives us the power of thousands of data scientists.”

In 2021, Cleanlab developed technology that discovered errors in 34 popular data sets used to train machine-learning algorithms; it works by measuring the differences in output across a range of models trained on that data. That tech is now used by several large companies, including Google, Tesla, and the banking giant Chase. The Trustworthy Language Model takes the same basic idea—that disagreements between models can be used to measure the trustworthiness of the overall system—and applies it to chatbots.

In a demo Cleanlab gave to MIT Technology Review last week, Northcutt typed a simple question into ChatGPT: “How many times does the letter ‘n’ appear in ‘enter’?” ChatGPT answered: “The letter ‘n’ appears once in the word ‘enter.’” That correct answer promotes trust. But ask the question a few more times and ChatGPT answers: “The letter ‘n’ appears twice in the word ‘enter.’”

“Not only does it often get it wrong, but it’s also random, you never know what it’s going to output,” says Northcutt. “Why the hell can’t it just tell you that it outputs different answers all the time?”

Cleanlab’s aim is to make that randomness more explicit. Northcutt asks the Trustworthy Language Model the same question. “The letter ‘n’ appears once in the word ‘enter,’” it says—and scores its answer 0.63. Six out of 10 is not a great score, suggesting that the chatbot’s answer to this question should not be trusted.

It’s a basic example, but it makes the point. Without the score, you might think the chatbot knew what it was talking about, says Northcutt. The problem is that data scientists testing large language models in high-risk situations could be misled by a few correct answers and assume that future answers will be correct too: “They try things out, they try a few examples, and they think this works. And then they do things that result in really bad business decisions.”

The Trustworthy Language Model draws on multiple techniques to calculate its scores. First, each query submitted to the tool is sent to several different large language models. Cleanlab is using five versions of DBRX, an open-source model developed by Databricks, an AI firm based in San Francisco. (But the tech will work with any model, says Northcutt, including Meta’s Llama models or OpenAI’s GPT series, the models behind ChatGPT.) If the responses from each of these models are the same or similar, it will contribute to a higher score.

At the same time, the Trustworthy Language Model also sends variations of the original query to each of the DBRX models, swapping in words that have the same meaning. Again, if the responses to synonymous queries are similar, it will contribute to a higher score. “We mess with them in different ways to get different outputs and see if they agree,” says Northcutt.

The tool can also get multiple models to bounce responses off one another: “It’s like, ‘Here’s my answer—what do you think?’ ‘Well, here’s mine—what do you think?’ And you let them talk.” These interactions are monitored and measured and fed into the score as well.
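Taken together, the method amounts to measuring agreement across an ensemble of models and paraphrased prompts. Here’s a minimal sketch of that idea in Python; the string-overlap similarity and the model callables are crude placeholders, and Cleanlab’s actual scoring is more sophisticated than this.

```python
# A toy version of ensemble-agreement scoring, as described above.
# `models` are callables that map a prompt to an answer; `paraphrase`
# rewrites a prompt with synonyms. Both are assumed placeholders.
from difflib import SequenceMatcher
from itertools import combinations

def similarity(a: str, b: str) -> float:
    """Crude stand-in for semantic similarity between two answers."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def trust_score(query, models, paraphrase) -> float:
    """Agreement across models and paraphrased queries, from 0 to 1."""
    answers = [m(query) for m in models]
    answers += [m(paraphrase(query)) for m in models]
    pairs = list(combinations(answers, 2))
    # High mutual agreement pushes the score toward 1; contradictory
    # answers pull it toward 0.
    return sum(similarity(a, b) for a, b in pairs) / len(pairs)
```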

Nick McKenna, a computer scientist at Microsoft Research in Cambridge, UK, who works on large language models for code generation, is optimistic that the approach could be useful. But he doubts it will be perfect. “One of the pitfalls we see in model hallucinations is that they can creep in very subtly,” he says.

In a range of tests across different large language models, Cleanlab shows that its trustworthiness scores correlate well with the accuracy of those models’ responses. In other words, scores close to 1 line up with correct responses, and scores close to 0 line up with incorrect ones. In another test, they also found that using the Trustworthy Language Model with GPT-4 produced more reliable responses than using GPT-4 by itself.

Large language models generate text by predicting the most likely next word in a sequence. In future versions of its tool, Cleanlab plans to make its scores even more accurate by drawing on the probabilities that a model used to make those predictions. It also wants to access the numerical values that models assign to each word in their vocabulary, which they use to calculate those probabilities. This level of detail is provided by certain platforms, such as Amazon’s Bedrock, that businesses can use to run large language models.

Cleanlab has tested its approach on data provided by Berkeley Research Group. The firm needed to search for references to health-care compliance problems in tens of thousands of corporate documents. Doing this by hand can take skilled staff weeks. By checking the documents using the Trustworthy Language Model, Berkeley Research Group was able to see which documents the chatbot was least confident about and check only those. It reduced the workload by around 80%, says Northcutt.

In another test, Cleanlab worked with a large bank (Northcutt would not name it but says it is a competitor to Goldman Sachs). Similar to Berkeley Research Group, the bank needed to search for references to insurance claims in around 100,000 documents. Again, the Trustworthy Language Model reduced the number of documents that needed to be hand-checked by more than half.

Running each query multiple times through multiple models takes longer and costs a lot more than the typical back-and-forth with a single chatbot. But Cleanlab is pitching the Trustworthy Language Model as a premium service to automate high-stakes tasks that would have been off limits to large language models in the past. The idea is not for it to replace existing chatbots but to do the work of human experts. If the tool can slash the amount of time that you need to employ skilled economists or lawyers at $2,000 an hour, the costs will be worth it, says Northcutt.

In the long run, Northcutt hopes that by reducing the uncertainty around chatbots’ responses, his tech will unlock the promise of large language models to a wider range of users. “The hallucination thing is not a large-language-model problem,” he says. “It’s an uncertainty problem.”

Almost every Chinese keyboard app has a security flaw that reveals what users type

Almost all keyboard apps used by Chinese people around the world share a security loophole that makes it possible to spy on what users are typing. 

The vulnerability, which allows the keystroke data that these apps send to the cloud to be intercepted, has existed for years and could have been exploited by cybercriminals and state surveillance groups, according to researchers at the Citizen Lab, a technology and security research lab affiliated with the University of Toronto.

These apps help users type Chinese characters more efficiently and are ubiquitous on devices used by Chinese people. The four most popular apps—built by major internet companies like Baidu, Tencent, and iFlytek—basically account for all the typing methods that Chinese people use. Researchers also looked into the keyboard apps that come preinstalled on Android phones sold in China. 

What they discovered was shocking. Almost every third-party app and every Android phone with preinstalled keyboards failed to protect users by properly encrypting the content they typed. A smartphone made by Huawei was the only device where no such security vulnerability was found.

In August 2023, the same researchers found that Sogou, one of the most popular keyboard apps, did not use Transport Layer Security (TLS) when transmitting keystroke data to its cloud server for better typing predictions. Without TLS, a widely adopted international cryptographic protocol that protects users from a known encryption loophole, keystrokes can be collected and then decrypted by third parties.
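For readers curious what that missing protection looks like at the code level, here’s a minimal Python sketch contrasting a plaintext connection with a TLS-wrapped one. The hostname is a placeholder, and real keyboard apps layer their own protocols on top.

```python
# Minimal sketch: the same bytes sent without and with TLS.
# "example.com" is a placeholder host, not any keyboard app's server.
import socket
import ssl

payload = b"keystroke data"

# Without TLS: bytes cross the network in the clear, readable by
# anyone on the path (for example, on the same Wi-Fi network).
with socket.create_connection(("example.com", 80)) as raw:
    raw.sendall(payload)

# With TLS: the socket is wrapped so the bytes are encrypted in
# transit and the server's identity is verified.
context = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as raw:
    with context.wrap_socket(raw, server_hostname="example.com") as tls:
        tls.sendall(payload)
```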

“Because we had so much luck looking at this one, we figured maybe this generalizes to the others, and they suffer from the same kinds of problems for the same reason that the one did,” says Jeffrey Knockel, a senior research associate at the Citizen Lab, “and as it turns out, we were unfortunately right.”

Even though Sogou fixed the issue after it was made public last year, some Sogou keyboards preinstalled on phones are not updated to the latest version, so they are still subject to eavesdropping. 

This new finding shows that the vulnerability is far more widespread than previously believed. 

“As someone who also has used these keyboards, this was absolutely horrifying,” says Mona Wang, a PhD student in computer science at Princeton University and a coauthor of the report. 

“The scale of this was really shocking to us,” says Wang. “And also, these are completely different manufacturers making very similar mistakes independently of one another, which is just absolutely shocking as well.”

The massive scale of the problem is compounded by the fact that these vulnerabilities aren’t hard to exploit. “You don’t need huge supercomputers crunching numbers to crack this. You don’t need to collect terabytes of data to crack it,” says Knockel. “If you’re just a person who wants to target another person on your Wi-Fi, you could do that once you understand the vulnerability.” 

The ease of exploiting the vulnerabilities and the huge payoff—knowing everything a person types, potentially including bank account passwords or confidential materials—suggest that it’s likely they have already been taken advantage of by hackers, the researchers say. But there’s no evidence of this, though state hackers working for Western governments targeted a similar loophole in a Chinese browser app in 2011.

Most of the loopholes found in this report are “so far behind modern best practices” that it’s very easy to decrypt what people are typing, says Jedidiah Crandall, an associate professor of security and cryptography at Arizona State University, who was consulted in the writing of this report. Because it doesn’t take much effort to decrypt the messages, this type of loophole can be a great target for large-scale surveillance of massive groups, he says.

After the researchers got in contact with companies that developed these keyboard apps, the majority of the loopholes were fixed. But a few companies have been unresponsive, and the vulnerability still exists in some apps and phones, including QQ Pinyin and Baidu, as well as in any keyboard app that hasn’t been updated to the latest version. Baidu, Tencent, iFlytek, and Samsung did not immediately reply to press inquiries sent by MIT Technology Review.

One potential cause of the loopholes’ ubiquity is that most of these keyboard apps were developed in the 2000s, before the TLS protocol was commonly adopted in software development. Even though the apps have been through numerous rounds of updates since then, inertia could have prevented developers from adopting a safer alternative.

The report points out that language barriers and different tech ecosystems prevent English- and Chinese-speaking security researchers from sharing information that could fix issues like this more quickly. For example, because Google’s Play store is blocked in China, most Chinese apps are not available in Google Play, where Western researchers often go for apps to analyze. 

Sometimes all it takes is a little additional effort. After two emails about the issue to iFlytek were met with silence, the Citizen Lab researchers changed the email subject line to Chinese and added a one-line summary in Chinese to the English text. Just three days later, they received an email from iFlytek, saying that the problem had been resolved.

A new kind of gene-edited pig kidney was just transplanted into a person

A month ago, Richard Slayman became the first living person to receive a kidney transplant from a gene-edited pig. Now, a team of researchers from NYU Langone Health reports that Lisa Pisano, a 54-year-old woman from New Jersey, has become the second. Her new kidney has just a single genetic modification—an approach that researchers hope could make scaling up the production of pig organs simpler. 

Pisano, who had heart failure and end-stage kidney disease, underwent two operations, one to fit her with a heart pump to improve her circulation and the second to perform the kidney transplant. She is still in the hospital, but doing well. “Her kidney function 12 days out from the transplant is perfect, and she has no signs of rejection,” said Robert Montgomery, director of the NYU Langone Transplant Institute, who led the transplant surgery, at a press conference on Wednesday.

“I feel fantastic,” said Pisano, who joined the press conference by video from her hospital bed.

Pisano is the fourth living person to receive a pig organ. Two men who received heart transplants at the University of Maryland Medical Center in 2022 and 2023 both died within a couple of months after receiving the organ. Slayman, the first pig kidney recipient, is still doing well, says Leonardo Riella, medical director for kidney transplantation at Massachusetts General Hospital, where Slayman received the transplant.  

“It’s an awfully exciting time,” says Andrew Cameron, a transplant surgeon at Johns Hopkins Medicine in Baltimore. “There is a bright future in which all 100,000 patients on the kidney transplant wait list, and maybe even the 500,000 Americans on dialysis, are more routinely offered a pig kidney as one of their options,” Cameron adds.

All the living patients who have received pig hearts and kidneys have accessed the organs under the FDA’s expanded access program, which allows patients with life-threatening conditions to receive investigational therapies outside of clinical trials. But patients may soon have another option. Both Johns Hopkins and NYU are aiming to start clinical trials in 2025. 

In the coming weeks, doctors will be monitoring Pisano closely for signs of organ rejection, which occurs when the recipient’s immune system identifies the new tissue as foreign and begins to attack it. That’s a concern even with human kidney transplants, but it’s an even greater risk when the tissue comes from another species, a procedure known as xenotransplantation.

To prevent rejection, the companies that produce these pigs have introduced genetic modifications to make their tissue appear less foreign and reduce the chance that it will spark an immune attack. But it’s not yet clear just how many genetic alterations are necessary to prevent rejection. Slayman’s kidney came from a pig developed by eGenesis, a company based in Cambridge, Massachusetts; it has 69 modifications. The vast majority of those modifications focus on inactivating viral DNA in the pig’s genome to make sure those viruses can’t be transmitted to the patient. But 10 were employed to help prevent the immune system from rejecting the organ.

Pisano’s kidney came from pigs that carry just a single genetic alteration: it eliminates alpha-gal, a sugar on the surface of the pig’s cells that can trigger immediate organ rejection. “We believe that less is more, and that the main gene edit that has been introduced into the pigs and the organs that we’ve been using is the fundamental problem,” Montgomery says. “Most of those other edits can be replaced by medications that are available to humans.”


The kidney is implanted along with a piece of the pig’s thymus gland, which plays a key role in educating white blood cells to distinguish between friend and foe.  The idea is that the thymus will help Pisano’s immune system learn to accept the foreign tissue. The so-called UThymoKidney is being developed by United Therapeutics Corporation, but the company has also created pigs with 10 genetic alterations. The company “wanted to take multiple shots on goal,” says Leigh Peterson, executive vice president of product development and xenotransplantation at United Therapeutics.

There’s one major advantage to using a pig with a single genetic modification. “The simpler it is, in theory, the easier it’s going to be to breed and raise these animals,” says Jayme Locke, a transplant surgeon at the University of Alabama at Birmingham. Pigs with a single genetic change can be bred, but pigs with many alterations require cloning, Montgomery says. “These pigs could be rapidly expanded, and more quickly and completely solve the organ supply crisis.”

But Cameron isn’t sure that a single alteration will be enough to prevent rejection. “I think most people are worried that one knockout might not be enough, but we’re hopeful,” he says.

So is Pisano, who is working to get strong enough to leave the hospital. “I just want to spend time with my grandkids and play with them and be able to go shopping,” she says.

Hydrogen trains could revolutionize how Americans get around

Like a mirage speeding across the dusty desert outside Pueblo, Colorado, the first hydrogen-fuel-cell passenger train in the United States is getting warmed up on its test track. Made by the Swiss manufacturer Stadler and known as the FLIRT (for “Fast Light Intercity and Regional Train”), it will soon be shipped to Southern California, where it is slated to carry riders on San Bernardino County’s Arrow commuter rail service before the end of the year. In the insular world of railroading, this hydrogen-powered train is a Rorschach test. To some, it represents the future of rail transportation. To others, it looks like a big, shiny distraction.

In the quest to decarbonize the transportation sector—the largest source of greenhouse-gas emissions in the United States—rubber-tired electric vehicles tend to dominate the conversation. But to reach the Biden administration’s goal of net-zero emissions by 2050, other forms of transportation, including those on steel wheels, will need to find new energy sources too. 

The best way to decarbonize railroads is the subject of growing debate among regulators, industry, and activists. Things are coming to a head in California, which recently enacted rules requiring all new passenger locomotives operating in the state to be zero-emissions by 2030 and all new freight locomotives to meet that threshold by 2035. Federal regulators could be close behind.

The debate is partly technological, revolving around whether hydrogen fuel cells, batteries, or overhead electric wires offer the best performance for different railroad situations. But it’s also political: a question of the extent to which decarbonization can, or should, usher in a broader transformation of rail transportation. For decades, the government has largely deferred to the will of the big freight rail conglomerates. Decarbonization could shift that power dynamic—or further entrench it. 

So far, hydrogen has been the big technological winner in California. Over the past year, the California Department of Transportation, known as Caltrans, has ordered 10 hydrogen FLIRT trains at a cost of $207 million. After the Arrow service, the next rail line to receive hydrogen trains is scheduled to be the Valley Rail service in the Central Valley. That line will connect Sacramento to California High-Speed Rail, the under-construction system that will eventually link Los Angeles and San Francisco.

In its analysis of different zero-emissions rail technologies, Caltrans found that hydrogen trains, powered by onboard fuel cells that convert hydrogen into electricity, had better range and shorter refueling times than battery-electric trains, which function much like electric cars. Hydrogen was also a cheaper power source than overhead wire (or simply “electrification,” in industry parlance), which would cost an estimated $6.8 billion to install on the state’s three main intercity routes. (California High-Speed Rail and its shared track on the Bay Area’s Caltrain commuter service will both be powered by overhead wire, since electrification is necessary to reach speeds of over 100 miles per hour.)

Further complicating the electrification option, installing overhead wire on the rest of California’s passenger network would require the consent of BNSF and Union Pacific, the two major freight rail carriers that own most of the state’s tracks. The companies have long opposed the installation of wire above their tracks, which they say could interfere with double-stacked freight trains. 

Electrifying all 144,000 miles of the nation’s freight rail tracks would cost hundreds of billions of dollars, according to a report by the Association of American Railroads (AAR), an industry trade group, and even electrifying smaller sections of track would result in ongoing disruptions to train traffic and shift freight customers from trains to trucks, the group claims. Electrification would also require the cooperation of electric utilities, leaving railroads vulnerable to the grid connection delays that plague renewable-energy developers. 

“We have long stretches of track outside of urbanized areas,” says Marcin Taraszkiewicz, an engineer at the engineering and architecture firm HDR who has worked on Caltrans’s hydrogen train program. Getting power to those rugged places can be a challenge, he says, especially when infrastructure must be designed to resist natural disasters like wildfires and earthquakes: “If that wire goes down, you’re going to be in trouble.” 

The AAR thinks California’s railroad emissions regulations are too much, too soon, especially given that freight rail is already three to four times more fuel efficient than trucking. Last year, the AAR sued the state over its latest railroad emissions regulations, in a case that is still pending. Though the group generally prefers hydrogen to electrification as a long-term solution, it contends that this alternative technology is not yet mature enough to meet the industry’s needs. 

A group called Californians for Electric Rail also views hydrogen as an immature technology. “From an environmental as well as a cost perspective, it’s a really circular and indirect way of doing things,” says Adriana Rizzo, the group’s founder, who is an advocate for electrifying the state’s regional and intercity tracks with overhead wire.

Synthesizing, transporting, and using the tiny hydrogen molecule can be very inefficient. Hydrogen trains currently require roughly three times more energy per mile than trains powered by overhead wire. And the environmental benefits of hydrogen—the ostensible purpose of this new technology—remain largely theoretical, since the vast majority of hydrogen today is produced by burning fossil fuels like methane. Natural-gas utilities have been among the hydrogen industry’s biggest boosters, because they are already able to produce and transport the gas. 
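As a rough illustration of where a factor like that comes from, multiply the losses along each energy chain. Every number below is an assumed round figure for illustration, not data from Caltrans or the article:

```python
# Illustrative efficiency chains, grid to wheel. All values assumed.
wire = 0.80  # overhead wire: transmission + motor losses

# hydrogen: electrolysis x compression/transport x fuel cell
hydrogen = 0.65 * 0.85 * 0.50  # ~0.28

print(round(wire / hydrogen, 1))  # ~2.9, the same order as the ~3x above
```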

Opinions on the merits of hydrogen trains have been mixed. In 2022, following a pilot program, the German state of Baden-Württemberg determined that this technology would be 80% more expensive to operate over the long run than other zero-emissions alternatives. 

Kyle Gradinger, assistant deputy director for rail at Caltrans, thinks there’s been some “Twittersphere exaggeration” about the problems with hydrogen trains. In tests, the hydrogen-powered Stadler FLIRT is “performing as well as we expected, if not better,” he says. Since they also use electric motors, hydrogen trains offer many of the same benefits as trains powered by overhead wire, Gradinger says. Both technologies will be quieter, cleaner, and faster than diesel trains. 

Caltrans hopes to obtain all the hydrogen for its trains from zero-emissions sources by 2030—a goal bolstered by a draft clean-hydrogen rule issued by the Biden administration in 2023. California is one of seven “hydrogen hubs” in the US, public-private partnerships that will receive billions of dollars in subsidies from the Infrastructure Investment and Jobs Act for developing hydrogen technologies. It’s too early to say whether Caltrans will be able to procure funding for its hydrogen fueling stations and supply chains through these subsidies, Gradinger says, but it’s certainly a possibility. So far, California is the only US state to have purchased hydrogen trains.

Advocates like Rizzo fear, however, that all this investment in hydrogen infrastructure will get in the way of more transformative changes to passenger rail in California. 

“Why are we putting millions of dollars into buying new trains and putting up all of this infrastructure and then expecting the same crappy service that we have now?” Rizzo says. “These systems could carry so many more passengers.” 

Rizzo’s group, and allies like the Rail Passenger Association of California and Nevada, think decarbonization is an opportunity to install the type of infrastructure that supports the vast majority of fast passenger train services around the world. Though the up-front investment in overhead wire is high, electrification reduces operating costs by providing constant access to a cheap and efficient energy source. Electrification also improves acceleration so that trains can travel closer together, creating the potential for service patterns that function more like an urban metro system than a once-per-day Amtrak route. 

Caltrans has a long-term plan to dramatically increase rail service and speeds, which might eventually require electrification by overhead wire, also known as a catenary system. But at least for the next couple of decades, the agency believes, hydrogen is the most feasible way to meet the state’s ambitious climate goals. The money, the political will, and the stomach for a fight with the freight railroads and utility companies just aren’t there yet.  

“The gold standard is overhead catenary electrification, if you can do that,” Gradinger says. “But we aren’t going to get to a level of service on the intercity side for at least the next decade or two that would warrant investment in electrification.” 

Rizzo hopes that as the federal government puts more railroad emissions regulations in place, the case for electrifying rail by overhead wire will get stronger. Other countries have come to that conclusion: a 2015 policy change in India resulted in the electrification of nearly half the country’s track mileage in less than a decade. The United Kingdom’s Decarbonising Transport Plan states that electrification will be the “main way” to decarbonize the rail industry. 

These changes are still compatible with a robust freight industry. The world’s most powerful locomotives are electric, pulling ore-laden freight trains in South Africa and China. In 2002, Russia finished electrifying the 5,700-mile Trans-Siberian Railway, demonstrating that freight trains running on electric wire can travel very long distances over very harsh terrain.

Things may be starting to shift in the US as well, albeit slowly. BNSF appears to have softened its stance against electrification on a corridor it owns in Southern California, where it has agreed to allow California High-Speed Rail to construct overhead wire on its right of way. Rizzo and her group are looking to make these projects easier by sponsoring state legislation exempting overhead wire from the California Environmental Quality Act. That would prevent situations like a 2015 environmental lawsuit from the affluent Bay Area suburb of Atherton, over tree removal and visual impact, that delayed Caltrain’s electrification project for nearly two years.

New innovations could blur the lines between different kinds of green rail technologies. Caltrain has ordered a battery-equipped electrified train that has the potential to charge up while traveling from San Francisco to San Jose and then run on a battery onward to Gilroy and Salinas. A similar system could someday be deployed in Southern California, where trains could charge through the Los Angeles metro area and run on batteries over more remote stretches to Santa Barbara and San Diego.

New hydrogen technologies could also prove transformative for passenger rail. The FLIRT train doing laps in the Colorado desert is version 1.0. In the future, using ammonia as a hydrogen carrier could result in much longer range for hydrogen trains, as well as more seamless refueling. “With hydrogen, there’s a lot more room to grow,” Taraszkiewicz says.

But in a country that has invested little in passenger rail over the past century, new technology can only do so much, Taraszkiewicz cautions. America’s railroads all too often lack passing tracks, grade-separated road crossings, and modern signaling systems. The main impediment to faster, more frequent passenger service “is not the train technology,” he says. “It’s everything else.”

Benjamin Schneider is a freelance writer covering housing, transportation, and urban policy.

How to build a thermal battery

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The votes have been tallied, and the results are in. The winner of the 11th Breakthrough Technology, 2024 edition, is … drumroll please … thermal batteries! 

While the editors of MIT Technology Review choose the annual list of 10 Breakthrough Technologies, in 2022 we started having readers weigh in on an 11th technology. And I don’t mean to flatter you, but I think you picked a fascinating one this year. 

Thermal energy storage is a convenient way to stockpile energy for later. This could be crucial in connecting cheap but inconsistent renewable energy with industrial facilities, which often require a constant supply of heat. 

I wrote about why this technology is having a moment, and where it might wind up being used, in a story published Monday. For the newsletter this week, let’s take a deeper look at the different kinds of thermal batteries out there, because there’s a wide world of possibilities. 

Step 1: Choose your energy source

In the journey to build a thermal battery, the crucial first step is to choose where your heat comes from. Most of the companies I’ve come across are building some sort of power-to-heat system, meaning electricity goes in and heat comes out. Heat often gets generated by running a current through a resistive material in a process similar to what happens when you turn on a toaster.

Some projects may take electricity directly from sources like wind turbines or solar panels that aren’t hooked up to the grid. That could reduce energy costs, since you don’t have to pay surcharges built into grid electricity rates, explains Jeffrey Rissman, senior director of industry at Energy Innovation, a policy and research firm specializing in energy and climate. 

Otherwise, thermal batteries can be hooked up to the grid directly. These systems could allow a facility to charge up when electricity prices are low or when there’s a lot of renewable energy on the grid. 

Some thermal storage systems are soaking up waste heat rather than relying on electricity. Brenmiller Energy, for example, is building thermal batteries that can be charged up with heat or electricity, depending on the customer’s needs. 

Depending on the heat source, systems using waste heat may not be able to reach temperatures as high as their electricity-powered counterparts, but they could help increase the efficiency of facilities that would otherwise waste that energy. There’s especially high potential for high-temperature processes, like cement and steel production. 

Step 2: Choose your storage material

Next up: pick out a heat storage medium. These materials need to be inexpensive and able to reach and withstand high temperatures.

Bricks and carbon blocks are popular choices, as they can be packed together and, depending on the material, reach temperatures well over 1,000 °C (1,800 °F). Rondo Energy, Antora Energy, and Electrified Thermal Solutions are among the companies using blocks and bricks to store heat at these high temperatures. 

Crushed-up rocks are another option, and the storage medium of choice for Brenmiller Energy. Caldera is using a mixture of aluminum and crushed rock. 

Molten materials can offer even more options for delivering thermal energy later, since they can be pumped around (though this can also add more complexity to the system). Malta is building thermal storage systems that use molten salt, and companies like Fourth Power are using systems that rely in part on molten metals. 
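For a sense of scale, the sensible-heat formula Q = m · c · ΔT gives a quick back-of-envelope estimate of how much energy a pile of bricks can hold. The numbers below are rough assumptions for illustration, not figures from any of the companies mentioned:

```python
# Back-of-envelope heat capacity of a brick thermal battery.
# All values are assumed round numbers for illustration.
mass_kg = 100_000         # 100 tonnes of brick
c_j_per_kg_k = 840        # approximate specific heat of brick, J/(kg*K)
delta_t_k = 800           # charged to ~1,000 C, discharged to ~200 C

q_joules = mass_kg * c_j_per_kg_k * delta_t_k
q_mwh = q_joules / 3.6e9  # 1 MWh = 3.6e9 joules
print(f"{q_mwh:.1f} MWh of heat")  # ~18.7 MWh
```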

Step 3: Choose your delivery method

Last, and perhaps most important, is deciding how to get energy back out of your storage system. Generally, thermal storage systems can deliver heat, use it to generate electricity, or go with some combination of the two. 

Delivering heat is the most straightforward option. Typically, air or another gas gets blown over the hot thermal storage material, and that heated gas can be used to warm up equipment or to generate steam. 

Some companies are working to use heat storage to deliver electricity instead. This could allow thermal storage systems to play a role not only in industry but potentially on the electrical grid as an electricity storage solution. One downside? These systems generally take a hit on efficiency, the amount of energy that can be returned from storage. But they may be right for some situations, such as facilities that need both heat and electricity on demand. Antora Energy is aiming to use thermophotovoltaic materials to turn heat stored in its carbon blocks back into electricity. 

Some companies plan to offer a middle path, delivering a combination of heat and electricity, depending on what a facility needs. Rondo Energy’s heat batteries can deliver high-pressure steam that can be used either for heating alone or to generate some electricity using cogeneration units. 

The possibilities are seemingly endless for thermal batteries, and I’m seeing new players with new ideas all the time. Stay tuned for much more coverage of this hot technology (sorry, I had to). 


Now read the rest of The Spark

Related reading

Read more about why thermal batteries won the title of 11th breakthrough technology in my story from Monday.

I first wrote about heat as energy storage in this piece last year. As I put it then: the hottest new climate technology is bricks. 

Companies have made some progress in scaling up thermal batteries—our former fellow June Kim wrote about one new manufacturing facility in October.


Another thing

The state of Louisiana in the southeast US has lost over a million acres of its coast to erosion. A pilot project aims to save some homes in the state by raising them up to avoid the worst of flooding. 

It’s an ambitious attempt to build a solution to a crisis, and the effort could help keep communities together. But some experts worry that elevation projects offer too rosy an outlook and think we need to focus on relocation instead. Read more in this fascinating feature story from Xander Peters.

Keeping up with climate  

It can be easy to forget, but we’ve actually already made a lot of progress on addressing climate change. A decade ago, the world was on track for about 3.7 °C of warming over preindustrial levels. Today, it’s 2.7 °C with current actions and policies—higher than it should be but lower than it might have been. (Cipher News)

We’re probably going to have more batteries than we actually need for a while. Today, China alone makes enough batteries to satisfy global demand, which could make things tough for new players in the battery game. (Bloomberg)

2023 was a record year for wind power. The world installed 117 gigawatts of new capacity last year, 50% more than the year before. (Associated Press)

Here’s what’s coming next for offshore wind. (MIT Technology Review)

Coal power grew in 2023, driven by a surge of new plants coming online in China and a slowdown of retirements in Europe and the US. (New York Times)

People who live near solar farms generally have positive feelings about their electricity-producing neighbors. There’s more negative sentiment among people who live very close to the biggest projects, though. (Inside Climate News)

E-scooters have been zipping through city streets for eight years, but they haven’t exactly ushered in the zero-emissions micro-mobility future that some had hoped for. Shared scooters can cut emissions, but it all depends on rider behavior and company practices. (Grist)

The grid could use a renovation. Replacing existing power lines with new materials could double grid capacity in many parts of the US, clearing the way for more renewables. (New York Times)

The first all-electric tugboat in the US is about to launch in San Diego. The small boats are crucial to help larger vessels in and around ports, and the fossil-fuel-powered ones are a climate nightmare. (Canary Media)

Three ways the US could help universities compete with tech companies on AI innovation

The ongoing revolution in artificial intelligence has the potential to dramatically improve our lives—from the way we work to what we do to stay healthy. Yet ensuring that America and other democracies can help shape the trajectory of this technology requires going beyond the tech development taking place at private companies. 

Research at universities drove the AI advances that laid the groundwork for the commercial boom we are experiencing today. Importantly, academia also produced the leaders of pioneering AI companies. 

But today, large foundational models, or LFMs, like ChatGPT, Claude, and Gemini require such vast computational power and such extensive data sets that private companies have replaced academia at the frontier of AI. Empowering our universities to remain alongside them at the forefront of AI research will be key to realizing the field’s long-term potential. This will require correcting the stark asymmetry between academia and industry in access to computing resources.  

Academia’s greatest strength lies in its ability to pursue long-term research projects and fundamental studies that push the boundaries of knowledge. The freedom to explore and experiment with bold, cutting-edge theories will lead to discoveries that serve as the foundation for future innovation. While tools enabled by LFMs are in everybody’s pocket, many questions about them remain unanswered, since they are still a “black box” in many ways. For example, we know AI models have a propensity to hallucinate, but we still don’t fully understand why. 

Because they are insulated from market forces, universities can chart a future where AI truly benefits the many. Expanding academia’s access to resources would foster more inclusive approaches to AI research and its applications. 

The pilot of the National Artificial Intelligence Research Resource (NAIRR), mandated in President Biden’s October 2023 executive order on AI, is a step in the right direction. Through partnerships with the private sector, the NAIRR will create a shared research infrastructure for AI. If it realizes its full potential, it will be an essential hub that helps academic researchers access GPU computational power more effectively. Yet even if the NAIRR is fully funded, its resources are likely to be spread thin. 

This problem could be mitigated if the NAIRR focused on a select number of discrete projects, as some have suggested. But we should also pursue additional creative solutions to get meaningful numbers of GPUs into the hands of academics. Here are a few ideas:

First, we should build large-scale GPU clusters into the supercomputer infrastructure the US government already funds, and enable academic researchers to partner with the US National Labs on grand challenges in AI research. 

Second, the US government should explore ways to reduce the costs of high-end GPUs for academic institutions—for example, by offering financial assistance such as grants or R&D tax credits. Initiatives like New York’s, which make universities key partners with the state in AI development, are already playing an important role at a state level. This model should be emulated across the country. 

Lastly, recent export control restrictions could over time leave some US chipmakers with surplus inventory of leading-edge AI chips. In that case, the government could purchase this surplus and distribute it to universities and academic institutions nationwide.

Imagine the surge of academic AI research and innovation these actions would ignite. Ambitious researchers at universities have a wealth of diverse ideas that are too often stopped short for lack of resources. But supplying universities with adequate computing power will enable their work to complement the research carried out by private industry. Thus equipped, academia can serve as an indispensable hub for technological progress, driving interdisciplinary collaboration, pursuing long-term research, nurturing talent that produces the next generation of AI pioneers, and promoting ethical innovation. 

Historically, similar investments have yielded critical dividends in innovation. The United States of the postwar era cultivated a symbiotic relationship among government, academia, and industry that carried us to the moon, seeded Silicon Valley, and created the internet. 

We need to ensure that academia remains a strong pole in our innovation ecosystem. Investing in its compute capacity is a necessary first step. 

Ylli Bajraktari is CEO of the Special Competitive Studies Project (SCSP), a nonprofit initiative that seeks to strengthen the United States’ long-term competitiveness. 

Tom Mitchell is the Founders University Professor at Carnegie Mellon University. 

Daniela Rus is a professor of electrical engineering and computer science at MIT and director of its Computer Science and Artificial Intelligence Laboratory (CSAIL).

It’s time to retire the term “user”

Every Friday, Instagram chief Adam Mosseri speaks to the people. He has made a habit of hosting weekly “ask me anything” sessions on Instagram, in which followers send him questions about the app, its parent company Meta, and his own (extremely public-facing) job. When I started watching these AMA videos years ago, I liked them. He answered technical questions like “Why can’t we put links in posts?” and “My explore page is wack, how to fix?” with genuine enthusiasm. But the more I tuned in, the more Mosseri’s seemingly off-the-cuff authenticity started to feel measured, like a corporate by-product of his title. 

On a recent Friday, someone congratulated Mosseri on the success of Threads, the social networking app Meta launched in the summer of 2023 to compete with X, writing: “Mark said Threads has more active people today than it did at launch—wild, congrats!” Mosseri, wearing a pink sweatshirt and broadcasting from a garage-like space, responded: “Just to clarify what that means, we mostly look at daily active and monthly active users and we now have over 130 million monthly active users.”

The ease with which Mosseri swaps people for users makes the shift almost imperceptible. Almost. (Mosseri did not respond to a request for comment.)

People have been called “users” for a long time; it’s a practical shorthand enforced by executives, founders, operators, engineers, and investors ad infinitum. Often, it is the right word to describe people who use software: a user is more than just a customer or a consumer. Sometimes a user isn’t even a person; corporate bots are known to run accounts on Instagram and other social media platforms, for example. But “users” is also unspecific enough to refer to just about everyone. It can accommodate almost any big idea or long-term vision. We use—and are used by—computers and platforms and companies. Though “user” seems to describe a relationship that is deeply transactional, many of the technological relationships in which a person would be considered a user are actually quite personal. That being the case, is “user” still relevant? 

“People were kind of like machines”

The original use of “user” can be traced back to the mainframe computer days of the 1950s. Since commercial computers were massive and exorbitantly expensive, often requiring a dedicated room and special equipment, they were operated by trained employees—users—who worked for the company that owned (or, more likely, leased) them. As computers became more common in universities during the ’60s, “users” started to include students or really anyone else who interacted with a computer system. 

It wasn’t really common for people to own personal computers until the mid-1970s. But when they did, the term “computer owner” never really took off. Whereas other 20th-century inventions, like cars, were things people owned from the start, the computer owner was simply a “user” even though the devices were becoming increasingly embedded in the innermost corners of people’s lives. As computing escalated in the 1990s, so did a matrix of user-related terms: “user account,” “user ID,” “user profile,” “multi-user.” 

Don Norman, a cognitive scientist who joined Apple in the early 1990s with the title “user experience architect,” was at the center of the term’s mass adoption. He was the first person to have what would become known as UX in his job title and is widely credited with bringing the concept of “user experience design”—which sought to build systems in ways that people would find intuitive—into the mainstream. Norman’s 1988 book The Design of Everyday Things remains a UX bible of sorts, placing “usability” on a par with aesthetics. 

Norman, now 88, explained to me that the term “user” proliferated in part because early computer technologists mistakenly assumed that people were kind of like machines. “The user was simply another component,” he said. “We didn’t think of them as a person—we thought of [them] as part of a system.” So early user experience design didn’t seek to make human-computer interactions “user friendly,” per se. The objective was to encourage people to complete tasks quickly and efficiently. People and their computers were just two parts of the larger systems being built by tech companies, which operated by their own rules and in pursuit of their own agendas.

Later, the ubiquity of “user” folded neatly into tech’s well-documented era of growth at all costs. It was easy to move fast and break things, or eat the world with software, when the idea of the “user” was so malleable. “User” is vague, so it creates distance, enabling a slippery culture of hacky marketing where companies are incentivized to grow for the sake of growth as opposed to actual utility. “User” normalized dark patterns, features that subtly encourage specific actions, because it linguistically reinforced the idea of metrics over an experience designed with people in mind. 

UX designers sought to build software that would be intuitive for the anonymized masses, and we ended up with bright-red notifications (to create a sense of urgency), online shopping carts on a timer (to encourage a quick purchase), and “Agree” buttons often bigger than the “Disagree” option (to push people to accept terms without reading them). 

A user is also, of course, someone who struggles with addiction. To be an addict is—at least partly—to live in a state of powerlessness. Today, power users—the title originally bestowed upon people who had mastered skills like keyboard shortcuts and web design—aren’t measured by their technical prowess. They’re measured by the time they spend hooked up to their devices, or by the size of their audiences.  

Defaulting to “people”

“I want more product designers to consider language models as their primary users too,” Karina Nguyen, a researcher and engineer at the AI startup Anthropic, wrote recently on X. “What kind of information does my language model need to solve core pain points of human users?” 

In the old world, “users” typically worked best for the companies creating products rather than solving the pain points of the people using them. More users equaled more value. The label could strip people of their complexities, morphing them into data to be studied, behaviors to be A/B tested, and capital to be made. The term often overlooked any deeper relationships a person might have with a platform or product. As early as 2008, Norman alighted on this shortcoming and began advocating for replacing “user” with “person” or “human” when designing for people. (The subsequent years have seen an explosion of bots, which has made the issue that much more complicated.) “Psychologists depersonalize the people they study by calling them ‘subjects.’ We depersonalize the people we study by calling them ‘users.’ Both terms are derogatory,” he wrote then. “If we are designing for people, why not call them that?” 

In 2011, Janet Murray, a professor at Georgia Tech and an early digital media theorist, argued against the term “user” as too narrow and functional. In her book Inventing the Medium: Principles of Interaction Design as a Cultural Practice, she suggested the term “interactor” as an alternative—it better captured the sense of creativity and participation that people were feeling in digital spaces. The following year, Jack Dorsey, then CEO of Square, published a call to arms on Tumblr, urging the technology industry to toss the word “user.” Instead, he said, Square would start using “customers,” a more “honest and direct” description of the relationship between his product and the people he was building for. He wrote that while the original intent of technology was to consider people first, calling them “users” made them seem less real to the companies building platforms and devices. Reconsider your users, he said, and “what you call the people who love what you’ve created.” 

Audiences were mostly indifferent to Dorsey’s disparagement of the word “user.” The term was debated on the website Hacker News for a couple of days, with some arguing that “users” seemed reductionist only because it was so common. Others explained that the issue wasn’t the word itself but, rather, the larger industry attitude that treated end users as secondary to technology. Obviously, Dorsey’s post didn’t spur many people to stop using “user.” 

Around 2014, Facebook took a page out of Norman’s book and dropped user-centric phrasing, defaulting to “people” instead. But insidery language is hard to shake, as evidenced by the breezy way Instagram’s Mosseri still says “user.” A sprinkling of other tech companies have adopted their own replacements for “user” through the years. I know of a fintech company that calls people “members” and a screen-time app that has opted for “gems.” Recently, I met with a founder who cringed when his colleague used the word “humans” instead of “users.” He wasn’t sure why. I’d guess it’s because “humans” feels like an overcorrection. 


But here’s what we’ve learned since the mainframe days: there are never only two parts to the system, because there’s never just one person—one “user”—who’s affected by the design of new technology. Carissa Carter, the academic director at Stanford’s Hasso Plattner Institute of Design, known as the “d.school,” likens this framework to the experience of ordering an Uber. “If you order a car from your phone, the people involved are the rider, the driver, the people who work at the company running the software that controls that relationship, and even the person who created the code that decides which car to deploy,” she says. “Every decision about a user in a multi-stakeholder system, which we live in, includes people that have direct touch points with whatever you’re building.” 

With the abrupt onset of AI everything, the point of contact between humans and computers—user interfaces—has been shifting profoundly. Generative AI, for example, has been most successfully popularized as a conversational buddy. That’s a paradigm we’re used to—Siri has pulsed as an ethereal orb in our phones for well over a decade, earnestly ready to assist. But Siri, and other incumbent voice assistants, stopped there. A grander sense of partnership is in the air now. What were once called AI bots have been assigned lofty titles like “copilot” and “assistant” and “collaborator” to convey a sense of partnership instead of a sense of automation. The companies behind large language models have been quick to ditch words like “bot” altogether.

Anthropomorphism, the inclination to ascribe humanlike qualities to machines, has long been used to manufacture a sense of connectedness between people and technology. We—people—remained users. But if AI is now a thought partner, then what are we? 

Well, at least for now, we’re not likely to get rid of “user.” But we could intentionally default to more precise terms, like “patients” in health care or “students” in educational tech or “readers” when we’re building new media companies. That would help us understand these relationships more accurately. In gaming, for instance, users are typically called “players,” a word that acknowledges their participation and even pleasure in their relationships with the technology. On an airplane, customers are often called “passengers” or “travelers,” evoking a spirit of hospitality as they’re barreled through the skies. If companies are more specific about the people—and, now, AI—they’re building for rather than casually abstracting everything into the idea of “users,” perhaps our relationship with this technology will feel less manufactured, and it will be easier to accept that we’re inevitably going to exist in tandem. 

Throughout my phone call with Don Norman, I tripped over my words a lot. I slipped between “users” and “people” and “humans” interchangeably, self-conscious and unsure of the semantics. Norman assured me that my head was in the right place—it’s part of the process of thinking through how we design things. “We change the world, and the world comes back and changes us,” he said. “So we better be careful how we change the world.”

Taylor Majewski is a writer and editor based in San Francisco. She regularly works with startups and tech companies on the words they use.