How to build a thermal battery

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The votes have been tallied, and the results are in. The winner of the 11th Breakthrough Technology, 2024 edition, is … drumroll please … thermal batteries! 

While the editors of MIT Technology Review choose the annual list of 10 Breakthrough Technologies, in 2022 we started having readers weigh in on an 11th technology. And I don’t mean to flatter you, but I think you picked a fascinating one this year. 

Thermal energy storage is a convenient way to stockpile energy for later. This could be crucial in connecting cheap but inconsistent renewable energy with industrial facilities, which often require a constant supply of heat. 

I wrote about why this technology is having a moment, and where it might wind up being used, in a story published Monday. For the newsletter this week, let’s take a deeper look at the different kinds of thermal batteries out there, because there’s a wide world of possibilities. 

Step 1: Choose your energy source

In the journey to build a thermal battery, the crucial first step is to choose where your heat comes from. Most of the companies I’ve come across are building some sort of power-to-heat system, meaning electricity goes in and heat comes out. Heat often gets generated by running a current through a resistive material in a process similar to what happens when you turn on a toaster.
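
For a rough sense of scale, here's a minimal back-of-envelope sketch in Python. The power rating and charging window are made-up numbers for illustration, not figures from any company mentioned here.

```python
# Back-of-envelope: resistive ("toaster-style") charging of a thermal battery.
# The power rating and charging window are illustrative assumptions.

power_mw = 5.0   # electrical input to the resistive heating elements
hours = 8.0      # charging window (say, a stretch of cheap midday solar)

energy_mwh = power_mw * hours
print(f"Electrical energy in: {energy_mwh:.0f} MWh")
# Resistive (Joule) heating converts essentially all of that electricity to heat.
```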

Some projects may take electricity directly from sources like wind turbines or solar panels that aren’t hooked up to the grid. That could reduce energy costs, since you don’t have to pay surcharges built into grid electricity rates, explains Jeffrey Rissman, senior director of industry at Energy Innovation, a policy and research firm specializing in energy and climate. 

Otherwise, thermal batteries can be hooked up to the grid directly. These systems could allow a facility to charge up when electricity prices are low or when there’s a lot of renewable energy on the grid. 
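
The dispatch logic can be as simple as ranking the hours of the day by price and charging during the cheapest ones. Here's a minimal sketch with made-up hourly prices; a real system would also weigh demand forecasts and grid signals.

```python
# Pick the cheapest hours of the day to charge a grid-connected thermal battery.
# The hourly prices are invented ($/MWh), purely for illustration.
hourly_prices = [42, 38, 35, 33, 31, 30, 34, 45, 60, 72, 80, 85,
                 83, 78, 70, 65, 62, 75, 90, 95, 70, 55, 48, 44]

charge_hours_needed = 6
cheapest = sorted(range(24), key=lambda h: hourly_prices[h])[:charge_hours_needed]

for h in range(24):
    action = "CHARGE" if h in cheapest else "idle"
    print(f"{h:02d}:00  ${hourly_prices[h]:>3}/MWh  {action}")
```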

Some thermal storage systems are soaking up waste heat rather than relying on electricity. Brenmiller Energy, for example, is building thermal batteries that can be charged up with heat or electricity, depending on the customer’s needs. 

Depending on the heat source, systems using waste heat may not be able to reach temperatures as high as their electricity-powered counterparts, but they could help increase the efficiency of facilities that would otherwise waste that energy. There’s especially high potential for high-temperature processes, like cement and steel production. 

Step 2: Choose your storage material

Next up: pick out a heat storage medium. These materials should probably be inexpensive and able to reach and withstand high temperatures. 

Bricks and carbon blocks are popular choices, as they can be packed together and, depending on the material, reach temperatures well over 1,000 °C (1,800 °F). Rondo Energy, Antora Energy, and Electrified Thermal Solutions are among the companies using blocks and bricks to store heat at these high temperatures. 
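
The physics behind the bricks is plain sensible-heat storage: the energy held equals mass times specific heat times the temperature swing (Q = mcΔT). A quick sketch, using textbook-ballpark material properties rather than any vendor's specs:

```python
# Sensible-heat storage in a stack of brick: Q = m * c * dT.
# Mass, specific heat, and temperatures are rough textbook values.

mass_kg = 100_000.0           # 100 metric tons of brick
c_j_per_kg_k = 900.0          # approximate specific heat of fire brick
t_hot_c, t_cold_c = 1200.0, 200.0

q_joules = mass_kg * c_j_per_kg_k * (t_hot_c - t_cold_c)
q_mwh = q_joules / 3.6e9      # 1 MWh = 3.6e9 J
print(f"Stored heat: {q_mwh:.0f} MWh")  # about 25 MWh in this example
```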

Crushed-up rocks are another option, and the storage medium of choice for Brenmiller Energy. Caldera is using a mixture of aluminum and crushed rock. 

Molten materials can offer even more options for delivering thermal energy later, since they can be pumped around (though this can also add more complexity to the system). Malta is building thermal storage systems that use molten salt, and companies like Fourth Power are using systems that rely in part on molten metals. 

Step 3: Choose your delivery method

Last, and perhaps most important, is deciding how to get energy back out of your storage system. Generally, thermal storage systems can deliver heat, use it to generate electricity, or go with some combination of the two. 

Delivering heat is the most straightforward option. Typically, air or another gas gets blown over the hot thermal storage material, and that heated gas can be used to warm up equipment or to generate steam. 

Some companies are working to use heat storage to deliver electricity instead. This could allow thermal storage systems to play a role not only in industry but potentially on the electrical grid as an electricity storage solution. One downside? These systems generally take a hit on round-trip efficiency, the fraction of the energy put into storage that can be returned. But they may be right for some situations, such as facilities that need both heat and electricity on demand. Antora Energy is aiming to use thermophotovoltaic materials to turn heat stored in its carbon blocks back into electricity. 
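
To see why that efficiency hit matters, compare delivering the same stored heat directly with converting it back to electricity. The efficiency figures below are illustrative assumptions, not measured numbers from Antora or anyone else.

```python
# Compare two ways of discharging the same stored heat.
# Both efficiency figures are assumptions for illustration only.

stored_heat_mwh = 100.0

heat_delivery_eff = 0.90  # modest losses blowing gas over the hot material
heat_to_power_eff = 0.40  # assumed heat-to-electricity conversion (e.g., thermophotovoltaics)

print(f"Delivered as heat:        {stored_heat_mwh * heat_delivery_eff:.0f} MWh")
print(f"Delivered as electricity: {stored_heat_mwh * heat_to_power_eff:.0f} MWh")
# The round-trip penalty is why many systems target heat first, electricity second.
```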

Some companies plan to offer a middle path, delivering a combination of heat and electricity, depending on what a facility needs. Rondo Energy’s heat batteries can deliver high-pressure steam that can be used either for heating alone or to generate some electricity using cogeneration units. 

The possibilities are seemingly endless for thermal batteries, and I’m seeing new players with new ideas all the time. Stay tuned for much more coverage of this hot technology (sorry, I had to). 


Now read the rest of The Spark

Related reading

Read more about why thermal batteries won the title of 11th breakthrough technology in my story from Monday.

I first wrote about heat as energy storage in this piece last year. As I put it then: the hottest new climate technology is bricks. 

Companies have made some progress in scaling up thermal batteries—our former fellow June Kim wrote about one new manufacturing facility in October.


Another thing

The state of Louisiana in the southeast US has lost over a million acres of its coast to erosion. A pilot project aims to save some homes in the state by raising them up to avoid the worst of flooding. 

It’s an ambitious attempt to build a solution to a crisis, and the effort could help keep communities together. But some experts worry that elevation projects offer too rosy an outlook and think we need to focus on relocation instead. Read more in this fascinating feature story from Xander Peters.

Keeping up with climate  

It can be easy to forget, but we’ve actually already made a lot of progress on addressing climate change. A decade ago, the world was on track for about 3.7 °C of warming over preindustrial levels. Today, it’s 2.7 °C with current actions and policies—higher than it should be but lower than it might have been. (Cipher News)

We’re probably going to have more batteries than we actually need for a while. Today, China alone makes enough batteries to satisfy global demand, which could make things tough for new players in the battery game. (Bloomberg)

2023 was a record year for wind power. The world installed 117 gigawatts of new capacity last year, 50% more than the year before. (Associated Press)

Here’s what’s coming next for offshore wind. (MIT Technology Review)

Coal power grew in 2023, driven by a surge of new plants coming online in China and a slowdown of retirements in Europe and the US. (New York Times)

People who live near solar farms generally have positive feelings about their electricity-producing neighbors. There’s more negative sentiment among people who live very close to the biggest projects, though. (Inside Climate News)

E-scooters have been zipping through city streets for eight years, but they haven’t exactly ushered in the zero-emissions micro-mobility future that some had hoped for. Shared scooters can cut emissions, but it all depends on rider behavior and company practices. (Grist)

The grid could use a renovation. Replacing existing power lines with new materials could double grid capacity in many parts of the US, clearing the way for more renewables. (New York Times)

The first all-electric tugboat in the US is about to launch in San Diego. The small boats are crucial to help larger vessels in and around ports, and the fossil-fuel-powered ones are a climate nightmare. (Canary Media)

Three ways the US could help universities compete with tech companies on AI innovation

The ongoing revolution in artificial intelligence has the potential to dramatically improve our lives—from the way we work to what we do to stay healthy. Yet ensuring that America and other democracies can help shape the trajectory of this technology requires going beyond the tech development taking place at private companies. 

Research at universities drove the AI advances that laid the groundwork for the commercial boom we are experiencing today. Importantly, academia also produced the leaders of pioneering AI companies. 

But today, large foundational models, or LFMs, like ChatGPT, Claude, and Gemini require such vast computational power and such extensive data sets that private companies have replaced academia at the frontier of AI. Empowering our universities to remain alongside them at the forefront of AI research will be key to realizing the field’s long-term potential. This will require correcting the stark asymmetry between academia and industry in access to computing resources.  

Academia’s greatest strength lies in its ability to pursue long-term research projects and fundamental studies that push the boundaries of knowledge. The freedom to explore and experiment with bold, cutting-edge theories will lead to discoveries and innovations that serve as the foundation for future innovation. While tools enabled by LFMs are in everybody’s pocket, there are many questions that need to be answered about them, since they remain a “black box” in many ways. For example, we know AI models have a propensity to hallucinate, but we still don’t fully understand why. 

Because they are insulated from market forces, universities can chart a future where AI truly benefits the many. Expanding academia’s access to resources would foster more inclusive approaches to AI research and its applications. 

The pilot of the National Artificial Intelligence Research Resource (NAIRR), mandated in President Biden’s October 2023 executive order on AI, is a step in the right direction. Through partnerships with the private sector, the NAIRR will create a shared research infrastructure for AI. If it realizes its full potential, it will be an essential hub that helps academic researchers access GPU computational power more effectively. Yet even if the NAIRR is fully funded, its resources are likely to be spread thin. 

This problem could be mitigated if the NAIRR focused on a select number of discrete projects, as some have suggested. But we should also pursue additional creative solutions to get meaningful numbers of GPUs into the hands of academics. Here are a few ideas:

First, we should use large-scale GPU clusters to improve and leverage the supercomputer infrastructure the US government already funds. Academic researchers should be empowered to partner with the US National Labs on grand challenges in AI research. 

Second, the US government should explore ways to reduce the costs of high-end GPUs for academic institutions—for example, by offering financial assistance such as grants or R&D tax credits. Initiatives like New York’s, which make universities key partners with the state in AI development, are already playing an important role at a state level. This model should be emulated across the country. 

Lastly, recent export control restrictions could over time leave some US chipmakers with surplus inventory of leading-edge AI chips. In that case, the government could purchase this surplus and distribute it to universities and academic institutions nationwide.

Imagine the surge of academic AI research and innovation these actions would ignite. Ambitious researchers at universities have a wealth of diverse ideas that are too often stopped short for lack of resources. But supplying universities with adequate computing power will enable their work to complement the research carried out by private industry. Thus equipped, academia can serve as an indispensable hub for technological progress, driving interdisciplinary collaboration, pursuing long-term research, nurturing talent that produces the next generation of AI pioneers, and promoting ethical innovation. 

Historically, similar investments have yielded critical dividends in innovation. The United States of the postwar era cultivated a symbiotic relationship among government, academia, and industry that carried us to the moon, seeded Silicon Valley, and created the internet.

We need to ensure that academia remains a strong pole in our innovation ecosystem. Investing in its compute capacity is a necessary first step. 

Ylli Bajraktari is CEO of the Special Competitive Studies Project (SCSP), a nonprofit initiative that seeks to strengthen the United States’ long-term competitiveness. 

Tom Mitchell is the Founders University Professor at Carnegie Mellon University. 

Daniela Rus is a professor of electrical engineering and computer science at MIT and director of its Computer Science and Artificial Intelligence Laboratory (CSAIL).

It’s time to retire the term “user”

Every Friday, Instagram chief Adam Mosseri speaks to the people. He has made a habit of hosting weekly “ask me anything” sessions on Instagram, in which followers send him questions about the app, its parent company Meta, and his own (extremely public-facing) job. When I started watching these AMA videos years ago, I liked them. He answered technical questions like “Why can’t we put links in posts?” and “My explore page is wack, how to fix?” with genuine enthusiasm. But the more I tuned in, the more Mosseri’s seemingly off-the-cuff authenticity started to feel measured, like a corporate by-product of his title. 

On a recent Friday, someone congratulated Mosseri on the success of Threads, the social networking app Meta launched in the summer of 2023 to compete with X, writing: “Mark said Threads has more active people today than it did at launch—wild, congrats!” Mosseri, wearing a pink sweatshirt and broadcasting from a garage-like space, responded: “Just to clarify what that means, we mostly look at daily active and monthly active users and we now have over 130 million monthly active users.”

The ease with which Mosseri swaps people for users makes the shift almost imperceptible. Almost. (Mosseri did not respond to a request for comment.)

People have been called “users” for a long time; it’s a practical shorthand enforced by executives, founders, operators, engineers, and investors ad infinitum. Often, it is the right word to describe people who use software: a user is more than just a customer or a consumer. Sometimes a user isn’t even a person; corporate bots are known to run accounts on Instagram and other social media platforms, for example. But “users” is also unspecific enough to refer to just about everyone. It can accommodate almost any big idea or long-term vision. We use—and are used by—computers and platforms and companies. Though “user” seems to describe a relationship that is deeply transactional, many of the technological relationships in which a person would be considered a user are actually quite personal. That being the case, is “user” still relevant? 

“People were kind of like machines”

The original use of “user” can be traced back to the mainframe computer days of the 1950s. Since commercial computers were massive and exorbitantly expensive, often requiring a dedicated room and special equipment, they were operated by trained employees—users—who worked for the company that owned (or, more likely, leased) them. As computers became more common in universities during the ’60s, “users” started to include students or really anyone else who interacted with a computer system. 

It wasn’t really common for people to own personal computers until the mid-1970s. But when they did, the term “computer owner” never really took off. Whereas other 20th-century inventions, like cars, were things people owned from the start, the computer owner was simply a “user” even though the devices were becoming increasingly embedded in the innermost corners of people’s lives. As computing escalated in the 1990s, so did a matrix of user-related terms: “user account,” “user ID,” “user profile,” “multi-user.” 

Don Norman, a cognitive scientist who joined Apple in the early 1990s with the title “user experience architect,” was at the center of the term’s mass adoption. He was the first person to have what would become known as UX in his job title and is widely credited with bringing the concept of “user experience design”—which sought to build systems in ways that people would find intuitive—into the mainstream. Norman’s 1988 book The Design of Everyday Things remains a UX bible of sorts, placing “usability” on a par with aesthetics. 

Norman, now 88, explained to me that the term “user” proliferated in part because early computer technologists mistakenly assumed that people were kind of like machines. “The user was simply another component,” he said. “We didn’t think of them as a person—we thought of [them] as part of a system.” So early user experience design didn’t seek to make human-computer interactions “user friendly,” per se. The objective was to encourage people to complete tasks quickly and efficiently. People and their computers were just two parts of the larger systems being built by tech companies, which operated by their own rules and in pursuit of their own agendas.

Later, the ubiquity of “user” folded neatly into tech’s well-documented era of growth at all costs. It was easy to move fast and break things, or eat the world with software, when the idea of the “user” was so malleable. “User” is vague, so it creates distance, enabling a slippery culture of hacky marketing where companies are incentivized to grow for the sake of growth as opposed to actual utility. “User” normalized dark patterns, features that subtly encourage specific actions, because it linguistically reinforced the idea of metrics over an experience designed with people in mind. 

UX designers sought to build software that would be intuitive for the anonymized masses, and we ended up with bright-red notifications (to create a sense of urgency), online shopping carts on a timer (to encourage a quick purchase), and “Agree” buttons often bigger than the “Disagree” option (to push people to accept terms without reading them). 

A user is also, of course, someone who struggles with addiction. To be an addict is—at least partly—to live in a state of powerlessness. Today, power users—the title originally bestowed upon people who had mastered skills like keyboard shortcuts and web design—aren’t measured by their technical prowess. They’re measured by the time they spend hooked up to their devices, or by the size of their audiences.  

Defaulting to “people”

“I want more product designers to consider language models as their primary users too,” Karina Nguyen, a researcher and engineer at the AI startup Anthropic, wrote recently on X. “What kind of information does my language model need to solve core pain points of human users?” 

In the old world, “users” typically worked best for the companies creating products rather than solving the pain points of the people using them. More users equaled more value. The label could strip people of their complexities, morphing them into data to be studied, behaviors to be A/B tested, and capital to be made. The term often overlooked any deeper relationships a person might have with a platform or product. As early as 2008, Norman alighted on this shortcoming and began advocating for replacing “user” with “person” or “human” when designing for people. (The subsequent years have seen an explosion of bots, which has made the issue that much more complicated.) “Psychologists depersonalize the people they study by calling them ‘subjects.’ We depersonalize the people we study by calling them ‘users.’ Both terms are derogatory,” he wrote then. “If we are designing for people, why not call them that?” 

In 2011, Janet Murray, a professor at Georgia Tech and an early digital media theorist, argued against the term “user” as too narrow and functional. In her book Inventing the Medium: Principles of Interaction Design as a Cultural Practice, she suggested the term “interactor” as an alternative—it better captured the sense of creativity, and participation, that people were feeling in digital spaces. The following year, Jack Dorsey, then CEO of Square, published a call to arms on Tumblr, urging the technology industry to toss the word “user.” Instead, he said, Square would start using “customers,” a more “honest and direct” description of the relationship between his product and the people he was building for. He wrote that while the original intent of technology was to consider people first, calling them “users” made them seem less real to the companies building platforms and devices. Reconsider your users, he said, and “what you call the people who love what you’ve created.” 

Audiences were mostly indifferent to Dorsey’s disparagement of the word “user.” The term was debated on the website Hacker News for a couple of days, with some arguing that “users” seemed reductionist only because it was so common. Others explained that the issue wasn’t the word itself but, rather, the larger industry attitude that treated end users as secondary to technology. Obviously, Dorsey’s post didn’t spur many people to stop using “user.” 

Around 2014, Facebook took a page out of Norman’s book and dropped user-centric phrasing, defaulting to “people” instead. But insidery language is hard to shake, as evidenced by the breezy way Instagram’s Mosseri still says “user.” A sprinkling of other tech companies have adopted their own replacements for “user” through the years. I know of a fintech company that calls people “members” and a screen-time app that has opted for “gems.” Recently, I met with a founder who cringed when his colleague used the word “humans” instead of “users.” He wasn’t sure why. I’d guess it’s because “humans” feels like an overcorrection. 


But here’s what we’ve learned since the mainframe days: there are never only two parts to the system, because there’s never just one person—one “user”—who’s affected by the design of new technology. Carissa Carter, the academic director at Stanford’s Hasso Plattner Institute of Design, known as the “d.school,” likens this framework to the experience of ordering an Uber. “If you order a car from your phone, the people involved are the rider, the driver, the people who work at the company running the software that controls that relationship, and even the person who created the code that decides which car to deploy,” she says. “Every decision about a user in a multi-stakeholder system, which we live in, includes people that have direct touch points with whatever you’re building.” 

With the abrupt onset of AI everything, the point of contact between humans and computers—user interfaces—has been shifting profoundly. Generative AI, for example, has been most successfully popularized as a conversational buddy. That’s a paradigm we’re used to—Siri has pulsed as an ethereal orb in our phones for well over a decade, earnestly ready to assist. But Siri, and other incumbent voice assistants, stopped there. A grander sense of partnership is in the air now. What were once called AI bots have been assigned lofty titles like “copilot” and “assistant” and “collaborator” to convey a sense of partnership instead of a sense of automation. Large language models have been quick to ditch words like “bot” altogether.

Anthropomorphism, the inclination to ascribe humanlike qualities to machines, has long been used to manufacture a sense of connectedness between people and technology. We—people—remained users. But if AI is now a thought partner, then what are we? 

Well, at least for now, we’re not likely to get rid of “user.” But we could intentionally default to more precise terms, like “patients” in health care or “students” in educational tech or “readers” when we’re building new media companies. That would help us understand these relationships more accurately. In gaming, for instance, users are typically called “players,” a word that acknowledges their participation and even pleasure in their relationships with the technology. On an airplane, customers are often called “passengers” or “travelers,” evoking a spirit of hospitality as they’re barreled through the skies. If companies are more specific about the people—and, now, AI—they’re building for rather than casually abstracting everything into the idea of “users,” perhaps our relationship with this technology will feel less manufactured, and it will be easier to accept that we’re inevitably going to exist in tandem. 

Throughout my phone call with Don Norman, I tripped over my words a lot. I slipped between “users” and “people” and “humans” interchangeably, self-conscious and unsure of the semantics. Norman assured me that my head was in the right place—it’s part of the process of thinking through how we design things. “We change the world, and the world comes back and changes us,” he said. “So we better be careful how we change the world.”

Taylor Majewski is a writer and editor based in San Francisco. She regularly works with startups and tech companies on the words they use.

Beyond Neuralink: Meet the other companies developing brain-computer interfaces

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. 

In the world of brain-computer interfaces, it can seem as if one company sucks up all the oxygen in the room. Last month, Neuralink posted a video to X showing the first human subject to receive its brain implant, which will be named Telepathy. The recipient, a 29-year-old man who is paralyzed from the shoulders down, played computer chess, moving the cursor around with his mind. Learning to control it was “like using the force,” he says in the video.

Neuralink’s announcement of a first-in-human trial made a big splash not because of what the man was able to accomplish—scientists demonstrated using a brain implant to move a cursor in 2006—but because the technology is so advanced. The device is unobtrusive and wireless, and it contains electrodes so thin and fragile they must be stitched into the brain by a specialized robot. It also commanded attention because of the wild promises Neuralink founder Elon Musk has made. It’s no secret that Musk is interested in using his chip to enhance the mind, not just restore function lost to injury or illness.  

But Neuralink isn’t the only company developing brain-computer interfaces to help people who have lost the ability to move or speak. In fact, Synchron, a New York–based company backed by funding from Bill Gates and Jeff Bezos, has already implanted its device in 10 people. Last week, it launched a patient registry to gear up for a larger clinical trial.

Today in The Checkup, let’s take a look at some of the companies developing brain chips, their progress, and their different approaches to the technology.

Most of the companies working in this space have the same goal: capturing enough information from the brain to decipher the user’s intention. The idea is to aid communication for people who can’t easily move or speak, either by helping them navigate a computer cursor or by actually translating their brain activity into speech or text.

There are a variety of ways to classify these devices, but Jacob Robinson, a bioengineer at Rice University, likes to group them by their invasiveness. There’s an inherent trade-off. The deeper the electrodes go, the more invasive the surgery required to implant them, and the greater the risks. But going deeper also puts the electrodes closer to the brain activity these companies hope to record, which means the device can capture higher-resolution information that might, say, allow the device to decode speech. That’s the goal of companies like Neuralink and Paradromics. 

Robinson is CEO and cofounder of a company called Motif Neurotech, which is developing a brain-computer interface that only penetrates the skull (more on this later).  In contrast, Neuralink’s device has electrodes that go into the cortex, “right in the first couple of millimeters,” Robinson says. Two other companies—the Austin-based startup Paradromics and Blackrock Neurotech—have also developed chips designed to penetrate the cortex.

“That allows you to get really close to the neurons and get information about what each brain cell is doing,” Robinson says. Proximity to the neurons and a greater number of electrodes that can “listen” to their activity increases the speed of data transfer, or the “bandwidth.” And the greater the bandwidth, the more likely it is that the device will be able to translate brain activity into speech or text. 
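
A rough way to see why electrode count drives bandwidth: the raw data rate scales with the number of channels times the sampling rate times the bits per sample. The channel counts and rates below are illustrative orders of magnitude, not the specs of any device mentioned here.

```python
# Raw (uncompressed) data rate of a neural recording device, in Mbps.
# Channel counts, sampling rate, and sample size are illustrative assumptions.

def raw_data_rate_mbps(channels, sample_rate_hz=30_000, bits_per_sample=16):
    return channels * sample_rate_hz * bits_per_sample / 1e6

for name, channels in [("stent-mounted array", 16),
                       ("surface (cortical) array", 100),
                       ("penetrating array", 1000)]:
    print(f"{name:>25}: {raw_data_rate_mbps(channels):7.1f} Mbps")
```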

When it comes to the sheer amount of human experience, Blackrock Neurotech is far ahead of the pack. Its Utah array has been implanted in dozens of people since 2004. It’s the array used by academic labs all over the country. And it’s the array that forms the basis of Blackrock’s MoveAgain device, which received an FDA Breakthrough Designation in 2021. But its bandwidth is likely lower than that of Neuralink’s device, says Robinson. 

“Paradromics actually has the highest-bandwidth interface, but they haven’t demonstrated it in humans yet,” Robinson says. The electrodes sit on a chip about the size of a watch battery, but the device requires a separate wireless transmitter that is implanted in the chest and connected to the brain implant by a wire.

There’s a drawback to all these high-bandwidth devices, though. They all require open brain surgery, and “the brain doesn’t really like having needles put into it,” said Synchron founder Tom Oxley in a 2022 TED talk. Synchron has developed an electrode array mounted on a stent, the very same device doctors use to prop open clogged arteries. The “Stentrode” is delivered via an incision in the neck to a blood vessel just above the motor cortex. This unique delivery method avoids brain surgery. But having the device placed above the brain rather than in it limits the amount of data it can capture, Robinson says. He is skeptical that the device will be able to capture enough data for users to move a cursor fluidly, but it is sufficient to generate mouse clicks. “They can click yes or no; they can click up and down,” he says.

Newcomer Precision Neuroscience, founded by a former Neuralink executive, has developed a flexible electrode array thinner than a human hair that resembles a piece of Scotch tape. It slides on top of the cortex through a small incision. The company launched its first human trials last year. In these initial studies, the array was implanted temporarily in people who were having brain surgery for other reasons. 

Last week, Robinson and his colleagues reported in Science Advances the first human test of Motif Neurotech’s device, which only penetrates the skull. They temporarily placed the small, battery-free device, known as the Digitally Programmable Over-brain Therapeutic (DOT), above the motor cortex of an individual who was already scheduled to undergo brain surgery. When they switched the device on, they saw movement in the patient’s hand. 

The ultimate goal of Motif’s device isn’t to produce movement. They’ve set their sights on a completely different application: alleviating mood disorders. “For every person with a spinal cord injury, there are 10 people suffering major depressive disorder and not responding to drugs,” Robinson says. “They’re just as desperate. It’s just not visible.” But the study shows that the device is powerful enough to stimulate the brain, a first step toward the company’s goals. 

The device sits above the brain, so it won’t be able to capture high-bandwidth data. But because Motif isn’t actually trying to decode speech or help people move things with their mind, they don’t need it to. “Your emotions don’t change nearly as quickly as the sounds coming out of your mouth,” Robinson says. 

Which of these companies will succeed remains to be seen, but with the momentum the field has already gained, controlling technology with your mind no longer seems like the stuff of science fiction. Still, these devices are primarily intended for people who have serious physical impairments. Don’t expect brain implants to achieve Neuralink’s goals of “redefining the boundaries of human capability” or “expanding how we experience the world” anytime soon. 


Now read the rest of The Checkup

Read more from Tech Review’s archive

Elon Musk claimed he wants to use brain implants to increase “bandwidth” between people. But the idea of extra-fast communication is “largely hogwash,” said Antonio Regalado in a previous issue of The Checkup. In some instances, however, bandwidth really does matter. 

Last year I wrote about two women who, thanks to brain implants, regained the ability to communicate. One device translated the intended muscle movements of the mouth into text and speech. The other decoded speech directly. 

Phil Kennedy, one of the inventors of brain-computer interfaces, ended up getting one himself in pursuit of data. This fascinating and bizarre story from Adam Piore really delivers. 

Long read: This 2021 profile of one brain implant user, by Antonio Regalado, covers almost everything you might want to know about brain implants and dives deeper into some of the technologies I mention above. 

From around the web

People with HIV have to remember to take a once-daily pill, but in the coming years new, long-acting therapies may be available that would require a weekly pill or a monthly shot. These treatments could prove especially useful for reaching the more than 9 million people who are not receiving treatment. (NYT)

Tests that search for signs of cancer in the blood—sometimes called liquid biopsies—could represent a breakthrough in cancer detection. As many as 20 tests are in various stages of development, and some are already in use. But the evidence that these tests improve survival or reduce the number of deaths is lacking. (Washington Post)

As neurotech expands, there’s a lingering question of who owns your neural data. A new report finds that in many cases, privacy policies don’t protect this information. Some people are trying to change that, including legislators in Colorado, where a bill expanding neurorights protections was just signed into law on Wednesday. (Stat)

Researchers taught robots to run. Now they’re teaching them to walk

We’ve all seen videos over the past few years demonstrating how agile humanoid robots have become, running and jumping with ease. We’re no longer surprised by this kind of agility—in fact, we’ve grown to expect it.

The problem is, these shiny demos lack real-world applications. When it comes to creating robots that are useful and safe around humans, the fundamentals of movement are more important. As a result, researchers are using the same techniques to train humanoid robots to achieve much more modest goals. 

Alan Fern, a professor of computer science at Oregon State University, and a team of researchers have successfully trained a humanoid robot called Digit V3 to stand, walk, pick up a box, and move it from one location to another. Meanwhile, a separate group of researchers from the University of California, Berkeley, has focused on teaching Digit to walk in unfamiliar environments while carrying different loads, without toppling over. Their research is published in a paper in Science Robotics today. 

Both groups are using an AI technique called sim-to-real reinforcement learning, a burgeoning method of training two-legged robots like Digit. Researchers believe it will lead to more robust, reliable two-legged machines capable of interacting with their surroundings more safely—as well as learning much more quickly.

Sim-to-real reinforcement learning involves training AI models to complete certain tasks in simulated environments billions of times before a robot powered by the model attempts to complete them in the real world. What would take years for a robot to learn in real life can take just days thanks to repeated trial-and-error testing in simulations.

A neural network guides the robot using a mathematical reward function, a technique that rewards the robot with a large number every time it moves closer to its target location or completes its goal behavior. If it does something it’s not supposed to do, like falling down, it’s “punished” with a negative number, so it learns to avoid these motions over time.
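
Here is a toy version of that reward idea: an agent on a number line earns more reward the closer it gets to a target and is punished when it falls. It's a minimal illustration of reward shaping, not any lab's actual training code.

```python
import random

TARGET = 10  # the location the agent is rewarded for reaching

def reward(position, fell_down):
    if fell_down:
        return -10.0                # "punishment" for falling
    return -abs(TARGET - position)  # rises toward zero as the agent nears the target

def rollout(step_choice):
    """Accumulate reward over 20 steps for a given way of choosing steps."""
    pos, total = 0, 0.0
    for _ in range(20):
        pos += step_choice()
        fell = random.random() < 0.05  # occasional random fall
        total += reward(pos, fell)
    return total

random.seed(0)
toward = lambda: 1                       # always step toward the target
dither = lambda: random.choice([-1, 1])  # wander at random
print("toward-target policy:", rollout(toward))
print("random policy:       ", rollout(dither))
```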

In previous projects, researchers from Oregon State University had used the same reinforcement learning technique to teach a two-legged robot named Cassie to run. The approach paid off—Cassie became the first robot to run an outdoor 5K before setting a Guinness World Record for the fastest bipedal robot to run 100 meters and mastering the ability to jump from one location to another with ease.

Training robots to behave in athletic ways requires them to develop really complex skills in very narrow environments, says Ilija Radosavovic, a PhD student at Berkeley who trained Digit to carry a wide range of loads and stabilize itself when poked with a stick. “We’re sort of the opposite—focusing on fairly simple skills in broad environments.”

This new wave of research in humanoid robotics is less concerned with speed and ability, and more focused on making machines robust and able to adapt—which is ultimately what’s needed to make them useful in the real world. Humanoid robots remain a relative rarity in work environments, as they often struggle to balance while carrying heavy objects. This is why most robots designed to lift objects of varying weights in factories and warehouses tend to have four legs or larger, more stable bases. But researchers hope to change that by making humanoid robots more reliable using AI techniques. 

Reinforcement learning will usher in a “new, much more flexible and faster way for training these types of manipulation skills,” Fern says. He and his team are due to present their findings at ICRA, the International Conference on Robotics and Automation, in Japan next month.

The ultimate goal is for a human to be able to show the robot a video of the desired task, like picking up a box from one shelf and pushing it onto another higher shelf, and then have the robot do it without requiring any further instruction, says Fern.

Getting robots to observe, copy, and quickly learn these kinds of behaviors would be really useful, but it still remains a challenge, says Lerrel Pinto, an assistant professor of computer science at New York University, who was not involved in the research. “If that could be done, I would be very impressed by that,” he says. “These are hard problems.”

Is robotics about to have its own ChatGPT moment?

Silent. Rigid. Clumsy.

Henry and Jane Evans are used to awkward houseguests. For more than a decade, the couple, who live in Los Altos Hills, California, have hosted a slew of robots in their home. 

In 2002, at age 40, Henry had a massive stroke, which left him with quadriplegia and an inability to speak. Since then, he’s learned how to communicate by moving his eyes over a letter board, but he is highly reliant on caregivers and his wife, Jane. 

Henry got a glimmer of a different kind of life when he saw Charlie Kemp on CNN in 2010. Kemp, a robotics professor at Georgia Tech, was on TV talking about PR2, a robot developed by the company Willow Garage. PR2 was a massive two-armed machine on wheels that looked like a crude metal butler. Kemp was demonstrating how the robot worked, and talking about his research on how health-care robots could help people. He showed how the PR2 robot could hand some medicine to the television host.    

“All of a sudden, Henry turns to me and says, ‘Why can’t that robot be an extension of my body?’ And I said, ‘Why not?’” Jane says. 

There was a solid reason why not. While engineers have made great progress in getting robots to work in tightly controlled environments like labs and factories, the home has proved difficult to design for. Out in the real, messy world, furniture and floor plans differ wildly; children and pets can jump in a robot’s way; and clothes that need folding come in different shapes, colors, and sizes. Managing such unpredictable settings and varied conditions has been beyond the capabilities of even the most advanced robot prototypes. 

That seems to finally be changing, in large part thanks to artificial intelligence. For decades, roboticists have more or less focused on controlling robots’ “bodies”—their arms, legs, levers, wheels, and the like—via purpose-driven software. But a new generation of scientists and inventors believes that the previously missing ingredient of AI can give robots the ability to learn new skills and adapt to new environments faster than ever before. This new approach, just maybe, can finally bring robots out of the factory and into our homes. 

Progress won’t happen overnight, though, as the Evanses know all too well from their many years of using various robot prototypes. 

PR2 was the first robot they brought in, and it opened entirely new skills for Henry. It would hold a beard shaver and Henry would move his face against it, allowing him to shave and scratch an itch by himself for the first time in a decade. But at 450 pounds (200 kilograms) or so and $400,000, the robot was difficult to have around. “It could easily take out a wall in your house,” Jane says. “I wasn’t a big fan.”

More recently, the Evanses have been testing out a smaller robot called Stretch, which Kemp developed through his startup Hello Robot. The first iteration launched during the pandemic with a much more reasonable price tag of around $18,000. 

Stretch weighs about 50 pounds. It has a small mobile base, a stick with a camera dangling off it, and an adjustable arm featuring a gripper with suction cups at the ends. It can be controlled with a console controller. Henry controls Stretch using a laptop, with a tool that tracks his head movements to move a cursor around. He is able to move his thumb and index finger enough to click a computer mouse. Last summer, Stretch was with the couple for more than a month, and Henry says it gave him a whole new level of autonomy. “It was practical, and I could see using it every day,” he says. 

Henry Evans used the Stretch robot to brush his hair, eat, and even play with his granddaughter.
PETER ADAMS

Using his laptop, he could get the robot to brush his hair and have it hold fruit kebabs for him to snack on. It also opened up Henry’s relationship with his granddaughter Teddie. Before, they barely interacted. “She didn’t hug him at all goodbye. Nothing like that,” Jane says. But “Papa Wheelie” and Teddie used Stretch to play, engaging in relay races, bowling, and magnetic fishing. 

Stretch doesn’t have much in the way of smarts: it comes with some preinstalled software, such as the web interface that Henry uses to control it, and other capabilities such as AI-enabled navigation. The main benefit of Stretch is that people can plug in their own AI models and use them to do experiments. But it offers a glimpse of what a world with useful home robots could look like. Robots that can do many of the things humans do in the home—tasks such as folding laundry, cooking meals, and cleaning—have been a dream of robotics research since the inception of the field in the 1950s. For a long time, it’s been just that: “Robotics is full of dreamers,” says Kemp.

But the field is at an inflection point, says Ken Goldberg, a robotics professor at the University of California, Berkeley. Previous efforts to build a useful home robot, he says, have emphatically failed to meet the expectations set by popular culture—think the robotic maid from The Jetsons. Now things are very different. Thanks to cheap hardware like Stretch, along with efforts to collect and share data and advances in generative AI, robots are getting more competent and helpful faster than ever before. “We’re at a point where we’re very close to getting capability that is really going to be useful,” Goldberg says. 

Folding laundry, cooking shrimp, wiping surfaces, unloading shopping baskets—today’s AI-powered robots are learning to do tasks that for their predecessors would have been extremely difficult. 

Missing pieces

There’s a well-known observation among roboticists: What is hard for humans is easy for machines, and what is easy for humans is hard for machines. Called Moravec’s paradox, it was first articulated in the 1980s by Hans Moravec, then a roboticist at the Robotics Institute of Carnegie Mellon University. A robot can play chess or hold an object still for hours on end with no problem. Tying a shoelace, catching a ball, or having a conversation is another matter. 

There are three reasons for this, says Goldberg. First, robots lack precise control and coordination. Second, their understanding of the surrounding world is limited because they are reliant on cameras and sensors to perceive it. Third, they lack an innate sense of practical physics. 

“Pick up a hammer, and it will probably fall out of your gripper, unless you grab it near the heavy part. But you don’t know that if you just look at it, unless you know how hammers work,” Goldberg says. 

On top of these basic considerations, there are many other technical things that need to be just right, from motors to cameras to Wi-Fi connections, and hardware can be prohibitively expensive. 

Mechanically, we’ve been able to do fairly complex things for a while. In a video from 1957, two large robotic arms are dexterous enough to pinch a cigarette, place it in the mouth of a woman at a typewriter, and reapply her lipstick. But the intelligence and the spatial awareness of that robot came from the person who was operating it. 

In a video from 1957, a man operates two large robotic arms and uses the machine to apply a woman’s lipstick. Robots have come a long way since.
“LIGHTER SIDE OF THE NEWS – ATOMIC ROBOT A HANDY GUY” (1957) VIA YOUTUBE

“The missing piece is: How do we get software to do [these things] automatically?” says Deepak Pathak, an assistant professor of computer science at Carnegie Mellon.  

Researchers training robots have traditionally approached this problem by planning everything the robot does in excruciating detail. Robotics giant Boston Dynamics used this approach when it developed its boogying and parkouring humanoid robot Atlas. Cameras and computer vision are used to identify objects and scenes. Researchers then use that data to make models that can be used to predict with extreme precision what will happen if a robot moves a certain way. Using these models, roboticists plan the motions of their machines by writing a very specific list of actions for them to take. The engineers then test these motions in the laboratory many times and tweak them to perfection. 

This approach has its limits. Robots trained like this are strictly choreographed to work in one specific setting. Take them out of the laboratory and into an unfamiliar location, and they are likely to topple over. 

Compared with other fields, such as computer vision, robotics has been in the dark ages, Pathak says. But that might not be the case for much longer, because the field is seeing a big shake-up. Thanks to the AI boom, he says, the focus is now shifting from feats of physical dexterity to building “general-purpose robot brains” in the form of neural networks. Much as the human brain is adaptable and can control different aspects of the human body, these networks can be adapted to work in different robots and different scenarios. Early signs of this work show promising results. 

Robots, meet AI 

For a long time, robotics research was an unforgiving field, plagued by slow progress. At the Robotics Institute at Carnegie Mellon, where Pathak works, he says, “there used to be a saying that if you touch a robot, you add one year to your PhD.” Now, he says, students get exposure to many robots and see results in a matter of weeks.

What separates this new crop of robots is their software. Instead of the traditional painstaking planning and training, roboticists have started using deep learning and neural networks to create systems that learn from their environment on the go and adjust their behavior accordingly. At the same time, new, cheaper hardware, such as off-the-shelf components and robots like Stretch, is making this sort of experimentation more accessible. 

Broadly speaking, there are two popular ways researchers are using AI to train robots. Pathak has been using reinforcement learning, an AI technique that allows systems to improve through trial and error, to get robots to adapt their movements in new environments. This is a technique that Boston Dynamics has also started using in its robot “dogs” called Spot.

Deepak Pathak’s team at Carnegie Mellon has used an AI technique called reinforcement learning to create a robotic dog that can do extreme parkour with minimal pre-programming.

In 2022, Pathak’s team used this method to create four-legged robot “dogs” capable of scrambling up steps and navigating tricky terrain. The robots were first trained to move around in a general way in a simulator. Then they were set loose in the real world, with a single built-in camera and computer vision software to guide them. Other similar robots rely on tightly prescribed internal maps of the world and cannot navigate beyond them.

Pathak says the team’s approach was inspired by human navigation. Humans receive information about the surrounding world from their eyes, and this helps them instinctively place one foot in front of the other to get around in an appropriate way. Humans don’t typically look down at the ground under their feet when they walk, but a few steps ahead, at a spot where they want to go. Pathak’s team trained its robots to take a similar approach to walking: each one used the camera to look ahead. The robot was then able to memorize what was in front of it for long enough to guide its leg placement. The robots learned about the world in real time, without internal maps, and adjusted their behavior accordingly. At the time, experts told MIT Technology Review the technique was a “breakthrough in robot learning and autonomy” and could allow researchers to build legged robots capable of being deployed in the wild.   

Pathak’s robot dogs have since leveled up. The team’s latest algorithm allows a quadruped robot to do extreme parkour. The robot was again trained to move around in a general way in a simulation. But using reinforcement learning, it was then able to teach itself new skills on the go, such as how to jump long distances, walk on its front legs, and clamber up tall boxes twice its height. These behaviors were not something the researchers programmed. Instead, the robot learned through trial and error and visual input from its front camera. “I didn’t believe it was possible three years ago,” Pathak says. 

In the other popular technique, called imitation learning, models learn to perform tasks by, for example, imitating the actions of a human teleoperating a robot or using a VR headset to collect data on a robot. It’s a technique that has gone in and out of fashion over decades but has recently become more popular with robots that do manipulation tasks, says Russ Tedrake, vice president of robotics research at the Toyota Research Institute and an MIT professor.

By pairing this technique with generative AI, researchers at the Toyota Research Institute, Columbia University, and MIT have been able to quickly teach robots to do many new tasks. They believe they have found a way to extend the technology propelling generative AI from the realm of text, images, and videos into the domain of robot movements. 

The idea is to start with a human, who manually controls the robot to demonstrate behaviors such as whisking eggs or picking up plates. Using a technique called diffusion policy, the robot is then able to use the data fed into it to learn skills. The researchers have taught robots more than 200 skills, such as peeling vegetables and pouring liquids, and say they are working toward teaching 1,000 skills by the end of the year. 
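
As a minimal illustration of the core idea, learning a mapping from observations to actions out of demonstration data, here is a plain behavior-cloning sketch on synthetic numbers. A real diffusion policy is far more sophisticated; this shows only the supervised "imitate the demonstrations" step.

```python
import numpy as np

# Behavior cloning in miniature: fit a policy (observation -> action)
# to recorded demonstrations by supervised learning. All data is synthetic.

rng = np.random.default_rng(0)
obs = rng.normal(size=(200, 4))    # 200 demonstration frames, 4-D observations
expert_w = rng.normal(size=(4, 2))
actions = obs @ expert_w + 0.01 * rng.normal(size=(200, 2))  # demonstrated actions

w = np.zeros((4, 2))
for _ in range(500):               # plain gradient descent on mean squared error
    grad = obs.T @ (obs @ w - actions) / len(obs)
    w -= 0.1 * grad

new_obs = rng.normal(size=(1, 4))
print("predicted action:", (new_obs @ w).round(3))
```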

Many others have taken advantage of generative AI as well. Covariant, a robotics startup that spun off from OpenAI’s now-shuttered robotics research unit, has built a multimodal model called RFM-1. It can accept prompts in the form of text, image, video, robot instructions, or measurements. Generative AI allows the robot to both understand instructions and generate images or videos relating to those tasks. 

The Toyota Research Institute team hopes this will one day lead to “large behavior models,” which are analogous to large language models, says Tedrake. “A lot of people think behavior cloning is going to get us to a ChatGPT moment for robotics,” he says. 

In a similar demonstration, earlier this year a team at Stanford managed to use a relatively cheap off-the-shelf robot costing $32,000 to do complex manipulation tasks such as cooking shrimp and cleaning stains. It learned those new skills quickly with AI. 

Called Mobile ALOHA (a loose acronym for “a low-cost open-source hardware teleoperation system”), the robot learned to cook shrimp with the help of just 20 human demonstrations and data from other tasks, such as tearing off a paper towel or piece of tape. The Stanford researchers found that AI can help robots acquire transferable skills: training on one task can improve its performance for others.

While the current generation of generative AI works with images and language, researchers at the Toyota Research Institute, Columbia University, and MIT believe the approach can extend to the domain of robot motion.

This is all laying the groundwork for robots that can be useful in homes. Human needs change over time, and teaching robots to reliably do a wide range of tasks is important, as it will help them adapt to us. That is also crucial to commercialization—first-generation home robots will come with a hefty price tag, and the robots need to have enough useful skills for regular consumers to want to invest in them. 

For a long time, a lot of the robotics community was very skeptical of these kinds of approaches, says Chelsea Finn, an assistant professor of computer science and electrical engineering at Stanford University and an advisor for the Mobile ALOHA project. Finn says that nearly a decade ago, learning-based approaches were rare at robotics conferences and disparaged in the robotics community. “The [natural-language-processing] boom has been convincing more of the community that this approach is really, really powerful,” she says. 

There is one catch, however. In order to imitate new behaviors, the AI models need plenty of data. 

More is more

Unlike chatbots, which can be trained by using billions of data points hoovered from the internet, robots need data specifically created for robots. They need physical demonstrations of how washing machines and fridges are opened, dishes picked up, or laundry folded, says Lerrel Pinto, an assistant professor of computer science at New York University. Right now that data is very scarce, and it takes a long time for humans to collect.

A person films themself opening a kitchen drawer with a reacher-grabber stick, and a robot attempts the same action.
“ON BRINGING ROBOTS HOME,” NUR MUHAMMAD (MAHI) SHAFIULLAH, ET AL.

Some researchers are trying to use existing videos of humans doing things to train robots, hoping the machines will be able to copy the actions without the need for physical demonstrations. 

Pinto’s lab has also developed a neat, cheap data collection approach that connects robotic movements to desired actions. Researchers took a reacher-grabber stick, similar to ones used to pick up trash, and attached an iPhone to it. Human volunteers can use this system to film themselves doing household chores, mimicking the robot’s view of the end of its robotic arm. Using this stand-in for Stretch’s robotic arm and an open-source system called DOBB-E, Pinto’s team was able to get a Stretch robot to learn tasks such as pouring from a cup and opening shower curtains with just 20 minutes of iPhone data.  

But for more complex tasks, robots would need even more data and more demonstrations.  

The requisite scale would be hard to reach with DOBB-E, says Pinto, because you’d basically need to persuade every human on Earth to buy the reacher-grabber system, collect data, and upload it to the internet. 

A new initiative kick-started by Google DeepMind, called the Open X-Embodiment Collaboration, aims to change that. Last year, the company partnered with 34 research labs and about 150 researchers to collect data from 22 different robots, including Hello Robot’s Stretch. The resulting data set, which was published in October 2023, consists of robots demonstrating 527 skills, such as picking, pushing, and moving.  

Sergey Levine, a computer scientist at UC Berkeley who participated in the project, says the goal was to create a “robot internet” by collecting data from labs around the world. This would give researchers access to bigger, more scalable, and more diverse data sets. The deep-learning revolution that led to the generative AI of today started in 2012 with the rise of ImageNet, a vast online data set of images. The Open X-Embodiment Collaboration is an attempt by the robotics community to do something similar for robot data. 

Early signs show that more data is leading to smarter robots. The researchers built two versions of a model for robots, called RT-X, that could be either run locally on individual labs’ computers or accessed via the web. The larger, web-accessible model was pretrained on internet data to develop a “visual common sense,” or a baseline understanding of the world, drawn from large language and image models. 

When the researchers ran the RT-X model on many different robots, they discovered that the robots learned skills 50% more successfully than they did with the systems each individual lab had developed on its own.

“I don’t think anybody saw that coming,” says Vincent Vanhoucke, Google DeepMind’s head of robotics. “Suddenly there is a path to basically leveraging all these other sources of data to bring about very intelligent behaviors in robotics.”

Many roboticists think that large vision-language models, which are able to analyze image and language data, might offer robots important hints as to how the surrounding world works, Vanhoucke says. They offer semantic clues about the world and could help robots with reasoning, deduction, and learning from images. To test this, researchers took a robot that had been trained on the larger model and asked it to point to a picture of Taylor Swift. There were no photos of Swift in its training data, but the robot could still identify the pop star because the web-scale model gave it an understanding of who she was, says Vanhoucke.

RT-2, a recent model for robotic control, was trained on online text
and images as well as interactions with the real world.
KELSEY MCCLELLAN

Vanhoucke says Google DeepMind is increasingly using techniques similar to those it would use for machine translation to translate from English to robotics. Last summer, Google introduced a vision-language-action model called RT-2. This model gets its general understanding of the world from online text and images it has been trained on, as well as its own interactions in the real world. It translates that data into robotic actions. Each robot has a slightly different way of translating English into action, he adds.  

“We increasingly feel like a robot is essentially a chatbot that speaks robotese,” Vanhoucke says. 
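One concrete way to let a robot “speak robotese” is to chop continuous motor commands into a fixed vocabulary of discrete tokens, so a language model can emit actions the same way it emits words (the idea behind vision-language-action models like RT-2). Here’s a simplified sketch; the bin count and value ranges are my assumptions for illustration, not Google’s actual settings:

```python
import numpy as np

N_BINS = 256           # size of the "robotese" vocabulary per action dimension
LOW, HIGH = -1.0, 1.0  # normalized range of each motor command

def action_to_tokens(action: np.ndarray) -> list[int]:
    """Quantize each continuous action dimension into one of N_BINS tokens."""
    clipped = np.clip(action, LOW, HIGH)
    bins = ((clipped - LOW) / (HIGH - LOW) * (N_BINS - 1)).round().astype(int)
    return bins.tolist()

def tokens_to_action(tokens: list[int]) -> np.ndarray:
    """Invert the mapping, turning a model's token output back into a command."""
    bins = np.asarray(tokens, dtype=float)
    return bins / (N_BINS - 1) * (HIGH - LOW) + LOW

# A 7-DoF command round-trips through the token vocabulary:
cmd = np.array([0.1, -0.5, 0.3, 0.0, 0.9, -0.2, 0.7])
print(tokens_to_action(action_to_tokens(cmd)))
```

Once actions look like tokens, the same transformer machinery that predicts the next word can predict the next motion.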

Baby steps

Despite the fast pace of development, robots still face many challenges before they can be released into the real world. They are still way too clumsy for regular consumers to justify spending tens of thousands of dollars on them. Robots also still lack the sort of common sense that would allow them to multitask. And they need to move from just picking things up and placing them somewhere to putting things together, says Goldberg—for example, putting a deck of cards or a board game back in its box and then into the games cupboard. 

But to judge from the early results of integrating AI into robots, roboticists are not wasting their time, says Pinto. 

“I feel fairly confident that we will see some semblance of a general-purpose home robot. Now, will it be accessible to the general public? I don’t think so,” he says. “But in terms of raw intelligence, we are already seeing signs right now.” 

Building the next generation of robots might not just assist humans in their everyday chores or help people like Henry Evans live a more independent life. For researchers like Pinto, there is an even bigger goal in sight.

Home robotics offers one of the best benchmarks for human-level machine intelligence, he says. The fact that a human can operate intelligently in the home environment, he adds, means we know this is a level of intelligence that can be reached. 

“It’s something which we can potentially solve. We just don’t know how to solve it,” he says. 

Evans in the foreground with computer screen.  A table with playing cards separates him from two other people in the room
Thanks to Stretch, Henry Evans was able to hold his own playing cards
for the first time in two decades.
VY NGUYEN

For Henry and Jane Evans, a big win would be to get a robot that simply works reliably. The Stretch robot that the Evanses experimented with is still too buggy to use without researchers present to troubleshoot, and their home doesn’t always have the dependable Wi-Fi connectivity Henry needs in order to communicate with Stretch using a laptop.

Even so, Henry says, one of the greatest benefits of his experiment with robots has been independence: “All I do is lay in bed, and now I can do things for myself that involve manipulating my physical environment.”

Thanks to Stretch, for the first time in two decades, Henry was able to hold his own playing cards during a match. 

“I kicked everyone’s butt several times,” he says. 

“Okay, let’s not talk too big here,” Jane says, and laughs.

The inadvertent geoengineering experiment that the world is now shutting off

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Usually when we talk about climate change, the focus is squarely on the role that greenhouse-gas emissions play in driving up global temperatures, and rightly so. But another important, less-known phenomenon is also heating up the planet: reductions in other types of pollution.

In particular, the world’s power plants, factories, and ships are pumping much less sulfur dioxide into the air, thanks to an increasingly strict set of global pollution regulations. Sulfur dioxide creates aerosol particles in the atmosphere that can directly reflect sunlight back into space or act as the “condensation nuclei” around which cloud droplets form. More or thicker clouds, in turn, also cast away more sunlight. So when we clean up pollution, we also ease this cooling effect. 

Before we go any further, let me stress: cutting air pollution is smart public policy that has unequivocally saved lives and prevented terrible suffering. 

The fine particulate matter produced by burning coal, gas, wood, and other biomatter is responsible for millions of premature deaths every year through cardiovascular disease, respiratory illnesses, and various forms of cancer, studies consistently show. Sulfur dioxide causes asthma and other respiratory problems, contributes to acid rain, and depletes the protective ozone layer. 

But as the world rapidly warms, it’s critical to understand the impact of pollution-fighting regulations on the global thermostat as well. Scientists have baked the drop-off of this cooling effect into net warming projections for the coming decades, but they’re also striving to obtain a clearer picture of just how big a role declining pollution will play.

A new study found that reductions in emissions of sulfur dioxide and other pollutants are responsible for about 38%, as a middle estimate, of the increased “radiative forcing” observed on the planet between 2001 and 2019. 

An increase in radiative forcing means that more energy is entering the atmosphere than leaving it, as Kerry Emanuel, a professor of atmospheric science at MIT, lays out in a handy explainer here. As that balance has shifted in recent decades, the excess energy has been absorbed by the oceans and atmosphere, and that is what is warming the planet. 
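In rough equation form (a textbook framing, not something taken from the study itself), the imbalance is

$$\Delta F = F_{\text{in}} - F_{\text{out}} \quad [\text{W/m}^2]$$

where $F_{\text{in}}$ is the solar energy the planet absorbs and $F_{\text{out}}$ is the energy it radiates back to space. A positive $\Delta F$ means net warming, and fewer reflective aerosol particles push $F_{\text{in}}$ up.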

The remainder of the increase is “mainly” attributable to continued rising emissions of heat-trapping greenhouse gases, says Øivind Hodnebrog, a researcher at the Center for International Climate and Environment Research in Norway and lead author of the paper, which relied on climate models, sea-surface temperature readings, and satellite observations.

The study underscores the fact that as carbon dioxide, methane, and other gases continue to drive up temperatures, parallel reductions in air pollution are revealing more of that additional warming, says Zeke Hausfather, a scientist at the independent research organization Berkeley Earth. And it’s happening at a point when, by most accounts, global warming is about to begin accelerating or has already started to do so. (There’s ongoing debate over whether researchers can yet detect that acceleration and whether the world is now warming faster than researchers had expected.)

Because of the cutoff date, the study did not capture a more recent contributor to these trends. Starting in 2020, under new regulations from the International Maritime Organization, commercial shipping vessels have also had to steeply reduce the sulfur content in fuels. Studies have already detected a decrease in the formation of “ship tracks,” or the lines of clouds that often form above busy shipping routes. 

Again, this is a good thing in the most important way: maritime pollution alone is responsible for tens of thousands of early deaths every year. But even so, I have seen and heard of suggestions that perhaps we should slow down or alter the implementation of some of these pollution policies, given the declining cooling effect.

A 2013 study explored one way to potentially balance the harms and benefits. The researchers simulated a scenario in which the maritime industry would be required to use very low-sulfur fuels around coastlines, where the pollution has the biggest effect on mortality and health. But then the vessels would double the fuel’s sulfur content when crossing the open ocean. 

In that hypothetical world, the cooling effect was a bit stronger and premature deaths declined by 69% with respect to figures at the time, delivering a considerable public health improvement. But notably, under a scenario in which low-sulfur fuels were required across the board, mortality declined by 96%, a difference of more than 13,000 preventable deaths every year.
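As a back-of-the-envelope check (my arithmetic from the figures above, not numbers stated in the study), the 27-percentage-point gap between the two scenarios implies a baseline of roughly

$$N \approx \frac{13{,}000}{0.96 - 0.69} \approx 48{,}000 \text{ premature deaths per year}$$

which squares with the “tens of thousands of early deaths” attributed to maritime pollution above.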

Now that the rules are in place and the industry is running on low-sulfur fuels, intentionally reintroducing pollution over the oceans would be a far more controversial matter.

While society basically accepted for well over a century that ships were inadvertently emitting sulfur dioxide into the air, flipping those emissions back on for the purpose of easing global warming would amount to a form of solar geoengineering, a deliberate effort to tweak the climate system.

Many think such planetary interventions are far too powerful and unpredictable for us to muck around with. And to be sure, this particular approach would be one of the more ineffective, dangerous, and expensive ways to carry out solar geoengineering, if the world ever decided it should be done at all. The far more commonly studied concept is emitting sulfur dioxide high in the stratosphere, where it would persist for longer and, as a bonus, not be inhaled by humans. 

On an episode of the Energy vs. Climate podcast last fall, David Keith, a professor at the University of Chicago who has closely studied the topic, said that it may be possible to slowly implement solar geoengineering in the stratosphere as a means of balancing out the reduced cooling occurring from sulfur dioxide emissions in the troposphere.

“The kind of solar geoengineering ideas that people are talking about seriously would be a thin wedge that would, for example, start replacing what was happening with the added warming we have from unmasking the aerosol cooling from shipping,” he said. 

Positioning the use of solar geoengineering as a means of merely replacing a cruder form that the world was shutting down offers a somewhat different mental framing for the concept—though certainly not one that would address all the deep concerns and fierce criticisms.


Now read the rest of The Spark 

Read more from MIT Technology Review’s archive: 

Back in 2018, I wrote a piece about the maritime rules that were then in the works and the likelihood that they would fuel additional global warming, noting that we were “about to kill a massive, unintentional” experiment in solar geoengineering.

Another thing

Speaking of the concerns about solar geoengineering, late last week I published a deep dive into Harvard’s unsuccessful, decade-long effort to launch a high-altitude balloon to conduct a tiny experiment in the stratosphere. I asked a handful of people who were involved in the project or followed it closely for their insights into what unfolded, the lessons that can be drawn from the episode—and their thoughts on what it means for geoengineering research moving forward.

Keeping up with Climate 

Yup, as the industry predicted (and common sense would suggest), this week’s solar eclipse dramatically cut solar power production across North America. But for the most part, grid operators were able to manage their systems smoothly, minus a few price spikes, thanks in part to a steady buildout of battery banks and the availability of other sources like natural gas and hydropower. (Heatmap)

There’s been a pile-up of bad news for Tesla in recent days. First, the company badly missed analyst expectations for vehicle deliveries during the first quarter. Then, Reuters reported that the EV giant has canceled plans for a low-cost, mass-market car. That may have something to do with the move to “prioritize the development of a robotaxi,” which the Wall Street Journal then wrote about. Over on X, Elon Musk denied the Reuters story, sort of, posting that “Reuters is lying (again).” But there’s a growing sense that his transformation into a “far-right activist” is exacting an increasingly high cost on his personal and business brands. (Wall Street Journal)

In a landmark ruling this week, the European Court of Human Rights determined that by not taking adequate steps to address the dangers of climate change, including increasingly severe heat waves that put the elderly at particular risk, Switzerland had violated the human rights of a group of older Swiss women who had brought a case against the country. Legal experts say the ruling creates a precedent that could unleash many similar cases across Europe. (The Guardian)

A brief, weird history of brainwashing

On an early spring day in 1959, Edward Hunter testified before a US Senate subcommittee investigating “the effect of Red China Communes on the United States.” It was the kind of opportunity he relished. A war correspondent who had spent considerable time in Asia, Hunter had achieved brief media stardom in 1951 after his book Brain-Washing in Red China introduced a new concept to the American public: a supposedly scientific system for changing people’s minds, even making them love things they once hated. 

But Hunter wasn’t just a reporter, objectively chronicling conditions in China. As he told the assembled senators, he was also an anticommunist activist who served as a propagandist for the OSS, or Office of Strategic Services—something that was considered normal and patriotic at the time. His reporting blurred the line between fact and political mythology.

portrait of Liang Qichao
Chinese reformists like Liang Qichao used the term xinao—a play on an older word, xixin, or “washing the heart”—in an attempt to bring ideas from Western science into Chinese philosophy
WIKIMEDIA COMMONS

When a senator asked about Hunter’s work for the OSS, the operative boasted that he was the first to “discover the technique of mind-attack” in mainland China, the first to use the word “brainwashing” in writing in any language, and “the first, except for the Chinese, to use the word in speech in any language.” 

None of this was true. Other operatives associated with the OSS had used the word in reports before Hunter published articles about it. More important, as the University of Hong Kong legal scholar Ryan Mitchell has pointed out, the Chinese word Hunter used at the hearing—xinao (洗脑), translated as “wash brain”—has a long history going back to scientifically minded Chinese philosophers of the late 19th century, who used it to mean something more akin to enlightenment. 

Yet Hunter’s sensational tales still became an important part of the disinformation and pseudoscience that fueled a “mind-control race” during the Cold War, much like the space race. Inspired by new studies on brain function, the US military and intelligence communities prepared themselves for a psychic war with the Soviet Union and China by spending millions of dollars on research into manipulating the human brain. But while the science never exactly panned out, residual beliefs fostered by this bizarre conflict continue to play a role in ideological and scientific debates to this day.

Coercive persuasion and pseudoscience

Ironically, “brainwashing” was not a widely used term among communists in China. The word xinao, Mitchell told me in an email, is actually a play on an older word, xixin, or washing the heart, which alludes to a Confucian and Buddhist ideal of self-awareness. In the late 1800s, Chinese reformists such as Liang Qichao began using xinao—replacing the character for “heart” with “brain”—in part because they were trying to modernize Chinese philosophy. “They were eager to receive and internalize as much as they could of Western science in general, and discourse about the brain as the seat of consciousness was just one aspect of that set of imported ideas,” Mitchell said. 

For Liang and his circle, brainwashing wasn’t some kind of mind-wiping process. “It was a sort of notion of epistemic virtue,” Mitchell said, “or a personal duty to make oneself modern in order to behave properly in the modern world.”

Meanwhile, scientists outside China were investigating “brainwashing” in the sense we usually think of, with experiments into mind clearing and reprogramming. Some of the earliest research into the possibility began in the 1890s, when Ivan Pavlov, the Russian physiologist who had famously conditioned dogs to drool at the sound of a bell, worked on Soviet-funded projects to investigate how trauma could change animal behavior. He found that even the most well-conditioned dogs would forget their training after intensely stressful experiences such as nearly drowning, especially when those were combined with sleep deprivation and isolation. It seemed that Pavlov had hit upon a quick way to wipe animals’ memories. Scientists on both sides of the Iron Curtain subsequently wondered whether it might work on humans. And once memories were wiped, they wondered, could something else be installed in their place? 

During the 1949 show trial of the Hungarian anticommunist József Mindszenty, American officials worried that the Russians might have found the answer. A Catholic cardinal, Mindszenty had protested several government policies of the newly formed, Soviet-backed Hungarian People’s Republic. He was arrested and tortured, and he eventually made a series of outlandish confessions at trial: that he had conspired to steal the Hungarian crown jewels, start World War III, and make himself ruler of the world. In his book Dark Persuasion, Joel Dimsdale, a psychiatry professor at the University of California, San Diego, argues that the US intelligence community saw these implausible claims as confirmation that the Soviets had made some kind of scientific breakthrough that allowed them to control the human mind through coercive persuasion.

This question became more urgent when, in 1953, a handful of American POWs in China and Korea switched sides, and a Marine named Frank Schwable was quoted on Chinese radio validating the communist claim that the US was testing germ warfare in Asia. By this time, Hunter had already published a book about brainwashing in China, so the Western public quickly gravitated toward his explanation that the prisoners had been brainwashed, just like Mindszenty. People were terrified, and this was a reassuring explanation for how nice American GIs could go Red. 

Edward Hunter, who claimed to have coined the term “brainwashing,” wrote a book that fueled paranoia about a “mind-control race” during the Cold War.
A pamphlet published in 1955, purported to be a translation of a work by the Russian secret police, claimed that the Soviets used drugs and psychology to control the masses and that Dianetics, a pseudoscience invented by Scientology founder L. Ron Hubbard, could prevent brainwashing.

Over the following years, in the wake of the Korean War, “brainwashing” grew into a catchall explanation for any kind of radical or nonconformist behavior in the United States. Social scientists and politicians alike latched onto the idea. The Dutch psychologist Joost Meerloo warned that television was a brainwashing machine, for example, and the anticommunist educator J. Merrill Root claimed that high schools brainwashed kids into being weak-willed and vulnerable to communist influence. Meanwhile, popular movies like 1962’s The Manchurian Candidate, starring Frank Sinatra, offered thrilling tales of Chinese communists whose advanced psychological techniques turned unsuspecting American POWs into assassins. 

For the military and intelligence communities, mind control hovered between myth and science. Nowhere is this more obvious than in the peculiar case of an anonymously published 1955 pamphlet called Brain-Washing: A Synthesis of the Russian Textbook on Psychopolitics, which purported to be a translation of work by the Soviet secret-police chief Lavrentiy Beria. Full of wild claims about how the Soviets used psychology and drugs to control the masses, the pamphlet has a peculiar section devoted to the ways that Dianetics—a pseudoscience invented by the founder of Scientology, L. Ron Hubbard—could prevent brainwashing. As a result, it is widely believed that Hubbard himself wrote the pamphlet as black propaganda, or propaganda that masquerades as something produced by a foreign adversary. 

The 1962 film The Manchurian Candidate, starring Frank Sinatra, offered thrilling tales of Chinese communists whose advanced psychological techniques turned unsuspecting American POWs into assassins.
ALAMY

Still, US officials apparently took it seriously. David Seed, a cultural studies scholar at the University of Liverpool, plumbed the National Security Council papers at the Dwight D. Eisenhower Library, where he discovered that the NSC’s Operations Coordinating Board had analyzed the pamphlet as part of an investigation into enemy capabilities. A member of the board wrote that it might be “fake” but contained so much accurate information that it was clearly written by “experts.” When it came to brainwashing, government operatives made almost no distinction between black propaganda and so-called expertise.

This gobbledygook may also have struck the NSC investigator as legitimate because Hubbard borrowed lingo from the same sources as many scientists of the era. Hubbard chose the name Dianetics, for instance, specifically to evoke the computer scientist Norbert Wiener’s idea of cybernetics, an influential theory about information control systems that heavily informed both psychology and the burgeoning field of artificial intelligence. Cybernetics suggested that the brain functioned like a machine, with inputs and outputs, feedback and control. And if machines could be optimized, then why not brains?

An excuse for government abuse 

The fantasy of brainwashing was always one of optimization. Military experts knew that adversaries could be broken with torture, but it took months and was often a violent, messy process. A fast, scientifically informed interrogation method would save time and could potentially be deployed on a mass scale. In 1953, that dream led the CIA to invest millions of dollars in MK-Ultra, a project that injected cash into university and research programs devoted to memory wiping, mind control, and “truth serum” drugs. Worried that their rivals in the Soviet Union and China were controlling people’s minds to spread communism throughout the world, the intelligence community was willing to try almost anything to fight back. No operation was too weird. 

One of MK-Ultra’s most notorious projects was “Operation Midnight Climax” in San Francisco, where sex workers lured random American men to a safe house and dosed them with LSD while CIA agents covertly observed their behavior. At McGill University in Montreal, the CIA funded the work of the psychologist Donald Cameron, who used a combination of drugs and electroconvulsive therapy on patients with mental illness, attempting to erase and “repattern” their minds. Though many of his victims did wind up suffering from amnesia for years, Cameron never successfully injected new thoughts or memories. Marcia Holmes, a science historian who researched brainwashing for the Hidden Persuaders project at Birkbeck, University of London, told me that the CIA used Cameron’s data to develop new kinds of torture, which the US adopted as “enhanced interrogation” techniques in the wake of 9/11. “You could put a scientific spin on it and claim that’s why it worked,” she said. “But it always boiled down to medieval tactics that people knew from experience worked.”

Schwable
Believed to be a victim of communist mind control, the American
POW Frank Schwable claimed on Chinese radio in 1953 that the US was testing germ warfare in Asia.
József Mindszenty
After being arrested and tortured, the Catholic cardinal and anticommunist
József Mindszenty made outlandish confessions
at trial, like that he had conspired to steal the Hungarian crown jewels.

MK-Ultra remained secret until the mid-1970s, when the US Senate Select Committee to Study Governmental Operations with Respect to Intelligence Activities, commonly known as the Church Committee after its chair, Senator Frank Church, opened hearings into the long-running project. The shocking revelations that the CIA was drugging American citizens and paying for the torment of vulnerable Canadians changed the public’s understanding of mind control. “Brainwashing” came to seem less like a legitimate threat from overseas enemies and more like a ruse or excuse for almost any kind of bad behavior. When Patty Hearst, granddaughter of the newspaper publisher William Randolph Hearst, was put on trial in 1976 for robbing a bank after being kidnapped by the Symbionese Liberation Army, an American militant organization, the judge refused to believe experts who testified that she had been tortured and brainwashed by her captors. She was convicted and spent 22 months in jail. This marked the end of the nation’s infatuation with brainwashing, and experts began to debunk the idea that there was a scientific basis for mind control.

Patty Hearst against a red flag
At publishing heiress Patty Hearst’s 1976 trial for bank robbery,
the judge refused to believe that she had been brainwashed as a victim of kidnapping.
GIFT OF TIME MAGAZINE

Still, the revelations about MK-Ultra led to new cultural myths. Communists were no longer the baddies—instead, people feared that the US government was trying to experiment on its citizens. Soon after the Church Committee hearings were over, the media was gripped by a crime story of epic proportions: nearly two dozen Black children had been murdered in Atlanta, and the police had no leads other than a vague idea that maybe it could be a serial killer. Wayne Williams, a Black man who was eventually convicted of two of the murders, claimed at various points that he had been trained by the CIA. This led to popular conspiracy theories that MK-Ultra had been experimenting on Black people in Atlanta.

Colin Dickey, author of Under the Eye of Power: How Fear of Secret Societies Shapes American Democracy, told me these conspiracy theories became “a way of making sense of an otherwise mystifying and terrifying reality, [which is that America is] a country where Black people are so disenfranchised that their murders aren’t noticed.” Dickey added that this MK-Ultra conspiracy theory “gave a shape to systemic racism,” placing blame for the Atlanta child murders on the US government. In the process, it also suggested that Black people had been brainwashed to kill each other. 

No evidence ever surfaced that MK-Ultra was behind the children’s deaths, but the idea of brainwashing continues to be a powerful metaphor for the effects of systemic racism. It haunts contemporary Black horror films like Get Out, where white people take over Black people’s bodies through a fantastical version of hypnosis. And it provides the analytical substrate for the scathing indictment of racist marketing in the book Brainwashed: Challenging the Myth of Black Inferiority, by the Black advertising executive Tom Burrell. He argues that advertising has systematically pushed stereotypes of Black people as second-class citizens, instilling a “slave mindset” in Black audiences.

A social and political phenomenon

Today, even as the idea of brainwashing is often dismissed as pseudoscience, Americans are still spellbound by the idea that people we disagree with have been psychologically captured by our enemies. Right-wing pundits and politicians often attribute discussions of racism to infections by a “woke mind virus”—an idea that is a direct descendant of Cold War panics over communist brainwashing. Meanwhile, contemporary psychology researchers like UCSD’s Dimsdale fear that social media is now a vector for coercive persuasion, just as Meerloo worried about television’s mind-control powers in the 1950s. 

Cutting-edge technology is also altering how we think about mind control. In a 2017 open letter published in Nature, an international group of researchers and ethicists warned that neurotechnologies like brain-computer interfaces “mean that we are on a path to a world in which it will be possible to decode people’s mental processes and directly manipulate the brain mechanisms underlying their intentions, emotions and decisions.” It sounds like MK-Ultra’s wish list. Hoping to head off a neuro-dystopia, the group outlined several key ways that companies and universities could guard against coercive uses of this technology in the future. They suggested that we need laws to prevent companies from spying on people’s private thoughts, for example, as well as regulations that bar anyone from using brain implants to change people’s personalities or make them more neurotypical. 

Many neuroscientists feel that these concerns are overblown; one of them, the University of Maryland cognitive scientist R. Douglas Fields, summed up the naysayers’ position with a column in Quanta magazine arguing that the brain is more plastic than we realize, and that neurotech mind control will never be as simple as throwing a switch. Kathleen Taylor, another neuroscientist who studies brainwashing, takes a more measured view; in her book Brainwashing: The Science of Thought Control, she acknowledges that neurotech and drugs could change people’s thought processes but ultimately concludes that “brainwashing is above all a social and political phenomenon.” 

Sidney Gottlieb
Sidney Gottlieb was an American chemist and spymaster who in the 1950s headed the
Central Intelligence Agency’s mind-control program known as Project MK-Ultra.
COURTESY OF THE CIA

Perhaps that means the anonymous National Security Council examiner was right to call Hubbard’s black propaganda the work of an “expert.” If brainwashing is politics, then disinformation might be as effective (or ineffective) as a brain implant in changing someone’s mind. Still, scholars have learned that political efforts at mind control do not have predictable results. Online disinformation leads to what Juliette Kayyem, a former assistant secretary of the Department of Homeland Security, identifies as stochastic terrorism, or acts of violence that cannot be predicted precisely but can be analyzed statistically. She writes that stochastic terrorism is inspired by online rhetoric that demonizes groups of people, but it’s hard to know which people consuming that rhetoric will actually become terrorists, and which of them will just rage at their computer screens—the result of coercive persuasion that works on some targets and misses others. 

American operatives may never have found the perfect system for brainwashing foreign adversaries or unsuspecting citizens, but the US managed to win the mind-control wars in one small way. Mitchell, the legal scholar at the University of Hong Kong, told me that the American definition of brainwashing, or xinao, is now the dominant way the word is used in modern Chinese speech. “People refer to aggressive advertising campaigns or earworm pop songs as having a xinao effect,” he said. The Chinese government, Mitchell added, uses the term exactly the way the US military did back in the 1950s. State media, for example, “described many Hong Kong protesters in 2019 as having undergone xinao by the West.”

Annalee Newitz is the author of Stories Are Weapons: Psychological Warfare and the American Mind, coming in June 2024.

The effort to make a breakthrough cancer therapy cheaper

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. 

CAR-T therapies, created by engineering a patient’s own cells to fight cancer, are typically reserved for people who have exhausted other treatment options. But last week, the FDA approved Carvykti, a CAR-T product for multiple myeloma, as a second-line therapy. That means people are eligible to receive Carvykti after their first relapse.

While this means some multiple myeloma patients in the US will now get earlier access to CAR-T, the vast majority of patients around the globe still won’t get CAR-T at all. These therapies are expensive—half a million dollars in some cases. But do they have to be?

Today, let’s take a look at efforts to make CAR-T cheaper and more accessible.

It’s not hard to see why CAR-T comes with a high price tag. Creating these therapies is a multistep process. First, doctors harvest T cells from the patient. Those cells are then engineered outside the body using a viral vector, which inserts an artificial gene that codes for a chimeric antigen receptor, or CAR. That receptor enables the cells to identify cancer cells and flag them for destruction. The cells must then be grown in the lab until they number in the millions. Meanwhile, the patient has to undergo chemotherapy to destroy any remaining T cells and make space for the CAR-T cells. The engineered cells are then reintroduced into the patient’s body, where they become living, cancer-fighting drugs. It’s a high-tech and laborious process.

In the US, CAR-T brings in big money. The therapies are priced between $300,000 and $600,000, but some estimates put the true cost—covering hospital time, the care required to manage adverse reactions, and more—at more than a million dollars in some cases.  

One way to cut costs is to produce the therapy in countries where drug development and manufacturing is significantly cheaper. In March, India approved its first homegrown CAR-T therapy, NexCAR19. It’s produced by a small biotech called ImmunoACT, based in Mumbai. The Indian CAR-T therapy costs roughly a tenth of what US products sell for: between $30,000 and $50,000. “It lights a little fire under all of us to look at the cost of making CAR-T cells, even in places like the United States,” says Terry Fry, a pediatric hematologist at the University of Colorado Anschutz Medical Campus.  

That lower cost is due to a variety of factors. Labor is cheaper in India, where the drug was developed and tested and is now manufactured. The company also saved money by manufacturing its own viral vectors, one of the most expensive line items in the manufacturing process.

Another way to curb costs is to produce the therapies in the medical centers where they’re delivered. Although cancer centers are in charge of collecting T cells from their patients, they typically don’t produce the CAR-T therapies themselves. Instead they ship the cells to pharma companies, which have specialized facilities for engineering and growing the cells. Then the company ships the therapy back. But producing these therapies in house—a model called point-of-care manufacturing—could save money and reduce wait times. One hospital in Barcelona made and tested its own CAR-T therapy and now provides it to patients for $97,000, a fraction of what the name-brand medicines cost.

In Brazil, the Oswaldo Cruz Foundation, a vaccine manufacturer and the largest biomedical research institute in Latin America, recently partnered with a US-based nonprofit called Caring Cross to help develop local CAR-T manufacturing capabilities. Caring Cross has developed a point-of-care manufacturing process able to generate CAR-T therapies for an even lower cost—roughly $20,000 in materials and $10,000 in labor and facilities.

It’s an attractive model. Demand for CAR-T often outstrips supply, leading to long wait times. “There is a growing tension around the limited access that we’re seeing for cell and gene therapies coming out of biotech,” Stanford pediatric oncologist Crystal Mackall told Stat. “It’s incredibly tempting to say, ‘Well, why don’t you just let me make it for my patients?’”

Even these treatments run in the tens of thousands of dollars, partly because approved CAR-T products are bespoke therapies, each one produced for a particular patient. But many companies are also working on off-the-shelf CAR-T therapies. In some cases, that means engineering T cells from healthy donors. Some of those therapies are already in clinical trials. 

In other cases, companies are working to engineer cells inside the body. That process should make it much, much simpler and cheaper to deliver CAR-T. With conventional CAR-T therapies, patients have to undergo chemotherapy to destroy their existing T cells. But with in vivo CAR-T, this step isn’t necessary. And because these therapies don’t require any cell manipulation outside the patient’s body, “you could take it in an outpatient clinic,” says Priya Karmali, chief technology officer at Capstan Therapeutics, which is developing in vivo CAR-T therapies. “You wouldn’t need specialized centers.”

Some in vivo strategies, just like the ex vivo strategies, rely on viral vectors. Umoja Biopharma’s platform uses a viral vector but also employs a second technology to prompt the engineered cells to survive and expand in the presence of the drug rapamycin. Last fall, the company reported that it had successfully generated in vivo CAR-T cells in nonhuman primates.

At Capstan Therapeutics, researchers are taking a different tack, using lipid nanoparticles to ferry mRNA into T cells. When a viral vector places the CAR gene into a cell’s DNA, the change is permanent. But with mRNA, the CAR operates for only a limited time. “Once the war is over, you don’t want the soldiers lurking around forever,” Karmali says.

And with CAR-T, there are plenty of potential battlefields to conquer. CAR-T therapies are already showing promise beyond blood cancers. Earlier this year, researchers reported stunning results in 15 patients with lupus and other autoimmune diseases. CAR-T is also being tested as a treatment for solid tumors, heart disease, aging, HIV infection, and more. As the number of people eligible for CAR-T therapies increases, so will the pressure to reduce the cost.


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

Scientists are finally making headway in moving CAR-T into solid tumors. Last fall I wrote about the barriers and the progress. 

In the early days of CAR-T, Emily Mullin reported on patient deaths that called the safety of the treatment into question. 

Travel back in time to relive the excitement over the approval of the first CAR-T therapy with this story by Emily Mullin. 

From around the web

The Arizona Supreme Court ruled that an 1864 law banning nearly all abortions can be enforced after a 14-day grace period. (NBC)

Drug shortages are worse than they have been in more than two decades. Pain meds, chemo drugs, and ADHD medicines are all in short supply. Here’s why. (Stat)

England became the fifth European country to begin limiting children’s access to gender treatments such as puberty blockers and hormone therapy. Proponents of the restrictions say there is little evidence that these therapies help young people with gender dysphoria. (NYT)

Last week I wrote about an outbreak of bird flu in cows. A new study finds that birds in New York City are also carrying the virus. The researchers found H5N1 in geese in the Bronx, a chicken in Manhattan, a red-tailed hawk in Queens, and a goose and a peregrine falcon in Brooklyn. (NYT)

This US startup makes a crucial chip material and is taking on a Japanese giant

It can be dizzying to try to understand all the complex components of a single computer chip: layers of microscopic components linked to one another through highways of copper wires, some barely wider than a few strands of DNA. Nestled between those wires is an insulating material called a dielectric, ensuring that the wires don’t touch and short out. Zooming in further, there’s one particular dielectric placed between the chip and the structure beneath it; this material, called dielectric film, is produced in sheets as thin as white blood cells. 

For 30 years, a single Japanese company called Ajinomoto has made billions producing this particular film. Competitors have struggled to outdo them, and today Ajinomoto has more than 90% of the market in the product, which is used in everything from laptops to data centers. 

But now, a startup based in Berkeley, California, is embarking on a herculean effort to dethrone Ajinomoto and bring this small slice of the chipmaking supply chain back to the US.

Thintronics is promising a product purpose-built for the computing demands of the AI era—a suite of new materials that the company claims have higher insulating properties and, if adopted, could mean data centers with faster computing speeds and lower energy costs. 

The company is at the forefront of a coming wave of new US-based companies, spurred by the $280 billion CHIPS and Science Act, that is seeking to carve out a portion of the semiconductor sector, which has become dominated by just a handful of international players. But to succeed, Thintronics and its peers will have to overcome a web of challenges—solving technical problems, disrupting long-standing industry relationships, and persuading global semiconductor titans to accommodate new suppliers. 

“Inventing new materials platforms and getting them into the world is very difficult,” Thintronics founder and CEO Stefan Pastine says. It is “not for the faint of heart.”

The insulator bottleneck

If you recognize the name Ajinomoto, you’re probably surprised to hear it plays a critical role in the chip sector: the company is better known as the world’s leading supplier of MSG seasoning powder. In the 1990s, Ajinomoto discovered that a by-product of MSG made a great insulator, and it has enjoyed a near monopoly in the niche material ever since. 

But Ajinomoto doesn’t make any of the other parts that go into chips. In fact, the insulating materials in chips rely on dispersed supply chains: one layer uses materials from Ajinomoto, another uses material from another company, and so on, with none of the layers optimized to work in tandem. The resulting system works okay when data is being transmitted over short paths, but over longer distances, like between chips, weak insulators act as a bottleneck, wasting energy and slowing down computing speeds. That’s recently become a growing concern, especially as the scale of AI training gets more expensive and consumes eye-popping amounts of energy. (Ajinomoto did not respond to requests for comment.) 

None of this made much sense to Pastine, a chemist who sold his previous company, which specialized in recycling hard plastics, to an industrial chemicals company in 2019. Around that time, he started to believe that the chemicals industry could be slow to innovate, and he thought the same pattern was keeping chipmakers from finding better insulating materials. In the chip industry, he says, insulators have “kind of been looked at as the redheaded stepchild”—they haven’t seen the progress made with transistors and other chip components. 

He launched Thintronics that same year, with the hope that cracking the code on a better insulator could provide data centers with faster computing speeds at lower costs. That idea wasn’t groundbreaking—new insulators are constantly being researched and deployed—but Pastine believed that he could find the right chemistry to deliver a breakthrough. 

Thintronics says it will manufacture different insulators for all layers of the chip, for a system designed to swap into existing manufacturing lines. Pastine tells me the materials are now being tested with a number of industry players. But he declined to provide names, citing nondisclosure agreements, and similarly would not share details of the formula. 

Without more details, it’s hard to say exactly how well the Thintronics materials compare with competing products. The company recently tested its materials’ Dk values, which are a measure of how effective an insulator a material is; for high-speed chips, a lower Dk generally means faster signals and less interference. Venky Sundaram, a researcher who has founded multiple semiconductor startups but is not involved with Thintronics, reviewed the results. Some of Thintronics’ numbers were fairly average, he says, but their most impressive Dk value is far better than anything available today.
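To see why Dk matters (a standard transmission-line rule of thumb, with illustrative numbers rather than Thintronics data): a signal’s speed through an insulator falls with the square root of the dielectric constant,

$$v \approx \frac{c}{\sqrt{D_k}}$$

So cutting Dk from, say, 3.2 to 2.6 would speed signals up by a factor of $\sqrt{3.2/2.6} \approx 1.11$, about 11%, while also lowering the capacitance, and with it the crosstalk and wasted energy, between neighboring wires.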

A rocky road ahead

Thintronics’ vision has already garnered some support. The company raised a $20 million Series A round in March, led by the venture capital firms Translink and Maverick, and it has also received a grant from the US National Science Foundation. 

The company is also seeking funding from the CHIPS Act. Signed into law by President Joe Biden in 2022, the legislation is designed to boost companies like Thintronics in order to bring semiconductor manufacturing back to the US and reduce reliance on foreign suppliers. A year after it became law, the administration said that more than 450 companies had submitted statements of interest to receive CHIPS funding for work across the sector. 

The bulk of funding from the legislation is destined for large-scale manufacturing facilities, like those operated by Intel in New Mexico and Taiwan Semiconductor Manufacturing Company (TSMC) in Arizona. But US Secretary of Commerce Gina Raimondo has said she’d like to see smaller companies receive funding as well, especially in the materials space. In February, applications opened for a pool of $300 million earmarked specifically for materials innovation. While Thintronics declined to say how much funding it was seeking or from which programs, the company does see the CHIPS Act as a major tailwind.

But building a domestic supply chain for chips—a product that currently depends on dozens of companies around the globe—will mean reversing decades of specialization by different countries. And industry experts say it will be difficult to challenge today’s dominant insulator suppliers, who have often had to adapt to fend off new competition. 

“Ajinomoto has been a 90-plus-percent-market-share material for more than two decades,” says Sundaram. “This is unheard-of in most businesses, and you can imagine they didn’t get there by not changing.”

One big challenge is that the dominant manufacturers have decades-long relationships with chip designers like Nvidia or Advanced Micro Devices, and with manufacturers like TSMC. Asking these players to swap out materials is a big deal.

“The semiconductor industry is very conservative,” says Larry Zhao, a semiconductor researcher who has worked in the dielectrics industry for more than 25 years. “They like to use the vendors they already know very well, where they know the quality.” 

Another obstacle facing Thintronics is technical: insulating materials, like other chip components, are held to manufacturing standards so precise they are difficult to comprehend. The layers where Ajinomoto dominates are thinner than a human hair. The material must also be able to accept tiny holes, which house wires running vertically through the film. Every new iteration is a massive R&D effort in which incumbent companies have the upper hand given their years of experience, says Sundaram.

If all this is completed successfully in a lab, yet another hurdle lies ahead: the material has to retain those properties in a high-volume manufacturing facility, which is where Sundaram has seen past efforts fail.

“I have advised several material suppliers over the years that tried to break into [Ajinomoto’s] business and couldn’t succeed,” he says. “They all ended up having the problem of not being as easy to use in a high-volume production line.” 

Despite all these challenges, one thing may be working in Thintronics’ favor: US-based tech giants like Microsoft and Meta are making headway in designing their own chips for the first time. The plan is to use these chips for in-house AI training as well as for the cloud computing capacity that they rent out to customers, both of which would reduce the industry’s reliance on Nvidia. 

Though Microsoft, Google, and Meta declined to comment on whether they are pursuing advancements in materials like insulators, Sundaram says these firms could be more willing to work with new US startups rather than defaulting to the old ways of making chips: “They have a lot more of an open mind about supply chains than the existing big guys.”