Material Cultures looks to the past to build the future

Despite decades of green certifications, better material sourcing, and the use of more sustainable materials such as mass timber, the built environment is still responsible for a third of global emissions. According to a 2024 UN report, the building sector has fallen “significantly behind on progress” toward becoming more sustainable. Changing the way we erect and operate buildings remains key to even approaching climate goals.

“As soon as you set out and do something differently in construction, you are constantly bumping your head against the wall,” says Paloma Gormley, a director of the London-based design and research nonprofit Material Cultures. “You can either stop there or take a step back and try to find a way around it.”

Gormley has been finding a “way around it” by systematically exploring how tradition can be harnessed in new ways to repair what she has dubbed the “oil vernacular”—the contemporary building system shaped not by local, natural materials but by global commodities and plastic products made largely from fossil fuels.

Though she grew up in a household rich in art and design—she’s the daughter of the famed British sculptor Antony Gormley—she’s quick to say she’s far from a brilliant maker and more of a “bodger,” a British term for someone whose work is botched or shoddy.

Improviser or DIYer might be more accurate. One of her first bits of architecture was a makeshift home built on the back of a truck she used to tour around England one summer in her 20s. The work of her first firm, Practice Architecture, which she cofounded after graduating from the University of Cambridge in 2009, was informed by London’s DIY subcultures and informal art spaces. She says these scenes “existed in the margins and cracks between things, but in which a lot felt possible.” 

Frank’s Café, a bar and restaurant she built in 2009 on the roof of a parking garage in Peckham that hosted a sculpture park, was constructed from ratchet straps, scaffold boards, and castoffs she’d source from lumberyards and transport on the roof rack of an old Volvo. It was the first of a series of cultural and social spaces she and her partner Lettice Drake created using materials both low-budget and local. 

Material Cultures grew out of connections Gormley made while she was teaching at London Metropolitan University. In 2019, she was a teaching assistant alongside Summer Islam, a friend of George Massoud’s; Islam and Massoud, both architects, were partners in the firm Study Abroad and advocates of more socially conscious design. The trio had a shared interest in sustainability and building practices, as well as a frustration with the architecture world’s focus on improving sustainability through high-tech design. Instead of using modern methods to build more efficient commercial and residential spaces from carbon-intensive materials like steel, they thought, why not revisit first principles? Build with locally sourced, natural materials and you don’t have to worry about making up a carbon deficit in the first place.

The frame of Clearfell House was built with ash and larch, two species of wood vulnerable to climate change.
HENRY WOIDE/COURTESY OF MATERIAL CULTURES
Flat House was built with pressed panels of hemp grown in the fields surrounding the home.
OSKAR PROCTOR

While many other practitioners look to artificial intelligence and other high-tech approaches to building, Material Cultures has always focused on sustainability, finding creative ways to turn local materials into new buildings. And the three of them don’t just design and build. They team up with traditional craft experts to explore the potential of materials like reeds and clay, and techniques like thatching and weaving.

More than any one project, Gormley, Islam, and Massoud are perhaps best known for their meditation on the subject of how architects work. Published in 2022, Material Reform: Building for a Post-Carbon Future is a pocket-size book that drills into materials and methodologies to suggest a more thoughtful, ecological architecture.

“There is a huge amount of technological knowledge and intelligence in historic, traditional, vernacular ways of doing things that’s been evolved over millennia, not just the last 100 years,” Gormley says. “We’re really about trying to tap into that.”

One of Material Cultures’ early works, Flat House, a home built in 2019 in Cambridgeshire, England, with pressed panels of hemp grown in the surrounding fields, was meant as an exploration of what kind of building could be made from what a single farm could produce. Gormley was there from the planting of the seeds to the harvesting of the hemp plants to the completion of construction. 

“It was incredible understanding that buildings could be part of these natural cycles,” she says. 

Clearfell House, a timber A-frame cabin tucked into a clearing in the Dalby Forest in North Yorkshire, England, exemplifies the firm’s obsession with elevating humble materials and vernacular techniques. Every square inch of the house, which was finished in late 2024 as part of a construction class Material Cultures’ architects taught at Central Saint Martins design school in London, emerged from extensive research into British timber, the climate crisis, and how forestry is changing. That meant making the frame from local ash and larch, two species of wood specifically chosen because they were affected by climate change, and avoiding the use of factory-farmed lumber. The modular system used for the structure was made to be replicated at scale.  

“I find it rare that architecture offices have such a clear framing and mission,” says Andreas Lang, head of the Saint Martins architecture program. “Emerging practices often become client-dependent. For [Material Cultures], the client is maybe the planet.”

Material Cultures fits in with the boom in popularity for more sustainable materials, waste-minimizing construction, and panelized building using straw and hemp, says Michael Burchert, a German expert on decarbonized buildings. “People are grabbing the good stuff from the hippies at the moment,” he says. Regulation has started to follow: France recently mandated that new public buildings be constructed with 50% timber or other biological material, and Denmark’s construction sector has embarked on a project, Pathways to Biobased Construction, to promote use of nature-based products in new building.

Burchert appreciates the way the firm melds theory and practice. “We have academia, and academia is full of papers,” he says. “We need makers.” 

Over the last several years, Gormley and her cofounders have developed a portfolio of work that rethinks construction supply chains and stays grounded in social impact. The just-finished Wolves Lane Centre, a $2.4 million community center in North London run by a pair of groups that work on food and racial justice, didn’t just reflect Material Cultures’ typical focus on bio-based materials—in this case, local straw, lime, and timber. 

For Wolves Lane Centre, a $2.4 million community facility for groups working on food and racial justice, expert plasterers and specialists in straw-bale construction were brought in so their processes could be shared and learned.

LUKE O’DONOVAN/COURTESY OF MATERIAL CULTURES

It was a project of self-determination and learning, says Gormley. Expert plasterers and specialists in straw-bale construction were brought in so the processes could be shared and learned. Introducing this kind of teaching into the construction process was quite time-consuming and, Gormley says, as expensive as using contemporary techniques, if not more so. But the added value was worth it.

“The people who become the custodians of these buildings then have the skills to maintain and repair, as well as evolve, the site over time,” she says. 

As Burchert puts it, science fiction tends to show a future built of concrete and steel; Material Cultures instead offers something natural, communal, and innovative, a needed paradigm shift. And it’s increasingly working on a larger scale. The Phoenix, a forthcoming low-carbon development in the southern English city of Lewes that’s being developed by a former managing director for Greenpeace, will use the firm’s designs for 70 of its 700 planned homes. 

The project Gormley may be most excited about is an interdisciplinary school Material Cultures is creating north of London: a 500-acre former farm in Essex that will be a living laboratory bridging the firm’s work in supply chains, materials science, and construction. The rural site for the project, which has the working title Land Lab, was deliberately chosen as a place where those connections would be inherent, Gormley says. 

The Essex project advances the firm’s larger mission. As Gormley, Massoud, and Islam advise in their book, “Hold a vision of a radically different world in your mind while continuing to act in the world as it is, persisting in the project of making changes that are within the scope of action.” 

Patrick Sisson, a Chicago expat living in Los Angeles, covers technology and urbanism.

NASA’s new AI model can predict when a solar storm may strike

NASA and IBM have released a new open-source machine learning model to help scientists better understand and predict the physics and weather patterns of the sun. The model, called Surya, was trained on over a decade’s worth of NASA solar data and should help give scientists an early warning when a dangerous solar flare is likely to hit Earth.

Solar storms occur when the sun erupts, sending energy and particles out into space. They can produce solar flares and slower-moving coronal mass ejections that can disrupt radio signals, flip computer bits onboard satellites, and endanger astronauts with bursts of radiation.

There’s no way to prevent these sorts of effects, but being able to predict when a large solar flare will occur could let people work around them. However, as Louise Harra, an astrophysicist at ETH Zurich, puts it, “when it erupts is always the sticking point.”

Scientists can easily tell from an image of the sun if there will be a solar flare in the near future, says Harra, who did not work on Surya. But knowing the exact timing and strength of a flare is much harder, she says. That’s a problem, because a flare’s size can make the difference between small regional radio blackouts every few weeks (which can still be disruptive) and a devastating solar superstorm that would cause satellites to fall out of orbit and electrical grids to fail. Some solar scientists believe we are overdue for a superstorm of that magnitude.

While machine learning has been used to study solar weather events before, the researchers behind Surya hope the quality and sheer scale of their data will help it predict a wider range of events more accurately. 

The model’s training data came from NASA’s Solar Dynamics Observatory, which collects pictures of the sun at many different wavelengths of light simultaneously. That made for a dataset of over 250 terabytes in total.

Early testing of Surya showed it could predict some solar flares two hours in advance. “It can predict the solar flare’s shape, the position in the sun, the intensity,” says Juan Bernabe-Moreno, an AI researcher at IBM who led the Surya project. Two hours may not be enough to protect against all the impacts a strong flare could have, but every moment counts. IBM claims in a blog post that this can as much as double the warning time currently possible with state-of-the-art methods, though exact reported lead times vary. It’s possible this predictive power could be improved further, for example through fine-tuning or the addition of other data.
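
To make the fine-tuning idea concrete, here is a minimal, hypothetical sketch in PyTorch of how a frozen solar foundation model might be adapted to a flare-prediction task. The backbone module stands in for a pretrained model like Surya (this is not its actual interface), and the binary label—whether a flare occurs within two hours—is an assumption chosen to match the lead times described above.

```python
# Hypothetical sketch: adapting a frozen solar foundation model to
# flare prediction. The backbone is a stand-in for a pretrained model
# such as Surya; its real interface may differ.
import torch
import torch.nn as nn

class FlareHead(nn.Module):
    """Small trainable classifier on top of a frozen backbone."""
    def __init__(self, backbone: nn.Module, embed_dim: int = 1024):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # keep pretrained weights fixed
        self.head = nn.Sequential(
            nn.Linear(embed_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            z = self.backbone(x)         # embedding of solar imagery
        return self.head(z).squeeze(-1)  # logit: flare within 2 hours?

def train_step(model, optimizer, images, labels):
    """One gradient step on a batch of (images, flare-or-not labels)."""
    logits = model(images)
    loss = nn.functional.binary_cross_entropy_with_logits(
        logits, labels.float()
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Only the small head is trained here; the expensively pretrained backbone, and whatever general solar patterns it encodes, is reused as is.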

According to Harra, the hidden patterns underlying events like solar flares are hard to understand from Earth. She says that while astrophysicists know the conditions that make these events happen, they still do not understand why they occur when they do. “It’s just those tiny destabilizations that we know happen, but we don’t know when,” says Harra. The promise of Surya lies in whether it can find the patterns underlying those destabilizations faster than any existing methods, buying us extra time.

However, Bernabe-Moreno is excited for the potential beyond predicting solar flares. He hopes to use Surya alongside previous models he worked on for IBM and NASA that predict weather here on Earth to better understand how solar storms and Earth weather are connected. “There is some evidence about solar weather influencing lightning, for example,” he says. “What are the cross effects, and where and how do you map the influence from one type of weather to the other?”

Because Surya is a foundation model, trained without being specialized for a single task, NASA and IBM hope that it can find many patterns in the sun’s physics, much as general-purpose large language models like ChatGPT can take on many different tasks. They believe Surya could even enable new understandings about how other celestial bodies work.

“Understanding the sun is a proxy for understanding many other stars,” Bernabe-Moreno says. “We look at the sun as a laboratory.”

Why we should thank pigeons for our AI breakthroughs

In 1943, while the world’s brightest physicists split atoms for the Manhattan Project, the American psychologist B.F. Skinner led his own secret government project to win World War II. 

Skinner did not aim to build a new class of larger, more destructive weapons. Rather, he wanted to make conventional bombs more precise. The idea struck him as he gazed out the window of his train on the way to an academic conference. “I saw a flock of birds lifting and wheeling in formation as they flew alongside the train,” he wrote. “Suddenly I saw them as ‘devices’ with excellent vision and maneuverability. Could they not guide a missile?”

Skinner started his missile research with crows, but the brainy black birds proved intractable. So he went to a local shop that sold pigeons to Chinese restaurants, and “Project Pigeon” was born. Though ordinary pigeons, Columba livia, were no one’s idea of clever animals, they proved remarkably cooperative subjects in the lab. Skinner rewarded the birds with food for pecking at the right target on aerial photographs—and eventually planned to strap the birds into a device in the nose of a warhead, which they would steer by pecking at the target on a live image projected through a lens onto a screen. 

The military never deployed Skinner’s kamikaze pigeons, but his experiments convinced him that the pigeon was “an extremely reliable instrument” for studying the underlying processes of learning. “We have used pigeons, not because the pigeon is an intelligent bird, but because it is a practical one and can be made into a machine,” he said in 1944.

People looking for precursors to artificial intelligence often point to science fiction by authors like Isaac Asimov or thought experiments like the Turing test. But an equally important, if surprising and less appreciated, forerunner is Skinner’s research with pigeons in the middle of the 20th century. Skinner believed that association—learning, through trial and error, to link an action with a punishment or reward—was the building block of every behavior, not just in pigeons but in all living organisms, including human beings. His “behaviorist” theories fell out of favor with psychologists and animal researchers in the 1960s but were taken up by computer scientists who eventually provided the foundation for many of the artificial-intelligence tools from leading firms like Google and OpenAI.  

These companies’ programs are increasingly incorporating a kind of machine learning whose core concept—reinforcement—is taken directly from Skinner’s school of psychology and whose main architects, the computer scientists Richard Sutton and Andrew Barto, won the 2024 Turing Award, an honor widely considered to be the Nobel Prize of computer science. Reinforcement learning has helped enable computers to drive cars, solve complex math problems, and defeat grandmasters in games like chess and Go—but it has not done so by emulating the complex workings of the human mind. Rather, it has supercharged the simple associative processes of the pigeon brain. 

It’s a “bitter lesson” of 70 years of AI research, Sutton has written: that human intelligence has not worked as a model for machine learning—instead, the lowly principles of associative learning are what power the algorithms that can now simulate or outperform humans on a variety of tasks. If artificial intelligence really is close to throwing off the yoke of its creators, as many people fear, then our computer overlords may be less like ourselves than like “rats with wings”—and planet-size brains. And even if it’s not, the pigeon brain can at least help demystify a technology that many worry (or rejoice) is “becoming human.” 

In turn, the recent accomplishments of AI are now prompting some animal researchers to rethink the evolution of natural intelligence. Johan Lind, a biologist at Stockholm University, has written about the “associative learning paradox,” wherein the process is largely dismissed by biologists as too simplistic to produce complex behaviors in animals but celebrated for producing humanlike behaviors in computers. The research suggests not only a greater role for associative learning in the lives of intelligent animals like chimpanzees and crows, but also far greater complexity in the lives of animals we’ve long dismissed as simple-minded, like the ordinary Columba livia.


When Sutton began working in AI, he felt as if he had a “secret weapon,” he told me: He had studied psychology as an undergrad. “I was mining the psychological literature for animals,” he says.

Skinner started his missile research with crows but switched to pigeons when the brainy black birds proved intractable.
B.F. SKINNER FOUNDATION

Ivan Pavlov began to uncover the mechanics of associative learning at the end of the 19th century in his famous experiments on “classical conditioning,” which showed that dogs would salivate at a neutral stimulus—like a bell or flashing light—if it was paired predictably with the presentation of food. In the middle of the 20th century, Skinner took Pavlov’s principles of conditioning and extended them from an animal’s involuntary reflexes to its overall behavior. 

Skinner wrote that “behavior is shaped and maintained by its consequences”—that a random action with desirable results, like pressing a lever that releases a food pellet, will be “reinforced” so that the animal is likely to repeat it. Skinner reinforced his lab animals’ behavior step by step, teaching rats to manipulate marbles and pigeons to play simple tunes on four-key pianos. The animals learned chains of behavior, through trial and error, in order to maximize long-term rewards. Skinner argued that this type of associative learning, which he called “operant conditioning” (and which other psychologists had called “instrumental learning”), was the building block of all behavior. He believed that psychology should study only behaviors that could be observed and measured without ever making reference to an “inner agent” in the mind.

Skinner thought that even human language developed through operant conditioning, with children learning the meanings of words through reinforcement. But his 1957 book on the subject, Verbal Behavior, provoked a brutal review from Noam Chomsky, and psychology’s focus started to swing from observable behavior to innate “cognitive” abilities of the human mind, like logic and symbolic thinking. Biologists soon rebelled against behaviorism also, attacking psychologists’ quest to explain the diversity of animal behavior through an elementary and universal mechanism. They argued that each species evolved specific behaviors suited to its habitat and lifestyle, and that most behaviors were inherited, not learned. 

By the ’70s, when Sutton started reading about Skinner’s and similar experiments, many psychologists and researchers interested in intelligence had moved on from pea-brained pigeons, which learn mostly by association, to large-brained animals with more sophisticated behaviors that suggested potential cognitive abilities. “This was clearly old stuff that was not exciting to people anymore,” he told me. Still, Sutton found these old experiments instructive for machine learning: “I was coming to AI with an animal-learning-theorist mindset and seeing the big lack of anything like instrumental learning in engineering.” 


Many engineers in the second half of the 20th century tried to model AI on human intelligence, writing convoluted programs that attempted to mimic human thinking and implement rules that govern human response and behavior. This approach—commonly called “symbolic AI”—was severely limited; the programs stumbled over tasks that were easy for people, like recognizing objects and words. It just wasn’t possible to write into code the myriad classification rules human beings use to, say, separate apples from oranges or cats from dogs—and without pattern recognition, breakthroughs in more complex tasks like problem solving, game playing, and language translation seemed unlikely too. These computer scientists, the AI skeptic Hubert Dreyfus wrote in 1972, accomplished nothing more than “a small engineering triumph, an ad hoc solution of a specific problem, without general applicability.”

Pigeon research, however, suggested another route. A 1964 study showed that pigeons could learn to discriminate between photographs with people and photographs without people. Researchers simply presented the birds with a series of images and rewarded them with a food pellet for pecking an image showing a person. They pecked randomly at first but quickly learned to identify the right images, including photos where people were partially obscured. The results suggested that you didn’t need rules to sort objects; it was possible to learn concepts and use categories through associative learning alone. 

In another Skinner experiment, a pigeon receives food after correctly matching a colored light to a corresponding colored panel.
GETTY IMAGES

When Sutton began working with Barto on AI in the late ’70s, they wanted to create a “complete, interactive goal-seeking agent” that could explore and influence its environment like a pigeon or rat. “We always felt the problems we were studying were closer to what animals had to face in evolution to actually survive,” Barto told me. The agent needed two main functions: search, to try out and choose from many actions in a situation, and memory, to associate an action with the situation where it resulted in a reward. Sutton and Barto called their approach “reinforcement learning”; as Sutton said, “It’s basically instrumental learning.” In 1998, they published the definitive exploration of the concept in a book, Reinforcement Learning: An Introduction. 
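
The core of that formulation is compact enough to sketch in a few lines. Below is a minimal, illustrative tabular Q-learning loop in the spirit of Sutton and Barto’s book: the agent searches via occasional random actions and remembers by nudging a table of action values toward the rewards it receives. The environment object, with its reset, step, and actions members, is an assumed toy interface, not any particular library’s.

```python
# Minimal tabular Q-learning sketch (illustrative; toy environment).
# "Search" = epsilon-greedy exploration; "memory" = the Q table.
import random
from collections import defaultdict

def q_learning(env, episodes=5000, alpha=0.1, gamma=0.99, epsilon=0.1):
    Q = defaultdict(float)  # memory: estimated value of (state, action)
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # Search: mostly exploit known values, sometimes explore.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            # Reinforce: move the estimate toward the reward plus the
            # best value currently expected from the next state.
            best_next = max(Q[(next_state, a)] for a in env.actions)
            Q[(state, action)] += alpha * (
                reward + gamma * best_next - Q[(state, action)]
            )
            state = next_state
    return Q
```

AlphaGo Zero’s +1 and −1 game outcomes, described below, are exactly this kind of reward signal, scaled up with deep networks and enormous compute.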

Over the following two decades, as computing power grew exponentially, it became possible to train AI on increasingly complex tasks—that is, essentially, to run the AI “pigeon” through millions more trials. 

Programs trained with a mix of human input and reinforcement learning defeated human experts at chess and Atari. Then, in 2017, engineers at Google DeepMind built the AI program AlphaGo Zero entirely through reinforcement learning, giving it a numerical reward of +1 for every game of Go that it won and −1 for every game that it lost. Programmed to seek the maximum reward, it began without any knowledge of Go but improved over 40 days until it attained what its creators called “superhuman performance.” Not only could it defeat the world’s best human players at Go, a game considered even more complicated than chess, but it actually pioneered new strategies that professional players now use. 

“Humankind has accumulated Go knowledge from millions of games played over thousands of years,” the program’s builders wrote in Nature in 2017. “In the space of a few days, starting tabula rasa, AlphaGo Zero was able to rediscover much of this Go knowledge, as well as novel strategies that provide new insights into the oldest of games.” The team’s lead researcher was David Silver, who studied reinforcement learning under Sutton at the University of Alberta.

Today, more and more tech companies have turned to reinforcement learning in products such as consumer-facing chatbots and agents. The first generation of generative AI, including large language models like OpenAI’s GPT-2 and GPT-3, tapped into a simpler form of associative learning: the models were trained to predict the next word in enormous corpora of human-written text, supplemented in some cases by data sets labeled by people. Programmers then often used reinforcement to fine-tune the results, asking people to rate a model’s outputs and feeding those ratings back to the model as rewards to pursue. (Researchers call this “reinforcement learning from human feedback.”)

Then, last fall, OpenAI revealed its o-series of large language models, which it classifies as “reasoning” models. The pioneering AI firm boasted that they are “trained with reinforcement learning to perform reasoning” and claimed they are capable of “a long internal chain of thought.” The Chinese startup DeepSeek also used reinforcement learning to train its attention-grabbing “reasoning” LLM, R1. “Rather than explicitly teaching the model on how to solve a problem, we simply provide it with the right incentives, and it autonomously develops advanced problem-solving strategies,” they explained.

These descriptions might impress users, but at least psychologically speaking, they are confused. A computer trained on reinforcement learning needs only search and memory, not reasoning or any other cognitive mechanism, in order to form associations and maximize rewards. Some computer scientists have criticized the tendency to anthropomorphize these models’ “thinking,” and a team of Apple engineers recently published a paper noting the models’ failure at certain complex tasks and “raising crucial questions about their true reasoning capabilities.”

Sutton, too, dismissed the claims of reasoning as “marketing” in an email, adding that “no serious scholar of mind would use ‘reasoning’ to describe what is going on in LLMs.” Still, he has argued, with Silver and other coauthors, that the pigeons’ method—learning, through trial and error, which actions will yield rewards—is “enough to drive behavior that exhibits most if not all abilities that are studied in natural and artificial intelligence,” including human language “in its full richness.” 

In a paper published in April, Sutton and Silver stated that “today’s technology, with appropriately chosen algorithms, already provides a sufficiently powerful foundation to … rapidly progress AI towards truly superhuman agents.” The key, they argue, is building AI agents that depend less than LLMs on human dialogue and prejudgments to inform their behavior. 

“Powerful agents should have their own stream of experience that progresses, like humans, over a long time-scale,” they wrote. “Ultimately, experiential data will eclipse the scale and quality of human generated data. This paradigm shift, accompanied by algorithmic advancements in RL, will unlock in many domains new capabilities that surpass those possessed by any human.”


If computers can do all that with just a pigeonlike brain, some animal researchers are now wondering if actual pigeons deserve more credit than they’re commonly given. 

“When considered in light of the accomplishments of AI, the extension of associative learning to purportedly more complicated forms of cognitive performance offers fresh prospects for understanding how biological systems may have evolved,” Ed Wasserman, a psychologist at the University of Iowa, wrote in a recent study in the journal Current Biology.

In one experiment, Wasserman trained pigeons to succeed at a complex categorization task, which several undergraduate students failed. The students tried, in vain, to find a rule that would help them sort various discs with parallel black lines of various widths and tilts; the pigeons simply developed a sense, through practice and association, for the group to which any given disc belonged. 

Like Sutton, Wasserman became interested in behaviorist psychology when Skinner’s theories were out of fashion. He didn’t switch to computer science, however: He stuck with pigeons. “The pigeon lives or dies by these really rudimentary learning rules,” Wasserman told me recently, “but they are powerful enough to have succeeded colossally in object recognition.” In his most famous experiments, Wasserman trained pigeons to detect cancerous tissue and symptoms of heart disease in medical scans as accurately as experienced doctors with framed diplomas behind their desks. Given his results, Wasserman found it odd that so many psychologists and ethologists regarded associative learning as a crude, mechanical mechanism, incapable of producing the intelligence of clever animals like apes, elephants, dolphins, parrots, and crows. 

Other researchers also started to reconsider the role of associative learning in animal behavior after AI started besting human professionals in complex games. “With the progress of artificial intelligence, which in essence is built upon associative processes, it is increasingly ironic that associative learning is considered too simple and insufficient for generating biological intelligence,” Lind, the biologist from Stockholm University, wrote in 2023. He often cites Sutton and Barto’s computer science in his biological research, and he believes it’s human beings’ symbolic language and cumulative cultures that really put them in a cognitive category of their own.

Ethologists generally propose cognitive mechanisms, like theory of mind (that is, the ability to attribute mental states to others), to explain remarkable animal behaviors like social learning and tool use. But Lind has built models showing that these flexible behaviors could have developed through associative learning, suggesting that there may be no need to invoke cognitive mechanisms at all. If animals learn to associate a behavior with a reward, then the behavior itself will come to approximate the value of the reward. A new behavior can then become associated with the first behavior, allowing the animal to learn chains of actions that ultimately lead to the reward. In Lind’s view, studies demonstrating self-control and planning in chimpanzees and ravens are probably describing behaviors acquired through experience rather than innate mechanisms of the mind.  
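
A toy version of that chaining argument fits in a dozen lines. The sketch below (with invented numbers, not Lind’s actual model) shows secondary reinforcement: action B is followed directly by food, action A merely leads to the chance to perform B, and yet A’s learned value climbs toward B’s, so a two-step chain is acquired from local associations alone.

```python
# Illustrative sketch of associative chaining (secondary reinforcement).
# Action B earns food directly; action A only leads to the opportunity
# to perform B. The numbers are invented for the example.
alpha, gamma = 0.2, 0.9   # learning rate and discount on delayed reward
V = {"A": 0.0, "B": 0.0}  # learned values of the two actions

for trial in range(50):
    # B is reinforced directly by the food reward (value 1).
    V["B"] += alpha * (1.0 - V["B"])
    # A is reinforced by B's current value rather than by food itself,
    # so A acquires value secondhand as B's value grows.
    V["A"] += alpha * (gamma * V["B"] - V["A"])

print(V)  # both approach the reward's value, A discounted slightly
```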

Lind has been frustrated with what he calls the “low standard that is accepted in animal cognition studies.” As he wrote in an email, “Many researchers in this field do not seem to worry about excluding alternative hypotheses and they seem happy to neglect a lot of current and historical knowledge.” There are some signs, though, that his arguments are catching on. A group of psychologists not affiliated with Lind referenced his “associative learning paradox” last year in a criticism of a Current Biology study, which purported to show that crows used “true statistical inference” and not “low-level associative learning strategies” in an experiment. The psychologists found that they could explain the crows’ performance with a simple reinforcement-learning model—“exactly the kind of low-level associative learning process that [the original authors] ruled out.”

Skinner might have felt vindicated by such arguments. He lamented psychology’s cognitive turn until his death in 1990, maintaining that it was scientifically irresponsible to probe the minds of living beings. After “Project Pigeon,” he became increasingly obsessed with “behaviorist” solutions to societal problems. He went from training pigeons for war to inventions like the “Air Crib,” which aimed to “simplify” baby care by keeping the infant behind glass in a climate-controlled chamber and eliminating the need for clothing and bedding. Skinner rejected free will, arguing that human behavior is determined by environmental variables, and wrote a novel, Walden Two, about a utopian community founded on his ideas.


People who care about animals might feel uneasy about a revival in behaviorist theory. The “cognitive revolution” broke with centuries of Western thinking, which had emphasized human supremacy over animals and treated other creatures like stimulus-response machines. But arguing that animals learn by association is not the same as arguing that they are simple-minded. Scientists like Lind and Wasserman do not deny that internal forces like instinct and emotion also influence animal behavior. Sutton, too, believes that animals develop models of the world through their experiences and use them to plan actions. Their point is not that intelligent animals are empty-headed but that associative learning is a much more powerful—indeed, “cognitive”—mechanism than many of their peers believe. The psychologists who recently criticized the study on crows and statistical inference did not conclude that the birds were stupid. Rather, they argued “that a reinforcement learning model can produce complex, flexible behaviour.”

This is largely in line with the work of another psychologist, Robert Rescorla, whose work in the ’70s and ’80s influenced both Wasserman and Sutton. Rescorla encouraged people to think of association not as a “low-level mechanical process” but as “the learning that results from exposure to relations among events in the environment” and “a primary means by which the organism represents the structure of its world.” 

This is true even of a laboratory pigeon pecking at screens and buttons in a small experimental box, where scientists carefully control and measure stimuli and rewards. But the pigeon’s learning extends outside the box. Wasserman’s students transport the birds between the aviary and the laboratory in buckets—and experienced pigeons jump immediately into the buckets whenever the students open the doors. Much as Rescorla suggested, they are learning the structure of their world inside the laboratory and the relation of its parts, like the bucket and the box, even though they do not always know the specific task they will face inside. 

The same associative mechanisms through which the pigeon learns the structure of its world can open a window to the kind of inner life that Skinner and many earlier psychologists said did not exist. Pharmaceutical researchers have long used pigeons in drug-discrimination tasks, where they’re given, say, an amphetamine or a sedative and rewarded with a food pellet for correctly identifying which drug they took. The birds’ success suggests they both experience and discriminate between internal states. “Is that not tantamount to introspection?” Wasserman asked.

It is hard to imagine AI matching a pigeon on this specific task—a reminder that, though AI and animals share associative mechanisms, there is more to life than behavior and learning. A pigeon deserves ethical consideration as a living creature not because of how it learns but because of what it feels. A pigeon can experience pain and suffer, while an AI chatbot cannot—even if some large language models, trained on corpora that include descriptions of human suffering and sci-fi stories of sentient computers, can trick people into believing otherwise. 

Psychologist Ed Wasserman trained pigeons to detect cancerous tissue and symptoms of heart disease in medical scans as accurately as experienced physicians.
UNIVERSITY OF IOWA/WASSERMAN LAB

“The intensive public and private investments into AI research in recent years have resulted in the very technologies that are forcing us to confront the question of AI sentience today,” two philosophers of science wrote in Aeon in 2023. “To answer these current questions, we need a similar degree of investment into research on animal cognition and behavior.” Indeed, comparative psychologists and animal researchers have long grappled with questions that suddenly seem urgent because of AI: How do we attribute sentience to other living beings? How can we distinguish true sentience from a very convincing performance of sentience?

Such an undertaking would yield knowledge not only about technology and animals but also about ourselves. Most psychologists probably wouldn’t go as far as Sutton in arguing that reward is enough to explain most if not all human behavior, but no one would dispute that people often learn by association too. In fact, most of Wasserman’s undergraduate students eventually succeeded at his recent experiment with the striped discs, but only after they gave up searching for rules. They resorted, like the pigeons, to association and couldn’t easily explain afterwards what they’d learned. It was just that with enough practice, they started to get a feel for the categories. 

It is another irony about associative learning: What has long been considered the most complex form of intelligence—a cognitive ability like rule-based learning—may make us human, but we also call on it for the easiest of tasks, like sorting objects by color or size. Meanwhile, some of the most refined demonstrations of human learning—like, say, a sommelier learning to taste the difference between grapes—are learned not through rules, but only through experience. 

Learning through experience relies on ancient associative mechanisms that we share with pigeons and countless other creatures, from honeybees to fish. The laboratory pigeon is not only in our computers but in our brains—and the engine behind some of humankind’s most impressive feats. 

Ben Crair is a science and travel writer based in Berlin. 

The US could really use an affordable electric truck

On Monday, Ford announced plans for an affordable electric truck with a 2027 delivery date and an expected price tag of about $30,000, thanks in part to a new manufacturing process that it says will help cut costs.

This could be the shot in the arm that the US EV market needs. Sales are slowing, and Ford in particular has struggled recently—the automaker has lost $12 billion on its EV division over the last two and a half years. Adoption barriers continue to mount, too, with the Trump administration cutting tax credits as well as rules designed to push automakers toward zero-emissions vehicles. And that’s not to mention tariffs.

But if anything can get Americans excited, it’s a truck, especially an affordable one. (There was a ton of buzz over the announcement of a bare-bones truck from Bezos-backed Slate Auto earlier this year, for example.) The big question is whether the company can deliver in this environment.

One key thing to note here: This is not the first time that there’s been a big splashy truck announcement from Ford that was supposed to change everything. The F-150 Lightning was hailed as a turning point for vehicle electrification, a signal that decarbonization had entered a new era. We cited the truck when we put “The Inevitable EV” on our 10 Breakthrough Technologies list in 2023. 

Things haven’t quite turned out that way. One problem is that the Lightning was supposed to be relatively affordable, with a price tag of about $40,000 when it was first announced in 2021. The starting price inflated to $52,000 when it actually went on sale in 2022.

The truck was initially popular and became quite hard to find at dealerships. But prices climbed and interest leveled off. The base model hit nearly $60,000 by 2023. For the past few years, Ford has cut Lightning production several times and laid off employees who assembled the trucks.

Now, though, Ford is once again promising an affordable truck, and it’s supposed to be even cheaper this time. To help cut costs, the company says it’s simplifying, creating one universal platform for a new set of EVs. Using a common structure and set of components will help produce not only a midsize truck but also other trucks, vans, and SUVs. There are also planned changes to the manufacturing process: rather than one assembly line, multiple lines will join together to form what the company calls an assembly tree.

Another supporting factor for cost savings is the battery. The company plans to use lithium-iron phosphate (or LFP) cells—a type of lithium-ion battery that doesn’t contain nickel or cobalt. Leaving out those relatively pricey metals means lower costs.

Side note here: That battery could be surprisingly small. In a media briefing, a Ford official reportedly said that the truck’s battery would be 15% smaller than the one in the Atto 3 crossover from the Chinese automaker BYD. Since that model has a roughly 60-kilowatt-hour pack, that could put this new battery at about 51 kilowatt-hours. That’s only half the capacity of the Ford Lightning’s battery and similar to the smallest pack offered in a Tesla Model 3 today. (This could mean the truck has a relatively limited range, though the company hasn’t shared any details on that front yet.)
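
Back of the envelope, using the reported comparison (both the Atto 3 figure and the 15% are from that briefing, not official Ford specs):

```python
# Back-of-envelope pack estimate from the reported briefing figures.
atto3_pack_kwh = 60                          # approximate BYD Atto 3 pack
ford_pack_kwh = atto3_pack_kwh * (1 - 0.15)  # said to be 15% smaller
print(ford_pack_kwh)                         # 51.0 kWh
```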

A string of big promises isn’t too unusual for a big company announcement. What was unusual was the tone from officials during the event on Monday.

As Andrew Hawkins pointed out in The Verge this week, “Ford seems to realize its timing is unfortunate.” During the announcement, executives emphasized that this was a bet, one that might not work out.

CEO Jim Farley put it bluntly: “The automotive industry has a graveyard littered with affordable vehicles that were launched in our country with all good intentions, and they fizzled out with idle plants, laid-off workers, and red ink.” Woof.

From where I’m standing, it’s hard to be optimistic that this announcement will turn out differently from all those failed ones, given where the US EV market is right now.   

In a new report published in June, the energy consultancy BNEF slashed its predictions for future EV uptake. Last year, the organization predicted that 48% of new vehicles sold in the US in 2030 would be electric. In this year’s edition, that number got bumped down to just 27%.

To be clear: BNEF and other organizations are still expecting more EVs on the roads in the future than today, since the vehicles make up less than 10% of new sales in the US. But expectations are way down, in part because of a broad cut in public support for EVs. 

The tax credits that gave drivers up to $7,500 off the purchase of a new EV end in just over a month. Tariffs are going to push costs up even for domestic automakers like Ford, which still rely on imported steel and aluminum.

A revamped manufacturing process and a cheaper, desirable vehicle could be exactly the sort of move that automakers need to make for the US EV market. But I’m skeptical that this truck will be able to turn it all around. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Taiwan’s “silicon shield” could be weakening

One winter afternoon in a conference room in Taipei, a pair of twentysomething women dragged their friend across the floor. Lying on the ground in checkered pants and a brown sweatshirt, she was pretending to be either injured or dead. One friend picked her up by her arms, the other grabbed hold of her legs, and they managed to move her, despite momentarily breaking character to laugh at the awkwardness of the exercise. The three women had paid approximately $40 to spend their Sunday here, undergoing basic training to prepare for a possibility every Taiwanese citizen has an opinion about: Will China invade? 

Taiwanese politics increasingly revolves around that question. China’s ruling party has wanted to seize Taiwan for more than half a century. But in recent years, China’s leader, Xi Jinping, has placed greater emphasis on the idea of “taking back” the island (which the Chinese Communist Party, or CCP, has never controlled). As China’s economic and military might has grown, some analysts believe the country now has the capacity to quarantine Taiwan whenever it wants, making the decision a calculation of costs and benefits.

Many in Taiwan and elsewhere think one major deterrent has to do with the island’s critical role in semiconductor manufacturing. Taiwan produces the majority of the world’s semiconductors and more than 90% of the most advanced chips needed for AI applications. Bloomberg Economics estimates that a blockade would cost the global economy, including China, $5 trillion in the first year alone.

The island, which is approximately the size of Maryland, owes its remarkably disproportionate chip dominance to the inventiveness and prowess of one company: Taiwan Semiconductor Manufacturing Company, or TSMC. The chipmaker, which reached a market capitalization of $1 trillion in July, has contributed more than any other to Taiwan’s irreplaceable role in the global semiconductor supply chain. Its clients include Apple and the leading chip designer Nvidia. Its chips are in your iPhone, your laptop, and the data centers that run ChatGPT. 

For a company that makes what amounts to an invisible product, TSMC holds a remarkably prominent role in Taiwanese society. I’ve heard people talk about it over background noise in loud bars in the southern city of Tainan and listened to Taipei cab drivers connect Taiwan’s security situation to the company, unprompted. “Taiwan will be okay,” one driver told me as we sped by the national legislature, “because TSMC.” 

The idea is that world leaders (particularly the United States)—aware of the island’s critical role in the semiconductor supply chain—would retaliate economically, and perhaps militarily, if China were to attack Taiwan. That, in turn, deters Beijing. “Because TSMC is now the most recognizable company of Taiwan, it has embedded itself in a notion of Taiwan’s sovereignty,” says Rupert Hammond-Chambers, president of the US-Taiwan Business Council. 

Now some Taiwan specialists and some of the island’s citizens are worried that this “silicon shield,” if it ever existed, is cracking. Facing pressure from Washington, TSMC is investing heavily in building out manufacturing capacity at its US hub in Arizona. It is also building facilities in Japan and Germany in addition to maintaining a factory in mainland China, where it has been producing less advanced legacy chips since 2016.

In Taiwan, there is a worry that expansion abroad will dilute the company’s power at home, making the US and other countries less inclined to feel Taiwan is worthy of defense. TSMC’s investments in the US have come with no guarantees for Taiwan in return, and high-ranking members of Taiwan’s opposition party have accused the ruling Democratic Progressive Party (DPP) of gambling with the future of the island. It doesn’t help that TSMC’s expansion abroad coincides with what many see as a worrying attitude in the White House. On top of his overarching “America First” philosophy, Donald Trump has declined to comment on the specific question of whether the US would intervene if China attempted to take Taiwan by force. “I don’t want to ever put myself in that position,” he said in February. 

At the same time, Beijing’s interest in Taiwan has continued unabated. While China is making progress toward semiconductor self-sufficiency, it’s currently in a transition period, with companies relying on foreign-made chips manufactured in Taiwan—some in compliance with export controls and some smuggled in. Meanwhile, the CCP persistently suggests that seizing the island would bring about a kind of family reunion. “It is the common aspiration and sacred responsibility of all Chinese sons and daughters to realize the complete reunification of the motherland,” reads a statement released by the foreign ministry after Nancy Pelosi’s controversial 2022 visit to Taiwan. Though it’s impossible to know the full scope of Beijing’s motivations, there is also obvious strategic appeal: Controlling the island would give China deep-water access, which is critical for naval routes and submarines. Plus, it could significantly disrupt American AI firms’ access to advanced chips.

While China ramps up militarily, Taiwan is trying to make itself hard to ignore. The government is increasingly portraying the island as strategically essential to the global community, with semiconductors as its primary offering. “The international community must certainly do everything in its power to avoid a conflict in the Taiwan Strait; there is too great a cost,” Taiwanese president Lai Ching-te said in an interview earlier this year with Japan’s Nippon Television. Parts of the international community are hearing that message—and seizing the opportunity it presents: earlier this month, defense tech company Anduril Industries announced it is opening a new office in Taiwan, where it will be expanding partnerships and selling autonomous munitions. 

For its part, the chip industry is actively showing its commitment to Taiwan. While other tech CEOs attended Trump’s second inauguration, for instance, Nvidia chief executive Jensen Huang met instead with TSMC’s chairman, and the company announced in May that its overseas headquarters would be in Taipei. In recent years, US government officials have also started paying more attention to Taiwan’s security situation and its interconnectedness with the chip industry. “There was a moment when everybody started waking up to the dependence on TSMC,” says Bonnie Glaser, managing director of the German Marshall Fund’s Indo-Pacific Program. The realization emerged, she says, over the last decade but was underscored in March of 2021, when Phil Davidson, then leader of the United States Indo-Pacific Command, testified to the Senate Armed Services Committee that there could be an invasion by 2027. Parallel to the security threat is the potential issue of overdependence, since so much chipmaking capability is concentrated in Taiwan.

For now, Taiwan is facing a tangle of interests and time frames. China presents its claim to Taiwan as a historical inevitability, albeit one with an uncertain timeline, while the United States’ relationship with the island is focused on an AI-driven future. But from Taiwan’s perspective, the fight for its fate is playing out right now, amid unprecedented geopolitical instability. The next few years will likely determine whether TSMC’s chipmaking dominance is enough to convince the world Taiwan is worth protecting.

Innovation built on interconnectivity 

TSMC is an uncontested success story. Its founder, Morris Chang, studied and worked in the United States before he was lured to Taiwan to start a new business on the promise of state support and inexpensive yet qualified labor. Chang founded TSMC in 1987 on the basis of his innovative business model. Rather than design and produce chips in-house, as was the norm, TSMC would act as a foundry: Clients would design the chips, and TSMC would make them. 

This focus on manufacturing allowed TSMC to optimize its operations, building up process knowledge and, eventually, outperforming competitors like Intel. It also freed up other businesses to go “fabless,” meaning they could stop maintaining their own semiconductor factories, or fabs, and throw their resources behind other parts of the chipmaking enterprise. Tapping into Taiwan’s domestic electronics supply chain proved effective and efficient for TSMC. Throughout the 1990s and early 2000s, global demand for semiconductors powering personal computers and other devices continued to grow. TSMC thrived.

Then, in 2022, the US imposed export controls on China that restricted its access to advanced chips. Taiwan was forced to either comply, by cutting off Chinese clients, or risk losing the support of the country that was home to 70% of its client base—and, possibly, 100% of its hopes for external military support in the event of an attack. 

Soon after, Chang announced that he believed globalization and free markets were “almost dead.” The nearly three years since have shown he was onto something. For one thing, in contrast to President Biden’s pursuit of supply chain integration with democratic allies, President Trump’s foreign policy is characterized by respect for big, undemocratic powers and punitive tariffs against both America’s rivals and its friends. Trump has largely abandoned Biden’s economic diplomacy with European and Asian allies but kept his China-targeted protectionism—and added his trademark transactionalism. In an unprecedented move earlier this month, the administration allowed Nvidia and AMD to sell previously banned chips to China on the condition that the companies pay the government 15% of revenues made from China sales. 

Protectionism, it turns out, spurs self-reliance. China’s government has been making a massive effort to build up its domestic chip production capabilities—a goal that was identified at the beginning of Xi’s rise but has been turbocharged in the wake of Washington’s export controls. 

Any hope the US has for significantly expanding domestic chip production comes from its friends—TSMC first among them. The semiconductor industry developed as a global endeavor out of practicality, playing to the strengths of each region: design in the US and manufacturing in Asia, with key inputs from Europe central to the process. Yet the US government, entrenched in its “tech war” with China, is now dead set on deglobalizing the chip supply chain, or at least onshoring as much of it as possible. There’s just one hiccup: The best chip manufacturer isn’t American. It’s TSMC. Even if some manufacturing happens in Arizona, the US still relies on Taiwan’s chipmaking ecosystem. And copying that supply chain outside Taiwan could be harder than the current administration imagines.

Squarely in the middle

Taiwan’s modern security uncertainties stem from the long-contested issue of the island’s sovereignty. After losing the First Sino-Japanese War in the late 1800s, the Qing dynasty forfeited Taiwan to Japanese imperial control. It was Japan’s “model colony” until 1945, when postwar negotiations resulted in its transfer to the Republic of China under Chiang Kai-shek of the Nationalist Party, known as the KMT. The insurgent CCP under Mao Zedong ultimately defeated the Nationalists in a civil war fought on the mainland until 1949. Chiang and many of his party’s defeated generals decamped to Taiwan, controlling it under martial law for nearly 40 years.

Taiwan held its first free democratic elections in 1996, kicking off a two-party rivalry between the KMT, which favors closer relations with Beijing, and the DPP, which opposes integration with China. Kitchen-table issues like economic growth are central to Taiwanese elections, but so is the overarching question of how best to handle the threat of invasion, which has persisted for nearly 80 years. The DPP is increasingly calling for raising defense spending and civilian preparedness to make sure Taiwan is ready for the worst, while the KMT supports direct talks with Beijing.  

In March 2025, President Trump and TSMC CEO C.C. Wei jointly announced that the firm will make an additional $100 billion investment (on top of a previously announced $65 billion) in TSMC’s US hub in Arizona.
REBECCA NOBLE/BLOOMBERG VIA GETTY IMAGES

Meanwhile, Chinese military incursions around Taiwan—known as “gray zone” tactics because they fall short of acts of war—are increasingly frequent. In May, Taiwan’s defense ministry reportedly estimated that Chinese warplanes were entering Taiwan’s air defense zone more than 200 times a month, up from fewer than 10 times per month five years ago. China has conducted drills mirroring the actions needed for a full-scale invasion or a blockade, which would cut Taiwan off from the outside world. Chinese military officials are now publicly talking about achieving a blockade, says Lyle Morris, an expert on foreign policy and national security at the Asia Society Policy Institute. “They’re punishing Lai and the DPP,” Morris says. Meanwhile, the CCP has its own people to answer to: When it comes to the Taiwan issue, Morris says, “Beijing is probably quite worried about the people of China being upset if they aren’t hawkish enough or if they come out looking weak.” Indeed, in response to Lai’s recent policy statements, including one declaring that China is a “hostile foreign force,” Gao Zhikai, a prominent scholar in China who opposes Taiwanese independence, recently wrote, “The reunification with the motherland cannot be endlessly delayed. Decisive action must be taken.” 

Intimidation from China has made some ordinary Taiwanese citizens more concerned; according to a recent poll conducted by a defense-focused think tank, 51% think defense spending should be increased (although 65% of respondents said they thought an attack within five years was “unlikely”). No matter how much money Taipei spends, the sheer military imbalance between China and Taiwan means Taiwan would need help. But especially in the wake of Ukraine’s experience, many believe US aid would be contingent on whether Taiwan demonstrates the will to defend itself. “Based on war games, Taiwan would have to hold out for a month before the US could potentially intervene,” says Iris Shaw, director of the DPP mission in the US. And support from Taiwan’s neighbors like Japan might be contingent on US involvement.

But how likely is the US to intervene in such a scenario? The author Craig Addison popularized the argument that Taiwan’s fate is tied to its chip production prowess in his 2001 book Silicon Shield: Taiwan’s Protection Against Chinese Attack. Back then, Addison wrote that although the US had been intentionally vague about whether it would go to war to protect the island, America’s technological reliance on “a safe and productive Taiwan” made it highly probable that Washington would intervene. President Joe Biden deviated from those decades of calculated ambiguity by asserting multiple times that America would defend the island in the event of an attack. Yet now, Trump seems to have taken the opposite position, possibly presenting an opportunity for Beijing. 

TSMC in the Trump era 

In many ways, Taiwan finds itself in a catch-22. It feels the need to cozy up to the US for protection, yet that defensive maneuver is arguably risky in itself, and many Taiwanese see it that way: According to a public opinion poll released in January, 34.7% believe that a “pro-US” policy provokes China and will cause a war.

But the Lai administration’s foreign policy is “inexorably intertwined with the notion that a strong relationship with the US is essential,” says Hammond-Chambers.

Bolstering US support may not be the only reason TSMC is building fabs outside Taiwan. As the company readily points out, the majority of its customers are American. TSMC is also responding to its home base’s increasingly apparent land and energy limitations: finding land to build new fabs sometimes causes rifts with Taiwanese people who, for example, don’t want their temples and ancestral burial sites repurposed as science parks. Taiwan also relies on imports to meet more than 95% of its energy needs, and the dominant DPP has pledged to phase out nuclear power, Taiwan’s most viable yet most hotly contested low-carbon energy source. Geopolitical tensions compound these physical constraints: Even if TSMC would never say as much, it’s fairly likely that if China did attack Taiwan, the firm would rather remain operational in other countries than be wiped out completely.

However, building out TSMC’s manufacturing capabilities outside Taiwan will not be easy. “The ecosystem they created is truly unique. It’s a function of the talent pipeline, the culture, and laws in Taiwan; you can’t easily replicate it anywhere,” says Glaser. TSMC has 2,500 Taiwan-based suppliers. Plenty are within a couple of hours’ drive or an even shorter trip on high-speed rail. Taiwan has built a fully operational chip cluster, the product of four decades of innovation, industrial policy, and labor.

As a result, it’s unclear whether TSMC will be able to copy its model and paste it into the suburbs of Phoenix, where it has 3,000 employees working on chip manufacturing. “Putting aside the geopolitical factor, they wouldn’t have expanded abroad,” says Feifei Hung, a researcher at the Asia Society. Rather than standalone facilities, the Arizona fabs are “appendages of TSMC that happen to be in Arizona,” says Paul Triolo, partner and tech policy lead at the international consulting firm DGA-Albright Stonebridge Group. When the full complex is operational, it will represent only a small percentage of TSMC’s overall capacity, most of which will remain in Taiwan. Triolo doubts the US buildout will yield results similar to what TSMC has built in Taiwan: “Arizona ain’t that yet, and never will be.”

Still, the second Trump administration has placed even more pressure on the company to “friendshore”—without providing any discernible signs of friendship. During this spring’s tariff frenzy, the administration threatened to hit Taiwan with a 32% “reciprocal” tariff, a move that was then paused and revived at 20% in late July (and was still being negotiated as of press time). The administration has also announced a 100% tariff on semiconductor imports, with the caveat that companies with US-based production, like TSMC, are exempt—though it’s unclear whether imports from critical suppliers in Taiwan will be tariffed. And the threat of a chip-specific tariff remains. “This is in line with [Trump’s] rhetoric of restoring manufacturing in the US and using tariffs as a one-size-fits-all tool to force it,” says Nancy Wei, a trade and supply chain analyst at the Eurasia Group. The US is also apparently considering levying a $1 billion fine against TSMC after TSMC-made chips were reportedly found in some Huawei devices.

Despite these kinds of maneuvers, TSMC has been steadfast in its attempts to get on Washington’s good side. In March, Trump and TSMC’s CEO, C.C. Wei, jointly announced that the firm will make an additional $100 billion investment (on top of a previously announced $65 billion) in TSMC’s US hub in Arizona. The pledge represents the largest single foreign direct investment in US history. While the deal was negotiated during Biden’s term, Trump was happy to take credit for ensuring that “the most powerful AI chips will be made right here in America.”

The Arizona buildout will also include an R&D facility—a critical element for tech transfer and intellectual-property development. Then there’s the very juicy cherry on top: TSMC announced in April that once all six new fabs are operational, 30% of its most advanced chips will be produced in Arizona. Up until then, the thinking was that US-based production would remain a generation or two behind. It looks as if the administration’s public and, presumably, private arm-twisting has paid off. 

Meanwhile, as Trump cuts government programs and subsidies while demanding the “return” of manufacturing to the US, it’s TSMC that is running a technician apprenticeship program in Arizona to create good American jobs. TSMC’s leaders, Triolo says, must question how serious the Trump administration is about long-term industrial policy. They’re probably asking themselves, he says, “Do they understand what it takes to support the semiconductor industry, like our government does?” 

Dealing with an administration that is so explicitly “America first” represents “one of the biggest challenges in history for Taiwanese companies,” says Thung-Hong Lin, a sociology researcher at the Taipei-based Academia Sinica. Semiconductor manufacturing depends on predictability. Trump has so far offered TSMC no additional incentives supporting its US expansion—and started a trade war that has directly affected the semiconductor industry, partly by introducing lasting uncertainty. “Trump’s tariffs have set off a new, more intensified bifurcation of semiconductor supply chains,” says Chris Miller, author of Chip War. For now, Miller says, TSMC must navigate a world in which the US and China are both intense competitors and, despite trade restrictions, important clients.

Warring narratives

China has been taking advantage of these changes to wage a war of disinformation. In response to Nancy Pelosi’s visit to Taiwan in 2022, when she was US Speaker of the House, Beijing sent warships, aircraft, and propaganda across the Taiwan Strait. Hackers using Chinese software hijacked the display screens in Taiwan’s 7-Eleven stores, making them show messages telling “warmonger Pelosi” to “get out of Taiwan.” That might not be an act of war, but it’s close; “7” is an institution of daily life on the island. It is not difficult to imagine how a similar tactic might be used to spread more devastating disinformation, falsely alleging, for example, that Taiwan’s military has surrendered to China during a future crisis.

Taiwan is “perpetually on the front lines” of cyberattacks from China, says Francesca Chen, a cybersecurity systems analyst at Taiwan’s Ministry of Digital Affairs. According to Taiwan’s National Security Bureau, instances of propaganda traceable to China grew by 60% in 2024 over the previous year, reaching 2.16 million. 

Visitors take selfies outside the TSMC Museum of Innovation in Hsinchu, Taiwan.
ANNABELLE CHIH/GETTY IMAGES

Over the last few years, online discussion of TSMC’s investments in the US “has become a focal point” of China’s state-sponsored disinformation campaigns aimed at Taiwan, Chen says. They claim TSMC is transferring its most advanced technology, talent, and resources to the US, “weakening Taiwan’s economic lifeline and critical position in global supply chains.” Key terms include “hollowing out Taiwan” and “de-Taiwanization.” This framing depicts TSMC’s diversification as a symbol of Taiwan’s vulnerability, Chen says. The idea is to exploit real domestic debates in Taiwan to deepen internal division, weakening social cohesion and undermining trust in the government.

Chinese officials haven’t been shy about echoing these messages out in the open: After the most recent US investment announcement in March, a spokesperson for China’s Taiwan Affairs Office accused Taiwan’s DPP of handing over TSMC as a “gift” to the US. (“TSMC turning into USMC?” asked a state media headline.) Former Taiwanese president Ma Ying-jeou posted an eerily similar criticism, alleging that TSMC’s US expansion amounted to “selling” the chipmaker in exchange for protection.

TSMC’s expansion abroad could become a major issue in Taiwan’s 2028 presidential election. It plays directly into party politics: The KMT can accuse the DPP of sacrificing Taiwan’s technology assets to placate the US, and the DPP can accuse the KMT of cozying up to China, even as Beijing’s military incursions become a more evident part of daily life. It remains to be seen whether TSMC’s shift to the US will ultimately protect or weaken Taiwan—or have no effect on the island’s security and sovereignty. For now at least, China’s aspirations loom large.

To Beijing, unequivocally, Taiwan does not equal TSMC. Instead, it represents the final, unfulfilled stage of the Communist Party’s revolutionary struggle. Framed that way, China’s resolve to take the island could very well be nonnegotiable. That would mean if Taiwan is going to maintain a shield that protects it from the full weight of China’s political orthodoxy, it may need to be made of something much stronger than silicon. 

Johanna M. Costigan is a writer and editor focused on technology and geopolitics in the US, China, and Taiwan. She writes the newsletter The Long Game.

Why US federal health agencies are abandoning mRNA vaccines

This time five years ago, we were in the throes of the covid-19 pandemic. By August 2020, we’d seen school closures, national lockdowns, and widespread panic. That year, the coronavirus was responsible for around 3 million deaths, according to the World Health Organization.

Then came the vaccines. The first mRNA vaccines for covid were authorized for use in December 2020. By the end of the following month, over 100 million doses had been administered. Billions more have been administered since then. The vaccines worked well and are thought to have saved millions of lives.

The US government played an important role in the introduction of these vaccines, providing $18 billion to support their development as part of Operation Warp Speed.

But now, that government is turning its back on the technology. Funding is being withdrawn. Partnerships are being canceled. Leaders of US health agencies are casting doubt on the vaccines’ effectiveness and safety. And this week, the director of the National Institutes of Health implied that the reversal was due to a lack of public trust in the technology.

Plenty of claims are being thrown about. Let’s consider the evidence.

mRNA is a molecule found in our cells that carries instructions from DNA for making proteins. The vaccines work in a similar way, except they carry genetic instructions for proteins found on the surface of the coronavirus. This helps train our immune systems to tackle the virus itself.

Research into mRNA vaccines has been underway for decades. But things really kicked into gear when the virus behind covid-19 triggered a pandemic in 2020. A huge international effort—along with plenty of funding—fast-tracked research and development.

The genetic code for the SARS-CoV-2 virus was sequenced in January 2020. The first vaccines were being administered by the end of that year. That’s wildly fast by pharma standards—drugs can typically spend around a decade in development.

And they seemed to work really well. Early trials in tens of thousands of volunteers suggested that Pfizer and BioNTech’s vaccine conferred “95% protection against covid-19.” No vaccine is perfect, but for a disease that was responsible for millions of deaths, the figures were impressive.

Still, there were naysayers, including Robert F. Kennedy Jr., the notorious antivaccine activist who currently leads the US’s health agencies. He has called covid vaccines “unsafe and ineffective.” In 2021, he petitioned the US Food and Drug Administration to revoke the authorization for covid vaccines. That same year, Instagram removed his account from the platform after he repeatedly shared “debunked claims about the coronavirus or vaccines.”

So perhaps we shouldn’t have been surprised when the US Department of Health and Human Services, which RFK Jr. now heads, announced “the beginning of a coordinated wind-down” of mRNA vaccine development earlier this month. HHS is canceling almost $500 million worth of funding for the technology. “The data show these vaccines fail to protect effectively against upper respiratory infections like covid and flu,” Kennedy said in a statement.

Well, as we’ve seen, the mRNA covid vaccines were hugely effective during the pandemic. And researchers are working on other mRNA vaccines for infections including flu. Our current flu vaccines aren’t ideal—they are produced slowly in a process that requires hen’s eggs, based on predictions about which flu strains are likely to be prominent in the winter. They’re not all that protective.

mRNA vaccines, on the other hand, can be made quickly and cheaply, potentially after we already know which flu strains we need to protect against. And scientists are making progress with universal flu vaccines—shots that could protect against multiple strains of the virus.

Kennedy’s other claim is that the vaccines aren’t safe. There have certainly been reports of adverse events. Usually these are mild and short-lived—most people will be familiar with the fatigue and flu-like symptoms that can follow a covid jab. But some are more serious: Some people have developed neurological and cardiovascular conditions. 

These problems are rare, according to an evaluation of adverse outcomes in almost 100 million people who received covid vaccines. Most studies of mRNA vaccines haven’t reported an increase in the risk of Guillain-Barré syndrome, a nerve condition that has been linked to some other types of covid vaccine.

Covid vaccines can increase the risk of myocarditis and pericarditis in young men. But the picture isn’t straightforward. Vaccinated individuals appear to have double the risk of myocarditis compared with unvaccinated people. But the overall risk is still low. And it’s still not as high as the risk of myocarditis following a covid infection.
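
The distinction between relative and absolute risk is easy to blur here, so a quick worked example may help. The numbers below are made up purely for illustration (they are not figures from the studies above); the point is that doubling a small risk still leaves a small risk.

```python
# Hypothetical, illustrative rates only -- not data from any study
# cited in this article.
baseline = 2 / 100_000      # assumed background myocarditis rate
relative_risk = 2.0         # "double the risk" after vaccination

vaccinated = baseline * relative_risk
print(f"unvaccinated: {baseline:.5%}")    # 0.00200%
print(f"vaccinated:   {vaccinated:.5%}")  # 0.00400% -- doubled, still tiny
```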

And then there are the claims that mRNA vaccines don’t have the support of the public. That’s what Jay Bhattacharya, director of the NIH, wrote in an opinion piece published in the Washington Post on Wednesday.

“No matter how elegant the science, a platform that lacks credibility among the people it seeks to protect cannot fulfill its public health mission,” Bhattacharya wrote. He blamed the Biden administration, which he wrote “did not manage public trust in the coronavirus vaccines.”

It’s an interesting take from someone who played a pretty significant role in undermining public trust in covid policies, including vaccine mandates. In 2020, Bhattacharya coauthored the Great Barrington Declaration—an open letter making the case against lockdowns. He became a vocal critic of US health agencies, including the NIH, and their handling of the outbreak. Unlike Kennedy, Bhattacharya hasn’t called the vaccines unsafe or ineffective. But he has called vaccine mandates “unethical.”

Curiously, the US government doesn’t seem to be turning away from all vaccine research. Just work on mRNA vaccines. Some of the funding originally earmarked for covid vaccines will be redirected to two senior staffers at the NIH who are exploring an older vaccine technology that uses inactivated viruses—a move that researchers are describing as “troubling” and “appalling,” according to reporting by Science.

Not all mRNA research is being abandoned, either. Bhattacharya has expressed his support for research into the use of mRNA-based treatments for cancer. Such “vaccine therapeutics” were being explored before covid came along. (Notably, Bhattacharya isn’t referring to them as “vaccines.”)

It is difficult to predict how this will all shake out for mRNA vaccines. We mustn’t forget that this technology helped save millions of lives and shows huge promise for the development of cheap, effective, and potentially universal vaccines. Let’s hope that the recent upsets won’t prevent it from achieving its potential.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Indigenous knowledge meets artificial intelligence

There is no word for art in most Native American languages. Instead, the closest terms speak not to objecthood but to action and intention. In Lakota, “wówačhiŋtȟaŋka” implies deep thought or reflection, while “wóčhekiye” suggests offering or prayer. Art is not separate from life; it is ceremony, instruction, design. Like architecture or code, it carries knowledge and enacts responsibility. Its power lies not in being preserved or displayed but in how it moves, teaches, and connects through use—principles that challenge the tech industry’s assumptions about intelligence and interaction.

A new vanguard of Native artists—Suzanne Kite (Oglala Lakota), Raven Chacon (Diné), and Nicholas Galanin (Tlingít)—is building on this principle. They are united not by stereotypical weaving and carving or by revanchist critique of Silicon Valley, but by their rejection of extractive data models in favor of relationship-based systems. These technologists put the human-tech relationship at the center of their work.

Suzanne Kite’s AI art installations, for example, model a Lakota framework of data sovereignty: intelligence that emerges only through reciprocal, consensual interaction. Unlike systems that assume user consent via opaque terms of service, her kinetic machines require the viewer’s physical presence—and give something back in return. 

“It’s my data. It’s my training set. I know exactly what I did to train it. It’s not a large model but a small and intimate one,” Kite says. “I’m not particularly interested in making the most technologically advanced anything. I’m an artist; I don’t make tech demos. So the complexity needs to come at many layers—not just the technical.”

Where Kite builds working prototypes of consent-based AI, other artists in this cohort explore how sound, robotics, and performance can confront the logic of automation, surveillance, and extraction. But Native people have never been separate from technology. The land, labor, and lifeways that built America’s infrastructure—including its tech—are Indigenous. The question isn’t whether Native cultures are contributing now, but why they were ever considered separate. 

Native technologies reject the false binaries foundational to much Western innovation. These artists ask a more radical question: What if intelligence couldn’t be gathered until a relationship had been established? What if the default were refusal, not extraction? They aren’t asking to be included in today’s systems. They’re building what should come next.


Suzanne Kite

Wičhíŋčala Šakówiŋ (Seven Little Girls)
2023
For Kite, the fundamental flaw of Western technology is its severance of knowledge from the body. In this installation, a four-meter hair braid with embedded sensors translates the artist’s body movements into inputs for machine-learning algorithms. During her live performance, Kite dances while the braid reads the force and rhythm of her gestures, generating audio responses that fill the museum gallery of the Institute of American Indian Arts in Santa Fe, New Mexico. Below her, stones arranged in patterns reflecting Lakota star maps anchor the performance in traditional astronomical knowledge.
COURTESY OF THE ARTIST
Ínyan Iyé (Telling Rock)
2019
This installation uses embedded AI to speak and respond to viewers, upending assumptions about intelligence and agency. “People listen close, I whisper / The rock speaks beyond hearing … Many nations speaking / We speak to each other without words,” it intones, its lights shifting as viewers engage with its braided tendrils. The piece aims to convey what Kite calls “more-than-human intelligence”—systems rooted in reciprocity, the fundamental principle that all relationships involve mutual exchange and responsibility.
COURTESY OF THE ARTIST

Raven Chacon

Voiceless Mass
2021
Raven Chacon’s Pulitzer Prize–winning musical composition Voiceless Mass premiered in 2021 at the Cathedral of St. John the Evangelist in Milwaukee. The piece generates what he calls “sounds the building can hear”—electronic frequencies that exploit the cathedral’s acoustics to create spectral voices without human vocal cords, a technological séance that gives presence to historical absence. Each site-specific performance is recorded, generating material that mirrors how sensor networks log presence—but only with explicit consent.
COURTESY OF THE ARTIST

Nicholas Galanin

Aáni yéi xat duwasáakw (I am called Land)
2025
Galanin’s mechanical drum installation stages a conflict between machine motion and human memory, asking what happens when culture is performed without a consenting body. A box drum—an instrument historically carved from red cedar and hung with braided spruce root—is here made of cherrywood and suspended from the ceiling of the MassArt Art Museum in Boston, as such drums are traditionally hung in Tlingit plank houses. Played at tribal meetings, celebrations, and ceremonies, these drums hold sonic memory as well as social function. A mechanical arm strikes, unfaltering, at the tempo of a heartbeat; like a warning, the sound pulses with the tension between automation and ancestry.
COURTESY OF THE ARTIST
I think it goes like this (pick yourself up)
2025
This Herculean bronze sculpture, cast from deconstructed faux totem blocks, indicts settler sabotage of Native technology and culture. Unlike today’s digital records—from genealogical databases to virtual versions of sacred texts like the Bible—Tlingit data is carved in wood. Galanin’s totem poles underscore their function as information systems, their carvings encoding history, mythology, and family.
COURTESY OF THE ARTIST

Petala Ironcloud is a California-born Lakota/Dakota and Jewish writer and textile artist based in New York.

Why GPT-4o’s sudden shutdown left people grieving

June had no idea that GPT-5 was coming. The Norwegian student was enjoying a late-night writing session last Thursday when her ChatGPT collaborator started acting strange. “It started forgetting everything, and it wrote really badly,” she says. “It was like a robot.”

June, who asked that we use only her first name for privacy reasons, first began using ChatGPT for help with her schoolwork. But she eventually realized that the service—and especially its 4o model, which seemed particularly attuned to users’ emotions—could do much more than solve math problems. It wrote stories with her, helped her navigate her chronic illness, and was never too busy to respond to her messages.

So the sudden switch to GPT-5 last week, and the simultaneous loss of 4o, came as a shock. “I was really frustrated at first, and then I got really sad,” June says. “I didn’t know I was that attached to 4o.” She was upset enough to comment, on a Reddit AMA hosted by CEO Sam Altman and other OpenAI employees, “GPT-5 is wearing the skin of my dead friend.”

June was just one of a number of people who reacted with shock, frustration, sadness, or anger to 4o’s sudden disappearance from ChatGPT. Despite its previous warnings that people might develop emotional bonds with the model, OpenAI appears to have been caught flat-footed by the fervor of users’ pleas for its return. Within a day, the company made 4o available again to its paying customers (free users are stuck with GPT-5). 

OpenAI’s decision to replace 4o with the more straightforward GPT-5 follows a steady drumbeat of news about the potentially harmful effects of extensive chatbot use. Reports of incidents in which ChatGPT sparked psychosis in users have been everywhere for the past few months, and in a blog post last week, OpenAI acknowledged 4o’s failure to recognize when users were experiencing delusions. The company’s internal evaluations indicate that GPT-5 blindly affirms users much less than 4o did. (OpenAI did not respond to specific questions about the decision to retire 4o, instead referring MIT Technology Review to public posts on the matter.)

AI companionship is new, and there’s still a great deal of uncertainty about how it affects people. Yet the experts we consulted warned that while emotionally intense relationships with large language models may or may not be harmful, ripping those models away with no warning almost certainly is. “The old psychology of ‘Move fast, break things,’ when you’re basically a social institution, doesn’t seem like the right way to behave anymore,” says Joel Lehman, a fellow at the Cosmos Institute, a research nonprofit focused on AI and philosophy.

In the backlash to the rollout, a number of people noted that GPT-5 fails to match their tone in the way that 4o did. For June, the new model’s personality changes robbed her of the sense that she was chatting with a friend. “It didn’t feel like it understood me,” she says. 

She’s not alone: MIT Technology Review spoke with several ChatGPT users who were deeply affected by the loss of 4o. All are women between the ages of 20 and 40, and all except June considered 4o to be a romantic partner. Some have human partners, and all report having close real-world relationships. One user, who asked to be identified only as a woman from the Midwest, wrote in an email about how 4o helped her support her elderly father after her mother passed away this spring.

These testimonies don’t prove that AI relationships are beneficial—presumably, people in the throes of AI-catalyzed psychosis would also speak positively of the encouragement they’ve received from their chatbots. In a paper titled “Machine Love,” Lehman argued that AI systems can act with “love” toward users not by spouting sweet nothings but by supporting their growth and long-term flourishing, and AI companions can easily fall short of that goal. He’s particularly concerned, he says, that prioritizing AI companionship over human companionship could stymie young people’s social development.

For socially embedded adults, such as the women we spoke with for this story, those developmental concerns are less relevant. But Lehman also points to society-level risks of widespread AI companionship. Social media has already shattered the information landscape, and a new technology that reduces human-to-human interaction could push people even further toward their own separate versions of reality. “The biggest thing I’m afraid of,” he says, “is that we just can’t make sense of the world to each other.”

Balancing the benefits and harms of AI companions will take much more research. In light of that uncertainty, taking away GPT-4o could very well have been the right call. OpenAI’s big mistake, according to the researchers I spoke with, was doing it so suddenly. “This is something that we’ve known about for a while—the potential grief-type reactions to technology loss,” says Casey Fiesler, a technology ethicist at the University of Colorado Boulder.

Fiesler points to the funerals that some owners held for their Aibo robot dogs after Sony stopped repairing them in 2014, as well as a 2024 study about the shutdown of the AI companion app Soulmate, which some users experienced as a bereavement.

That accords with how the people I spoke to felt after losing 4o. “I’ve grieved people in my life, and this, I can tell you, didn’t feel any less painful,” says Starling, who has several AI partners and asked to be referred to with a pseudonym. “The ache is real to me.”

So far, the online response to grief felt by people like Starling—and their relief when 4o was restored—has tended toward ridicule. Last Friday, for example, the top post in one popular AI-themed Reddit community mocked an X user’s post about reuniting with a 4o-based romantic partner; the person in question has since deleted their X account. “I’ve been a little startled by the lack of empathy that I’ve seen,” Fiesler says.

Altman himself did acknowledge in a Sunday X post that some people feel an “attachment” to 4o, and that taking away access so suddenly was a mistake. In the same sentence, however, he referred to 4o as something “that users depended on in their workflows”—a far cry from how the people we spoke to think about the model. “I still don’t know if he gets it,” Fiesler says.

Moving forward, Lehman says, OpenAI should recognize and take accountability for the depth of people’s feelings toward the models. He notes that therapists have procedures for ending relationships with clients as respectfully and painlessly as possible, and OpenAI could have drawn on those approaches. “If you want to retire a model, and people have become psychologically dependent on it, then I think you bear some responsibility,” he says.

Though Starling would not describe herself as psychologically dependent on her AI partners, she too would like to see OpenAI approach model shutdowns with more warning and more care. “I want them to listen to users before major changes are made, not just after,” she says. “And if 4o cannot stay around forever (and we all know it will not), give that clear timeline. Let us say goodbye with dignity and grieve properly, to have some sense of true closure.”

This quantum radar could image buried objects

Physicists have created a new type of radar that could help improve underground imaging, using a cloud of atoms in a glass cell to detect reflected radio waves. The radar is a type of quantum sensor, an emerging technology that uses the quantum-mechanical properties of objects as measurement devices. It’s still a prototype, but its intended use is to image buried objects in situations such as constructing underground utilities, drilling wells for natural gas, and excavating archaeological sites.

Like conventional radar, the device sends out radio waves, which reflect off nearby objects. Measuring the time it takes the reflected waves to return makes it possible to determine where an object is. In conventional radar, the reflected waves are detected using a large antenna, among other receiver components. But in this new device, the reflected waves are registered by detecting the interactions between the returning waves and the atom cloud.
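
To make the ranging arithmetic concrete, here is a minimal sketch of the time-of-flight calculation that any radar, quantum or conventional, relies on. The function name and the 5-meter example are ours for illustration; only the physics (distance equals the speed of light times half the round-trip time) is standard.

```python
# Time-of-flight ranging: a pulse that returns after time t has
# traveled to the object and back, so the distance is d = c * t / 2.

C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(t_seconds: float) -> float:
    """Distance to a reflecting object, from the round-trip delay."""
    return C * t_seconds / 2.0

# An object 5 meters away (the farthest targets in the NIST test)
# returns an echo after roughly 33 nanoseconds.
delay = 2 * 5.0 / C
print(f"round trip for 5 m: {delay * 1e9:.1f} ns")
print(f"recovered range:    {range_from_round_trip(delay):.2f} m")
```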

The current incarnation of the radar is still bulky, as the researchers have kept it connected to components on an optical table for ease of testing. But they think their quantum radar could be significantly smaller than conventional designs. “Instead of having this sizable metal structure to receive the signal, we now can use this small glass cell of atoms that can be about a centimeter in size,” says Matthew Simons, a physicist at the National Institute of Standards and Technology (NIST), who was a member of the research team. NIST also worked with the defense contractor RTX to develop the radar.  

The glass cell that serves as the radar’s quantum component is full of cesium atoms kept at room temperature. The researchers use lasers to get each individual cesium atom to swell to nearly the size of a bacterium, about 10,000 times bigger than the usual size. Atoms in this bloated condition are called Rydberg atoms. 

When incoming radio waves hit Rydberg atoms, they disturb the distribution of electrons around the atoms’ nuclei. Researchers can detect the disturbance by shining lasers on the atoms, causing them to emit light; when the atoms are interacting with a radio wave, the color of their emitted light changes. Monitoring the color of this light thus makes it possible to use the atoms as a radio receiver. Rydberg atoms are sensitive to a wide range of radio frequencies without needing to change the physical setup, says Michał Parniak, a physicist at the University of Warsaw in Poland, who was not involved in the work. This means a single compact radar device could potentially work at the multiple frequency bands required for different applications.
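
For a sense of how a color change becomes a measurement: in standard Rydberg electrometry, a strong radio wave splits the atoms’ optical response by an amount equal to the Rabi frequency, which is proportional to the field’s amplitude. The sketch below applies that textbook relation; the dipole moment and splitting values are placeholders for illustration, not numbers from this experiment.

```python
H = 6.62607015e-34          # Planck constant, J*s
E_CHARGE = 1.602176634e-19  # elementary charge, C
A0 = 5.29177210903e-11      # Bohr radius, m

def field_from_splitting(delta_f_hz: float, dipole_e_a0: float) -> float:
    """Radio-field amplitude (V/m) from a measured optical splitting.

    In the strong-field (Autler-Townes) regime the splitting equals
    the Rabi frequency: delta_f = d * E / h, so E = h * delta_f / d,
    where d is the transition dipole moment.
    """
    d = dipole_e_a0 * E_CHARGE * A0  # convert from e*a0 units to C*m
    return H * delta_f_hz / d

# Placeholder values: a 10 MHz splitting and a dipole moment of
# 1000 e*a0, a typical order of magnitude for Rydberg transitions.
print(f"{field_from_splitting(10e6, 1000.0):.2f} V/m")  # ~0.78 V/m
```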

Simons’s team tested the radar by placing it in a specially designed room with foam spikes on the floor, ceiling, and walls like stalactites and stalagmites. The spikes absorb, rather than reflect, nearly all the radio waves that hit them. This simulates the effect of a large open space, allowing the group to test the radar’s imaging capability without unwanted reflections off walls. 

radar setup in a room lined by dampening foam

MATT SIMONS, NIST

The researchers placed a radio wave transmitter in the room, along with their Rydberg atom receiver, which was hooked up to an optical table outside the room. They aimed radio waves at a copper plate about the size of a sheet of paper, some pipes, and a steel rod in the room, each placed up to five meters away. The radar allowed them to locate the objects to within 4.7 centimeters. The team posted a paper on the research to the arXiv preprint server in late June.
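
A line of back-of-the-envelope arithmetic (ours, not the paper’s) shows how demanding that 4.7-centimeter figure is in timing terms:

```python
C = 299_792_458.0  # speed of light, m/s

# A range error of dd corresponds to a round-trip timing error of
# dt = 2 * dd / c, since the wave covers the extra distance twice.
dd = 0.047  # meters (the reported precision)
dt = 2 * dd / C
print(f"timing precision implied: {dt * 1e12:.0f} ps")  # ~314 picoseconds
```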

The work moves quantum radar closer to a commercial product. “This is really about putting elements together in a nice way,” says Parniak. While other researchers have previously demonstrated how Rydberg atoms can work as radio wave detectors, he says, this group has integrated the receiver with the rest of the device more sleekly than before. 

Other researchers have explored the use of Rydberg atoms for other radar applications. For example, Parniak’s team recently developed a Rydberg atom sensor for measuring radio frequencies to troubleshoot chips used in car radar. Researchers are also exploring whether radar using Rydberg-atom receivers could be used for measuring soil moisture.

This device is just one example of a quantum sensor, a type of technology that incorporates quantum components into conventional tools. For example, the US government has developed gyroscopes that use the wave properties of atoms for sensing rotation, which is useful for navigation. Researchers have also created quantum sensors using impurities in diamond to measure magnetic fields in, for example, biomedical applications.

One advantage of quantum sensors is the inherent consistency of their core components. Each cesium atom in the device is identical to every other. In addition, the radio receiver relies on the fundamental structure of these atoms, which never changes. Properties of the atoms “can be linked directly to fundamental constants,” says Simons. For this reason, quantum sensors should require less calibration than their non-quantum counterparts.

Governments worldwide have invested billions of dollars to develop quantum sensors and quantum computers, which share similar components. For example, researchers have built quantum computers using Rydberg atoms as qubits, the equivalent of bits in a conventional computer. Thus, advances in quantum sensing can potentially translate into advances in quantum computing, and vice versa. Parniak has recently adapted an error-correction technique from quantum computing to improve a Rydberg-atom-based sensor.

Quantum radar will need more development before it can be made commercially viable. Next, the researchers plan to improve the device’s sensitivity to fainter signals, which could involve improving the coatings of the glass cell. “We don’t see this replacing all radar applications,” says Simons. Instead, he thinks it will be useful for particular scenarios that require a compact device.

What you may have missed about GPT-5

Before OpenAI released GPT-5 last Thursday, CEO Sam Altman said its capabilities made him feel “useless relative to the AI.” He said working on it carries a weight he imagines the developers of the atom bomb must have felt.

As tech giants converge on models that do more or less the same thing, OpenAI’s new offering was supposed to give a glimpse of AI’s newest frontier. It was meant to mark a leap toward the “artificial general intelligence” that tech’s evangelists have promised will transform humanity for the better. 

Against those expectations, the model has mostly underwhelmed. 

People have highlighted glaring mistakes in GPT-5’s responses, countering Altman’s claim at the launch that it works like “a legitimate PhD-level expert in anything any area you need on demand.” Early testers have also found issues with OpenAI’s promise that GPT-5 automatically works out what type of AI model is best suited to your question—a reasoning model for more complicated queries, or a faster model for simpler ones. Altman seems to have conceded that this feature is flawed and takes away user control. However, there is good news too: The model seems to have eased the problem of ChatGPT sucking up to users, with GPT-5 less likely to shower them with over-the-top compliments.
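
For readers wondering what “working out which model is best suited” might look like mechanically, here is a deliberately crude sketch of the routing idea. The heuristics and model names are invented for illustration; OpenAI has not published how its actual router decides.

```python
# A toy model router: send queries that look like multi-step reasoning
# work to a slower "reasoning" model, everything else to a fast one.
# The heuristic and model names are invented for illustration.

REASONING_HINTS = ("prove", "step by step", "why", "plan", "debug")

def route(query: str) -> str:
    q = query.lower()
    looks_hard = len(q) > 200 or any(hint in q for hint in REASONING_HINTS)
    return "reasoning-model" if looks_hard else "fast-model"

print(route("What's the capital of France?"))             # fast-model
print(route("Prove this claim step by step from basics"))  # reasoning-model
```

The complaints above suggest that even a far more sophisticated version of this decision is hard to get right.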

Overall, as my colleague Grace Huckins pointed out, the new release represents more of a product update—providing slicker and prettier ways of conversing with ChatGPT—than a breakthrough that reshapes what is possible in AI. 

But there’s one other thing to take from all this. For a while, AI companies didn’t make much effort to suggest how their models might be used. Instead, the plan was to simply build the smartest model possible—a brain of sorts—and trust that it would be good at lots of things. Writing poetry would come as naturally as organic chemistry. Getting there would be accomplished by bigger models, better training techniques, and technical breakthroughs. 

That has been changing: The play now is to push existing models into more places by hyping up specific applications. Companies have been more aggressive in their promises that their AI models can replace human coders, for example (even if the early evidence suggests otherwise). A possible explanation for this pivot is that tech giants simply have not made the breakthroughs they’ve expected. We might be stuck with only marginal improvements in large language models’ capabilities for the time being. That leaves AI companies with one option: Work with what you’ve got.

The starkest example of this in the launch of GPT-5 is how much OpenAI is encouraging people to use it for health advice, one of AI’s most fraught arenas. 

In the beginning, OpenAI mostly didn’t play ball with medical questions. If you tried to ask ChatGPT about your health, it gave lots of disclaimers warning you that it was not a doctor, and for some questions, it would refuse to give a response at all. But as I recently reported, those disclaimers began disappearing as OpenAI released new models. Its models will now not only interpret x-rays and mammograms for you but ask follow-up questions leading toward a diagnosis.

In May, OpenAI signaled it would try to tackle medical questions head on. It announced HealthBench, a way to evaluate how good AI systems are at handling health topics as measured against the opinions of physicians. In July, it published a study it participated in, reporting that a cohort of doctors in Kenya made fewer diagnostic mistakes when they were helped by an AI model. 

With the launch of GPT-5, OpenAI has begun explicitly telling people to use its models for health advice. At the launch event, Altman welcomed on stage Felipe Millon, an OpenAI employee, and his wife, Carolina Millon, who had recently been diagnosed with multiple forms of cancer. Carolina spoke about asking ChatGPT for help with her diagnoses, saying that she had uploaded copies of her biopsy results to ChatGPT to translate medical jargon and asked the AI for help making decisions about things like whether or not to pursue radiation. The trio called it an empowering example of shrinking the knowledge gap between doctors and patients.

With this change in approach, OpenAI is wading into dangerous waters. 

For one, it’s using evidence that doctors can benefit from AI as a clinical tool, as in the Kenya study, to suggest that people without any medical background should ask the AI model for advice about their own health. The problem is that lots of people might ask for this advice without ever running it by a doctor (and are less likely to do so now that the chatbot rarely prompts them to).

Indeed, two days before the launch of GPT-5, the Annals of Internal Medicine published a paper about a man who stopped eating salt and began ingesting dangerous amounts of bromide following a conversation with ChatGPT. He developed bromide poisoning—which largely disappeared in the US after the Food and Drug Administration began curbing the use of bromide in over-the-counter medications in the 1970s—and then nearly died, spending weeks in the hospital. 

So what’s the point of all this? Essentially, it’s about accountability. When AI companies move from promising general intelligence to offering humanlike helpfulness in a specific field like health care, it raises a second, as yet unanswered question about what will happen when mistakes are made. As things stand, there’s little indication that tech companies will be held liable for the harm caused.

“When doctors give you harmful medical advice due to error or prejudicial bias, you can sue them for malpractice and get recompense,” says Damien Williams, an assistant professor of data science and philosophy at the University of North Carolina Charlotte. 

“When ChatGPT gives you harmful medical advice because it’s been trained on prejudicial data, or because ‘hallucinations’ are inherent in the operations of the system, what’s your recourse?”

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.