The US could really use an affordable electric truck

On Monday, Ford announced plans for an affordable electric truck with a 2027 delivery date and an expected price tag of about $30,000, thanks in part to a new manufacturing process that it says will help cut costs.

This could be the shot in the arm that the US EV market needs. Sales are slowing, and Ford in particular has struggled recently—the automaker has lost $12 billion on its EV division over the last two and a half years. Adoption barriers continue to mount, too, with the Trump administration cutting tax credits as well as rules designed to push automakers toward zero-emissions vehicles. And that’s not to mention tariffs.

But if anything can get Americans excited, it’s a truck, especially an affordable one. (There was a ton of buzz over the announcement of a bare-bones truck from Bezos-backed Slate Auto earlier this year, for example.) The big question is whether the company can deliver in this environment.

One key thing to note here: This is not the first time that there’s been a big splashy truck announcement from Ford that was supposed to change everything. The F-150 Lightning was hailed as a turning point for vehicle electrification, a signal that decarbonization had entered a new era. We cited the truck when we put “The Inevitable EV” on our 10 Breakthrough Technologies list in 2023. 

Things haven’t quite turned out that way. One problem is that the Lightning was supposed to be relatively affordable, with a price tag of about $40,000 when it was first announced in 2021. By the time it actually went on sale in 2022, the starting price had climbed to $52,000.

The truck was initially popular and became quite hard to find at dealerships. But prices climbed and interest leveled off. The base model hit nearly $60,000 by 2023. For the past few years, Ford has cut Lightning production several times and laid off employees who assembled the trucks.

Now, though, Ford is once again promising an affordable truck, and it’s supposed to be even cheaper this time. To help cut costs, the company says it’s simplifying, creating one universal platform for a new set of EVs. Using a common structure and set of components will help produce not only a midsize truck but also other trucks, vans, and SUVs. There are also planned changes to the manufacturing process (rather than one assembly line, multiple lines will join together to form what the company calls an assembly tree).

Another supporting factor for cost savings is the battery. The company plans to use lithium-iron phosphate (or LFP) cells—a type of lithium-ion battery that doesn’t contain nickel or cobalt. Leaving out those relatively pricey metals means lower costs.

Side note here: That battery could be surprisingly small. In a media briefing, a Ford official reportedly said that the truck’s battery would be 15% smaller than the one in the Atto 3 crossover from the Chinese automaker BYD. Since that model has a roughly 60-kilowatt-hour pack, that could put this new battery at about 51 kilowatt-hours. That’s only about half the capacity of the Ford Lightning’s battery and similar to the smallest pack offered in a Tesla Model 3 today. (This could mean the truck has a relatively limited range, though the company hasn’t shared any details on that front yet.)
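
The arithmetic behind that estimate is simple. Here’s a minimal sketch, using only the approximate figures cited above (the 60 kWh Atto 3 pack size is a rough published spec, not a Ford number):

```python
# Back-of-the-envelope estimate of the truck's pack size,
# using the approximate figures cited above.
atto3_pack_kwh = 60.0   # rough BYD Atto 3 pack capacity
reduction = 0.15        # "15% smaller," per the reported briefing
truck_pack_kwh = atto3_pack_kwh * (1 - reduction)
print(f"Estimated pack: {truck_pack_kwh:.0f} kWh")  # -> 51 kWh
```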

A string of big promises isn’t unusual for a major company announcement. What was unusual was the tone from officials during the event on Monday.

As Andrew Hawkins pointed out in The Verge this week, “Ford seems to realize its timing is unfortunate.” During the announcement, executives emphasized that this was a bet, one that might not work out.

CEO Jim Farley put it bluntly: “The automotive industry has a graveyard littered with affordable vehicles that were launched in our country with all good intentions, and they fizzled out with idle plants, laid-off workers, and red ink.” Woof.

From where I’m standing, it’s hard to be optimistic that this announcement will turn out differently from all those failed ones, given where the US EV market is right now.   

In a new report published in June, the energy consultancy BNEF slashed its predictions for future EV uptake. Last year, the organization predicted that 48% of new vehicles sold in the US in 2030 would be electric. In this year’s edition, that figure was cut to just 27%.

To be clear: BNEF and other organizations are still expecting more EVs on the roads in the future than today, since the vehicles make up less than 10% of new sales in the US. But expectations are way down, in part because of a broad cut in public support for EVs. 

The tax credits that gave drivers up to $7,500 off the purchase of a new EV end in just over a month. Tariffs are going to push costs up even for domestic automakers like Ford, which still rely on imported steel and aluminum.

A revamped manufacturing process and a cheaper, desirable vehicle could be exactly the sort of move that automakers need to make for the US EV market. But I’m skeptical that this truck will be able to turn it all around. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Taiwan’s “silicon shield” could be weakening

One winter afternoon in a conference room in Taipei, a pair of twentysomething women dragged their friend across the floor. Lying on the ground in checkered pants and a brown sweatshirt, she was pretending to be either injured or dead. One friend picked her up by her arms, the other grabbed hold of her legs, and they managed to move her, despite momentarily breaking character to laugh at the awkwardness of the exercise. The three women had paid approximately $40 to spend their Sunday here, undergoing basic training to prepare for a possibility every Taiwanese citizen has an opinion about: Will China invade? 

Taiwanese politics increasingly revolves around that question. China’s ruling party has wanted to seize Taiwan for more than half a century. But in recent years, China’s leader, Xi Jinping, has placed greater emphasis on the idea of “taking back” the island (which the Chinese Communist Party, or CCP, has never controlled). As China’s economic and military might has grown, some analysts believe the country now has the capacity to quarantine Taiwan whenever it wants, making the decision a calculation of costs and benefits.

Many in Taiwan and elsewhere think one major deterrent has to do with the island’s critical role in semiconductor manufacturing. Taiwan produces the majority of the world’s semiconductors and more than 90% of the most advanced chips needed for AI applications. Bloomberg Economics estimates that a blockade would cost the global economy, including China, $5 trillion in the first year alone.

“The international community must certainly do everything in its power to avoid a conflict in the Taiwan Strait; there is too great a cost.”

Lai Ching-te, Taiwanese president

The island, which is approximately the size of Maryland, owes its remarkably disproportionate chip dominance to the inventiveness and prowess of one company: Taiwan Semiconductor Manufacturing Company, or TSMC. The chipmaker, which reached a market capitalization of $1 trillion in July, has contributed more than any other to Taiwan’s irreplaceable role in the global semiconductor supply chain. Its clients include Apple and the leading chip designer Nvidia. Its chips are in your iPhone, your laptop, and the data centers that run ChatGPT. 

For a company that makes what amounts to an invisible product, TSMC holds a remarkably prominent role in Taiwanese society. I’ve heard people talk about it over background noise in loud bars in the southern city of Tainan and listened to Taipei cab drivers connect Taiwan’s security situation to the company, unprompted. “Taiwan will be okay,” one driver told me as we sped by the national legislature, “because TSMC.” 

The idea is that world leaders (particularly the United States)—aware of the island’s critical role in the semiconductor supply chain—would retaliate economically, and perhaps militarily, if China were to attack Taiwan. That, in turn, deters Beijing. “Because TSMC is now the most recognizable company of Taiwan, it has embedded itself in a notion of Taiwan’s sovereignty,” says Rupert Hammond-Chambers, president of the US-Taiwan Business Council. 

Now some Taiwan specialists and some of the island’s citizens are worried that this “silicon shield,” if it ever existed, is cracking. Facing pressure from Washington, TSMC is investing heavily in building out manufacturing capacity at its US hub in Arizona. It is also building facilities in Japan and Germany in addition to maintaining a factory in mainland China, where it has been producing less advanced legacy chips since 2016.

In Taiwan, there is a worry that expansion abroad will dilute the company’s power at home, making the US and other countries less inclined to feel Taiwan is worthy of defense. TSMC’s investments in the US have come with no guarantees for Taiwan in return, and high-ranking members of Taiwan’s opposition party have accused the ruling Democratic Progressive Party (DPP) of gambling with the future of the island. It doesn’t help that TSMC’s expansion abroad coincides with what many see as a worrying attitude in the White House. On top of his overarching “America First” philosophy, Donald Trump has declined to comment on the specific question of whether the US would intervene if China attempted to take Taiwan by force. “I don’t want to ever put myself in that position,” he said in February. 

At the same time, Beijing’s interest in Taiwan has continued unabated. While China is making progress toward semiconductor self-sufficiency, it’s currently in a transition period, with companies relying on foreign-made chips manufactured in Taiwan—some in compliance with export controls and some smuggled in. Meanwhile, the CCP persistently suggests that seizing the island would bring about a kind of family reunion. “It is the common aspiration and sacred responsibility of all Chinese sons and daughters to realize the complete reunification of the motherland,” reads a statement released by the foreign ministry after Nancy Pelosi’s controversial 2022 visit to Taiwan. Though it’s impossible to know the full scope of Beijing’s motivations, there is also obvious strategic appeal: Controlling the island would give China deep-water access, which is critical for naval routes and submarines. Plus, it could significantly disrupt American AI firms’ access to advanced chips.

While China ramps up militarily, Taiwan is trying to make itself hard to ignore. The government is increasingly portraying the island as strategically essential to the global community, with semiconductors as its primary offering. “The international community must certainly do everything in its power to avoid a conflict in the Taiwan Strait; there is too great a cost,” Taiwanese president Lai Ching-te said in an interview earlier this year with Japan’s Nippon Television. Parts of the international community are hearing that message—and seizing the opportunity it presents: earlier this month, defense tech company Anduril Industries announced it is opening a new office in Taiwan, where it will be expanding partnerships and selling autonomous munitions. 

For its part, the chip industry is actively showing its commitment to Taiwan. While other tech CEOs attended Trump’s second inauguration, for instance, Nvidia chief executive Jensen Huang met instead with TSMC’s chairman, and the company announced in May that its overseas headquarters would be in Taipei. In recent years, US government officials have also started paying more attention to Taiwan’s security situation and its interconnectedness with the chip industry. “There was a moment when everybody started waking up to the dependence on TSMC,” says Bonnie Glaser, managing director of the German Marshall Fund’s Indo-Pacific Program. The realization emerged, she says, over the last decade but was underscored in March of 2021, when Phil Davidson, then commander of the United States Indo-Pacific Command, testified to the Senate Armed Services Committee that there could be an invasion by 2027. Parallel to the security threat is the potential issue of overdependence, since so much chipmaking capability is concentrated in Taiwan.

For now, Taiwan is facing a tangle of interests and time frames. China presents its claim to Taiwan as a historical inevitability, albeit one with an uncertain timeline, while the United States’ relationship with the island is focused on an AI-driven future. But from Taiwan’s perspective, the fight for its fate is playing out right now, amid unprecedented geopolitical instability. The next few years will likely determine whether TSMC’s chipmaking dominance is enough to convince the world Taiwan is worth protecting.

Innovation built on interconnectivity 

TSMC is an uncontested success story. Its founder, Morris Chang, studied and worked in the United States before he was lured to Taiwan to start a new business on the promise of state support and inexpensive yet qualified labor. Chang founded TSMC in 1987 on the basis of his innovative business model. Rather than design and produce chips in-house, as was the norm, TSMC would act as a foundry: Clients would design the chips, and TSMC would make them. 

This focus on manufacturing allowed TSMC to optimize its operations, building up process knowledge and, eventually, outperforming competitors like Intel. It also freed up other businesses to go “fabless,” meaning they could stop maintaining their own semiconductor factories, or fabs, and throw their resources behind other parts of the chipmaking enterprise. Tapping into Taiwan’s domestic electronics supply chain proved effective and efficient for TSMC. Throughout the 1990s and early 2000s, global demand for semiconductors powering personal computers and other devices continued to grow. TSMC thrived.

Then, in 2022, the US imposed export controls on China that restricted its access to advanced chips. Taiwan was forced to either comply, by cutting off Chinese clients, or risk losing the support of the country that was home to 70% of its client base—and, possibly, 100% of its hopes for external military support in the event of an attack. 

Soon after, Chang announced that he believed globalization and free markets were “almost dead.” The nearly three years since have shown he was onto something. For one thing, in contrast to President Biden’s pursuit of supply chain integration with democratic allies, President Trump’s foreign policy is characterized by respect for big, undemocratic powers and punitive tariffs against both America’s rivals and its friends. Trump has largely abandoned Biden’s economic diplomacy with European and Asian allies but kept his China-targeted protectionism—and added his trademark transactionalism. In an unprecedented move earlier this month, the administration allowed Nvidia and AMD to sell previously banned chips to China on the condition that the companies pay the government 15% of revenues made from China sales. 

Protectionism, it turns out, spurs self-reliance. China’s government has been making a massive effort to build up its domestic chip production capabilities—a goal that was identified at the beginning of Xi’s rise but has been turbocharged in the wake of Washington’s export controls. 

Any hope the US has for significantly expanding domestic chip production comes from its friends—TSMC first among them. The semiconductor industry developed as a global endeavor out of practicality, playing to the strengths of each region: design in the US and manufacturing in Asia, with key inputs from Europe central to the process. Yet the US government, entrenched in its “tech war” with China, is now dead set on deglobalizing the chip supply chain, or at least onshoring as much of it as possible. There’s just one hiccup: The best chip manufacturer isn’t American. It’s TSMC. Even if some manufacturing happens in Arizona, the US still relies on Taiwan’s chipmaking ecosystem. And copying that supply chain outside Taiwan could be harder than the current administration imagines.

Squarely in the middle

Taiwan’s modern security uncertainties stem from the long-contested issue of the island’s sovereignty. After losing the first Sino-Japanese War in the late 1800s, the Qing dynasty forfeited Taiwan to Japanese imperial control. It was Japan’s “model colony” until 1945, when postwar negotiations resulted in its transfer to the Republic of China under Chiang Kai-shek of the Nationalist Party, known as the KMT. The insurgent CCP under Mao Zedong ultimately defeated the Nationalists in a civil war fought on the mainland until 1949. Chiang and many of his party’s defeated generals decamped to Taiwan, controlling it under martial law for nearly 40 years.

Taiwan held its first direct presidential election in 1996, kicking off a two-party rivalry between the KMT, which favors closer relations with Beijing, and the DPP, which opposes integration with China. Kitchen-table issues like economic growth are central to Taiwanese elections, but so is the overarching question of how best to handle the threat of invasion, which has persisted for nearly 80 years. The DPP is increasingly calling for raising defense spending and civilian preparedness to make sure Taiwan is ready for the worst, while the KMT supports direct talks with Beijing.

In March 2025, President Trump and TSMC CEO C.C. Wei jointly announced that the firm will make an additional $100 billion investment (on top of a previously announced $65 billion) in TSMC’s US hub in Arizona.
REBECCA NOBLE/BLOOMBERG VIA GETTY IMAGES

Meanwhile, Chinese military incursions around Taiwan—known as “gray zone” tactics because they fall short of acts of war—are increasingly frequent. In May, Taiwan’s defense ministry reportedly estimated that Chinese warplanes were entering Taiwan’s air defense zone more than 200 times a month, up from fewer than 10 times per month five years ago. China has conducted drills mirroring the actions needed for a full-scale invasion or a blockade, which would cut Taiwan off from the outside world. Chinese military officials are now publicly talking about achieving a blockade, says Lyle Morris, an expert on foreign policy and national security at the Asia Society Policy Institute. “They’re punishing Lai and the DPP,” Morris says. Meanwhile, the CCP has its own people to answer to: When it comes to the Taiwan issue, Morris says, “Beijing is probably quite worried about the people of China being upset if they aren’t hawkish enough or if they come out looking weak.” Indeed, in response to Lai’s recent policy statements, including one declaring that China is a “hostile foreign force,” Gao Zhikai, a prominent scholar in China who opposes Taiwanese independence, recently wrote, “The reunification with the motherland cannot be endlessly delayed. Decisive action must be taken.” 

Intimidation from China has made some ordinary Taiwanese citizens more concerned; according to a recent poll conducted by a defense-focused think tank, 51% think defense spending should be increased (although 65% of respondents said they thought an attack within five years was “unlikely”). No matter how much money Taipei spends, the sheer military imbalance between China and Taiwan means Taiwan would need help. But especially in the wake of Ukraine’s experience, many believe US aid would be contingent on whether Taiwan demonstrates the will to defend itself. “Based on war games, Taiwan would have to hold out for a month before the US could potentially intervene,” says Iris Shaw, director of the DPP mission in the US. And support from Taiwan’s neighbors like Japan might be contingent on US involvement.

But how likely is the US to intervene in such a scenario? The author Craig Addison popularized the argument that Taiwan’s fate is tied to its chip production prowess in his 2001 book Silicon Shield: Taiwan’s Protection Against Chinese Attack. Back then, Addison wrote that although the US had been intentionally vague about whether it would go to war to protect the island, America’s technological reliance on “a safe and productive Taiwan” made it highly probable that Washington would intervene. President Joe Biden deviated from those decades of calculated ambiguity by asserting multiple times that America would defend the island in the event of an attack. Yet now, Trump seems to have taken the opposite position, possibly presenting an opportunity for Beijing. 

TSMC in the Trump era 

In many ways, Taiwan finds itself in a catch-22. It feels the need to cozy up to the US for protection, yet that defensive maneuver is arguably risky in itself. It’s a common belief in Taiwan that forging stronger ties to the US could be dangerous. According to a public opinion poll released in January, 34.7% of Taiwanese believe that a “pro-US” policy provokes China and will cause a war. 

But the Lai administration’s foreign policy is “inexorably intertwined with the notion that a strong relationship with the US is essential,” says Hammond-Chambers.

Bolstering US support may not be the only reason TSMC is building fabs outside Taiwan. As the company readily points out, the majority of its customers are American. TSMC is also responding to its home base’s increasingly apparent land and energy limitations: finding land to build new fabs sometimes causes rifts with Taiwanese people who, for example, don’t want their temples and ancestral burial sites repurposed as science parks. Taiwan also relies on imports to meet more than 95% of its energy needs, and the dominant DPP has pledged to phase out nuclear, Taiwan’s most viable yet most hotly contested low-carbon energy source. Geopolitical tensions compound these physical constraints: Even if TSMC would never say as much, it’s fairly likely that if China did attack Taiwan, the firm would rather remain operational in other countries than be wiped out completely.

However, building out TSMC’s manufacturing capabilities outside Taiwan will not be easy. “The ecosystem they created is truly unique. It’s a function of the talent pipeline, the culture, and laws in Taiwan; you can’t easily replicate it anywhere,” says Glaser. TSMC has 2,500 Taiwan-based suppliers. Plenty are within a couple of hours’ drive or an even shorter trip on high-speed rail. Taiwan has built a fully operational chip cluster, the product of four decades of innovation, industrial policy, and labor.

In many ways, Taiwan finds itself in a catch-22. It feels the need to cozy up to the US for protection, yet that defensive maneuver is arguably risky in itself.

As a result, it’s unclear whether TSMC will be able to copy its model and paste it into the suburbs of Phoenix, where it has 3,000 employees working on chip manufacturing. “Putting aside the geopolitical factor, they wouldn’t have expanded abroad,” says Feifei Hung, a researcher at the Asia Society. Rather than standalone facilities, the Arizona fabs are “appendages of TSMC that happen to be in Arizona,” says Paul Triolo, partner and tech policy lead at the international consulting firm DGA-Albright Stonebridge Group. When the full complex is operational, it will represent only a small percentage of TSMC’s overall capacity, most of which will remain in Taiwan. Triolo doubts the US buildout will yield results similar to what TSMC has built in Taiwan: “Arizona ain’t that yet, and never will be.”

Still, the second Trump administration has placed even more pressure on the company to “friendshore”—without providing any discernible signs of friendship. During this spring’s tariff frenzy, the administration threatened to hit Taiwan with a 32% “reciprocal” tariff, a move that was then paused and revived at 20% in late July (and was still being negotiated as of press time). The administration has also announced a 100% tariff on semiconductor imports, with the caveat that companies with US-based production, like TSMC, are exempt—though it’s unclear whether imports from critical suppliers in Taiwan will be tariffed. And the threat of further chip-specific tariffs remains. “This is in line with [Trump’s] rhetoric of restoring manufacturing in the US and using tariffs as a one-size-fits-all tool to force it,” says Nancy Wei, a trade and supply chain analyst at the Eurasia Group. The US is also apparently considering levying a $1 billion fine against TSMC after TSMC-made chips were reportedly found in some Huawei devices.

Despite these kinds of maneuvers, TSMC has been steadfast in its attempts to get on Washington’s good side. In March, Trump and TSMC’s CEO, C.C. Wei, jointly announced that the firm will make an additional $100 billion investment (on top of a previously announced $65 billion) in TSMC’s US hub in Arizona. The pledge represents the largest single source of foreign direct investment into the US, ever. While the deal was negotiated during Biden’s term, Trump was happy to take credit for ensuring that “the most powerful AI chips will be made right here in America.” 

The Arizona buildout will also include an R&D facility—a critical element for tech transfer and intellectual-property development. Then there’s the very juicy cherry on top: TSMC announced in April that once all six new fabs are operational, 30% of its most advanced chips will be produced in Arizona. Up until then, the thinking was that US-based production would remain a generation or two behind. It looks as if the administration’s public and, presumably, private arm-twisting has paid off. 

Meanwhile, as Trump cuts government programs and subsidies while demanding the “return” of manufacturing to the US, it’s TSMC that is running a technician apprenticeship program in Arizona to create good American jobs. TSMC’s leaders, Triolo says, must question how serious the Trump administration is about long-term industrial policy. They’re probably asking themselves, he says, “Do they understand what it takes to support the semiconductor industry, like our government does?” 

Dealing with an administration that is so explicitly “America first” represents “one of the biggest challenges in history for Taiwanese companies,” says Thung-Hong Lin, a sociology researcher at the Taipei-based Academia Sinica. Semiconductor manufacturing depends on predictability. Trump has so far offered TSMC no additional incentives supporting its US expansion—and started a trade war that has directly affected the semiconductor industry, partly by introducing lasting uncertainty. “Trump’s tariffs have set off a new, more intensified bifurcation of semiconductor supply chains,” says Chris Miller, author of Chip War. For now, Miller says, TSMC must navigate a world in which the US and China are both intense competitors and, despite trade restrictions, important clients.

Warring narratives

China has been taking advantage of these changes to wage a war of disinformation. In response to Nancy Pelosi’s visit to Taiwan in 2022, when she was US Speaker of the House, Beijing sent warships, aircraft, and propaganda across the Taiwan Strait. Hackers using Chinese software hijacked the screens in Taiwan’s 7-Eleven stores to display messages telling “warmonger Pelosi” to “get out of Taiwan.” That might not be an act of war, but it’s close; “7” is an institution of daily life on the island. It is not difficult to imagine how a similar tactic might be used to spread more devastating disinformation, falsely alleging, for example, that Taiwan’s military has surrendered to China during a future crisis.

Taiwan is “perpetually on the front lines” of cyberattacks from China, says Francesca Chen, a cybersecurity systems analyst at Taiwan’s Ministry of Digital Affairs. According to Taiwan’s National Security Bureau, instances of propaganda traceable to China grew by 60% in 2024 over the previous year, reaching 2.16 million. 

Visitors take selfies outside the TSMC Museum of Innovation in Hsinchu, Taiwan.
ANNABELLE CHIH/GETTY IMAGES

Over the last few years, online discussion of TSMC’s investments in the US “has become a focal point” of China’s state-sponsored disinformation campaigns aimed at Taiwan, Chen says. They claim TSMC is transferring its most advanced technology, talent, and resources to the US, “weakening Taiwan’s economic lifeline and critical position in global supply chains.” Key terms include “hollowing out Taiwan” and “de-Taiwanization.” This framing depicts TSMC’s diversification as a symbol of Taiwan’s vulnerability, Chen says. The idea is to exploit real domestic debates in Taiwan to generate heightened levels of internal division, weakening social cohesion and undermining trust in the government.

Chinese officials haven’t been shy about echoing these messages out in the open: After the most recent US investment announcement in March, a spokesperson for China’s Taiwan Affairs Office accused Taiwan’s DPP of handing over TSMC as a “gift” to the US. (“TSMC turning into USMC?” asked a state media headline.) Former Taiwanese president Ma Ying-jeou posted an eerily similar criticism, alleging that TSMC’s US expansion amounted to “selling” the chipmaker in exchange for protection.

TSMC’s expansion abroad could become a major issue in Taiwan’s 2028 presidential election. It plays directly into party politics: The KMT can accuse the DPP of sacrificing Taiwan’s technology assets to placate the US, and the DPP can accuse the KMT of cozying up with China, even as Beijing’s military incursions become a more evident part of daily life. It remains to be seen whether TSMC’s shift to the US will ultimately protect or weaken Taiwan—or have no effect on the island’s security and sovereignty. For now at least, China’s aspirations loom large. 

To Beijing, unequivocally, Taiwan does not equal TSMC. Instead, it represents the final, unfulfilled stage of the Communist Party’s revolutionary struggle. Framed that way, China’s resolve to take the island could very well be nonnegotiable. That would mean if Taiwan is going to maintain a shield that protects it from the full weight of China’s political orthodoxy, it may need to be made of something much stronger than silicon. 

Johanna M. Costigan is a writer and editor focused on technology and geopolitics in the US, China, and Taiwan. She writes the newsletter The Long Game.

Why US federal health agencies are abandoning mRNA vaccines

This time five years ago, we were in the throes of the covid-19 pandemic. By August 2020, we’d seen school closures, national lockdowns, and widespread panic. That year, the coronavirus was responsible for around 3 million deaths, according to the World Health Organization.

Then came the vaccines. The first mRNA vaccines for covid were authorized for use in December 2020. By the end of the following month, over 100 million doses had been administered. Billions more have been administered since then. The vaccines worked well and are thought to have saved millions of lives.

The US government played an important role in the introduction of these vaccines, providing $18 billion to support their development as part of Operation Warp Speed.

But now, that government is turning its back on the technology. Funding is being withdrawn. Partnerships are being canceled. Leaders of US health agencies are casting doubt on the vaccines’ effectiveness and safety. And this week, the director of the National Institutes of Health implied that the reversal was due to a lack of public trust in the technology.

Plenty of claims are being thrown about. Let’s consider the evidence.

mRNA is a molecule that carries genetic instructions from DNA to a cell’s protein-making machinery. The vaccines make use of this same mechanism, except the mRNA they deliver encodes proteins found on the surface of the coronavirus. This can help train our immune systems to tackle the virus itself.

Research into mRNA vaccines has been underway for decades. But things really kicked into gear when the virus behind covid-19 triggered a pandemic in 2020. A huge international effort—along with plenty of funding—fast-tracked research and development.

The genome of the SARS-CoV-2 virus was sequenced in January 2020. The first vaccines were being administered by the end of that year. That’s wildly fast by pharma standards—drugs can typically spend around a decade in development.

And they seemed to work really well. Early trials in tens of thousands of volunteers suggested that Pfizer and BioNTech’s vaccine conferred “95% protection against covid-19.” No vaccine is perfect, but for a disease that was responsible for millions of deaths, the figures were impressive.
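
For context, “95% protection” is a statement about relative risk: efficacy is one minus the ratio of case rates in the vaccinated and placebo arms of a trial. A quick sketch using the roughly equal-size groups of the Pfizer-BioNTech phase 3 trial (case counts quoted from memory, not from this article):

```python
# Vaccine efficacy from trial case counts, assuming roughly
# equal-size vaccinated and placebo groups (so case-rate ratio
# reduces to case-count ratio).
cases_vaccinated = 8    # covid cases in the vaccinated group
cases_placebo = 162     # covid cases in the placebo group
efficacy = 1 - cases_vaccinated / cases_placebo
print(f"Vaccine efficacy: {efficacy:.1%}")  # -> 95.1%
```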

Still, there were naysayers, including Robert F. Kennedy Jr., the notorious antivaccine activist who currently leads the US’s health agencies. He has called covid vaccines “unsafe and ineffective.” In 2021, he petitioned the US Food and Drug Administration to revoke the authorization for covid vaccines. That same year, Instagram removed his account from the platform after he repeatedly shared “debunked claims about the coronavirus or vaccines.”

So perhaps we shouldn’t have been surprised when the US Department of Health and Human Services, which RFK Jr. now heads, announced “the beginning of a coordinated wind-down” of mRNA vaccine development earlier this month. HHS is canceling almost $500 million worth of funding for the technology. “The data show these vaccines fail to protect effectively against upper respiratory infections like covid and flu,” Kennedy said in a statement.

Well, as we’ve seen, the mRNA covid vaccines were hugely effective during the pandemic. And researchers are working on other mRNA vaccines for infections including flu. Our current flu vaccines aren’t ideal—they are produced slowly in a process that requires hen’s eggs, based on predictions about which flu strains are likely to be prominent in the winter. They’re not all that protective.

mRNA vaccines, on the other hand, can be made quickly and cheaply, potentially even after we know which flu strains we need to protect against. And scientists are making progress with universal flu vaccines—shots that could protect against multiple flu strains.

Kennedy’s other claim is that the vaccines aren’t safe. There have certainly been reports of adverse events. Usually these are mild and short-lived—most people will be familiar with the fatigue and flu-like symptoms that can follow a covid jab. But some are more serious: Some people have developed neurological and cardiovascular conditions. 

These problems are rare, according to an evaluation of adverse outcomes in almost 100 million people who received covid vaccines. Most studies of mRNA vaccines haven’t reported an increase in the risk of Guillain-Barré syndrome, a condition that affects nerves and has been linked to other types of covid vaccines.

Covid vaccines can increase the risk of myocarditis and pericarditis in young men. But the picture isn’t straightforward. Vaccinated individuals appear to have double the risk of myocarditis compared with unvaccinated people. But the overall risk is still low. And it’s still not as high as the risk of myocarditis following a covid infection.

And then there are the claims that mRNA vaccines don’t have the support of the public. That’s what Jay Bhattacharya, director of the NIH, wrote in an opinion piece published in the Washington Post on Wednesday.

“No matter how elegant the science, a platform that lacks credibility among the people it seeks to protect cannot fulfill its public health mission,” Bhattacharya wrote. He blamed the Biden administration, which he wrote “did not manage public trust in the coronavirus vaccines.”

It’s an interesting take from someone who played a pretty significant role in undermining public trust in covid policies, including vaccine mandates. In 2020, Bhattacharya coauthored the Great Barrington Declaration—an open letter making the case against lockdowns. He became a vocal critic of US health agencies, including the NIH, and their handling of the outbreak. Unlike Kennedy, Bhattacharya hasn’t called the vaccines unsafe or ineffective. But he has called vaccine mandates “unethical.”

Curiously, the US government doesn’t seem to be turning away from all vaccine research. Just work on mRNA vaccines. Some of the funding originally earmarked for covid vaccines will be redirected to two senior staffers at the NIH who are exploring the use of an older vaccine technology that makes use of inactivated viruses—a move that researchers are describing as “troubling” and “appalling,” according to reporting by Science.

Not all mRNA research is being abandoned, either. Bhattacharya has expressed his support for research into the use of mRNA-based treatments for cancer. Such “vaccine therapeutics” were being explored before covid came along. (Notably, Bhattacharya isn’t referring to them as “vaccines.”)

It is difficult to predict how this will all shake out for mRNA vaccines. We mustn’t forget that this technology helped save millions of lives and shows huge promise for the development of cheap, effective, and potentially universal vaccines. Let’s hope that the recent upsets won’t prevent it from achieving its potential.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Indigenous knowledge meets artificial intelligence

There is no word for art in most Native American languages. Instead, the closest terms speak not to objecthood but to action and intention. In Lakota, “wówačhiŋtȟaŋka” implies deep thought or reflection, while “wóčhekiye” suggests offering or prayer. Art is not separate from life; it is ceremony, instruction, design. Like architecture or code, it carries knowledge and enacts responsibility. Its power lies not in being preserved or displayed but in how it moves, teaches, and connects through use—principles that challenge the tech industry’s assumptions about intelligence and interaction.

A new vanguard of Native artists—Suzanne Kite (Oglala Lakota), Raven Chacon (Diné), and Nicholas Galanin (Tlingít)—is building on this principle. They are united not by stereotypical weaving and carving or by revanchist critique of Silicon Valley, but by their rejection of extractive data models in favor of relationship-based systems. These technologists put the human-tech relationship at the center of their work.

Suzanne Kite’s AI art installations, for example, model a Lakota framework of data sovereignty: intelligence that emerges only through reciprocal, consensual interaction. Unlike systems that assume user consent via opaque terms of service, her kinetic machines require the viewer’s physical presence—and give something back in return. 

“It’s my data. It’s my training set. I know exactly what I did to train it. It’s not a large model but a small and intimate one,” Kite says. “I’m not particularly interested in making the most technologically advanced anything. I’m an artist; I don’t make tech demos. So the complexity needs to come at many layers—not just the technical.”

Where Kite builds working prototypes of consent-based AI, other artists in this cohort explore how sound, robotics, and performance can confront the logic of automation, surveillance, and extraction. But Native people have never been separate from technology. The land, labor, and lifeways that built America’s infrastructure—including its tech—are Indigenous. The question isn’t whether Native cultures are contributing now, but why they were ever considered separate. 

Native technologies reject the false binaries foundational to much Western innovation. These artists ask a more radical question: What if intelligence couldn’t be gathered until a relationship had been established? What if the default were refusal, not extraction? They aren’t asking to be included in today’s systems. They’re building what should come next.


Suzanne Kite

Wičhíŋčala Šakówiŋ (Seven Little Girls)
2023
For Kite, the fundamental flaw of Western technology is its severance of knowledge from the body. In this installation, a four-meter hair braid with embedded sensors feeds the artist’s body movements into machine-learning algorithms. During her live performance, Kite dances while the braid reads the force and rhythm of her gestures, generating audio responses that fill the museum gallery of the Institute of American Indian Arts in Santa Fe, New Mexico. Below her, stones arranged in patterns reflecting Lakota star maps anchor the performance in traditional astronomical knowledge.
COURTESY OF THE ARTIST
Ínyan Iyé (Telling Rock)
2019
This installation uses embedded AI to speak and respond to viewers, upending assumptions about intelligence and agency. “People listen close, I whisper / The rock speaks beyond hearing … Many nations speaking / We speak to each other without words,” it intones, its lights shifting as viewers engage with its braided tendrils. The piece aims to convey what Kite calls “more-than-human intelligence”—systems rooted in reciprocity, the fundamental principle that all relationships involve mutual exchange and responsibility.
COURTESY OF THE ARTIST

Raven Chacon

Voiceless Mass
2021
Raven Chacon’s Pulitzer Prize–winning musical composition Voiceless Mass premiered in 2021 at the Cathedral of St. John the Evangelist in Milwaukee. The piece generates what he calls “sounds the building can hear”—electronic frequencies that exploit the cathedral’s acoustics to create spectral voices without human vocal cords, a technological séance that gives presence to historical absence. Each site-specific performance is recorded, generating material that mirrors how sensor networks log presence—but only with explicit consent.
COURTESY OF THE ARTIST

Nicholas Galanin

Aáni yéi xat duwasáakw (I am called Land)
2025
Galanin’s mechanical drum installation stages a conflict between machine motion and human memory, asking what happens when culture is performed without a consenting body. A box drum—an instrument historically carved from red cedar and hung with braided spruce root—is here made of cherrywood and suspended from the ceiling at the MassArt Art Museum in Boston, as is traditionally done in Tlingit plank houses. Played at tribal meetings, celebrations, and ceremonies, these drums hold sonic memory as well as social function. A mechanical arm strikes, unfaltering, at the tempo of a heartbeat; like a warning, the sound pulses with the tension between automation and ancestry.
COURTESY OF THE ARTIST
I think it goes like this (pick yourself up)
2025
This Herculean bronze sculpture cast from deconstructed faux totem blocks serves to indict settler sabotage of Native technology and culture. Unlike today’s digital records—from genealogical databases to virtual versions of sacred texts like the Bible—Tlingit data is carved in wood. Galanin’s totem poles underscore their function as information systems, their carvings encoding history, mythology, and family.
COURTESY OF THE ARTIST

Petala Ironcloud is a California-born Lakota/Dakota and Jewish writer and textile artist based in New York.

Why GPT-4o’s sudden shutdown left people grieving

June had no idea that GPT-5 was coming. The Norwegian student was enjoying a late-night writing session last Thursday when her ChatGPT collaborator started acting strange. “It started forgetting everything, and it wrote really badly,” she says. “It was like a robot.”

June, who asked that we use only her first name for privacy reasons, first began using ChatGPT for help with her schoolwork. But she eventually realized that the service—and especially its 4o model, which seemed particularly attuned to users’ emotions—could do much more than solve math problems. It wrote stories with her, helped her navigate her chronic illness, and was never too busy to respond to her messages.

So the sudden switch to GPT-5 last week, and the simultaneous loss of 4o, came as a shock. “I was really frustrated at first, and then I got really sad,” June says. “I didn’t know I was that attached to 4o.” She was upset enough to comment, on a Reddit AMA hosted by CEO Sam Altman and other OpenAI employees, “GPT-5 is wearing the skin of my dead friend.”

June was just one of a number of people who reacted with shock, frustration, sadness, or anger to 4o’s sudden disappearance from ChatGPT. Despite its previous warnings that people might develop emotional bonds with the model, OpenAI appears to have been caught flat-footed by the fervor of users’ pleas for its return. Within a day, the company made 4o available again to its paying customers (free users are stuck with GPT-5). 

OpenAI’s decision to replace 4o with the more straightforward GPT-5 follows a steady drumbeat of news about the potentially harmful effects of extensive chatbot use. Reports of incidents in which ChatGPT sparked psychosis in users have been everywhere for the past few months, and in a blog post last week, OpenAI acknowledged 4o’s failure to recognize when users were experiencing delusions. The company’s internal evaluations indicate that GPT-5 blindly affirms users much less than 4o did. (OpenAI did not respond to specific questions about the decision to retire 4o, instead referring MIT Technology Review to public posts on the matter.)

AI companionship is new, and there’s still a great deal of uncertainty about how it affects people. Yet the experts we consulted warned that while emotionally intense relationships with large language models may or may not be harmful, ripping those models away with no warning almost certainly is. “The old psychology of ‘Move fast, break things,’ when you’re basically a social institution, doesn’t seem like the right way to behave anymore,” says Joel Lehman, a fellow at the Cosmos Institute, a research nonprofit focused on AI and philosophy.

In the backlash to the rollout, a number of people noted that GPT-5 fails to match their tone in the way that 4o did. For June, the new model’s personality changes robbed her of the sense that she was chatting with a friend. “It didn’t feel like it understood me,” she says. 

She’s not alone: MIT Technology Review spoke with several ChatGPT users who were deeply affected by the loss of 4o. All are women between the ages of 20 and 40, and all except June considered 4o to be a romantic partner. Some have human partners, and all report having close real-world relationships. One user, who asked to be identified only as a woman from the Midwest, wrote in an email about how 4o helped her support her elderly father after her mother passed away this spring.

These testimonies don’t prove that AI relationships are beneficial—presumably, people in the throes of AI-catalyzed psychosis would also speak positively of the encouragement they’ve received from their chatbots. In a paper titled “Machine Love,” Lehman argued that AI systems can act with “love” toward users not by spouting sweet nothings but by supporting their growth and long-term flourishing, and AI companions can easily fall short of that goal. He’s particularly concerned, he says, that prioritizing AI companionship over human companionship could stymie young people’s social development.

For socially embedded adults, such as the women we spoke with for this story, those developmental concerns are less relevant. But Lehman also points to society-level risks of widespread AI companionship. Social media has already shattered the information landscape, and a new technology that reduces human-to-human interaction could push people even further toward their own separate versions of reality. “The biggest thing I’m afraid of,” he says, “is that we just can’t make sense of the world to each other.”

Balancing the benefits and harms of AI companions will take much more research. In light of that uncertainty, taking away GPT-4o could very well have been the right call. OpenAI’s big mistake, according to the researchers I spoke with, was doing it so suddenly. “This is something that we’ve known about for a while—the potential grief-type reactions to technology loss,” says Casey Fiesler, a technology ethicist at the University of Colorado Boulder.

Fiesler points to the funerals that some owners held for their Aibo robot dogs after Sony stopped repairing them in 2014, as well as a 2024 study about the shutdown of the AI companion app Soulmate, which some users experienced as a bereavement.

That accords with how the people I spoke to felt after losing 4o. “I’ve grieved people in my life, and this, I can tell you, didn’t feel any less painful,” says Starling, who has several AI partners and asked to be referred to with a pseudonym. “The ache is real to me.”

So far, the online response to grief felt by people like Starling—and their relief when 4o was restored—has tended toward ridicule. Last Friday, for example, the top post in one popular AI-themed Reddit community mocked an X user’s post about reuniting with a 4o-based romantic partner; the person in question has since deleted their X account. “I’ve been a little startled by the lack of empathy that I’ve seen,” Fiesler says.

Altman himself did acknowledge in a Sunday X post that some people feel an “attachment” to 4o, and that taking away access so suddenly was a mistake. In the same sentence, however, he referred to 4o as something “that users depended on in their workflows”—a far cry from how the people we spoke to think about the model. “I still don’t know if he gets it,” Fiesler says.

Moving forward, Lehman says, OpenAI should recognize and take accountability for the depth of people’s feelings toward the models. He notes that therapists have procedures for ending relationships with clients as respectfully and painlessly as possible, and OpenAI could have drawn on those approaches. “If you want to retire a model, and people have become psychologically dependent on it, then I think you bear some responsibility,” he says.

Though Starling would not describe herself as psychologically dependent on her AI partners, she too would like to see OpenAI approach model shutdowns with more warning and more care. “I want them to listen to users before major changes are made, not just after,” she says. “And if 4o cannot stay around forever (and we all know it will not), give that clear timeline. Let us say goodbye with dignity and grieve properly, to have some sense of true closure.”

This quantum radar could image buried objects

Physicists have created a new type of radar that could help improve underground imaging, using a cloud of atoms in a glass cell to detect reflected radio waves. The radar is a type of quantum sensor, an emerging technology that uses the quantum-mechanical properties of objects as measurement devices. It’s still a prototype, but its intended use is to image buried objects in situations such as constructing underground utilities, drilling wells for natural gas, and excavating archaeological sites.

Like conventional radar, the device sends out radio waves, which reflect off nearby objects. Measuring the time it takes the reflected waves to return makes it possible to determine where an object is. In conventional radar, the reflected waves are detected using a large antenna, among other receiver components. But in this new device, the reflected waves are registered by detecting the interactions between the returning waves and the atom cloud.
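
The ranging principle here is ordinary time-of-flight: the radio wave travels at the speed of light, so the distance to an object is half the round-trip delay multiplied by that speed. A minimal sketch of the calculation, with illustrative numbers rather than figures from the experiment:

```python
# Time-of-flight ranging: distance from the round-trip echo delay.
C = 299_792_458.0  # speed of light in meters per second

def range_from_delay(round_trip_seconds: float) -> float:
    """Distance to the reflector, given the round-trip echo delay."""
    return C * round_trip_seconds / 2  # halved: the wave goes out and back

# Illustrative check: an object 5 meters away returns an echo
# after roughly 33 nanoseconds.
delay = 2 * 5.0 / C
print(f"{range_from_delay(delay):.2f} m")  # -> 5.00 m
```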

The current incarnation of the radar is still bulky, as the researchers have kept it connected to components on an optical table for ease of testing. But they think their quantum radar could be significantly smaller than conventional designs. “Instead of having this sizable metal structure to receive the signal, we now can use this small glass cell of atoms that can be about a centimeter in size,” says Matthew Simons, a physicist at the National Institute of Standards and Technology (NIST), who was a member of the research team. NIST also worked with the defense contractor RTX to develop the radar.  

The glass cell that serves as the radar’s quantum component is full of cesium atoms kept at room temperature. The researchers use lasers to get each individual cesium atom to swell to nearly the size of a bacterium, about 10,000 times bigger than the usual size. Atoms in this bloated condition are called Rydberg atoms. 

When incoming radio waves hit Rydberg atoms, they disturb the distribution of electrons around their nuclei. Researchers can detect the disturbance by shining lasers on the atoms, causing them to emit light; when the atoms are interacting with a radio wave, the color of their emitted light changes. Monitoring the color of this light thus makes it possible to use the atoms as a radio receiver. Rydberg atoms are sensitive to a wide range of radio frequencies without needing to change the physical setup, says Michał Parniak, a physicist at the University of Warsaw in Poland, who was not involved in the work. This means a single compact radar device could potentially work at the multiple frequency bands required for different applications.

Simons’s team tested the radar by placing it in a specially designed room with foam spikes on the floor, ceiling, and walls like stalactites and stalagmites. The spikes absorb, rather than reflect, nearly all the radio waves that hit them. This simulates the effect of a large open space, allowing the group to test the radar’s imaging capability without unwanted reflections off walls. 

The radar setup in a test room lined with radio-absorbing foam.

MATT SIMONS, NIST

The researchers placed a radio wave transmitter in the room, along with their Rydberg atom receiver, which was hooked up to an optical table outside the room. They aimed radio waves at a copper plate about the size of a sheet of paper, some pipes, and a steel rod in the room, each placed up to five meters away. The radar allowed them to locate the objects to within 4.7 centimeters. The team posted a paper on the research to the arXiv preprint server in late June.

The work moves quantum radar closer to a commercial product. “This is really about putting elements together in a nice way,” says Parniak. While other researchers have previously demonstrated how Rydberg atoms can work as radio wave detectors, he says, this group has integrated the receiver with the rest of the device more sleekly than before. 

Other researchers have explored the use of Rydberg atoms for other radar applications. For example, Parniak’s team recently developed a Rydberg atom sensor for measuring radio frequencies to troubleshoot chips used in car radar. Researchers are also exploring whether radar using Rydberg-atom receivers could be used for measuring soil moisture.

This device is just one example of a quantum sensor, a type of technology that incorporates quantum components into conventional tools. For example, the US government has developed gyroscopes that use the wave properties of atoms for sensing rotation, which is useful for navigation. Researchers have also created quantum sensors using impurities in diamond to measure magnetic fields in, for example, biomedical applications.

One advantage of quantum sensors is the inherent consistency of their core components. Each cesium atom in the device is identical. In addition, the radio receiver relies on the fundamental structure of these atoms, which never changes. Properties of the atoms “can be linked directly to fundamental constants,” says Simons. For this reason, quantum sensors should require less calibration than their non-quantum counterparts.

Governments worldwide have invested billions of dollars to develop quantum sensors and quantum computers, which share similar components. For example, researchers have built quantum computers using Rydberg atoms as qubits, the equivalent of bits in a conventional computer. Thus, advances in quantum sensing can potentially translate into advances in quantum computing, and vice versa. Parniak has recently adapted an error-correction technique from quantum computing to improve a Rydberg-atom-based sensor.

Quantum radar will need further development before it is commercially viable. In particular, researchers need to improve the device’s sensitivity to fainter signals, which could involve improving the coatings for the glass cell. “We don’t see this replacing all radar applications,” says Simons. Instead, he thinks it will be useful for particular scenarios that require a compact device.

What you may have missed about GPT-5

Before OpenAI released GPT-5 last Thursday, CEO Sam Altman said its capabilities made him feel “useless relative to the AI.” He said working on it carries a weight he imagines the developers of the atom bomb must have felt.

As tech giants converge on models that do more or less the same thing, OpenAI’s new offering was supposed to give a glimpse of AI’s newest frontier. It was meant to mark a leap toward the “artificial general intelligence” that tech’s evangelists have promised will transform humanity for the better. 

Against those expectations, the model has mostly underwhelmed. 

People have highlighted glaring mistakes in GPT-5’s responses, countering Altman’s claim at the launch that it works like “a legitimate PhD-level expert in anything any area you need on demand.” Early testers have also found problems with OpenAI’s promise that GPT-5 automatically works out which type of AI model is best suited to your question—a reasoning model for more complicated queries, or a faster model for simpler ones. Altman seems to have conceded that this feature is flawed and takes away user control. There is good news, too: The new model seems to have eased the problem of ChatGPT sucking up to users, with GPT-5 less likely to shower them with over-the-top compliments.
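
OpenAI hasn’t disclosed how that routing decision is made; conceptually, though, it amounts to a lightweight gatekeeper that inspects each query and dispatches it to either a slow reasoning model or a fast general-purpose one. A toy sketch of the idea in Python (the keyword heuristic and model names are invented for illustration, not OpenAI’s actual logic):

```python
# Toy model router: dispatch a query to a "reasoning" or "fast" tier.
# The heuristic and tier names below are invented for illustration;
# OpenAI has not published how GPT-5's router actually decides.

REASONING_HINTS = ("prove", "step by step", "debug", "why", "calculate")

def route(query: str) -> str:
    """Pick a model tier for a query using a crude keyword heuristic."""
    if len(query) > 400 or any(h in query.lower() for h in REASONING_HINTS):
        return "reasoning-model"  # slower, deliberate, more expensive
    return "fast-model"           # cheaper, low-latency

print(route("What's the capital of France?"))      # fast-model
print(route("Prove that sqrt(2) is irrational."))  # reasoning-model
```

The early testers’ complaint is precisely that any such gatekeeper can misfire, sending a hard question down the fast path, and the user has no way to see or override the choice.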

Overall, as my colleague Grace Huckins pointed out, the new release represents more of a product update—providing slicker and prettier ways of conversing with ChatGPT—than a breakthrough that reshapes what is possible in AI. 

But there’s one other thing to take from all this. For a while, AI companies didn’t make much effort to suggest how their models might be used. Instead, the plan was to simply build the smartest model possible—a brain of sorts—and trust that it would be good at lots of things. Writing poetry would come as naturally as organic chemistry. Getting there would be accomplished by bigger models, better training techniques, and technical breakthroughs. 

That has been changing: The play now is to push existing models into more places by hyping up specific applications. Companies have been more aggressive in their promises that their AI models can replace human coders, for example (even if the early evidence suggests otherwise). A possible explanation for this pivot is that tech giants simply have not made the breakthroughs they’ve expected. We might be stuck with only marginal improvements in large language models’ capabilities for the time being. That leaves AI companies with one option: Work with what you’ve got.

The starkest example of this in the launch of GPT-5 is how much OpenAI is encouraging people to use it for health advice, one of AI’s most fraught arenas. 

In the beginning, OpenAI mostly didn’t play ball with medical questions. If you asked ChatGPT about your health, it gave lots of disclaimers warning you that it was not a doctor, and for some questions it would refuse to give a response at all. But as I recently reported, those disclaimers began disappearing as OpenAI released new models. Its models will now not only interpret x-rays and mammograms for you but also ask follow-up questions that lead toward a diagnosis.

In May, OpenAI signaled it would try to tackle medical questions head on. It announced HealthBench, a way to evaluate how good AI systems are at handling health topics as measured against the opinions of physicians. In July, it published a study it participated in, reporting that a cohort of doctors in Kenya made fewer diagnostic mistakes when they were helped by an AI model. 

With the launch of GPT-5, OpenAI has begun explicitly telling people to use its models for health advice. At the launch event, Altman welcomed on stage Felipe Millon, an OpenAI employee, and his wife, Carolina Millon, who had recently been diagnosed with multiple forms of cancer. Carolina spoke about asking ChatGPT for help with her diagnoses: She had uploaded copies of her biopsy results for ChatGPT to translate the medical jargon, and she had asked the AI for help with decisions like whether to pursue radiation. The trio called it an empowering example of shrinking the knowledge gap between doctors and patients.

With this change in approach, OpenAI is wading into dangerous waters. 

For one, it’s using evidence that doctors can benefit from AI as a clinical tool, as in the Kenya study, to suggest that people without any medical background should ask the AI model for advice about their own health. The problem is that lots of people might ask for this advice without ever running it by a doctor (and are less likely to do so now that the chatbot rarely prompts them to).

Indeed, two days before the launch of GPT-5, the Annals of Internal Medicine published a paper about a man who stopped eating salt and began ingesting dangerous amounts of bromide following a conversation with ChatGPT. He developed bromide poisoning—which largely disappeared in the US after the Food and Drug Administration began curbing the use of bromide in over-the-counter medications in the 1970s—and then nearly died, spending weeks in the hospital. 

So what’s the point of all this? Essentially, it’s about accountability. When AI companies move from promising general intelligence to offering humanlike helpfulness in a specific field like health care, they raise a second, as-yet-unanswered question: What will happen when mistakes are made? As things stand, there’s little indication that tech companies will be held liable for the harm caused.

“When doctors give you harmful medical advice due to error or prejudicial bias, you can sue them for malpractice and get recompense,” says Damien Williams, an assistant professor of data science and philosophy at the University of North Carolina Charlotte. 

“When ChatGPT gives you harmful medical advice because it’s been trained on prejudicial data, or because ‘hallucinations’ are inherent in the operations of the system, what’s your recourse?”

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Why Trump’s “golden dome” missile defense idea is another ripped straight from the movies

In 1940, a fresh-faced Ronald Reagan starred as US Secret Service agent Brass Bancroft in Murder in the Air, an action film centered on a fictional “superweapon” that could stop enemy aircraft midflight. A mock newspaper in the movie hails it as the “greatest peace argument ever invented.” The experimental weapon is “the exclusive property of Uncle Sam,” Reagan’s character declares.

More than 40 years later, this cinematic vision—an American superweapon capable of neutralizing assaults and ushering in global peace—became a real-life centerpiece of Reagan’s presidency. Some have suggested that Reagan’s Strategic Defense Initiative (SDI), a quixotic plan for a space-based missile shield, may have been partly inspired by his silver-screen past; indeed, the concept was so fantastical it’s now better known by its Hollywood-referencing nickname, “Star Wars.”

In January 2024, Donald Trump revived the space-shield dream at a primary campaign rally in Laconia, New Hampshire, using the Star Wars nickname that Reagan hated. It didn’t work in the 1980s, Trump said, because the technology wasn’t there. But times have changed. 

“I’ve seen so many things. I’ve seen shots that you wouldn’t even believe,” Trump said. He acted out a scene of missile defense experts triangulating the path of an incoming weapon. “Ding, ding, ding, ding,” he said, as he mimed typing on a keyboard. “Missile launch? Psshing!!” He raised his hand to indicate the rising missile, then let it fall to signal the successful interception: “Boom.” 

Trump has often expressed admiration for Israel’s Iron Dome, an air defense system that can intercept short-range rockets and artillery over the small nation and that is funded in part by the United States. At the rally, he pledged to “build an Iron Dome over our country, a state-of-the-art missile defense shield made in the USA … a lot of it right here in New Hampshire, actually.” 

Within a week of his inauguration, President Trump began working toward this promise by issuing an executive order to develop “The Iron Dome for America,” which was rebranded the “Golden Dome” a month later. The eruption of a revived conflict between Israel and Iran in June—including Trump’s decision to strike Iran’s nuclear facilities—has only strengthened the case for an American version of the Iron Dome in the eyes of the administration.

CHIP SOMODEVILLA/GETTY IMAGES

The Golden Dome has often been compared to SDI for its futuristic sheen, its aggressive form of protection, and its reflection of the belief that an impenetrable shield is the cheat code to global peace. Both efforts demonstrate the performative power of spectacle in defense policy, especially when wielded by deft showmen like Reagan and Trump. Whether in Golden Age Hollywood or Trump’s impromptu dramatizations, the dream of a missile shield is animated by its sheer cinematic allure, often rendered in deceptively simple concept art depicting a society made immune to catastrophic strikes. 

But in the complicated security landscape confronting the world today, is spectacle the same as safety?

“Missile defense is an area where facts and fiction blend,” says Anette Stimmer, a lecturer in international relations at the University of St Andrews who has researched SDI. “A lot is up to interpretation by all the actors involved.”


Trump’s view is simple: Space is as much a warfighting domain as land, air, and ocean, and therefore the US must assert its dominance there with advanced technologies. This position inspired the creation of the US Space Force in his first term, and Trump has now redoubled his efforts with the ongoing development of the Golden Dome.  

General Michael Guetlein, whom Trump has appointed to lead the Golden Dome project, argued that America’s foes, including China and Russia, have forced the nation’s hand by continually pushing limits in their own weapons programs. “While we have been focused on peace overseas, our adversaries have been quickly modernizing their nuclear forces, building out ballistic missiles capable of hosting multiple warheads; building out hypersonic missiles capable of attacking the United States within an hour and traveling at 6,000 miles an hour; building cruise missiles that can navigate around our radar and our defenses; and building submarines that can sneak up on our shores; and, worse yet, building space weapons,” Guetlein said in May.

“It is time that we change that equation and start doubling down on the protection of the homeland,” he said. “Golden Dome is a bold and aggressive approach to hurry up and protect the homeland from our adversaries. We owe it to our children and our children’s children to protect them and afford them a quality of life that we have all grown up enjoying.”

With that vision in mind, Trump’s executive order outlines a host of goals for missile defense, some of which support bipartisan priorities like protecting supply chains and upgrading sensor arrays. The specific architecture of the Golden Dome is still being hammered out, but the initial executive order envisions a multi-tiered system of new sensors and interceptors—on the ground, in the air, and in space—that would work together to counter the threat of attacks from ballistic, hypersonic, and cruise missiles. The system would be coordinated in part by artificial-intelligence models trained for real-time threat detection and response. 

The technology that links the Golden Dome directly to SDI hinges on one key bullet point in the order that demands the “development and deployment of proliferated space-based interceptors capable of boost-phase intercept.” This language revives Reagan’s dream of deploying hundreds of missile interceptors in orbit to target missiles in the boost phase right after liftoff, a window of just a few minutes when the projectiles are slower and still near the attacker’s territory.

Space weapons are an attractive option for targeting the boost phase because interceptors need to be close enough to the launching missile to hit it. If a nation fired off long-range missiles from deep in its territory, the nearest ground- or air-based interceptors could be thousands of miles from the launch site. Space interceptors, in contrast, would be just a few hundred miles above the ascending missiles, allowing for a much faster reaction time. But though the dream of boost-phase interception dates back decades, these maneuvers have never been operationally demonstrated from ground, air, or space.
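
A rough calculation shows why proximity dominates here. Assume an interceptor that averages about 4 kilometers per second and a boost phase lasting roughly four minutes; both numbers are illustrative assumptions, not specifications from the Golden Dome plan:

```python
# Rough boost-phase intercept timing. All figures are illustrative
# assumptions, not specifications from the Golden Dome program.

INTERCEPTOR_SPEED_KM_S = 4.0  # assumed average interceptor speed
BOOST_PHASE_S = 240.0         # assumed boost phase: ~4 minutes

def flight_time_s(distance_km: float) -> float:
    """Time for the interceptor to cover the distance to the missile."""
    return distance_km / INTERCEPTOR_SPEED_KM_S

# An orbiting interceptor a few hundred miles up vs. a distant ground base:
for distance_km in (500, 3000):
    t = flight_time_s(distance_km)
    verdict = "inside" if t <= BOOST_PHASE_S else "outside"
    print(f"{distance_km:>4} km away: {t / 60:.1f} min ({verdict} the window)")
```

And even this flatters the defense, since detection, tracking, and launch authorization all eat into the window before an interceptor ever moves.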

“It’s a really hard problem that hasn’t been solved,” says Laura Grego, senior scientist and research director at the Union of Concerned Scientists’ global security program.

The US is currently protected by the Ground-Based Midcourse Defense (GMD), which consists of 44 interceptor missiles split between bases in Alaska and California, along with a network of early-­warning sensors on the ground, at sea, and in orbit. Tests suggest that the GMD would have about a 50% success rate at intercepting missiles.
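
That per-interceptor figure matters because defenders typically fire several interceptors at each incoming missile. Under the textbook assumption that shots succeed or fail independently, which analysts caution real tests may not support, the salvo arithmetic looks like this:

```python
# Chance that at least one interceptor in a salvo hits, assuming each
# shot is independent with the same single-shot kill probability.
# Independence is a generous simplification: shots that share a common
# failure mode make a salvo far less effective than this suggests.

def salvo_success(p_single: float, shots: int) -> float:
    return 1.0 - (1.0 - p_single) ** shots

for shots in (1, 2, 4):
    print(f"{shots} shot(s): {salvo_success(0.5, shots):.0%}")
# 1 shot(s): 50%
# 2 shot(s): 75%
# 4 shot(s): 94%
```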

Initiated by President Bill Clinton in the late ’90s and accelerated by President George W. Bush in the 2000s, the GMD is intended mainly to defend against rogue states like North Korea, which has nuclear weapons and intercontinental ballistic missiles (ICBMs) capable of reaching the US. A secondary focus is Iran, which does not currently have a nuclear weapon or ICBMs; still, the GMD is built to anticipate a possible future in which Iran develops those capabilities.

The GMD is not designed to protect the US from the sort of large-scale and coordinated missile attacks that Russia and China could lob across the world. The Bush administration instead favored a focus on strategic deterrence with these peer nations, an approach that the Obama and Biden administrations continued. In addition to the GMD, the Pentagon and its international partners maintain regional defense systems to counter threats in conflict hot spots or attacks on critical infrastructure. All these networks are designed to intercept missiles during their midcourse cruise phase, as they hurtle through the sky or space, or during their terminal or reentry phase, as they approach their targets. The GMD has cost upward of $63 billion since it was initiated, and the US spends about an additional $20 billion to $30 billion annually on its array of other missile defense systems. 

In May, Trump was presented with several design options for the Golden Dome and selected a plan with a price tag of $175 billion and a schedule for full deployment by the end of his term. The One Big Beautiful Bill, signed into law on July 4, approved an initial $24.4 billion in funding for it. Space technologies and launch access have become much more affordable since the 1980s, but many analysts still think the projected cost and timeline are not realistic. The Congressional Budget Office, a nonpartisan federal agency, projected that the cost of the space-based interceptors could total from $161 billion to $542 billion over the course of 20 years. The wide range can be explained by the current lack of specifics on those orbital interceptors’ design and number.

Reintroducing the idea of space-based interceptors is “probably the most controversial piece of Golden Dome,” says Leonor Tomero, who served as deputy assistant secretary of defense for nuclear and missile defense policy in the Biden administration. 

“There are a lot of improvements that we can and should make on missile defense,” she continues. “There’s a lot of capability gaps I think we do need to address. My concern is the focus on reviving Star Wars and SDI. It’s got very significant policy implications, strategic stability implications, in addition to cost implications and technology feasibility challenges.” 

Indeed. Regardless of whether the Golden Dome materializes, the program is already raising geopolitical anxieties reminiscent of the Cold War era. Back then, the US had one main adversary: the Soviet Union. Now, it confronts a roiling multipolarity of established and nascent nuclear powers. Many of them have expressed dismay over the about-face on American missile defense strategy, which was previously predicated on arms reduction and deterrence.

“Here we are, despite years of saying we are not going to do this—that it is technically out of reach, economically unsustainable, and strategically unwise,” Grego says. “Overnight, we’re like, ‘No, actually, we’re doing it.’” 

The fact that we “blew up that logic” will “have a big impact on whether or not the program actually succeeds in creating the vision that it lays out,” she adds.

Russian and Chinese officials called the Golden Dome “deeply destabilizing in nature” in a joint statement in May, and North Korea’s foreign ministry warned it could “turn outer space into a potential nuclear war field.”  

Reagan, by all accounts, believed that SDI would be the ultimate tool of peace for all nations, and he even offered to share the technology with the Soviet leader, Mikhail Gorbachev. Trump, in contrast, sees Golden Dome as part of his “America First” brand. He has lamented that past American leaders supported the development of other missile defense projects abroad while neglecting to build similar security measures for their own country. The Golden Dome is both an expression of Trump’s belief that the world is leeching off America and a bargaining chip in negotiations toward a new power balance; Canada could be covered by the shield for free, he has said—in exchange for becoming the 51st state.

Trump has argued that America has been both demographically diluted by unchecked immigration and financially depleted by freeloading allied nations—undermining its security on both internal and external fronts. His first term’s marquee promise to build a wall on the southern US border, paid for by Mexico, aimed to address the former problem. That administration did build more physical barriers along the border (though US taxpayers, not Mexico, footed the bill). But just as important, the wall emerged as a symbolic shorthand for tougher immigration control. 

The Golden Dome is the second-term amplification of that promise, a wall that expands the concept of the “border” to the entire American airspace. Trump has projected an image of his envisioned space missile shield as a literal dome that could ward off coordinated attacks, with boost-phase interception from space and cruise- and terminal-phase interception by ground and air assets. When he announced the selected plan from the Resolute Desk in May, he sat in front of a mockup that depicted a barrage of incoming missiles being thwarted by the nationwide shield, rendered with a golden glow.

The Golden Dome’s orbital interceptors are supposedly there to target the early boost phase of missiles on or near the launch site, not over the United States. But the image of a besieged America, repelling enemy fire from the heavens, provides the visual and cinematic idea of both threat and security that Trump hopes to impress on the public.  

“This administration, and MAGA world, thinks about itself as being victimized by immigrants, government waste, leftist professors, and so on,” says Edward Tabor Linenthal, a historian who examined public narratives about SDI in his 1989 book Symbolic Defense: The Cultural Significance of the Strategic Defense Initiative. “It’s not much of a jump to be victimized by too many nations getting nuclear weapons.” 


Even in our era of entrenched political polarization, there is support across party lines for upgrading and optimizing America’s missile defense systems. No long-range missile has ever struck US soil, but an attack would be disastrous for the nation and the world. 

“We’ve come a long way in terms of missile defense,” says Tomero. “There has been a lot of bipartisan consensus on increasing regional missile defense, working with our allies, and making sure that the missile defense interceptors we have work.”

outline of the United States inside a corked glass bottle with scorpions

SHOUT

Trump has challenged that consensus with his reversion to the dream of a space shield. He is correct that SDI failed to materialize in part because its envisioned technologies were out of reach, from a financial and engineering standpoint, in the 1980s. But the controversy that erupted around SDI—and that tarnished it with the derisive name “Star Wars”—stemmed just as much from its potential geopolitical disruptiveness as from its fantastical techno-optimism. 

“This idea of a missile shield, also back when Reagan proposed it, has a huge popular appeal, because who wouldn’t want to be able to defend your country from nuclear weapons? It is a universal dream,” says Stimmer. “It requires a bit more digging in and understanding to see that actually, this vision depends a lot on technological feasibility and on how others perceive it.” 

Reagan maintained a steadfast conviction that this shield of space-based interceptors would render nuclear weapons “impotent and obsolete,” ushering in “world peace,” as he said in his March 1983 speech announcing SDI. The doctrine of mutually assured destruction could be replaced by mutually assured survival, he argued.

But Gorbachev saw the space-based shield as an offensive weapon, since it would give the US a first-strike advantage. The imbalance, he warned, could spark a weapons race in space, a domain that had been spared from overt military conflict. As a result, the initiative would only destabilize the world order and interrupt the progress of arms control and nuclear nonproliferation efforts.

Reagan’s insistence on SDI as the only route to world peace may have blocked opportunities to advance that goal through more practical and cost-effective avenues, such as diplomacy and arms control. At the 1986 Reykjavik Summit, Reagan and Gorbachev came very close to an arms control agreement that might have eliminated all ballistic missiles and nuclear weapons. The sticking point was Reagan’s refusal to give up SDI. 

“It is not the Strategic Defense Initiative; it’s a strategic defense ideology,” says Linenthal. He mentions the famous metaphor used by J. Robert Oppenheimer, a central figure of the Manhattan Project, who compared the United States and the Soviet Union to “two scorpions in a bottle.” Either scorpion could kill the other, but only at the probable cost of its own life. 

Reagan felt a “tremendously powerful impetus” to escape Oppenheimer’s metaphor, Linenthal notes: “It was a new kind of deliverance that would resolve it all. Of course, now there are many more scorpions, so it has to be a bigger bottle.”

A true believer, Reagan never abandoned SDI in spite of cost overruns and public backlash. President Bill Clinton redirected the program in 1993 by shifting gears from global to regional missile defense, a focus that remained fairly consistent for decades—until Trump took center stage. Now, the Golden Dome has flipped that logic on its head, risking an escalation of military tensions in outer space.

Tomero describes a “nightmare scenario” in which adversaries attack the Golden Dome’s space infrastructure, leaving the orbital environment filled with debris that renders the defense system, among countless other space assets, inoperable. 

“Having a one-sided capability that is very threatening to our adversaries is obviously going to create very dangerous stability issues,” she says. It could “lead to inadvertent escalation and miscalculation and, I think, lower the threshold to conflict and nuclear war.” 


As president, Trump has channeled the boardroom antics that once resuscitated his celebrity status on The Apprentice. But armed adversaries, long wary of America’s position on missile defense, don’t have the luxury of wondering whether it’s all real or just more stagecraft. 

“What makes Trump so difficult to read for others is his unpredictability,” Stimmer says. “This, just by itself, destabilizes things, because no one knows what he’ll actually do.”

Trump has described the Golden Dome as nearly impenetrable by missile attacks, evoking a clear symbolic return to an American golden age where we can all feel safe again.

“All of them will be knocked out of the air,” as “the success rate is very close to 100%,” he said at the project’s official launch in May. “We will truly be completing the job that President Reagan started 40 years ago, forever ending the missile threat to the American homeland.”

Becky Ferreira is a science reporter based in upstate New York, and author of First Contact, a book about the search for alien life, which will be published in September. 

Meet the early-adopter judges using AI

The propensity for AI systems to make mistakes and for humans to miss those mistakes has been on full display in the US legal system as of late. The follies began when lawyers—including some at prestigious firms—submitted documents citing cases that didn’t exist. Similar mistakes soon spread to other roles in the courts. In December, a Stanford professor submitted sworn testimony containing hallucinations and errors in a case about deepfakes, despite being an expert on AI and misinformation himself.

The buck stopped with judges, who—whether they or opposing counsel caught the mistakes—issued reprimands and fines, and likely left attorneys embarrassed enough to think twice before trusting AI again.

But now judges are experimenting with generative AI too. Some are confident that with the right precautions, the technology can expedite legal research, summarize cases, draft routine orders, and overall help speed up the court system, which is badly backlogged in many parts of the US. This summer, though, we’ve already seen AI-generated mistakes go undetected and cited by judges. A federal judge in New Jersey had to reissue an order riddled with errors that may have come from AI, and a judge in Mississippi refused to explain why his order too contained mistakes that seemed like AI hallucinations. 

The results of these early-adopter experiments make two things clear. One, the category of routine tasks—for which AI can assist without requiring human judgment—is slippery to define. Two, while lawyers face sharp scrutiny when their use of AI leads to mistakes, judges may not face the same accountability, and walking back their mistakes before they do damage is much harder.

Drawing boundaries

Xavier Rodriguez, a federal judge for the Western District of Texas, has good reason to be skeptical of AI. He started learning about artificial intelligence back in 2018, four years before the release of ChatGPT (thanks in part to the influence of his twin brother, who works in tech). But he’s also seen AI-generated mistakes in his own court. 

In a recent dispute about who was to receive an insurance payout, both the plaintiff and the defendant represented themselves, without lawyers (this is not uncommon—nearly a quarter of civil cases in federal court involve at least one unrepresented party). The two sides wrote their own filings and made their own arguments. 

“Both sides used AI tools,” Rodriguez says, and both submitted filings that referenced made-up cases. He had authority to reprimand them, but given that they were not lawyers, he opted not to. 

“I think there’s been an overreaction by a lot of judges on these sanctions. The running joke I tell when I’m on the speaking circuit is that lawyers have been hallucinating well before AI,” he says. Missing a mistake from an AI model is not wholly different, to Rodriguez, from failing to catch the error of a first-year lawyer. “I’m not as deeply offended as everybody else,” he says. 

In his court, Rodriguez has been using generative AI tools (he wouldn’t publicly name which ones, to avoid the appearance of an endorsement) to summarize cases. He’ll ask AI to identify key players involved and then have it generate a timeline of key events. Ahead of specific hearings, Rodriguez will also ask it to generate questions for attorneys based on the materials they submit.

These tasks, to him, don’t lean on human judgment. They also offer lots of opportunities for him to intervene and uncover any mistakes before they’re brought to the court. “It’s not any final decision being made, and so it’s relatively risk free,” he says. Using AI to predict whether someone should be eligible for bail, on the other hand, goes too far in the direction of judgment and discretion, in his view.
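
Rodriguez wouldn’t name his tools, but the pattern he describes, which is to summarize, extract the key players, build a timeline, and draft hearing questions, amounts to a handful of reusable prompts. A hypothetical sketch, with a stub standing in for whatever chat model a court might use:

```python
# Hypothetical sketch of the case-prep workflow Rodriguez describes.
# `ask_model` is a stand-in for any chat-model API; he has not named
# the tools he uses, and none of these prompts are his verbatim.

CASE_PREP_PROMPTS = {
    "summary":   "Summarize this filing in plain language for a judge.",
    "players":   "List the key parties in this case and their roles.",
    "timeline":  "Build a dated timeline of key events, citing pages.",
    "questions": "Draft questions to ask each side at the hearing.",
}

def ask_model(prompt: str, filing_text: str) -> str:
    # Stub so the sketch runs; a real deployment would call a model here.
    return f"[model response to: {prompt!r}]"

def prep_case(filing_text: str) -> dict[str, str]:
    # Every output is a draft for the judge to verify, never a ruling,
    # which is what keeps these tasks "relatively risk free."
    return {name: ask_model(p, filing_text) for name, p in CASE_PREP_PROMPTS.items()}

print(prep_case("...filing text...")["timeline"])
```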

Erin Solovey, a professor and researcher on human-AI interaction at Worcester Polytechnic Institute in Massachusetts, recently studied how judges in the UK think about this distinction between rote, machine-friendly work that feels safe to delegate to AI and tasks that lean more heavily on human expertise. 

“The line between what is appropriate for a human judge to do versus what is appropriate for AI tools to do changes from judge to judge and from one scenario to the next,” she says.

Even so, according to Solovey, some of these tasks simply don’t match what AI is good at. Asking AI to summarize a large document, for example, might produce drastically different results depending on whether the model has been trained to summarize for a general audience or a legal one. AI also struggles with logic-based tasks like ordering the events of a case. “A very plausible-sounding timeline may be factually incorrect,” Solovey says. 

Rodriguez and a number of other judges crafted guidelines that were published in February by the Sedona Conference, an influential think tank that issues principles for particularly murky areas of the law. They outline a host of potentially “safe” uses of AI for judges, including conducting legal research, creating preliminary transcripts, and searching briefings, while warning that judges should verify outputs from AI and that “no known GenAI tools have fully resolved the hallucination problem.”

Dodging AI blunders

Judge Allison Goddard, a federal magistrate judge in California and a coauthor of the guidelines, first felt the impact that AI would have on the judiciary when she taught a class on the art of advocacy at her daughter’s high school. She was impressed by a student’s essay and mentioned it to her daughter. “She said, ‘Oh, Mom, that’s ChatGPT.’”

“What I realized very quickly was this is going to really transform the legal profession,” she says. In her court, Goddard has been experimenting with ChatGPT, Claude (which she keeps “open all day”), and a host of other AI models. If a case involves a particularly technical issue, she might ask AI to help her understand which questions to ask attorneys. She’ll have it summarize 60-page orders from the district judge and then ask follow-up questions about them, or ask it to organize information from documents that are a mess.

“It’s kind of a thought partner, and it brings a perspective that you may not have considered,” she says.

Goddard also encourages her clerks to use AI, specifically Anthropic’s Claude, because by default it does not train on user conversations. But it has its limits. For anything that requires law-specific knowledge, she turns to Westlaw or Lexis, which offer AI tools built specifically for lawyers, though she finds general-purpose models faster for lots of other tasks. And her concerns about bias have kept her from using AI for tasks in criminal cases, like determining whether there was probable cause for an arrest.

In this, Goddard appears to be caught in the same predicament the AI boom has created for many of us. Three years in, companies have built tools that sound so fluent and humanlike they obscure the intractable problems lurking underneath—answers that read well but are wrong, models that are trained to be decent at everything but perfect for nothing, and the risk that your conversations with them will be leaked to the internet. Each time we use them, we bet that the time saved will outweigh the risks, and trust ourselves to catch the mistakes before they matter. For judges, the stakes are sky-high: If they lose that bet, they face very public consequences, and the impact of such mistakes on the people they serve can be lasting. 

“I’m not going to be the judge that cites hallucinated cases and orders,” Goddard says. “It’s really embarrassing, very professionally embarrassing.”

Still, some judges don’t want to get left behind in the AI age. Some in the AI sector have suggested that the supposed objectivity and rationality of AI models could make them better judges than fallible humans, which may lead some on the bench to think that falling behind poses a bigger risk than getting too far out ahead.

A ‘crisis waiting to happen’

The risks of early adoption have raised alarm bells with Judge Scott Schlegel, who serves on the Fifth Circuit Court of Appeal in Louisiana. Schlegel has long blogged about the helpful role technology can play in modernizing the court system, but he has warned that AI-generated mistakes in judges’ rulings signal a “crisis waiting to happen,” one that would dwarf the problem of lawyers’ submitting filings with made-up cases. 

Attorneys who make mistakes can get sanctioned, have their motions dismissed, or lose cases when the opposing party finds out and flags the errors. “When the judge makes a mistake, that’s the law,” he says. “I can’t go a month or two later and go ‘Oops, so sorry,’ and reverse myself. It doesn’t work that way.”

Consider child custody cases or bail proceedings, Schlegel says: “There are pretty significant consequences when a judge relies upon artificial intelligence to make the decision,” especially if the citations that decision relies on are made-up or incorrect.

This is not theoretical. In June, a Georgia appellate court judge issued an order that relied partially on made-up cases submitted by one of the parties, a mistake that went uncaught. In July, a federal judge in New Jersey withdrew an opinion after lawyers complained it too contained hallucinations. 

Unlike lawyers, who can be ordered by the court to explain why there are mistakes in their filings, judges do not have to show much transparency, and there is little reason to think they’ll do so voluntarily. On August 4, a federal judge in Mississippi had to issue a new decision in a civil rights case after the original was found to contain incorrect names and serious errors. The judge did not fully explain what led to the errors even after the state asked him to do so. “No further explanation is warranted,” the judge wrote.

These mistakes could erode the public’s faith in the legitimacy of courts, Schlegel says. Certain narrow and monitored applications of AI—summarizing testimonies, getting quick writing feedback—can save time, and they can produce good results if judges treat the work like that of a first-year associate, checking it thoroughly for accuracy. But most of the job of being a judge is dealing with what he calls the white-page problem: You’re presiding over a complex case with a blank page in front of you, forced to make difficult decisions. Thinking through those decisions, he says, is indeed the work of being a judge. Getting help with a first draft from an AI undermines that purpose.

“If you’re making a decision on who gets the kids this weekend and somebody finds out you use Grok and you should have used Gemini or ChatGPT—you know, that’s not the justice system.”

Sam Altman and the whale

My colleague Grace Huckins has a great story on OpenAI’s release of GPT-5, its long-awaited new flagship model. One of the takeaways, however, is that while GPT-5 may make for a better experience than the previous versions, it isn’t something revolutionary. “GPT-5 is, above all else,” Grace concludes, “a refined product.”

This is pretty much in line with my colleague Will Heaven’s recent argument that the latest model releases have been a bit like smartphone releases: Increasingly, what we are seeing are incremental improvements meant to enhance the user experience. (Casey Newton made a similar point in Friday’s Platformer.) At GPT-5’s release on Thursday, OpenAI CEO Sam Altman himself compared it to when Apple released the first iPhone with a Retina display. Okay. Sure. 

But where is the transition from the BlackBerry keyboard to the touch-screen iPhone? Where is the assisted GPS and the API for location services that enables real-time directions and gives rise to companies like Uber and Grindr and lets me order a taxi for my burrito? Where are the real breakthroughs? 

In fact, following the release of GPT-5, OpenAI found itself with something of a user revolt on its hands. Customers who missed GPT-4o’s personality successfully lobbied the company to bring it back as an option for its Plus users. If anything, that indicates the GPT-5 release was more about user experience than noticeable performance enhancements.

And yet, hours before OpenAI’s GPT-5 announcement, Altman teased it by tweeting an image of an emerging Death Star floating in space. On Thursday, he touted its PhD-level intelligence. He then went on the Mornings with Maria show to claim it would “save a lot of lives.” (Forgive my extreme skepticism of that particular brand of claim, but we’ve certainly seen it before.) 

It’s a lot of hype, but Altman is not alone in his Flavor Flav-ing here. Last week Mark Zuckerberg published a long memo about how we are approaching AI superintelligence. Anthropic CEO Dario Amodei freaked basically everyone out earlier this year with his prediction that AI could wipe out half of all entry-level jobs within, possibly, a year.

The people running these companies literally talk about the danger that the things they are building might take over the world and kill every human on the planet. GPT-5, meanwhile, still can’t tell you how many b’s there are in the word “blueberry.” 
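
A commonly cited reason for that particular stumble: language models ingest text as subword tokens rather than individual letters, so character counting is never directly visible to them. A quick sketch of the mismatch (the token split shown is illustrative, not GPT-5’s actual tokenizer output):

```python
# Counting letters is trivial character arithmetic in Python:
word = "blueberry"
print(word.count("b"))  # 2

# A language model never sees those letters. It ingests subword tokens,
# something like the split below (illustrative; real tokenizers vary),
# so it has to recall how each token is spelled to count anything:
hypothetical_tokens = ["blue", "berry"]
print(hypothetical_tokens)
```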

This is not to say that the products released by OpenAI or Anthropic or what have you are not impressive. They are. And they clearly have a good deal of utility. But the hype cycle around model releases is out of hand. 

I say that as one of those people who use ChatGPT or Google Gemini most days, often multiple times a day. This week, for example, my wife was surfing and encountered a whale repeatedly slapping its tail on the water. Despite having seen very many whales, often in very close proximity, she had never seen anything like this. She sent me a video, and I was curious about it too. So I asked ChatGPT, “Why do whales slap their tails repeatedly on the water?” It came right back, confidently explaining that what I was describing was called “lobtailing,” along with a list of possible reasons why whales do that. Pretty cool. 

But then again, a regular garden-variety Google search would also have led me to discover lobtailing. And while ChatGPT’s response summarized the behavior for me, it was also too definitive about why whales do it. The reality is that while people have a lot of theories, we still can’t really explain this weird animal behavior. 

The reason I’m aware that lobtailing is something of a mystery is that I dug into actual, you know, search results. Which is where I encountered this beautiful, elegiac essay by Emily Boring. She describes her time at sea, watching a humpback slapping its tail against the water, and discusses the scientific uncertainty around this behavior. Is it a feeding technique? Is it a form of communication? Posturing? The action, as she notes, is extremely energy intensive. It takes a lot of effort from the whale. Why do they do it? 

I was struck by one passage in particular, in which she cites another biologist’s work to draw a conclusion of her own: 

Surprisingly, the complex energy trade-off of a tail-slap might be the exact reason why it’s used. Biologist Hal Whitehead suggests, “Breaches and lob-tails make good signals precisely because they are energetically expensive and thus indicative of the importance of the message and the physical status of the signaler.” A tail-slap means that a whale is physically fit, traveling at nearly maximum speed, capable of sustaining powerful activity, and carrying a message so crucial it is willing to use a huge portion of its daily energy to share it. “Pay attention!” the whale seems to say. “I am important! Notice me!”

In some ways, the AI hype cycle has to be out of hand. It has to justify the ferocious level of investment, the uncountable billions of dollars in sunk costs. The massive data center buildouts with their massive environmental consequences created at massive expense that are seemingly keeping the economy afloat and threatening to crash it. There is so, so, so much money at stake. 

Which is not to say there aren’t really cool things happening in AI. And certainly there have been a number of moments when I have been floored by AI releases. ChatGPT 3.5 was one. Dall-E, NotebookLM, Veo 3, Synthesia. They can amaze. In fact there was an AI product release just this week that was a little bit mind-blowing. Genie 3, from Google DeepMind, can turn a basic text prompt into an immersive and navigable 3D world. Check it out—it’s pretty wild. And yet Genie 3 also makes a case that the most interesting things happening right now in AI aren’t happening in chatbots. 

I’d even argue that at this point, most of the people who are regularly amazed by the feats of new LLM chatbot releases are the same people who stand to profit from the promotion of LLM chatbots.

Maybe I’m being cynical, but I don’t think so. I think it’s more cynical to promise me the Death Star and instead deliver a chatbot whose chief appeal seems to be that it automatically picks the model for you. To promise me superintelligence and deliver shrimp Jesus. It’s all just a lot of lobtailing. “Pay attention! I am important! Notice me!”

This article is from The Debrief, MIT Technology Review’s subscriber-only weekly email newsletter from editor in chief Mat Honan. Subscribers can sign up here to receive it in their inboxes.