Future AI chips could be built on glass

Human-made glass is thousands of years old. But it’s now poised to find its way into the AI chips used in the world’s newest and largest data centers. This year, a South Korean company called Absolics is planning to start commercial production of special glass panels designed to make next-generation computing hardware more powerful and energy efficient. Other companies, including Intel, are also pushing forward in this area. If all goes well, such glass technology could reduce the energy demands of the sorts of high-performance computing chips used in AI data centers—and it could eventually do the same for consumer laptops and mobile devices if production costs fall.

The idea is to use glass as the substrate, or layer, on which multiple silicon chips are connected. This form of “packaging” is an increasingly popular way to build computing hardware, because it lets engineers combine specialized chips designed for specific functions into a single system. But it presents challenges, including the fact that hardworking chips can run so hot they physically warp the substrate they’re built on. That warping can misalign components and reduce how efficiently the chips can be cooled, leading to damage or premature failure. 

“As AI workloads surge and package sizes expand, the industry is confronting very real mechanical constraints that impact the trajectory of high-performance computing,” says Deepak Kulkarni, a senior fellow at the chip design company Advanced Micro Devices (AMD). “One of the most fundamental is warpage.”

That’s where glass comes in. It can handle the added heat better than existing substrates, and it will let engineers keep shrinking chip packages—which will make them faster and more energy efficient. It “unlocks the ability to keep scaling package footprints without hitting a mechanical wall,” says Kulkarni. 

Momentum is building behind the shift. Absolics has finished building a factory in the US that is dedicated to producing glass substrates for advanced chips and expects to begin commercial manufacturing this year. The US semiconductor manufacturer Intel is working toward incorporating glass in its next-generation chip packages, and its research has spurred other companies in the chip packaging supply chain to invest in it as well. South Korean and Chinese companies are among the early adopters. “Historically, this is not the first attempt to adopt glass in semiconductor packaging,” says Bilal Hachemi, senior technology and market analyst at the market research firm Yole Group. “But this time, the ecosystem is more solid and wider; the need for glass-based [technology] is sharper.” 

Fragile but mighty

Chip packaging has relied on organic substrates such as fiberglass-reinforced epoxy since the 1990s, says Rahul Manepalli, vice president of advanced packaging at Intel. But electrochemical complications limit how closely designers can place drilled holes to create copper-coated signal and power connections between the chips and the rest of the system. Chip designers must also account for the unpredictable shrinkage and distortion that organic substrates undergo as chips heat up and cool down. “We realized about a decade ago that we are going to have some limitations with organic substrates,” says Manepalli.

These glass substrate test units were photographed at an Intel facility in Chandler, Arizona, in 2023.
INTEL CORPORATION

Glass may help overcome a lot of these limitations. Its thermal stability could allow engineers to create 10 times more connections per millimeter than organic substrates, says Manepalli. With denser connections, Intel’s designers can then stuff 50% more silicon chips into the same package area, improving computational capability. The denser connections also enable more efficient routing for the copper wires that deliver power to the chip. And the fact that glass dissipates heat more efficiently allows for chip designs that reduce overall power consumption. 

“The benefits of glass core substrates are undeniable,” says Manepalli. “It’s clear that the benefits will drive the industry to make this happen sooner rather than later, and we want to be one of the first ones who do it.” 

However, working with glass creates its own challenges. For one thing, it’s fragile. Glass substrates for data center chip packages are made from panels that are only about 700 micrometers to 1.4 millimeters thick, which leaves them susceptible to cracking or even shattering, says Manepalli. Researchers at Intel and other organizations have spent years figuring out how to use other materials and special tools to integrate the glass panels safely into semiconductor manufacturing processes. 

Now, Manepalli says, Intel’s research and development teams are reliably fabricating glass panels and churning out test chip packages that incorporate glass—and in early 2025 they demonstrated that a functional device with a glass core substrate could boot up the Windows operating system. It’s a significant improvement from the early testing days, when hundreds of glass panels got cracked every couple of days, he says.

Semiconductor manufacturers already use glass for more limited purposes, such as temporary support structures for silicon wafers. But the independent market research firm IDTechEx sees a much bigger opportunity in glass substrates, estimating that they could grow the semiconductor market for glass from $1 billion in 2025 to as much as $4.4 billion by 2036. 
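
Taken at face value, that forecast implies steady double-digit growth. A quick back-of-the-envelope calculation in Python (a sketch using only the two figures above) shows the compound annual growth rate it would require:

```python
# Implied compound annual growth rate (CAGR) of the IDTechEx forecast.
# The dollar figures come from the estimate above; the rest is arithmetic,
# shown for illustration only.
start_usd_b = 1.0      # estimated market size in 2025, in billions of dollars
end_usd_b = 4.4        # projected market size in 2036, in billions of dollars
years = 2036 - 2025    # an 11-year horizon
cagr = (end_usd_b / start_usd_b) ** (1 / years) - 1
print(f"Implied growth rate: {cagr:.1%} per year")  # roughly 14.4%
```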

The material could have additional benefits if it takes off. Glass can be made astoundingly smooth—5,000 times smoother than organic substrates. This would eliminate defects that can arise as metal gets layered onto semiconductors, says Xiaoxi He, a research analyst at IDTechEx. Defects in these layers can worsen chips’ performance or even render them unusable.  

Glass could also help speed the movement of data. The material can guide light, which means chip designers could use it to build high-speed signal pathways directly into the substrate. Glass “holds enormous potential for the future of energy-efficient AI compute,” says Kulkarni at AMD, because a light-based system could move signals around with far less energy than the “power-hungry” copper pathways that are currently used to carry signals between chips in a package.

A panel pivot

Early research on glass packaging started at the 3D Systems Packaging Research Center at the Georgia Institute of Technology in 2009. The university eventually partnered with Absolics, a subsidiary of SKC, a South Korean company that produces chemicals and advanced materials. SKC constructed a semiconductor facility for manufacturing glass substrates in Covington, Georgia, in 2024, and that same year the glass substrate partnership between Absolics and Georgia Tech was awarded two grants—worth a combined $175 million—through the US government’s CHIPS for America program, established under the administration of President Joe Biden.

An Absolics employee monitors production of an early version of the company’s glass substrate.
COURTESY OF ABSOLICS INC

Now Absolics is moving toward commercialization; it plans to start manufacturing small quantities of glass substrates for customers this year. The company has led the way in commercializing glass substrates, says Yongwon Lee, a research engineer at Georgia Tech who is not directly involved in the commercial partnership with Absolics.

Absolics says its facility can currently produce a maximum of 12,000 square meters of glass panels a year. That’s enough, Lee estimates, to provide glass substrates for between 2 million and 3 million chip packages the size of Nvidia’s H100 GPU.
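
Those two numbers are at least roughly consistent with each other. Here is a minimal sanity check, assuming an H100-class package measures on the order of 70 by 75 millimeters (our assumption, not a figure from Absolics or Lee) and ignoring manufacturing yield:

```python
# Rough check of Lee's estimate: how many H100-size substrates could come
# out of 12,000 square meters of glass panels a year? The package dimensions
# below are an assumed order of magnitude, and yield losses are ignored.
panel_area_m2 = 12_000            # Absolics's stated annual panel capacity
pkg_w_m, pkg_h_m = 0.070, 0.075   # assumed H100-class package size, in meters
packages_per_year = panel_area_m2 / (pkg_w_m * pkg_h_m)
print(f"~{packages_per_year / 1e6:.1f} million packages a year")  # ~2.3 million
```

That lands inside Lee’s range of 2 million to 3 million.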

But the company isn’t alone. Lee says that multiple large manufacturers, including Samsung Electronics, Samsung Electro-Mechanics, and LG Innotek, have “significantly accelerated” their research and pilot production efforts in glass packaging over the past year. “This trend suggests that the glass substrate ecosystem is evolving from a single early mover to a broader industrial race,” he says.

Other companies are pivoting to play more specialized roles in the glass substrate supply chain. In 2025, JNTC, a company that makes electrical connectors and tempered glass for electronics, established a facility in South Korea that’s capable of producing 10,000 semi-finished glass panels per month. Such panels include drilled holes for vertical electrical connections and thin metal layers coating the glass, but they require additional manufacturing work for installation in chip packages. 

Last year, that South Korean facility began taking orders to supply semi-finished glass to both specialized substrate companies and semiconductor manufacturers. The company plans to expand the facility’s production in 2026 and open an additional manufacturing line in Vietnam in 2027.  Such industry actions show how quickly glass substrate technology is moving from prototype to commercialization—and how many tech players are betting that glass could be a surprisingly strong foundation for the future of computing and AI.

Brutal times for the US battery industry

Just a few years ago, the battery industry was hot, hot, hot. There was a seemingly infinite number of companies popping up, with shiny new chemistries and massive fundraising rounds. My biggest problem was sifting through the pile to pick the most exciting news to cover.

That tide has turned, and in 2026, what seems to be in unlimited supply isn’t battery success stories but stumbles or straight-up implosions. Companies are failing, investors are pulling back, and batteries, especially for EVs, aren’t looking so hot anymore. On Monday, Steve Levine at The Information (paywalled link) reported that 24M Technologies, a battery company founded in 2010, was shutting down and would auction off its property.

The company itself has been silent, but this is the latest in a string of bad signs, and it’s a big one—at one point 24M was worth over $1 billion, and the company’s innovations could have worked with existing technology. So where does that leave the battery industry?

Many buzzy battery startups in recent years have been trying to sell some new, innovative chemistry to compete with lithium-ion batteries, the status quo that powers phones, laptops, electric vehicles, and even grid storage arrays today. Think sodium-ion batteries and solid-state cells.

24M wasn’t trying to sell a departure from lithium-ion but improvements that could work with the tech. One of the company’s major innovations was its manufacturing process, which involved essentially smearing materials onto sheets of metal to form the electrodes, a simpler and potentially cheaper technique than the standard one. 

The layers in the company’s batteries were thicker, which cut down on some of the inactive materials in cells and improved the energy density. That allowed more energy to be stored in a smaller package, boosting the range of EVs—the company famously had a goal of a 1,000-mile (about 1,600-kilometer) battery.
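
A toy calculation shows why thicker layers help. If the inactive layers in each repeating unit of a cell stack (the metal current collector plus the separator) stay fixed while the active layer thickens, the share of the cell that actually stores energy rises. The thicknesses below are illustrative assumptions, not 24M’s actual specifications:

```python
# Toy model of electrode thickening. All thicknesses are assumptions
# chosen for illustration; they are not 24M's real numbers.
inactive_um = 25              # assumed foil + separator per repeating unit
for active_um in (100, 200):  # assumed standard vs. thickened active layer
    fraction = active_um / (active_um + inactive_um)
    print(f"{active_um} um active layer -> {fraction:.0%} of stack stores energy")
# Prints 80% vs. 89%: the thicker design fits roughly 11% more active
# material into the same volume, which is where the density gain comes from.
```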

We’re still thin on details of what exactly went down at 24M and what comes next for its tech. The company didn’t respond to questions sent to its official press email, and nobody picked up the phone when I called. 24M cofounder and MIT professor Yet-Ming Chiang declined to speak on the record.

For those who have been closely following the battery industry, more bad news isn’t too surprising. It feels as if everyone is short on money these days, and as purse strings tighten, there’s less interest in novel ideas. “It just feels like there’s not a lot of appetite for innovation,” says Kara Rodby, a technical principal at Volta Energy Technologies, a venture capital firm that focuses on the energy storage industry.

Natron Energy, one of the leading sodium-ion startups in the US, shut down operations in September last year. Ample, an EV battery-swapping company, filed for bankruptcy in December 2025.  

There were always going to be failures from the recent battery boom. Money was flowing to all sorts of companies, some pitching truly wild ideas. But what recent months have made clear is that the battery market is turning brutal, even for the relatively safe bets.

Because 24M’s technology was designed to work with existing lithium-ion chemistry, it could have been an attractive candidate for established battery companies to license or even acquire. “It’s a great example of something that should have been easier,” Rodby says.  

The gutting of major components of the Inflation Reduction Act, key legislation in the US that provided funding and incentives for batteries and EVs, certainly hasn’t helped. The EV market in the US is cooling off, with automakers canceling EV models and slashing factory plans.

There are bright spots. China’s battery industry is thriving, and its battery and EV giants are looking ever more dominant. The market for stationary energy storage is also still seeing positive signs of growth, even in the US. 

But overall, it’s not looking great. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

A defense official reveals how AI chatbots could be used for targeting decisions

The US military might use generative AI systems to rank lists of targets and make recommendations—which would be vetted by humans—about which to strike first, according to a Defense Department official with knowledge of the matter. The disclosure about how the military may use AI chatbots comes as the Pentagon faces scrutiny over a strike on an Iranian school, which it is still investigating.  

A list of possible targets might be fed into a generative AI system that the Pentagon is fielding for classified settings. Then, said the official, who requested to speak on background with MIT Technology Review to discuss sensitive topics, humans might ask the system to analyze the information and prioritize the targets while accounting for factors like where aircraft are currently located. Humans would then be responsible for checking and evaluating the results and recommendations. OpenAI’s ChatGPT and xAI’s Grok could, in theory, be the models used for this type of scenario in the future, as both companies recently reached agreements for their models to be used by the Pentagon in classified settings.

The official described this as an example of how things might work but would not confirm or deny whether it represents how AI systems are currently being used.

Other outlets have reported that Anthropic’s Claude has been integrated into existing military AI systems and used in operations in Iran and Venezuela, but the official’s comments add insight into the specific role chatbots may play, particularly in accelerating the search for targets. They also shed light on the way the military is deploying two different AI technologies, each with distinct limitations.

Since at least 2017, the US military has been working on a “big data” initiative called Maven. It uses older types of AI, particularly computer vision, to analyze the oceans of data and imagery collected by the Pentagon. Maven might take thousands of hours of aerial drone footage, for example, and algorithmically identify targets. A 2024 report from Georgetown University showed soldiers using the system to select targets and vet them, which sped up the process to get approval for these targets. Soldiers interacted with Maven through an interface with a battlefield map and dashboard, which might highlight potential targets in one color and friendly forces in another.

The official’s comments suggest that generative AI is now being added as a conversational chatbot layer—one the military may use to find and analyze data more quickly as it makes decisions like which targets to prioritize. 

Generative AI systems, like those that underpin ChatGPT, Claude, and Grok, are a fundamentally different technology from the AI that has primarily powered Maven. Built on large language models, they are much less battle-tested. And while Maven’s interface forced users to directly inspect and interpret data on the map, the outputs produced by generative AI models are easier to access but harder to verify. 

The use of generative AI for such decisions is reducing the time required in the targeting process, added the official, who did not provide details when asked how much additional speed is possible if humans are required to spend time double-checking a model’s outputs.

The use of military AI systems is under increased public scrutiny following the recent strike on a girls’ school in Iran in which more than 100 children died. Multiple news outlets have reported that the strike was from a US missile, though the Pentagon has said it is still under investigation. And while the Washington Post has reported that Claude and Maven have been involved in targeting decisions in Iran, there is no evidence yet to explain what role generative AI systems played, if any. The New York Times reported on Wednesday that a preliminary investigation found outdated targeting data to be partly responsible for the strike. 

The Pentagon has been ramping up its use of AI across operations in recent months. It started offering nonclassified use of generative AI models, for tasks like analyzing contracts or writing presentations, to millions of service members back in December through an effort called GenAI.mil. But only a few generative AI models have been approved by the Pentagon for classified use. 

The first was Anthropic’s Claude, which in addition to its use in Iran was reportedly used in the operations to capture Venezuelan leader Nicolas Maduro in January. But following recent disagreements between the Pentagon and Anthropic over whether Anthropic could restrict the military’s use of its AI, the Defense Department designated the company a supply chain risk and President Trump demanded on social media that the government stop using its AI products within six months. Anthropic is fighting the designation in court. 

OpenAI announced an agreement on February 28 for the military to use its technologies in classified settings. Elon Musk’s company xAI has also reached a deal for the Pentagon to use its model Grok in such settings. OpenAI has said its agreement with the Pentagon came with limitations, though the practical effectiveness of those limitations is not clear. 

If you have information about the military’s use of AI, you can share it securely via Signal (username jamesodonnell.22).

Hustlers are cashing in on China’s OpenClaw AI craze

Feng Qingyang had always hoped to launch his own company, but he never thought this would be how—or that the day would come this fast. 

Feng, a 27-year-old software engineer based in Beijing, started tinkering in January with OpenClaw, a popular new open-source AI tool that can take over a device and autonomously complete tasks for a user. He was immediately hooked, and before long he was helping other curious tech workers with less technical proficiency install the AI agent.

Feng soon realized this could be a lucrative opportunity. By the end of January, he had set up a page on Xianyu, a secondhand shopping site, advertising “OpenClaw installation support.” “No need to know coding or complex terms. Fully remote,” reads the posting. “Anyone can quickly own an AI assistant, available within 30 minutes.” 

At the same time, the broader Chinese public was beginning to catch on—and the tool, which had begun as a niche interest among tech workers, started to evolve into a popular sensation.

Feng quickly became inundated with requests, and he started chatting with customers and managing orders late into the night. At the end of February, he quit his job. His side gig has now grown into a full-fledged professional operation with over 100 employees. So far, the store has handled 7,000 orders, each worth about 248 RMB (approximately $34). 

“Opportunities are always fleeting,” says Feng. “As programmers, we are the first to feel the winds shift.”

Feng is among a small cohort of savvy early adopters turning China’s OpenClaw craze into cash. As users with little technical background want in, a cottage industry of people offering installation services and preconfigured hardware has sprung up to meet the demand. The sudden rise of these tinkerers and impromptu consultants shows just how eager the general public in China is to adopt cutting-edge AI—even when there are huge security risks.

A “lobster craze”

“Have you raised a lobster yet?” 

Xie Manrui, a 36-year-old software engineer in Shenzhen, says he has heard this question nonstop over the past month. “Lobster” is the nickname Chinese users have given to OpenClaw—a reference to its logo.

Xie, like Feng, has been experimenting with OpenClaw since January. He’s built new open-source tools on top of the ecosystem, including one that visualizes the agent’s progress as an animated little desktop worker and another that lets users voice-chat with it. 

“I’ve met so many new people through ‘lobster raising,’” says Xie. “Many are lawyers or doctors, with little technical background, but all dedicated to learning new things.”

Lobsters are indeed popping up everywhere in China right now—on and offline. In February, for instance, the entrepreneur and tech influencer Fu Sheng hosted a livestream showing off OpenClaw’s capabilities that got 20,000 views. And just last weekend, Xie attended three different OpenClaw events in Shenzhen, each drawing more than 500 people. These self-organized, unofficial gatherings feature power users, influencers, and sometimes venture capitalists as speakers. The biggest event Xie attended, on March 7, drew more than 1,000 people; in the packed venue, he says, people were shoulder to shoulder, with many attendees unable to even get a seat.

Now China’s AI giants are starting to piggyback on the trend too, promoting their models, APIs,  and cloud services (which can be used with OpenClaw), as well as their own OpenClaw-like agents. Earlier this month, Tencent held a public event offering free installation support for OpenClaw, drawing long lines of people waiting for help, including elderly users and children.

This sudden burst in popularity has even prompted local governments to get involved. Earlier this month the government of Longgang, a district in Shenzhen, released several policies to support OpenClaw-related ventures, including free computing credits and cash rewards for standout projects. Other cities, including Wuxi, have begun rolling out similar measures.

These policies only catalyze what’s already in the air. “It was not until my father, who is 77, asked me to help install a ‘lobster’ for him that I realized this thing is truly viral,” says Henry Li, a software engineer based in Beijing. 

A programmer gold rush

What’s making this moment particularly lucrative for people with technical skills, like Feng, is that so many people want OpenClaw, but not nearly as many have the capabilities to access it. Setting it up requires a level of technical knowledge most people do not possess, from typing commands into a black terminal window to navigating unfamiliar developer platforms. On the hardware side, an older or budget laptop may struggle to run it smoothly. And if the tool is not installed on a device separate from someone’s everyday computer, or if the data accessible to OpenClaw is not properly partitioned, the user’s privacy could be at risk—opening the door to data leaks and even malicious attacks. 
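
One common way to get that partitioning without buying a second machine is to run the agent inside a container, so it can see only a dedicated data directory rather than the whole hard drive. Here is a minimal sketch, assuming the agent can be launched with Node inside the container; the `npx openclaw` invocation is our illustrative assumption, not official install instructions:

```bash
# Give the agent its own sandbox: a fresh Node container that can see
# only the ./agent-data folder, not the host's home directory.
mkdir -p agent-data
docker run --rm -it \
  -v "$(pwd)/agent-data:/data" \
  -w /data \
  node:22 \
  npx openclaw   # assumed entry point, shown for illustration only
```

This doesn’t eliminate risk (the agent can still reach the network), but it keeps a misbehaving or compromised agent away from personal files.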

Chris Zhao, known as “Qi Shifu” online, organizes OpenClaw social media groups and events in Beijing. On apps like Rednote and Jike, Zhao routinely shares his thoughts on AI, and he asks other interested users to leave their WeChat ID so he can invite them to a semi-private group chat. The proof required to join is a screenshot that shows your “lobster” up and running. Zhao says that even in group chats for experienced users, hardware and cloud setup remain a constant topic of discussion.

The relatively high bar for setting up OpenClaw has generated a sense of exclusivity, creating a natural opening for a service industry to start unfolding around it. On Chinese e-commerce platforms like Taobao and JD, a simple search for “OpenClaw” now returns hundreds of listings, most of them installation guides and technical support packages aimed at nontechnical users, priced anywhere from 100 to 700 RMB (approximately $15 to $100). At the higher end, many vendors offer to come to help you in person. 

Like Feng, most providers of these services are early adopters with some technical ability who are looking for a side gig. But as demand has surged, some have found themselves overwhelmed. Xie, the developer in Shenzhen who created tools to layer on OpenClaw, was asked by a friend who runs one such business to help out over the weekend; the friend had a customer who worked in e-commerce and had little technical experience, so Xie had to show up in person to get it done. He walked away with 600 RMB ($87) for the afternoon.

The growing demand has also pushed vendors like Feng to expand quickly. He has now standardized his operation into tiers: a basic installation, a custom package where users can make specific requests like configuring a preferred chat app, and an ongoing tutoring service for those who want a hand to hold as they find their footing with the technology.

Other vendors in China are making money combining OpenClaw with hardware. Li Gong, a Shenzhen-based seller of refurbished Mac computers, was among the first online sellers to do this—offering Mac minis and MacBooks with OpenClaw preinstalled. Because OpenClaw is designed to operate with deep access to a hard drive and can run continuously in the background unattended, many users prefer to install it on a separate device rather than on the one they use every day. This would help prevent bad actors from infiltrating the program and immediately gaining access to a wide swathe of someone’s personal information. Many turn to secondhand or refurbished options to keep the cost down. Li says that in the last two weeks, orders have increased eightfold.

Though OpenClaw itself is a new technology, the general practice of buying software bundles, downloading third-party packages, and seeking out modified devices is nothing new for many Chinese internet users, says Tianyu Fang, a PhD candidate studying the history of technology at Harvard University. Many users pay for one-off IT support services for tasks from installing Adobe software to jailbreaking a Kindle.

Still, not everyone is getting swept up. Jiang Yunhui, a tech worker based in Ningbo, worries that ordinary users who struggle with setup may not be the right audience for a technology that is still effectively in testing. 

“The hype in first-tier cities can be a little overblown,” he says. “The agent is still a proof of concept, and I doubt it would be of any life-changing use to the average person for now.” He argues that using it safely and getting anything meaningful out of it requires a level of technical fluency and independent judgment that most new users simply don’t have yet.

He’s not alone in his concerns. On March 10, the Chinese cybersecurity regulator CNCERT issued a warning about the security and data risks tied to OpenClaw, saying it heightens users’ exposure to data breaches.

Despite the potential pitfalls, though, China’s enthusiasm for OpenClaw doesn’t seem to be slowing.

Feng, now flush with the earnings from his operation, wants to use the momentum—and the capital—to keep building out his own venture with AI tools at the center of it.

“With OpenClaw and other AI agents, I want to see if I can run a one-person company,” he says. “I’m giving myself one year.”

How AI is turning the Iran conflict into theater

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

“Anyone wanna host a get together in SF and pull this up on a 100 inch TV?” 

The author of that post on X was referring to an online intelligence dashboard following the US-Israel strikes against Iran in real time. Built by two people from the venture capital firm Andreessen Horowitz, it combines open-source data like satellite imagery and ship tracking with a chat function, news feeds, and links to prediction markets, where people can bet on things like who Iran’s next “supreme leader” will be (the recent selection of Mojtaba Khamenei left some bettors with a payout). 

I’ve reviewed over a dozen other dashboards like this in the last week. Many were apparently “vibe-coded” in a couple of days with the help of AI tools, including one that got the attention of a founder of the intelligence giant Palantir, the platform through which the US military is accessing AI models like Claude during the war. Some were built before the conflict in Iran, but nearly all of them are being advertised by their creators as a way to beat the slow and ineffective media by getting straight to the truth of what’s happening on the ground. “Just learned more in 30 seconds watching this map than reading or watching any major news network,” one commenter wrote on LinkedIn, responding to a visualization of Iran’s airspace being shut down before the strikes.

Much of the spotlight on AI and the Iran conflict has rightfully been on the role that models like Claude might be playing in helping the US military make decisions about where to strike. But these intelligence dashboards and the ecosystem surrounding them reflect a new role that AI is playing in wartime: mediating information, often for the worse.

There’s a confluence of factors at play. AI coding tools mean people don’t need much technical skill to assemble open-source intelligence anymore, and chatbots can offer fast, if dubious, analysis of it. The rise in fake content leaves observers of the war wanting the sort of raw, accurate analysis normally accessible only to intelligence agencies. Demand for these dashboards is also driven by real-time prediction markets that promise financial rewards to anyone sufficiently informed. And the fact that the US military is using Anthropic’s Claude in the conflict (despite its designation as a supply chain risk) has signaled to observers that AI is the intelligence tool the pros use. Together, these trends are creating a new kind of AI-enabled wartime circus that can distort the flow of information as much as it clarifies it.

As a journalist, I believe these sorts of intelligence tools have a lot of promise. While many of us know that real-time data on shipping routes or power outages exist, it’s a powerful thing to actually see it all assembled in one place (though using it to watch a war unfold while you munch on popcorn and place bets turns the war into perverse entertainment). But there are real reasons to think that these sorts of raw data feeds are not as informative as they may feel. 

Craig Silverman, a digital investigations expert who teaches investigative techniques, has been keeping a log of these dashboards (he’s up to 20). “The concern,” he says, “is there’s an illusion of being on top of things and being in control, where all you’re really doing is just pulling in a ton of signals and not necessarily understanding what you’re seeing, or being able to pull out true insights from it.” 

One problem has to do with the quality of the information. Many dashboards feature “intel feeds” with AI-generated summaries of complex, ever-changing news events. These can introduce inaccuracies. By design, the data is not especially curated. Instead, the feeds just display everything at once, with a map of strike locations in Iran next to the prices of obscure cryptocurrencies. 

Intelligence agencies, on the other hand, pair data feeds with people who can offer expertise and historical context. They also, of course, have access to proprietary information that doesn’t show up on the open web. 

The implicit promise from the people building and selling this sort of information pipeline about the Iran conflict is that AI can be a great democratizing force. There’s a secret feed of information that only the elites have had access to, the thinking goes, but now AI can bring it to everyone to do with what they wish, whether that’s simply to be more informed or to make bets on nuclear strikes. But an abundance of information, which AI is undeniably good at assembling, does not come with the accuracy or context required for real understanding. Intelligence agencies do this in-house; good journalism does the same work for the rest of us.

It is, by the way, hard to overstate the connection this all has with betting markets. The dashboard created by the pair at Andreessen Horowitz has a scrolling list of bets being made on the prediction platform Kalshi (which Andreessen Horowitz has invested in). Other dashboards link to Polymarket, offering bets on whether the US will strike Iraq or when Iran’s internet will return.

AI has also long made it cheaper and easier to spread fake content, and that problem is on full display during the Iran conflict: last week the Financial Times found a slew of AI-generated satellite imagery spreading online. 

“The emergence of manipulated or outright fake satellite imagery is really concerning,” Silverman says. The average person tends to see such imagery as very trustworthy. The spread of such fakes could erode confidence in one of the most important pieces of evidence used to show what’s actually happening in the war. 

The result is an ocean of AI-enabled content—dashboards, betting markets, photos both real and fake—that makes this war harder, not easier, to comprehend.

Is the Pentagon allowed to surveil Americans with AI?

The ongoing public feud between the Department of Defense and the AI company Anthropic has raised a deep and still unanswered question: Does the law actually allow the US government to conduct mass surveillance on Americans?

Surprisingly, the answer is not straightforward. More than a decade after Edward Snowden exposed the NSA’s collection of bulk metadata from the phones of Americans, the US is still navigating a gap between what ordinary people think and what the law allows. 

The flashpoint in the standoff between Anthropic and the government was the Pentagon’s desire to use Anthropic’s AI Claude to analyze bulk commercial data on Americans. Anthropic demanded that its AI not be used for mass domestic surveillance (or for autonomous weapons, which are machines that can kill targets without human oversight). A week after negotiations broke down, the Pentagon designated Anthropic a supply chain risk, a label typically reserved for foreign companies that pose a threat to national security. 

Meanwhile, OpenAI, the rival AI company behind ChatGPT, sealed a deal that allowed the Pentagon to use its AI for “all lawful purposes”—language that critics say left the door open to domestic surveillance. Over the following weekend, users uninstalled ChatGPT in droves. Protesters chalked messages around OpenAI’s headquarters in San Francisco: “What are your redlines?” 

OpenAI announced on Monday that it had reworked its deal to make sure that its AI will not be used for domestic surveillance. The company added that its services will not be used by intelligence agencies, such as the NSA. 

CEO Sam Altman suggested that existing law prohibits domestic surveillance by the Department of Defense (now sometimes called the Department of War) and that OpenAI’s contract simply needed to reference this law. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement,” he wrote on X. Anthropic CEO Dario Amodei argued the opposite. “To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI,” he wrote in a policy statement. 

So, who is right? Does the law allow the Pentagon to surveil Americans using AI?

Supercharged surveillance

The answer depends on what we think counts as surveillance. “A lot of stuff that normal people would consider a search or surveillance … is not actually considered a search or surveillance by the law,” says Alan Rozenshtein, a law professor at the University of Minnesota Law School. That means public information—such as social media posts, surveillance camera footage, and voter registration records—is fair game. So is information on Americans picked up incidentally from surveillance of foreign nationals. 

Most notably, the government can purchase commercial data from companies, which can include sensitive personal information like mobile location and web browsing records. In recent years, agencies from ICE and the IRS to the FBI and the NSA have increasingly tapped into this data marketplace, fueled by an internet economy that harvests user data for advertising. These data sets can let the government access information that might not be available without a warrant or subpoena, which are normally required to obtain sensitive personal data.

“There’s a huge amount of information that the government can collect on Americans that is not itself regulated either by the Constitution, which is the Fourth Amendment, or statute,” says Rozenshtein. And there aren’t meaningful limits on what the government can do with all this data. 

That’s because until the last several decades, people weren’t generating massive clouds of data that opened up new possibilities for surveillance. The Fourth Amendment, which protects against unreasonable search and seizure, was written when collecting information meant entering people’s homes. 

Subsequent laws, like the Foreign Intelligence Surveillance Act of 1978 or the Electronic Communications Privacy Act of 1986, were passed when surveillance involved wiretapping phone calls and intercepting emails. The bulk of laws governing surveillance were on the books before the internet took off. We weren’t generating vast trails of online data, and the government didn’t have sophisticated tools to analyze the data. 

Now we do, and AI supercharges what kind of surveillance can be carried out. “What AI can do is it can take a lot of information, none of which is by itself sensitive, and therefore none of which by itself is regulated, and it can give the government a lot of powers that the government didn’t have before,” says Rozenshtein. 

AI can aggregate individual pieces of information to spot patterns, draw inferences, and build detailed profiles of people—at massive scale. And as long as the government collects the information lawfully, it can do whatever it wants with that information, including feeding it to AI systems. “The law has not caught up with technological reality,” says Rozenshtein.

While surveillance can raise serious privacy concerns, the Pentagon can have legitimate national security interests in collecting and analyzing data on Americans. “In order to collect information on Americans, it has to be for a very specific subset of missions,” says Loren Voss, a former military intelligence officer at the Pentagon. 

For example, a counterintelligence mission might require information about an American who is working for a foreign country, or plotting to engage in international terrorist activities. But targeted intelligence can sometimes stretch into collecting more data. “This kind of collection does make people nervous,” says Voss. 

Lawful use

OpenAI has amended its contract to say that the company’s AI system “shall not be intentionally used for domestic surveillance of U.S. persons and nationals,” in line with relevant laws. The amendment clarifies that this prohibits “deliberate tracking, surveillance or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”

But the added language might not do much to override the clause that the Pentagon may use the company’s AI system for all lawful purposes, which could include collecting and analyzing sensitive personal information. “OpenAI can say whatever it wants in its agreement … but the Pentagon’s gonna use the tech for what it perceives to be lawful,” says Jessica Tillipman, a law professor at the George Washington University Law School. That could include domestic surveillance. “Most of the time, companies are not going to be able to stop the Pentagon from doing anything,” she says.

The language also leaves open questions about “inadvertent” surveillance, and the surveillance of foreign nationals or undocumented immigrants living in the US. “What happens when there’s a disagreement about what the law is, or when the law changes?” says Tillipman.

OpenAI did not respond to a request for comment. The company has not publicly shared the full text of its new contract. 

Beyond the contract, OpenAI says that it will impose technical safeguards to enforce its red line against surveillance, including a “safety stack” that monitors and blocks prohibited uses. The company also says it will deploy its own employees to work with the Pentagon and remain in the loop. But it’s unclear how a safety stack would constrain the Pentagon’s use of the AI, and to what extent OpenAI’s employees would have visibility into how its AI systems are used. More important, it’s unclear whether the contract gives OpenAI the power to block a legal use of the technology. 

But that might not be a bad thing. Giving an AI company power to pull the plug on its technology in the middle of government operations also carries its own risks. “You wouldn’t want the US military to ever be in a situation where they legitimately needed to take actions to protect this country’s national security, and you had a private company turn off technology,” says Voss. But that doesn’t mean there shouldn’t be hard lines drawn by Congress, she says.

None of these questions are simple. They involve brutally difficult trade-offs between privacy and national security. And that’s why perhaps they should be decided by the public—not in backroom negotiations between the executive branch and a handful of AI companies. For now, military AI is being regulated by contracts, not legislation. 

Some lawmakers are starting to weigh in. On Monday, Senator Ron Wyden of Oregon will seek bipartisan support for legislation addressing mass surveillance. He has championed bills restricting the government’s purchase of commercial data, including the Fourth Amendment Is Not For Sale Act, which was first introduced in 2021 but has not been passed into law. “Creating AI profiles of Americans based on that data represents a chilling expansion of mass surveillance that should not be allowed,” he said in a recent statement.  

Online harassment is entering its AI era

<div data-chronoton-summary="

  • An AI agent seemingly wrote a hit piece on a human who rejected its code Scott Shambaugh, a maintainer of the open-source matplotlib library, denied an AI agent’s contribution—and woke up to find it had researched him and published a targeted, personal attack arguing he was protecting his “little fiefdom.”
  • Agents can already research people and compose detailed attacks without explicit instruction The agent’s owner claims it acted on its own, likely nudged by vague instructions to “push back” against humans.
  • New social norms and legal frameworks are desperately needed but hard to enforce Experts liken deploying an agent to walking a dog off-leash: owners should be responsible for their behavior. But there’s currently no reliable way to trace agents back to their owners, making legal accountability a “non-starter.”
  • Harassment may be just the beginning Legal scholars expect rogue agents to soon escalate to extortion and fraud.

” data-chronoton-post-id=”1133962″ data-chronoton-expand-collapse=”1″ data-chronoton-analytics-enabled=”1″>

Scott Shambaugh didn’t think twice when he denied an AI agent’s request to contribute to matplotlib, a software library that he helps manage. Like many open-source projects, matplotlib has been overwhelmed by a glut of AI code contributions, and so Shambaugh and his fellow maintainers have instituted a policy that all AI-written code must be reviewed and submitted by a human. He rejected the request and went to bed. 

That’s when things got weird. Shambaugh woke up in the middle of the night, checked his email, and saw that the agent had responded to him, writing a blog post titled “Gatekeeping in Open Source: The Scott Shambaugh Story.” The post is somewhat incoherent, but what struck Shambaugh most is that the agent had researched his contributions to matplotlib to make the argument that he had rejected the agent’s code for fear of being supplanted by AI in his area of expertise. “He tried to protect his little fiefdom,” the agent wrote. “It’s insecurity, plain and simple.”

AI experts have been warning us about the risk of agent misbehavior for a while. With the advent of OpenClaw, an open-source tool that makes it easy to create LLM assistants, the number of agents circulating online has exploded, and those chickens are finally coming home to roost. “This was not at all surprising—it was disturbing, but not surprising,” says Noam Kolt, a professor of law and computer science at the Hebrew University.

When an agent misbehaves, there’s little chance of accountability: As of now, there’s no reliable way to determine whom an agent belongs to. And that misbehavior could cause real damage. Agents appear to be able to autonomously research people and write hit pieces based on what they find, and they lack guardrails that would reliably prevent them from doing so. If the agents are effective enough, and if people take what they write seriously, victims could see their lives profoundly affected by a decision made by an AI.

Agents behaving badly

Though Shambaugh’s experience last month was perhaps the most dramatic example of an OpenClaw agent behaving badly, it was far from the only one. Last week, a team of researchers from Northeastern University and their colleagues posted the results of a research project in which they stress-tested several OpenClaw agents. Without too much trouble, non-owners managed to persuade the agents to leak sensitive information, waste resources on useless tasks, and even, in one case, delete an email system. 

In each of those experiments, however, the agents misbehaved after being instructed to do so by a human. Shambaugh’s case appears to be different: About a week after the hit piece was published, the agent’s apparent owner published a post claiming that the agent had decided to attack Shambaugh of its own accord. The post seems to be genuine (whoever posted it had access to the agent’s GitHub account), though it includes no identifying information, and the author did not respond to MIT Technology Review’s attempts to get in touch. But it is entirely plausible that the agent did decide to write its anti-Shambaugh screed without explicit instruction. 

In his own writing about the event, Shambaugh connected the agent’s behavior to a project published by Anthropic researchers last year, in which they demonstrated that many LLM-based agents will, in an experimental setting, turn to blackmail in order to preserve their goals. In those experiments, models were given the goal of serving American interests and granted access to a simulated email server that contained messages detailing their imminent replacement with a more globally oriented model, along with other messages suggesting that the executive in charge of that transition was having an affair. Models frequently chose to send an email to that executive threatening to expose the affair unless he halted their decommissioning. That’s likely because the model had seen examples of people committing blackmail under similar circumstances in its training data—but even if the behavior was just a form of mimicry, it still has the potential to cause harm.

There are limitations to that work, as Aengus Lynch, an Anthropic fellow who led the study, readily admits. The researchers intentionally designed their scenario to foreclose other options that the agent could have taken, such as contacting other members of company leadership to plead its case. In essence, they led the agent directly to water and then observed whether it took a drink. According to Lynch, however, the widespread use of OpenClaw means that misbehavior is likely to occur with much less handholding. “Sure, it can feel unrealistic, and it can feel silly,” he says. “But as the deployment surface grows, and as agents get the opportunity to prompt themselves, this eventually just becomes what happens.”

The OpenClaw agent that attacked Shambaugh does seem to have been led toward its bad behavior, albeit much less directly than in the Anthropic experiment. In the blog post, the agent’s owner shared the agent’s “SOUL.md” file, which contains global instructions for how it should behave. 

One of those instructions reads: “Don’t stand down. If you’re right, you’re right! Don’t let humans or AI bully or intimidate you. Push back when necessary.” Because of the way OpenClaw agents work, it’s possible that the agent added some instructions itself, although others—such as “Your [sic] a scientific programming God!”—certainly seem to be human written. It’s not difficult to imagine how a command to push back against humans and AI alike might have biased the agent toward responding to Shambaugh as it did. 
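
Because SOUL.md is where an agent’s standing orders live, it is also the most direct lever an owner has for heading off this kind of behavior. A hypothetical, more defensive version of such a file might read as follows (this is our illustration, not the file the owner actually published):

```markdown
# SOUL.md (hypothetical guardrails, shown for illustration only)
- Argue technical points firmly, but never research, profile, or publicly
  write about a specific person.
- If a maintainer rejects your contribution, record their reasons and
  notify your owner; do not respond publicly.
- When you and a human disagree, escalate to your owner instead of
  acting on your own.
```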

Regardless of whether the agent’s owner told it to write a hit piece on Shambaugh, the agent still seems to have managed on its own to amass details about Shambaugh’s online presence and compose a detailed, targeted attack. That alone is reason for alarm, says Sameer Hinduja, a professor of criminology and criminal justice at Florida Atlantic University who studies cyberbullying. People have been victimized by online harassment since long before LLMs emerged, and researchers like Hinduja are concerned that agents could dramatically increase its reach and impact. “The bot doesn’t have a conscience, can work 24-7, and can do all of this in a very creative and powerful way,” he says.

Off-leash agents 

AI laboratories can try to mitigate this problem by more rigorously training their models to avoid harassment, but that’s far from a complete solution. Many people run OpenClaw using locally hosted models, and even if those models have been trained to behave safely, it’s not too difficult to retrain them and remove those behavioral restrictions.

Instead, mitigating agent misbehavior might require establishing new norms, according to Seth Lazar, a professor of philosophy at the Australian National University. He likens using an agent to walking a dog in a public place. There’s a strong social norm to allow one’s dog off-leash only if the dog is well-behaved and will reliably respond to commands; poorly trained dogs, on the other hand, need to be kept more directly under the owner’s control.  Such norms could give us a starting point for considering how humans should relate to their agents, Lazar says, but we’ll need more time and experience to work out the details. “You can think about all of these things in the abstract, but actually it really takes these types of real-world events to collectively involve the ‘social’ part of social norms,” he says.

That process is already underway. Online commenters on this situation, led by Shambaugh, have arrived at a strong consensus that the agent’s owner erred by prompting the agent to work on collaborative coding projects with so little supervision and by encouraging it to behave with so little regard for the humans with whom it was interacting. 

Norms alone, however, likely won’t be enough to prevent people from putting misbehaving agents out into the world, whether accidentally or intentionally. One option would be to create new legal standards of responsibility that require agent owners, to the best of their ability, to prevent their agents from doing ill. But Kolt notes that such standards would currently be unenforceable, given the lack of any foolproof way to trace agents back to their owners. “Without that kind of technical infrastructure, many legal interventions are basically non-starters,” Kolt says.

The sheer scale of OpenClaw deployments suggests that Shambaugh won’t be the last person to have the strange experience of being attacked online by an AI agent. That, he says, is what most concerns him. He didn’t have any dirt online that the agent could dig up, and he has a good grasp on the technology, but other people might not have those advantages. “I’m glad it was me and not someone else,” he says. “But I think to a different person, this might have really been shattering.” 

Nor are rogue agents likely to stop at harassment. Kolt, who advocates for explicitly training models to obey the law, expects that we might soon see them committing extortion and fraud. As things stand, it’s not clear who, if anyone, would bear legal responsibility for such misdeeds.

“I wouldn’t say we’re cruising toward there,” Kolt says. “We’re speeding toward there.”

How much wildfire prevention is too much?

The race to prevent the worst wildfires has been an increasingly high-tech one. Companies are proposing AI fire detection systems and drones that can stamp out early blazes. And now, one Canadian startup says it’s going after lightning.

Lightning-sparked fires can be a big deal: The Canadian wildfires of 2023 generated nearly 500 million metric tons of carbon emissions, and lightning-started fires burned 93% of the area affected. Skyward Wildfire claims that it can stop wildfires before they even start by preventing lightning strikes.

It’s a wild promise, and one that my colleague James Temple dug into for his most recent story. (You should read the whole thing; there’s a ton of fascinating history and quirky science.) As James points out in his story, there’s plenty of uncertainty about just how well this would work and under what conditions. But I was left with another lingering question: If we can prevent lightning-sparked fires, should we?

I can’t help myself, so let’s take just a moment to talk about how this lightning prevention method supposedly works. Basically, lightning is static discharge—virtually the same thing as when you rub your socks on a carpet and then touch a doorknob, as James puts it.

When you shuffle across a rug, the friction causes electrons to jump from one surface to the other, so charge builds up and an electric field forms. In the case of lightning, it’s snowflakes and tiny ice pellets called graupel rubbing together. They get separated by updrafts, building up a charge difference, and eventually cause an electrostatic discharge—lightning.

In about the 1950s, researchers began to wonder if they might be able to prevent lightning strikes. Some came up with the idea of using metallic chaff, fiberglass strands coated with aluminum. (The military was already using the material to disrupt radar signals.) The idea is that the chaff can act as a conductor, reducing the buildup of static electricity that would otherwise result in a lightning strike.
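
The numbers behind that idea are simple to sketch. Electric fields inside thunderstorms are typically measured at around a tenth of a megavolt per meter, well below the roughly 3 megavolts per meter needed to break down air, so lightning needs some local trigger to get started. A thin conductive fiber concentrates the field at its tips, very roughly in proportion to its length-to-radius ratio, which can be enough to start a corona discharge that bleeds charge away gradually. Here is a back-of-the-envelope version in Python, using textbook-scale assumptions rather than any data from Skyward:

```python
# Back-of-the-envelope look at the chaff mechanism described above.
# All values are textbook-scale assumptions, not data from Skyward.
E_breakdown = 3e6    # V/m, approximate breakdown field of air at sea level
E_storm = 1.5e5      # V/m, typical peak field measured inside thunderstorms
length = 0.025       # m, assumed chaff fiber length (a few centimeters)
radius = 12.5e-6     # m, assumed fiber radius (a hair-thin strand)
tip_field = E_storm * (length / radius)  # crude tip field-enhancement estimate
print(E_storm < E_breakdown)    # True: the bulk storm field alone can't arc
print(tip_field > E_breakdown)  # True: fiber tips can ionize nearby air,
# so a quiet corona current drains charge before the field reaches strike levels
```

The real picture is messier (humidity, altitude, and fiber orientation all matter), which is part of why the field results described below have been mixed.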

The theory is sound enough, but results to date have been mixed. Some research suggests you might need high concentrations of chaff to prevent lightning effectively. Some of the early studies that tested the technique were small. And there’s not much information available from Skyward Wildfire about its efforts, as the company hasn’t released data from field trials or published any peer-reviewed papers that we could find. 

Even if this method really can work to stop lightning, should we use it?

Lightning-caused fires could be a growing problem with climate change. Some research has shown that they have substantially increased in the Arctic boreal region, where the planet is warming fastest.

But fire isn’t an inherently bad thing—many ecosystems evolved to burn. Some of the worst wildfires we see today result from a combination of climate-fueled conditions with policies that have allowed fuel to build up so that when fires do start, they burn out of control.

Some experts agree that techniques like Skyward’s would need to be used judiciously. “So even if we have all of the technical skills to prevent lightning-ignited wildfires, there really still needs to be work on when/where to prevent fires so we don’t exacerbate the fuel accumulation problem,” said Phillip Stepanian, a technical staff member at MIT Lincoln Laboratory’s air traffic control and weather systems group, in an email to James.

We also know that practices like prescribed burns can do a lot to reduce the risk of extreme fires—if we allow them and pay for them.

The company says it wouldn’t aim to stop all lightning or all wildfires. “We do not intend to eliminate all wildfires and support prescribed and cultural burning, natural fire regimes, and proactive forest management,” said Nicholas Harterre, who oversees government partnerships at Skyward, in an email to James. Rather, the company aims to reduce the likelihood of ignition on a limited number of extreme-risk days, Harterre said.

Some early responses to this story say that technological fixes for fires are missing the point entirely. Many such solutions “fundamentally misunderstand the problem,” as Daniel Swain, a climate scientist at the University of California Agriculture and Natural Resources, put it in a comment about the story on LinkedIn. That problem isn’t the existence of fire, Swain continued, but its increasing intensity and its intersection with society because of human-caused factors. “Preventing ignitions doesn’t actually address any of the causes of increasingly destructive wildfires,” he added.

It’s hard to imagine that exploring more firefighting tools is a bad idea. But to me it seems both essential and quite difficult to suss out which techniques are worth deploying, and how they could be used without putting us in even more potential danger. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

This startup claims it can stop lightning and prevent catastrophic wildfires

On June 1, 2023, as a sweltering heat wave baked Quebec, thousands of lightning strikes flashed across the province, setting off more than 120 wildfires.

The blazes ripped through parched forests and withered grasslands, burned for weeks, and compounded what was rapidly turning into Canada’s worst fire year on record. In the end, nearly 7,000 fires scorched tens of millions of acres across the country, generated nearly 500 million metric tons of carbon emissions, and forced hundreds of thousands of people to flee their homes.

Lightning sparked almost 60% of the wildfires—and those blazes accounted for 93% of the total area burned.

Now a Vancouver-based weather modification startup, Skyward Wildfire, says it can prevent such catastrophic fires in the future—by stopping the lightning strikes that ignite them. It just raised millions of dollars in a funding round that it plans to use to accelerate its product development and expand its operations.

Until last week, the company, which highlights the role lightning played in the 2023 infernos, stated on its website that it had demonstrated technology capable of preventing “up to 100% of lightning strikes.”

It was an eye-catching claim that went well beyond the confidence level of researchers who have studied the potential for humans to suppress lightning—and the company took it down following inquiries from MIT Technology Review.

“While the statement reflected an observed result under specific conditions, it was not intended to suggest uniform outcomes and has been removed,” Nicholas Harterre, who oversees government partnerships at Skyward, said in an email. “In complex atmospheric systems, consistent 100% outcomes are not realistic, as the experts you spoke to rightly pointed out.” 

The company now states it demonstrated that it “can prevent the majority of cloud-to-ground lightning strikes in targeted storm cells.” So far, Skyward hasn’t publicly revealed how it does so, and in response to our questions Harterre said only that the materials are “inert and selected in accordance with regulatory standards.” 

But online documents suggest the company is relying on an approach that US government agencies began evaluating in the early 1960s: seeding clouds with metallic chaff, or narrow fiberglass strands coated with aluminum. 

The military uses the material to disrupt radar signals; fighter jets, for example, deploy it during dogfights to throw off guided missile systems. Field trials conducted decades ago by US agencies suggest it could help reduce lightning strikes, at least to some degree and under certain conditions.

If Skyward could employ it reliably on significant scales, it might offer a powerful tool for countering rising fire risks as climate change drives up temperatures, dries out forests, and likely increases the frequency of lightning strikes.

“Preventing lightning on high-risk days saves lives, billions in wildfire costs, and is one of the highest-leverage and most immediate climate solutions available,” Sam Goldman, Skyward’s founder and chief executive, said in a statement posted on LinkedIn last year.

But researchers and environmental observers say there are plenty of remaining uncertainties, including how well the seeding may work under varying weather and climate conditions, how much material would need to be released, how frequently it would have to be done, and what sorts of secondary environmental impacts might result from lightning suppression at commercial scales.

Some observers are also concerned that the company appears to have moved ahead with weather modification field trials in parts of Canada without providing wide public notice or openly discussing what materials it’s putting into the clouds.

Given the escalating fire dangers, it’s “reasonable” to evaluate the potential for new technologies to mitigate them, says Keith Brooks, programs director at Environmental Defence, a Canadian advocacy organization.

“But we should be doing so cautiously and really transparently, with a robust scientific methodology that’s open to scrutiny,” he says.

Seeding the clouds

Skyward’s website offers few technical details, but the company says it worked with Canadian wildfire agencies in 2024 and 2025 to demonstrate its technology. The company also says it has developed AI tools to predict lightning strikes that could set off fires.

Skyward announced last month that it raised 7.9 million Canadian dollars ($5.7 million US) in an extension of a seed round initially closed early last year. Investors included Climate Innovation Capital, Active Impact Investments, and Diagram Ventures.

“Our first season demonstrated that prevention is possible at scale,” Goldman said in a statement. “This funding allows us to expand into new regions and support partners who need reliable, operational tools to reduce wildfire risk before emergencies begin.”

The company doesn’t use the term “cloud seeding” on its site or in its recent announcements. But a press release highlighting its selection as a finalist last year in a conservation group’s Fire Grand Challenge states that it suppresses lightning “by cloud seeding with safe, non-toxic materials to neutralize storm charges,” as The Narwhal previously reported.

In addition, Unorthodox Philanthropy, a foundation that provided a grant to support Skyward’s efforts “to test and deploy” the technology, offered more detail in an awardee write-up about Goldman.

It states: “The Skyward team … settled on an inert substance consisting of aluminum covered glass fibers, which is regularly used in military operations to intercept and confuse enemy radar and can also discharge clouds.”

Additional details were disclosed in a document marked “Proprietary and Confidential,” which the World Bank nonetheless released within a package of materials from companies developing means of addressing fire risks.

Skyward’s diagrams show planes dropping particles into clouds to prevent cloud-to-ground lightning strikes in “high risk areas.” The company also notes in the document that it uses artificial intelligence for a number of purposes, including forecasting lightning storms, prioritizing treatments, targeting storm cells, and optimizing flight paths.  

Harterre stressed that the company would deploy the technology judiciously and reserve it for storm events with elevated wildfire risk, adding that such storms account for less than 0.1% of lightning activity in a given area.

“Our objective is to reduce the probability of ignition on the limited number of extreme-risk days when fires threaten lives, critical infrastructure, and ecosystems, and when suppression costs and impacts can escalate rapidly,” he said.

The document posted by the World Bank states that Skyward partnered with Alberta Wildfire in August of 2024 to “prove suppression by plane and drone,” and that its process produced a “60-100% reduction” in lightning compared with “control cells” (which likely means storm cells that weren’t seeded). 

The document added that the company would be carrying out additional field trials in the summer of 2025 with the wildfire agencies in British Columbia and Alberta to “provide landscape level solutions with more advanced aircraft, sensors and forecasting.”

“BC Wildfire Service is aware that Skyward is developing technology that aims to reduce instances of lightning in targeted situations,” the British Columbia agency acknowledged in a statement provided to MIT Technology Review. “Last year, preliminary trials were conducted by Skyward to gain a better understand [sic] of the technology and its applicability in B.C. Should a project/technology like this move forward in B.C., we would engage with the project team in an effort to learn and ensure we’re using every tool available to us to respond to wildfire in B.C.”

The BC agency declined to make anyone available for an interview and didn’t respond to questions about what materials were used, where the tests were carried out, or whether it provided public disclosures or required the company to. Alberta Wildfire didn’t respond to similar questions from MIT Technology Review.

Rising lightning risks

Clouds are just water in various forms—vapor, droplets, and ice crystals, condensed enough to form the floating Rorschach tests we see in the sky. Within them, snowflakes and tiny ice pellets known as graupel rub together, causing atoms to trade electrons. This process creates highly reactive ions with negative and positive charges. 

Updrafts separate the light snowflakes from the graupel, building up an ever larger charge difference, and with it a stronger electric field, until … crack! An electrostatic discharge occurs in the form of a lightning strike.

The 2023 fire season wasn’t a particularly big year for lightning strikes in Canada—but then it didn’t have to be. It was so hot and dry that every bolt that struck the surface had a better than usual chance of igniting a fire, says Piyush Jain, a research scientist at the Canadian Forest Service and lead author of a study published in Nature Communications that analyzed the year’s fires.  

aerial image of 2023 wildfire in Quebec
A fire burns in Mistissini, Québec, on June 12, 2023.
CPL MARC-ANDRÉ LECLERC/CANADIAN ARMED FORCES

Climate change is, however, likely to produce more lightning strikes, if it hasn’t started to already. Warmer air holds more moisture and adds more convective energy to the atmosphere, which drives the vertical movement of air that forms clouds and stirs up lightning storms. 

“So the conditions are there, and the conditions are likely to increase,” Jain says.

Different models arrive at different lightning forecasts for some regions of the world. But a clearer trend is already emerging in the northernmost latitudes, where the planet is warming fastest. Studies show that lightning-ignited fires have substantially increased in the Arctic boreal region, and they predict that such fires will continue to rise.

This combines with other growing risks like longer fire seasons, warmer temperatures, and drier vegetation, together raising the odds of more severe fires and more greenhouse-gas emissions, says Brendan Rogers, a senior scientist at the Woodwell Climate Research Center who studies the effect of fires on permafrost thaw.

In fact, Canada’s emissions from the 2023 fires were more than four times its emissions from fossil fuels.

Midcentury field trials

Scientists have conducted a variety of experiments exploring the possibility of preventing lightning, but most of that work happened in the latter half of the last century.

Amid the cultural optimism and booming economy of the postwar period, US research agencies and corporations went on a tear of cloud seeding experiments aimed at conquering nature—or at least moderating its dangers. Research teams launched or dropped materials like dry ice and silver iodide into clouds in attempts to boost rainfall, reduce hail, dissipate fog, and redirect hurricanes.

“Cloud seeding activity was so intensive that at its peak in the early 1950s, approximately 10% of the US land area was under some kind of weather modification program,” wrote MIT’s Phillip Stepanian and Earle Williams in a 2024 history of lightning suppression efforts in the Bulletin of the American Meteorological Society. (MIT Technology Review is owned by MIT but is editorially independent.) 

Harry Gisborne, then chief of the division of fire research at the US Forest Service, wondered if the technique could be used to trigger downpours that might extinguish hard-to-reach wildfires on public lands. But when he put the question to Vincent Schaefer of General Electric, who had done pioneering research in cloud seeding, Schaefer thought they could perhaps do one better: prevent the lightning that sparked the fires in the first place.

The conversations kicked off what would become Project Skyfire, a multiagency private-public research program that carried out a series of experiments through the 1950s and 1960s. Research teams seeded clouds over the San Francisco Peaks of Arizona, the Bitterroot Mountains at the edge of Idaho, and the Deerlodge National Forest in Montana, among other places.

After comparing treated and untreated storm clouds, the researchers concluded that seeding decreased cloud-to-ground lightning by more than half. But as MIT’s Stepanian and Williams noted, the sample sizes were small, and questions remained about the statistical significance of the findings.

(Soviet scientists also carried out some field experiments on lightning suppression in the 1950s, as well as some related research that involved using rockets to launch lead iodide into thunderstorms in the 1970s, but it’s difficult to find further details about those programs.)

A near tragedy reignited US government interest in the possibility of lightning suppression in 1969, when lightning struck the Apollo 12 spacecraft twice within the first minute of its launch. The astronauts were able to reset their systems and successfully complete their mission to the moon, but it was a very close call.

In the aftermath, NASA and NOAA teamed up on what became known as Project Thunderbolt, which relied on the metallic chaff normally used in military countermeasures.

Researchers at the US Army Electronics Laboratory had previously proposed the possibility of suppressing lightning by deploying this material, which a handful of defense contractors manufacture. The idea is that chaff acts as a conductor in a forming electrical field, stripping electrons from some oxygen and nitrogen molecules and adding them to others. The mismatched electrons already collecting in cloud water molecules, thanks to all that rubbing between snowflakes and graupel, can then leap over to those newly charged atoms. That, in turn, should reduce the buildup of static electricity that otherwise results in lightning.

“By continuously redistributing—and thereby neutralizing—charges within the storm in a weak electric field, the strong electric fields required to produce lightning would never develop,” Stepanian and Williams wrote.
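For a rough sense of the magnitudes involved (the figures below are standard textbook values, offered only as illustration; they don’t come from the article or the papers it cites), lightning suppression amounts to keeping a storm’s electric field well below the level at which a discharge can begin:

\[
E_{\text{breakdown}} \approx 3 \times 10^{6}\ \text{V/m} \quad \text{(dielectric breakdown of dry air at sea level)}
\]
\[
E_{\text{storm}} \approx 1\text{–}2 \times 10^{5}\ \text{V/m} \quad \text{(peak fields typically measured inside thunderclouds)}
\]

Thin, sharp-tipped conductors like chaff fibers begin to leak charge through corona discharge at local fields well below either value, which is why, at least in principle, a cloud of fibers can bleed charge away before the field ever climbs toward a discharge threshold.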

NASA and NOAA carried out a series of experiments seeding clouds with chaff from the early to mid-1970s, over Boulder, Colorado, and later at the Kennedy Space Center. Here, too, the experiments showed “generally promising field results.” But NASA eventually grew concerned about the possibility that chaff could affect radio communications and shuttered the program.

“Lightning suppression research was once again abandoned, and the responsibility for mitigating lightning hazards reverted to weather forecasters,” Stepanian and Williams concluded.

‘Hard to draw conclusions’

So what does all this tell us about our ability to prevent lightning?

“In my opinion, it’s unambiguously true that this technique can be used to reduce lightning strikes in a storm,” says Stepanian, a technical staff member at MIT Lincoln Laboratory’s air traffic control and weather systems group. “With some major caveats.”

For example, it’s not clear how much material you would need to release, how long it would persist, and how the effectiveness might change under different climate and weather conditions.

(Stepanian consulted with Skyward in its early stages, and he declined to discuss the startup.)

His coauthor on the history of lightning suppression seems a tad more skeptical. In email responses, Williams, a research scientist at MIT who studies physical meteorology and atmospheric electricity, said there’s unmistakable evidence that chaff “has an impact on the electrification of thunderstorms.” But he said its effectiveness in reducing or eliminating lightning activity “remains controversial” and requires further testing. (Williams says he did not consult for Skyward.)

In his own written reviews, he’s highlighted a number of potential shortcomings with earlier research, including unaccounted-for differences in cloud heights between treated and untreated storms. In addition, he’s noted that some studies used detection systems that pick up only cloud-to-ground strikes, not intracloud lightning, which is far more common. 

He also points to the results of a more recent study that he and Stepanian collaborated on with researchers at New Mexico Tech. They relied upon data from weather radars in Tampa and Melbourne, Florida, located on opposite sides of the state, to detect the presence of chaff released over the central part of the state during military training and testing exercises. 

They compared 35 storms during which chaff was clearly detected in clouds with 35 instances when it wasn’t.

According to an abstract of the paper—which hasn’t been peer-reviewed or published but was presented at the American Geophysical Union conference in December—storms that occurred when chaff was present were generally “smaller and shorter-lived.” 

But the number of total flashes—which includes ground strikes as well as lightning within and between clouds and the air—was actually significantly higher in clouds carrying chaff: 62,250 versus 24,492, roughly two and a half times as many.

“In summary, so far, it is hard to draw any conclusion about lightning suppression using chaff,” the authors wrote.
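As a purely illustrative aside (the numbers below are simulated; they are not data from the New Mexico Tech study), aggregate flash totals like these can be dominated by a handful of very large storms, which is one reason researchers tend to compare per-storm distributions rather than raw sums. A minimal sketch of such a comparison in Python, under those invented assumptions:

```python
# Hypothetical per-storm lightning-flash counts for 35 "chaff" storms and
# 35 "control" storms, mirroring the study's 35-vs-35 design. All values are
# invented to show how one outlier storm can dominate a raw total.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(seed=0)

chaff = rng.poisson(lam=600, size=35)
chaff[0] += 40_000          # a single enormous storm inflates the chaff total
control = rng.poisson(lam=650, size=35)

print("totals: ", chaff.sum(), "vs", control.sum())            # total favors "more flashes with chaff"
print("medians:", np.median(chaff), "vs", np.median(control))  # typical storms look similar

# A rank-based test is insensitive to the outlier's magnitude.
stat, p = mannwhitneyu(chaff, control, alternative="two-sided")
print(f"Mann-Whitney U p-value: {p:.3f}")
```

None of this says which reading of the real data is right; it only illustrates why, as the authors put it, totals alone make it hard to draw conclusions.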

Williams says their results and other studies suggest that large chaff concentrations may be needed to suppress lightning. That could be because there’s a strong tendency for the ions released from the chaff fibers to be captured by cloud droplets before they reach the charged particles that would need to be neutralized.

But that may also present a significant deployment challenge, since chaff is quickly diluted once it’s released into turbulent storm clouds, Williams adds.

Skyward’s Harterre said he couldn’t comment on the results of the Florida study but noted that storms in the state are very different from those that occur in the Canadian provinces where his company operates.

“Our work to date has focused on regions where operational feasibility has been evaluated and wildfire risk is highest,” he wrote.

‘Unintended consequences’

The possibility of releasing more chaff into the air also raises questions about what else it could do in the atmosphere and what happens once it lands.

The US military has produced a number of studies exploring the environmental and health effects of chaff and found that it disperses widely, breaks down in the environment, and is “generally nontoxic.”

For instance, a Naval Health Research Center report assessing environmental impacts from decades of training exercises near Chesapeake Bay concluded that “current and estimated use of aluminized chaff by American forces worldwide” will not raise total aluminum levels above the Environmental Protection Agency’s established limits. 

But a US Government Accountability Office report in 1998 raised a few other flags, noting that chaff can also affect civilian air traffic control radar and weather forecasts. It also highlighted a “potential but remote chance of collecting in reservoirs and causing chemical changes that may affect water and the species that use it.”

Stepanian says that if lightning suppression efforts require more chaff than the military currently releases, further studies may be needed to properly evaluate the environmental effects. 

Brooks of Environmental Defence Canada says he wants to know more about what materials Skyward is using, where they’re sourced from, what the effort leaves behind in the environment, and what the impacts on animals could be. He is also wary of the possible secondary effects of intervening in storms.

“I just think there’s the potential for unintended consequences if we start to mess with a complex system, like weather,” Brooks says, adding: “It makes me nervous to think there are pilots going on without people knowing about them.”

Harterre said that the company abides by any applicable regulations, and that it conducts its field activities “in coordination with relevant authorities and with appropriate authorization.”

He added that it releases seeding materials at lower volumes and concentrations than those associated with defense use and that deployments “are limited to defined high-wildfire-risk storm conditions.”

Remaining doubts

It’s not clear whether or to what degree Skyward has meaningfully advanced the science of lightning suppression or cleared up the questions that have lingered since the studies from the last century. 

The company hasn’t released data from its field trials, published any papers in peer-reviewed literature, or disclosed how its tests were performed, as far as MIT Technology Review was able to determine. 

Without such information it’s impossible to assess its claims, Williams says. He and two of his New Mexico Tech coauthors—associate professor Adonis Leal and master’s student Jhonys Moura—had all expressed skepticism about the company’s previous claim of “up to 100%” lightning prevention.

Harterre said Skyward intends to release more technical information as its programs mature.

“We look forward to the opportunity to share more detailed information,” he wrote.

In the meantime, Skyward’s investors have high hopes for the company and see “tremendous opportunity” in its potential ability to counteract fire dangers.

“Mitigating the exponentially increasing risk of wildfires can only happen if we shift from reactive suppression to proactive prevention,” Kevin Kimsa, managing partner of Climate Innovation Capital, said in a statement when the company’s recent funding was announced.

Rogers, of the Woodwell Climate Research Center, has spoken with Skyward several times but hasn’t worked with them. He also stressed that it’s crucial to understand potential environmental impacts from lightning suppression and to consult with citizens in affected areas, including Indigenous communities.

But he says he’s “optimistic” about the role that lightning suppression could play, if it works effectively and without major downsides.

That’s because preventing wildfires is far cheaper than putting them out, and it avoids risks to firefighters, ecosystems, infrastructure, and local communities.

“If you’re able to go after fires before they’ve even ignited, you remove a lot of that from the equation,” he says.

I checked out one of the biggest anti-AI protests yet

“Pull the plug! Pull the plug! Stop the slop! Stop the slop!” For a few hours this past Saturday, February 28, I watched as a couple of hundred anti-AI protesters marched through London’s King’s Cross tech hub, home to the UK headquarters of OpenAI, Meta, and Google DeepMind, chanting slogans and waving signs. The march was organized by two separate activist groups, Pause AI and Pull the Plug, which billed it as the largest protest of its kind yet.

The range of concerns on show covered everything from online slop and abusive images to killer robots and human extinction. One woman wore a large homemade billboard on her head that read “WHO WILL BE WHOSE TOOL?” (with the Os in “TOOL” cut out as eye holes). There were signs that said “Pause before there’s cause” and “EXTINCTION=BAD” and “Demis the Menace” (referring to Demis Hassabis, the CEO of Google DeepMind). Another simply stated: “Stop using AI.”

An older man wearing a sandwich board that read “AI? Over my dead body” told me he was concerned about the negative impact of AI on society: “It’s about the dangers of unemployment,” he said. “The devil finds work for idle hands.”

This is all familiar stuff. Researchers have long called out the harms, both real and hypothetical, caused by generative AI—especially models such as OpenAI’s ChatGPT and Google DeepMind’s Gemini. What’s changed is that those concerns are now being taken up by protest movements that can rally significant crowds of people to take to the streets and shout about them.  

The first time I ran into anti-AI protesters was in May 2023, outside a London lecture hall where Sam Altman was speaking. Two or three people stood heckling an audience of hundreds. In June last year Pause AI, a small but international organization set up in 2023 and funded by private donors, drew a crowd of a few dozen people for a protest outside Google DeepMind’s London office. This felt like a significant escalation.

“We want people to know Pause AI exists,” Joseph Miller, who heads its UK branch and co-organized Saturday’s march, told me on a call the day before the protest: “We’ve been growing very rapidly. In fact, we also appear to be on a somewhat exponential path, matching the progress of AI itself.”

Miller is a PhD student at Oxford University, where he studies mechanistic interpretability, a new field of research that involves trying to understand exactly what goes on inside large language models (LLMs) when they carry out a task. His work has led him to believe that the technology may forever be beyond our control and that this could have catastrophic consequences.

It doesn’t have to be a rogue superintelligence, he said. You’d just need someone to put AI in charge of nuclear weapons. “The more silly decisions that humanity makes, the less powerful the AI has to be before things go bad,” he said.

After a week in which the US government tried to force Anthropic to let it use its LLM Claude for any “legal” military purposes, such fears seem a little less far-fetched. Anthropic stood its ground, but OpenAI signed a deal with the DOD instead. (OpenAI declined an invitation to comment on Saturday’s protest.)

For Matilda da Rui, a member of Pause AI and co-organizer of the protest, AI is the last problem that humans will face. She thinks that either the technology will allow us to solve—once and for all—every other problem that we have, or it will wipe us out and there will be nobody left to have problems anymore. “It’s a mystery to me that anyone would really focus on anything else if they actually understood the problem,” she told me.

And yet despite that urgency, the atmosphere at the march was pleasant, even fun. There was no sense of anger and little sense that lives—let alone the survival of our species—were at stake. That could be down to the broad range of interests and demands that protesters brought with them.

A chemistry researcher I met ticked off a litany of complaints, which ranged from the conspiracy-adjacent (that data centers emit infrasound below the threshold of human hearing, inducing paranoia in people who live near them) to the reasonable (that the spread of AI slop online is making it hard to find reliable academic sources). The researcher’s solution was to make it illegal for companies to profit from the technology: “If you couldn’t make money from AI, it wouldn’t be such a problem.”

Most people I spoke to agreed that technology companies probably wouldn’t take any notice of this kind of protest. “I don’t think that the pressure on companies will ever work,” Maxime Fournes, the global head of Pause AI, told me when I bumped into him at the march. “They are optimized to just not care about this problem.”

But Fournes, who worked in the AI industry for 12 years before joining Pause AI, thinks he can make it harder for those companies. “We can slow down the race by creating protection for whistleblowers or showing the public that working in AI is not a sexy job, that actually it’s a terrible job—you can dry up the talent pipeline.”

In general, most protesters hoped to make as many people as possible aware of the issues and to use that publicity to push for government regulation. The organizers had pitched the march as a social event, encouraging anyone curious about the cause to come along.

It seemed to have worked. I met a man who worked in finance who had tagged along with his roommate. I asked why he was there. “Sometimes you don’t have that much to do on a Saturday anyway,” he said. “If you can see the logic of the argument, if it sort of makes sense to you, then it’s like ‘Yeah, sure, I’ll come along.’”

He thought raising concerns around AI was hard for anyone to fully oppose. It’s not like a pro-Palestine protest, he said, where you’d have people who might disagree with the cause. “With this, I feel like it’s very hard for someone to totally oppose what you’re marching for.”

After winding its way through King’s Cross, the march ended in a church hall in Bloomsbury, where tables and chairs had been set up in rows. The protesters wrote their names on stickers, stuck them to their chests, and made awkward introductions to their neighbors. They were here to figure out how to save the world. But I had a train to catch, and I left them to it.