The Pentagon is planning for AI companies to train on classified data, defense official says

The Pentagon is discussing plans to set up secure environments for generative AI companies to train military-specific versions of their models on classified data, MIT Technology Review has learned. 

AI models like Anthropic’s Claude are already used to answer questions in classified settings; applications include analyzing targets in Iran. But allowing models to train on and learn from classified data would be a new development that presents unique security risks. It would mean sensitive intelligence like surveillance reports or battlefield assessments could become embedded into the models themselves, and it would bring AI firms into closer contact with classified data than before. 

Training versions of AI models on classified data is expected to make them more accurate and effective in certain tasks, according to a US defense official who spoke on background with MIT Technology Review. The news comes as demand for more powerful models is high: The Pentagon has reached agreements with OpenAI and Elon Musk’s xAI to operate their models in classified settings and is implementing a new agenda to become “an ‘AI-first’ warfighting force” as the conflict with Iran escalates. (The Pentagon did not comment on its AI training plans as of publication time.)

Training would be done in a secure data center that’s accredited to host classified government projects, and where a copy of an AI model is paired with classified data, according to two people familiar with how such operations work. Though the Department of Defense would remain the owner of the data, personnel from AI companies might in rare cases access the data if they have appropriate security clearance, the official said. 

Before allowing this new training, though, the official said, the Pentagon intends to evaluate how accurate and effective models are when trained on nonclassified data, like commercially available satellite imagery. 

The military has long used computer vision models, an older form of AI, to identify objects in images and footage it collects from drones and airplanes, and federal agencies have awarded contracts to companies to train AI models on such content. And AI companies building large language models (LLMs) and chatbots have created versions of their models fine-tuned for government work, like Anthropic’s Claude Gov, which are designed to operate across more languages and in secure environments. But the official’s comments are the first indication that AI companies building LLMs, like OpenAI and xAI, could train government-specific versions of their models directly on classified data.

Aalok Mehta, who directs the Wadhwani AI Center at the Center for Strategic and International Studies and previously led AI policy efforts at Google and OpenAI, says training on classified data, as opposed to just answering questions about it, would present new risks. 

The biggest of these, he says, is that classified information these models train on could be resurfaced to anyone using the model. That would be a problem if lots of different military departments, all with different classification levels and needs for information, were to share the same AI. 

“You can imagine, for example, a model that has access to some sort of sensitive human intelligence—like the name of an operative—leaking that information to a part of the Defense Department that isn’t supposed to have access to that information,” Mehta says. That could create a security risk for the operative, one that’s difficult to perfectly mitigate if a particular model is used by more than one group within the military.

However, Mehta says, it’s not as hard to keep information contained from the broader world: “If you set this up right, you will have very little risk of that data being surfaced on the general internet or back to OpenAI.” The government has some of the infrastructure for this already; the security giant Palantir has won sizable contracts for building a secure environment through which officials can ask AI models about classified topics without sending the information back to AI companies. But using these systems for training is still a new challenge. 

The Pentagon, spurred by a memo from Defense Secretary Pete Hegseth in January, has been racing to incorporate more AI. It has been used in combat, where generative AI has ranked lists of targets and recommended which to strike first, and in more administrative roles, like drafting contracts and reports.

There are lots of tasks currently handled by human analysts that the military might want to train leading AI models to perform, and doing so would require access to classified data, Mehta says. That could include learning to identify subtle clues in an image the way an analyst does, or connecting new information with historical context. The classified data could be pulled from the unfathomable amounts of text, audio, images, and video, in many languages, that intelligence services collect. 

It’s really hard to say which specific military tasks would require AI models to train on such data, Mehta cautions, “because obviously the Defense Department has lots of incentives to keep that information confidential, and they don’t want other countries to know what kind of capabilities we have exactly in that space.”

If you have information about the military’s use of AI, you can share it securely via Signal (username jamesodonnell.22).

The Download: glass chips and “AI-free” logos

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Future AI chips could be built on glass 

Human-made glass is thousands of years old. But it’s now poised to find its way into the AI chips used in the world’s newest and largest data centers.  

This year, a South Korean company called Absolics will start producing special glass panels that make next-generation computing hardware more powerful and efficient. Other companies, including Intel, are also pushing forward in this area.  

If all goes well, the technology could reduce the energy demands of chips in AI data centers—and even consumer laptops and mobile devices. Read the full story

—Jeremy Hsu

The must-reads 

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 

1 The race is on to establish a globally recognized “AI-free” logo 
Organizations are rushing to develop a universal label for human-made products. (BBC)
+ A “QuitGPT” campaign is urging people to ditch ChatGPT. (MIT Technology Review)

2 Elizabeth Warren wants answers on xAI’s access to military data 
The Pentagon reportedly gave it access to classified networks. (NBC News)
+ Here’s how chatbots could be used for targeting decisions. (MIT Technology Review)
+ The DoD is struggling to upgrade software for fighter jets. (Bloomberg $) 

3 Models are applying to be the faces of AI romance scams 
The “AI face models” are duping victims out of their money. (Wired $) 
+ Survivors have revealed how the “pig butchering” scams work. (MIT Technology Review)

4 Meta is planning layoffs that could affect over 20% of staff 
The job cuts could offset its costly bet on AI. (Reuters $) 
+ There’s a long history of fears about AI’s impact on jobs. (MIT Technology Review)

5 ByteDance delayed launching a video AI model after copyright disputes 
It famously generated footage of Tom Cruise and Brad Pitt fighting. (The Information $) 

6 Cybersecurity investigators have exposed a huge North Korean con 
The scammers secured remote jobs in the US, then stole money and sensitive information. (NBC News)

7 A Chinese AI startup is set for a whopping $18 billion valuation 
That’s more than quadruple its valuation just three months ago. (Bloomberg $) 
+ Chinese open models are spreading fast—here’s why that matters. (MIT Technology Review)  

8 Peter Thiel has started a lecture series about the antichrist in Rome 
His plans have drawn attention from the Catholic Church. (Reuters $) 

9 Norway is fighting back against internet enshittification 
It’s joined a global campaign against the online world’s decay. (The Guardian)
+ We may need to move beyond the big platforms. (MIT Technology Review)

10 How a startup plans to resurrect the dodo 
Humans wiped them out nearly 400 years ago—can gene editing bring them back now? (The Guardian)

Quote of the day 

“I would build fission weapons. I would build fusion weapons. Nuclear weapons have been one of the most stabilizing forces in history—ever.” 

—Anduril founder Palmer Luckey shares his love of nukes with Axios

One More Thing 

We need a moonshot for computing 


The US government is organizing itself for the next era of computing. Ultimately, it has one big choice to make: adopt a conservative strategy that aims to preserve its lead for the next five years—or orient itself toward genuine computing moonshots. 

There is no shortage of candidates, including quantum computing, neuromorphic computing and reversible computing. And there are plenty of novel materials and devices. These possibilities could even be combined to form hybrid computing systems. 

The National Semiconductor Technology Center can drive these ideas forward. To be successful, it would do well to follow DARPA’s lead by focusing on moonshot programs. Read the full story
 
—Brady Helwig & PJ Maykish 

We can still have nice things 

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.) 
 
+ A UPS delivery driver heroically escaped from two murderous turkeys. 
+ Art’s love affair with cats is charmingly depicted in a new book. 
+ The humble pea and six other forgotten superfoods promise accessible nutritional power. 
+ MF DOOM: Long Island to Leeds is the transatlantic tale of your favorite rapper’s favorite rapper. 

Nurturing agentic AI beyond the toddler stage

Parents of young children face a lot of fears about developmental milestones, from infancy through adulthood. The number of months it takes a baby to learn to talk or walk is often used as a benchmark for wellness, or an indicator of additional tests needed to properly diagnose a potential health condition. A parent rejoices over the child’s first steps and then realizes how much has changed when the child can quickly walk outside, instead of slowly crawling in a safe area inside. Suddenly safety, including childproofing, takes a completely different lens and approach.

Generative AI hit toddlerhood between December 2025 and January 2026 with the introduction of no-code tools from multiple vendors and the debut of OpenClaw, an open-source personal agent posted on GitHub. No more crawling on the carpet—the generative AI tech baby broke into a sprint, and very few governance principles were operationally prepared.

The accountability challenge: It’s not them, it’s you

Until now, governance has been focused on model output risks with humans in the loop before consequential decisions were made—such as with loan approvals or job applications. Model behavior, including drift, alignment, data exfiltration, and poisoning, was the focus. The pace was set by a human prompting a model in a chatbot format with plenty of back and forth interactions between machine and human.

Today, with autonomous agents operating in complex workflows, the vision and the benefits of applied AI require significantly fewer humans in the loop. The point is to operate a business at machine pace by automating manual tasks that have clear architecture and decision rules. The goal, from a liability standpoint, is that a machine operating a workflow introduces no more enterprise or business risk than a human operating the same workflow. CX Today summarizes the situation succinctly: “AI does the work, humans own the risk.” California state law AB 316, which went into effect January 1, 2026, removes the “AI did it; I didn’t approve it” excuse. This is similar to parenting, where an adult is held responsible for a child’s actions that negatively impact the larger community.

The challenge is that without code that enforces operational governance aligned to different levels of risk and liability along the entire workflow, the benefit of autonomous AI agents is negated. In the past, governance was static and aligned to the pace of interaction typical for a chatbot. Autonomous AI, by design, removes humans from many decisions, and governance has to adapt to that reality.

Considering permissions

Much like handing a three-year-old child a video game console that remotely controls an Abrams tank or an armed drone, leaving a probabilistic system operating without real-time guardrails that can change critical enterprise data carries significant risks.  For instance, agents that integrate and chain actions across multiple corporate systems can drift beyond privileges that a single human user would be granted. To move forward successfully, governance must shift beyond policy set by committees to operational code built into the workflows from the start.  

A humorous meme about toddlers and toys starts with all the reasons that whatever toy you have is mine, and ends with a broken toy that is definitely yours. OpenClaw, for example, delivered a user experience closer to working with a human assistant, but the excitement faded as security experts realized inexperienced users could easily be compromised by using it.

For decades, enterprise IT has lived with shadow IT and the reality that skilled technical teams must take over and clean up assets they did not architect or install, much like the toddler giving back a broken toy. With autonomous agents, the risks are larger: persistent service account credentials, long-lived API tokens, and permissions to make decisions over core file systems. To meet this challenge, it’s imperative to allocate upfront appropriate IT budget and labor to sustain central discovery, oversight, and remediation for the thousands of employee or department-created agents.

Having a retirement plan

Recently, an acquaintance mentioned that she saved a client hundreds of thousands of dollars by identifying and then ending a “zombie project”: a neglected or failed AI pilot left running on a GPU cloud instance. Potentially thousands of agents risk becoming a zombie fleet inside a business. Today, many executives encourage employees to use AI—or else—and employees are told to create their own AI-first workflows or AI assistants. With the utility of something like OpenClaw and top-down directives, it is easy to project that the number of build-my-own agents coming to the office with their human employees will explode. Since an AI agent is a program that falls under the definition of company-owned IP, when an employee changes departments or companies, those agents may be orphaned. Proactive policy and governance are needed to decommission and retire any agents linked to a specific employee ID and its permissions.

Financial optimization is governance out of the gate

While for some executives autonomous AI sounds like a way to improve operating margins by limiting human capital, many are finding that ROI framed as human labor replacement is the wrong angle to take. Adding AI capabilities to the enterprise is not like purchasing a new software tool with predictable instance-per-hour or per-seat pricing. A December 2025 IDC survey sponsored by DataRobot indicated that 96% of organizations deploying generative AI, and 92% of those implementing agentic AI, reported costs that were higher or much higher than expected.

The survey separates the concepts of governance and ROI, but as AI systems scale across large enterprises, financial and liability governance should be architected into the workflows from the beginning. Part of enterprise-class governance is predicting and adhering to allocated budgets. Unlike the software financial models of per-seat costs with support and maintenance fees, AI pricing is consumption based, and usage costs scale as the workflow scales across the enterprise: the more users, the more tokens or compute time, and the higher the bill. Think of it as a tab left open, or an online retailer’s digital shopping cart button unlocked on a toddler’s electronic game device.
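To make the contrast concrete, here is a minimal sketch of the two pricing models. All prices, token counts, and usage figures are hypothetical assumptions for illustration, not vendor pricing.

```python
# Illustrative sketch: per-seat software pricing vs. consumption-based AI pricing.
# SEAT_PRICE and PRICE_PER_MTOKEN are made-up numbers, not real vendor rates.

SEAT_PRICE = 30.00        # hypothetical flat monthly license per user
PRICE_PER_MTOKEN = 10.00  # hypothetical dollars per million tokens

def monthly_cost_per_seat(users: int) -> float:
    """Per-seat cost grows only with headcount."""
    return users * SEAT_PRICE

def monthly_cost_tokens(users: int, sessions_per_user: int, tokens_per_session: int) -> float:
    """Consumption cost grows with headcount AND with how heavily each workflow runs."""
    total_tokens = users * sessions_per_user * tokens_per_session
    return total_tokens / 1_000_000 * PRICE_PER_MTOKEN

print(monthly_cost_per_seat(100))             # 3000.0
print(monthly_cost_tokens(100, 200, 50_000))  # 10000.0
```

With the same 100 users, the consumption bill depends entirely on session counts and token volume, which is exactly the budgeting uncertainty the survey respondents reported.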

Cloud FinOps was deterministic, but generative AI, and the agentic AI systems built on it, are probabilistic. Some AI-first founders are realizing that a single agent’s token costs can be as high as $100,000 per session. Without guardrails built in from the start, chaining complex autonomous agents that run unsupervised for long periods of time can easily blow past the budget for hiring a junior developer.

Keeping humans in the loop remains critical

The promise of autonomous agentic AI is the acceleration of business operations, product introductions, customer experience, and customer retention. Shifting to machine-speed decisions without humans in or on the loop for these key functions significantly changes the governance landscape. While many of the principles (proactive permissions, discovery, audit, remediation, and financial operations and optimization) are the same, how they are executed has to shift to keep pace with autonomous agentic AI.

This content was produced by Intel. It was not written by MIT Technology Review’s editorial staff.

Where OpenAI’s technology could show up in Iran

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

It’s been just over two weeks since OpenAI reached a controversial agreement to allow the Pentagon to use its AI in classified environments. There are still pressing questions about what exactly OpenAI’s agreement allows for; Sam Altman said the military can’t use his company’s technology to build autonomous weapons, but the agreement really just demands that the military follow its own (quite permissive) guidelines about such weapons. OpenAI’s other main claim, that the agreement will prevent use of its technology for domestic surveillance, appears equally dubious.

It’s unclear what OpenAI’s motivations are. It’s not the first tech giant to embrace military contracts it had once vowed never to enter into, but the speed of the pivot was notable. Perhaps it’s just about money; OpenAI is spending lots on AI training and is on the hunt for more revenue (from sources including ads). Or perhaps Altman truly believes the ideological framing he often invokes: that liberal democracies (and their militaries) must have access to the most powerful AI to compete with China.

The more consequential question is what happens next. OpenAI has decided it is comfortable operating right in the messy heart of combat, just as the US escalates its strikes against Iran (with AI playing a larger role in that than ever before). So where exactly could OpenAI’s tech show up in this fight? And which applications will its customers (and employees) tolerate?

Targets and strikes

Though its Pentagon agreement is in place, it’s unclear when OpenAI’s technology will be ready for classified environments, since it must be integrated with other tools the military uses (Elon Musk’s xAI, which recently struck its own deal with the Pentagon, is expected to go through the same process with its AI model Grok). But there’s pressure to do this quickly because of controversy around the technology in use to date: After Anthropic refused to permit “any lawful use” of its AI, President Trump ordered the military to stop using it, and the Pentagon designated Anthropic a supply chain risk. (Anthropic is fighting the designation in court.)

If the Iran conflict is still underway by the time OpenAI’s tech is in the system, what could it be used for? A recent conversation I had with a defense official suggests it might look something like this: A human analyst could put a list of potential targets into the AI model and ask it to analyze the information and prioritize which to strike first. The model could account for logistics information, like where particular planes or supplies are located. It could analyze lots of different inputs in the form of text, image, and video. 

A human would then be responsible for manually checking these outputs, the official said. But that raises an obvious question: If a person is truly double-checking AI’s outputs, how is it speeding up targeting and strike decisions?

For years the military has been using another AI system, called Maven, which can handle things like automatically analyzing drone footage to identify possible targets. It’s likely that OpenAI’s models, like Anthropic’s Claude, will offer a conversational interface on top of that, allowing users to ask for interpretations of intelligence and recommendations for which targets to strike first. 

It’s hard to overstate how new this is: AI has long done analysis for the military, drawing insights out of oceans of data. But using generative AI’s advice about which actions to take in the field is being tested in earnest for the first time in Iran.

Drone defense

At the end of 2024, OpenAI announced a partnership with Anduril, which makes both drones and counter-drone technologies for the military. The agreement said OpenAI would work with Anduril to do time-sensitive analysis of drones attacking US forces and help take them down. An OpenAI spokesperson told me at the time that this didn’t violate the company’s policies, which prohibited “systems designed to harm others,” because the technology was being used to target drones and not people. 

Anduril provides a suite of counter-drone technologies to military bases around the world (though the company declined to tell me whether its systems are deployed near Iran). Neither company has provided updates on how the project has developed since it was announced. However, Anduril has long trained its own AI models to analyze camera footage and sensor data to identify threats; what it focuses less on are conversational AI systems that allow soldiers to query those systems directly or receive guidance in natural language—an area where OpenAI’s models may fit.

The stakes are high. Six US service members were killed in Kuwait on March 1 following an Iranian drone attack that was not intercepted by US air defenses. 

Anduril’s interface, called Lattice, is where soldiers can control everything from drone defenses to missiles and autonomous submarines. And the company is winning massive contracts—$20 billion from the US Army just last week—to connect its systems with legacy military equipment and layer AI on them. If OpenAI’s models prove useful to Anduril, Lattice is designed to incorporate them quickly across this broader warfare stack. 

Back-office AI

In December, Defense Secretary Pete Hegseth started encouraging millions of people in more administrative roles in the military—contracts, logistics, purchasing—to use a new AI tool. Called GenAI.mil, it provided a way for personnel to securely access commercial AI models and use them for the same sorts of things as anyone in the business world. 

Google Gemini was one of the first to be available. In January, the Pentagon announced that xAI’s Grok was going to be added to the GenAI.mil platform as well, despite incidents in which the model had spread antisemitic content and created nonconsensual deepfakes. OpenAI followed in February, with the company announcing that its models would be used for drafting policy documents and contracts and assisting with administrative support of missions.

Anyone using ChatGPT for unclassified tasks on this platform is unlikely to have much sway over sensitive decisions in Iran, but the prospect of OpenAI deploying on the platform is important in another way. It serves the all-in attitude toward AI that Hegseth has been pushing relentlessly across the Pentagon (even if many early users aren’t entirely sure what they’re supposed to use it for). The message is that AI is transforming every aspect of how the US fights, from targeting decisions down to paperwork. And OpenAI is increasingly winning a piece of it all.

Search Console Adds Brand Filters

Search Console now includes filters to show only branded queries or exclude them.

Google uses AI to classify queries, and notes that it can make mistakes. Branded queries include:

  • Name of company or site,
  • Domain,
  • Brand-specific products and services, including common misspellings.

The feature makes query filtering easier but adds no new functionality; query filters could already isolate brand names via regular expressions and AI prompts.

In my testing, the new filter produced brand-name variations, such as:

  • One word and two,
  • Hyphenated,
  • Abbreviated,
  • Misspellings.
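For sellers who prefer the manual route, the variations above can be approximated with a regular expression. The sketch below uses a made-up brand, “Acme Corp”; the patterns are illustrative assumptions, and Google’s actual filter relies on an AI classifier rather than a fixed pattern.

```python
import re

# Hypothetical branded-query matcher for a made-up brand, "Acme Corp".
# Covers one-word and two-word forms, hyphenation, the long corporate name,
# and one common misspelling ("acmi").
BRAND_PATTERN = re.compile(
    r"\b(?:"
    r"acme[\s-]?corp(?:oration)?"  # "acme corp", "acmecorp", "acme-corp", "acme corporation"
    r"|acme"                       # one-word form
    r"|acmi[\s-]?corp"             # frequent misspelling
    r")\b",
    re.IGNORECASE,
)

def is_branded(query: str) -> bool:
    """Return True if the search query appears to reference the brand."""
    return bool(BRAND_PATTERN.search(query))
```

A pattern like this can be pasted into the Performance report’s regex query filter; the trade-off versus the new built-in filter is that every variant, misspelling, and product name must be enumerated by hand.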

The filter correctly included the names of products and of human representatives, though it skipped quite a few branded items that it evidently didn’t recognize. For example, it identified a founder’s name as a branded query but not the title of the founder’s book.

It also included queries for unrelated executives and products, as well as for competitor names and clients’ case studies. Those inclusions may have been intentional, however, depending on Google’s definition of a “brand” search.

Brand-name filters

Despite the apparent mistakes, the feature makes it easier to analyze branded and non-branded rankings.

Here are a few use cases.

Find where you’re losing customers

Competitors likely bid on your brand terms in Google Ads or create “Alternative to [Your Brand]” pages.

If your average position for a branded term isn’t number 1, or if the click-through rate is lower than 50%, a competitor might be outranking you for your own name or aggressively advertising against it.

Evaluate each instance and create a plan to improve your branded search rankings.
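To apply this heuristic outside the Search Console interface, a short script can scan exported performance rows for branded terms under competitor pressure. The row format and column names below mimic a CSV export and are assumptions, as are the example queries.

```python
# Sketch: flag branded queries that a competitor may be intercepting, using the
# heuristics above (average position worse than 1, or CTR below 50%).
# Row dictionaries stand in for a hypothetical Search Console export.

def at_risk(rows: list[dict]) -> list[str]:
    """Return branded queries whose position or CTR suggests competitor pressure."""
    return [
        r["query"]
        for r in rows
        if r["position"] > 1.0 or r["ctr"] < 0.50
    ]

rows = [
    {"query": "acme corp", "position": 1.0, "ctr": 0.62},
    {"query": "acme corp pricing", "position": 2.3, "ctr": 0.41},
]
print(at_risk(rows))  # only the second query is flagged
```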

Track the impact of a new marketing campaign

Any marketing campaign (advertising, email, editorial outreach) likely elevates branded traffic. Annotate those campaigns in Performance reports to track the impact, even if the direct attribution is unclear. (Note the new branded filter shows results from February 21 onward.)

To add an annotation, right-click on any performance chart.

Right-click on a Performance chart to activate annotations.

Compare branded search by region

Global sellers can check brand searches by country. Use the branded query filter and then add any country. Sellers can also compare two countries, such as branded traffic in Canada and the United Kingdom.


Compare brand recognition in two countries, such as Canada and the U.K.

Launch Your Own Private-Label Brand

Merchants develop private-label brands to boost revenue, improve supply chain control, and differentiate their stores. The process is not simple, but it can be profitable.

Retailers are the customer-facing endpoint in a network of brands, manufacturers, and distributors.

Profit comes after those participants have taken a share.

Private Benefit

Amazon, Walmart, and Target, as examples, have their own brands. The aim is to improve unit economics in at least three ways.

  • Profit. Sourced directly from a manufacturer, private-label brands remove one or more layers of intermediaries from the supply chain, usually distributors or other brands. A nearly identical private brand can earn more margin, even at a low price.
  • Supply chain control. The retailer can select a manufacturer, define the product’s specifications, negotiate minimum order quantities, and align lead times with peak demand.
  • Differentiation. Because the private label is exclusive to its owner, no competitor can sell it or match the SKU. Customers who like the private brand must return to the retailer to buy more.

Thomasnet is a popular source for locating manufacturers, as is Alibaba.
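The profit advantage described above is just arithmetic on the supply chain. This sketch compares gross margins with hypothetical, illustrative numbers; none of these figures come from real products.

```python
# Hypothetical unit economics: a resold national brand vs. a private-label
# equivalent sourced directly from a manufacturer. All figures are assumptions.

def unit_margin(retail_price: float, unit_cost: float) -> float:
    """Gross margin as a fraction of the retail price."""
    return (retail_price - unit_cost) / retail_price

# Resold brand: the brand owner and distributor each take a cut before the retailer.
resold = unit_margin(retail_price=20.00, unit_cost=14.00)

# Private label: sourced at manufacturer cost, sold at a slightly lower price.
private = unit_margin(retail_price=18.00, unit_cost=9.00)

print(f"resold: {resold:.0%}, private label: {private:.0%}")
```

Even at a lower shelf price, the private-label item earns the higher margin because the intermediaries’ share is removed from the unit cost.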

Private vs. White Label

Private-label and white-label products are not the same. The difference is typically customization and exclusivity.

Private label goods are manufactured to a retailer’s specifications and sold exclusively under its brand. The retailer controls features, materials, packaging, and positioning.

White-label products are generic items that a manufacturer produces and sells to multiple brands. Retailers typically apply their own label or packaging but cannot substantially change the product itself.

Print-on-demand merchandise largely follows a white-label model since the base product is standard, although custom artwork can give it a private-label feel, a “private-label lite.”

Process

Launching a private-label product is a lot of work. Complications can arise at any point. The following steps are illustrative but not exhaustive. The devil is always in the details.

Choose a niche and validate

The first step is identifying a product with demand and room for improvement. Sources include marketplace listings, search trends, and product reviews to identify gaps. Even analyzing the products you already sell well can uncover potential private-label alternatives.

Tools such as Helium 10, Jungle Scout, and Google Trends can help estimate search demand and competition. Generative AI platforms can summarize thousands of product reviews to identify common complaints or feature requests.

Define the product and positioning

Once demand is validated, the merchant determines what makes the product unique.

Differentiation might include materials, features, packaging, manufacturing origin, or bundling complementary items. Even the price point can be a differentiator. Think good, better, best pricing, for example. The retailer also sets a target price and margin based on those differentiators.

Find manufacturers

Would-be private label owners typically locate manufacturers through directories such as Thomasnet and Alibaba, as well as trade shows and industry networks. The process usually involves requesting quotes, comparing minimum order quantities, and ordering samples.

Overseas manufacturers often have lower per-item costs, but domestic suppliers offer advantages such as faster shipping, easier communication, and improved quality oversight.

Create the brand and packaging

Branding turns a functional product into an offering. Choose a brand name, design a logo, and create packaging that communicates the product’s value and market positioning. Packaging impacts shipping durability and the unboxing experience.

Generative AI tools can assist with drafting product descriptions, generating branding ideas, and testing packaging concepts before committing to a final design.

Confirm compliance and product specifications

Before production begins, the merchant and manufacturer finalize product specifications and compliance requirements.

Specifications may include materials, dimensions, packaging instructions, and labeling. Some product categories and certifications require official testing.

Plan logistics and fulfillment

Next, the retailer decides how inventory will be stored and shipped. Will the merchant manage warehousing and fulfillment in-house? Will it outsource?

Logistics planning includes freight costs, lead times, and inventory management.

Place the production order

After approving the final product, the merchant places the first production order.

This order is the result of negotiations that cover unit price, payment terms, production timelines, and quality expectations. It kicks off the manufacturing process.

Launch

Launch strategies include email promotions, advertising, social media posts, and influencer partnerships. Initial customer reviews are important for establishing credibility.

AI-powered advertising tools can help with campaign targeting and bidding as performance data accumulates.

Improve and expand

The final step is iteration. Analyze reviews, returns, and sales data to refine the product. Improvements might involve materials, packaging, or new variations.

In short, private label products allow ecommerce companies to move beyond reselling existing brands and create merchandise directly associated with their businesses.

Future AI chips could be built on glass

Human-made glass is thousands of years old. But it’s now poised to find its way into the AI chips used in the world’s newest and largest data centers. This year, a South Korean company called Absolics is planning to start commercial production of special glass panels designed to make next-generation computing hardware more powerful and energy efficient. Other companies, including Intel, are also pushing forward in this area. If all goes well, such glass technology could reduce the energy demands of the sorts of high-performance computing chips used in AI data centers—and it could eventually do the same for consumer laptops and mobile devices if production costs fall.

The idea is to use glass as the substrate, or layer, on which multiple silicon chips are connected. This form of “packaging” is an increasingly popular way to build computing hardware, because it lets engineers combine specialized chips designed for specific functions into a single system. But it presents challenges, including the fact that hardworking chips can run so hot they physically warp the substrate they’re built on. This can lead to misaligned components and may reduce how efficiently the chips can be cooled, leading to damage or premature failure. 

“As AI workloads surge and package sizes expand, the industry is confronting very real mechanical constraints that impact the trajectory of high-performance computing,” says Deepak Kulkarni, a senior fellow at the chip design company Advanced Micro Devices (AMD). “One of the most fundamental is warpage.”

That’s where glass comes in. It can handle the added heat better than existing substrates, and it will let engineers keep shrinking chip packages—which will make them faster and more energy efficient. It “unlocks the ability to keep scaling package footprints without hitting a mechanical wall,” says Kulkarni. 

Momentum is building behind the shift. Absolics has finished building a factory in the US that is dedicated to producing glass substrates for advanced chips and expects to begin commercial manufacturing this year. The US semiconductor manufacturer Intel is working toward incorporating glass in its next-generation chip packages, and its research has spurred other companies in the chip packaging supply chain to invest in it as well. South Korean and Chinese companies are among the early adopters. “Historically, this is not the first attempt to adopt glass in semiconductor packaging,” says Bilal Hachemi, senior technology and market analyst at the market research firm Yole Group. “But this time, the ecosystem is more solid and wider; the need for glass-based [technology] is sharper.” 

Fragile but mighty

Chip packaging has relied on organic substrates such as fiberglass-reinforced epoxy since the 1990s, says Rahul Manepalli, vice president of advanced packaging at Intel. But electrochemical complications limit how closely designers can place drilled holes to create copper-coated signal and power connections between the chips and the rest of the system. Chip designers must also account for the unpredictable shrinkage and distortion that organic substrates undergo as chips heat up and cool down. “We realized about a decade ago that we are going to have some limitations with organic substrates,” says Manepalli.

These glass substrate test units were photographed at an Intel facility in Chandler, Arizona, in 2023.
INTEL CORPORATION

Glass may help overcome a lot of these limitations. Its thermal stability could allow engineers to create 10 times more connections per millimeter than organic substrates, says Manepalli. With denser connections, Intel’s designers can then stuff 50% more silicon chips into the same package area, improving computational capability. The denser connections also enable more efficient routing for the copper wires that deliver power to the chip. And the fact that glass dissipates heat more efficiently allows for chip designs that reduce overall power consumption. 

“The benefits of glass core substrates are undeniable,” says Manepalli. “It’s clear that the benefits will drive the industry to make this happen sooner rather than later, and we want to be one of the first ones who do it.” 

However, working with glass creates its own challenges. For one thing, it’s fragile. Glass substrates for data center chip packages are made from panels that are only about 700 micrometers to 1.4 millimeters thick, which leaves them susceptible to cracking or even shattering, says Manepalli. Researchers at Intel and other organizations have spent years figuring out how to use other materials and special tools to integrate the glass panels safely into semiconductor manufacturing processes. 

Now, Manepalli says, Intel’s research and development teams are reliably fabricating glass panels and churning out test chip packages that incorporate glass—and in early 2025 they demonstrated that a functional device with a glass core substrate could boot up the Windows operating system. It’s a significant improvement from the early testing days, when hundreds of glass panels got cracked every couple of days, he says.

Semiconductor manufacturers already use glass for more limited purposes, such as temporary support structures for silicon wafers. But the independent market research firm IDTechEx estimates there’s a big market for glass substrates, one that could boost the semiconductor market for glass from $1 billion in 2025 to as much as $4.4 billion by 2036. 
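The IDTechEx figures above imply a steep compound annual growth rate (CAGR). A quick check, using only the two data points given in the article:

```python
# Implied CAGR from the IDTechEx projection quoted above:
# $1B in 2025 growing to $4.4B by 2036 (11 years).

start, end, years = 1.0, 4.4, 2036 - 2025
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # implied CAGR: 14.4%
```

That works out to roughly 14% annual growth sustained for over a decade, which is why so many suppliers are positioning themselves in the glass substrate supply chain now.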

The material could have additional benefits if it takes off. Glass can be made astoundingly smooth—5,000 times smoother than organic substrates. This would eliminate defects that can arise as metal gets layered onto semiconductors, says Xiaoxi He, a research analyst at IDTechEx. Defects in these layers can worsen chips’ performance or even render them unusable.  

Glass could also help speed the movement of data. The material can guide light, which means chip designers could use it to build high-speed signal pathways directly into the substrate. Glass “holds enormous potential for the future of energy-efficient AI compute,” says Kulkarni at AMD, because a light-based system could move signals around with far less energy than the “power-hungry” copper pathways that are currently used to carry signals between chips in a package.

A panel pivot

Early research on glass packaging started at the 3D Systems Packaging Research Center at the Georgia Institute of Technology in 2009. The university eventually partnered with Absolics, a subsidiary of SKC, a South Korean company that produces chemicals and advanced materials. SKC constructed a semiconductor facility for manufacturing glass substrates in Covington, Georgia, in 2024, and that same year the glass substrate partnership between Absolics and Georgia Tech was awarded two grants worth a combined $175 million through the US government’s CHIPS for America program, established under the administration of President Joe Biden.

An Absolics employee monitors production of an early version of the company’s glass substrate.
COURTESY OF ABSOLICS INC

Now Absolics is moving toward commercialization; it plans to start manufacturing small quantities of glass substrates for customers this year. The company has led the way in commercializing glass substrates, says Yongwon Lee, a research engineer at Georgia Tech who is not directly involved in the commercial partnership with Absolics.

Absolics says its facility can currently produce a maximum of 12,000 square meters of glass panels a year. That’s enough, Lee estimates, to provide glass substrates for between 2 million and 3 million chip packages the size of Nvidia’s H100 GPU.
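Lee's estimate is easy to sanity-check. The H100-class package area used below is an assumed round figure for illustration; only the 12,000-square-meter annual capacity comes from the article.

```python
# Sanity check on the estimate above. The package area is an assumed
# figure (~5,500 mm^2 for an H100-class package); only the
# 12,000 m^2/year capacity is from the article.

capacity_mm2 = 12_000 * 1_000_000      # 12,000 m^2 expressed in mm^2
package_mm2 = 5_500                    # assumed H100-class package area
packages = capacity_mm2 / package_mm2
print(f"~{packages / 1e6:.1f} million packages per year")
```

The result lands around 2.2 million packages a year before accounting for yield loss and panel cutting waste, consistent with the 2-to-3-million range Lee cites.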

But the company isn’t alone. Lee says that multiple large manufacturers, including Samsung Electronics, Samsung Electro-Mechanics, and LG Innotek, have “significantly accelerated” their research and pilot production efforts in glass packaging over the past year. “This trend suggests that the glass substrate ecosystem is evolving from a single early mover to a broader industrial race,” he says.

Other companies are pivoting to play more specialized roles in the glass substrate supply chain. In 2025, JNTC, a company that makes electrical connectors and tempered glass for electronics, established a facility in South Korea that’s capable of producing 10,000 semi-finished glass panels per month. Such panels include drilled holes for vertical electrical connections and thin metal layers coating the glass, but they require additional manufacturing work for installation in chip packages. 

Last year, that South Korean facility began taking orders to supply semi-finished glass to both specialized substrate companies and semiconductor manufacturers. The company plans to expand the facility’s production in 2026 and open an additional manufacturing line in Vietnam in 2027. Such industry actions show how quickly glass substrate technology is moving from prototype to commercialization—and how many tech players are betting that glass could be a surprisingly strong foundation for the future of computing and AI.

The Download: how AI is used for military targeting, and the Pentagon’s war on Claude

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Defense official reveals how AI chatbots could be used for targeting decisions 

The US military might use generative AI systems to rank targets and recommend which to strike first, according to a Defense Department official. 

A list of possible targets could first be fed into a generative AI system that the Pentagon is fielding for classified settings. Humans might then ask the system to analyze the information and prioritize the targets. They would then be responsible for checking and evaluating the results and recommendations. 

OpenAI’s ChatGPT and xAI’s Grok could soon be at the center of exactly these sorts of high-stakes military decisions. Read the full story

—James O’Donnell 

The must-reads 

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 

1 The Pentagon’s CTO claims Claude would “pollute” the defense supply chain 
He blamed a “policy preference” that’s baked into the model. (CNBC)
+ Anthropic is reeling from OpenAI’s “compromise” with the DoD. (MIT Technology Review)

2 An ex-DOGE staffer has been accused of stealing social security data 
Then taking the information to his new job in the IT division of a government contractor. (Wired)
+ He allegedly used a thumb drive to steal the data. (Washington Post)

3 Ukraine is offering its battlefield data for AI training 
Allies can access the data to train drones and other autonomous systems. (Reuters)  
+ Europe has a drone-filled vision for the future of war. (MIT Technology Review)  

4 Meta has postponed its latest AI launch over performance issues 
It fell short of rival models from Google, OpenAI, and Anthropic. (NYT $) 
+ The company’s former AI chief is betting against LLMs. (MIT Technology Review) 

5 X could be breaching sanctions on Iran 
An account for Iran’s new supreme leader may break US rules. (Engadget)
+ Hacker group Handala has become the face of Iranian cyberwarfare. (Wired)
+ AI is turning the conflict into theater. (MIT Technology Review)  

6 A landmark social media addiction trial is wrapping up 
It’ll decide whether the platforms are liable for harms caused to children. (The Guardian)  
+ AI companions are the next stage of digital addiction. (MIT Technology Review)

7 Western AI models have “failed spectacularly” on agriculture in the Global South 
The biggest problem? They’re not trained on local data. (Rest of World)

8 Internet outages in Moscow are sparking surging sales of pagers 
The disruptions have been blamed on new tests of web controls. (Bloomberg $) 

9 Why is China obsessed with OpenClaw? 
Lobster-mania is spreading to the general public. (SCMP)
+ Tech-savvy “tinkerers” are cashing in on the craze. (MIT Technology Review)

10 Hollywood has soured on Silicon Valley 
Movies and TV shows have swapped eccentric founders for megalomaniac moguls. (NYT $) 

Quote of the day 

“We see a future where intelligence is a utility, like electricity or water, and people buy it from us on a meter.” 

—OpenAI CEO Sam Altman makes a new pitch to investors at a BlackRock event, Gizmodo reports. 

One More Thing 

How the Ukraine-Russia war is reshaping the tech sector in Eastern Europe 

Latvia’s annual national defense exercises took place in September and October, as the Ukraine-Russia war nears its third anniversary.
GATIS INDRĒVICS / LATVIAN MINISTRY OF DEFENSE

When Latvian startup Global Wolf Motors first pitched the idea of a military scooter, it was met with skepticism—and a wall of bureaucracy. Then Russia launched its full-scale invasion of Ukraine in February 2022, and everything changed.  

Suddenly, Ukrainian combat units wanted any equipment they could get their hands on, and they were willing to try out ideas that might not have made the cut in peacetime. 

Within weeks, the scooters were on the front line—and even behind it, being used on daring reconnaissance missions. It signaled that a new product category for companies along Ukraine’s borders had opened: civilian technologies repurposed for military needs. Read the full story

—Peter Guest 

We can still have nice things 

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.) 

+ A new mini magnet could slash the costs of MRIs and nuclear fusion.  
+ This interactive map of Earth offers new routes to facts about our planet. 
+ Escape the news cycle with this deep dive into the power of fantasy and nature. (Big thanks to reader and MIT alum Vicki for the find!) 
+ Reports of reading’s death are greatly exaggerated.

Why physical AI is becoming manufacturing’s next advantage

For decades, manufacturers have pursued automation to drive efficiency, reduce costs, and stabilize operations. That approach delivered meaningful gains, but it is no longer enough.

Today’s manufacturing leaders face a different challenge: how to grow amid labor constraints, rising complexity, and increasing pressure to innovate faster without sacrificing safety, quality, or trust. The next phase of transformation will not be defined by isolated AI tools or individual robots, but by intelligence that can operate reliably in the physical world.

This is where physical AI—intelligence that can sense, reason, and act in the real world—marks a decisive shift. And it is why Microsoft and NVIDIA are working together to help manufacturers move from experimentation to production at industrial scale.

The industrial frontier: Intelligence and trust, not just automation

Most early AI adoption focused on narrow optimization: automating tasks, improving utilization, and cutting costs. While valuable, that phase often created new friction, including skills gaps, governance concerns, and uncertainty about long‑term impact. Furthermore, the use cases were plentiful but rarely strategic.

The industrial frontier represents a different approach. Rather than asking how much work machines can replace, frontier manufacturers ask how AI can expand human capability, accelerate innovation, and unlock new forms of value while remaining trustworthy and controllable.

Across industries, companies that successfully move into this frontier phase share two non‑negotiables:

  • Intelligence: AI systems must understand how the business actually works: its data, workflows, and institutional knowledge.
  • Trust: As AI begins to act in high‑stakes environments, organizations must retain security, governance, and observability at every layer.

Without intelligence, AI becomes generic. Without trust, adoption stalls.

Why manufacturing is the proving ground for physical AI

Manufacturing is uniquely positioned at the center of this shift.

AI is no longer confined to planning or analytics. It is moving into physical execution: coordinating machines, adapting to real‑world variability, and working alongside people on the factory floor. Robotics, autonomous systems, and AI agents must now perceive, reason, and act in dynamic environments.

This transition exposes a critical gap. Traditional automation excels at repetition but struggles with adaptability. Human workers bring judgment and context but are constrained by scale. Physical AI closes that gap by enabling human‑led, AI‑operated systems, where people set intent and intelligent systems execute, learn, and improve over time. Humans are essential for scaled success.

Microsoft and NVIDIA: Accelerating physical AI at scale

Physical AI cannot be delivered through point solutions. It requires enterprise-grade, agent-driven toolchains and workflows for development, deployment, and operations that connect simulation, data, AI models, robotics, and governance into a coherent system.

NVIDIA is building the AI infrastructure that makes physical AI possible, including accelerated computing, open models, simulation libraries, and robotics frameworks and blueprints that enable the ecosystem to build autonomous robotics systems that can perceive, reason, plan, and take action in the physical world. Microsoft complements this with a cloud and data platform designed to operate physical AI securely, at scale, and across the enterprise.

Together, Microsoft and NVIDIA are enabling manufacturers to move beyond pilots toward production‑ready physical AI systems that can be developed, tested, deployed, and continuously improved across heterogeneous environments spanning the product lifecycle, factory operations, and supply chain.

From intelligence to action: Human-agent teams in the factory

At the industrial frontier, AI is not a standalone system, but a digital teammate.

When AI agents are grounded in the proper operational data, embedded in human workflows, and governed end to end, they can assist with tasks such as:

  • Optimizing production lines in real time
  • Coordinating maintenance and quality decisions
  • Adapting operations to supply or demand disruptions
  • Accelerating engineering and product lifecycle decisions

For example, manufacturers are beginning to use simulation‑grounded AI agents to evaluate production changes virtually before deploying them on the factory floor, reducing risk while accelerating decision‑making.

Crucially, frontier manufacturers design these systems so humans remain in control. AI executes, monitors, and recommends, while people provide intent, oversight, and judgment. This balance allows organizations to move faster without losing confidence or control.
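The human-led, AI-operated pattern described above can be sketched as an approval gate: the agent proposes actions with rationales, a person signs off, and only approved actions execute. The class and function names below are illustrative, not a real Microsoft or NVIDIA API, and the factory scenario is invented.

```python
# Sketch of a human-in-the-loop approval gate: the agent recommends,
# a person approves or defers, and only approved actions execute.
# Names and scenario are illustrative, not a real API.

class AgentRecommendation:
    def __init__(self, action, rationale):
        self.action = action
        self.rationale = rationale

def run_with_oversight(recommendations, human_approves):
    """Execute only the actions a human reviewer signs off on."""
    executed, deferred = [], []
    for rec in recommendations:
        (executed if human_approves(rec) else deferred).append(rec.action)
    return executed, deferred

recs = [
    AgentRecommendation("slow line 3 by 5%", "vibration anomaly on press 2"),
    AgentRecommendation("skip QA sampling", "throughput target at risk"),
]
# A conservative reviewer policy: never approve skipping quality checks.
executed, deferred = run_with_oversight(
    recs, human_approves=lambda r: "skip QA" not in r.action)
print(executed)   # ['slow line 3 by 5%']
print(deferred)   # ['skip QA sampling']
```

The key design choice is that the gate sits between recommendation and execution, so oversight is structural rather than optional, which is what makes governance auditable as these systems scale.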

The role of trust in scaling physical AI

As physical AI systems scale, trust becomes the limiting factor.

Manufacturers must ensure that AI systems are secure, observable, and operating within policy, especially when they influence safety‑critical or mission‑critical processes. Governance cannot be an afterthought; it must be engineered into the platform itself.

This is why frontier manufacturers treat trust as a first‑class requirement, pairing innovation with visibility, compliance, and accountability. Only then can physical AI move from promising demonstrations to enterprise‑wide deployment.

Why this moment matters—and what’s next

The convergence of AI agents, robotics, simulation, and real‑time data marks an inflection point for manufacturing. What was once experimental is becoming operational. What was once siloed is becoming connected.

At NVIDIA GTC 2026, Microsoft and NVIDIA will demonstrate how this collaboration supports physical AI systems that manufacturers can deploy today and scale responsibly tomorrow. From simulation‑driven development to real‑world execution, the focus is on helping manufacturers cross the industrial frontier with confidence.

For manufacturing leaders, the question is no longer whether physical AI will reshape operations, but how quickly they can adopt it responsibly, at scale, and with trust built in from the start.

Discover more with Microsoft at NVIDIA GTC 2026.

This content was produced by Microsoft. It was not written by MIT Technology Review’s editorial staff.