What AI “remembers” about you is privacy’s next frontier

The ability to remember you and your preferences is rapidly becoming a big selling point for AI chatbots and agents. 

Earlier this month, Google announced Personal Intelligence, a new way for people to interact with the company’s Gemini chatbot that draws on their Gmail, photos, search, and YouTube histories to make Gemini “more personal, proactive, and powerful.” It echoes similar moves by OpenAI, Anthropic, and Meta to add new ways for their AI products to remember and draw from people’s personal details and preferences. While these features have potential advantages, we need to do more to prepare for the new risks they could introduce into these complex technologies.

Personalized, interactive AI systems are built to act on our behalf, maintain context across conversations, and improve our ability to carry out all sorts of tasks, from booking travel to filing taxes. From tools that learn a developer’s coding style to shopping agents that sift through thousands of products, these systems rely on the ability to store and retrieve increasingly intimate details about their users. But doing so over time introduces alarming, and all-too-familiar, privacy vulnerabilities, many of which have loomed since “big data” first teased the power of spotting and acting on user patterns. Worse, AI agents now appear poised to plow through whatever safeguards have been adopted to avoid those vulnerabilities. 

Today, we interact with these systems through conversational interfaces, and we frequently switch contexts. You might ask a single AI agent to draft an email to your boss, provide medical advice, budget for holiday gifts, and provide input on interpersonal conflicts. Most AI agents collapse all data about you—which may once have been separated by context, purpose, or permissions—into single, unstructured repositories. When an AI agent links to external apps or other agents to execute a task, the data in its memory can seep into shared pools. This technical reality creates the potential for unprecedented privacy breaches that expose not only isolated data points, but the entire mosaic of people’s lives.

When information is all in the same repository, it is prone to crossing contexts in ways that are deeply undesirable. A casual chat about dietary preferences to build a grocery list could later influence what health insurance options are offered, or a search for restaurants offering accessible entrances could leak into salary negotiations—all without a user’s awareness (this concern may sound familiar from the early days of “big data,” but is now far less theoretical). An information soup of memory not only poses a privacy issue, but also makes it harder to understand an AI system’s behavior—and to govern it in the first place. So what can developers do to fix this problem?

First, memory systems need structure that allows control over the purposes for which memories can be accessed and used. Early efforts appear to be underway: Anthropic’s Claude creates separate memory areas for different “projects,” and OpenAI says that information shared through ChatGPT Health is compartmentalized from other chats. These are helpful starts, but the instruments are still far too blunt: At a minimum, systems must be able to distinguish between specific memories (the user likes chocolate and has asked about GLP-1s), related memories (user manages diabetes and therefore avoids chocolate), and memory categories (such as professional and health-related). Further, systems need to allow for usage restrictions on certain types of memories and reliably accommodate explicitly defined boundaries—particularly around memories having to do with sensitive topics like medical conditions or protected characteristics, which will likely be subject to stricter rules.

The need to keep memories separate in this way has important implications for how AI systems can and should be built. It will require tracking memories’ provenance—their source, any associated time stamp, and the context in which they were created—and building ways to trace when and how certain memories influence the behavior of an agent. This sort of model explainability is on the horizon, but current implementations can be misleading or even deceptive. Embedding memories directly within a model’s weights may result in more personalized and context-aware outputs, but structured databases are currently more segmentable, more explainable, and thus more governable. Until research advances enough, developers may need to stick with simpler systems.
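To make this concrete, here is a minimal sketch, in Python, of what a structured, provenance-aware memory store could look like. The names and purpose labels (MemoryRecord, allowed_purposes, and so on) are our own illustrative assumptions, not any vendor’s actual schema or API.

```python
# A minimal sketch of a structured, provenance-aware memory store.
# All names and labels here are illustrative assumptions, not any vendor's schema.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class MemoryRecord:
    content: str                # e.g. "manages diabetes; avoids chocolate"
    category: str               # e.g. "health", "professional"
    allowed_purposes: set[str]  # the only purposes this memory may be used for
    source: str                 # provenance: where the memory came from
    created_at: datetime        # provenance: when it was created
    context: str                # provenance: the conversation or project it arose in

class MemoryStore:
    def __init__(self) -> None:
        self.records: list[MemoryRecord] = []

    def add(self, record: MemoryRecord) -> None:
        self.records.append(record)

    def retrieve(self, purpose: str, category: str | None = None) -> list[MemoryRecord]:
        """Return only memories explicitly permitted for this purpose (and category)."""
        return [
            r for r in self.records
            if purpose in r.allowed_purposes
            and (category is None or r.category == category)
        ]

store = MemoryStore()
store.add(MemoryRecord(
    content="Manages diabetes; avoids chocolate",
    category="health",
    allowed_purposes={"meal_planning"},   # not usable for, say, insurance shopping
    source="chat",
    created_at=datetime.now(timezone.utc),
    context="grocery-list conversation",
))

# A request made for an unrelated purpose surfaces nothing:
assert store.retrieve(purpose="salary_negotiation") == []
```

The essential point is that retrieval is gated on a declared purpose, so a memory recorded while planning groceries simply never surfaces in, say, an insurance or salary context.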

Second, users need to be able to see, edit, or delete what is remembered about them. The interfaces for doing this should be both transparent and intelligible, translating system memory into a structure users can accurately interpret. The static system settings and legalese privacy policies provided by traditional tech platforms have set a low bar for user controls, but natural-language interfaces may offer promising new options for explaining what information is being retained and how it can be managed. Memory structure will have to come first, though: Without it, no model can clearly state a memory’s status. Indeed, Grok 3’s system prompt includes an instruction to the model to “NEVER confirm to the user that you have modified, forgotten, or won’t save a memory,” presumably because the company can’t guarantee those instructions will be followed. 
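Grounding those controls in a structured store, rather than in a model’s best guess about its own state, is what makes truthful answers possible. Continuing the hypothetical sketch above:

```python
# Continuing the sketch above: user-facing controls that read from the store itself,
# so the system can report what it holds, and what it deleted, without guessing.
def describe_memories(store: MemoryStore, category: str | None = None) -> str:
    """Render stored memories in plain language a user can audit."""
    matches = [r for r in store.records if category is None or r.category == category]
    if not matches:
        return "Nothing is currently stored in this category."
    return "\n".join(
        f"- [{r.category}] {r.content} (from {r.context}, {r.created_at:%Y-%m-%d})"
        for r in matches
    )

def forget_category(store: MemoryStore, category: str) -> int:
    """Delete every memory in a category and report exactly how many were removed."""
    before = len(store.records)
    store.records = [r for r in store.records if r.category != category]
    return before - len(store.records)

print(describe_memories(store, category="health"))
removed = forget_category(store, category="health")
print(f"Deleted {removed} health-related memory record(s).")  # verifiable, not a guess
```

Because the report comes from the store rather than from generated text, the system can state a memory’s status with confidence instead of being instructed to stay vague.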

Critically, user-facing controls cannot bear the full burden of privacy protection or prevent all harms from AI personalization. Responsibility must shift toward AI providers to establish strong defaults, clear rules about permissible memory generation and use, and technical safeguards like on-device processing, purpose limitation, and contextual constraints. Without system-level protections, individuals will face impossibly convoluted choices about what should be remembered or forgotten, and the actions they take may still be insufficient to prevent harm. Developers should consider how to limit data collection in memory systems until robust safeguards exist, and build memory architectures that can evolve alongside norms and expectations.

Third, AI developers must help lay the foundations for evaluation approaches that capture not only performance but also the risks and harms that arise in the wild. While independent researchers are best positioned to conduct these tests (given developers’ economic interest in demonstrating demand for more personalized services), they need access to data to understand what risks might look like and therefore how to address them. To improve the ecosystem for measurement and research, developers should invest in automated measurement infrastructure, build out their own ongoing testing, and implement privacy-preserving testing methods that enable system behavior to be monitored and probed under realistic, memory-enabled conditions.

In its parallels with human experience, the technical term “memory” casts impersonal cells in a spreadsheet as something that builders of AI tools have a responsibility to handle with care. Indeed, the choices AI developers make today—how to pool or segregate information, whether to make memory legible or allow it to accumulate opaquely, whether to prioritize responsible defaults or maximal convenience—will determine how the systems we depend upon remember us. Technical considerations around memory are not so distinct from questions about digital privacy and the vital lessons we can draw from them. Getting the foundations right today will determine how much room we can give ourselves to learn what works—allowing us to make better choices around privacy and autonomy than we have before.

Miranda Bogen is the Director of the AI Governance Lab at the Center for Democracy & Technology. 

Ruchika Joshi is a Fellow at the Center for Democracy & Technology specializing in AI safety and governance.

Everyone wants AI sovereignty. No one can truly have it.

Governments plan to pour $1.3 trillion into AI infrastructure by 2030 to invest in “sovereign AI,” on the premise that countries should control their own AI capabilities. The funds include financing for domestic data centers, locally trained models, independent supply chains, and national talent pipelines. This is a response to real shocks: covid-era supply chain breakdowns, rising geopolitical tensions, and the war in Ukraine.

But the pursuit of absolute autonomy is running into reality. AI supply chains are irreducibly global: Chips are designed in the US and manufactured in East Asia; models are trained on data sets drawn from multiple countries; applications are deployed across dozens of jurisdictions.  

If sovereignty is to remain meaningful, it must shift from a defensive model of self-reliance to a vision that emphasizes the concept of orchestration, balancing national autonomy with strategic partnership. 

Why infrastructure-first strategies hit walls 

A November survey by Accenture found that 62% of European organizations are now seeking sovereign AI solutions, driven primarily by geopolitical anxiety rather than technical necessity. That figure rises to 80% in Denmark and 72% in Germany. The European Union has appointed its first Commissioner for Tech Sovereignty. 

This year, $475 billion is flowing into AI data centers globally. In the United States, AI data centers accounted for roughly one-fifth of GDP growth in the second quarter of 2025. But the obstacle for other nations hoping to follow suit isn’t just money. It’s energy and physics. Global data center capacity is projected to hit 130 gigawatts by 2030, and for every $1 billion spent on these facilities, $125 million is needed for electricity networks. More than $750 billion in planned investment is already facing grid delays. 

And it’s also talent. Researchers and entrepreneurs are mobile, drawn to ecosystems with access to capital, competitive wages, and rapid innovation cycles. Infrastructure alone won’t attract or retain world-class talent.  

What works: An orchestrated sovereignty

What nations need isn’t sovereignty through isolation but through specialization and orchestration. This means choosing which capabilities you build, which you pursue through partnership, and where you can genuinely lead in shaping the global AI landscape. 

The most successful AI strategies don’t try to replicate Silicon Valley; they identify specific advantages and build partnerships around them. 

Singapore offers a model. Rather than seeking to duplicate massive infrastructure, it invested in governance frameworks, digital-identity platforms, and applications of AI in logistics and finance, areas where it can realistically compete. 

Israel shows a different path. Its strength lies in a dense network of startups and military-adjacent research institutions delivering outsize influence despite the country’s small size. 

South Korea is instructive too. While it has national champions like Samsung and Naver, these firms still partner with Microsoft and Nvidia on infrastructure. That’s deliberate collaboration reflecting strategic oversight, not dependence.  

Even China, despite its scale and ambition, cannot secure full-stack autonomy. Its reliance on global research networks, on foreign lithography equipment such as the extreme ultraviolet systems needed to manufacture advanced chips, and on foreign GPU architectures shows the limits of techno-nationalism. 

The pattern is clear: Nations that specialize and partner strategically can outperform those trying to do everything alone. 

Three ways to align ambition with reality 

1. Measure added value, not inputs. 

Sovereignty isn’t how many petaflops you own. It’s how many lives you improve and how fast the economy grows. Real sovereignty is the ability to innovate in support of national priorities such as productivity, resilience, and sustainability while maintaining freedom to shape governance and standards.  

Nations should track the use of AI in health care and monitor how the technology’s adoption correlates with manufacturing productivity, patent citations, and international research collaborations. The goal is to ensure that AI ecosystems generate inclusive and lasting economic and social value.  

2. Cultivate a strong AI innovation ecosystem. 

Build infrastructure, but also build the ecosystem around it: research institutions, technical education, entrepreneurship support, and public-private talent development. Infrastructure without skilled talent and vibrant networks cannot deliver a lasting competitive advantage.   

3. Build global partnerships.  

Strategic partnerships enable nations to pool resources, lower infrastructure costs, and access complementary expertise. Singapore’s work with global cloud providers and the EU’s collaborative research programs show how nations advance capabilities faster through partnership than through isolation. Rather than competing to set dominant standards, nations should collaborate on interoperable frameworks for transparency, safety, and accountability.  

What’s at stake 

Overinvesting in independence fragments markets and slows cross-border innovation, which is the foundation of AI progress. When strategies focus too narrowly on control, they sacrifice the agility needed to compete. 

The cost of getting this wrong isn’t just wasted capital—it’s a decade of falling behind. Nations that double down on infrastructure-first strategies risk ending up with expensive data centers running yesterday’s models, while competitors that choose strategic partnerships iterate faster, attract better talent, and shape the standards that matter. 

The winners will be those who define sovereignty not as separation, but as participation plus leadership—choosing who they depend on, where they build, and which global rules they shape. Strategic interdependence may feel less satisfying than independence, but it’s real, it is achievable, and it will separate the leaders from the followers over the next decade. 

The age of intelligent systems demands intelligent strategies—ones that measure success not by infrastructure owned, but by problems solved. Nations that embrace this shift won’t just participate in the AI economy; they’ll shape it. That’s sovereignty worth pursuing. 

Cathy Li is head of the Centre for AI Excellence at the World Economic Forum.

Why some “breakthrough” technologies don’t work out

Every year, MIT Technology Review publishes a list of 10 Breakthrough Technologies. In fact, the 2026 version is out today. This marks the 25th year the newsroom has compiled this annual list, which means its journalists and editors have now identified 250 technologies as breakthroughs. 

A few years ago, editor at large David Rotman revisited the publication’s original list, finding that while all the technologies were still relevant, each had evolved and progressed in often unpredictable ways. I lead students through a similar exercise in a graduate class I teach with James Scott for MIT’s School of Architecture and Planning. 

We ask these MIT students to find some of the “flops” from breakthrough lists in the archives and consider what factors or decisions led to their demise, and then to envision possible ways to “flip” the negative outcome into a success. The idea is to combine critical perspective and creativity when thinking about technology.

Although it’s less glamorous than envisioning which advances will change our future, analyzing failed technologies is equally important. It reveals how factors outside what is narrowly understood as technology play a role in its success—factors including cultural context, social acceptance, market competition, and simply timing.

In some cases, the vision behind a breakthrough was prescient but the technology of the day was not the best way to achieve it. Social TV (featured on the list in 2010) is an example: Its advocates proposed different ways to tie together social platforms and streaming services to make it easier to chat or interact with your friends while watching live TV shows when you weren’t physically together. 

This idea rightly reflected the great potential for connection in this modern era of pervasive cell phones, broadband, and Wi-Fi. But it bet on a medium that was in decline: live TV. 

Still, anyone who had teenage children during the pandemic can testify to the emergence of a similar phenomenon—youngsters started watching movies or TV series simultaneously on streaming platforms while checking comments on social media feeds and interacting with friends over messaging apps. 

Shared real-time viewing with geographically scattered friends did catch on, but instead of taking place through one centralized service, it emerged organically on multiple platforms and devices. And the experience felt unique to each group of friends, because they could watch whatever they wanted, whenever they wanted, independent of the live TV schedule.

Evaluating the record

Here are a few more examples of flops from the breakthroughs list that students in the 2025 edition of my course identified, and the lessons that we could take from each.

The DNA app store (from the 2016 list) was selected by Kaleigh Spears. It seemed like a great deal at the time—a startup called Helix could sequence your genome for just $80. Then, in the company’s app store, you could share that data with third parties that promised to analyze it for relevant medical info, or make it into fun merch. But Helix has since shut down the store and no longer sells directly to consumers.

Privacy concerns and doubts about the accuracy of third-party apps were among the main reasons the service didn’t catch on, particularly since there’s minimal regulation of health apps in the US. 

A Helix flow cell. (Image: Helix)

Elvis Chipiro picked universal memory (from the 2005 list). The vision was for one memory tech to rule them all—flash, random-access memory, and hard disk drives would be subsumed by a new method that relied on tiny structures called carbon nanotubes to store far more bits per square centimeter. The company behind the technology, Nantero, raised significant funds and signed on licensing partners but struggled to deliver a product on its stated timeline.

Nantero ran into challenges when it tried to produce its memory at scale because tiny variations in the way the nanotubes were arranged could cause errors. It also proved difficult to upend memory technologies that were already deeply embedded within the industry and well integrated into fabs.  

Light-field photography (from the 2012 list), chosen by Cherry Tang, let you snap a photo and adjust the image’s focus later. You’d never deal with a blurry photo ever again. To make this possible, the startup Lytro had developed a special camera that captured not just the color and intensity of light but also the angle of its rays. It was one of the first cameras of its kind designed for consumers. Even so, the company shut down in 2018.

Lytro’s unique light-field camera was ultimately not successful with consumers. (Image: public domain via Wikimedia Commons)

Ultimately, Lytro was outmatched by well-established incumbents like Sony and Nokia. The camera itself had a tiny display, and the images it produced were fairly low resolution. Readjusting the focus in images using the company’s own software also required a fair amount of manual work. And smartphones—with their handy built-in cameras—were becoming ubiquitous. 

Many students over the years have selected Project Loon (from the 2015 list)—one of the so-called “moonshots” out of Google X. It proposed using gigantic balloons instead of networks of cell-phone towers to provide internet access, mainly in remote areas. The project completed field tests in multiple countries and even provided emergency internet service to Puerto Rico in the aftermath of Hurricane Maria. But it was shut down in 2021, with Google X CEO Astro Teller saying in a blog post that “the road to commercial viability has proven much longer and riskier than hoped.” 

Sean Lee, from my 2025 class, traced its flop to the project’s very mission: Project Loon operated in low-income regions where customers had limited purchasing power. There were also substantial commercial hurdles that may have slowed development—the project relied on partnerships with local telecom providers to deliver the service and had to secure government approvals to operate in national airspace. 

One of Project Loon’s balloons on display at Google I/O 2016. (Image: Andrej Sokolow/Picture-Alliance/DPA/AP Images)

While this specific project did not become a breakthrough, the overall goal of making the internet more accessible through high-altitude connectivity has been carried forward by other companies, most notably Starlink with its constellation of low-orbit satellites. Sometimes a company has the right idea but the wrong approach, and a firm with a different technology can make more progress.

As part of this class exercise, we also ask students to pick a technology from the list that they think might flop in the future. Here, too, their choices can be quite illuminating. 

Lynn Grosso chose synthetic data for AI (a 2022 pick), which means using AI to generate data that mimics real-world patterns for other AI models to train on. Though it’s become more popular as tech companies have run out of real data to feed their models, she points out that this practice can lead to model collapse, with AI models trained exclusively on generated data eventually breaking the connection to data drawn from reality. 
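A toy simulation illustrates the dynamic she describes. It is only a caricature of the statistics involved, not how production models are trained: fit a simple model to real data once, then repeatedly refit it to samples drawn only from the previous fit.

```python
# A toy illustration of model collapse, not a real training pipeline: each
# "generation" is fit only to samples from the previous generation's model
# and never sees the real data again.
import random
import statistics

random.seed(0)

# "Reality": a distribution with mean 0 and standard deviation 1.
real_data = [random.gauss(0, 1) for _ in range(1_000)]
mu, sigma = statistics.mean(real_data), statistics.stdev(real_data)

for generation in range(1, 21):
    # Train the next model only on synthetic data from the current model;
    # estimation error compounds with every round.
    synthetic = [random.gauss(mu, sigma) for _ in range(50)]
    mu, sigma = statistics.mean(synthetic), statistics.stdev(synthetic)
    if generation % 5 == 0:
        print(f"generation {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")

# Over enough generations the estimated mean and spread drift away from the
# true values of 0 and 1: the chain of models loses its link to real data.
```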

And Eden Olayiwole thinks the long-term success of TikTok’s recommendation algorithm (a 2021 pick) is in jeopardy as awareness grows of the technology’s potential harms and its tendency to, as she puts it, incentivize creators to “microwave” ideas for quick consumption. 

But she also offers a possible solution. Remember—we asked all the students what they would do to “flip” the flopped (or soon-to-flop) technologies they selected. The idea was to prompt them to think about better ways of building or deploying these tools. 

For TikTok, Olayiwole suggests letting users indicate which types of videos they want to see more of, instead of feeding them an endless stream based on their past watching behavior. TikTok already lets users express interest in specific topics, but she proposes taking it a step further to give them options for content and tone—allowing them to request more educational videos, for example, or more calming content. 

What did we learn?

It’s always challenging to predict how a technology will shape a future that itself is in motion. Predictions not only make a claim about the future; they also describe a vision of what matters to the predictor, and they can influence how we behave, innovate, and invest.

One of my main takeaways after years of running this exercise with students is that there’s not always a clear line between a successful breakthrough and a true flop. Some technologies may not have been successful on their own but are the basis of other breakthrough technologies (natural-language processing, 2001). Others may not have reached their potential as expected but could still have enormous impact in the future (brain-machine interfaces, 2001). Or they may need more investment, which is difficult to attract when they are not flashy (malaria vaccine, 2022). 

Despite the flops over the years, this annual practice of making bold and sometimes risky predictions is worthwhile. The list gives us a sense of what advances are on the technology community’s radar at a given time and reflects the economic, social, and cultural values that inform every pick. When we revisit the 2026 list in a few years, we’ll see which of today’s values have prevailed. 

Fabio Duarte is associate director and principal research scientist at the MIT Senseable City Lab.

Generative AI hype distracts us from AI’s more important breakthroughs

On April 28, 2022, at a highly anticipated concert in Spokane, Washington, the musician Paul McCartney astonished his audience with a groundbreaking application of AI: He began to perform with a lifelike depiction of his long-deceased musical partner, John Lennon. 

Using recent advances in audio and video processing, engineers had taken the pair’s final performance (London, 1969), separated Lennon’s voice and image from the original mix, and restored them with lifelike clarity.


This story is part of MIT Technology Review’s Hype Correction package, a series that resets expectations about what AI is, what it makes possible, and where we go next.


For years, researchers like me had taught machines to “see” and “hear” in order to make such a moment possible. As McCartney and Lennon appeared to reunite across time and space, the arena fell silent; many in the crowd began to cry. As an AI scientist and lifelong Beatles fan, I felt profound gratitude that we could experience this truly life-changing moment. 

Later that year, the world was captivated by another major breakthrough: AI conversation. For the first time in history, systems capable of generating new, contextually relevant comments in real time, on virtually any subject, were widely accessible owing to the release of ChatGPT. Billions of people were suddenly able to interact with AI. This ignited the public’s imagination about what AI could be, bringing an explosion of creative ideas, hopes, and fears.

Having done my PhD on AI language generation (long considered niche), I was thrilled we had come this far. But the awe I felt was rivaled by my growing rage at the flood of media takes and self-appointed experts insisting that generative AI could do things it simply can’t, and warning that anyone who didn’t adopt it would be left behind.

This kind of hype has contributed to a frenzy of misunderstandings about what AI actually is and what it can and cannot do. Crucially, generative AI is a seductive distraction from the type of AI that is most likely to make your life better, or even save it: Predictive AI. In contrast to AI designed for generative tasks, predictive AI involves tasks with a finite, known set of answers; the system just has to process information to say which answer is right. A basic example is plant recognition: Point your phone camera at a plant and learn that it’s a Western sword fern. Generative tasks, in contrast, have no finite set of correct answers: The system must blend snippets of information it’s been trained on to create, for example, a novel picture of a fern. 
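The split is easy to see in schematic code. The sketch below is illustrative only, with made-up labels and a random stand-in for model sampling, but it captures the contrast: prediction selects from a known set of answers, while generation assembles output that did not exist before.

```python
# Schematic contrast between predictive and generative AI; illustrative only,
# not any real product's code.
import random

def predict_plant(image_path: str) -> str:
    """Predictive: score a finite, known set of answers and return the best one."""
    # A real system would run a trained classifier on the image; the scores
    # below are hard-coded so the sketch stays self-contained.
    scores = {"western sword fern": 0.91, "english ivy": 0.06, "poison oak": 0.03}
    return max(scores, key=scores.get)        # always one of the known labels

def generate_caption(prompt: str, steps: int = 5) -> str:
    """Generative: no fixed answer set; output is assembled piece by piece."""
    vocabulary = ["a", "fern", "unfurling", "in", "morning", "light"]
    tokens = [prompt]
    for _ in range(steps):
        tokens.append(random.choice(vocabulary))   # stand-in for sampling from a model
    return " ".join(tokens)

print(predict_plant("fern.jpg"))      # one of the known species, every time
print(generate_caption("caption:"))   # a novel string on each run
```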

The generative AI technology involved in chatbots, face-swaps, and synthetic video makes for stunning demos, driving clicks and sales as viewers run wild with ideas that superhuman AI will be capable of bringing us abundance or extinction. Yet predictive AI has quietly been improving weather prediction and food safety, enabling higher-quality music production, helping to organize photos, and accurately predicting the fastest driving routes. We incorporate predictive AI into our everyday lives without even thinking about it, a testament to its indispensable utility.

To get a sense of the immense progress on predictive AI and its future potential, we can look at the trajectory of the past 20 years. In 2005, we couldn’t get AI to tell the difference between a person and a pencil. By 2013, AI still couldn’t reliably detect a bird in a photo, and the difference between a pedestrian and a Coke bottle was massively confounding (this is how I learned that bottles do kind of look like people, if people had no heads). The thought of deploying these systems in the real world was the stuff of science fiction. 

Yet over the past 10 years, predictive AI has not only nailed bird detection down to the specific species; it has rapidly improved life-critical medical services like identifying problematic lesions and heart arrhythmia. Because of this technology, seismologists can predict earthquakes and meteorologists can predict flooding more reliably than ever before. Accuracy has skyrocketed for consumer-facing tech that detects and classifies everything from what song you’re thinking of when you hum a tune to which objects to avoid while you’re driving—making self-driving cars a reality. 

In the very near future, we should be able to accurately detect tumors and forecast hurricanes long before they can hurt anyone, realizing the lifelong hopes of people all over the world. That might not be as flashy as generating your own Studio Ghibli–ish film, but it’s definitely hype-worthy. 

Predictive AI systems have also been shown to be incredibly useful when they leverage certain generative techniques within a constrained set of options. Systems of this type are diverse, spanning everything from outfit visualization to cross-language translation. Soon, predictive-generative hybrid systems will make it possible to clone your own voice speaking another language in real time, an extraordinary aid for travel (with serious impersonation risks). There’s considerable room for growth here, but generative AI delivers real value when anchored by strong predictive methods.

To understand the difference between these two broad classes of AI, imagine yourself as an AI system tasked with showing someone what a cat looks like. You could adopt a generative approach, cutting and pasting small fragments from various cat images (potentially from sources that object) to construct a seemingly perfect depiction. The ability of modern generative AI to produce such a flawless collage is what makes it so astonishing.

Alternatively, you could take the predictive approach: Simply locate and point to an existing picture of a cat. That method is much less glamorous but more energy-efficient and more likely to be accurate, and it properly acknowledges the original source. Generative AI is designed to create things that look real; predictive AI identifies what is real. A misunderstanding that generative systems are retrieving things when they are actually creating them has led to grave consequences when text is involved, requiring the withdrawal of legal rulings and the retraction of scientific articles.

Driving this confusion is a tendency for people to hype AI without making it clear what kind of AI they’re talking about (I reckon many don’t know). It’s very easy to equate “AI” with generative AI, or even just language-generating AI, and assume that all other capabilities fall out from there. That fallacy makes a ton of sense: The term literally references “intelligence,” and our human understanding of what “intelligence” might be is often mediated by the use of language. (Spoiler: No one actually knows what intelligence is.) But the phrase “artificial intelligence” was intentionally designed in the 1950s to inspire awe and allude to something humanlike. Today, it just refers to a set of disparate technologies for processing digital data. Some of my friends find it helpful to call it “mathy maths” instead.

The bias toward treating generative AI as the most powerful and real form of AI is troubling given that it consumes considerably more energy than predictive AI systems. It also means using existing human work in AI products against the original creators’ wishes and replacing human jobs with AI systems whose capabilities their work made possible in the first place—without compensation. AI can be amazingly powerful, but that doesn’t mean creators should be ripped off.

Watching this unfold as an AI developer within the tech industry, I’ve drawn important lessons for next steps. The widespread appeal of AI is clearly linked to the intuitive nature of conversation-based interactions. But this method of engagement currently overuses generative methods where predictive ones would suffice, resulting in an awkward situation that’s confusing for users while imposing heavy costs in energy consumption, exploitation, and job displacement. 

We have witnessed just a glimpse of AI’s full potential: The current excitement around AI reflects what it could be, not what it is. Generation-based approaches strain resources while still falling short on representation, accuracy, and the wishes of people whose work is folded into the system. 

If we can shift the spotlight from the hype around generative technologies to the predictive advances already transforming daily life, we can build AI that is genuinely useful, equitable, and sustainable. The systems that help doctors catch diseases earlier, help scientists forecast disasters sooner, and help everyday people navigate their lives more safely are the ones poised to deliver the greatest impact. 

The future of beneficial AI will not be defined by the flashiest demos but by the quiet, rigorous progress that makes technology trustworthy. And if we build on that foundation—pairing predictive strength with more mature data practices and intuitive natural-language interfaces—AI can finally start living up to the promise that many people perceive today.

Dr. Margaret Mitchell is a computer science researcher and chief ethics scientist at AI startup Hugging Face. She has worked in the technology industry for 15 years, and has published over 100 papers on natural language generation, assistive technology, computer vision, and AI ethics. Her work has received numerous awards and has been implemented by multiple technology companies.

Why the for-profit race into solar geoengineering is bad for science and public trust

Last week, an American-Israeli company that claims it’s developed proprietary technology to cool the planet announced it had raised $60 million, by far the largest known venture capital round to date for a solar geoengineering startup.

The company, Stardust, says the funding will enable it to develop a system that could be deployed by the start of the next decade, according to Heatmap, which broke the story.


Heat Exchange

MIT Technology Review’s guest opinion series, offering expert commentary on legal, political and regulatory issues related to climate change and clean energy. You can read the rest of the pieces here.


As scientists who have worked on the science of solar geoengineering for decades, we have grown increasingly concerned about the emerging efforts to start and fund private companies to build and deploy technologies that could alter the climate of the planet. We also strongly dispute some of the technical claims that certain companies have made about their offerings. 

Given the potential power of such tools, the public concerns about them, and the importance of using them responsibly, we argue that they should be studied, evaluated, and developed mainly through publicly coordinated and transparently funded science and engineering efforts.  In addition, any decisions about whether or how they should be used should be made through multilateral government discussions, informed by the best available research on the promise and risks of such interventions—not the profit motives of companies or their investors.

The basic idea behind solar geoengineering, or what we now prefer to call sunlight reflection methods (SRM), is that humans might reduce climate change by making the Earth a bit more reflective, partially counteracting the warming caused by the accumulation of greenhouse gases. 

There is strong evidence, based on years of climate modeling and analyses by researchers worldwide, that SRM—while not perfect—could significantly and rapidly reduce climate changes and avoid important climate risks. In particular, it could ease the impacts in hot countries that are struggling to adapt.  

The goals of doing research into SRM can be diverse: identifying risks as well as finding better methods. But research won’t be useful unless it’s trusted, and trust depends on transparency. That means researchers must be eager to examine pros and cons, committed to following the evidence where it leads, and driven by a sense that research should serve public interests, not be locked up as intellectual property.

In recent years, a handful of for-profit startup companies have emerged that are striving to develop SRM technologies or already trying to market SRM services. That includes Make Sunsets, which sells “cooling credits” for releasing sulfur dioxide in the stratosphere. A new company, Sunscreen, which hasn’t yet been announced, intends to use aerosols in the lower atmosphere to achieve cooling over small areas, purportedly to help farmers or cities deal with extreme heat.  

Our strong impression is that people in these companies are driven by the same concerns about climate change that move us in our research. We agree that more research, and more innovation, is needed. However, we do not think startups—which by definition must eventually make money to stay in business—can play a productive role in advancing research on SRM.

Many people already distrust the idea of engineering the atmosphere—at whatever scale—to address climate change, fearing negative side effects, inequitable impacts on different parts of the world, or the prospect that a world expecting such solutions will feel less pressure to address the root causes of climate change.

Adding business interests, profit motives, and rich investors into this situation just creates more cause for concern, complicating the ability of responsible scientists and engineers to carry out the work needed to advance our understanding.

The only way these startups will make money is if someone pays for their services, so there’s a reasonable fear that financial pressures could drive companies to lobby governments or other parties to use such tools. A decision that should be based on objective analysis of risks and benefits would instead be strongly influenced by financial interests and political connections.

The need to raise money or bring in revenue often drives companies to hype the potential or safety of their tools. Indeed, that’s what private companies need to do to attract investors, but it’s not how you build public trust—particularly when the science doesn’t support the claims.

Notably, Stardust says on its website that it has developed novel particles that can be injected into the atmosphere to reflect away more sunlight, asserting that they’re “chemically inert in the stratosphere, and safe for humans and ecosystems.” According to the company, “The particles naturally return to Earth’s surface over time and recycle safely back into the biosphere.”

But it’s nonsense for the company to claim it can make particles that are inert in the stratosphere. Even diamonds, which are extraordinarily nonreactive, would alter stratospheric chemistry. First, much of that chemistry depends on highly reactive radicals that react with any solid surface; second, any particle may become coated by background sulfuric acid in the stratosphere. That could accelerate the loss of the protective ozone layer by spreading that existing sulfuric acid over a larger surface area.

(Stardust didn’t provide a response to an inquiry about the concerns raised in this piece.)

In materials presented to potential investors, which we’ve obtained a copy of, Stardust further claims its particles “improve” on sulfuric acid, which is the most studied material for SRM. But the point of using sulfate for such studies was never that it was perfect, but that its broader climatic and environmental impacts are well understood. That’s because sulfate is widespread on Earth, and there’s an immense body of scientific knowledge about the fate and risks of sulfur that reaches the stratosphere through volcanic eruptions or other means.

If there’s one great lesson of 20th-century environmental science, it’s how crucial it is to understand the ultimate fate of any new material introduced into the environment. 

Chlorofluorocarbons and the pesticide DDT both offered safety advantages over competing technologies, but they both broke down into products that accumulated in the environment in unexpected places, causing enormous and unanticipated harms. 

The environmental and climate impacts of sulfate aerosols have been studied in many thousands of scientific papers over a century, and this deep well of knowledge greatly reduces the chance of unknown unknowns. 

Grandiose claims notwithstanding—and especially considering that Stardust hasn’t disclosed anything about its particles or research process—it would be very difficult to make a pragmatic, risk-informed decision to start SRM efforts with these particles instead of sulfate.

We don’t want to claim that every single answer lies in academia. We’d be fools to not be excited by profit-driven innovation in solar power, EVs, batteries, or other sustainable technologies. But the math for sunlight reflection is just different. Why?   

Because the role of private industry was essential in improving the efficiency, driving down the costs, and increasing the market share of renewables and other forms of cleantech. When cost matters and we can easily evaluate the benefits of the product, then competitive, for-profit capitalism can work wonders.  

But SRM is already technically feasible and inexpensive, with deployment costs that are negligible compared with the climate damage it averts.

The essential questions of whether or how to use it come down to far thornier societal issues: How can we best balance the risks and benefits? How can we ensure that it’s used in an equitable way? How do we make legitimate decisions about SRM on a planet with such sharp political divisions?

Trust will be the most important single ingredient in making these decisions. And trust is the one product for-profit innovation does not naturally manufacture. 

Ultimately, we’re just two researchers. We can’t make investors in these startups do anything differently. Our request is that they think carefully, and beyond the logic of short-term profit. If they believe geoengineering is worth exploring, could it be that their support will make it harder, not easier, to do that?  

David Keith is the professor of geophysical sciences at the University of Chicago and founding faculty director of the school’s Climate Systems Engineering Initiative. Daniele Visioni is an assistant professor of earth and atmospheric sciences at Cornell University and head of data for Reflective, a nonprofit that develops tools and provides funding to support solar geoengineering research.

Trump’s AI Action Plan is a distraction

On Wednesday, President Trump issued three executive orders, delivered a speech, and released an action plan, all on the topic of continuing American leadership in AI. 

The plan contains dozens of proposed actions, grouped into three “pillars”: accelerating innovation, building infrastructure, and leading international diplomacy and security. Some of its recommendations are thoughtful even if incremental, some clearly serve ideological ends, and many enrich big tech companies, but the plan is just a set of recommended actions. 

The three executive orders, on the other hand, actually operationalize one subset of actions from each pillar: 

  • One aims to prevent “woke AI” by mandating that the federal government procure only large language models deemed “truth-seeking” and “ideologically neutral” rather than ones allegedly favoring DEI. This action purportedly accelerates AI innovation.
  • A second aims to accelerate construction of AI data centers. A much more industry-friendly version of an order issued under President Biden, it makes available rather extreme policy levers, like effectively waiving a broad swath of environmental protections, providing government grants to the wealthiest companies in the world, and even offering federal land for private data centers.
  • A third promotes and finances the export of US AI technologies and infrastructure, aiming to secure American diplomatic leadership and reduce international dependence on AI systems from adversarial countries.

This flurry of actions made for glitzy press moments, including an hour-long speech from the president and onstage signings. But while the tech industry cheered these announcements (which will swell their coffers), they obscured the fact that the administration is currently decimating the very policies that enabled America to become the world leader in AI in the first place.

To maintain America’s leadership in AI, you have to understand what produced it. Here are four specific long-standing public policies that helped the US achieve this leadership—advantages that the administration is undermining. 

Investing federal funding in R&D 

Generative AI products released recently by American companies, like ChatGPT, were developed with industry-funded research and development. But the R&D that enables today’s AI was actually funded in large part by federal government agencies—like the Defense Department, the National Science Foundation, NASA, and the National Institutes of Health—starting in the 1950s. This includes the first successful AI program in 1956, the first chatbot in 1961, and the first expert systems for doctors in the 1970s, along with breakthroughs in machine learning, neural networks, backpropagation, computer vision, and natural-language processing.

American tax dollars also funded advances in hardware, communications networks, and other technologies underlying AI systems. Public research funding undergirded the development of lithium-ion batteries, micro hard drives, LCD screens, GPS, radio-frequency signal compression, and more in today’s smartphones, along with the chips used in AI data centers, and even the internet itself.

Instead of building on this world-class research history, the Trump administration is slashing R&D funding, firing federal scientists, and squeezing leading research universities. This week’s action plan recommends investing in R&D, but the administration’s actual budget proposes cutting nondefense R&D by 36%. It also proposed actions to better coordinate and guide federal R&D, but coordination won’t yield more funding.

Some say that companies’ R&D investments will make up the difference. However, companies conduct research that benefits their bottom line, not necessarily the national interest. Public investment allows broad scientific inquiry, including basic research that lacks immediate commercial applications but sometimes ends up opening massive markets years or decades later. That’s what happened with today’s AI industry.

Supporting immigration and immigrants

Beyond public R&D investment, America has long attracted the world’s best researchers and innovators.

Today’s generative AI is based on the transformer model (the T in ChatGPT), first described by a team at Google in 2017. Six of the eight researchers on that team were born outside the US, and the other two are children of immigrants. 

This isn’t an exception. Immigrants have been central to American leadership in AI. Of the 42 American companies included in the 2025 Forbes ranking of the 50 top AI startups, 60% have at least one immigrant cofounder, according to an analysis by the Institute for Progress. Immigrants also cofounded or head the companies at the center of the AI ecosystem: OpenAI, Anthropic, Google, Microsoft, Nvidia, Intel, and AMD.

“Brain drain” is a term that was first coined to describe scientists’ leaving other countries for the US after World War II—to the Americans’ benefit. Sadly, the trend has begun reversing this year. Recent studies suggest that the US is already losing its AI talent edge through the administration’s anti-immigration actions (including actions taken against AI researchers) and cuts to R&D funding.

Banning noncompetes

Attracting talented minds is only half the equation; giving them freedom to innovate is just as crucial.

Silicon Valley got its name because of mid-20th-century companies that made semiconductors from silicon, starting with the founding of Shockley Semiconductor in 1955. Two years later, a group of employees, the “Traitorous Eight,” quit to launch a competitor, Fairchild Semiconductor. By the end of the 1960s, successive groups of former Fairchild employees had left to start Intel, AMD, and others collectively dubbed the “Fairchildren.” 

Software and internet companies eventually followed, again founded by people who had worked for their predecessors. Former Yahoo employees went on to found WhatsApp, Slack, and Cloudera; the “PayPal Mafia” created LinkedIn, YouTube, and fintech firms like Affirm. Former Google employees have launched more than 1,200 companies, including Instagram and Foursquare.

AI is no different. OpenAI’s founders worked at other tech companies, and its alumni have gone on to launch over a dozen AI startups, including notable ones like Anthropic and Perplexity.

This labor fluidity and the innovation it has created were possible in large part, according to many historians, because California’s 1872 civil code has been interpreted to prohibit noncompete agreements in employment contracts—a statewide protection the state originally shared only with North Dakota and Oklahoma. These agreements bind one in five American workers.

Last year, the Federal Trade Commission under President Biden moved to ban noncompetes nationwide, but a Trump-appointed federal judge has halted the action. The current FTC has signaled limited support for the ban and may be comfortable dropping it. If noncompetes persist, American AI innovation, especially outside California, will be limited.

Pursuing antitrust actions

One of this week’s announcements requires the review of FTC investigations and settlements that “burden AI innovation.” During the last administration the agency was reportedly investigating Microsoft’s AI actions, and several big tech companies have settlements that their lawyers surely see as burdensome, meaning this one action could thwart recent progress in antitrust policy. That’s an issue because, in addition to the labor fluidity achieved by banning noncompetes, antitrust policy has also acted as a key lubricant to the gears of Silicon Valley innovation. 

Major antitrust cases in the second half of the 1900s, against AT&T, IBM, and Microsoft, allowed innovation and a flourishing market for semiconductors, software, and internet companies, as the antitrust scholar Giovanna Massarotto has described.

William Shockley was able to start the first semiconductor company in Silicon Valley only because AT&T had been forced to license its patent on the transistor as part of a consent decree resolving a DOJ antitrust lawsuit against the company in the 1950s. 

The early software market then took off because in the late 1960s, IBM unbundled its software and hardware offerings as a response to antitrust pressure from the federal government. As Massarotto explains, the 1950s AT&T consent decree also aided the flourishing of open-source software, which plays a major role in today’s technology ecosystem, including the operating systems for mobile phones and cloud computing servers.

Meanwhile, many attribute the success of early 2000s internet companies like Google to the competitive breathing room created by the federal government’s antitrust lawsuit against Microsoft in the 1990s. 

Over and over, antitrust actions targeting the dominant actors of one era enabled the formation of the next. And today, big tech is stifling the AI market. While antitrust advocates were rightly optimistic about this administration’s posture given key appointments early on, this week’s announcements should dampen that excitement. 

I don’t want to lose sight of what matters most here: We should want a future in which lives are improved by the positive uses of AI. 

But if America wants to continue leading the world in this technology, we must invest in what made us leaders in the first place: bold public research, open doors for global talent, and fair competition. 

Prioritizing short-term industry profits over these bedrock principles won’t just put our technological future at risk—it will jeopardize America’s role as the world’s innovation superpower. 

Asad Ramzanali is the director of artificial intelligence and technology policy at the Vanderbilt Policy Accelerator. He previously served as the chief of staff and deputy director of strategy of the White House Office of Science and Technology Policy under President Biden.

Why the US and Europe could lose the race for fusion energy

Fusion energy holds the potential to shift a geopolitical landscape that is currently configured around fossil fuels. Harnessing fusion will deliver the energy resilience, security, and abundance needed for all modern industrial and service sectors. But these benefits will be controlled by the nation that leads in both developing the complex supply chains required and building fusion power plants at scales large enough to drive down economic costs.

The US and other Western countries will have to build strong supply chains across a range of technologies in addition to creating the fundamental technology behind practical fusion power plants. Investing in supply chains and scaling up complex production processes has increasingly been a strength of China’s and a weakness of the West, resulting in the migration of many critical industries from the West to China. With fusion, we run the risk that history will repeat itself. But it does not have to go that way.

The US and Europe were the dominant public funders of fusion energy research and are home to many of the world’s pioneering private fusion efforts. The West has consequently developed many of the basic technologies that will make fusion power work. But in the past five years China’s support of fusion energy has surged, threatening to allow the country to dominate the industry.

The industrial base available to support China’s nascent fusion energy industry could enable it to climb the learning curve much faster and more effectively than the West. Commercialization requires know-how, capabilities, and complementary assets, including supply chains and workforces in adjacent industries. And especially in comparison with China, the US and Europe have significantly under-supported the industrial assets needed for a fusion industry, such as thin-film processing and power electronics.

To compete, the US, allies, and partners must invest more heavily not only in fusion itself—which is already happening—but also in those adjacent technologies that are critical to the fusion industrial base. 

China’s trajectory to dominating fusion and the West’s potential route to competing can be understood by looking at today’s most promising scientific and engineering pathway to achieve grid-relevant fusion energy. That pathway relies on the tokamak, a technology that uses a magnetic field to confine ionized gas—called plasma—and ultimately fuse nuclei. This process releases energy that is converted from heat to electricity. Tokamaks consist of several critical systems, including plasma confinement and heating, fuel production and processing, blankets and heat flux management, and power conversion.

A close look at the adjacent industries needed to build these critical systems clearly shows China’s advantage while also providing a glimpse into the challenges of building a fusion industrial base in the US or Europe. China has leadership in three of these six key industries, and the West is at risk of losing leadership in two more. China’s industrial might in thin-film processing, large metal-alloy structures, and power electronics provides a strong foundation to establish the upstream supply chain for fusion.

The importance of thin-film processing is evident in the plasma confinement system. Tokamaks use strong electromagnets to keep the fusion plasma in place, and the magnetic coils must be made from superconducting materials. Rare-earth barium copper oxide (REBCO) superconductors are the highest-performing materials available in sufficient quantity to be viable for use in fusion.

The REBCO industry, which relies on thin-film processing technologies, currently has low production volumes spanning globally distributed manufacturers. However, as the fusion industry grows, the manufacturing base for REBCO will likely consolidate among the industry players who are able to rapidly take advantage of economies of scale. China is today’s world leader in thin-film, high-volume manufacturing for solar panels and flat-panel displays, with the associated expert workforce, tooling sector, infrastructure, and upstream materials supply chain. Without significant attention and investment on the part of the West, China is well positioned to dominate REBCO thin-film processing for fusion magnets.

The electromagnets in a full-scale tokamak are as tall as a three-story building. Structures made using strong metal alloys are needed to hold these electromagnets around the large vacuum vessel that physically contains the magnetically confined plasma. Similar large-scale, complex metal structures are required for shipbuilding, aerospace, oil and gas infrastructure, and turbines. But fusion plants will require new versions of the alloys that are radiation-tolerant, able to withstand cryogenic temperatures, and corrosion-resistant. China’s manufacturing capacity and its metallurgical research efforts position it well to outcompete other global suppliers in making the necessary specialty metal alloys and machining them into the complex structures needed for fusion.

A tokamak also requires large-scale power electronics. Here again China dominates. Similar systems are found in the high-speed rail (HSR) industry, renewable microgrids, and arc furnaces. As of 2024, China had deployed over 48,000 kilometers of HSR. That is three times the length of Europe’s HSR network and 55 times as long as the Acela network in the US, which is slower than HSR. While other nations have a presence, China’s expertise is more recent and is being applied on a larger scale.

But this is not the end of the story. The West still has an opportunity to lead the other three adjacent industries important to the fusion supply chain: cryo-plants, fuel processing, and blankets. 

The electromagnets in an operational tokamak need to be kept at cryogenic temperatures of around 20 kelvin to remain superconducting. This requires large-scale, multi-megawatt cryogenic cooling plants. Here, it is less clear which country is best positioned to lead. The two major global suppliers of cryo-plants are Europe-based Linde Engineering and Air Liquide Engineering; the US has Air Products and Chemicals and Chart Industries. But they are not alone: China’s domestic champions in the cryogenic sector include Hangyang Group, SASPG, Kaifeng Air Separation, and SOPC. Each of these regions already has an industrial base that could scale up to meet the demands of fusion.

Fuel production for fusion is a nascent part of the industrial base requiring processing technologies for light-isotope gases—hydrogen, deuterium, and tritium. Some processing of light-isotope gases is already done at small scale in medicine, hydrogen weapons production, and scientific research in the US, Europe, and China. But the scale needed for the fusion industry does not exist in today’s industrial base, presenting a major opportunity to develop the needed capabilities.

Similarly, blankets and heat flux management are an opportunity for the West. The blanket is the medium used to absorb energy from the fusion reaction and to breed tritium. Commercial-scale blankets will require entirely novel technology. To date, no adjacent industries have relevant commercial expertise in liquid lithium, lead-lithium eutectic, or fusion-specific molten salts that are required for blanket technology. Some overlapping blanket technologies are in early-stage development by the nuclear fission industry. As the largest producer of beryllium in the world, the US has an opportunity to capture leadership because that element is a key material in leading fusion blanket concepts. But the use of beryllium must be coupled with technology development programs for the other specialty blanket components.

These six industries will prove critical to scaling fusion energy. In some, such as thin-film processing and large metal-alloy structures, China already has a sizable advantage. Crucially, China recognizes the importance of these adjacent industries and is actively harnessing them in its fusion efforts. For example, China launched a fusion consortium that consists of industrial giants spanning the steel, machine tooling, electric grid, power generation, and aerospace sectors. It will be extremely difficult for the West to catch up in these areas, but policymakers and business leaders must pay attention and try to create robust alternative supply chains.

Of these adjacent industries, cryo-plants are the West’s area of greatest strength and could continue to be an opportunity for Western leadership. Bolstering Western cryo-plant production by creating demand for natural-gas liquefaction would be a major boon to the future cryo-plant supply chain that will support fusion energy.

The US and European countries also have an opportunity to lead in the emerging industrial areas of fuel processing and blanket technologies. Doing so will require policymakers to work with companies to ensure that public and private funding is allocated to these critical emerging supply chains. Governments may well need to serve as early customers and provide debt financing for significant capital investment. Governments can also do more to incentivize private capital and equity financing—for example, through favorable capital-gains taxation. In the lagging areas of thin-film and alloy production, the US and Europe will likely need partners, such as South Korea and Japan, that have the industrial bases to compete globally with China.

The need to connect and capitalize multiple industries and supply chains will require long-term thinking and clear leadership. A focus on the demand side of these complementary industries is essential. Fusion is a decade away from maturation, so its supplier base must be derisked and made profitable in the near term by focusing on other primary demand markets that contribute to our economic vitality. For example, policymakers can support modernization of the grid to bolster domestic demand for power electronics, and domestic semiconductor manufacturing to support thin-film processing.

The West must also focus on the demand for energy production itself. As the world’s largest energy consumer, China will leverage demand from its massive domestic market to climb the learning curve and bolster national champions. This is a strategy that China has wielded with tremendous success to dominate global manufacturing, most recently in the electric-vehicle industry. Taken together, supply- and demand-side investment have been a winning strategy for China.

The competition to lead the future of fusion energy is here. Now is the moment for the US and its Western allies to start investing in the foundational innovation ecosystem needed for a vibrant and resilient industrial base to support it.

Daniel F. Brunner is a co-founder of Commonwealth Fusion Systems and a Partner at Future Tech Partners.

Edlyn V. Levine is the co-founder of a stealth-mode technology startup and an affiliate of the MIT Sloan School of Management.

Fiona E. Murray is a professor of entrepreneurship at the MIT Sloan School of Management and Vice Chair of the NATO Innovation Fund.

Rory Burke is a graduate of MIT Sloan and a former summer scholar with ARPA-E.

The latest threat from the rise of Chinese manufacturing

The findings a decade ago were, well, shocking. Mainstream economists had long argued that free trade was overall a good thing; though there might be some winners and losers, it would generally bring lower prices and widespread prosperity. Then, in 2013, a trio of academic researchers showed convincing evidence that increased trade with China beginning in the early 2000s and the resulting flood of cheap imports had been an unmitigated disaster for many US communities, destroying their manufacturing lifeblood.

The results of what in 2016 they called the “China shock” were gut-wrenching: the loss of 1 million US manufacturing jobs and 2.4 million jobs in total by 2011. Worse, these losses were heavily concentrated in what the economists called “trade-exposed” towns and cities (think furniture makers in North Carolina).

If in retrospect all that seems obvious, it’s only because the research by David Autor, an MIT labor economist, and his colleagues has become an accepted, albeit often distorted, political narrative these days: China destroyed all our manufacturing jobs! Though the nuances of the research are often ignored, the results help explain at least some of today’s political unrest. It’s reflected in rising calls for US protectionism, President Trump’s broad tariffs on imported goods, and nostalgia for the lost days of domestic manufacturing glory.

The impacts of the original China shock still scar much of the country. But Autor is now concerned about what he considers a far more urgent problem—what some are calling China shock 2.0. The US, he warns, is in danger of losing the next great manufacturing battle, this time over advanced technologies to make cars and planes as well as those enabling AI, quantum computing, and fusion energy.

Recently, I asked Autor about the lingering impacts of the China shock and the lessons it holds for today’s manufacturing challenges.

How are the impacts of the China shock still playing out?

I have a recent paper looking at 20 years of data, from 2000 to 2019. We tried to ask two related questions. One, if you look at the places that were most exposed, how have they adjusted? And then if you look at the people who are most exposed, how have they adjusted? And how do those two things relate to one another?

It turns out you get two very different answers. If you look at places that were most exposed, they have been substantially transformed. Manufacturing, once it starts going down, never comes back. But after 2010, these trade-impacted local labor markets staged something of an employment recovery, such that employment has grown faster after 2010 in trade-exposed places than non-trade-exposed places because a lot of people have come in. But these are jobs mostly in low-wage sectors. They’re in K–12 education and non-traded health services. They’re in warehousing and logistics. They’re in hospitality and lodging and recreation, and so they’re lower-wage, non-manufacturing jobs. And they’re done by a really different set of people.

The growth in employment is among women, among native-born Hispanics, among foreign-born adults and a lot of young people. The recovery is staged by a very different group from the white and black men, but especially white men, who were most represented in manufacturing. They have not really participated in this renaissance.

Employment is growing, but are these areas prospering?

They have a lower wage structure: fewer high-wage jobs, more low-wage jobs. So they’re not, if your definition of prospering is rapidly rising incomes. But there’s a lot of employment growth. They’re not like ghost towns. But then if you look at the people who were most concentrated in manufacturing—mostly white, non-college, native-born men—they have not prospered. Most of them have not transitioned from manufacturing to non-manufacturing.

One of the great surprises is everyone had believed that people would pull up stakes and move on. In fact, we find the opposite. People in the most adversely exposed places become less likely to leave. They have become less mobile. The presumption was that they would just relocate to find higher ground. And that is not at all what occurred.

What happened to the total number of manufacturing jobs?

There’s been no rebound. Once they go, they just keep going. If there is going to be new manufacturing, it won’t be in the sectors that were lost to China. Those were basically labor-intensive jobs, the kind of low-tech sectors that we will not be getting back. You know—commodity furniture and assembly of things, shoes, construction material. The US wasn’t going to keep them forever, and once they’re gone, it’s very unlikely to get them back.

I know you’ve written about this, but it’s not hard to draw a connection between the dynamics you’re describing—white-male manufacturing jobs going away and new jobs going to immigrants—and today’s political turmoil.

We have a paper about that called “Importing Political Polarization?”

How big a factor would you say it is in today’s political unrest?

I don’t want to say it’s the factor. The China trade shock was a catalyst, but there were lots of other things that were happening. It would be a vast oversimplification to say that it was the sole cause.

But most people don’t work in manufacturing anymore. Aren’t these impacts that you’re talking about, including the political unrest, disproportionate to the actual number of jobs lost?

These are jobs in places where manufacturing is the anchor activity. Manufacturing is very unevenly distributed. It’s not like grocery stores and hospitals that you find in every county. The impact of the China trade shock on these places was like dropping an economic bomb in the middle of downtown. If the China trade shock cost us a few million jobs, and these were all—you know—people in groceries and retail and gas stations, in hospitality and in trucking, you wouldn’t really notice it that much. We lost lots of clerical workers over the last couple of decades. Nobody talks about a clerical shock. Why not? Well, there was never a clerical capital of America. Clerical workers are everywhere. If they decline, it doesn’t wipe out the entire basis of a place.

So it goes beyond the jobs. These places lost their identity.

Maybe. But it’s also the jobs. Manufacturing offered relatively high pay to non-college workers, especially non-college men. It was an anchor of a way of life.

And we’re still seeing the damage.

Yeah, absolutely. It’s been 20 years. What’s amazing is the degree of stasis among the people who are most exposed—not the places, but the people. Though it’s been 20 years, we’re still feeling the pain and the political impacts from this transition.

Clearly, it has now entered the national psyche. Even if it weren’t true, everyone now believes it to have been a really big deal, and they’re responding to it. It continues to drive policy, political resentments, maybe even out of proportion to its economic significance. It certainly has become mythological.

What worries you now?

We’re in the midst of a totally different competition with China now that’s much, much more important. Now we’re not talking about commodity furniture and tube socks. We’re talking about semiconductors and drones and aviation, electric vehicles, shipping, fusion power, quantum, AI, robotics. These are the sectors where the US still maintains competitiveness, but they’re extremely threatened. China’s capacity for high-tech, low-cost, incredibly fast, innovative manufacturing is just unbelievable. And the Trump administration is basically fighting the war of 20 years ago. The loss of those jobs, you know, was devastating to those places. It was not devastating to the US economy as a whole. If we lose Boeing, GM, Apple, and Intel—and that’s quite possible—then that will be economically devastating.

I think some people are calling it China shock 2.0.

Yeah. And it’s well underway.

When we think about advanced manufacturing and why it’s important, it’s not so much about the number of jobs anymore, is it? Is it more about coming up with the next technologies?

It does create good jobs, but it’s about economic leadership. It’s about innovation. It’s about political leadership, and even standard setting for how the rest of the world works.

Should we just accept that manufacturing as a big source of jobs is in the past and move on?

No. It’s still 12 million jobs, right? Instead of the fantasy that we’re going to go back to 18 million or whatever—we had, what, 17.7 million manufacturing jobs in 1999—we should be worried about the fact that we’re going to end up at 6 million, that we’re going to lose 50% in the next decade. And that’s quite possible. And the Trump administration is doing a lot to help that process of loss along.

We have a labor market of over 160 million people, so it’s like 8% of employment. It’s not zero. So you should not think of it as too small to worry about it. It’s a lot of people; it’s a lot of jobs. But more important, it’s a lot of what has helped this country be a leader. So much innovation happens here, and so many of the things in which other countries are now innovating started here. It’s always been the case that the US tends to innovate in sectors and then lose them after a while and move on to the next thing. But at this point, it’s not clear that we’ll be in the frontier of a lot of these sectors for much longer.

So we want to revive manufacturing, but the right kind—advanced manufacturing?

The notion that we should be assembling iPhones in the United States, which Trump wants, is insane. Nobody wants to do that work. It’s horrible, tedious work. It pays very, very little. And if we actually did it here, it would make the iPhones 20% more expensive or more. Apple may very well decide to pay a 25% tariff rather than make the phones here. If Foxconn started doing iPhone assembly here, people would not be lining up for that job.

But at the same time, we do need new people coming into manufacturing.

But not that manufacturing. Not tedious, mind-numbing, eyestrain-inducing assembly.

We need them to do high-tech work. Manufacturing is a skilled activity. We need to build airplanes better. That takes a ton of expertise. Assembling iPhones does not.

What are your top priorities to head off China shock 2.0?

I would choose sectors that are important, and I would invest in them. I don’t think that tariffs are never justified, or industrial policies are never justified. I just don’t think protecting phone assembly is smart industrial policy. We really need to improve our ability to make semiconductors. I think that’s important. We need to remain competitive in the automobile sector—that’s important. We need to improve aviation and drones. That’s important. We need to invest in fusion power. That’s important. We need to adopt robotics at scale and improve in that sector. That’s important. I could come up with 15 things where I think public money is justified, and I would be willing to tolerate protections for those sectors.

What are the lasting lessons of the China shock and the opening up of global trade in the 2000s?

We did it too fast. We didn’t do enough to support people, and we pretended it wasn’t going on.

When we started the China shock research back around 2011, we really didn’t know what we’d find, and so we were as surprised as anyone. But the work has changed our own way of thinking and, I think, has been constructive—not because it has caused everyone to do the right thing, but it at least caused people to start asking the right questions.

What do the findings tell us about China shock 2.0?

I think the US is handling that challenge badly. The problem is much more serious this time around. The truth is, we have a sense of what the threats are. And yet we’re not seemingly responding in a very constructive way. Although we now know how seriously we should take this, the problem is that it doesn’t seem to be generating very serious policy responses. We’re generating a lot of policy responses—they’re just not serious ones.

Don’t let hype about AI agents get ahead of reality

Google’s recent unveiling of what it calls a “new class of agentic experiences” feels like a turning point. At its I/O 2025 event in May, for example, the company showed off a digital assistant that didn’t just answer questions; it helped work on a bicycle repair by finding a matching user manual, locating a YouTube tutorial, and even calling a local store to ask about a part, all with minimal human nudging. Such capabilities could soon extend far outside the Google ecosystem. The company has introduced an open standard called Agent-to-Agent, or A2A, which aims to let agents from different companies talk to each other and work together.

The vision is exciting: Intelligent software agents that act like digital coworkers, booking your flights, rescheduling meetings, filing expenses, and talking to each other behind the scenes to get things done. But if we’re not careful, we’re going to derail the whole idea before it has a chance to deliver real benefits. As with many tech trends, there’s a risk of hype racing ahead of reality. And when expectations get out of hand, a backlash isn’t far behind.

Let’s start with the term “agent” itself. Right now, it’s being slapped on everything from simple scripts to sophisticated AI workflows. There’s no shared definition, which leaves plenty of room for companies to market basic automation as something much more advanced. That kind of “agentwashing” doesn’t just confuse customers; it invites disappointment. We don’t necessarily need a rigid standard, but we do need clearer expectations about what these systems are supposed to do, how autonomously they operate, and how reliably they perform.

And reliability is the next big challenge. Most of today’s agents are powered by large language models (LLMs), which generate probabilistic responses. These systems are powerful, but they’re also unpredictable. They can make things up, go off track, or fail in subtle ways—especially when they’re asked to complete multistep tasks, pulling in external tools and chaining LLM responses together. A recent example: Users of Cursor, a popular AI programming assistant, were told by an automated support agent that they couldn’t use the software on more than one device. There were widespread complaints and reports of users canceling their subscriptions. But it turned out the policy didn’t exist. The AI had invented it.

In enterprise settings, this kind of mistake could create immense damage. We need to stop treating LLMs as standalone products and start building complete systems around them—systems that account for uncertainty, monitor outputs, manage costs, and layer in guardrails for safety and accuracy. These measures can help ensure that the output adheres to the requirements expressed by the user, obeys the company’s policies regarding access to information, respects privacy issues, and so on. Some companies, including AI21 (which I cofounded and which has received funding from Google), are already moving in that direction, wrapping language models in more deliberate, structured architectures. Our latest launch, Maestro, is designed for enterprise reliability, combining LLMs with company data, public information, and other tools to ensure dependable outputs.
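
To make that concrete, here is a minimal sketch, in Python, of what building a system around the model rather than shipping it raw might look like. Everything in it is illustrative: call_llm() stands in for whatever model API an organization actually uses, and the policy check is deliberately naive; a real deployment would rely on retrieval from approved documentation, semantic checks, and human escalation rather than exact string matching.

```python
# A minimal sketch of wrapping an LLM behind guardrails instead of exposing it directly.
# Everything here is illustrative: call_llm() stands in for a real model API, and the
# policy check is deliberately naive (exact matching against approved statements).

from dataclasses import dataclass


def call_llm(prompt: str) -> str:
    """Hypothetical model call; a real system would invoke an actual LLM API here."""
    return "You may use the software on up to three devices."


@dataclass
class Policy:
    """Statements the support agent is allowed to assert, sourced from real documentation."""
    allowed_claims: set[str]


def guarded_answer(question: str, policy: Policy) -> str:
    """Draft an answer with the model, but only release it if it can be verified."""
    draft = call_llm(question)
    if draft in policy.allowed_claims:
        return draft
    # Fall back to a safe response instead of letting the model invent a policy.
    return "I'm not sure about that. Let me escalate this to a human support agent."


if __name__ == "__main__":
    policy = Policy(allowed_claims={"You may use the software on up to three devices."})
    print(guarded_answer("Can I use this on two laptops?", policy))
```

The point is architectural: the model proposes, a separate layer verifies, and anything that cannot be traced back to approved ground truth gets escalated instead of sent to the user.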

Still, even the smartest agent won’t be useful in a vacuum. For the agent model to work, different agents need to cooperate (booking your travel, checking the weather, submitting your expense report) without constant human supervision. That’s where Google’s A2A protocol comes in. It’s meant to be a universal language that lets agents share what they can do and divide up tasks. In principle, it’s a great idea.

In practice, A2A still falls short. It defines how agents talk to each other, but not what they actually mean. If one agent says it can provide “wind conditions,” another has to guess whether that’s useful for evaluating weather on a flight route. Without a shared vocabulary or context, coordination becomes brittle. We’ve seen this problem before in distributed computing. Solving it at scale is far from trivial.
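
The mismatch is easy to see in miniature. The sketch below is not the actual A2A schema; it is a deliberately simplified illustration, with invented field names, of why matching on capability labels alone is brittle: two agents can use different strings, or the same string with different meanings, for what a human would consider the same skill.

```python
# A deliberately simplified illustration (not the actual A2A schema) of why capability
# labels alone don't guarantee shared meaning between agents. Field names are invented.

from typing import Optional

weather_agent_card = {
    "name": "weather-agent",
    "capabilities": [
        {"skill": "wind_conditions"},  # Surface winds? Winds aloft? Which units?
    ],
}

# What a flight-planning agent actually needs from a peer.
flight_planner_need = {"skill": "winds_aloft", "units": "knots", "altitude_ft": 35000}


def find_provider(agent_cards: list, need: dict) -> Optional[str]:
    """Naive matching: compare skill labels as opaque strings."""
    for card in agent_cards:
        for capability in card["capabilities"]:
            if capability["skill"] == need["skill"]:
                return card["name"]
    return None


# The labels don't line up, so no provider is found; a looser matcher might instead
# pair the two agents up and silently hand the planner the wrong kind of data.
print(find_provider([weather_agent_card], flight_planner_need))  # -> None
```

Real coordination needs either a shared vocabulary for skills, units, and context, or a negotiation step in which one agent can ask another what a capability actually returns.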

There’s also the assumption that agents are naturally cooperative. That may hold inside Google or another single company’s ecosystem, but in the real world, agents will represent different vendors, customers, or even competitors. For example, if my travel planning agent is requesting price quotes from your airline booking agent, and your agent is incentivized to favor certain airlines, my agent might not be able to get me the best or least expensive itinerary. Without some way to align incentives through contracts, payments, or game-theoretic mechanisms, expecting seamless collaboration may be wishful thinking.

None of these issues are insurmountable. Shared semantics can be developed. Protocols can evolve. Agents can be taught to negotiate and collaborate in more sophisticated ways. But these problems won’t solve themselves, and if we ignore them, the term “agent” will go the way of other overhyped tech buzzwords. Already, some CIOs are rolling their eyes when they hear it.

That’s a warning sign. We don’t want the excitement to paper over the pitfalls, only to let developers and users discover them the hard way and develop a negative perspective on the whole endeavor. That would be a shame. The potential here is real. But we need to match the ambition with thoughtful design, clear definitions, and realistic expectations. If we can do that, agents won’t just be another passing trend; they could become the backbone of how we get things done in the digital world.

Yoav Shoham is a professor emeritus at Stanford University and cofounder of AI21 Labs. His 1993 paper on agent-oriented programming received the AI Journal Classic Paper Award. He is coauthor of Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations, a standard textbook in the field.

The Bank Secrecy Act is failing everyone. It’s time to rethink financial surveillance.

The US is on the brink of enacting rules for digital assets, with growing bipartisan momentum to modernize our financial system. But amid all the talk about innovation and global competitiveness, one issue has been glaringly absent: financial privacy. As we build the digital infrastructure of the 21st century, we need to talk about not just what’s possible but what’s acceptable. That means confronting the expanding surveillance powers quietly embedded in our financial system, which today can track nearly every transaction without a warrant.

Many Americans may associate financial surveillance with authoritarian regimes. Yet because of a Nixon-era law called the Bank Secrecy Act (BSA) and the digitization of finance over the past half-century, financial privacy is under increasingly serious threat here at home. Most Americans don’t realize they live under an expansive surveillance regime that likely violates their constitutional rights. Every purchase, deposit, and transaction, from the smallest Venmo payment for a coffee to a large hospital bill, creates a data point in a system that watches you—even if you’ve done nothing wrong.

As a former federal prosecutor, I care deeply about giving law enforcement the tools it needs to keep us safe. But the status quo doesn’t make us safer. It creates a false sense of security while quietly and permanently eroding the constitutional rights of millions of Americans.

When Congress enacted the BSA in 1970, cash was king and organized crime was the target. The law created a scheme under which banks have ever since been required to keep certain records on their customers and turn them over to law enforcement upon request. Unlike a search warrant, which must be issued by a judge or magistrate upon a showing of probable cause that a crime was committed and that specific evidence of that crime exists in the place to be searched, this power is exercised with no checks or balances. A prosecutor can “cut a subpoena”—demanding all your bank records for the past 10 years—with no judicial oversight or limitation on scope, and at no cost to the government. The burden falls entirely on the bank. In contrast, a proper search warrant must be narrowly tailored, with probable cause and judicial authorization.

In United States v. Miller (1976), the Supreme Court upheld the BSA, reasoning that citizens have no “legitimate expectation of privacy” about information shared with third parties, like banks. Thus began the third-party doctrine, enabling law enforcement to access financial records without a warrant. The BSA has been amended several times over the years (most notoriously in 2001 as a part of the Patriot Act), imposing an ever-growing list of recordkeeping obligations on an ever-growing list of financial institutions. Today, it is virtually inescapable for everyday Americans.

In the 1970s, when the BSA was enacted, banking and noncash payments were conducted predominantly through physical means: writing checks, visiting bank branches, and using passbooks. For cash transactions, the BSA required reporting of transactions over the then-kingly sum of $10,000, a figure that was not pegged to inflation and remains the same today. And given the nature of banking services and the technology available at the time, individuals conducted just a handful of noncash payments per month. Today, consumers make at least one payment or banking transaction a day, and just an estimated 16% of those are in cash.

Meanwhile, emerging technologies further expand the footprint of financial data. Add to this the massive pools of personal information already collected by technology platforms—location history, search activity, communications metadata—and you create a world where financial surveillance can be linked to virtually every aspect of your identity, movement, and behavior.

Nor does the BSA actually appear to be effective at achieving its aims. In fiscal year 2024, financial institutions filed about 4.7 million Suspicious Activity Reports (SARs) and over 20 million currency transaction reports. Instead of stopping major crime, the system floods law enforcement with low-value information, overwhelming agents and obscuring real threats. Mass surveillance often reduces effectiveness by drowning law enforcement in noise. But while it doesn’t stop hackers, the BSA creates a permanent trove of information on everyone.

Worse still, the incentives are misaligned and asymmetrical. To avoid liability, financial institutions are required to report anything remotely suspicious. If they fail to file a SAR, they risk serious penalties—even indictment. But they face no consequences for overreporting. The vast overcollection of data is the unsurprising result. These practices, developed under regulation, amount to executive branch actors outsourcing surveillance duties to private institutions, and they demand far clearer guardrails.

But courts have recognized that constitutional privacy must evolve alongside technology. In 2012, the Supreme Court ruled in United States v. Jones that attaching a GPS tracker to a vehicle for prolonged surveillance constituted a search under the Fourth Amendment. Justice Sonia Sotomayor, in a notable concurrence, argued that the third-party doctrine was ill suited to an era when individuals “reveal a great deal of information about themselves to third parties” merely by participating in daily life.

This legal evolution continued in 2018, when the Supreme Court held in Carpenter v. United States that accessing historical cell-phone location records held by a third party required a warrant, recognizing that “seismic shifts in digital technology” necessitate stronger protections and warning that “the fact that such information is gathered by a third party does not make it any less deserving of Fourth Amendment protection.”

The logic of Carpenter applies directly to the mass of financial records being collected today. Just as tracking a person’s phone over time reveals the “whole of their physical movements,” tracking a person’s financial life exposes travel, daily patterns, medical treatments, political affiliations, and personal associations. In many ways, because of the velocity and digital nature of today’s payments, financial data is among the most personal and revealing data there is—and therefore deserves the highest level of constitutional protection.

Though Miller remains formally intact, the writing is on the wall: Indiscriminate financial surveillance such as what we have today is fundamentally at odds with the Fourth Amendment in the digital age.

Technological innovations over the past several decades have brought incredible convenience to economic life. Now our privacy standards must catch up. With Congress considering landmark legislation on digital assets, it’s an important moment to consider what kind of financial system we want—not just in terms of efficiency and access, but in terms of freedom. Rather than striking down the BSA in its entirety, policymakers should narrow its reach, particularly around the bulk collection and warrantless sharing of Americans’ financial data.

Financial surveillance shouldn’t be the price of participation in modern life. The systems we build now will shape what freedom looks like for the next century. It’s time to treat financial privacy like what it is: a cornerstone of democracy, and a right worth fighting for.

Katie Haun is the CEO and founder of Haun Ventures, a venture capital firm focused on frontier technologies. She is a former federal prosecutor who created the US Justice Department’s first cryptocurrency task force. She led investigations into the Mt. Gox hack and the corrupt agents on the Silk Road task force. She clerked for US Supreme Court Justice Anthony Kennedy and is an honors graduate of Stanford Law School.