Useful quantum computing is inevitable—and increasingly imminent

On January 8, Nvidia CEO Jensen Huang jolted the stock market by saying that practical quantum computing is still 15 to 30 years away, while suggesting that those computers will need Nvidia GPUs to implement the necessary error correction.

However, history shows that brilliant people are not immune to making mistakes. Huang’s predictions miss the mark, both on the timeline for useful quantum computing and on the role his company’s technology will play in that future.

I’ve been closely following developments in quantum computing as an investor, and it’s clear to me that it is rapidly converging on utility. Last year, Google’s Willow device demonstrated that there is a promising pathway to scaling up to bigger and bigger computers. It showed that errors can be reduced exponentially as the number of quantum bits, or qubits, increases. It also ran a benchmark test in under five minutes that would take one of today’s fastest supercomputers 10 septillion years. While too small to be commercially useful with known algorithms, Willow shows that quantum supremacy (executing a task that is effectively impossible for any classical computer to handle in a reasonable amount of time) and fault tolerance (correcting errors faster than they are made) are achievable.
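The error-correction result can be made concrete with a little arithmetic. In a surface code, each time the code distance d is raised by two, the logical error rate falls by a roughly constant factor Λ; for Willow, Google reported Λ of roughly 2.14. The sketch below is illustrative only; the starting error rate is an assumed value, not a measured one:

```python
# Illustrative sketch of exponential error suppression in a surface code.
# Assumptions (not from the article): suppression factor LAMBDA of ~2.14
# per two-step increase in code distance, and an assumed starting logical
# error rate EPS_D3 at distance d = 3.
LAMBDA = 2.14   # reported error-suppression factor for Willow
EPS_D3 = 3e-3   # assumed logical error rate at d = 3 (illustrative)

def logical_error_rate(d: int) -> float:
    """Logical error rate at odd code distance d >= 3."""
    if d < 3 or d % 2 == 0:
        raise ValueError("surface-code distance is an odd integer >= 3")
    return EPS_D3 / LAMBDA ** ((d - 3) // 2)

for d in (3, 5, 7, 11, 25):
    print(f"d = {d:2d}: ~{logical_error_rate(d):.1e} logical errors per cycle")
```

Scaling up the code distance buys orders of magnitude: at d = 25 the assumed error rate has fallen by a factor of more than 4,000, which is why adding qubits, counterintuitively, makes the machine more reliable rather than less.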

For example, PsiQuantum, a startup my company is invested in, is set to break ground on two quantum computers that will enter commercial service before the end of this decade. The plan is for each one to be 10,000 times the size of Willow, big enough to tackle important questions about materials, drugs, and the quantum aspects of nature. These computers will not use GPUs to implement error correction. Rather, they will have custom hardware, operating at speeds that would be impossible with Nvidia hardware.

At the same time, quantum algorithms are improving far faster than hardware. A recent collaboration between the pharmaceutical giant Boehringer Ingelheim and PsiQuantum demonstrated a more than 200x improvement in algorithms to simulate important drugs and materials. Phasecraft, another company we have invested in, has improved the simulation performance for a wide variety of crystal materials and has published a quantum-enhanced version of a widely used materials science algorithm that is tantalizingly close to beating all classical implementations on existing hardware.

Advances like these lead me to believe that useful quantum computing is inevitable and increasingly imminent. And that’s good news, because the hope is that they will be able to perform calculations that no amount of AI or classical computation could ever achieve.

We should care about the prospect of useful quantum computers because today we don’t really know how to do chemistry. We lack knowledge about the mechanisms of action for many of our most important drugs. The catalysts that drive our industries are generally poorly understood, require expensive exotic materials, or both. Despite appearances, we have significant gaps in our agency over the physical world; our achievements belie the fact that we are, in many ways, stumbling around in the dark.

Nature operates on the principles of quantum mechanics. Our classical computational methods fail to accurately capture the quantum nature of reality, even though much of our high-performance computing resources are dedicated to this pursuit. Despite all the intellectual and financial capital expended, we still don’t understand why the painkiller acetaminophen works, how type-II superconductors function, or why a simple crystal of iron and nitrogen can produce a magnet with such incredible field strength. We search for compounds in Amazonian tree bark to cure cancer and other maladies, manually rummaging through a pitifully small subset of a design space encompassing 10^60 small molecules. It’s more than a little embarrassing.
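A back-of-the-envelope calculation shows why brute-force search is hopeless here. The numbers below are illustrative assumptions, not measurements: the commonly cited 10^60 estimate for small-molecule chemical space, and an impossibly generous screening rate:

```python
# Back-of-the-envelope: exhaustive search cannot dent chemical space.
# Assumptions (illustrative): ~1e60 candidate small molecules, and a lab
# that synthesizes and tests a billion compounds per day, far beyond any
# real high-throughput screen.
DESIGN_SPACE = 1e60       # commonly cited estimate of small-molecule space
TESTED_PER_DAY = 1e9      # wildly optimistic screening throughput

years_to_exhaust = DESIGN_SPACE / TESTED_PER_DAY / 365.25
print(f"Exhausting the space would take ~{years_to_exhaust:.0e} years")
```

That is dozens of orders of magnitude longer than the age of the universe. Enumeration will never explore such a space; only accurate simulation and design can.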

We do, however, have some tools to work with. In industry, density functional theory (DFT) is the workhorse of computational chemistry and materials modeling, widely used to investigate the electronic structure of many-body systems—such as atoms, molecules, and solids. When DFT is applied to systems where electron-electron correlations are weak, it produces reasonable results. But it fails entirely on a broad class of interesting problems. 

Take, for example, the buzz in the summer of 2023 around the “room-temperature superconductor” LK-99. Many accomplished chemists turned to DFT to try to characterize the material and determine whether it was, indeed, a superconductor. Results were, to put it politely, mixed—so we abandoned our best computational methods, returning to mortar and pestle to try to make some of the stuff. Sadly, although LK-99 might have many novel characteristics, a room-temperature superconductor it isn’t. That’s unfortunate, as such a material could revolutionize energy generation, transmission, and storage, not to mention magnetic confinement for fusion reactors, particle accelerators, and more.

AI will certainly help with our understanding of materials, but it is no panacea. New AI techniques have emerged in the last few years, with some promising results. DeepMind’s Graph Networks for Materials Exploration (GNoME), for example, found 380,000 new potentially stable materials. At its core, though, GNoME depends on DFT, so its performance is only as good as DFT’s ability to produce good answers. 

The fundamental issue is that an AI model is only as good as the data it’s trained on. Training an LLM on the entire internet corpus, for instance, can yield a model that has a reasonable grasp of most human culture and can process language effectively. But if DFT fails for any non-trivially correlated quantum systems, how useful can a DFT-derived training set really be? We could also turn to synthesis and experimentation to create training data, but the number of physical samples we can realistically produce is minuscule relative to the vast design space, leaving a great deal of potential untapped. Only once we have reliable quantum simulations to produce sufficiently accurate training data will we be able to create AI models that answer quantum questions on classical hardware.

And that means that we need quantum computers. They afford us the opportunity to shift from a world of discovery to a world of design. Today’s iterative process of guessing, synthesizing, and testing materials is comically inadequate.

In a few tantalizing cases, we have stumbled on materials, like superconductors, with near-magical properties. How many more might these new tools reveal in the coming years? We will eventually have machines with millions of qubits that, when used to simulate crystalline materials, open up a vast new design space. It will be like waking up one day and finding a million new elements with fascinating properties on the periodic table.

Of course, building a million-qubit quantum computer is not for the faint of heart. Such machines will be the size of supercomputers, and require large amounts of capital, cryoplant, electricity, concrete, and steel. They also require silicon photonics components that perform well beyond anything in industry, error correction hardware that runs fast enough to chase photons, and single-photon detectors with unprecedented sensitivity. But after years of research and development, and more than a billion dollars of investment, the challenge is now moving from science and engineering to construction.

It is impossible to fully predict how quantum computing will affect our world, but a thought exercise might offer a mental model of some of the possibilities. 

Imagine our world without metal. We could have wooden houses built with stone tools, agriculture, wooden plows, movable type, printing, poetry, and even thoughtfully edited science periodicals. But we would have no inkling of phenomena like electricity or electromagnetism—no motors, generators, radio, MRI machines, silicon, or AI. We wouldn’t miss them, as we’d be oblivious to their existence. 

Today, we are living in a world without quantum materials, oblivious to the unrealized potential and abundance that lie just out of sight. With large-scale quantum computers on the horizon and advancements in quantum algorithms, we are poised to shift from discovery to design, entering an era of unprecedented dynamism in chemistry, materials science, and medicine. It will be a new age of mastery over the physical world.

Peter Barrett is a general partner at Playground Global, which invests in early-stage deep-tech companies including several in quantum computing, quantum algorithms, and quantum sensing: PsiQuantum, Phasecraft, NVision, and Ideon.

There can be no winners in a US-China AI arms race

The United States and China are entangled in what many have dubbed an “AI arms race.” 

In the early days of this standoff, US policymakers drove an agenda centered on “winning” the race, mostly from an economic perspective. In recent months, leading AI labs such as OpenAI and Anthropic have joined in pushing the narrative of “beating China,” in what appeared to be an attempt to align themselves with the incoming Trump administration. The belief that the US can win such a race was based mostly on the early advantage it had over China in advanced GPU compute resources and the effectiveness of AI’s scaling laws.

But now it appears that access to large quantities of advanced compute resources is no longer the defining or sustainable advantage many had thought it would be. In fact, the capability gap between leading US and Chinese models has essentially disappeared, and in one important way the Chinese models may now have an advantage: They are able to achieve near equivalent results while using only a small fraction of the compute resources available to the leading Western labs.    

The AI competition is increasingly being framed within narrow national security terms, as a zero-sum game, and influenced by assumptions that a future war between the US and China, centered on Taiwan, is inevitable. The US has employed “chokepoint” tactics to limit China’s access to key technologies like advanced semiconductors, and China has responded by accelerating its efforts toward self-sufficiency and indigenous innovation, which is causing US efforts to backfire.

Recently even outgoing US Secretary of Commerce Gina Raimondo, a staunch advocate for strict export controls, finally admitted that using such controls to hold back China’s progress on AI and advanced semiconductors is a “fool’s errand.” Ironically, the unprecedented export control packages targeting China’s semiconductor and AI sectors have unfolded alongside tentative bilateral and multilateral engagements to establish AI safety standards and governance frameworks—highlighting a paradoxical desire of both sides to compete and cooperate. 

When we consider this dynamic more deeply, it becomes clear that the real existential threat ahead is not from China, but from the weaponization of advanced AI by bad actors and rogue groups who seek to create broad harms, gain wealth, or destabilize society. As with nuclear arms, China, as a nation-state, must be careful about using AI-powered capabilities against US interests, but bad actors, including extremist organizations, would be much more likely to abuse AI capabilities with little hesitation. Given the asymmetric nature of AI technology, which is much like cyberweapons, it is very difficult to fully prevent and defend against a determined foe who has mastered its use and intends to deploy it for nefarious ends. 

Given the ramifications, it is incumbent on the US and China, as global leaders in developing AI technology, to jointly identify and mitigate such threats, collaborate on solutions, and cooperate on developing a global framework for regulating the most advanced models—instead of erecting new fences, small or large, around AI technologies and pursuing policies that deflect focus from the real threat.

It is now clearer than ever that despite the high stakes and escalating rhetoric, there will not and cannot be any long-term winners if the intense competition continues on its current path. Instead, the consequences could be severe—undermining global stability, stalling scientific progress, and leading both nations toward a dangerous technological brinkmanship. This is particularly salient given the importance of Taiwan and the global foundry leader TSMC in the AI stack, and the increasing tensions around the high-tech island. 

Heading blindly down this path will bring the risk of isolation and polarization, threatening not only international peace but also the vast potential benefits AI promises for humanity as a whole.

Historical narratives, geopolitical forces, and economic competition have all contributed to the current state of the US-China AI rivalry. A recent report from the US-China Economic and Security Review Commission, for example, frames the entire issue in binary terms, focused on dominance or subservience. This “winner takes all” logic overlooks the potential for global collaboration and could even provoke a self-fulfilling prophecy by escalating conflict. Under the new Trump administration this dynamic will likely become more accentuated, with increasing discussion of a Manhattan Project for AI and the redirection of US military resources from Ukraine toward China.

Fortunately, a glimmer of hope for a responsible approach to AI collaboration appeared when Donald Trump posted on January 17 that he had restarted direct dialogue with Chairman Xi Jinping on various areas of collaboration, and that, given their past cooperation, the two countries should continue to be “partners and friends.” The outcome of the TikTok drama, which puts Trump at odds with sharp China critics in his own administration and Congress, will be a preview of how his efforts to put US-China relations on a less confrontational trajectory will fare.

The promise of AI for good

Western mass media usually focuses on attention-grabbing issues described in terms like the “existential risks of evil AI.” Unfortunately, the AI safety experts who get the most coverage often recite the same narratives, scaring the public. In reality, no credible research shows that more capable AI will become increasingly evil. We need to challenge the current false dichotomy of pure accelerationism versus doomerism to allow for a model more like collaborative acceleration.

It is important to note the significant difference between the way AI is perceived in Western developed countries and developing countries. In developed countries the public sentiment toward AI is 60% to 70% negative, while in the developing markets the positive ratings are 60% to 80%. People in the latter places have seen technology transform their lives for the better in the past decades and are hopeful AI will help solve the remaining issues they face by improving education, health care, and productivity, thereby elevating their quality of life and giving them greater world standing. What Western populations often fail to realize is that those same benefits could directly improve their lives as well, given the high levels of inequity even in developed markets. Consider what progress would be possible if we reallocated the trillions that go into defense budgets each year to infrastructure, education, and health-care projects. 

Once we get to the next phase, AI will help us accelerate scientific discovery, develop new drugs, extend our health span, reduce our work obligations, and ensure access to high-quality education for all. This may sound idealistic, but given current trends, most of this can become a reality within a generation, and maybe sooner. To get there we’ll need more advanced AI systems, which will be a much more challenging goal if we divide up compute/data resources and research talent pools. Almost half of all top AI researchers globally (47%) were born or educated in China, according to industry studies. It’s hard to imagine how we could have gotten where we are without the efforts of Chinese researchers. Active collaboration with China on joint AI research could be pivotal to supercharging progress with a major infusion of quality training data and researchers. 

The escalating AI competition between the US and China poses significant threats to both nations and to the entire world. The risks inherent in this rivalry are not hypothetical—they could lead to outcomes that threaten global peace, economic stability, and technological progress. Framing the development of artificial intelligence as a zero-sum race undermines opportunities for collective advancement and security. Rather than succumb to the rhetoric of confrontation, it is imperative that the US and China, along with their allies, shift toward collaboration and shared governance.

Our recommendations for policymakers:

1. Reduce national security dominance over AI policy. Both the US and China must recalibrate their approach to AI development, moving away from viewing AI primarily as a military asset. This means reducing the emphasis on national security concerns that currently dominate every aspect of AI policy. Instead, policymakers should focus on civilian applications of AI that can directly benefit their populations and address global challenges, such as health care, education, and climate change. The US also needs to investigate how to implement a possible universal basic income program as job displacement from AI adoption becomes a bigger issue domestically.
2. Promote bilateral and multilateral AI governance. Establishing a robust dialogue between the US, China, and other international stakeholders is crucial for the development of common AI governance standards. This includes agreeing on ethical norms, safety measures, and transparency guidelines for advanced AI technologies. A cooperative framework would help ensure that AI development is conducted responsibly and inclusively, minimizing risks while maximizing benefits for all.
3. Expand investment in detection and mitigation of AI misuse. The risk of AI misuse by bad actors, whether through misinformation campaigns; attacks on telecom, power, or financial systems; or cyberattacks with the potential to destabilize society, is the biggest existential threat to the world today. Dramatically increasing funding for and international cooperation in detecting and mitigating these risks is vital. The US and China must agree on shared standards for the responsible use of AI and collaborate on tools that can monitor and counteract misuse globally.
4. Create incentives for collaborative AI research. Governments should provide incentives for academic and industry collaborations across borders. By creating joint funding programs and research initiatives, the US and China can foster an environment where the best minds from both nations contribute to breakthroughs in AI that serve humanity as a whole. This collaboration would help pool talent, data, and compute resources, overcoming barriers that neither country could tackle alone. A global effort akin to a CERN for AI would bring much more value to the world, and a more peaceful outcome, than the Manhattan Project for AI being promoted by many in Washington today.
5. Establish trust-building measures. Both countries need to prevent misinterpretations of AI-related actions as aggressive or threatening. They could do this via data-sharing agreements, joint projects in nonmilitary AI, and exchanges between AI researchers. Reducing import restrictions for civilian AI use cases, for example, could help the nations rebuild some trust and make it possible for them to discuss deeper cooperation on joint research. These measures would help build transparency, reduce the risk of miscommunication, and pave the way for a less adversarial relationship.
6. Support the development of a global AI safety coalition. A coalition that includes major AI developers from multiple countries could serve as a neutral platform for addressing ethical and safety concerns. This coalition would bring together leading AI researchers, ethicists, and policymakers to ensure that AI progresses in a way that is safe, fair, and beneficial to all. This effort should not exclude China, as it remains an essential partner in developing and maintaining a safe AI ecosystem.
7. Shift the focus toward AI for global challenges. It is crucial that the world’s two AI superpowers use their capabilities to tackle global issues, such as climate change, disease, and poverty. By demonstrating the positive societal impacts of AI through tangible projects and presenting it not as a threat but as a powerful tool for good, the US and China can reshape public perception of AI.

Our choice is stark but simple: We can proceed down a path of confrontation that will almost certainly lead to mutual harm, or we can pivot toward collaboration, which offers the potential for a prosperous and stable future for all. Artificial intelligence holds the promise to solve some of the greatest challenges facing humanity, but realizing this potential depends on whether we choose to race against each other or work together.

The opportunity to harness AI for the common good is a chance the world cannot afford to miss.


Alvin Wang Graylin is a technology executive, author, investor, and pioneer with over 30 years of experience shaping innovation in AI, XR (extended reality), cybersecurity, and semiconductors. Currently serving as global vice president at HTC, Graylin was the company’s China president from 2016 to 2023. He is the author of Our Next Reality.

Paul Triolo is a partner for China and technology policy lead at DGA-Albright Stonebridge Group. He advises clients in technology, financial services, and other sectors as they navigate complex political and regulatory matters in the US, China, the European Union, India, and around the world.

Trump’s win is a tragic loss for climate progress

Donald Trump’s decisive victory is a stunning setback for climate action.

The Republican president-elect’s return to the White House means the US is going to squander precious momentum, unraveling hard-won policy progress that was just beginning to pay off, all for the second time in less than a decade.

It comes at a moment when the world can’t afford to waste time, with nations far off track from any emissions trajectories that would keep our ecosystems stable and our communities safe. Under the policies in place today, the planet is already set to warm by more than 3 °C over preindustrial levels in the coming decades.

Trump could push the globe into even more dangerous terrain, by defanging President Joe Biden’s signature climate laws. In fact, a second Trump administration could boost greenhouse-gas emissions by 4 billion tons through 2030 alone, according to an earlier analysis by Carbon Brief, a well-regarded climate news and data site. That would exacerbate the dangers of heat waves, floods, wildfires, droughts, and famine and increase deaths and disease from air pollution, inflicting some $900 billion in climate damages around the world, Carbon Brief found.

I started as the climate editor at MIT Technology Review just as Trump came into office the last time. Much of the early job entailed covering his systematic unraveling of the modest climate policy and progress that President Barack Obama had managed to achieve. I fear it will be far worse this time, as Trump ambles into office feeling empowered and aggrieved, and ready to test the rule of law and crack down on dissent.

This time he’ll be staffed all the more by loyalists and ideologues, who have already made plans to force out civil servants with expertise and experience from federal agencies including the Environmental Protection Agency. He’ll be backed by a Supreme Court that he moved well to the right, and which has already undercut landmark environmental doctrines and weakened federal regulatory agencies.

This time the setbacks will sting more, too, because the US did finally manage to pass real, substantive climate policy, through the slimmest of congressional margins. The Inflation Reduction Act and Bipartisan Infrastructure Law allocated massive amounts of government funding to accelerating the shift to low-emissions industries and rebuilding the US manufacturing base around a clean-energy economy.

Trump has made clear he will strive to repeal as many of these provisions as he can, tempered perhaps only by Republicans who recognize that these laws are producing revenue and jobs in their districts. Meanwhile, throughout the prolonged presidential campaign, Trump or his surrogates pledged to boost oil and gas production, eliminate federal support for electric vehicles, end pollution rules for power plants, and remove the US from the Paris climate agreement yet again. Each of those goals stands in direct opposition to the deep, rapid emissions cuts now necessary to prevent the planet from tipping past higher and higher temperature thresholds.

Project 2025, considered a blueprint for the early days of a second Trump administration despite his insistence to the contrary, calls for dismantling or downsizing federal institutions including the National Oceanic and Atmospheric Administration and the Federal Emergency Management Agency. That could cripple the nation’s ability to forecast, track, or respond to storms, floods, and fires like those that have devastated communities in recent months.

Observers I’ve spoken to fear that the Trump administration will also return the Department of Energy, which under Biden had evolved its mission toward developing low-emissions technologies, to the primary task of helping companies dig up more fossil fuels.

The US election could create global ripples as well, and very soon. US negotiators will meet with their counterparts at the annual UN climate conference that kicks off next week. With Trump set to move back into the White House in January, they will have little credibility or leverage to nudge other nations to step up their commitments to reducing emissions.

But those are just some of the direct ways that a second Trump administration will enfeeble the nation’s ability to drive down emissions and counter the growing dangers of climate change. He also has considerable power to stall the economy and sow international chaos amid escalating conflicts in Europe and the Middle East.

Trump’s eagerness to enact tariffs, slash government spending, and deport major portions of the workforce may stunt growth, drive up inflation, and chill investment. All that would make it far more difficult for companies to raise the capital and purchase the components needed to build anything in the US, whether that means wind turbines, solar farms, and seawalls or buildings, bridges, and data centers.

President-elect Donald Trump speaks at an election night event in West Palm Beach, Florida. (Win McNamee/Getty Images)

His clumsy handling of the economy and international affairs may also help China extend its dominance in producing and selling the components that are crucial to the energy transition, including batteries, EVs, and solar panels, to customers around the globe.

If one job of a commentator is to find some perspective in difficult moments, I admit I’m mostly failing in this one.

The best I can do is to say that there will be some meaningful lines of defense. For now, at least, state leaders and legislatures can continue to enact and implement stronger climate rules. Other nations could step up their efforts to cut emissions and assert themselves as global leaders on climate.

Private industry will likely continue to invest in and build businesses in climate tech and clean energy, since solar, wind, batteries, and EVs have proved themselves as competitive industries. And technological progress can occur no matter who is sitting in the round room on Pennsylvania Avenue, since researchers continue striving to develop cleaner, cheaper ways of producing our energy, food, and goods.

By any measure, the job of addressing climate change is now much harder. Nothing, however, has changed about the stakes.

Our world doesn’t end if we surpass 2 °C, 2.5 °C, or even 3 °C, but it will steadily become a more dangerous and erratic place. Every tenth of a degree remains worth fighting for—whether two, four, or a dozen years from now—because every bit of warming that nations pull together to prevent eases future suffering somewhere.

So as the shock wears off and the despair begins to lift, the core task before us remains the same: to push for progress, whenever, wherever, and however we can.

Sorry, AI won’t “fix” climate change

In an essay last week, Sam Altman, the CEO of OpenAI, argued that the accelerating capabilities of AI will usher in an idyllic “Intelligence Age,” unleashing “unimaginable” prosperity and “astounding triumphs” like “fixing the climate.”

It’s a promise that no one is in a position to make—and one that, when it comes to the topic of climate change, fundamentally misunderstands the nature of the problem.

More maddening, the argument suggests that the technology’s massive consumption of electricity today doesn’t much matter, since it will allow us to generate abundant clean power in the future. That casually waves away growing concerns about a technology that’s already accelerating proposals for natural-gas plants and diverting major tech companies from their corporate climate targets.

By all accounts, AI’s energy demands will only continue to increase, even as the world scrambles to build larger, cleaner power systems to meet the increasing needs of EV charging, green hydrogen production, heat pumps, and other low-carbon technologies. Altman himself reportedly just met with White House officials to make the case for building absolutely massive AI data centers, which could require the equivalent of five dedicated nuclear reactors to run.

It’s a bedrock perspective of MIT Technology Review that technological advances can deliver real benefits and accelerate societal progress in meaningful ways. But for decades researchers and companies have oversold the potential of AI to deliver blockbuster medicines, achieve superintelligence, and free humanity from the need to work. To be fair, there have been significant advances, but nothing on the order of what’s been hyped.

Given that track record, I’d argue you need to develop a tool that does more than plagiarize journalism and help students cheat on homework before you can credibly assert that it will solve humanity’s thorniest problems, whether the target is rampant poverty or global warming.

To be sure, AI may help the world address the rising dangers of climate change. We have begun to see research groups and startups harness the technology to try to manage power grids more effectively, put out wildfires faster, and discover materials that could create cheaper, better batteries or solar panels.

All those advances are still relatively incremental. But let’s say AI does bring about an energy miracle. Perhaps its pattern-recognition prowess will deliver the key insight that finally cracks fusion—a technology that Altman is betting on heavily as an investor.

That would be fantastic. But technological advances are just the start—necessary but far from sufficient to eliminate the world’s climate emissions.

How do I know?

Because between nuclear fission plants, solar farms, wind turbines, and batteries, we already have every technology we need to clean up the power sector. This should be the low-hanging fruit of the energy transition. Yet in the largest economy on Earth, fossil fuels still generate 60% of the electricity. The fact that so much of our power still comes from coal, petroleum, and natural gas is a regulatory failure as much as a technological one.

“As long as we effectively subsidize fossil fuels by allowing them to use the atmosphere as a waste dump, we are not allowing clean energy to compete on a level playing field,” Zeke Hausfather, a climate scientist at the independent research organization Berkeley Earth, wrote on X in a response to Altman’s post. “We need policy changes, not just tech breakthroughs, to meet our climate goals.”

That’s not to say there aren’t big technical problems we still need to solve. Just look at the continuing struggles to develop clean, cost-competitive ways of fertilizing crops or flying planes. But the fundamental challenges of climate change are sunk costs, development obstacles, and inertia.

We’ve built and paid for a global economy that spews out planet-warming gases, investing trillions of dollars in power plants, steel mills, factories, jets, boilers, water heaters, stoves, and SUVs that run on fossil fuels. And few people or companies will happily write off those investments so long as those products and plants still work. AI can’t remedy all that just by generating better ideas.

To raze and replace the machinery of every industry around the world at the speed now required, we will need increasingly aggressive climate policies that incentivize or force everyone to switch to cleaner plants, products, and practices.

But with every proposal for a stricter law or some big new wind or solar farm, forces will push back, because the plan will hit someone’s wallet, block someone’s views, or threaten the areas or traditions someone cherishes. Climate change is an infrastructure problem, and building infrastructure is a messy human endeavor.

Tech advances can ease some of these issues. Cheaper, better alternatives to legacy industries make hard choices more politically palatable. But there are no improvements to AI algorithms or underlying data sets that solve the challenge of NIMBYism, the conflict between human interests, or the desire to breathe the fresh air in an unsullied wilderness.

    To assert that a single technology—that just happens to be the one your company develops—can miraculously untangle these intractable conflicts of human society is at best self-serving, if not a little naïve. And it’s a troubling idea to proclaim at a point when the growth of that very technology is threatening to undermine the meager progress the world has begun to make on climate change.

    As it is, the one thing we can state confidently about generative AI is that it’s making the hardest problem we’ve ever had to solve that much harder to solve.

    Why we need an AI safety hotline

    In the past couple of years, regulators have been caught off guard again and again as tech companies compete to launch ever more advanced AI models. It’s only a matter of time before labs release another round of models that pose new regulatory challenges. We’re likely just weeks away, for example, from OpenAI’s release of GPT-5, which promises to push AI capabilities further than ever before. As it stands, it seems there’s little anyone can do to delay or prevent the release of a model that poses excessive risks.

    Testing AI models before they’re released is a common approach to mitigating certain risks, and it may help regulators weigh up the costs and benefits—and potentially block models from being released if they’re deemed too dangerous. But the accuracy and comprehensiveness of these tests leave a lot to be desired. AI models may “sandbag” an evaluation—hiding some of their capabilities to avoid raising safety concerns. Evaluations also suffer from limited scope: current tests are unlikely to reliably uncover the full set of risks that warrant further investigation. There’s also the question of who conducts the evaluations and how their biases may influence testing efforts. For those reasons, evaluations need to be used alongside other governance tools. 

    One such tool could be internal reporting mechanisms within the labs. Ideally, employees should feel empowered to regularly and fully share their AI safety concerns with their colleagues, and they should feel those colleagues can then be counted on to act on the concerns. However, there’s growing evidence that, far from being promoted, open criticism is becoming rarer in AI labs. Just three months ago, 13 former and current workers from OpenAI and other labs penned an open letter expressing fear of retaliation if they attempt to disclose questionable corporate behaviors that fall short of breaking the law. 

    How to sound the alarm

    In theory, external whistleblower protections could play a valuable role in the detection of AI risks. These could protect employees fired for disclosing corporate actions, and they could help make up for inadequate internal reporting mechanisms. Nearly every state has a public policy exception to at-will employment termination—in other words, terminated employees can seek recourse against their employers if they were retaliated against for calling out unsafe or illegal corporate practices. However, in practice this exception offers employees few assurances. Judges tend to favor employers in whistleblower cases. The likelihood of AI labs’ surviving such suits seems particularly high given that society has yet to reach any sort of consensus as to what qualifies as unsafe AI development and deployment. 

    These and other shortcomings explain why the aforementioned 13 AI workers, including ex-OpenAI employee William Saunders, called for a novel “right to warn.” Companies would have to offer employees an anonymous process for disclosing risk-related concerns to the lab’s board, a regulatory authority, and an independent third body made up of subject-matter experts. The ins and outs of this process have yet to be figured out, but it would presumably be a formal, bureaucratic mechanism. The board, regulator, and third party would all need to make a record of the disclosure. It’s likely that each body would then initiate some sort of investigation. Subsequent meetings and hearings also seem like a necessary part of the process. Yet if Saunders is to be taken at his word, what AI workers really want is something different. 

    When Saunders went on the Big Technology Podcast to outline his ideal process for sharing safety concerns, his focus was not on formal avenues for reporting established risks. Instead, he indicated a desire for some intermediate, informal step. He wants a chance to receive neutral, expert feedback on whether a safety concern is substantial enough to go through a “high stakes” process such as a right-to-warn system. Current government regulators, as Saunders says, could not serve that role. 

    For one thing, they likely lack the expertise to help an AI worker think through safety concerns. What’s more, few workers will pick up the phone if they know it’s a government official on the other end—that sort of call may be “very intimidating,” as Saunders himself said on the podcast. Instead, he envisages being able to call an expert to discuss his concerns. In an ideal scenario, he’d be told that the risk in question does not seem that severe or likely to materialize, freeing him up to return to whatever he was doing with more peace of mind. 

    Lowering the stakes

    What Saunders is asking for in this podcast isn’t a right to warn, then, as that suggests the employee is already convinced there’s unsafe or illegal activity afoot. What he’s really calling for is a gut check—an opportunity to verify whether a suspicion of unsafe or illegal behavior seems warranted. The stakes would be much lower, so the regulatory response could be lighter. The third party responsible for weighing up these gut checks could be a much more informal one. For example, AI PhD students, retired AI industry workers, and other individuals with AI expertise could volunteer for an AI safety hotline. They could be tasked with quickly and expertly discussing safety matters with employees via a confidential and anonymous phone conversation. Hotline volunteers would have familiarity with leading safety practices, as well as extensive knowledge of what options, such as right-to-warn mechanisms, may be available to the employee. 

    As Saunders indicated, few employees will likely want to go from 0 to 100 with their safety concerns—straight from colleagues to the board or even a government body. They are much more likely to raise their issues if an intermediate, informal step is available.

    Studying examples elsewhere

    The details of how precisely an AI safety hotline would work deserve more debate among AI community members, regulators, and civil society. For the hotline to realize its full potential, for instance, it may need some way to escalate the most urgent, verified reports to the appropriate authorities. How to ensure the confidentiality of hotline conversations is one matter needing thorough investigation; how to recruit and retain volunteers is another. Given leading experts’ broad concern about AI risk, some may be willing to participate simply out of a desire to lend a hand. Should too few folks step forward, other incentives may be necessary. The essential first step, though, is acknowledging this missing piece in the puzzle of AI safety regulation. The next step is looking for models to emulate in building out the first AI hotline. 

    One place to start is with ombudspersons. Other industries have recognized the value of identifying these neutral, independent individuals as resources for evaluating the seriousness of employee concerns. Ombudspersons exist in academia, nonprofits, and the private sector. The distinguishing attribute of these individuals and their staffers is neutrality—they have no incentive to favor one side or the other, and thus they’re more likely to be trusted by all. A glance at the use of ombudspersons in the federal government shows that when they are available, issues may be raised and resolved sooner than they would be otherwise.

    This concept is relatively new. The US Department of Commerce established the first federal ombudsman in 1971. The office was tasked with helping citizens resolve disputes with the agency and investigating agency actions. Other agencies, including the Social Security Administration and the Internal Revenue Service, soon followed suit. A retrospective review of these early efforts concluded that effective ombudspersons can meaningfully improve citizen-government relations. On the whole, ombudspersons were associated with an uptick in voluntary compliance with regulations and cooperation with the government. 

    An AI ombudsperson or safety hotline would surely have different tasks and staff from an ombudsperson in a federal agency. Nevertheless, the general concept is worthy of study by those advocating safeguards in the AI industry. 

    A right to warn may play a role in getting AI safety concerns aired, but we need to set up more intermediate, informal steps as well. An AI safety hotline is low-hanging regulatory fruit. A pilot made up of volunteers could be organized in relatively short order and provide an immediate outlet for those, like Saunders, who merely want a sounding board.

    Kevin Frazier is an assistant professor at St. Thomas University College of Law and senior research fellow in the Constitutional Studies Program at the University of Texas at Austin.

    Why a ruling against the Internet Archive threatens the future of America’s libraries

    I was raised in the 1980s and ’90s, and for my generation and generations before us, the public library was an equalizing force in every town, helping anyone move toward the American dream. In Chantilly, Virginia, where I grew up, it didn’t matter if you didn’t have a computer or your parents lacked infinite money for tutors—you could get a lifetime’s education for free at the public library. A ruling from the US Second Circuit against the Internet Archive and in favor of publisher Hachette has just thrown that promise of equality into doubt by limiting libraries’ access to digital lending.

    To understand why this is so important to the future of libraries, you first have to understand the dire state of library e-book lending. 

    Libraries have traditionally operated on a basic premise: Once they purchase a book, they can lend it out to patrons as much (or as little) as they like. Library copies often come from publishers, but they can also come from donations, used book sales, or other libraries. However the library obtains the book, once the library legally owns it, it is theirs to lend as they see fit. 

    Not so for digital books. To make licensed e-books available to patrons, libraries have to pay publishers multiple times over. First, they must subscribe (for a fee) to aggregator platforms such as Overdrive. Aggregators, like streaming services such as HBO’s Max, have total control over adding or removing content from their catalogue. Content can be removed at any time, for any reason, without input from your local library. The decision happens not at the community level but at the corporate one, thousands of miles from the patrons affected. 

    Then libraries must purchase each individual copy of each individual title that they want to offer as an e-book. These e-book copies are not only priced at a steep markup—up to 300% over consumer retail—but are also time- and loan-limited, meaning the files self-destruct after a certain number of loans. The library then needs to repurchase the same book, at a new price, in order to keep it in stock. 

    This upending of the traditional order puts massive financial strain on libraries and the taxpayers that fund them. It also opens up a world of privacy concerns; while libraries are restricted in the reader data they can collect and share, private companies are under no such obligation.

    Some libraries have turned to another solution: controlled digital lending, or CDL, a process by which a library scans the physical books it already has in its collection, makes secure digital copies, and lends those out on a one-to-one “owned to loaned” ratio.  The Internet Archive was an early pioneer of this technique.

    When the digital copy is loaned, the physical copy is sequestered from borrowing; when the physical copy is checked out, the digital copy becomes unavailable. The benefits to libraries are obvious: delicate books can be circulated without fear of damage; volumes can be moved off-site for facilities work without interrupting patron access; and older and endangered works become searchable and can get a second chance at life. Library patrons, who fund their local library’s purchases with their tax dollars, also benefit from the ability to freely access the books.
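The core of CDL is a single invariant: total loans, physical plus digital, never exceed copies owned. Below is a minimal sketch of that invariant, under our own simplifying assumptions; it is not the Internet Archive's actual implementation, and the class and method names are illustrative.

```python
# Illustrative sketch of controlled digital lending's one-to-one
# "owned to loaned" rule. A digital loan and a physical loan draw from
# the same pool of owned copies, so lending one format sequesters the other.
class CDLTitle:
    def __init__(self, copies_owned: int):
        self.copies_owned = copies_owned
        self.checked_out = 0  # physical and digital loans, combined

    def checkout(self) -> bool:
        """Lend one copy (physical or digital) if any remain available."""
        if self.checked_out < self.copies_owned:
            self.checked_out += 1
            return True
        return False  # every owned copy is already on loan

    def checkin(self) -> None:
        """Return one copy to the available pool."""
        if self.checked_out > 0:
            self.checked_out -= 1

book = CDLTitle(copies_owned=1)
assert book.checkout()      # digital loan succeeds
assert not book.checkout()  # physical copy is sequestered: no second loan
book.checkin()
assert book.checkout()      # copy returned, so it can circulate again
```

However the loans are tracked in practice, the point is that the ratio of loans to owned copies never exceeds one, mirroring how a physical collection behaves.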

    Publishers are, unfortunately, not fans of this model, and in 2020 four of them sued the Internet Archive over its CDL program. The suit ultimately focused on the Internet Archive’s lending of 127 books that were already commercially available through licensed aggregators. The publisher plaintiffs accused the Internet Archive of mass copyright infringement, while the Internet Archive argued that its digitization and lending program was a fair use. The trial court sided with the publishers, and on September 4, the Court of Appeals for the Second Circuit affirmed that decision with some alterations to the underlying reasoning.

    This decision harms libraries. It locks them into an e-book ecosystem designed to extract as much money as possible while harvesting (and reselling) reader data en masse. It leaves local communities’ reading habits at the mercy of curatorial decisions made by four dominant publishing companies thousands of miles away. It steers Americans away from one of the few remaining bastions of privacy protection and funnels them into a surveillance ecosystem that, like Big Tech, becomes more dangerous with each passing data breach. And by increasing the price for access to knowledge, it puts up even more barriers between underserved communities and the American dream.

    It doesn’t stop there. This decision also renders the fair use doctrine—legally crucial in everything from parody to education to news reporting—almost unusable. And while there were occasional moments of sanity (such as recognizing that a “Donate here” button does not magically turn a nonprofit into a commercial enterprise), this decision fractured, rather than clarified, the law. 

    If the courts won’t recognize CDL-based library lending as fair use, then the next step falls to Congress. Libraries are in crisis, caught between shrinking budgets and growing demand for services. Congress must act now to ensure that a pillar of equality in our communities isn’t sacrificed on the altar of profit. 

    Chris Lewis is president and CEO of Public Knowledge, a consumer advocacy group that works to shape technology policy in the public interest. Public Knowledge promotes freedom of expression, an open internet, and access to affordable communications tools and creative works.

    How the US and its allies can rebuild economic security

    A country’s economic security—its ability to generate both national security and economic prosperity—is grounded in having significant technological capabilities that outpace those of its adversaries and complement those of its allies. Though this is a principle well known throughout history, the move over the last few decades toward globalization and offshoring of technologically advanced industrial capacity has made ensuring a nation-state’s security and economic prosperity increasingly problematic. A broad span of technologies ranging from automation and secure communications to energy storage and vaccine design are the basis for wider economic prosperity—and high priorities for governments seeking to maintain national security. However, the necessary capabilities do not spring up overnight. They rely upon long decades of development, years of accumulated knowledge, and robust supply chains.

    For the US and, especially, its allies in NATO, a particular problem has emerged: a “missing middle” in technology investment. Insufficient capital is allocated toward the maturation of breakthroughs in critical technologies to ensure that they can be deployed at scale. Investment is allocated either toward the rapid deployment of existing technologies or to scientific ideas that are decades away from delivering practical capability or significant economic impact (for example, quantum computers). But investment in scaling manufacturing technologies, learning by doing, and maturing emerging technologies into a next-generation industrial base is too often absent. Without this middle-ground commitment, the United States and its partners lack the production know-how that will be crucial for tomorrow’s batteries, the next generation of advanced computing, alternative solar photovoltaic cells, and active pharmaceutical ingredients.

    While this once mattered only for economic prosperity, it is now a concern for national security too—especially given that China has built strong supply chains and other domestic capabilities that confer both economic security and significant geopolitical leverage.

    Consider drone technology. Military doctrine has shifted toward battlefield technology that relies upon armies of small, relatively cheap products enabled by sophisticated software—from drones above the battlefield to autonomous boats to CubeSats in space.

    Drones have played a central role in the war in Ukraine. First-person view (FPV) drones—those controlled by a pilot on the ground via a video stream—are often strapped with explosives to act as precision kamikaze munitions and have been essential to Ukraine’s frontline defenses. While many foundational technologies for FPV drones were pioneered in the West, China now dominates the manufacturing of drone components and systems, which ultimately enables the country to have a significant influence on the outcome of the war.

    When the history of the war in Ukraine is written, it will be taught as the first true “drone war.” But it should also be understood as an industrial wake-up call: a time when the role of a drone’s component parts was laid bare and the supply chains that support this technology—the knowledge, production operations, and manufacturing processes—were found wanting. Heroic stories will be told of Ukrainian ingenuity in building drones with Chinese parts in basements and on kitchen tables, and we will hear of the country’s attempt to rebuild supply chains dominated by China while in the midst of an existential fight for survival. But in the background, we will also need to understand the ways in which other nations, especially China, controlled the war through long-term economic policies focused on capturing industrial capacity that the US and its allies failed to support through to maturity.

    Disassemble one of the FPV drones found across the battlefields of Ukraine and you will find about seven critical subsystems: power, propulsion, flight control, navigation and sensors (which gather location data and other information to support flight), compute (the processing and memory capacity needed to analyze the vast array of information and then support operations), communications (to connect the drone to the ground), and—supporting it all—the airframe.

    We have created a bill of materials listing the components necessary to build an FPV drone and the common suppliers for those parts.
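As a rough illustration of how such a bill of materials might be organized around the seven subsystems described above, here is a hypothetical sketch; the part names are generic placeholders, not the authors' actual data or supplier list.

```python
# Hypothetical, simplified bill of materials for an FPV drone, keyed by
# the seven critical subsystems. Part names are illustrative only.
fpv_drone_bom = {
    "power":                  ["LiPo battery pack", "power distribution board"],
    "propulsion":             ["brushless DC motors", "electronic speed controllers", "propellers"],
    "flight_control":         ["flight controller board"],
    "navigation_and_sensors": ["IMU", "GPS module", "barometer"],
    "compute":                ["microcontroller / SoC", "onboard memory"],
    "communications":         ["video transmitter", "radio receiver", "antennas"],
    "airframe":               ["frame", "fasteners", "wiring harness"],
}

assert len(fpv_drone_bom) == 7  # one entry per critical subsystem
```

Tracing each of these component categories back to its manufacturers is what exposes how concentrated the supply chain has become.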

    China’s manufacturing dominance has resulted in a domestic workforce with the experience to achieve process innovations and product improvements that have no equal in the West.  And it has come with the sophisticated supply chains that support a wide range of today’s technological capabilities and serve as the foundations for the next generation. None of that was inevitable. For example, most drone electronics are integrated on printed circuit boards (PCBs), a technology that was developed in the UK and US. However, first-mover advantage was not converted into long-term economic or national security outcomes, and both countries have lost the PCB supply chain to China.

    Propulsion is another case in point. The brushless DC motors used to convert electrical energy from batteries into mechanical energy to rotate drone propellers were invented in the US and Germany. The sintered neodymium (NdFeB) permanent magnets used in these motors were invented in Japan and the US. Today, to our knowledge, all brushless DC motors for drones are made in China. Similarly, China dominates all steps in the processing and manufacture of NdFeB magnets, accounting for 92% of global NdFeB magnet and magnet alloy markets.

    The missing middle of technology investment—insufficient funding for commercial production—is evident in each and every one of these failures, but the loss of expertise is an added dimension. For example, lithium polymer (LiPo) batteries are at the heart of every FPV drone. LiPo batteries use a solid or gel polymer electrolyte and achieve higher specific energy (energy per unit of weight)—a feature that is crucial for lightweight drones. Today, you would be hard-pressed to find a LiPo battery that was not manufactured in China. The experienced workforce behind these companies has contributed to learning curves that have led to a 97% drop in the cost of lithium-ion batteries and a simultaneous 300%-plus increase in battery energy density over the past three decades.

    China’s dominance in LiPo batteries for drones reflects its overall dominance in Li-ion manufacturing. China controls approximately 75% of global lithium-ion capacity—the anode, cathode, electrolyte, and separator subcomponents as well as the assembly into a single unit. It dominates the manufacture of each of these subcomponents, producing over 85% of anodes and over 70% of cathodes, electrolytes, and separators. China also controls the extraction and refinement of minerals needed to make these subcomponents.

    Again, this dominance was not inevitable. Most of the critical breakthroughs needed to invent and commercialize Li-ion batteries were made by scientists in North America and Japan. But in comparison to the US and Europe (at least until very recently), China has taken a proactive stance to coordinate, support, and co-invest with strategic industries to commercialize emerging technologies. China’s Ministry of Industry and Information Technology has been at pains to support these domestic industries.

    The case of Li-ion batteries is not an isolated one. The shift to Chinese dominance in the underlying electronics for FPV drones coincides with the period beginning in 2000, when Shenzhen started to emerge as a global hub for low-cost electronics. This trend was amplified by US corporations from Apple, for which low-cost production in China has been essential, to General Electric, which also sought low-cost approaches to maintain the competitive edge of its products. The global nature of supply chains was seen as a strength for US companies, whose comparative advantage lay in the design and integration of consumer products (such as smartphones) with little or no relevance for national security. Only a small handful of “exquisite systems” essential for military purposes were carefully developed within the US. And even those have relied upon global supply chains.

    While the absence of the high-tech industrial capacity needed for economic security is easy to label, it is not simple to address. Doing so requires several interrelated elements, among them designing and incentivizing appropriate capital investments, creating and matching demand for a talented technology workforce, building robust industrial infrastructure, ensuring visibility into supply chains, and providing favorable financial and regulatory environments for on- and friend-shoring of production. This is a project that cannot be done by the public or the private sector alone. Nor is the US likely to accomplish it absent carefully crafted shared partnerships with allies and partners across both the Atlantic and the Pacific.

    The opportunity to support today’s drones may have passed, but we do have the chance to build a strong industrial base to support tomorrow’s most critical technologies—not simply the eye-catching finished assemblies of autonomous vehicles, satellites, or robots but also their essential components. This will require attention to our manufacturing capabilities, our supply chains, and the materials that are the essential inputs. Alongside a shift in emphasis to our own domestic industrial base must come a willingness to plan and partner more effectively with allies and partners.

    If we do so, we will transform decades of US and allied support for foundational science and technology into tomorrow’s industrial base vital for economic prosperity and national security. But to truly take advantage of this opportunity, we need to value and support our shared, long-term economic security. And this means rewarding patient investment in projects that take a decade or more, incentivizing high-capital industrial activity, and maintaining a determined focus on education and workforce development—all within a flexible regulatory framework.

    Edlyn V. Levine is CEO and co-founder of a stealth-mode technology startup and an affiliate at MIT Sloan School of Management and the Department of Physics at Harvard University. Levine was co-founder and CSO of America’s Frontier Fund, and formerly Chief Technologist for the MITRE Corporation.

    Fiona Murray is the William Porter (1967) Professor of Entrepreneurship at the MIT Sloan School of Management, where she works at the intersection of critical technologies, entrepreneurship, and geopolitics. She is the Vice Chair of the NATO Innovation Fund—a multi-sovereign venture fund for defense, security, and resilience—and served for a decade on the UK Prime Minister’s Council on Science and Technology.

    The SEC’s new climate rules were a missed opportunity to accelerate corporate action

    This week, the US Securities and Exchange Commission enacted a set of long-awaited climate rules, requiring most publicly traded companies to disclose their greenhouse-gas emissions and the climate risks building up on their balance sheets. 

    Unfortunately, the federal agency watered down the regulations amid intense lobbying from business interests, undermining their ultimate effectiveness—and missing the best shot the US may have for some time at forcing companies to reckon with the rising dangers of a warming world. 

    These new regulations were driven by the growing realization that climate risks are financial risks. Global corporations now face climate-related supply chain disruptions. Their physical assets are vulnerable to storms, their workers will be exposed to extreme heat events, and some of their customers may be forced to relocate. There are fossil-fuel assets on their balance sheets that they may never be able to sell, and their business models will be challenged by a rapidly changing planet.

    These are not just coal and oil companies. They are utilities, transportation companies, material producers, consumer product companies, even food companies. And investors—you, me, your aunt’s pension—are buying and holding these fossilized stocks, often unknowingly.

    Investors, policymakers, and the general public all need clearer, better information on how businesses are accelerating climate change, what they are doing to address those impacts, and what the cascading effects could mean for their bottom line.

    The new SEC rules formalize and mandate what has essentially been a voluntary system of corporate carbon governance, now requiring corporations to report how climate-related risks may affect their business.

    They also must disclose their “direct emissions” from sources they own or control, as well as their indirect emissions from the generation of “purchased energy,” which generally means their use of electricity and heat. 

    But crucially, companies will have to do so only when they determine that the information is financially “material,” providing companies considerable latitude over whether they do or don’t provide those details.

    The original draft of the SEC rules would have also required corporations to report emissions from “upstream and downstream activities” in their value chains. That generally refers to the associated emissions from their suppliers and customers, which can often make up 80% of a company’s total climate pollution.  
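The disclosure logic described above maps onto the GHG Protocol's common Scope 1/2/3 framing: direct emissions (Scope 1) and purchased-energy emissions (Scope 2) are reportable only if the company deems them material, while value-chain emissions (Scope 3) were cut entirely. Here is a minimal sketch of that rule; the function name and all figures are invented for illustration.

```python
# Sketch of what a company would disclose under the final SEC rules,
# expressed in GHG Protocol scope terms. Numbers are in tCO2e and are
# purely illustrative.
def reportable_emissions(scope1: float, scope2: float, scope3: float,
                         is_material: bool) -> dict:
    """Return the emissions a company would report to the SEC."""
    report = {}
    if is_material:  # the materiality judgment is left to the company
        report["direct (Scope 1)"] = scope1
        report["purchased energy (Scope 2)"] = scope2
    # Scope 3 (value chain) was dropped from the final rules, even though
    # it can make up roughly 80% of a company's total climate pollution.
    return report

disclosed = reportable_emissions(scope1=50_000, scope2=20_000,
                                 scope3=280_000, is_material=True)
assert sum(disclosed.values()) == 70_000  # Scope 3 never appears
```

Note how the two escape hatches compound: a company that declares its emissions immaterial reports nothing at all, and even a fully "material" filer omits the largest share of its footprint.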

    The loss of that requirement and the addition of the “materiality” standard both seem attributable to intense pressure from business groups. 

    To be sure, these rules should help make it clearer how some companies are grappling with climate change and their contributions to it. Out of legal caution, plenty of businesses are likely to determine that their emissions are material.

    And clearer information will help accelerate corporate climate action as firms concerned about their reputation increasingly feel pressure from customers, competitors, and some investors to reduce their emissions. 

    But the SEC could and should have gone much further. 

    After all, the EU’s similar policies are much more comprehensive and stringent. California’s emissions disclosure law, signed this past October, goes further still, requiring both public and private corporations with revenues over $1 billion to report every category of emissions, and then to have this data audited by a third party.

    Unfortunately, the SEC rules merely move corporations to the starting line of the process required to decarbonize the economy, at a time when they should already be deep into the race. We know these rules don’t go far enough, because firms already following similar voluntary protocols have shown minimal progress in reducing their greenhouse-gas emissions. 

    The disclosure system upon which the SEC rules are based faces two underlying problems that have limited how much and how effectively any carbon accounting and reporting can be put to use. 

    First: problems with the data itself. The SEC rules grant firms significant latitude in carbon accounting, allowing them to set different boundaries for their “carbon footprint,” model and measure emissions differently, and even vary how they report their emissions. In aggregate, we will end up with corporate reports of the previous year’s partial emissions, with no way to know what a company actually did to reduce its carbon pollution.

    Second: limitations in how stakeholders can use this data. As we’ve seen with voluntary corporate climate commitments, the wide variations in reporting make it impossible to compare firms accurately. Or as the New Climate Institute argues, “The rapid acceleration in the volume of corporate climate pledges, combined with the fragmentation of approaches and the general lack of regulation or oversight, means that it is more difficult than ever to distinguish between real climate leadership and unsubstantiated greenwashing.”

    Investor efforts to evaluate carbon emissions, decarbonization plans, and climate risks through ESG (environmental, social, and governance) rating schemes have merely produced what some academics call “aggregate confusion.” And corporations have faced few penalties for failing to clearly disclose emissions or even meet their own standards. 

    All of which is to say: by failing to require consistent and actionable disclosures, the new SEC carbon accounting and reporting rules largely replicate the problems with voluntary corporate action, and they aren’t going to drive the changes we need, at the speed we need.

    Companies, investors, and the public require rules that drive changes inside companies and that can be properly assessed from outside them. 

    This system needs to track the main sources of corporate emissions and incentivize companies to make real investments in efforts to achieve deep emissions cuts, both within the company and across its supply chain.

    The good news is that even though the rules in place are limited and flawed, regulators, regions, and companies themselves can build upon them to move toward more meaningful climate action.

    The smartest firms and investors are already going beyond the SEC regulations. They’re developing better systems to track the drivers and costs of carbon emissions, and taking concrete steps to address them: reducing fuel use, building energy-efficient infrastructure, and adopting lower-carbon materials, products, and processes. 

    It is now just good business to look for carbon reductions that actually save money.

    The SEC has taken an important, albeit flawed, first step in nudging our financial laws to recognize climate impacts and risks. But regulators and corporations need to pick up the pace from here, ensuring that they’re providing a clear picture of how quickly or slowly companies are moving as they take the steps and make the investments needed to thrive in a transitioning economy—and on an increasingly risky planet.

    Dara O’Rourke is an associate professor and co-director of the master of climate solutions program at the University of California, Berkeley.