In a first, Google has released data on how much energy an AI prompt uses

Google has just released a technical report detailing how much energy its Gemini apps use for each query. In total, the median prompt—one that falls in the middle of the range of energy demand—consumes 0.24 watt-hours of electricity, the equivalent of running a standard microwave for about one second. The company also provided average estimates for the water consumption and carbon emissions associated with a text prompt to Gemini.

It’s the most transparent estimate yet from a Big Tech company with a popular AI product, and the report includes detailed information about how the company calculated its final estimate. As AI has become more widely adopted, there’s been a growing effort to understand its energy use. But public efforts attempting to directly measure the energy used by AI have been hampered by a lack of full access to the operations of a major tech company. 

Earlier this year, MIT Technology Review published a comprehensive series on AI and energy, at which time none of the major AI companies would reveal their per-prompt energy usage. Google’s new publication, at last, allows for a peek behind the curtain that researchers and analysts have long hoped for.

The study takes a broad look at energy demand, accounting not only for the power used by the AI chips that run the models but also for the power drawn by all the other infrastructure needed to support that hardware. 

“We wanted to be quite comprehensive in all the things we included,” said Jeff Dean, Google’s chief scientist, in an exclusive interview with MIT Technology Review about the new report.

That’s significant, because in this measurement, the AI chips—in this case, Google’s custom TPUs, the company’s proprietary equivalent of GPUs—account for just 58% of the total electricity demand of 0.24 watt-hours. 

Another large portion of the energy is used by equipment needed to support AI-specific hardware: The host machine’s CPU and memory account for another 25% of the total energy used. There’s also backup equipment needed in case something fails—these idle machines account for 10% of the total. The final 8% is from overhead associated with running a data center, including cooling and power conversion. 
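Those percentages can be turned into rough per-component figures with a bit of arithmetic. The sketch below is a back-of-the-envelope illustration that assumes the published shares apply directly to the 0.24-watt-hour total; it is not Google's own methodology.

```python
# Back-of-the-envelope split of the reported 0.24 Wh median prompt,
# assuming the published shares apply directly to that total.
TOTAL_WH = 0.24

shares = {
    "TPUs (AI accelerators)": 0.58,
    "Host CPU and memory": 0.25,
    "Idle backup machines": 0.10,
    "Data center overhead (cooling, power conversion)": 0.08,
}

for component, share in shares.items():
    print(f"{component:<50} {share * TOTAL_WH:.3f} Wh ({share:.0%})")

# The published shares add up to about 101% because each figure is rounded.
print(f"Total of published shares: {sum(shares.values()):.0%}")
```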

This sort of report shows the value of industry input to energy and AI research, says Mosharaf Chowdhury, a professor at the University of Michigan and one of the heads of the ML.Energy leaderboard, which tracks energy consumption of AI models. 

Estimates like Google’s are generally something that only companies can produce, because they run at a larger scale than researchers are able to and have access to behind-the-scenes information. “I think this will be a keystone piece in the AI energy field,” says Jae-Won Chung, a PhD candidate at the University of Michigan and another leader of the ML.Energy effort. “It’s the most comprehensive analysis so far.”

Google’s figure, however, is not representative of all queries submitted to Gemini: The company handles a huge variety of requests, and this estimate is calculated from a median energy demand, one that falls in the middle of the range of possible queries.

So some Gemini prompts use much more energy than this: Dean gives the example of feeding dozens of books into Gemini and asking it to produce a detailed synopsis of their content. “That’s the kind of thing that will probably take more energy than the median prompt,” Dean says. Using a reasoning model could also have a higher associated energy demand because these models take more steps before producing an answer.

This report was also strictly limited to text prompts, so it doesn’t represent what’s needed to generate an image or a video. (Other analyses, including one in MIT Technology Review’s Power Hungry series earlier this year, show that these tasks can require much more energy.)

The report also finds that the total energy used to field a Gemini query has fallen dramatically over time. The median Gemini prompt used 33 times more energy in May 2024 than it did in May 2025, according to Google. The company points to advancements in its models and other software optimizations for the improvements.  

Google also estimates the greenhouse gas emissions associated with the median prompt, which it puts at 0.03 grams of carbon dioxide. To get to this number, the company multiplied the total energy used to respond to a prompt by the average emissions per unit of electricity.

Rather than using an emissions estimate based on the US grid average, or the average of the grids where Google operates, the company instead uses a market-based estimate, which takes into account electricity purchases that the company makes from clean energy projects. The company has signed agreements to buy over 22 gigawatts of power from sources including solar, wind, geothermal, and advanced nuclear projects since 2010. Because of those purchases, Google’s emissions per unit of electricity on paper are roughly one-third of those on the average grid where it operates.
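Taken together, the two published figures imply a specific emissions factor, which a line of arithmetic makes visible. The sketch below is purely illustrative and assumes the 0.24-watt-hour and 0.03-gram figures describe the same median prompt:

```python
# Emissions per prompt = energy per prompt x emissions per unit of electricity.
energy_per_prompt_kwh = 0.24 / 1000   # 0.24 Wh expressed in kWh
co2_per_prompt_g = 0.03               # grams of CO2 for the median prompt

# Implied market-based emissions factor, in grams of CO2 per kWh.
implied_factor = co2_per_prompt_g / energy_per_prompt_kwh
print(f"Implied emissions factor: {implied_factor:.0f} g CO2/kWh")   # ~125

# The report says this is roughly one-third of the average grid where Google
# operates, which would put that grid average near three times the figure above.
print(f"Implied grid average: ~{3 * implied_factor:.0f} g CO2/kWh")  # ~375
```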

AI data centers also consume water for cooling, and Google estimates that each prompt consumes 0.26 milliliters of water, or about five drops. 

The goal of this work was to provide users a window into the energy use of their interactions with AI, Dean says. 

“People are using [AI tools] for all kinds of things, and they shouldn’t have major concerns about the energy usage or the water usage of Gemini models, because in our actual measurements, what we were able to show was that it’s actually equivalent to things you do without even thinking about it on a daily basis,” he says, “like watching a few seconds of TV or consuming five drops of water.”

The publication greatly expands what’s known about AI’s resource usage. It follows recent increasing pressure on companies to release more information about the energy toll of the technology. “I’m really happy that they put this out,” says Sasha Luccioni, an AI and climate researcher at Hugging Face. “People want to know what the cost is.”

This estimate and the supporting report contain more public information than has been available before, and it’s helpful to get more information about AI use in real life, at scale, by a major company, Luccioni adds. However, there are still details that the company isn’t sharing in this report. One major question mark is the total number of queries that Gemini gets each day, which would allow estimates of the AI tool’s total energy demand. 

And ultimately, it’s still the company deciding what details to share, and when and how. “We’ve been trying to push for a standardized AI energy score,” Luccioni says, a standard for AI similar to the Energy Star rating for appliances. “This is not a replacement or proxy for standardized comparisons.”

Is Google playing catch-up on search with OpenAI?

This story originally appeared in The Debrief with Mat Honan, a weekly newsletter about the biggest stories in tech from our editor in chief.

I’ve been mulling over something that Will Heaven, our senior editor for AI, pointed out not too long ago: that all the big players in AI seem to be moving in the same directions and converging on the same things. Agents. Deep research. Lightweight versions of models. Etc. 

Some of this makes sense in that they’re seeing similar things and trying to solve similar problems. But when I talked to Will about this, he said, “it almost feels like a lack of imagination, right?” Yeah. It does.

What got me thinking about this, again, was a pair of announcements from Google over the past couple of weeks, both related to the ways search is converging with AI language models, something I’ve spent a lot of time reporting on over the past year. Google took direct aim at this intersection by adding new AI features from Gemini to search, and also by adding search features to Gemini. In using both, what struck me more than how well they work is that they are really just about catching up with OpenAI’s ChatGPT. And their belated appearance in March 2025 doesn’t seem like a great sign for Google.

Take AI Mode, which it announced March 5. It’s cool. It works well. But it’s pretty much a follow-along of what OpenAI was already doing. (Also, don’t be confused by the name. Google already had something called AI Overviews in search, but AI Mode is different and deeper.) As the company explained in a blog post, “This new Search mode expands what AI Overviews can do with more advanced reasoning, thinking and multimodal capabilities so you can get help with even your toughest questions.”

Rather than a brief overview with links out, the AI will dig in and offer more robust answers. You can ask follow-up questions too, something AI Overviews doesn’t support. It feels like quite a natural evolution—so much so that it’s curious why this is not already widely available. For now, it’s limited to people with paid accounts, and even then only via the experimental sandbox of Search Labs. But more to the point, why wasn’t it available, say, last summer?

The second change is that Google added search history to its Gemini chatbot and promises that even more personalization is on the way. On this one, Google says “personalization allows Gemini to connect with your Google apps and services, starting with Search, to provide responses that are uniquely insightful and directly address your needs.”

Much of what these new features are doing, especially AI Mode’s ability to handle follow-up questions and go deep, feels like hitting feature parity with what ChatGPT has been doing for months. It’s also been compared to Perplexity, a generative AI search startup. 

What neither feature feels like is something fresh and new. Neither feels innovative. ChatGPT has long been building user histories and using the information it has to deliver results. While Gemini could also remember things about you, it’s a little bit shocking to me that Google has taken this long to bring in signals from its other products. Obviously there are privacy concerns to field, but this is an opt-in product we’re talking about. 

The other thing is that, at least as I’ve found so far, ChatGPT is just better at this stuff. Here’s a small example. I tried asking both: “What do you know about me?” ChatGPT replied with a really insightful, even thoughtful, profile based on my interactions with it. These aren’t just the things I’ve explicitly told it to remember about me, either. Much of it comes from the context of various prompts I’ve fed it. It’s figured out what kind of music I like. It knows little details about my taste in films. (“You don’t particularly enjoy slasher films in general.”) Some of it is just sort of oddly delightful. For example: “You built a small shed for trash cans with a hinged wooden roof and needed a solution to hold it open.”

Google, despite having literal decades of my email, search, and browsing history, a copy of every digital photo I’ve ever taken, and more darkly terrifying insight into the depths of who I really am than I probably have myself, mostly spat back the kind of profile an advertiser would want, versus a person hoping for useful tailored results. (“You enjoy comedy, music, podcasts, and are interested in both current and classic media.”)

I enjoy music, you say? Remarkable! 

I’m also reminded of something an OpenAI executive said to me late last year, as the company was preparing to roll out search. It has more freedom to innovate precisely because it doesn’t have the massive legacy business that Google does. Yes, it’s burning money while Google mints it. But OpenAI has the luxury of being able to experiment (at least until the capital runs out) without worrying about killing a cash cow like Google has with traditional search. 

Of course, it’s clear that Google and its parent company Alphabet can innovate in many areas—see Google DeepMind’s Gemini Robotics announcement this week, for example. Or ride in a Waymo! But can it do so around its core products and business? It’s not the only big legacy tech company with this problem. Microsoft’s AI strategy to date has largely been reliant on its partnership with OpenAI. And Apple, meanwhile, seems completely lost in the wilderness, as this scathing takedown from longtime Apple pundit John Gruber lays bare.

Google has billions of users and piles of cash. It can leverage its existing base in ways OpenAI or Anthropic (which Google also owns a good chunk of) or Perplexity just aren’t capable of. But I’m also pretty convinced that unless it can be the market leader here, rather than a follower, there are some painful days ahead. But hey, Astra is coming. Let’s see what happens.

Tech workers should shine a light on the industry’s secretive work with the military

It’s a hell of a time to have a conscience if you work in tech. The ongoing Israeli assault on Gaza has brought the stakes of Silicon Valley’s military contracts into stark relief. Meanwhile, corporate leadership has embraced a no-politics-in-the-workplace policy enforced at the point of the knife.

Workers are caught in the middle. Do I take a stand and risk my job, my health insurance, my visa, my family’s home? Or do I ignore my suspicion that my work may be contributing to the murder of innocents on the other side of the world?  

No one can make that choice for you. But I can say with confidence born of experience that such choices can be more easily made if workers know what exactly the companies they work for are doing with militaries at home and abroad. And I also know this: those same companies themselves will never reveal this information unless they are forced to do so—or someone does it for them. 

For those who doubt that workers can make a difference in how trillion-dollar companies pursue their interests, I’m here to remind you that we’ve done it before. In 2017, I played a part in the successful #CancelMaven campaign that got Google to end its participation in Project Maven, a contract with the US Department of Defense to equip US military drones with artificial intelligence. I helped bring to light information that I saw as critically important and within the bounds of what anyone who worked for Google, or used its services, had a right to know. The information I released—about how Google had signed a contract with the DOD to put AI technology in drones and later tried to misrepresent the scope of that contract, which the company’s management had tried to keep from its staff and the general public—was a critical factor in pushing management to cancel the contract. As #CancelMaven became a rallying cry for the company’s staff and customers alike, it became impossible to ignore. 

Today a similar movement, organized under the banner of the coalition No Tech for Apartheid, is targeting Project Nimbus, a joint contract between Google and Amazon to provide cloud computing infrastructure and AI capabilities to the Israeli government and military. As of May 10, just over 97,000 people had signed its petition calling for an end to collaboration between Google, Amazon, and the Israeli military. I’m inspired by their efforts and dismayed by Google’s response. Earlier this month the company fired 50 workers it said had been involved in “disruptive activity” demanding transparency and accountability for Project Nimbus. Several were arrested. It was a decided overreach.  

Google is very different from the company it was seven years ago, and these firings are proof of that. Googlers today are facing off with a company that, in direct response to those earlier worker movements, has fortified itself against new demands. But every Death Star has its thermal exhaust port, and today Google has the same weakness it did back then: dozens if not hundreds of workers with access to information it wants to keep from becoming public. 

Not much is known about the Nimbus contract. It’s worth $1.2 billion and enlists Google and Amazon to provide wholesale cloud infrastructure and AI for the Israeli government and its ministry of defense. Some brave soul leaked a document to Time last month, providing evidence that Google and Israel negotiated an expansion of the contract as recently as March 27 of this year. We also know, from reporting by The Intercept, that Israeli weapons firms are required by government procurement guidelines to buy their cloud services from Google and Amazon. 

Leaks alone won’t bring an end to this contract. The #CancelMaven victory required a sustained focus over many months, with regular escalations, coordination with external academics and human rights organizations, and extensive internal organization and discipline. Having worked on the public policy and corporate comms teams at Google for a decade, I understood that its management does not care about one negative news cycle or even a few of them. Management buckled only after we were able to keep up the pressure and escalate our actions (leaking internal emails, reporting new info about the contract, etc.) for over six months. 

The No Tech for Apartheid campaign seems to have the necessary ingredients. If a strategically placed insider released information not otherwise known to the public about the Nimbus project, it could really increase the pressure on management to rethink its decision to get into bed with a military that’s currently overseeing mass killings of women and children.

My decision to leak was deeply personal and a long time in the making. It certainly wasn’t a spontaneous response to an op-ed, and I don’t presume to advise anyone currently at Google (or Amazon, Microsoft, Palantir, Anduril, or any of the growing list of companies peddling AI to militaries) to follow my example. 

However, if you’ve already decided to put your livelihood and freedom on the line, you should take steps to try to limit your risk. This whistleblower guide is helpful. You may even want to reach out to a lawyer before choosing to share information. 

In 2017, Google was nervous about how its military contracts might affect its public image. Back then, the company responded to our actions by defending the nature of the contract, insisting that its Project Maven work was strictly for reconnaissance and not for weapons targeting—conceding implicitly that helping to target drone strikes would be a bad thing. (An aside: Earlier this year the Pentagon confirmed that Project Maven, which is now a Palantir contract, had been used in targeting drone attacks in Yemen, Iraq, and Syria.) 

Today’s Google has wrapped its arms around the American flag, for good or ill. Yet despite this embrace of the US military, it doesn’t want to be seen as a company responsible for illegal killings. Today it maintains that the work it is doing as part of Project Nimbus “is not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services.” At the same time, it asserts that there is no room for politics at the workplace and has fired those demanding transparency and accountability. This raises a question: If Google is doing nothing sensitive as part of the Nimbus contract, why is it firing workers who are insisting that the company reveal what work the contract actually entails?  

As you read this, AI is helping Israel annihilate Palestinians by expanding the list of possible targets beyond anything that could be compiled by a human intelligence effort, according to +972 Magazine. Some Israel Defense Forces insiders are even sounding the alarm, calling it a dangerous “mass assassination program.” The world has not yet grappled with the implications of the proliferation of AI weaponry, but that is the trajectory we are on. It’s clear that absent sufficient backlash, the tech industry will continue to push for military contracts. It’s equally clear that neither national governments nor the UN is currently willing to take a stand. 

It will take a movement. A document that clearly demonstrates Silicon Valley’s direct complicity in the assault on Gaza could be the spark. Until then, rest assured that tech companies will continue to make as much money as possible developing the deadliest weapons imaginable. 

William Fitzgerald is a founder and partner at the Worker Agency, an advocacy agency in California. Before setting the firm up in 2018, he spent a decade at Google working on its government relations and communications teams.

Google DeepMind’s new AI tool helped create more than 700 new materials

From EV batteries to solar cells to microchips, new materials can supercharge technological breakthroughs. But discovering them usually takes months or even years of trial-and-error research. 

Google DeepMind hopes to change that with a new tool that uses deep learning to dramatically speed up the process of discovering new materials. Called graph networks for materials exploration (GNoME), the technology has already been used to predict structures for 2.2 million new materials, of which more than 700 have gone on to be created in the lab and are now being tested. It is described in a paper published in Nature today. 

Alongside GNoME, Lawrence Berkeley National Laboratory also announced a new autonomous lab. The lab takes data from the materials database that includes some of GNoME’s discoveries and uses machine learning and robotic arms to engineer new materials without the help of humans. Google DeepMind says that together, these advancements show the potential of using AI to scale up the discovery and development of new materials.

GNoME can be described as AlphaFold for materials discovery, according to Ju Li, a materials science and engineering professor at the Massachusetts Institute of Technology. AlphaFold, a DeepMind AI system announced in 2020, predicts the structures of proteins with high accuracy and has since advanced biological research and drug discovery. Thanks to GNoME, the number of known stable materials has grown almost tenfold, to 421,000.

“While materials play a very critical role in almost any technology, we as humanity know only a few tens of thousands of stable materials,” said Dogus Cubuk, materials discovery lead at Google DeepMind, at a press briefing. 

To discover new materials, scientists combine elements across the periodic table. But because there are so many combinations, it’s inefficient to do this process blindly. Instead, researchers build upon existing structures, making small tweaks in the hope of discovering new combinations that hold potential. However, this painstaking process is still very time consuming. Also, because it builds on existing structures, it limits the potential for unexpected discoveries. 

To overcome these limitations, DeepMind combines two different deep-learning models. The first generates more than a billion structures by making modifications to elements in existing materials. The second, however, ignores existing structures and predicts the stability of new materials purely on the basis of chemical formulas. The combination of these two models allows for a much broader range of possibilities. 

Once the candidate structures are generated, they are filtered through DeepMind’s GNoME models. The models predict the decomposition energy of a given structure, which is an important indicator of how stable the material can be. “Stable” materials do not easily decompose, which is important for engineering purposes. GNoME selects the most promising candidates, which go through further evaluation based on known theoretical frameworks.

This process is then repeated multiple times, with each discovery incorporated into the next round of training.

In its first round, GNoME predicted different materials’ stability with a precision of around 5%, but that precision increased quickly throughout the iterative learning process. By the final round, GNoME was correctly predicting the stability of structures over 80% of the time for the first model and 33% of the time for the second. 
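Put together, the process described above is a generate, filter, validate, and retrain loop. The toy sketch below illustrates only the shape of that loop; every function name, threshold, and random-number stand-in for real physics is a hypothetical placeholder, not DeepMind's actual code or API.

```python
import random

# Toy illustration of the generate -> filter -> validate -> retrain loop
# described above. All names, thresholds, and the random-number "physics"
# are hypothetical placeholders, not DeepMind's actual code.

STABILITY_THRESHOLD = 0.0  # decomposition-energy cutoff (placeholder units)

def generate_by_substitution(known):
    """First generator: tweak elements in existing structures."""
    return [("substituted", random.gauss(0.1, 0.3)) for _ in range(1000)]

def generate_from_formulas():
    """Second generator: propose candidates from chemical formulas alone."""
    return [("from_formula", random.gauss(0.3, 0.5)) for _ in range(1000)]

def predict_decomposition_energy(candidate):
    """Stand-in for a GNoME model prediction (imperfect by design)."""
    _, true_energy = candidate
    return true_energy + random.gauss(0, 0.1)

def validate_first_principles(candidate):
    """Stand-in for the slower, more expensive theoretical evaluation."""
    _, true_energy = candidate
    return true_energy < STABILITY_THRESHOLD

known_materials, discovered = [], []
for round_number in range(3):
    candidates = generate_by_substitution(known_materials) + generate_from_formulas()
    # Keep only the candidates the model predicts to be stable.
    shortlisted = [c for c in candidates
                   if predict_decomposition_energy(c) < STABILITY_THRESHOLD]
    # Confirm the shortlist with the expensive first-principles check.
    confirmed = [c for c in shortlisted if validate_first_principles(c)]
    discovered += confirmed
    known_materials += confirmed
    precision = len(confirmed) / len(shortlisted) if shortlisted else 0.0
    print(f"round {round_number}: {len(confirmed)}/{len(shortlisted)} "
          f"shortlisted candidates confirmed (precision {precision:.0%})")
    # In the real pipeline, each round's confirmed discoveries are folded back
    # into training, which is how the reported precision improved over time.
```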

Using AI models to come up with new materials is not a novel idea. The Materials Project, a program led by Kristin Persson at Berkeley Lab, has used similar techniques to discover and improve the stability of 48,000 materials. 

However, GNoME’s size and precision set it apart from previous efforts. It was trained on at least an order of magnitude more data than any previous model, says Chris Bartel, an assistant professor of chemical engineering and materials science at the University of Minnesota. 

Doing similar calculations has previously been expensive and limited in scale, says Yifei Mo, an associate professor of materials science and engineering at the University of Maryland. GNoME allows these computations to scale up with higher accuracy and at much less computational cost, Mo says: “The impact can be huge.”

Once new materials have been identified, it is equally important to synthesize them and prove their usefulness. Berkeley Lab’s new autonomous laboratory, named the A-Lab, has been using some of GNoME’s discoveries with the Materials Project information, integrating robotics with machine learning to optimize the development of such materials.

The lab is capable of making its own decisions about how to make a proposed material and creates up to five initial formulations. These formulations are generated by a machine-learning model trained on existing scientific literature. After each experiment, the lab uses the results to adjust the recipes.
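Described that way, the A-Lab runs a closed propose-synthesize-learn loop. The toy sketch below illustrates only the shape of that loop; the function names and success rate are hypothetical placeholders, not Berkeley Lab's actual A-Lab software.

```python
import random

# Toy illustration of the closed loop described above; every name and number
# here is a hypothetical placeholder, not Berkeley Lab's actual A-Lab software.

def propose_recipes(target, n=5):
    """Stand-in for a model trained on the literature proposing formulations."""
    return [f"{target} recipe #{random.randint(0, 999)}" for _ in range(n)]

def run_robotic_experiment(recipe):
    """Stand-in for robotic synthesis plus characterization of the result."""
    return random.random() < 0.2  # assume ~20% of attempts succeed in this toy

def synthesize(target, max_rounds=4):
    for _ in range(max_rounds):
        for recipe in propose_recipes(target):  # up to five formulations per round
            if run_robotic_experiment(recipe):
                return recipe
            # A real lab would feed the failed result back into the model here,
            # adjusting the next round's recipes rather than starting from scratch.
    return None

print(synthesize("target compound") or "no successful synthesis within budget")
```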

Researchers at Berkeley Lab say that A-Lab was able to perform 355 experiments over 17 days and successfully synthesized 41 out of 58 proposed compounds. That works out to more than two successful syntheses a day.

In a typical, human-led lab, it takes much longer to make materials. “If you’re unlucky, it can take months or even years,” said Persson at a press briefing. Most students give up after a few weeks, she said. “But the A-Lab doesn’t mind failing. It keeps trying and trying.”

Researchers at DeepMind and Berkeley Lab say these new AI tools can help accelerate hardware innovation in energy, computing, and many other sectors.

“Hardware, especially when it comes to clean energy, needs innovation if we are going to solve the climate crisis,” says Persson. “This is one aspect of accelerating that innovation.”

Bartel, who was not involved in the research, says that these materials will be promising candidates for technologies spanning batteries, computer chips, ceramics, and electronics. 

Lithium-ion battery conductors are one of the most promising use cases. Conductors play an important role in batteries by facilitating the flow of electric current between various components. DeepMind says GNoME identified 528 promising lithium-ion conductors among other discoveries, some of which may help make batteries more efficient. 

However, even after new materials are discovered, it usually takes decades for industries to take them to the commercial stage. “If we can reduce this to five years, that will be a big improvement,” says Cubuk.

Correction: This story has been updated to make clear where the lab’s data comes from.

Google launches its third major operating system, Fuchsia

Google is officially rolling out a new operating system, called Fuchsia, to consumers. The release is a bit hard to believe at this point, but Google confirmed the news to 9to5Google, and several members of the Fuchsia team have confirmed it on Twitter. The official launch date was apparently yesterday. Fuchsia is certainly getting a quiet, anti-climactic release, as it’s only being made available to one device, the Google Home Hub, aka the first-generation Nest Hub. There are no expected changes to the UI or functionality of the Home Hub, but Fuchsia is out there. Apparently, Google simply wants to prove out the OS in a consumer environment.

Fuchsia’s one launch device was originally called the Google Home Hub and is a 7-inch smart display that responds to Google Assistant commands. It came out in 2018. The device was renamed the “Nest Hub” in 2019, and it’s only this first-generation device, not the second-generation Nest Hub or Nest Hub Max, that is getting Fuchsia. The Home Hub’s OS has always been an odd duck. When the device was released, Google was pitching a smart display hardware ecosystem to partners based on Android Things, a now-defunct Internet-of-things/kiosk OS. Instead of following the recommendations it gave to hardware partners, Google loaded the Home Hub with its in-house Google Cast Platform—and then undercut all its partners on price.

Fuchsia has long been a secretive project. We first saw the OS as a pre-alpha smartphone UI that was ported to Android in 2017. In 2018, we got the OS running natively on a Pixelbook. After that, the Fuchsia team stopped doing its work in the open and stripped all UI work out of the public repository.

There’s no blog post or any fanfare at all to mark Fuchsia’s launch. Google’s I/O conference happened last week, and the company didn’t make a peep about Fuchsia there, either. Really, this ultra-quiet, invisible release is the most “Fuchsia” launch possible.

Fuchsia is something very rare in the world of tech: it’s a built-from-scratch operating system that isn’t based on Linux. Fuchsia uses a microkernel called “Zircon” that Google developed in house. Creating an operating system entirely from scratch and bringing it all the way to production sounds like a difficult task, but Google managed to do exactly that over the past six years. Fuchsia’s primary app-development framework is Flutter, a cross-platform UI toolkit from Google. Flutter runs on Android, iOS, and the web, so writing Flutter apps today for existing platforms means you’re also writing Fuchsia apps for tomorrow.

The Nest Hub’s switch to Fuchsia is kind of interesting because of how invisible it should be. It will be the first test of Fuchsia’s future-facing Flutter app support—the Google smart display interface is written in Flutter, so Google can take the existing interface, rip out all the Google Cast guts underneath, and plop the exact same interface code down on top of Fuchsia. Google watchers have long speculated that this was the plan all along. Rather than having a disruptive OS switch, Google could just get coders to write in Flutter and then it could seamlessly swap out the operating system.

So, unless we get lucky, don’t expect a dramatic hands-on post of Fuchsia running on the Nest Hub. It’s likely that there isn’t currently much to see or do with the new operating system, and that’s exactly how Google wants it. Fuchsia is more than just a smart-display operating system, though. An old Bloomberg report from 2018 has absolutely nailed the timing of Fuchsia so far, saying that Google wanted to first ship the OS on connected home devices “within three years”—the report turns three years old in July. The report also laid out the next steps for Fuchsia, including an ambitious expansion to smartphones and laptops by 2023.

Taking over the Nest Hub is one thing—no other team at Google really has a vested interest in the Google Cast OS (you could actually argue that the Cast OS is on the way out, as the latest Chromecast is switching to Android). Moving the OS onto smartphones and laptops is an entirely different thing, though, since the Fuchsia team would crash into the Android and Chrome OS divisions. Now you’re getting into politics.