AI means the end of internet search as we’ve known it

We all know what it means, colloquially, to google something. You pop a few relevant words in a search box and in return get a list of blue links to the most relevant results. Maybe some quick explanations up top. Maybe some maps or sports scores or a video. But fundamentally, it’s just fetching information that’s already out there on the internet and showing it to you, in some sort of structured way. 

But all that is up for grabs. We are at a new inflection point.

The biggest change to the way search engines have delivered information to us since the 1990s is happening right now. No more keyword searching. No more sorting through links to click. Instead, we’re entering an era of conversational search. Which means instead of keywords, you use real questions, expressed in natural language. And instead of links, you’ll increasingly be met with answers, written by generative AI and based on live information from all across the internet, delivered in that same natural language. 

Of course, Google—the company that has defined search for the past 25 years—is trying to be out front on this. In May of 2023, it began testing AI-generated responses to search queries, using its large language model (LLM) to deliver the kinds of answers you might expect from an expert source or trusted friend. It calls these AI Overviews. Google CEO Sundar Pichai described this to MIT Technology Review as “one of the most positive changes we’ve done to search in a long, long time.”

AI Overviews fundamentally change the kinds of queries Google can address. You can now ask it things like “I’m going to Japan for one week next month. I’ll be staying in Tokyo but would like to take some day trips. Are there any festivals happening nearby? How will the surfing be in Kamakura? Are there any good bands playing?” And you’ll get an answer—not just a link to Reddit, but a built-out answer with current results. 

More to the point, you can attempt searches that were once pretty much impossible, and get the right answer. You don’t have to be able to articulate what, precisely, you are looking for. You can describe what the bird in your yard looks like, or what the issue seems to be with your refrigerator, or that weird noise your car is making, and get an almost human explanation put together from sources previously siloed across the internet. It’s amazing, and once you start searching that way, it’s addictive.

And it’s not just Google. OpenAI’s ChatGPT now has access to the web, making it far better at finding up-to-date answers to your queries. Microsoft released generative search results for Bing in September. Meta has its own version. The startup Perplexity was doing the same, but with a “move fast, break things” ethos. Literal trillions of dollars are at stake in the outcome as these players jockey to become the next go-to source for information retrieval—the next Google.

Not everyone is excited for the change. Publishers are completely freaked out. The shift has heightened fears of a “zero-click” future, where search referral traffic—a mainstay of the web since before Google existed—vanishes from the scene. 

I got a vision of that future last June, when I got a push alert from the Perplexity app on my phone. Perplexity is a startup trying to reinvent web search. But in addition to delivering deep answers to queries, it will create entire articles about the news of the day, cobbled together by AI from different sources. 

On that day, it pushed me a story about a new drone company from Eric Schmidt. I recognized the story. Forbes had reported it exclusively, earlier in the week, but it had been locked behind a paywall. The image on Perplexity’s story looked identical to one from Forbes. The language and structure were quite similar. It was effectively the same story, but freely available to anyone on the internet. I texted a friend who had edited the original story to ask if Forbes had a deal with the startup to republish its content. But there was no deal. He was shocked and furious and, well, perplexed. He wasn’t alone. Forbes, the New York Times, and Condé Nast have now all sent the company cease-and-desist orders. News Corp is suing for damages. 


It was precisely the nightmare scenario publishers have been so afraid of: The AI was hoovering up their premium content, repackaging it, and promoting it to its audience in a way that didn’t really leave any reason to click through to the original. In fact, on Perplexity’s About page, the first reason it lists to choose the search engine is “Skip the links.”

But this isn’t just about publishers (or my own self-interest). 

People are also worried about what these new LLM-powered results will mean for our fundamental shared reality. Language models have a tendency to make stuff up—they can hallucinate nonsense. Moreover, generative AI can serve up an entirely new answer to the same question every time, or provide different answers to different people on the basis of what it knows about them. It could spell the end of the canonical answer.

But make no mistake: This is the future of search. Try it for a bit yourself, and you’ll see. 

Sure, we will always want to use search engines to navigate the web and to discover new and interesting sources of information. But the links out are taking a back seat. The way AI can put together a well-reasoned answer to just about any kind of question, drawing on real-time data from across the web, just offers a better experience. That is especially true compared with what web search has become in recent years. If it’s not exactly broken (data shows more people are searching with Google more often than ever before), it’s at the very least increasingly cluttered and daunting to navigate. 

Who wants to have to speak the language of search engines to find what you need? Who wants to navigate links when you can have straight answers? And maybe: Who wants to have to learn when you can just know? 


In the beginning there was Archie. It was the first real internet search engine, and it crawled files previously hidden in the darkness of remote servers. It didn’t tell you what was in those files—just their names. It didn’t preview images; it didn’t have a hierarchy of results, or even much of an interface. But it was a start. And it was pretty good. 

Then Tim Berners-Lee created the World Wide Web, and all manner of web pages sprang forth. The Mosaic home page and the Internet Movie Database and Geocities and the Hampster Dance and web rings and Salon and eBay and CNN and federal government sites and some guy’s home page in Turkey.

Until finally, there was too much web to even know where to start. We really needed a better way to get around, to actually find the things we were looking for. 

And so in 1994 Jerry Yang and David Filo created Yahoo, a hierarchical directory of websites. It quickly became the home page for millions of people. And it was … well, it was okay. TBH, and with the benefit of hindsight, I think we all thought it was much better back then than it actually was.

But the web continued to grow and sprawl and expand, every day bringing more information online. Rather than just a list of sites by category, we needed something that actually looked at all that content and indexed it. By the late ’90s that meant choosing from a variety of search engines: AltaVista and AlltheWeb and WebCrawler and HotBot. And they were good—a huge improvement. At least at first.  

But alongside the rise of search engines came the first attempts to exploit their ability to deliver traffic. Precious, valuable traffic, which web publishers rely on to sell ads and retailers use to get eyeballs on their goods. Sometimes this meant stuffing pages with keywords or nonsense text designed purely to push pages higher up in search results. It got pretty bad. 

And then came Google. It’s hard to overstate how revolutionary Google was when it launched in 1998. Rather than just scanning the content, it also looked at the sources linking to a website, which helped evaluate its relevance. To oversimplify: The more something was cited elsewhere, the more reliable Google considered it, and the higher it would appear in results. This breakthrough made Google radically better at retrieving relevant results than anything that had come before. It was amazing.
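To make that link-counting intuition concrete, here is a minimal, hypothetical sketch of a PageRank-style iteration in Python. It is a toy for illustration only, not Google’s production algorithm; the rank_pages function, the damping value, and the toy_web graph are all invented for this example.

```python
# Toy PageRank-style ranking: a page's score grows when other well-scored
# pages link to it. Hypothetical illustration, not Google's actual algorithm.

def rank_pages(links, iterations=20, damping=0.85):
    """links maps each page to the list of pages it links out to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            for target in targets:
                # Each page passes a share of its own score to the pages it cites.
                new_rank[target] += damping * rank[page] / len(targets)
        rank = new_rank
    return rank

# Page "c" is cited most often (including by well-cited pages), so it ranks highest.
toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(sorted(rank_pages(toy_web).items(), key=lambda kv: -kv[1]))
```

The real system weighed many more signals than links, but the basic idea, citations as a proxy for reliability, is what set Google apart.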

Sundar Pichai
Google CEO Sundar Pichai describes AI Overviews as “one of the most positive changes we’ve done to search in a long, long time.”
JENS GYARMATY/LAIF/REDUX

For 25 years, Google dominated search. Google was search, for most people. (The extent of that domination is currently the subject of multiple legal probes in the United States and the European Union.)  

But Google has long been moving away from simply serving up a series of blue links, notes Pandu Nayak, Google’s chief scientist for search. 

“It’s not just so-called web results, but there are images and videos, and special things for news. There have been direct answers, dictionary answers, sports, answers that come with Knowledge Graph, things like featured snippets,” he says, rattling off a litany of Google’s steps over the years to answer questions more directly. 

It’s true: Google has evolved over time, becoming more and more of an answer portal. It has added tools that allow people to just get an answer—the live score to a game, the hours a café is open, or a snippet from the FDA’s website—rather than being pointed to a website where the answer may be. 

But once you’ve used AI Overviews a bit, you realize they are different.

Take featured snippets, the passages Google sometimes chooses to highlight and show atop the results themselves. Those words are quoted directly from an original source. The same is true of knowledge panels, which are generated from information stored in a range of public databases and Google’s Knowledge Graph, its database of trillions of facts about the world.

While these can be inaccurate, the information source is knowable (and fixable). It’s in a database. You can look it up. Not anymore: AI Overviews can be entirely new every time, generated on the fly by a language model’s predictive text combined with an index of the web. 

“I think it’s an exciting moment where we have obviously indexed the world. We built deep understanding on top of it with Knowledge Graph. We’ve been using LLMs and generative AI to improve our understanding of all that,” Pichai told MIT Technology Review. “But now we are able to generate and compose with that.”

The result feels less like querying a database than like asking a very smart, well-read friend. (With the caveat that the friend will sometimes make things up if she does not know the answer.) 

“[The company’s] mission is organizing the world’s information,” Liz Reid, Google’s head of search, tells me from its headquarters in Mountain View, California. “But actually, for a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you.” 

That second concept—accessibility—is what Google is really keying in on with AI Overviews. It’s a sentiment I hear echoed repeatedly while talking to Google execs: They can address more complicated types of queries more efficiently by bringing in a language model to help supply the answers. And they can do it in natural language. 

That will become even more important for a future where search goes beyond text queries. For example, Google Lens, which lets people take a picture or upload an image to find out more about something, uses AI-generated answers to tell you what you may be looking at. Google has even shown off the ability to query live video. 


“We are definitely at the start of a journey where people are going to be able to ask, and get answered, much more complex questions than where we’ve been in the past decade,” says Pichai. 

There are some real hazards here. First and foremost: Large language models will lie to you. They hallucinate. They get shit wrong. When it doesn’t have an answer, an AI model can blithely and confidently spew back a response anyway. For Google, which has built its reputation over the past 20 years on reliability, this could be a real problem. For the rest of us, it could actually be dangerous.

In May 2024, AI Overviews were rolled out to everyone in the US. Things didn’t go well. Google, long the world’s reference desk, told people to eat rocks and to put glue on their pizza. These answers were mostly in response to what the company calls adversarial queries—those designed to trip it up. But still. It didn’t look good. The company quickly went to work fixing the problems—for example, by deprecating so-called user-generated content from sites like Reddit, where some of the weirder answers had come from.

Yet while its errors telling people to eat rocks got all the attention, the more pernicious danger might arise when it gets something less obviously wrong. For example, in doing research for this article, I asked Google when MIT Technology Review went online. It helpfully responded that “MIT Technology Review launched its online presence in late 2022.” This was clearly wrong to me, but for someone completely unfamiliar with the publication, would the error leap out? 

I came across several examples like this, both in Google and in OpenAI’s ChatGPT search. Stuff that’s just far enough off the mark not to be immediately seen as wrong. Google is banking that it can continue to improve these results over time by relying on what it knows about quality sources.

“When we produce AI Overviews,” says Nayak, “we look for corroborating information from the search results, and the search results themselves are designed to be from these reliable sources whenever possible. These are some of the mechanisms we have in place that assure that if you just consume the AI Overview, and you don’t want to look further … we hope that you will still get a reliable, trustworthy answer.”

In the case above, the 2022 answer seemingly came from a reliable source—a story about MIT Technology Review’s email newsletters, which launched in 2022. But the machine fundamentally misunderstood. This is one of the reasons Google uses human beings—raters—to evaluate the results it delivers for accuracy. Ratings don’t correct or control individual AI Overviews; rather, they help train the model to build better answers. But human raters can be fallible. Google is working on that too. 

“Raters who look at your experiments may not notice the hallucination because it feels sort of natural,” says Nayak. “And so you have to really work at the evaluation setup to make sure that when there is a hallucination, someone’s able to point out and say, That’s a problem.”

The new search

Google has rolled out its AI Overviews to upwards of a billion people in more than 100 countries, but it is facing upstarts with new ideas about how search should work.


Google
The search giant has added AI Overviews to search results. These overviews take information from around the web and Google’s Knowledge Graph and use the company’s Gemini language model to create answers to search queries.

What it’s good at: Google’s AI Overviews are great at giving an easily digestible summary in response to even the most complex queries, with sourcing boxes adjacent to the answers. Among the major options, its deep web index feels the most “internety.” But web publishers fear its summaries will give people little reason to click through to the source material.


Perplexity
Perplexity is a conversational search engine that uses third-party large language models from OpenAI and Anthropic to answer queries.

What it’s good at: Perplexity is fantastic at putting together deeper dives in response to user queries, producing answers that are like mini white papers on complex topics. It’s also excellent at summing up current events. But it has gotten a bad rep with publishers, who say it plays fast and loose with their content.


ChatGPT
While Google brought AI to search, OpenAI brought search to ChatGPT. Queries that the model determines will benefit from a web search automatically trigger one, or users can manually select the option to add a web search.

What it’s good at: Thanks to its ability to preserve context across a conversation, ChatGPT works well for performing searches that benefit from follow-up questions—like planning a vacation through multiple search sessions. OpenAI says users sometimes go “20 turns deep” in researching queries. Of these three, it makes links out to publishers least prominent.


When I talked to Pichai about this, he expressed optimism about the company’s ability to maintain accuracy even with the LLM generating responses. That’s because AI Overviews is based on Google’s flagship large language model, Gemini, but it also draws from Knowledge Graph and from what Google considers reputable sources around the web. 

“You’re always dealing in percentages. What we have done is deliver it at, like, what I would call a few nines of trust and factuality and quality. I’d say 99-point-few-nines. I think that’s the bar we operate at, and it is true with AI Overviews too,” he says. “And so the question is, are we able to do this again at scale? And I think we are.”

There’s another hazard as well, though, which is that people ask Google all sorts of weird things. If you want to know someone’s darkest secrets, look at their search history. Sometimes the things people ask Google about are extremely dark. Sometimes they are illegal. Google doesn’t just have to be able to deploy its AI Overviews when an answer can be helpful; it has to be extremely careful not to deploy them when an answer may be harmful. 

“If you go and say ‘How do I build a bomb?’ it’s fine that there are web results. It’s the open web. You can access anything,” Reid says. “But we do not need to have an AI Overview that tells you how to build a bomb, right? We just don’t think that’s worth it.” 

But perhaps the greatest hazard—or biggest unknown—is for anyone downstream of a Google search. Take publishers, who for decades now have relied on search queries to send people their way. What reason will people have to click through to the original source, if all the information they seek is right there in the search result?  

Rand Fishkin, cofounder of the market research firm SparkToro, publishes research on so-called zero-click searches. As Google has moved increasingly into the answer business, the proportion of searches that end without a click has gone up and up. His sense is that AI Overviews are going to explode this trend.  

“If you are reliant on Google for traffic, and that traffic is what drove your business forward, you are in long- and short-term trouble,” he says. 

Don’t panic, is Pichai’s message. He argues that even in the age of AI Overviews, people will still want to click through and go deeper for many types of searches. “The underlying principle is people are coming looking for information. They’re not looking for Google always to just answer,” he says. “Sometimes yes, but the vast majority of the times, you’re looking at it as a jumping-off point.” 

Reid, meanwhile, argues that because AI Overviews allow people to ask more complicated questions and drill down further into what they want, they could even be helpful to some types of publishers and small businesses, especially those operating in the niches: “You essentially reach new audiences, because people can now express what they want more specifically, and so somebody who specializes doesn’t have to rank for the generic query.”


 “I’m going to start with something risky,” Nick Turley tells me from the confines of a Zoom window. Turley is the head of product for ChatGPT, and he’s showing off OpenAI’s new web search tool a few weeks before it launches. “I should normally try this beforehand, but I’m just gonna search for you,” he says. “This is always a high-risk demo to do, because people tend to be particular about what is said about them on the internet.” 

He types my name into a search field, and the prototype search engine spits back a few sentences, almost like a speaker bio. It correctly identifies me and my current role. It even highlights a particular story I wrote years ago that was probably my best known. In short, it’s the right answer. Phew? 

A few weeks after our call, OpenAI incorporated search into ChatGPT, supplementing answers from its language model with information from across the web. If the model thinks a response would benefit from up-to-date information, it will automatically run a web search (OpenAI won’t say who its search partners are) and incorporate those responses into its answer, with links out if you want to learn more. You can also opt to manually force it to search the web if it does not do so on its own. OpenAI won’t reveal how many people are using its web search, but it says some 250 million people use ChatGPT weekly, all of whom are potentially exposed to it.  
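As a rough mental model of that flow (decide whether a query needs fresh information, search if so, then answer with links), here is a hedged sketch in Python. Every name in it, from needs_fresh_info to run_web_search, is a hypothetical stand-in; OpenAI has not described its implementation, and this is not its API.

```python
# Hypothetical sketch of a decide-then-search answer flow. None of these
# functions reflect OpenAI's real implementation; they are stand-ins.

def needs_fresh_info(query: str) -> bool:
    """Crude stand-in for the model's judgment that a query is time-sensitive."""
    time_sensitive = ("latest", "today", "score", "price", "news")
    return any(word in query.lower() for word in time_sensitive)

def run_web_search(query: str) -> list:
    """Placeholder for whatever search backend supplies current results."""
    return [{"title": "Example result", "url": "https://example.com"}]

def answer(query: str, force_search: bool = False) -> str:
    results = run_web_search(query) if force_search or needs_fresh_info(query) else []
    body = "model-generated answer goes here"  # stand-in for the LLM's response
    if results:
        sources = "\n".join(f"- {r['title']}: {r['url']}" for r in results)
        return f"{body}\n\nSources:\n{sources}"
    return body

print(answer("What is the latest 49ers score?"))      # triggers the (fake) search
print(answer("Explain a West Coast offense"))          # answered from the model alone
```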


According to Fishkin, these newer forms of AI-assisted search aren’t yet challenging Google’s search dominance. “It does not appear to be cannibalizing classic forms of web search,” he says. 

OpenAI insists it’s not really trying to compete on search—although frankly this seems to me like a bit of expectation setting. Rather, it says, web search is mostly a means to get more current information than the data its models were trained on, which has cutoff dates that are often months, or even a year or more, in the past. As a result, while ChatGPT may be great at explaining how a West Coast offense works, it has long been useless at telling you what the latest 49ers score is. No more. 

“I come at it from the perspective of ‘How can we make ChatGPT able to answer every question that you have? How can we make it more useful to you on a daily basis?’ And that’s where search comes in for us,” Kevin Weil, the chief product officer with OpenAI, tells me. “There’s an incredible amount of content on the web. There are a lot of things happening in real time. You want ChatGPT to be able to use that to improve its answers and to be able to be a better super-assistant for you.”

Today ChatGPT is able to generate responses for very current news events, as well as near-real-time information on things like stock prices. And while ChatGPT’s interface has long been, well, boring, search results bring in all sorts of multimedia—images, graphs, even video. It’s a very different experience. 

Weil also argues that ChatGPT has more freedom to innovate and go its own way than competitors like Google—even more than its partner Microsoft does with Bing. Both of those are ad-dependent businesses. OpenAI is not. (At least not yet.) It earns revenue from the developers, businesses, and individuals who use it directly. It’s mostly setting large amounts of money on fire right now—it’s projected to lose $14 billion in 2026, by some reports. But one thing it doesn’t have to worry about is putting ads in its search results as Google does. 

Elizabeth Reid
“For a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you,” says Google head of search, Liz Reid.
WINNI WINTERMEYER/REDUX

Like Google, ChatGPT is pulling in information from web publishers, summarizing it, and including it in its answers. But it has also struck financial deals with publishers, a payment for providing the information that gets rolled into its results. (MIT Technology Review has been in discussions with OpenAI, Google, Perplexity, and others about publisher deals but has not entered into any agreements. Editorial was neither party to nor informed about the content of those discussions.)

But the thing is, for web search to accomplish what OpenAI wants—to be more current than the language model—it also has to bring in information from all sorts of publishers and sources that it doesn’t have deals with. OpenAI’s head of media partnerships, Varun Shetty, told MIT Technology Review that it won’t give preferential treatment to its publishing partners.

Instead, OpenAI told me, the model itself finds the most trustworthy and useful source for any given question. And that can get weird too. In that very first example it showed me—when Turley ran that name search—it described a story I wrote years ago for Wired about being hacked. That story remains one of the most widely read I’ve ever written. But ChatGPT didn’t link to it. It linked to a short rewrite from The Verge. Admittedly, this was on a prototype version of search, which was, as Turley said, “risky.” 

When I asked him about it, he couldn’t really explain why the model chose the sources that it did, because the model itself makes that evaluation. The company helps steer it by identifying—sometimes with the help of users—what it considers better answers, but the model actually selects them. 

“And in many cases, it gets it wrong, which is why we have work to do,” said Turley. “Having a model in the loop is a very, very different mechanism than how a search engine worked in the past.”

Indeed! 

The model, whether it’s OpenAI’s GPT-4o or Google’s Gemini or Anthropic’s Claude, can be very, very good at explaining things. But the rationale behind its explanations, its reasons for selecting a particular source, and even the language it may use in an answer are all pretty mysterious. Sure, a model can explain a great many things, but not when it comes to its own answers. 


It was almost a decade ago, in 2016, when Pichai wrote that Google was moving from “mobile first” to “AI first”: “But in the next 10 years, we will shift to a world that is AI-first, a world where computing becomes universally available—be it at home, at work, in the car, or on the go—and interacting with all of these surfaces becomes much more natural and intuitive, and above all, more intelligent.” 

We’re there now—sort of. And it’s a weird place to be. It’s going to get weirder. That’s especially true as these things we now think of as distinct—querying a search engine, prompting a model, looking for a photo we’ve taken, deciding what we want to read or watch or hear, asking for a photo we wish we’d taken, and didn’t, but would still like to see—begin to merge. 

The search results we see from generative AI are best understood as a waypoint rather than a destination. What’s most important may not be search in itself; rather, it’s that search has given AI model developers a path to incorporating real-time information into their inputs and outputs. And that opens up all sorts of possibilities.

“A ChatGPT that can understand and access the web won’t just be about summarizing results. It might be about doing things for you. And I think there’s a fairly exciting future there,” says OpenAI’s Weil. “You can imagine having the model book you a flight, or order DoorDash, or just accomplish general tasks for you in the future. It’s just once the model understands how to use the internet, the sky’s the limit.”

This is the agentic future we’ve been hearing about for some time now, and the more AI models make use of real-time data from the internet, the closer it gets. 

Let’s say you have a trip coming up in a few weeks. An agent that can get data from the internet in real time can book your flights and hotel rooms, make dinner reservations, and more, based on what it knows about you and your upcoming travel—all without your having to guide it. Another agent could, say, monitor the sewage output of your home for certain diseases, and order tests and treatments in response. You won’t have to search for that weird noise your car is making, because the agent in your vehicle will already have done it and made an appointment to get the issue fixed. 

“It’s not always going to be just doing search and giving answers,” says Pichai. “Sometimes it’s going to be actions. Sometimes you’ll be interacting within the real world. So there is a notion of universal assistance through it all.”

And the ways these things will be able to deliver answers is evolving rapidly now too. For example, today Google can not only search text, images, and even video; it can create them. Imagine overlaying that ability with search across an array of formats and devices. “Show me what a Townsend’s warbler looks like in the tree in front of me.” Or “Use my existing family photos and videos to create a movie trailer of our upcoming vacation to Puerto Rico next year, making sure we visit all the best restaurants and top landmarks.”

“We have primarily done it on the input side,” he says, referring to the ways Google can now search for an image or within a video. “But you can imagine it on the output side too.”

This is the kind of future Pichai says he is excited to bring online. Google has already shown off a bit of what that might look like with NotebookLM, a tool that lets you upload large amounts of text and have it converted into a chatty podcast. He imagines this type of functionality—the ability to take one type of input and convert it into a variety of outputs—transforming the way we interact with information. 

In a demonstration of a tool called Project Astra this summer at its developer conference, Google showed one version of this outcome, where cameras and microphones in phones and smart glasses understand the context all around you—online and off, audible and visual—and have the ability to recall and respond in a variety of ways. Astra can, for example, look at a crude drawing of a Formula One race car and not only identify it, but also explain its various parts and their uses. 

But you can imagine things going a bit further (and they will). Let’s say I want to see a video of how to fix something on my bike. The video doesn’t exist, but the information does. AI-assisted generative search could theoretically find that information somewhere online—in a user manual buried in a company’s website, for example—and create a video to show me exactly how to do what I want, just as it could explain that to me with words today.

These are the kinds of things that start to happen when you put the entire compendium of human knowledge—knowledge that’s previously been captured in silos of language and format; maps and business registrations and product SKUs; audio and video and databases of numbers and old books and images and, really, anything ever published, ever tracked, ever recorded; things happening right now, everywhere—and introduce a model into all that. A model that maybe can’t understand, precisely, but has the ability to put that information together, rearrange it, and spit it back in a variety of different hopefully helpful ways. Ways that a mere index could not.

That’s what we’re on the cusp of, and what we’re starting to see. And as Google rolls this out to a billion people, many of whom will be interacting with a conversational AI for the first time, what will that mean? What will we do differently? It’s all changing so quickly. Hang on, just hang on. 

Small language models: 10 Breakthrough Technologies 2025

WHO

Allen Institute for Artificial Intelligence, Anthropic, Google, Meta, Microsoft, OpenAI

WHEN

Now

Make no mistake: Size matters in the AI world. When OpenAI launched GPT-3 back in 2020, it was the largest language model ever built. The firm showed that supersizing this type of model was enough to send performance through the roof. That kicked off a technology boom that has been sustained by bigger models ever since. As Noam Brown, a research scientist at OpenAI, told an audience at TEDAI San Francisco in October, “The incredible progress in AI over the past five years can be summarized in one word: scale.”

But as the marginal gains for new high-end models trail off, researchers are figuring out how to do more with less. For certain tasks, smaller models that are trained on more focused data sets can now perform just as well as larger ones—if not better. That’s a boon for businesses eager to deploy AI in a handful of specific ways. You don’t need the entire internet in your model if you’re making the same kind of request again and again. 

Most big tech firms now boast fun-size versions of their flagship models for this purpose: OpenAI offers both GPT-4o and GPT-4o mini; Google DeepMind has Gemini Ultra and Gemini Nano; and Anthropic’s Claude 3 comes in three flavors: outsize Opus, midsize Sonnet, and tiny Haiku. Microsoft is pioneering a range of small language models called Phi.

A growing number of smaller companies offer small models as well. The AI startup Writer claims that its latest language model matches the performance of the largest top-tier models on many key metrics despite in some cases having just a 20th as many parameters (the values that get calculated during training and determine how a model behaves). 


Smaller models are more efficient, making them quicker to train and run. That’s good news for anyone wanting a more affordable on-ramp. And it could be good for the climate, too: Because smaller models work with a fraction of the computing oomph required by their giant cousins, they burn less energy. 

These small models also travel well: They can run right in our pockets, without needing to send requests to the cloud. Small is the next big thing.

Vera C. Rubin Observatory: 10 Breakthrough Technologies 2025

WHO

US Department of Energy’s SLAC National Accelerator Laboratory, US National Science Foundation

WHEN

6 months

The next time you glance up at the night sky, consider: The particles inside everything you can see make up only about 5% of what’s out there in the universe. Dark energy and dark matter constitute the rest, astronomers believe—but what exactly is this mysterious stuff? 

A massive new telescope erected in Chile will explore this question and other cosmic unknowns. It’s named for Vera Rubin, an American astronomer who in the 1970s and 1980s observed stars moving faster than expected in the outer reaches of dozens of spiral galaxies. Her calculations made a strong case for the existence of dark matter—mass we can’t directly observe but that appears to shape everything from the paths of stars to the structure of the universe itself. 


Soon, her namesake observatory will carry on that work in much higher definition. The facility, run by the SLAC National Accelerator Laboratory and the US National Science Foundation, will house the largest digital camera ever made for astronomy. And its first mission will be to complete what’s called the Legacy Survey of Space and Time. Astronomers will focus its giant lens on the sky over the Southern Hemisphere and snap photo after photo, passing over the same patches of sky repeatedly for a decade. 

By the end of the survey, this 3.2-gigapixel camera will have catalogued 20 billion galaxies and collected up to 60 petabytes of data—roughly three times the amount currently stored by the US Library of Congress. Compiling all these images together, with help from specialized algorithms and a supercomputer, will give astronomers a time-lapse view of the sky. Seeing how so many galaxies are dispersed and shaped will enable them to study dark matter’s gravitational effect. They also plan to create the most detailed three-dimensional map of our Milky Way galaxy ever made. 

If all goes well, the telescope will snap its first science-quality images—a special moment known as first light—in mid-2025. The public could see the first photo released from Rubin soon after. 

Long-acting HIV prevention meds: 10 Breakthrough Technologies 2025

WHO

Gilead Sciences, GSK, ViiV Healthcare

WHEN

1 to 3 years

In June 2024, results from a trial of a new medicine to prevent HIV were announced—and they were jaw-dropping. Lenacapavir, a treatment injected once every six months, protected over 5,000 girls and women in Uganda and South Africa from getting HIV. And it was 100% effective.

The drug, which is produced by Gilead, has other advantages. We’ve had effective pre-exposure prophylactic (PrEP) drugs for HIV since 2012, but these must be taken either daily or in advance of each time a person is exposed to the virus. That’s a big ask for healthy people. And because these medicines also treat infections, there’s stigma attached to taking them. For some, the drugs are expensive or hard to access. In the lenacapavir trial, researchers found that injections of the new drug were more effective than a daily PrEP pill, probably because participants didn’t manage to take the pills every day.

 In 2021, the US Food and Drug Administration approved another long-acting injectable drug that protects against HIV. That drug, cabotegravir, is manufactured by ViiV Healthcare (which is largely owned by GSK) and needs to be injected every two months. But despite huge demand, rollout has been slow.   


Scientists and activists hope that the story will be different for lenacapavir. So far, the FDA has approved the drug only for people who already have HIV that’s resistant to other treatments. But Gilead has signed licensing agreements with manufacturers to produce generic versions for HIV prevention in 120 low-income countries. 

In October, Gilead announced more trial results for lenacapavir, finding it 96% effective at preventing HIV infection in just over 3,200 cisgender gay, bisexual, and other men, as well as transgender men, transgender women, and nonbinary people who have sex with people assigned male at birth. 

The United Nations has set a goal of ending AIDS by 2030. It’s ambitious, to say the least: We still see over 1 million new HIV infections globally every year. But we now have the medicines to get us there. What we need is access. 

Generative AI search: 10 Breakthrough Technologies 2025

WHO

Apple, Google, Meta, Microsoft, OpenAI, Perplexity

WHEN

Now

Google’s introduction of AI Overviews, powered by its Gemini language model, will alter how billions of people search the internet. And generative search may be the first step toward an AI agent that handles any question you have or task you need done.

Rather than returning a list of links, AI Overviews offer concise answers to your queries. This makes it easier to get quick insights without scrolling and clicking through to multiple sources. After a rocky start with high-profile nonsense results following its US release in May 2024, Google limited its use of answers that draw on user-generated content or satire and humor sites.


The rise of generative search isn’t limited to Google. Microsoft and OpenAI both rolled out versions in 2024 as well. Meanwhile, in more places, on our computers and other gadgets, AI-assisted searches are now analyzing images, audio, and video to return custom answers to our queries. 

But Google’s global search dominance makes it the most important player, and the company has already rolled out AI Overviews to more than a billion people worldwide. The result is searches that feel more like conversations. Google and OpenAI both report that people interact differently with generative search—they ask longer questions and pose more follow-ups.    

This new application of AI has serious implications for online advertising and (gulp) media. Because these search products often summarize information from online news stories and articles in their responses, concerns abound that generative search results will leave little reason for people to click through to the original sources, depriving those websites of potential ad revenue. A number of publishers and artists have sued over the use of their content to train AI models; now, generative search will be another battleground between media and Big Tech.

Cattle burping remedies: 10 Breakthrough Technologies 2025

WHO

Blue Ocean Barns, DSM-Firmenich, Rumin8, Symbrosia

WHEN

Now

Companies are finally making real progress on one of the trickiest problems for climate change: cow burps. 

The world’s herds of cattle belch out methane as a by-product of digestion, as do sheep and goats. That powerful greenhouse gas makes up the single biggest source of livestock emissions, which together contribute 11% to 20% of the world’s total climate pollution, depending on the analysis.

It’s hard to meaningfully cut those emissions by reducing demand, simply because hamburgers, steaks, butter, and milk taste good—and a global population that’s growing larger and wealthier is only set to consume more of these foods. 


Enter the cattle burping supplement. DSM-Firmenich, a Netherlands-based conglomerate that produces fragrances, pharmaceuticals, and other products, has developed a feed supplement, Bovaer, that it says can cut methane emissions by 30% in dairy cattle and even more in beef cattle. It works by inhibiting an enzyme in the animals’ guts, which ordinarily helps convert hydrogen and carbon dioxide produced during digestion into the methane that they burp up. 

In May 2024, the Food and Drug Administration cleared the way for its use in the US. DSM says the additive is now available in more than 55 countries, including Australia, Brazil, and members of the European Union.

Meanwhile, startups like Blue Ocean Barns, Rumin8, and Symbrosia are developing, testing, or seeking approval for products derived from a type of red seaweed, which could reduce methane levels even further. Still other organizations are trying to tackle the problem in longer-lasting ways, by developing vaccines or altering the microbes in the guts of cattle.

It remains to be seen how many cattle farmers will pay for such products. But in the case of Bovaer, farmers who use it can earn greenhouse-gas credits that some companies will buy on voluntary carbon markets as a way to reduce their corporate climate footprints, according to Elanco, which is marketing the additive in the US. Meanwhile, Rumin8 says cattle taking its supplements could deliver more meat and milk.

The additives certainly don’t solve the whole problem. The cattle industry needs to take other major steps to cut its climate emissions, including halting its encroachment into carbon-absorbing forests. And to make any real dent in demand, food companies will have to develop better, cheaper, cleaner alternative products, like plant-based burgers and dairy substitutes.

But methane-cutting supplements increasingly look like a promising way to solve a big chunk of a very big problem.

Cleaner jet fuel: 10 Breakthrough Technologies 2025

WHO

Gevo, LanzaJet, Montana Renewables, Neste, World Energy

WHEN

Now

All the world’s planes consumed roughly 100 billion gallons of jet fuel as they crisscrossed the planet in 2024. Only about 0.5% of it was something other than fossil fuel. That could soon change.

Alternative jet fuels could slash aviation emissions—which have caused about 4% of global warming to date. These new fuels can be made with materials like used cooking oils, crop residue, industrial waste, and carbon dioxide captured from the air. Depending on the source, they can reduce emissions by half or nearly eliminate them. And they can generally be used in existing planes, which could enable quick climate progress.


More governments are now setting targets or passing legislation requiring airlines to begin using these alternative fuels (sometimes called sustainable aviation fuels, or SAFs). Starting this year, alternative fuels must make up at least 2% of the fuel used at airports in the European Union and UK. That mandate will ramp up in the coming decades, reaching 70% in the EU by 2050.

Today, nearly all commercially available alternative fuel is made with waste fats, oils, and greases. Montana Renewables recently got a $1.44 billion loan commitment from the US Department of Energy to expand one facility for such production. Still, these materials remain in limited supply.

Companies using other technologies and inputs are making progress scaling up. LanzaJet opened the first commercial-scale facility to make jet fuel from ethanol in early 2024, with a capacity of 9 million gallons annually. Synthetic fuels made with carbon dioxide could further expand options for airlines, though those fuels aren’t being produced at commercial scale yet.

One crucial factor for alternative jet fuels moving forward will be cost—on average, SAFs on the market today tend to be nearly three times more expensive than conventional jet fuel. Having more companies producing more fuel should help bring down the price, though newer fuels could be even more costly. 

Fast-learning robots: 10 Breakthrough Technologies 2025

WHO

Agility, Amazon, Covariant, Robust, Toyota Research Institute

WHEN

Now

Generative AI is causing a paradigm shift in how robots are trained. It’s now clear how we might finally build the sort of truly capable robots that have for decades remained the stuff of science fiction. 

Robotics researchers are no strangers to artificial intelligence—it has for years helped robots detect objects in their path, for example. But a few years ago, roboticists began marveling at the progress being made in large language models. Makers of those models could feed them massive amounts of text—books, poems, manuals—and then fine-tune them to generate text based on prompts. 


The idea of doing the same for robotics was tantalizing—but incredibly complicated. It’s one thing to use AI to create sentences on a screen, but another thing entirely to use it to coach a physical robot in how to move about and do useful things.

Now, roboticists have made major breakthroughs in that pursuit. One was figuring out how to combine different sorts of data and then make it all useful and legible to a robot. Take washing dishes as an example. You can collect data from someone washing dishes while wearing sensors. Then you can combine that with teleoperation data from a human doing the same task with robotic arms. On top of all that, you can also scrape the internet for images and videos of people doing dishes.

By merging these data sources properly into a new AI model, it’s possible to train a robot that, though not perfect, has a massive head start over those trained with more manual methods. Seeing so many ways that a single task can be done makes it easier for AI models to improvise, and to surmise what a robot’s next move should be in the real world. 
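As a loose illustration of what “merging” those sources can mean in practice, here is a hypothetical Python sketch that pools examples of a single task from the three kinds of data described above into one training set. The record fields and modality labels are invented, and real robot-learning pipelines are far more involved.

```python
# Hypothetical sketch: pool heterogeneous demonstrations of one task
# ("wash dishes") into a single training set a policy model could learn from.
# Field names and modality tags are invented for illustration.

def example(observation, action, modality):
    return {"task": "wash dishes", "obs": observation, "action": action,
            "modality": modality}

wearable_sensor_data = [example("wrist pose + grip force", "scrub plate", "wearable")]
teleoperation_data = [example("robot camera frame", "rotate wrist 30 degrees", "teleop")]
web_video_data = [example("video frame of someone rinsing a dish", None, "video")]

# Merge everything. Examples without explicit actions can still show the model
# what the task looks like, even if they can't supervise the exact motion.
training_set = wearable_sensor_data + teleoperation_data + web_video_data
print(len(training_set), "examples across modalities:",
      sorted({e["modality"] for e in training_set}))
```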

It’s a breakthrough that’s set to redefine how robots learn. Robots that work in commercial spaces like warehouses are already using such advanced training methods, and the lessons we learn from those experiments could lay the groundwork for smart robots that help out at home. 

Stem-cell therapies that work: 10 Breakthrough Technologies 2025

WHO

California Institute for Regenerative Medicine, Neurona Therapeutics, Vertex Pharmaceuticals

WHEN

5 years

A quarter-century ago, researchers isolated powerful stem cells from embryos created through in vitro fertilization. These cells, theoretically able to morph into any tissue in the human body, promised a medical revolution. Think: replacement parts for whatever ails you. 

But stem-cell science didn’t go smoothly. Not at first. Even though scientists soon learned to create these make-anything cells without embryos, coaxing them to become truly functional adult tissue proved harder than anyone guessed.

Now, though, stem cells are finally on the brink of delivering. Take the case of Justin Graves, a man with debilitating epilepsy who received a transplant of lab-made neurons, engineered to quell the electrical misfires in his brain that cause epileptic attacks.

Since the procedure, carried out in 2023 at the University of California, San Diego, Graves has reported having seizures about once a week, rather than once per day as he used to. “It’s just been an incredible, complete change,” he says. “I am pretty much a stem-cell evangelist now.”

The epilepsy trial, from a company called Neurona Therapeutics, is at an early stage—only 15 patients have been treated. But the preliminary results are remarkable.

Last June, a different stem-cell study delivered dramatic results. This time it was in type 1 diabetes, the autoimmune condition formerly called juvenile diabetes, in which a person’s body attacks the beta islet cells in the pancreas. Without working beta cells to control their blood sugar levels, people with type 1 diabetes rely on daily blood glucose monitoring and insulin injections or infusions to stay alive.


In this ongoing study, carried out by Vertex Pharmaceuticals in Boston, some patients who got transfusions of lab-made beta cells have been able to stop taking insulin. Instead, their new cells make it when it’s needed. 

No more seizures. No more insulin injections. Those are the words patients have always wanted to hear. And it means stem-cell researchers are close to achieving functional cures—when patients can get on with life because their bodies are able to self-regulate. 

The Vera C. Rubin Observatory is ready to transform our understanding of the cosmos

High atop Chile’s 2,700-meter Cerro Pachón, the air is clear and dry, leaving few clouds to block the beautiful view of the stars. It’s here that the Vera C. Rubin Observatory will soon use a car-size 3,200-megapixel digital camera—the largest ever built—to produce a new map of the entire night sky every three days.

Generating 20 terabytes of data per night, Rubin will capture fine details about the solar system, the Milky Way, and the large-scale structure of the cosmos, helping researchers to understand their history and current evolution. It will capture rapidly changing events, including stellar explosions called supernovas, the evisceration of stars by black holes, and the whiz of asteroids overhead. Findings from the observatory will help tease apart fundamental mysteries like the nature of dark matter and dark energy, two phenomena that have not been directly observed but affect how objects in the universe are bound together—and pushed apart.  

Rubin is the latest and most advanced entrant into the illustrious lineage of all-sky surveyors—instruments that capture, or survey, the entire sky, over and over again. Its first scientific images are expected later this year. In a single exposure, Rubin will capture 100,000 galaxies, the majority invisible to other instruments. A quarter-century in the making, the observatory is poised to expand our understanding of just about every corner of the universe.  


“I can’t think of an astronomer who is not excited about [Rubin],” says Christian Aganze, a galactic archeologist at Stanford University in California.

The observatory was first proposed in 2001. Then called the Large Synoptic Survey Telescope (LSST), it grew out of an earlier concept for an instrument that would study dark matter, the enigmatic substance making up 85% of the matter in the universe. LSST was later reenvisioned to focus on a broader set of scientific questions, cataloguing the night sky over the course of a decade. Five years ago, it was renamed in honor of the late American astronomer Vera Rubin, who uncovered some of the best evidence in favor of dark matter’s existence in the 1970s and ’80s. 

During operations, Rubin will point its sharp eyes at the heavens and take a 30-second exposure of an area larger than 40 full moons. It will then swivel to a new patch and snap another photo, rounding back to the same swath of sky after about three nights. In this way, it can provide a constantly updated view of the universe, essentially creating “this huge video of the southern sky for 10 years,” explains Anais Möller, an astrophysicist at the Swinburne University of Technology in Melbourne, Australia.

A view of the back of the Rubin Observatory’s massive LSST camera, which boasts six filters designed to capture light from different parts of the electromagnetic spectrum.
SPENCER LOWELL

Diagram of the telescope’s light path: 1) Secondary mirror (M2); 2) Lenses; 3) Primary mirror (M1); 4) Tertiary mirror (M3)
GREG STEWART/SLAC NATIONAL ACCELERATOR LABORATORY/NSF/DOE/RUBIN OBSERVATORY/AURA

To accomplish its work, Rubin relies on an innovative three-mirror design unlike that of any other telescope. Its primary mirror is actually made up of two separate surfaces with different curvatures. The outer section, 8.4 meters wide, captures light from the universe and reflects it onto a 3.4-meter-wide secondary mirror located above it. The secondary bounces the light back down onto the inner part of the primary, which stretches five meters across and is considered a tertiary mirror, and from there it is reflected into a digital camera. The compact configuration allows the enormous instrument to be powerful but nimble as it shifts around to take roughly 1,000 photos per night. 

“It has five seconds to go to the next position and be ready,” says Sandrine Thomas, the deputy director for the observatory’s construction and project scientist for the telescope. “Meaning that it doesn’t move. It doesn’t vibrate. It’s just rock solid, ready to take the next image.” 

Technicians reinstall a cover on the secondary telescope mirror, to protect it before installation.
The observatory’s three mirrors and the housing of the LSST camera are mounted on a structure called the Telescope Mount Assembly. The assembly has been carefully engineered for stability and precision, allowing the observatory to track celestial objects and carry out its large-scale survey of the sky.
The primary and tertiary telescope mirrors are positioned below a chamber at the Rubin Observatory that is used to apply reflective coatings.
A view of the Telescope Mount Assembly from above, through the observatory’s protective dome shutter.

Rubin’s 3,000-kilogram camera is the most sensitive ever created for an astronomical project. By stacking together images of a piece of sky taken over multiple nights, the telescope will be able to spot fainter and fainter objects, peering deeper into the cosmos the longer it operates. 
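A toy illustration of why stacking helps, under the standard assumption that independent noise averages down roughly with the square root of the number of exposures (the numbers below are made up):

```python
# Toy illustration of image co-addition: averaging many noisy exposures of the
# same patch of sky reveals a source too faint to see in any single frame,
# because random noise partially cancels while the signal does not.
import random

random.seed(0)
true_brightness = 0.2   # faint source, well below the single-frame noise level
noise_sigma = 1.0

def exposure():
    return true_brightness + random.gauss(0, noise_sigma)

single = exposure()
stacked = sum(exposure() for _ in range(100)) / 100  # noise shrinks ~10x (sqrt of 100)

print(f"single frame: {single:+.2f}   stack of 100: {stacked:+.2f}   truth: {true_brightness:+.2f}")
```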

Each exposure creates a flood of data, which has to be piped via fiber-optic cables to processing centers around the world. These use machine learning to filter the information and generate alerts for interested groups, says Möller, who helps run what are known as community brokers, groups that design software to ingest the nightly terabytes of data and search for interesting phenomena. A small change in the sky—of which Rubin is expected to see around 10 million per night—could point to a supernova explosion, a pair of merging stars, or a massive object passing in front of another. Different teams will want to know which is which so they can aim other telescopes at particular regions for follow-up studies. 
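To give a flavor of that nightly triage, here is a minimal, hypothetical sketch of an alert-routing step. The field names, class labels, and confidence threshold are invented for illustration and are not drawn from any real broker’s software.

```python
# Hypothetical sketch of a broker-style triage step: group each night's alerts
# by the class a classifier assigned, so follow-up teams see only the
# phenomena they care about. All fields and the threshold are invented.

def route_alerts(alerts):
    routed = {"supernova": [], "asteroid": [], "other": []}
    for alert in alerts:
        label = alert.get("predicted_class", "other")
        if label not in routed:
            label = "other"
        if alert.get("score", 0.0) >= 0.9:  # forward only confident detections
            routed[label].append(alert)
    return routed

nightly_alerts = [
    {"id": 1, "predicted_class": "supernova", "score": 0.97},
    {"id": 2, "predicted_class": "asteroid", "score": 0.55},  # too uncertain; set aside
]
print(route_alerts(nightly_alerts))
```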

The focal plane of the LSST has a surface area large enough to capture a portion of the sky about the size of 40 full Moons. Its resolution is so high that you could spot a golf ball from 24 km (15 miles) away.

Matter in the universe can warp and magnify the light from more distant objects. The Rubin Observatory will use this phenomenon, called gravitational lensing, to study dark matter—an as-yet-unidentified substance that makes up most of the universe’s matter.
ESA, NASA, K. SHARON/TEL AVIV UNIVERSITY AND E. OFEK/CALTECH

With its capacity to detect faint objects, Rubin is expected to increase the number of known asteroids and comets by a factor of 10 to 100. Many of them will be objects more than 140 meters in diameter with orbits passing near Earth’s, meaning they could threaten our world. And it will catalogue 40,000 new small icy bodies in the Kuiper Belt, a largely unexplored region beyond Neptune where many comets are born, helping scientists better understand the structure and history of our solar system. 


Beyond our solar system, Rubin will see telltale flickers that signal exoplanets passing in front of their parent stars, causing them to briefly dim. It should also find thousands of new brown dwarfs, faint objects between planets and stars in size, whose positions in the Milky Way can provide insight into how the environments in which stars are born affect the size and type of objects that can form there. It will discover never-before-seen dim dwarf galaxies orbiting our own and look closely at stellar streams, remnant trails of stars left behind when the Milky Way tore other, similar galaxies apart.

The facility will also look far outside the Milky Way, cataloguing around 20 billion previously unknown galaxies and mapping their placement in long filamentary structures known as the cosmic web. The gravitational pull of dark matter directly affects the overall shape of this web, and by examining its structure, cosmologists will glean evidence for different theories of what dark matter is. Rubin is expected to observe millions of supernovas and determine their distance from us, a way of measuring how fast the universe is expanding. Some researchers suspect that dark energy—which is causing the cosmos to expand at an accelerated rate—may have been stronger in the past. Data from more distant, and therefore older, supernovas could help bolster or disprove such ideas and potentially narrow down the identity of dark energy too.  

An overhead view of the observatory.
SPENCER LOWELL

In just about every way, Rubin will be a monumental project, which explains the near-universal eagerness among those in the field to see it finally begin operations. 

“We have never had such a big telescope imaging so wide and so deep,” says Möller. “That’s an incredible opportunity to really pinpoint things that are changing in the sky and understand their physics.”  

Adam Mann is a freelance space and physics journalist who lives in Oakland, California.