Is Google playing catchup on search with OpenAI?

This story originally appeared in The Debrief with Mat Honan, a weekly newsletter about the biggest stories in tech from our editor in chief. Sign up here to get the next one in your inbox.

I’ve been mulling over something that Will Heaven, our senior editor for AI, pointed out not too long ago: that all the big players in AI seem to be moving in the same directions and converging on the same things. Agents. Deep research. Lightweight versions of models. Etc. 

Some of this makes sense in that they’re seeing similar things and trying to solve similar problems. But when I talked to Will about this, he said, “it almost feels like a lack of imagination, right?” Yeah. It does.

What got me thinking about this, again, was a pair of announcements from Google over the past couple of weeks, both related to the ways search is converging with AI language models, something I’ve spent a lot of time reporting on over the past year. Google took direct aim at this intersection by adding new AI features from Gemini to search, and also by adding search features to Gemini. Using both, what struck me more than how well they work is that they are really just about catching up with OpenAI’s ChatGPT. And their belated appearance in March 2025 doesn’t seem like a great sign for Google.

Take AI Mode, which it announced March 5. It’s cool. It works well. But it’s pretty much a follow-along of what OpenAI was already doing. (Also, don’t be confused by the name. Google already had something called AI Overviews in search, but AI Mode is different and deeper.) As the company explained in a blog post, “This new Search mode expands what AI Overviews can do with more advanced reasoning, thinking and multimodal capabilities so you can get help with even your toughest questions.”

Rather than a brief overview with links out, the AI will dig in and offer more robust answers. You can ask follow-up questions too, something AI Overviews doesn’t support. It feels like such a natural evolution that it’s curious it isn’t already widely available. For now, it’s limited to people with paid accounts, and even then only via the experimental sandbox of Search Labs. But more to the point: why wasn’t it available, say, last summer?

The second change is that Google added search history to its Gemini chatbot, and promises even more personalization is on the way. On this one, Google says “personalization allows Gemini to connect with your Google apps and services, starting with Search, to provide responses that are uniquely insightful and directly address your needs.”

Much of what these new features are doing, especially AI Mode’s ability to take follow-up questions and go deep, feels like hitting feature parity with what ChatGPT has been doing for months. It has also drawn comparisons to Perplexity, a startup building its own generative AI search engine.

What neither feature feels like is something fresh and new. Neither feels innovative. ChatGPT has long been building user histories and using that information to shape its results. While Gemini could already remember things about you, it’s a little shocking to me that Google has taken this long to bring in signals from its other products. Obviously there are privacy concerns to field, but this is an opt-in product we’re talking about.

The other thing is that, at least as I’ve found so far, ChatGPT is just better at this stuff. Here’s a small example. I tried asking both: “What do you know about me?” ChatGPT replied with a really insightful, even thoughtful, profile based on my interactions with it. These aren’t just the things I’ve explicitly told it to remember about me, either. Much of it comes from the context of various prompts I’ve fed it. It’s figured out what kind of music I like. It knows little details about my taste in films. (“You don’t particularly enjoy slasher films in general.”) Some of it is just sort of oddly delightful. For example: “You built a small shed for trash cans with a hinged wooden roof and needed a solution to hold it open.”

Google, despite having literal decades of my email, search, and browsing history, a copy of every digital photo I’ve ever taken, and more darkly terrifying insight into the depths of who I really am than I probably have myself, mostly spat back the kind of profile an advertiser would want, versus a person hoping for useful tailored results. (“You enjoy comedy, music, podcasts, and are interested in both current and classic media.”)

I enjoy music, you say? Remarkable! 

I’m also reminded of something an OpenAI executive said to me late last year, as the company was preparing to roll out search: it has more freedom to innovate precisely because it doesn’t have the massive legacy business that Google does. Yes, it’s burning money while Google mints it. But OpenAI has the luxury of being able to experiment (at least until the capital runs out) without worrying about killing a cash cow, the way Google does with traditional search.

Of course, it’s clear that Google and its parent company Alphabet can innovate in many areas—see Google DeepMind’s Gemini Robotics announcement this week, for example. Or ride in a Waymo! But can it do so around its core products and business? It’s not the only big legacy tech company with this problem. Microsoft’s AI strategy to date has largely been reliant on its partnership with OpenAI. And Apple, meanwhile, seems completely lost in the wilderness, as a scathing takedown from longtime Apple pundit John Gruber lays bare.

Google has billions of users and piles of cash. It can leverage its existing base in ways OpenAI or Anthropic (which Google also owns a good chunk of) or Perplexity just aren’t capable of. But I’m also pretty convinced that unless it can lead this market, rather than follow it, there are some painful days ahead. But hey, Astra is coming. Let’s see what happens.

Behind Microsoft CEO Satya Nadella’s push to get AI tools in developers’ hands

In San Francisco last week, everyone’s favorite surprise visitor was Microsoft CEO Satya Nadella. 

At OpenAI’s DevDay—the company’s first-ever event for developers building on its platform—Nadella bounded on stage to join OpenAI CEO Sam Altman, blowing the hair back on an already electrified audience. “You guys have built something magic,” he gushed. 

Two days later on another stage, in another venue, at another developers’ conference, Nadella made his second unannounced appearance of the week—this time at GitHub Universe. There Thomas Dohmke, GitHub’s CEO, was showing off a new version of the company’s AI programming tool, Copilot, that can generate computer code from natural language. Nadella was effusive: “I can code again!” he exclaimed. 

Today, Nadella will be onstage speaking to developers at Microsoft Ignite, where the company is announcing even more AI-based developer tools, including an Azure AI Studio that will let devs choose between model catalogs from not only Microsoft, but also the likes of Meta, OpenAI, and Hugging Face, as well as new tools for customizing Copilot for Microsoft 365. 

If it seems like Nadella is obsessed with developers, you’re not wrong. He’s making the rounds to tout all the ways they can use a new generation of AI-powered tools, like GitHub Copilot (Microsoft acquired GitHub in 2018) or the new suite of developer tools from OpenAI, a company in which Microsoft has reportedly invested some $13 billion.

Last week, Nadella took a 20-minute break from all of his onstage appearances to sit down with MIT Technology Review to talk about (you guessed it) developers. He repeatedly emphasized Microsoft’s longstanding focus on developers. But he also had a message: The way we create software is fundamentally changing. 

Nadella believes a platform shift is underway, one that will prove just as significant as the shifts from mainframe to desktop or desktop to mobile. This time, the transition is to natural language AI tools, some of which he argues will lower the barrier to entry for software development, make existing developers more productive, and ultimately lead to a new era of creativity. 

We present Nadella in his own words, below. His remarks have been edited and condensed somewhat for readability.

ON THE RELATIONSHIP WITH OPENAI

One criticism of OpenAI is that its very business is only possible via Microsoft, which has given the startup billions of dollars and access to the resources it needs to power its computing-intensive language models. Yet Microsoft is also highly dependent on OpenAI’s technology to power services like GitHub Copilot, Bing, and Office 365. Altman even joked about the partnership onstage. We asked Nadella about this relationship.

I’ve always felt that Microsoft is a platform-and-partner-first company, and this is not new to us. And so therefore, we both are effectively codependent, right? They depend on us to build the best systems, we depend on them to build the best models, and we go to market together. 

ON HIS MISSION TO GET IN FRONT OF DEVELOPERS

Nadella says this platform shift is different enough from previous ones that he feels the company needs to provide developers not only with tools, but also with a clear message about what it’s thinking and how devs can come along. 

Whenever you have a platform shift, the key thing is to make sure the platform is ubiquitously available for developers to build all kinds of new things. So to us, the most important task is to make the developer tools, the developer platforms, broadly available. 

The second thing is for us to also show the light, right? Whether it’s OpenAI building ChatGPT and then innovating on top of it, or us building Copilot and innovating on it. That will give developers an opportunity to distribute their applications. So the most important thing in any platform creation is to get the platform ubiquitously available, and then help developers reach [their] audience. 

Those are the two goals that we have across all of these [conferences].

ON WHAT IS DIFFERENT ABOUT THIS SHIFT AND PRODUCTIVITY

Productivity gains in the United States have been sluggish for the past 15 or more years. The last huge platform shift—the rise of mobile development—did little to achieve widespread prosperity. Nadella says this time will be different, largely because the shift to AI will fuel a creative revolution by making it easy for anyone to generate new work, including code. 

On the other hand, coding today is a highly skilled, well-paid job, and there’s some concern that AI could effectively automate it. Nadella argues that skilled programmers will remain in demand, but that their jobs will change and even more jobs will become available. He has said he envisions 1 billion developers creating on Microsoft’s platforms, many of them with little to no previous experience with coding.

Anytime you have something as disruptive as this, you have to think about the displacement it causes. And that means it’s all about upskilling and reskilling, and in an interesting way, it’s more akin to what happened when word processors and spreadsheets started showing up. Obviously, if you were a typist, it really drastically changed. But at the same time, it enabled a billion people to be able to type into word processors and create and share documents.

I don’t think professional developers are going to be any less valuable than they are today. It’s just that we’re going to have many, many gradations of developers. Each time you’re prompting Bing Chat or ChatGPT, you’re essentially programming. The conversation itself is steering a model.

I think there will be many, many new jobs, there will be many, many new types of knowledge work, or frontline work, where the drudgery is removed.

I think the mobile era was fantastic. It made consumption of services ubiquitous. It didn’t translate into ubiquitous creation of services.

The last time there was a broad spread of productivity in the United States and beyond because of information technology was the [advent of the] PC. In fact, even the critics of information technology and productivity, like Robert Gordon of Northwestern, acknowledged that the PC, when it first showed up at work, did actually translate to broad productivity stats changes.

So that’s where I think this is, where these tools, like Copilot, being used by a [beginner] software engineer in Detroit, in order to be able to write [code].… I think we’ll have a real change in the productivity of the auto industry. Same thing in retail, same thing in frontline work and knowledge work.

The barrier to entry is very low. Because it’s natural language, domain experts can build apps or workflows. That, I think, is the most exciting thing about this. This is not just a consumption-led thing. This is not about elite creation. This is about democratized creation. I’m very, very hopeful that we’ll start seeing the productivity gains much more broadly.

ON PROTECTING DEVELOPERS

Numerous intellectual property cases and class action lawsuits are before the US courts over issues of fair use. At least one singles out GitHub Copilot specifically, claiming Microsoft and OpenAI’s generative tools, which are trained on open source code, amount to software piracy. There’s a fear that people who use these tools could be subject to intellectual property claims themselves. Microsoft is trying to address these issues with a broad indemnification policy. OpenAI also announced its own indemnification policy, Copyright Shield, at its DevDay conference. 

Fundamentally, these large models crawl and get content and then train on that content, right? If anybody doesn’t want their content to be crawled, we have great granular controls in our crawlers that allow anybody to stop them from crawling. In fact, we have controls where you can have it crawl just for search, but not for large language model training. That’s available today. So anybody who really wants to ensure that their content is not being taken for retraining can do so today.

The second thing, of course, is I think the courts and the legislative process in some combination will have to decide what is fair use and what is not fair use.

We have taken a lot of care in making sure that we are only training models on data that we’re allowed to use and that we believe we have a legal standing on.

If it comes to it, we’ll litigate it in the courts. We’ll take that burden on so the users of our products don’t have to worry about it. That’s as simple as that, which is to take the liability and transfer it from our users to us. And of course, we are going to be very, very mindful of making sure we’re on the right side of the law there.