Google: AI Makes Human Experience More Important For Content via @sejournal, @martinibuster

A recent Search Off The Record episode featuring Martin Splitt and Nikola Todorovic, Director of Software Engineering at Google Search, explored how AI is changing search and how a new wave of users is crafting longer, conversational queries. They pointed out that while AI has democratized access to information, it has made experience-based insights more valuable, implying that this is a key to standing out in AI search.

AI Makes Human Experience And Opinions More Important

While AI is making information more accessible, it’s making basic information less important because it’s something AI can reproduce. The specifications of a Texas Instruments OPA1656 op-amp, for example, are published by Texas Instruments in data sheets and are available from electronics distributors like DigiKey and Mouser.

What AI can’t provide is opinion and experience with those electronic parts, like the sonic difference between using an OPA1656 and an alternative that costs six times more. That is something an AI can’t supply, and as a consequence, human experience and opinion have become what is variously referred to as the “value” that makes one site useful and another not.

Martin Splitt made this case in talking about how AI can bridge human experience and the basic type of information that’s found “on the box.”

Splitt explained:

“Some people have misunderstood whatever it was that they’re trying to accomplish or to provide to be these cumbersome bits and only these cumbersome bits, right?

But eventually that turned into…, how do I put this nicely, putting words around spec sheets from manufacturers. And that wasn’t really the value that I was looking for. I’m not interested in knowing how many gigahertz a certain new processor has because I can read that basically on the box. It says it on the box. You don’t have to tell me that this is now a 3 gigahertz processor. It says it on the box.

And I had a key moment when I was buying a joystick back in the days for a computer game. And I didn’t know what force feedback was. And that’s effectively you have a different resistance. And it might move and vibrate the device if there’s any shaking happening in the surroundings. And I didn’t know what that was. And it said on the box, it has force feedback.

And so I went to someone who worked at the shop, and I anticipated them to be like an expert on the topic. So I’m like, so this says force feedback. What does that mean? And he literally said to me, that means that this joystick has force feedback.

And this is funny, but I’m seeing this a lot in articles and on websites that they’re effectively not giving me any context. They’re just explaining what I can kind of glimpse and gather from the information that is right in front of me. And I think AI makes that easier. You don’t have to spend as much time to rattle off the spec sheets into a more readable human conversational form. But chat bots do that.”

Splitt followed up by saying that it’s no longer necessary for websites to focus on providing commonly available information. That’s still important, but there is a higher level of information, based on human experience, that websites can provide, even if it’s something as small as explaining what “force feedback” on a gaming joystick is.

Paradoxically, while information is now more widely available than at any point in human history, that abundance has made human judgment and opinion more valuable, because those are things an AI system cannot supply. And while there are many ways to approach content, it’s the subjective information that can be said to be the value add.

Splitt explained:

“So I think there is still enough space online for different outlets and people and opinions and experiences, but I think we have to increase the level of our content to be useful and interesting for humans, from humans to humans. And I don’t think AI is going to take that away. I think AI is going to bridge that.”

Martin Splitt insists that basic content is no substitute for expertise. He suggests that judgment and insights earned through experience are superior to surface-level content that can be found anywhere. Human experience is a key ingredient of high-value content.

Content that only repeats widely available facts now has a weaker claim on attention because AI can make that same baseline information easier to reach. The stronger opportunity is content built from what a person notices, tests, prefers, questions, compares, and learns through use. That is where experience becomes editorial value, not as a decorative personal angle but as the part of the page that changes what the reader understands.

  • Facts explain commonly known information.
  • Experience explains what it means to a human.
  • What it means turns information into guidance.
  • Guidance is the value-add that makes a web page worth visiting.

What this means for SEO is that these considerations can be used for evaluating content and identifying why it’s not being indexed or why it’s underperforming in search. For beginners, a step-by-step approach feels useful, but in real-life search optimization, a checklist approach only gets you to a shallow level of content, not to the higher standard necessary to stand out.

Listen to Search Off The Record here:

Featured Image by Shutterstock/ra2 studio

Google On Keyword Fragmentation And User Needs In AI Search via @sejournal, @martinibuster

Google’s Liz Reid explained on the Bloomberg Odd Lots podcast how AI Mode and AI Overviews are enabling detailed, need-based query patterns that create new challenges for Google. This points to a consequential change in search behavior that directly impacts how to approach SEO.

Keyword Fragmentation In AI Search

Liz Reid explained that users have always wanted to express longer natural language queries but were forced to narrow them down to keywords like “best restaurants in New York” even though what they really wanted may have been more specific like a restaurant with vegan options and an opening for a party of five.

For as long as I’ve been in SEO, and I’m nearly 30 years in the business, keyword research has been the foundation of digital marketing. You pick the keywords you want to rank for, then create content optimized for those keywords. The problem with optimizing for a short keyword phrase is that there are hidden meanings within that keyword, and that’s always been the case.

The way Google addressed the issue of latent meanings within keywords was to use signals like clicks to better understand what users meant when they typed ambiguous keyword phrases like “restaurants in New York.” Some SEOs believe that clicks were used for ranking websites, but another use for clicks is understanding what people mean when they type ambiguous phrases. What Google has done for quite a while now is rank the most popular meaning of a keyword phrase first; no matter how many links a page received, if its content aligned with a less popular meaning, the page wouldn’t rank.

Liz Reid said that people who use AI-based search are using longer queries that articulate what the problem or information need is, making it easier for Google to fetch the information they’re looking for. That change gets to the heart of the problem with organic search that AI search is solving, and the implications for SEO are profound.

Liz Reid begins:

“We have seen with AI overviews meaningfully longer queries. We see more natural language queries, but it’s also not even something as basic as that.

It can also be like you were searching for restaurants. We used to laugh about the like before I worked on search, I worked on maps and local, some of the intersection with search, and people would just be like, “restaurants New York.”

And you’re like, what do you want me to do with that query? Like, okay, the best restaurants in New York are going to take three months and 99.9% of the population can’t afford to go to them.

Okay, but like, are you picking 10 random ones, etc.?

But like, part of why people would do that is they had a much more complex– I want a restaurant in this location for five people. It can’t be too pricey. I have a vegan member. I also have kids. That was the question they had in their mind.

And in the old world of keyword-ese, that information would be spread throughout the web. And so you wouldn’t feel confident you could just put in the question.

And now with AI Overviews and AI Mode, you can start to actually, and you see people do this, they tell you the real problem, right?

They don’t take their need and translate it to what the computer understands. They try to give the computer their actual need and expect us to do the translation.”

The big ideas to unpack there are:

  • A typical complex question asked in AI Search may not be solved by one web page.
  • Complex questions may be one-off and rarely, if ever, repeated, which in many cases may lower the value of optimizing for those phrases, because the time used for crafting them could be more profitably spent doing something else.
  • Given that a site will likely share the AI Overviews (AIO) space with other sites, it increases the need to optimize other factors, such as brand icons that stand out in a positive way, relevant images, and even videos, in order to claim as much AIO space as possible.
  • And yet, perhaps the bigger takeaway is that it’s not all longtail, because Google breaks the longtail phrase down into smaller, highly specific keyword phrases that each reflect a portion of the information need (a process called query fan-out) and fires those off to classic search. Google’s AI then picks from among the top three results for each query and uses them to synthesize an answer.

So it’s not really that SEOs should optimize for long-tail queries, because query fan-out uses classic search, bringing it all back to the specific queries that web pages are relevant to and optimized for.
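The fan-out process described above can be sketched in a few lines. This is a toy illustration only: the function names are hypothetical, and Google’s actual pipeline is not public. The idea is simply that a complex query is decomposed into specific sub-queries, each sub-query runs against a classic keyword index, and the top few results per sub-query feed the synthesized answer.

```python
# Toy sketch of query fan-out (hypothetical names; not Google's real pipeline).
def fan_out(complex_query, decompose, search, top_n=3):
    """Map each fanned-out sub-query to its top-N classic-search results."""
    return {sub: search(sub)[:top_n] for sub in decompose(complex_query)}

# Stand-in decompose/search functions for demonstration:
def demo_decompose(query):
    # A real decomposer would derive these from the query's constraints.
    return ["vegan restaurants new york", "restaurants new york party of five"]

def demo_search(sub_query):
    # A real search would return ranked documents; here, labeled stubs.
    return [f"result-{i}-for-{sub_query}" for i in range(1, 6)]

results = fan_out(
    "vegan-friendly NYC restaurant for a party of five",
    demo_decompose,
    demo_search,
)
```

The takeaway for SEO is visible in the sketch: the pages that get selected are the ones ranking for the specific sub-queries, not for the long natural-language question itself.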

Addressing Real Needs

Reid didn’t go into detail about this point but it’s interesting anyway because she said that the process of breaking a complex natural language query into smaller queries becomes a quality issue. One of the problems with AI Search is that people aren’t searching with the same keyword phrases which means that Google can’t cache similar queries in the same way it can with organic search.

She explained:

“I think it means you have to do, it’s a harder job on quality, right?

You have to take this question, there’s many parts, and you have to figure out how you break it apart. And you have to do work to think about things like latency, because you can’t just, you know, if everyone uses the same keyword and it’s not personalized, then you can cache it all. If all of a sudden the queries get much more diverse, you know, it has consequences there.

But I think we just see that it’s very empowering people, right? That it takes some of the work out of searching.

A few years ago, they said, What more can you do with Google search? But if you actually ask them, Okay, when was the last time you spent 20 minutes searching when you would have preferred to spend 2? It’s actually not that hard for me. … And so it’s been kind of exciting to just… make people’s lives easier by helping them address their real need.”

On the surface, the idea of addressing users’ real needs sounds like one of those unhelpful “be awesome” or “content is king” type slogans. But it’s actually a way that every SEO should be auditing web pages. Rather than limiting the audit to keywords, headings, and technical issues, look at how the page fills some kind of need.

Someone asked me today to look at their website, which was having trouble getting indexed. They suspected it might be a technical issue. My response is that, yes, everyone hopes it’s a technical issue, but in many cases, and especially for the one I was looking at, the problem becomes apparent when viewed through the lens of asking, “What need is this page filling?” as well as, “How is this page not just different from some other page, but different and better?”

Watch the Liz Reid interview here:

Google’s Liz Reid on Who Will Own Search in a World of AI

Featured Image by Shutterstock/TierneyMJ

Google On AI Search & Why Browsy Queries Favor Full SERPs via @sejournal, @martinibuster

Google’s Liz Reid recently discussed what goes on behind the scenes of AI Search, particularly the fragmentation of complex queries into smaller ones and a relatively new concept, Browsy Queries. Her comments offer insight into what SEOs should be focusing on right now in order to perform better on AI search surfaces.

Search Behavior Is Varied, Not Monolithic

Host Joe Weisenthal asked Liz Reid about user behavior patterns in search, how users choose between classic search and AI search, and what differences in queries result from choosing one platform over the other.

Liz Reid answered by first defining what she is talking about, linking classic search and AI Mode together as Search, then positioning Gemini as something else that is fundamentally different.

She also stated that there is a massive number of users whose search behaviors vary across all search surfaces, in essence saying that there isn’t a monolithic user behavior pattern in which people do the exact same searches, which is the pattern the interviewer was looking for in his question.

Liz Reid answered:

“There’s sort of your main search page. There’s AI Mode. That’s part of search.

And then there’s the Gemini app.

And I would say there’s a lot of users, so their behavior varies across all of them.”

Search And AI Usage Patterns Are Complex

The SEO and publishing community often thinks about Search as Google but Liz Reid says that user behavior patterns point to a more complex search ecosystem where users are relying on multiple platforms.

She continued her answer:

“But there are some patterns. There’s plenty of people who co-use across them. There’s plenty of people that are actually using several AI products right now, just in general, not even just within Google.

Across Gemini and Search, the more informational ones… Like, if it’s an informational query, then the probability that they’re using Search or AI Mode is going to be higher.

If it’s a creative query, it’s like more of a productivity question like, please rewrite this to make it sound more formal, right? Those type questions are going to be more Gemini-oriented.

Between AI Mode and Search, the main search page, some people use AI Mode mostly via AI overviews. They start in AI overviews and they transition.

For those who go direct to AI Mode, they tend to do that for queries that they consider sort of more complex, longer questions, questions where they expect that they’re going to do more follow-ups, versus if you’re doing a very browsy query, you might choose to prefer all of the SERP.”

Browsy Queries And Browse Search Intent

When we think about search, it may be useful to consider that people not only search across platforms, but they do it for different reasons.

Takeaways About How People Use AI

  • Co-Users
    People use multiple platforms simultaneously (co-use)
  • Informational Queries
    These tend to happen on Classic Search and AI Mode
  • Creative Queries
    These tend to happen on Gemini
  • AI Mode Direct
    Queries that originate on AI Mode, where people navigate to AI Mode, tend to be complex, what was traditionally called longtail.
  • Browsy Queries
    This is a relatively new phrase that Googlers apparently use.

What Are Browsy Queries?

The phrase “browsy queries” appears to be something Googlers use internally, and it may be more familiar to people who do pay-per-click advertising. There aren’t many public instances of the phrase, but here’s how Google uses it.

A software engineer formerly of DeepMind and Google describes in her LinkedIn Profile having created a machine learning model that identifies “browse intention” queries on Google Search, an invention that improved click-through rates by 5%.

She wrote:

“Built a machine learning model to identify ‘browse intention’ query on Google Search, which presents engaging content on search result pages for browsy queries (e.g. “best places to visit in Orlando”). Improved global search result click-through rate by 5%”

The phrase “browsy queries” is also used in a Google job description for a commerce software engineer, placing the phrase in the context of shopping queries.

“Commerce Retrieval researches and develops high-precision algorithms to reduce the search space for product queries by 8 orders of magnitude under tight latency and compute constraints. Our solutions are tailored to the unique complexities of the Shopping domain including browsy queries, a hierarchical schema, and short multimodal documents.”

It’s also used in the context of video ads in a Google support page for video ads:

“These new shoppable formats will be shown to potential customers in lower intent, more “browsy” Search placements earlier in their shopping journey.”

What Browsy Queries Mean And How To Optimize For Them

What’s consistent across all three uses is that “browsy queries” are defined by a discovery-level intent stage.

In each example, Google is identifying what keeps the user exploring:

  • The DeepMind example ties browsy queries to engaging content that a user wishes to browse through, not direct answers.
  • The commerce job role positions browsy queries as a quality of commerce search.
  • The ads example places browsy queries earlier in the shopping journey at about the discovery phase.

The useful takeaway is that Google treats these queries as exploration problems. What makes browsy queries complex is that they have under-specified user intent and are the result of consumers who may be looking for inspiration.

For an SEO or an online merchant, it means that a user has intent but hasn’t narrowed down what they want. That’s where contexts like “Stylish Outfits For Summer” come in handy. Broad keyword phrases are probably useful here. I like a pyramid structure where the deeper a user gets into a page, the more specific it may become.
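To make the idea concrete, here is a deliberately naive heuristic for flagging browse-intent phrasing. This is an illustration of the concept only: the marker list and rules are my own assumptions, and Google’s actual classifier is a machine learning model whose features are not public.

```python
# Toy heuristic for flagging "browsy" (discovery-intent) queries.
# The markers and rules below are illustrative assumptions, not Google's model.
BROWSY_MARKERS = ("best", "ideas", "top", "inspiration", "places to")

def looks_browsy(query: str) -> bool:
    """Flag queries that read as open-ended exploration rather than
    a specific, fully constrained information need."""
    q = query.lower()
    has_marker = any(marker in q for marker in BROWSY_MARKERS)
    # Specific constraints (numbers, "near me") suggest narrowed intent.
    is_constrained = any(ch.isdigit() for ch in q) or "near me" in q
    return has_marker and not is_constrained
```

Under this sketch, “best places to visit in Orlando” reads as browsy, while a query containing a part number or location constraint does not, which matches the discovery-stage framing in Google’s three uses of the phrase.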

Featured Image by Shutterstock/Summit Art Creations

Google Advises Using AI In Best Possible Way For AI Search via @sejournal, @martinibuster

Google’s Martin Splitt and Nikola Todorovic, Director of Software Engineering at Google Search, recently discussed how AI is changing Google and SEO. Todorovic encouraged SEOs and businesses to take advantage of AI to analyze data, research competition, and improve their ability to provide value.

AI And The Web Ecosystem

Google’s Martin Splitt asked a question that many SEOs and online businesses have on their minds related to what they should do for AI features like AI Mode and AI Overviews. Splitt and Todorovic both said that there are opportunities, especially with the use of AI within a narrow scope.

Martin asked:

“But one thing that we keep hearing from the ecosystem pretty much at every event we do and it’s everywhere is, how do we make sure that with AI features being part of Search now, that the ecosystem continues to thrive.

And I think that’s an interesting challenge, but also there are lots of opportunities thanks to AI features these days. And I know that we at Google try our best to go on this journey together with the ecosystem.

But how do you see it from your perspective? What is it that we do to make sure the ecosystem thrives with these new features?”

The question was specifically about what Google can do to ensure that the web ecosystem thrives, but the answer wasn’t about what Google can do; it was about what SEOs and businesses can do.

Todorovic acknowledged that this is a concern he’s also aware of, but he said there’s no “magic wand,” meaning no simple solution or roadmap, and he suggested that focusing on delivering value is a key way to adapt to the new AI search features.

He answered:

“This is clearly one of the key questions and you see them a lot on the social media as well. And I don’t think there is like a magic wand that can clearly give the guidance.
Okay, what do I do now? Like what would the SEO experts do now in the new system?

My kind of guiding principle or my like the way I see here is that the site owners, they do need to continue making sure that their products, that their websites, that their platforms are providing value to the user. Because ultimately, if you provide a particular value, then the users will continue coming to you and they will continue coming to you through Google as well.”

On the surface, this sounds like “content is king” or “be awesome” type advice, but I think that reading would miss a deeper point. For one, there is only so much that a Google engineer can say directly. But there is a lot they can say indirectly, and I think that’s what Todorovic is doing here.

For example, if Google’s systems reward sites that users are actively looking for, then “providing value” is the kind of thing that’s going to ring bells in that kind of algorithm, where external signals generated by users play a role in what sites Google is ranking. I think it would be a mistake to conflate the advice to “provide value” as a platitude. Knowing what we know about Google’s external signals, the advice to provide value makes a lot of sense.

Todorovic continued his answer:

“So… for example, you’re selling something, you have like a product or like a platform, you have like some subscriptions, et cetera. …if you are providing value to your clients, they will continue coming to you.

In the AI centric or AI oriented system, …those kind of bringing the value still continues. …if you don’t provide value, nobody’s going to buy your newspaper or book or nobody’s going to listen to the radio or to the podcast.”

Master The Use Of AI To Provide Value

Todorovic next acknowledged that, as an employee at Google, he also faces questions of whether AI is going to take his job away, just like online businesses are worried about whether AI is going to replace them or make their businesses obsolete.

His answer is to adapt to AI and use it in a way that increases your value as an employee or as an online business.

Todorovic explained:

“So I think everybody, including all of us, there’s a lot of questions… Like, is AI going to take our jobs and so on. I think we all need to continue thinking, how do we provide value on top of all of this? And in many cases, this is about mastering the AI tools and being able to use them in the best possible way.

So this is one of my recommendations to all the SEO professionals, site owners, and the whole ecosystem, that they continue providing value, but then do not neglect the new technology and make sure you use it in the best possible way for you.

Now, obviously I don’t think we would …recommend the best possible way is to just multiply all the content and just generate because you know, it’s cheap and easy …it’s not going to provide a ton of value.

But if you’re using it to improve your grammar, to improve the style a little bit, make it kind of more interesting and so on, I don’t think that’s a wrong use of the technology. But then there’s plenty of ways, okay. Maybe AI can help you better understand your data. Maybe AI can help you understand the competition potentially better as well. So clearly this is something we can advise.”

My Example Of An AI Prompt For SEO

One of the ways you can use AI for SEO is to ask the AI to do a reverse knowledge search on your web page content. A reverse knowledge search is when an algorithm reviews content to extract the questions the web page is likely to answer. If you run this prompt on your web page, it will tell you what search queries the page is likely to answer.

For example, I recently wrote an article about how Google uses clicks as part of the ranking process.

I uploaded a copy of the finished article to ChatGPT with the following prompt:

“Analyze the document and extract a list of questions that are directly and completely answered by full sentences in the text. Only include questions if the document contains a full sentence or contiguous sentences that clearly answers it. Do not include any questions that are answered only partially, implicitly, or by inference.

For each question, ensure that it is a clear and concise restatement of the exact information present. This is a reverse question generation task: only use the content already present in the document.

For each question, also include the exact sentences from the document that answer it. Only generate questions that have a complete, direct answer in the form of a full sentence or sentences in the document.”
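The same prompt can be scripted rather than pasted into a chat window. Below is a minimal sketch using the OpenAI Python client as an assumed example (any chat-capable LLM API works the same way); the model name, file path, and the shortened prompt text are placeholders, not part of the original workflow.

```python
# Sketch of automating the reverse knowledge search with an LLM API.
# The prompt here is abridged; use the full prompt from the article.
REVERSE_KNOWLEDGE_PROMPT = (
    "Analyze the document and extract a list of questions that are directly "
    "and completely answered by full sentences in the text. For each question, "
    "also include the exact sentences from the document that answer it."
)

def build_messages(article_text: str) -> list:
    """Pair the reverse knowledge prompt with the article to analyze."""
    return [
        {"role": "system", "content": REVERSE_KNOWLEDGE_PROMPT},
        {"role": "user", "content": article_text},
    ]

# To actually run it (requires the `openai` package and an API key):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(
#     model="gpt-4o",  # placeholder model name
#     messages=build_messages(open("article.txt").read()),
# )
# print(response.choices[0].message.content)
```

Running this over a batch of pages would give you a sitewide map of the questions your content actually answers, which you can then compare against the queries you want to rank for.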

The first question that ChatGPT said my article answers is: “What are clicks considered in the context of ranking signals?”

The following is a screenshot of ChatGPT’s response where it shows the question my article answers and a snippet of text from the article that answers that question.

Screenshot Of ChatGPT’s Answer

Query Ranks #1 In Google

I then took that question and entered it on Google and it ranks #1 for that question in the organic part of the search results.

Screenshot Of My Article Ranking #1 In Google

Query Ranks #1 In Bing

I then asked the same question in Bing and my web page content ranks in (1) the featured snippets, (2) Bing News, and (3) the top of Bing’s organic listing.

Screenshot Of Bing #1 Ranking

I didn’t use AI to create the article or to optimize it. I just wrote it based on all the different things that I know about clicks and Google’s algorithms, using a list of topics I wanted to cover. I have been doing SEO for 26+ years, so I don’t really need an AI to tell me how to optimize a web page; it’s second nature to me.

But I did use AI to check grammar.

The Reverse Knowledge Prompt is something anyone can use to test whether their content is focused on the right topics, to spot passages that drift off-topic, or to see what a web page is really about so it can be cleaned up if it isn’t about what you hoped it would be.

It’s not a way to reverse-engineer search engines. It’s a way to reverse knowledge search your content with AI to see what it’s really about.
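If you’d rather script this check than paste into a chat window, here is a minimal sketch that wraps the prompt and an article into a chat-style payload. The system message, function name, and the client call at the end are illustrative assumptions rather than any product’s actual API, and the prompt text is condensed from the version quoted above:

```python
# Sketch: package the Reverse Knowledge Prompt with an article for a
# chat-completion-style API. The function name and system message are
# illustrative assumptions, not part of the workflow described above.

REVERSE_KNOWLEDGE_PROMPT = (
    "Analyze the document and extract a list of questions that are "
    "directly and completely answered by full sentences in the text. "
    "Only include questions if the document contains a full sentence or "
    "contiguous sentences that clearly answer it. Do not include any "
    "questions that are answered only partially, implicitly, or by "
    "inference.\n\n"
    "This is a reverse question generation task: only use the content "
    "already present in the document. For each question, also include "
    "the exact sentences from the document that answer it."
)

def build_reverse_knowledge_messages(article_text: str) -> list[dict]:
    """Pair the prompt with the article text in a chat-style message list."""
    return [
        {"role": "system", "content": "You are a careful content analyst."},
        {
            "role": "user",
            "content": f"{REVERSE_KNOWLEDGE_PROMPT}\n\n---\n\n{article_text}",
        },
    ]

messages = build_reverse_knowledge_messages(
    "Clicks are considered a ranking signal in the context of ..."
)
# Send `messages` to whichever chat-completion client you use, e.g.:
# response = client.chat.completions.create(model="...", messages=messages)
```

The returned list can be handed to any chat-style endpoint; only the article text changes between runs.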

Hidden Gem Advice

At Google’s Search Central Live last year, I was talking with an attorney attending the show, and he asked me what one important thing he could do to rank better. I told him that, for him, it would be branding his site’s offerings in the minds of potential visitors with the services he offers. Part of doing that is getting word of mouth going so that potential clients will think of the law firm’s brand name when they need its specific service.

After the break, we went back into the auditorium, and Danny Sullivan started talking about how sites should try to be brands, and I looked over at the guy I had just been talking with, and he raised an eyebrow back at me.

The advice to provide value is a hidden gem type of advice, in my expert opinion.

Listen to the Search Off The Record Podcast here:

How AI Is Changing Google Search and SEO

Featured Image by Shutterstock/dee karen

Google Engineer Explains ‘Black Box’ AI Models In Search via @sejournal, @MattGSouthern

Nikola Todorovic, Director of Software Engineering at Google Search, appeared on an episode of Search Off the Record to discuss how AI evolved inside Google Search.

Todorovic leads Google’s SafeSearch engineering team and has worked in the search organization for 15 years. He said machine learning was difficult to deploy broadly across Search because complex models are harder to understand and fix than simpler systems.

He was explaining why Google could not simply apply ML systems across Search at once. Todorovic said these models can “function like a kind of a black box” because engineers don’t always understand what happens underneath.

That makes debugging harder when search systems change over time or when a model needs to be replaced, he said.

SafeSearch As Proving Ground

Todorovic said SafeSearch was one of the first places where Google could deploy AI models in Search because the team could isolate those systems from the main ranking flow.

SafeSearch could run standalone image and video classifiers that produced a signal, such as how explicit a result might be. If problems came up, engineers could iterate on the model without disrupting the rest of Search.

Convolutional neural networks began improving image understanding about 12 years ago, he said, making SafeSearch a natural early use case for machine learning inside Search.

AI Overviews Built On Existing Search

Todorovic described AI Overviews as a feature that “stamps on top” of Google’s existing retrieval and ranking systems. He said the retrieval and ranking underneath AI Overviews is still what he called “the old style, the old school.”

The process can involve fan-out queries, he said. Google may identify additional queries related to the original input, run them in parallel, and bring the retrieved results back into one response.

AI Overviews then combine and summarize information from selected results, including source text, snippets, titles, and other page context, he said.

AI Mode follows a similar pattern but operates with more independence, Todorovic said. He described it as still running on Search, while having a “bigger platform for its own.”

Why This Matters

The “black box” quote is getting attention, but the full context matters. Todorovic was explaining why machine learning was difficult to deploy broadly across Search, not saying Google lacks oversight of AI Overviews or AI Mode.

His comments add useful context to Google’s existing AI Search documentation. Google has already said AI Overviews and AI Mode may use query fan-out, issuing multiple related searches across subtopics and data sources to develop responses.

The useful point is not that AI is a “black box.” His comments reinforce that traditional Search systems still matter for AI Overviews, even as Google layers summarization and fan-out on top.

That keeps traditional Search fundamentals relevant to AI features, even as Google changes how results are summarized and presented.

Looking Ahead

The difference between AI Overviews and AI Mode is worth watching as Google expands AI Mode. Todorovic described AI Overviews as more isolated from the rest of Search, while AI Mode has more of its own infrastructure.

That difference may matter for how Google explains visibility, measurement, and optimization guidance as AI Mode expands.

Google’s March Core Update Shifted Visibility Away From Aggregators via @sejournal, @MattGSouthern

An analysis from Amsive found that aggregators and user-generated content platforms lost US search visibility after Google’s March core update, while first-party brand sites, government domains, and content originators gained.

Lily Ray of Amsive examined over 2,000 domains using SISTRIX Visibility Index data and categorized them with Google Product Taxonomy tags via the DataForSEO API. The analysis compared visibility on March 27 (rollout start) versus April 8 (completion).

Amsive sees this pattern as a correction for over-indexed UGC and aggregator content, favoring “the company that owns the thing” over “the platform people use to talk about the thing.”

For transparency, SISTRIX measures keyword visibility rather than organic traffic. Other factors can also influence visibility.

YouTube’s Drop Led All Losers

YouTube lost 567 visibility points, the largest single-domain decline in Amsive’s dataset. Ray notes this is roughly 30% larger than Wikipedia’s 435-point drop during the December core update.

She adds context that YouTube’s visibility dropped back to its level before the early March surge, not to a new low.

Reddit lost 64 points, Instagram lost 48, and X lost 46.

Category Patterns: Travel, Jobs, And Health

In travel, OTAs and aggregators lost ground while hotel chains gained. TripAdvisor fell 45 points, Yelp 33, and Expedia 33, while Hilton rose 4, Hotels.com 3.6, and Trivago 3.2. NPS.gov gained 9.9, and airport websites saw large gains.

In jobs and education, job board aggregators declined while employer career pages and government sites rose. Indeed lost 18, ZipRecruiter 13. BLS.gov gained 5.4, USAJobs.gov 16%, Disney Careers 59%, CVS Health Careers 45%.

Health showed a split: GoodRx rose 55% (9.5 points) and NIH.gov gained 9.3, but Cleveland Clinic dropped 12, WebMD 9, and Mayo Clinic 6.

Google seems to favor authoritative sources over consumer health publishers, though this is interpretive.

Bounce-Backs Complicate The Loser Data

Ray notes some big losers recovered shortly after the update. Reddit and Indeed saw visibility bounce back, indicating the loser list shows the update window but not where domains settled.

Connection To Prior Research

The findings align with a Zyppy analysis of over 400 sites, published earlier this month. Cyrus Shepard’s analysis showed sites offering products or services that enable task completion tend to gain organic traffic.

Ray cites Shepard’s data as supporting, despite different methodologies: Shepard measured correlations with third-party traffic estimates, whereas Amsive tracked SISTRIX visibility during an update window.

A SISTRIX analysis of German data found similar results: online shops and utility sites lost ground, while official websites and brands were more resilient.

Why This Matters

The data doesn’t confirm what Google changed or why. What it shows is that across travel, jobs, health, finance, and entertainment, the same pattern appeared.

Platforms that aggregate, list, or comment on other people’s content lost visibility, while sites that created or owned the content gained visibility. That’s a pattern worth checking against your own data from the same window.

Looking Ahead

Google hasn’t detailed what changed in the March core update. The rollout window was March 27 to April 8, and Amsive’s data should be read as one visibility snapshot from that period.


Featured Image: Roman Samborskyi/Shutterstock

Ask Jeeves Is Gone After Nearly 30 Years Of Search via @sejournal, @MattGSouthern

Ask.com, the search engine that started life as Ask Jeeves, shut down. Parent company IAC discontinued its search business as part of an ongoing effort to refocus its operations.

A farewell message posted on the Ask.com homepage reads:

“Every great search must come to an end. As IAC continues to sharpen its focus, we have made the decision to discontinue our search business, which includes Ask.com.”

The message thanked the engineers, designers, and teams who built the platform over the decades, as well as the users who relied on it. It closed with a short line: “Jeeves’ spirit endures.”

What Ask Jeeves Was

For anyone who came online after 2005 or so, Ask Jeeves might just be a name. But for users who first experienced the web in the late 1990s, Jeeves was something new.

Garrett Gruener and David Warthen founded the company in Berkeley, California, in 1996. The service launched publicly as AskJeeves.com and introduced an idea that felt strange at the time.

Instead of typing keywords the way every other search engine expected, Jeeves encouraged users to type a full question in plain English. The search engine would try to return a direct answer.

The mascot, a cartoon butler named after the fictional valet in P.G. Wodehouse’s novels, became one of the most recognizable characters on the early internet. Jeeves made search feel approachable when the web was still intimidating to millions of new users. Jeeves also crossed into mainstream advertising, including appearances tied to the Macy’s Thanksgiving Day Parade.

Ask Jeeves went public in 1999, riding the dot-com boom. By that point, the search engine was already handling over a million queries a day. It competed alongside Yahoo, AltaVista, Excite, and Lycos in a search market that hadn’t yet consolidated around a single winner.

Google’s rise changed the market.

The Long Decline

Google’s PageRank algorithm delivered better results faster, and users noticed. Ask Jeeves tried to keep pace. In 2001, the company acquired Teoma, a search technology firm with its own way of ranking credibility. The Teoma engine powered Ask’s organic results and earned respect among search professionals for its quality.

But the gap kept widening. IAC acquired Ask Jeeves in 2005 and quickly dropped “Jeeves” from the name. The rebrand to Ask.com was meant to modernize the product and position it for broader competition.

It didn’t work. By 2010, Barry Diller said at TechCrunch Disrupt that Ask.com couldn’t compete with Google and carried no value in IAC’s stock. That same year, Ask.com shut down its own web crawler and laid off much of its engineering staff. Core search functions were outsourced to third-party providers. The company pivoted to a question-and-answer community model.

That kept the lights on for another 16 years, but Ask never came close to relevance again.

SEJ Was There

Search Engine Journal covered Ask Jeeves extensively during its peak years.

SEJ founder Loren Baker reported in 2005 on the company’s plans to launch a paid search advertising platform to rival Google and Yahoo. He covered the rebrand rumors when Diller first floated the idea of dropping the Jeeves name. He tracked the iWon and Excite acquisitions that briefly doubled Ask Jeeves’ market share.

Those articles are now a time capsule of the era when search was still a multi-player race.

Why This Matters

Ask Jeeves pioneered asking questions in one’s own words, but Google’s rise made keyword searching standard. Now, natural-language search is central again, with Google’s AI features built on Jeeves’s original premise of asking questions in plain language.

Looking Ahead

IAC’s farewell message gave no indication of plans for the Ask.com domain or any associated properties. The shutdown appears to end IAC’s consumer search business under the Ask brand.

For the search industry, the closure is a reminder of how fast the market consolidated after Google’s rise. Of the best-known consumer search brands from that period, Google is the one that emerged with an independent global search engine.


Featured Image: viewimage/Shutterstock

What Google & Microsoft Earnings Say About Search via @sejournal, @MattGSouthern

Alphabet reported Q1 2026 earnings with Google Search & Other revenue rising 19% year over year to $60.4 billion. Microsoft announced on the same day that Bing reached 1 billion monthly active users for the first time, with search ad revenue up 12%.

Both companies posted strong search quarters. But one line item in Alphabet’s report tells a different story for the websites that depend on Google’s ad network for revenue.

Google Network Revenue Fell Below $7 Billion

The “Network” segment, including AdSense, AdMob, and Google Ad Manager, isn’t a proxy for the entire web’s ad economy but is a clear financial indicator tied to ads outside Google’s surfaces. For publishers and app developers relying on Google-brokered ads, the decline affects them more than it affects Search revenue growth.

The segment has been shrinking for two years, with Google Network revenue declining each quarter from Q1 2024 to Q1 2026. Q1 2026’s $6.97 billion is the lowest yet, dipping below $7 billion for the first time.

The gap is increasingly evident. In Q1 2024, Google Network accounted for about 12% of Google’s ad revenue; by Q1 2026, it fell to roughly 9%. Meanwhile, Google Search & Other grew from $46.2 billion to $60.4 billion, with Search expanding 31% and Others contracting 6%.

The decline doesn’t match the overall digital ad market. The IAB/PwC Internet Advertising Revenue Report found that U.S. programmatic advertising grew 20.5% in 2025 to $162.4 billion. The programmatic market grew while Alphabet’s Google Network line didn’t.

The quarterly numbers smooth over sharper disruptions at the publisher level. In January, a two-day technical failure in Google’s ad exchange led AdSense publishers to report eCPM and RPM drops of 50-90% without corresponding declines in traffic. Google resolved the issue, but it showed how fragile publisher-side network monetization can be.

Bing’s Milestone In Context

While Google’s revenue mix hints at an ecosystem shifting inward, Microsoft is leaning heavily into user acquisition to prove its AI bets are paying off.

CEO Satya Nadella revealed during the FY26 Q3 earnings call that Bing reached 1 billion monthly active users for the first time. Search ad revenue, excluding traffic acquisition costs, grew 12%. Edge has gained browser market share for 20 consecutive quarters.

The broader segment, which includes Bing, was down 1% overall to $13.2 billion. Search advertising was the bright spot within it.

Bing’s global search share still sits at about 5% worldwide per StatCounter’s March 2026 data. That gap between 1 billion MAU and roughly 5% global share raises questions about what the MAU figure measures. Microsoft hasn’t defined frequency, overlap, or how AI-related Bing usage is counted.

Microsoft is also building measurement tools that matter for SEOs. Bing Webmaster Tools now maps grounding queries to cited pages, and Microsoft previewed Citation Share at SEO Week in April. When Citation Share ships, it could become one of the first platform-provided tools for comparing AI visibility on Bing against competitors.

CFO Amy Hood reported Q4 search ad growth in the high single digits, down from three double-digit quarters. Nadella said the consumer business is doing “the foundational work required to win back fans.” Bing’s results support maintaining coverage, not dropping Google-first focus.

Why This Matters For Search Professionals

For over a year, SEO professionals have monitored whether AI Overviews and AI Mode decrease clicks to publisher sites. These reports don’t settle that question but support a pattern documented by independent research.

Google’s Search business is growing, with CEO Sundar Pichai calling queries “at an all-time high.” Chief Business Officer Philipp Schindler attributed the quarter’s strength to retail, finance, and health.

What’s contested is what happens after the query. Google Network revenue fell while Search revenue accelerated, suggesting more searches stay on Google surfaces. The data doesn’t prove AI Overviews or AI Mode caused the Network decline; Google Network can fall for various reasons, such as shifting ad demand and product changes. Still, it gives search marketers another financial signal to compare with traffic, CTR, and publisher revenue.

Third-party data partially fills the gap, though studies measure different things. An Ahrefs study analyzed 300,000 keywords using desktop CTR data and found that AI Overviews correlate with 58% lower click-through rates. Chartbeat data shared by Axios showed small publishers lost 60% of search traffic over two years, medium publishers 47%, and large publishers 22%.

Seer Interactive tracked an organic CTR drop from 1.41% to 0.64% for queries with AI Overviews. Its April update showed some recovery. Organic CTR on AI Overview queries climbed from 1.3% in December to 2.4% in February. The worst of the initial drop may have eased, but CTR is still well below that of pages without AI Overviews.

Google’s Liz Reid claimed on Bloomberg that AI Overviews reduce “bounce clicks” rather than useful visits, but didn’t provide supporting data. She said Google tracks search recurrence, which measures Google’s retention rather than publisher traffic. Google executives made a similar argument at Google Marketing Live, calling clicks from AI-enhanced search “more highly qualified” without sharing data to back it up.

Search activity continues to grow according to disclosed metrics. However, the value capture is shifting. Metrics like referral traffic, AdSense RPM, or organic CTR may no longer align with search revenue growth. Google’s revenue can rise even as publisher traffic declines.

What Neither Company Disclosed

Neither company disclosed how much AI-assisted query growth produces outbound clicks to publisher sites; that number has been absent from earnings reports since AI features launched in Search.

Pichai said queries are “at an all-time high,” referring to searches, not clicks to external sites. Microsoft hasn’t clarified what counts toward Bing’s 1 billion MAU, including whether Copilot interactions, API calls, or agent queries are included.

Looking Ahead

Pichai said more Search info will be shared at Google I/O in May and Google Marketing Live.

Microsoft’s Citation Share hasn’t shipped yet; once it does, it could be among the first platform tools for comparing AI visibility on Bing. Its usefulness depends on whether Microsoft discloses outbound click data alongside its MAU figures.


Google’s Preferred Sources Is Now A Global SEO Signal via @sejournal, @martinibuster

Google updated its Search Central documentation to reflect that the Preferred Sources feature is now available in all languages supported by Google Search. The change clarifies global availability and introduces updated guidance for publishers looking to grow their audience through Top Stories.

Preferred Sources

Google’s Preferred Sources feature gives users a way to choose specific publishers they want to see more often in Top Stories and other search surfaces like Google Discover. Preferred Sources is a direct user-controlled signal that works alongside Google’s ranking systems to up-rank websites users have indicated they want to see more of.

The effect on Google Discover is that users will see more of their preferred sources in their feed. The Preferred Sources feature is one of the few ways that publishers and SEOs can indirectly influence Google’s algorithm to show their sites more often to users.

Image Of A Preferred Sources Badge


How Preferred Sources Works

Preferred Sources selections don’t override relevance. A publisher must still publish fresh content that aligns with a user’s interests. Google Discover is a recommender system that surfaces web pages relevant to those interests, especially favoring fresh content (read more about Google’s freshness algorithm).

Google’s February 2026 Discover Core Update documentation made it clear that source preferences play a role in which sites are shown to users in Discover.

The documentation explains:

“We’ll continue to show content that’s personalized based on people’s creator and source preferences.”

What Changed

As of April 30, 2026, the feature is available in all languages supported by Google Search. This expands Preferred Sources from an English-only feature into a globally available system for users to signal their source preferences.

What It Means For Publishers

Preferred Sources functions as a way for users to signal which sites they’d like to see more of, a signal that works alongside other ranking factors.

The feature is one way SEOs and publishers can build an audience. Publishers can guide users to select them through buttons and links.

Screenshot Of A Preferred Sources Badge


Preferred Sources Now Available Globally

Google’s changelog confirms that the Preferred Sources feature is no longer limited to English-language content and is now available across all languages where Google Search operates. Google has also published downloadable buttons in sixteen languages that can be used by SEOs and publishers to encourage site visitors to choose the website as a preferred source.

List Of 16 Languages With Downloadable Buttons

  1. Danish
  2. English
  3. Estonian
  4. Finnish
  5. French
  6. German
  7. Hebrew
  8. Hindi
  9. Japanese
  10. Korean
  11. Portuguese (Brazil)
  12. Russian
  13. Spanish
  14. Swedish
  15. Turkish
  16. Ukrainian

The original documentation stated that the feature was “available globally in English,” which has now been removed and replaced with language confirming full international availability.

Documentation Updated To Reflect Broader Access

Google also revised supporting text to reflect the expanded reach of the feature. Language that expressly limited the availability to English language publications has been rewritten to emphasize global applicability.

For example, guidance around adding a Preferred Sources button has been updated to clarify that the feature is not limited to a subset of languages. The revised documentation explicitly notes that the feature is available in all supported languages, even if only certain assets are listed.

The updated documentation now explains:

“The preferred sources feature is available globally for queries that trigger the “Top Stories” feature in all languages where Google Search is available.

These methods are examples on how you can build your audience and help people find your site as a preferred source. It’s not required to do them in order to appear as a preferred source.”

Another new section about the Preferred Sources badge was updated with the following:

“Add a button to your site alongside your other social CTAs. You may use your own design or download the Google button assets provided in the list. Note: This feature is available in all supported languages, not just those listed.”

Google’s Changelog Offers Insight Into Change

The official changelog explains the reasoning behind the update:

“Expanding preferred sources to all languages where Google Search is available

What: Added that the preferred sources feature is now available in all languages where Google Search is available, including new translated downloadable button assets.

Why: The preferred sources feature is now available in all languages supported by Google Search.”

Takeaways For SEO

The expansion of Preferred Sources to all supported languages broadens the opportunity for publishers in every language to influence which websites Google Search shows to users. While publishers and SEOs can’t manipulate the signal itself, they can encourage their site visitors to select their site, which in turn influences what Google Search shows.

For publishers, this means:

  • The ability to participate in Preferred Sources regardless of language
  • New opportunities to build audience signals tied to Top Stories
  • Clearer guidance on how to implement Preferred Source prompts
  • Preferred Sources is now available in all languages supported by Google Search
  • Publishers worldwide can now use Preferred Sources to build audience preference signals

Google Tells Developers To Build For AI Agents, Not Just Humans via @sejournal, @MattGSouthern

Google’s web.dev site now includes guidance advising developers to treat AI agents as a distinct audience alongside human visitors.

Titled “Build agent-friendly websites,” it tells developers that “some human users are pivoting from manual navigation to delegating goal-oriented journeys to AI agents.”

Google frames this as a design problem, noting that websites built with complex hover states and shifting layouts are “functionally broken for agents.”

What The Guide Covers

Google describes three ways agents interpret websites:

  1. Screenshots let agents use vision models to identify elements visually.
  2. Raw HTML gives agents the DOM structure and hierarchy.
  3. The accessibility tree provides what Google calls a “high-fidelity map” of interactive elements, stripped of visual noise.

Google’s recommendations for agent-friendly design include using semantic HTML elements like <button> and <a> over styled <div> elements, keeping layouts stable across pages, linking <label> tags to inputs with the for attribute, and setting cursor: pointer on clickable elements.
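The markup patterns named in Google’s guide can be roughly audited with a short script. This sketch uses Python’s standard html.parser; the class name and the three checks are illustrative heuristics of my own, not Google’s actual criteria:

```python
from html.parser import HTMLParser

class AgentReadinessAuditor(HTMLParser):
    """Rough audit of markup patterns named in the guide.
    The checks are illustrative heuristics, not Google's criteria."""

    def __init__(self):
        super().__init__()
        self.clickable_divs = 0    # styled <div> doing a button's job
        self.semantic_elements = 0 # real <button> / <a href> elements
        self.unlabeled_labels = 0  # <label> missing the for attribute

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "div" and "onclick" in attrs:
            self.clickable_divs += 1
        elif tag == "button" or (tag == "a" and "href" in attrs):
            self.semantic_elements += 1
        elif tag == "label" and "for" not in attrs:
            self.unlabeled_labels += 1

auditor = AgentReadinessAuditor()
auditor.feed('<div onclick="go()">Buy</div><button>Buy</button>'
             '<label>Email</label><input id="email">')
print(auditor.clickable_divs, auditor.semantic_elements,
      auditor.unlabeled_labels)
# → 1 1 1
```

A real audit would also check layout stability and cursor styles, which require rendering; this only inspects the raw HTML, the second of the three views described above.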

Google wraps up with a statement that highlights the connection between agent optimization and current web standards:

“Everything we suggest to make a site ‘agent-ready’ also makes sites better for humans.”

WebMCP As A Forward Signal

At the bottom of the guide, Google links to WebMCP, a proposed web standard for helping websites interact with agents. Chrome’s team describes it as an early preview program and is accepting sign-ups for developers who want to experiment.

WebMCP would let websites register tools with defined input/output schemas that agents can discover and call as functions. Slobodan Manic covered WebMCP last week as part of the broader protocol stack forming around agent interaction.

Why This Matters

Semantic HTML, stable layouts, and proper accessibility markup have been web development defaults for years, and we’ve covered agent-optimization in depth.

What’s new is Google making this an official developer resource. Putting agent-friendliness on web.dev signals that Google is treating agent interaction as part of its developer guidance, alongside established areas like accessibility and performance.

For sites that already follow accessibility best practices, there’s little to change. For those that don’t, the business case for semantic HTML now extends beyond screen readers to AI agents that browse, compare, and transact on behalf of users.

Looking Ahead

The WebMCP early preview program is open for sign-ups, and Chrome sessions are on the schedule for Google I/O on May 19–20, giving developers another place to watch for updates on browser-based agent interactions.


Featured Image: Summit Art Creations/Shutterstock