Google On AI Search & Why Browsy Queries Favor Full SERPs via @sejournal, @martinibuster

Google’s Liz Reid recently discussed what goes on behind the scenes of AI Search, particularly the fragmentation of complex queries into smaller ones and a relatively new concept, Browsy Queries. Her comments offer insights into what SEOs should focus on right now in order to perform better in AI search surfaces.

Search Behavior Is Varied, Not Monolithic

Host Joe Weisenthal asked Liz Reid about user behavior patterns in search, how users choose between classic search and AI search, and what differences in queries result from choosing one platform over the other.

Liz Reid answered by first defining her terms, grouping classic search and AI Mode together as Search, and positioning Gemini as something fundamentally different.

She also stated that there is a massive number of users whose search behavior varies across all search surfaces, in essence saying that there isn’t a monolithic user behavior pattern in which people do the exact same searches, the kind of pattern the interviewer was asking about.

Liz Reid answered:

“There’s sort of your main search page. There’s AI Mode. That’s part of search.

And then there’s the Gemini app.

And I would say there’s a lot of users, so their behavior varies across all of them.”

Search And AI Usage Patterns Are Complex

The SEO and publishing community often thinks about Search as Google but Liz Reid says that user behavior patterns point to a more complex search ecosystem where users are relying on multiple platforms.

She continued her answer:

“But there are some patterns. There’s plenty of people who co-use across them. There’s plenty of people that are actually using several AI products right now, just in general, not even just within Google.

Across Gemini and Search, the more informational ones… Like, if it’s an informational query, then the probability that they’re using Search or AI Mode is going to be higher.

If it’s a creative query, it’s like more of a productivity question like, please rewrite this to make it sound more formal, right? Those type questions are going to be more Gemini-oriented.

Between AI Mode and Search, the main search page, some people use AI Mode mostly via AI overviews. They start in AI overviews and they transition.

For those who go direct to AI Mode, they tend to do that for queries that they consider sort of more complex, longer questions, questions where they expect that they’re going to do more follow-ups, versus if you’re doing a very browsy query, you might choose to prefer all of the SERP.”

Browsy Queries And Browse Search Intent

When we think about search, it may be useful to consider that people not only search across platforms, but they do it for different reasons.

Takeaways About How People Use AI

  • Co-Users
    People use multiple platforms simultaneously (co-use)
  • Informational Queries
    These tend to happen on Classic Search and AI Mode
  • Creative Queries
    These tend to happen on Gemini
  • AI Mode Direct
    Queries that originate on AI Mode, where people navigate to AI Mode, tend to be complex, what was traditionally called longtail.
  • Browsy Queries
    This is a relatively new phrase that Googlers apparently use.

What Are Browsy Queries?

The phrase “browsy queries” appears to be something that Googlers use internally, and it may be more familiar to people who do pay-per-click advertising. There aren’t many public instances of the phrase, but here’s how Google uses it.

A software engineer formerly of DeepMind and Google describes in her LinkedIn profile having built a machine learning model that identifies “browse intention” queries on Google Search, an invention that improved click-through rates by 5%.

She wrote:

“Built a machine learning model to identify ‘browse intention’ query on Google Search, which presents engaging content on search result pages for browsy queries (e.g. “best places to visit in Orlando”). Improved global search result click-through rate by 5%”

The phrase “browsy queries” is also used in a Google job description for a commerce software engineer, placing the phrase in the context of shopping queries.

“Commerce Retrieval researches and develops high-precision algorithms to reduce the search space for product queries by 8 orders of magnitude under tight latency and compute constraints. Our solutions are tailored to the unique complexities of the Shopping domain including browsy queries, a hierarchical schema, and short multimodal documents.”

It’s also used in the context of video ads in a Google support page for video ads:

“These new shoppable formats will be shown to potential customers in lower intent, more “browsy” Search placements earlier in their shopping journey.”

What Browsy Queries Mean And How To Optimize For Them

What’s consistent across all three uses is that “browsy queries” sit at the discovery stage of intent.

In each example, Google is identifying what keeps the user exploring:

  • The DeepMind example ties browsy queries to engaging content that a user wishes to browse through, not direct answers.
  • The commerce job role positions browsy queries as a quality of commerce search.
  • The ads example places browsy queries earlier in the shopping journey at about the discovery phase.

The useful takeaway is that Google treats these queries as exploration problems. What makes browsy queries complex is that they have under-specified user intent and are the result of consumers who may be looking for inspiration.

For an SEO or an online merchant, it means that a user has intent but hasn’t narrowed down what they want. That’s where contexts like “Stylish Outfits For Summer” come in handy. Broad keyword phrases are probably useful here. I like a pyramid structure, where the deeper a user gets into a page, the more specific the content becomes.
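
As a thought experiment, here is a toy sketch of the kind of “browse intention” classifier the DeepMind example describes. It is purely illustrative: Google’s actual model is not public, and the training queries, labels, and library choice below are invented for demonstration.

    # Toy browse-intent classifier (illustrative only, not Google's model).
    # Requires scikit-learn: pip install scikit-learn
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented training data: 1 = browsy (exploratory), 0 = specific intent.
    queries = [
        "best places to visit in orlando",
        "stylish outfits for summer",
        "gift ideas for new parents",
        "things to do this weekend",
        "delta flight 1182 status",
        "iphone 15 pro max 256gb price",
        "how tall is the eiffel tower",
        "nike pegasus 40 size 10 in stock",
    ]
    labels = [1, 1, 1, 1, 0, 0, 0, 0]

    # TF-IDF features plus logistic regression; a production system would use
    # far richer signals (click patterns, session data, embeddings).
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(queries, labels)

    print(model.predict(["best hiking trails near denver"]))      # likely browsy
    print(model.predict(["garmin forerunner 265 battery life"]))  # likely specific

The point of the toy isn’t accuracy; it’s that “browsy” is a classifiable property of a query, which is why Google can route those queries to more exploratory, browsable result pages.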

Keyword Fragmentation In AI Search

Liz Reid explained that users have always wanted to express longer natural language queries but were forced to narrow them down to keywords like “best restaurants in New York” even though what they really wanted may have been more specific like a restaurant with vegan options and an opening for a party of five.

For as long as I’ve been in SEO, and I’m near 30 years in the business, keyword research has been the foundation of digital marketing. You pick the keywords you want to rank for, then create content that is optimized for those keywords. The problem with optimizing for a short keyword phrase is that there are hidden meanings within that keyword, and that’s always been the case.

The way Google addressed the issue of latent meanings within keywords was to use signals like clicks to better understand what users meant when they typed ambiguous keyword phrases like “restaurants in New York.” Some SEOs believe that clicks were used for ranking websites, but another use for clicks is understanding what people mean when they type ambiguous phrases. What Google has done for quite a while now is rank the most popular meaning of a keyword phrase first; no matter how many links a page received, if its content aligned with a less popular meaning, the page wouldn’t rank.

Liz Reid said that people who use AI-based search are using longer queries that articulate their problem or information need, making it easier for Google to fetch the information they’re looking for. That change gets to the heart of the problem with organic search that AI search is solving, and the implications for SEO are profound.

Liz Reid begins:

“We have seen with AI overviews meaningfully longer queries. We see more natural language queries, but it’s also not even something as basic as that.

It can also be like you were searching for restaurants. We used to laugh about the like before I worked on search, I worked on maps and local, some of the intersection with search, and people would just be like, “restaurants New York.”

And you’re like, what do you want me to do with that query? Like, okay, the best restaurants in New York are going to take three months and 99.9% of the population can’t afford to go to them.

Okay, but like, are you picking 10 random ones, etc.?

But like, part of why people would do that is they had a much more complex– I want a restaurant in this location for five people. It can’t be too pricey. I have a vegan member. I also have kids. That was the question they had in their mind.

And in the old world of keyword-ese, that information would be spread throughout the web. And so you wouldn’t feel confident you could just put in the question.

And now with AI Overviews and AI Mode, you can start to actually, and you see people do this, they tell you the real problem, right?

They don’t take their need and translate it to what the computer understands. They try to give the computer their actual need and expect us to do the translation.”

The big ideas to unpack there are:

  • A typical complex question asked in AI Search may not be solved by one web page.
  • Complex questions may be one-off and rarely, if ever, repeated, which in many cases lowers the value of optimizing for those phrases, because the time spent crafting content for them could be more profitably spent elsewhere.
  • Given that a site will likely share the AI Overviews (AIO) space with other sites, this increases the need to optimize other factors, such as brand icons that stand out in a positive way, relevant images, and even videos, to claim as much AIO space as possible.
  • And yet, perhaps the bigger takeaway is that it’s not all long-tail, because Google breaks the long-tail phrase down into smaller, highly specific keyword phrases that each reflect a portion of the information need (a process called query fan-out) and fires those off to classic search. Google’s AI then picks from among the top three results for each query and uses them to synthesize an answer.

So it’s not really that SEOs should optimize for long-tail queries, because query fan-out uses classic search, bringing it all back to the specific queries that web pages are relevant to and optimized for.
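
To make query fan-out concrete, here is a minimal, purely illustrative sketch. Nothing in it is Google’s actual pipeline: the decomposition, the sub-queries, and the search function are hypothetical stand-ins for the process Reid describes.

    # Illustrative sketch of query fan-out (hypothetical, not Google's pipeline).
    # A complex natural-language query is decomposed into specific sub-queries,
    # each is answered by classic retrieval, and top results feed a synthesis step.

    def fan_out(complex_query: str) -> list[str]:
        # In a real system an LLM would do this decomposition; hard-coded here.
        return [
            "vegan restaurants downtown austin",
            "restaurants downtown austin table for five",
            "kid friendly restaurants downtown austin",
            "moderately priced restaurants downtown austin",
        ]

    def classic_search(sub_query: str, top_k: int = 3) -> list[str]:
        # Stand-in for a real search index; returns placeholder URLs.
        slug = sub_query.replace(" ", "-")
        return [f"https://example.com/{slug}/{i}" for i in range(1, top_k + 1)]

    def gather_candidates(complex_query: str) -> dict[str, list[str]]:
        # Each sub-query is an ordinary, specific phrase that pages can rank for.
        return {sq: classic_search(sq) for sq in fan_out(complex_query)}

    candidates = gather_candidates(
        "vegan-friendly, kid-friendly restaurant downtown Austin for five, not too pricey"
    )
    for sub_query, urls in candidates.items():
        print(sub_query, "->", urls)

The takeaway from the sketch is the same as Reid’s point: the pages being retrieved are still ranking for ordinary, specific queries in classic search.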

Addressing Real Needs

Reid didn’t go into detail on this point, but it’s interesting anyway: she said that the process of breaking a complex natural language query into smaller queries becomes a quality issue. One of the problems with AI search is that people aren’t searching with the same keyword phrases, which means Google can’t cache similar queries the way it can with organic search.

She explained:

“I think it means you have to do, it’s a harder job on quality, right?

You have to take this question, there’s many parts, and you have to figure out how you break it apart. And you have to do work to think about things like latency, because you can’t just, you know, if everyone uses the same keyword and it’s not personalized, then you can cache it all. If all of a sudden the queries get much more diverse, you know, it has consequences there.

But I think we just see that it’s very empowering people, right? That it takes some of the work out of searching.

A few years ago, they said, What more can you do with Google search? But if you actually ask them, Okay, when was the last time you spent 20 minutes searching when you would have preferred to spend 2? It’s actually not that hard for me. … And so it’s been kind of exciting to just… make people’s lives easier by helping them address their real need.”

On the surface, the idea of addressing users’ real needs sounds like one of those unhelpful “be awesome” or “content is king” slogans. But it’s actually a way that every SEO should be auditing web pages. Rather than limiting an audit’s scope to keywords, headings, and technical issues, look at how the page fills some kind of need.

Someone asked me today to look at their website, which was having trouble getting indexed. They suspected it might be a technical issue. My response was that, yeah, everyone hopes it’s a technical issue, but in many cases, and especially in this one, the problem becomes apparent when looked at through the lens of asking, “What need is this page filling?” and “How is this page not just different from some other page, but different and better?”

Watch the Liz Reid interview here:

Google’s Liz Reid on Who Will Own Search in a World of AI

Featured Image by Shutterstock/Summit Art Creations

Google Advises Using AI In Best Possible Way For AI Search via @sejournal, @martinibuster

Google’s Martin Splitt and Nikola Todorovic, Director of Software Engineering at Google Search, recently discussed how AI is changing Google and SEO. Todorovic encouraged SEOs and businesses to take advantage of AI to analyze data, research competition, and improve their ability to provide value.

AI And The Web Ecosystem

Google’s Martin Splitt asked a question that many SEOs and online businesses have on their minds related to what they should do for AI features like AI Mode and AI Overviews. Splitt and Todorovic both said that there are opportunities, especially with the use of AI within a narrow scope.

Martin asked:

“But one thing that we keep hearing from the ecosystem pretty much at every event we do and it’s everywhere is, how do we make sure that with AI features being part of Search now, that the ecosystem continues to thrive.

And I think that’s an interesting challenge, but also there are lots of opportunities thanks to AI features these days. And I know that we at Google try our best to go on this journey together with the ecosystem.

But how do you see it from your perspective? What is it that we do to make sure the ecosystem thrives with these new features?”

The question was specifically about what Google can do to ensure that the web ecosystem thrives, but the answer wasn’t about what Google can do; it was about what SEOs and businesses can do.

Todorovic acknowledged that this is a concern he’s aware of, but said there’s no “magic wand,” meaning there is no simple solution or roadmap. He did suggest that focusing on delivering value is a key way to adapt to the new AI search features.

He answered:

“This is clearly one of the key questions and you see them a lot on the social media as well. And I don’t think there is like a magic wand that can clearly give the guidance.
Okay, what do I do now? Like what would the SEO experts do now in the new system?

My kind of guiding principle or my like the way I see here is that the site owners, they do need to continue making sure that their products, that their websites, that their platforms are providing value to the user. Because ultimately, if you provide a particular value, then the users will continue coming to you and they will continue coming to you through Google as well.”

On the surface, this sounds like “content is king” or “be awesome” advice, but I think that reading misses a deeper point. For one, there is only so much that a Google engineer can say directly. But there is a lot they can say indirectly, and I think that’s what Todorovic is doing here.

For example, if Google’s systems reward sites that users are actively looking for, then “providing value” is the kind of thing that’s going to ring bells in that kind of algorithm, where external signals generated by users play a role in which sites Google ranks. I think it would be a mistake to dismiss the advice to “provide value” as a platitude. Knowing what we know about Google’s external signals, the advice makes a lot of sense.

Todorovic continued his answer:

“So… for example, you’re selling something, you have like a product or like a platform, you have like some subscriptions, et cetera. …if you are providing value to your clients, they will continue coming to you.

In the AI centric or AI oriented system, …those kind of bringing the value still continues. …if you don’t provide value, nobody’s going to buy your newspaper or book or nobody’s going to listen to the radio or to the podcast.”

Master The Use Of AI To Provide Value

Todorovic next acknowledged that, as an employee at Google, he also faces questions of whether AI is going to take his job away, just like online businesses are worried about whether AI is going to replace them or make their businesses obsolete.

His answer is to adapt to AI and use it in a way that increases your value as an employee or as an online business.

Todorovic explained:

“So I think everybody, including all of us, there’s a lot of questions… Like, is AI going to take our jobs and so on. I think we all need to continue thinking, how do we provide value on top of all of this? And in many cases, this is about mastering the AI tools and being able to use them in the best possible way.

So this is one of my recommendations to all the SEO professionals, site owners, and the whole ecosystem, that they continue providing value, but then do not neglect the new technology and make sure you use it in the best possible way for you.

Now, obviously I don’t think we would …recommend the best possible way is to just multiply all the content and just generate because you know, it’s cheap and easy …it’s not going to provide a ton of value.

But if you’re using it to improve your grammar, to improve the style a little bit, make it kind of more interesting and so on, I don’t think that’s a wrong use of the technology. But then there’s plenty of ways, okay. Maybe AI can help you better understand your data. Maybe AI can help you understand the competition potentially better as well. So clearly this is something we can advise.”

My Example Of An AI Prompt For SEO

One of the ways you can use AI for SEO is to ask it to do a reverse knowledge search on your web page content. A reverse knowledge search is when an algorithm reviews content to extract the questions that content is likely to answer. Run the prompt below against your web page, and it will tell you what search queries your page is likely to answer.

For example, I recently wrote an article about how Google uses clicks as part of the ranking process.

I uploaded a copy of the finished article to ChatGPT with the following prompt:

“Analyze the document and extract a list of questions that are directly and completely answered by full sentences in the text. Only include questions if the document contains a full sentence or contiguous sentences that clearly answers it. Do not include any questions that are answered only partially, implicitly, or by inference.

For each question, ensure that it is a clear and concise restatement of the exact information present. This is a reverse question generation task: only use the content already present in the document.

For each question, also include the exact sentences from the document that answer it. Only generate questions that have a complete, direct answer in the form of a full sentence or sentences in the document.”
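
If you’d rather run this programmatically than paste it into ChatGPT, here is a minimal sketch using the OpenAI Python SDK. The model name and file path are illustrative assumptions, not requirements; I used the ChatGPT web interface for the example below.

    # Minimal sketch: running the reverse knowledge prompt via the OpenAI
    # Python SDK (pip install openai). Assumes the OPENAI_API_KEY environment
    # variable is set; the model name and file path are illustrative.
    from pathlib import Path
    from openai import OpenAI

    REVERSE_KNOWLEDGE_PROMPT = (
        "Analyze the document and extract a list of questions that are "
        "directly and completely answered by full sentences in the text. "
        # ...continue with the full prompt text quoted above...
        "Only generate questions that have a complete, direct answer in the "
        "form of a full sentence or sentences in the document."
    )

    article_text = Path("article.txt").read_text(encoding="utf-8")

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": REVERSE_KNOWLEDGE_PROMPT},
            {"role": "user", "content": article_text},
        ],
    )
    print(response.choices[0].message.content)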

The first question that ChatGPT said my article answers is: “What are clicks considered in the context of ranking signals?”

The following is a screenshot of ChatGPT’s response where it shows the question my article answers and a snippet of text from the article that answers that question.

Screenshot Of ChatGPT’s Answer

Query Ranks #1 In Google

I then entered that question into Google, and my article ranks #1 for it in the organic part of the search results.

Screenshot Of My Article Ranking #1 In Google

Query Ranks #1 In Bing

I then asked the same question in Bing and my web page content ranks in (1) the featured snippets, (2) Bing News, and (3) the top of Bing’s organic listing.

Screenshot Of Bing #1 Ranking

I didn’t use AI to create the article or to optimize it. I just wrote it based on all the different things I know about clicks and Google’s algorithms, using a list of topics I wanted to cover. I have been doing SEO for 26+ years, so I don’t really need an AI to tell me how to optimize a web page; it’s second nature to me.

But I did use AI to check grammar.

The Reverse Knowledge Prompt is something anyone can use to test whether their content is focused on the right topics, to check whether the content is off-topic, or to understand what the web page is really about so you can clean it up if it’s not about what you hoped it would be.

It’s not a way to reverse-engineer search engines. It’s a way to reverse knowledge search your content with AI to see what it’s really about.

Hidden Gem Advice

At Google’s Search Central Live last year, I was talking with an attorney attending the show. He asked me what’s one important thing to do to rank better, and I said that, for him, it would be branding his site’s offerings in the minds of potential site visitors, tying the firm’s name to the services it offers. Part of doing that is getting word of mouth going so that potential clients think of the law firm’s brand name when they need its specific service.

After the break, we went back into the auditorium, and Danny Sullivan started talking about how sites should try to be brands, and I looked over at the guy I had just been talking with, and he raised an eyebrow back at me.

The advice to provide value is a hidden gem type of advice, in my expert opinion.

Listen to the Search Off The Record Podcast here:

How AI Is Changing Google Search and SEO

Featured Image by Shutterstock/dee karen

Google’s March Core Update Shifted Visibility Away From Aggregators via @sejournal, @MattGSouthern

An analysis from Amsive found that aggregators and user-generated content platforms lost US search visibility after Google’s March core update, while first-party brand sites, government domains, and content originators gained.

Lily Ray of Amsive examined over 2,000 domains using SISTRIX Visibility Index data and categorized them with Google Product Taxonomy tags via the DataForSEO API. The analysis compared visibility on March 27 (rollout start) versus April 8 (completion).

Amsive sees this pattern as a correction for over-indexed UGC and aggregator content, favoring “the company that owns the thing” over “the platform people use to talk about the thing.”

For transparency, SISTRIX measures keyword visibility rather than organic traffic. Other factors can also influence visibility.

YouTube’s Drop Led All Losers

YouTube lost 567 visibility points, the largest single-domain decline in Amsive’s dataset. Ray notes this is roughly 30% larger than Wikipedia’s 435-point drop during the December core update.

She adds context that YouTube’s visibility dropped back to its level before the early March surge, not to a new low.

Reddit lost 64 points, Instagram lost 48, and X lost 46.

Category Patterns: Travel, Jobs, And Health

In travel, OTAs and aggregators lost ground while hotel chains gained. TripAdvisor fell 45 points, Yelp 33, and Expedia 33, while Hilton rose 4, Hotels.com 3.6, and Trivago 3.2. NPS.gov gained 9.9 points, and airport websites saw large gains.

In jobs and education, job board aggregators declined while employer career pages and government sites rose. Indeed lost 18 points and ZipRecruiter 13, while BLS.gov gained 5.4 points, USAJobs.gov rose 16%, Disney Careers 59%, and CVS Health Careers 45%.

Health showed a split: GoodRx rose 55% (9.5 points) and NIH.gov gained 9.3, while Cleveland Clinic dropped 12 points, WebMD 9, and Mayo Clinic 6.

Google seems to favor authoritative sources over consumer health publishers, though this is interpretive.

Bounce-Backs Complicate The Loser Data

Ray notes some big losers recovered shortly after the update. Reddit and Indeed saw visibility bounce back, indicating the loser list reflects the update window rather than where domains eventually settled.

Connection To Prior Research

The findings align with a Zyppy analysis of over 400 sites, published earlier this month. Cyrus Shepard’s analysis showed sites offering products or services that enable task completion tend to gain organic traffic.

Ray cites Shepard’s data as supporting her findings despite the different methodologies: Shepard measured correlations with third-party traffic estimates, whereas Amsive tracked SISTRIX visibility during an update window.

A SISTRIX analysis of German data found similar results: online shops and utility sites lost ground, while official websites and brands were more resilient.

Why This Matters

The data doesn’t confirm what Google changed or why. What it shows is that across travel, jobs, health, finance, and entertainment, the same pattern appeared.

Platforms that aggregate, list, or comment on other people’s content lost visibility, while sites that created or owned the content gained visibility. That’s a pattern worth checking against your own data from the same window.

Looking Ahead

Google hasn’t detailed what changed in the March core update. The rollout window was March 27 to April 8, and Amsive’s data should be read as one visibility snapshot from that period.


Featured Image: Roman Samborskyi/Shutterstock

Ask Jeeves Is Gone After Nearly 30 Years Of Search via @sejournal, @MattGSouthern

Ask.com, the search engine that started life as Ask Jeeves, has shut down. Parent company IAC discontinued its search business as part of an ongoing effort to refocus its operations.

A farewell message posted on the Ask.com homepage reads:

“Every great search must come to an end. As IAC continues to sharpen its focus, we have made the decision to discontinue our search business, which includes Ask.com.”

The message thanked the engineers, designers, and teams who built the platform over the decades, as well as the users who relied on it. It closed with a short line: “Jeeves’ spirit endures.”

What Ask Jeeves Was

For anyone who came online after 2005 or so, Ask Jeeves might just be a name. But for users who first experienced the web in the late 1990s, Jeeves was something new.

Garrett Gruener and David Warthen founded the company in Berkeley, California, in 1996. The service launched publicly as AskJeeves.com and introduced an idea that felt strange at the time.

Instead of typing keywords the way every other search engine expected, Jeeves encouraged users to type a full question in plain English. The search engine would try to return a direct answer.

The mascot, a cartoon butler named after the fictional valet in P.G. Wodehouse’s novels, became one of the most recognizable characters on the early internet. Jeeves made search feel approachable when the web was still intimidating to millions of new users. Jeeves also crossed into mainstream advertising, including appearances tied to the Macy’s Thanksgiving Day Parade.

Ask Jeeves went public in 1999, riding the dot-com boom. By that point, the search engine was already handling over a million queries a day. It competed alongside Yahoo, AltaVista, Excite, and Lycos in a search market that hadn’t yet consolidated around a single winner.

Google’s rise changed the market.

The Long Decline

Google’s PageRank algorithm delivered better results faster, and users noticed. Ask Jeeves tried to keep pace. In 2001, the company acquired Teoma, a search technology firm with its own way of ranking credibility. The Teoma engine powered Ask’s organic results and earned respect among search professionals for its quality.

But the gap kept widening. IAC acquired Ask Jeeves in 2005 and quickly dropped “Jeeves” from the name. The rebrand to Ask.com was meant to modernize the product and position it for broader competition.

It didn’t work. By 2010, Barry Diller said at TechCrunch Disrupt that Ask.com couldn’t compete with Google and carried no value in IAC’s stock. That same year, Ask.com shut down its own web crawler and laid off much of its engineering staff. Core search functions were outsourced to third-party providers. The company pivoted to a question-and-answer community model.

That kept the lights on for another 16 years, but Ask never came close to relevance again.

SEJ Was There

Search Engine Journal covered Ask Jeeves extensively during its peak years.

SEJ founder Loren Baker reported in 2005 on the company’s plans to launch a paid search advertising platform to rival Google and Yahoo. He covered the rebrand rumors when Diller first floated the idea of dropping the Jeeves name. He tracked the iWon and Excite acquisitions that briefly doubled Ask Jeeves’ market share.

Those articles are now a time capsule of the era when search was still a multi-player race.

Why This Matters

Ask Jeeves pioneered asking questions in one’s own words, but Google’s rise made keyword searching standard. Now, natural-language search is central again, with Google’s AI features built on Jeeves’s original premise of asking questions in plain language.

Looking Ahead

IAC’s farewell message gave no indication of plans for the Ask.com domain or any associated properties. The shutdown appears to end IAC’s consumer search business under the Ask brand.

For the search industry, the closure is a reminder of how fast the market consolidated after Google’s rise. Of the best-known consumer search brands from that period, Google is the one that emerged with an independent global search engine.


Featured Image: viewimage/Shutterstock

What Google & Microsoft Earnings Say About Search via @sejournal, @MattGSouthern

Alphabet reported Q1 2026 earnings with Google Search & Other revenue rising 19% year over year to $60.4 billion. Microsoft announced on the same day that Bing reached 1 billion monthly active users for the first time, with search ad revenue up 12%.

Both companies posted strong search quarters. But one line item in Alphabet’s report tells a different story for the websites that depend on Google’s ad network for revenue.

Google Network Revenue Fell Below $7 Billion

The “Network” segment, which includes AdSense, AdMob, and Google Ad Manager, isn’t a proxy for the entire web’s ad economy, but it is a clear financial indicator tied to ads outside Google’s own surfaces. For publishers and app developers relying on Google-brokered ads, the segment’s decline matters more than Search revenue growth does.

The segment has been shrinking for two years, with Google Network revenue declining each quarter from Q1 2024 to Q1 2026. Q1 2026’s $6.97 billion is the lowest yet, dipping below $7 billion for the first time.

The gap is increasingly evident. In Q1 2024, Google Network accounted for about 12% of Google’s ad revenue; by Q1 2026, it had fallen to roughly 9%. Meanwhile, Google Search & Other grew from $46.2 billion to $60.4 billion, with Search expanding 31% and the “other” component contracting 6%.

The decline doesn’t match the overall digital ad market. The IAB/PwC Internet Advertising Revenue Report found that U.S. programmatic advertising grew 20.5% in 2025 to $162.4 billion. The programmatic market grew while Alphabet’s Google Network line didn’t.

The quarterly numbers smooth over sharper disruptions at the publisher level. In January, a two-day technical failure in Google’s ad exchange led AdSense publishers to report eCPM and RPM drops of 50-90% without corresponding declines in traffic. Google resolved the issue, but it showed how fragile publisher-side network monetization can be.

Bing’s Milestone In Context

While Google’s revenue mix hints at an ecosystem shifting inward, Microsoft is leaning heavily into user acquisition to prove its AI bets are paying off.

CEO Satya Nadella revealed during the FY26 Q3 earnings call that Bing reached 1 billion monthly active users for the first time. Search ad revenue, excluding traffic acquisition costs, grew 12%. Edge has gained browser market share for 20 consecutive quarters.

The broader segment, which includes Bing, was down 1% overall to $13.2 billion. Search advertising was the bright spot within it.

Bing’s global search share still sits at about 5% worldwide per StatCounter’s March 2026 data. That gap between 1 billion MAU and roughly 5% global share raises questions about what the MAU figure measures. Microsoft hasn’t defined frequency, overlap, or how AI-related Bing usage is counted.

Microsoft is also building measurement tools that matter for SEOs. Bing Webmaster Tools now maps grounding queries to cited pages, and Microsoft previewed Citation Share at SEO Week in April. When Citation Share ships, it could become one of the first platform-provided tools for comparing AI visibility on Bing against competitors.

CFO Amy Hood reported Q4 search ad growth in the high single digits, down from three consecutive double-digit quarters. Nadella said the consumer business is doing “the foundational work required to win back fans.” Bing’s results support maintaining Bing coverage, not abandoning a Google-first focus.

Why This Matters For Search Professionals

For over a year, SEO professionals have monitored whether AI Overviews and AI Mode decrease clicks to publisher sites. These reports don’t settle that question but support a pattern documented by independent research.

Google’s Search business is growing, with CEO Sundar Pichai calling queries “at an all-time high.” Chief Business Officer Philipp Schindler attributed the quarter’s strength to retail, finance, and health.

What’s contested is what happens after the query. Google Network revenue fell while Search revenue accelerated, suggesting more searches stay on Google surfaces. The data doesn’t prove AI Overviews or AI Mode caused the Network decline; Google Network can fall for various reasons, such as ad demand and product changes. But it gives search marketers another financial signal to compare with traffic, CTR, and publisher revenue.

Third-party data partially fills the gap, though studies measure different things. An Ahrefs study analyzed 300,000 keywords using desktop CTR data and found that AI Overviews correlate with 58% lower click-through rates. Chartbeat data shared by Axios showed small publishers lost 60% of search traffic over two years, medium publishers 47%, and large publishers 22%.

Seer Interactive tracked an organic CTR drop from 1.41% to 0.64% for queries with AI Overviews. Its April update showed some recovery. Organic CTR on AI Overview queries climbed from 1.3% in December to 2.4% in February. The worst of the initial drop may have eased, but CTR is still well below that of pages without AI Overviews.

On Bloomberg, Google’s Liz Reid claimed AI Overviews reduce “bounce clicks” rather than useful visits, but didn’t provide supporting data. She said Google tracks search recurrence, which measures Google’s retention rather than publisher traffic. Google executives made a similar argument at Google Marketing Live, calling clicks from AI-enhanced search “more highly qualified,” again without sharing supporting data.

Search activity continues to grow according to disclosed metrics. However, the value capture is shifting. Metrics like referral traffic, AdSense RPM, or organic CTR may no longer align with search revenue growth. Google’s revenue can rise even as publisher traffic declines.

What Neither Company Disclosed

Neither company disclosed how much AI-assisted query growth produces outbound clicks to publisher sites; that number has been absent from earnings reports since AI features launched in Search.

Pichai said queries are “at an all-time high,” referring to searches, not clicks to external sites. Microsoft hasn’t clarified what counts toward Bing’s 1 billion MAU, including whether Copilot interactions, API calls, or agent queries are included.

Looking Ahead

Pichai said more Search info will be shared at Google I/O in May and Google Marketing Live.

Microsoft’s Citation Share hasn’t shipped yet; once it does, it could be among the first platform tools for comparing AI visibility on Bing. Its usefulness depends on whether Microsoft discloses outbound click data alongside its MAU figures.


Google’s Preferred Sources Is Now A Global SEO Signal via @sejournal, @martinibuster

Google updated its Search Central documentation to reflect that the Preferred Sources feature is now available in all languages supported by Google Search. The change clarifies global availability and introduces updated guidance for publishers looking to grow their audience through Top Stories.

Preferred Sources

Google’s Preferred Sources feature gives users a way to choose specific publishers they want to see more often in Top Stories and other search surfaces like Google Discover. Preferred Sources is a direct user-controlled signal that works alongside Google’s ranking systems to up-rank websites users have indicated they want to see more of.

The effect on Google Discover is that users will see more of their preferred sources in their feed. The Preferred Sources feature is one of the few ways that publishers and SEOs can indirectly influence Google’s algorithm to show their sites more often to users.

Image Of A Preferred Sources Badge


How Preferred Sources Works

Preferred Sources selections don’t override relevance. A publisher must still publish fresh content that aligns with a user’s interests. Google Discover is a recommender system that shows users web pages relevant to their interests, especially favoring fresh content (read more about Google’s freshness algorithm).

Google’s February 2026 Discover Core Update documentation made it clear that source preferences play a role in which sites are shown to users in Discover.

The documentation explains:

“We’ll continue to show content that’s personalized based on people’s creator and source preferences.”

What Changed

As of April 30, 2026, the feature is available in all languages supported by Google Search. This expands Preferred Sources from an English-only feature into a globally available system for users to signal their source preferences.

What It Means For Publishers

Preferred Sources functions as a way for users to signal which sites they’d like to see more of, a signal that works alongside other ranking factors.

The feature is one of the ways SEOs and publishers can build an audience. Publishers can guide users to select them through buttons and links.

Screenshot Of A Preferred Sources Badge


Preferred Sources Now Available Globally

Google’s changelog confirms that the Preferred Sources feature is no longer limited to English-language content and is now available across all languages where Google Search operates. Google has also published downloadable buttons in sixteen languages that can be used by SEOs and publishers to encourage site visitors to choose the website as a preferred source.

List Of 16 Languages With Downloadable Buttons

  1. Danish
  2. English
  3. Estonian
  4. Finnish
  5. French
  6. German
  7. Hebrew
  8. Hindi
  9. Japanese
  10. Korean
  11. Portuguese (Brazil)
  12. Russian
  13. Spanish
  14. Swedish
  15. Turkish
  16. Ukrainian

The original documentation stated that the feature was “available globally in English,” which has now been removed and replaced with language confirming full international availability.

Documentation Updated To Reflect Broader Access

Google also revised supporting text to reflect the feature’s expanded reach. Language that expressly limited availability to English-language publications has been rewritten to emphasize global applicability.

For example, guidance around adding a Preferred Sources button has been updated to clarify that the feature is not limited to a subset of languages. The revised documentation explicitly notes that the feature is available in all supported languages, even if only certain assets are listed.

The updated documentation now explains:

“The preferred sources feature is available globally for queries that trigger the “Top Stories” feature in all languages where Google Search is available.

These methods are examples on how you can build your audience and help people find your site as a preferred source. It’s not required to do them in order to appear as a preferred source.”

A new section about the Preferred Sources badge adds the following:

“Add a button to your site alongside your other social CTAs. You may use your own design or download the Google button assets provided in the list. Note: This feature is available in all supported languages, not just those listed.”

Google’s Changelog Offers Insight Into Change

The official changelog explains the reasoning behind the update:

“Expanding preferred sources to all languages where Google Search is available

What: Added that the preferred sources feature is now available in all languages where Google Search is available, including new translated downloadable button assets.

Why: The preferred sources feature is now available in all languages supported by Google Search.”

Takeaways For SEO

The expansion of Preferred Sources to all supported languages broadens the opportunity for publishers in every language to influence which websites Google Search shows users. While publishers and SEOs can’t manipulate the signal itself, they can encourage their site visitors to select them as a preferred source.

For publishers, this means:

  • The ability to participate in Preferred Sources regardless of language
  • New opportunities to build audience preference signals tied to Top Stories
  • Clearer guidance on how to implement Preferred Sources prompts

500M AI Searches Later: How To Actually Improve AI Search Visibility & Citations via @sejournal, @hethr_campbell

What signals actually drive AI search visibility?

Are competitors getting cited in AI Overviews while you’re watching from the sidelines?

How do you go from AI visibility gap alerts to a system that closes them?

Most SEO teams already have dashboards showing where they’re invisible in AI search. Few have a process to fix it.

Learn To Turn AI Search Visibility Data Into A High-Visibility System

Reconnect with Sam Garg, Founder and CEO of Writesonic, as he shares his practical framework for diagnosing citation gaps, prioritizing the right actions, and automating execution with AI agents and free open-source SEO & GEO tools.

You’ll Learn:

  • What drives AI citations: Visibility signal analysis from 500M+ AI conversations. You’ll learn which content types, sources, and placements actually get cited in ChatGPT, Perplexity, and Gemini.
  • GEO tasks that move the needle: Citation outreach, content refresh, and third-party placements, plus how to use AI agents and open-source tools to automate them.
  • Where AI search is headed next: Early signals on AI ecommerce and the shift from recommendations to transactions for your channel strategy.

This SEO webinar session covers what 500M+ AI conversations reveal about how citations are earned, which actions actually move the needle (citation outreach, content refresh, third-party placements), and how to use autonomous AI agents to execute at scale.

Watch on-demand now to get the most data-backed, actionable guidance available on improving your brand’s AI search visibility.

Google AI Mode In Chrome Isn’t Killing SEO; It’s Exposing Weak SEO via @sejournal, @gregjarboe

On April 16, 2026, Robby Stein, Google Search’s VP of Product, and Mike Torres, Google Chrome’s VP of Product, announced a new way to explore the web with AI Mode in Chrome. In their announcement, the two VPs wrote that the update makes it easier to “access and engage with content and dive deeper into what you find, all without losing your place or needing to switch tabs.”

Although that sounds like a product update, it’s really a warning shot. Search is moving from a list of links to a guided experience, and that should make every SEO professional pay attention.

Why? Because if Google is now helping searchers compare, refine, and continue their journey without leaving the AI layer, then the old “rank and hope” model is no longer enough. Search is becoming a trust test. And a plethora of SEO content isn’t passing it.

The Real Shift Is Control

For years, SEOs have measured success in visibility, rankings, and click-through rate. Those still matter. But AI Mode changes the sequence. A user can now start with a Google-generated answer, stay in the AI interface, open publisher pages side by side, and keep asking follow-up questions without restarting the journey. That means the click is no longer the beginning of discovery. In many cases, it’s the moment of verification.

The scale of this shift is hard to overstate. Recent research published by Index Exchange found that 69% of publishers experienced year-over-year ad opportunity declines throughout 2025, with an average drop of 14%.

Meanwhile, Ahrefs documented in February 2026 that AI Overviews now correlate with a 58% reduction in click-through rates for top-ranking pages – nearly double the 34.5% decline measured just a year earlier. Against that backdrop, the side-by-side view is not a concession to publishers. It is a structural change in what a “click” even means.

That has real consequences for reporting, budget allocation, and internal buy-in. Last-click attribution will look less and less like reality. That’s a problem for anyone still treating SEO as a traffic-only discipline.

AI Mode Is A Stress Test

Google’s latest move isn’t bad news for SEO. It’s a stress test. If your content is thin, generic, or interchangeable, then AI Mode makes that weakness easier to see. If your content is original, useful, and clearly structured, then AI Mode gives it more chances to surface at the right moment.

Rand Fishkin made this case bluntly in his post on April 20, 2026, “5 Strategic Features that Predict Survival in the Zero-Click Era,” citing an analysis by Cyrus Shepard of 400 websites that did not collapse during what Fishkin called “the great traffic apocalypse of 2024-2026.”

What are the five features shared by survivors? They offered a unique product or service, enabled task completion, held proprietary assets, maintained tight topical focus, and built a strong brand.

Critically, Fishkin argues that “no amount of tactical excellence can save you” if your business model is one that Google and AI can disintermediate. SEO tactics alone are not the answer. The answer is whether your site offers something AI cannot summarize away.

That distinction matters here. AI Mode is not replacing SEO. It’s exposing weak SEO and rewarding strong SEO. SEO built on formulaic targeting and low-value content will struggle. SEO built on genuine expertise, clear structure, and editorial judgment will be better positioned.

The Open Web Is Still Here – For Now

It would be easy to turn this into another dramatic story about Google swallowing the web. But the side-by-side design suggests something more nuanced: Google still needs the open web. It still wants users to explore publisher pages. The announcement confirmed that early testers found that “having both Search and the web side-by-side helped them stay focused on their tasks while exploring useful webpages.”

The sites most likely to benefit are the ones that offer something AI cannot flatten into a summary: original reporting, proprietary data, firsthand experience, strong analysis, and a point of view that adds value. Fishkin’s data backs this up: Letterboxd survived Google’s decimation of movie review sites because it offers something unique – its own user-generated data to graph movie popularity over time. That is something ChatGPT cannot replicate. AI Mode compresses the margin for mediocrity.

What SEOs Should Do Now

The core lesson here is this: The search journey is becoming less linear, more mediated, and more dependent on whether your content earns its place inside the process.

SEOs should focus on content that is clear enough to answer quickly, structured enough to be parsed, specific enough to be worth citing, original enough to stand apart, and credible enough to deserve trust.

They should also revisit how success is measured. If AI Mode affects discovery earlier in the journey, then SEO value may show up in places traditional reporting has ignored – assisted conversions, branded demand, and cross-channel influence.

Google AI Mode isn’t killing SEO. It’s exposing weak SEO, rewarding strong SEO, and forcing everyone else to rethink what visibility really means. That makes it one of the most important search stories of 2026 so far.



Featured Image: Kateryna Onyshchuk/Shutterstock

How Brands Are Increasing AI Visibility By Up To 2,000% [Webinar] via @sejournal, @hethr_campbell

The answer is Reddit, and yes, this 90-day strategy is worth your time.

Most brands treat Reddit as an afterthought.

However, Reddit is where buyers finalize their purchase decisions.

Reddit is where human trust gets built.

Therefore, Reddit serves as a trust signal for how AI search tools determine which brands are worth recommending.

AI Mentions & Cites Brands Based On Trust Signals, Across Channels

When ChatGPT, Perplexity, or Google AIO recommends a brand, it’s drawing on a web of signals that indicate the brand is credible, relevant, and mentioned by real people in real contexts.

Reddit is one of the most authentic of those signals.

Your opportunity: not Reddit instead of other channels, but Reddit as a meaningful addition to the multi-channel trust footprint AI reasons from.

One brand OGS Media worked with saw 2,000% AI visibility growth in 90 days after building a genuine Reddit presence. That’s the strategy Bartosz and Brent are unpacking on May 5.

What You’ll Learn In This AI Search Webinar

  • How Reddit community content contributes to the multi-channel trust signals AI uses to evaluate and surface brands
  • The 5-stage framework behind OGS Media’s 2,000% AI visibility result
  • The 7 most common Reddit mistakes brands make
  • What authentic subreddit engagement looks like when it’s actually working
  • How to find and engage in Reddit conversations that influence both buyers and AI

About the Speakers

Bartosz Goralewicz is the CEO of OGS Media and one of the most experienced Reddit marketing practitioners in SEO. Brent Csutoras is a Reddit Official Advisor and the Owner of Search Engine Journal, with nearly two decades of hands-on Reddit strategy for brands across every major vertical.

AI Gives You The Vocabulary. It Doesn’t Give You The Expertise via @sejournal, @DuaneForrester

Hiring managers are watching something uncomfortable happen in interview rooms right now. Candidates arrive with the right credentials, the right vocabulary, the right tool stack on their résumés, and then someone asks them to reason through a problem out loud, and the room goes quiet in the wrong way. Not in the thoughtful kind of way, but the empty kind that tells you the person across the table has never actually had to think through a hard problem on their own. And research is converging on the same conclusion. Microsoft, SBS Swiss Business School, and TestGorilla have all documented the same pattern independently: Heavy AI reliance correlates directly with declining critical thinking, and the effect is strongest in younger, less experienced practitioners.

This isn’t a technology story so much as a cognition story, and the SEO industry is living a version of it in slow motion. What none of those studies name is the specific mechanism: the three-layer architecture of expertise where AI commands the retrieval layer completely, and the judgment layers underneath it are more exposed than they’ve ever been. That architecture is what this piece is about.

The Debate Is Framed On The Wrong Axis

Every conversation about AI and critical thinking eventually lands in the same place: humans versus machines, organic thinking versus generated output, authentic expertise versus artificial fluency. It’s a compelling frame and also the wrong one.

The real fracture line isn’t human versus AI. It’s retrieval versus judgment, and those are not the same cognitive act, even though AI has made them feel interchangeable in ways that should concern anyone serious about their craft.

Retrieval is access. It’s the ability to surface relevant information, synthesize patterns across a body of knowledge, and produce fluent output that maps to the shape of expertise. Large language models are extraordinary at this, genuinely and structurally superior to any individual human at the retrieval layer, and getting better at speed. Fighting that reality is not a strategy.

Judgment, however, is different. Judgment is knowing which question is actually the right question given this specific context, the ability to recognize when something that looks correct is wrong for this situation in ways that aren’t in any training data, the accumulated weight of having been wrong in consequential situations, learning why, and recalibrating. You cannot retrieve your way to judgment. You build it through deliberate practice under real conditions, over time, with skin in the game that a model structurally cannot have.

The problem isn’t that AI handles retrieval well. The problem is that retrieval output now sounds so much like judgment output that the gap between them has become nearly invisible, especially to people who haven’t yet built enough judgment to know the difference.

The Judgment Stack

Think about expertise as a stack, not a spectrum.

Layer 1 is retrieval – synthesis, pattern vocabulary, volume processing, surface recognition. This is AI territory, and handing work in this area over to an AI is not weakness but correct resource allocation. The practitioner who uses an LLM to compress a competitive analysis that would have taken three hours into 40 minutes isn’t cutting corners; they’re buying back time to do the work that actually compounds.

Layer 2 is the interface layer – hypothesis formation, question quality, contextual filtering, knowing which output to trust and which to interrogate. This is where the leverage actually lives, and it’s fundamentally human-plus-AI territory. Your prompt quality is a direct proxy for your judgment quality. Two practitioners can feed the same LLM the same general problem and get outputs that are miles apart in usefulness, because one of them knows what a good answer looks like before they ask the question, and that foreknowledge doesn’t come from the model but from Layer 3 working backward.

Layer 3 is consequence and context – the ability to recognize when a pattern that has always worked is about to break, to assess novel situations that don’t map cleanly to anything in the training data, to hold strategic framing steady under pressure when the data is ambiguous. This is human territory, not because AI couldn’t theoretically develop something like it, but because it requires something a deployed model structurally cannot have: skin in the game, real consequence, the accumulated scar tissue of being wrong when it mattered and having to carry that forward.

The critical thinking crisis everyone is diagnosing right now is not, at its root, an AI problem but a Layer 2 collapse. People skip directly from Layer 1 retrieval to Layer 3 claims, bypassing the judgment infrastructure entirely. Layer 1 output is fluent, confident, and often correct enough to pass casual scrutiny, which keeps the gap invisible right up until someone asks a follow-up the model didn’t anticipate, and the person has no independent footing to stand on.

What SEO Is Actually Revealing

SEO is a useful diagnostic here because the industry has always been an early signal for how the broader marketing world processes technological disruption. We were the first to chase algorithmic shortcuts at scale. We were the first to industrialize content in ways that traded quality for volume. And right now we are watching two distinct practitioner populations diverge in real time, with the gap between them widening faster than most people have noticed.

The first population is using LLMs as answer machines: feed the problem in, take the output out, ship it. Ask the model what’s wrong with a site’s rankings. Ask it to write the content strategy. Ask it to explain why traffic dropped. This isn’t entirely without value, since Layer 1 retrieval has genuine utility even here, but the practitioners operating purely at this layer are making a trade they may not fully understand yet. They are outsourcing the only part of the job that compounds in value over time. Every hard problem they hand off to a model without first attempting to reason through it themselves is a training repetition they didn’t take, a weight they didn’t lift, and those repetitions are how Layer 3 gets built. You want the muscle? You have to do the work.

The second population is using LLMs as reasoning partners. They come to the model with a hypothesis already formed, a question already sharpened by their own thinking, and they use the output to pressure-test their reasoning, surface considerations they may have missed, and accelerate the parts of the work that don’t require their hard-won judgment, which frees them to apply that judgment more deliberately where it matters. These practitioners are getting faster and better simultaneously, because the model is amplifying something that already exists.

The difference between these two groups has nothing to do with tool access, since they are using the same tools, and everything to do with what each practitioner brings to the model before they open it.

The Leveling Lie

The argument for AI as a leveling tool is not wrong; it’s just incomplete, and that incompleteness is where the damage happens.

A junior practitioner today has access to a compression of the field’s knowledge that would have been unimaginable five years ago. Ask an LLM about crawl budget allocation, entity relationships, structured data implementation, or the mechanics of how retrieval-augmented systems weight freshness signals, and you will get a coherent, usually accurate answer in seconds. That is a genuine democratization of Layer 1, and dismissing it as illusory is its own form of gatekeeping.

But Layer 1 access is not expertise. It is the vocabulary of expertise, and there is a specific kind of danger in having the vocabulary before you have the understanding, because fluency masks the gap. You can discuss the concepts. You can deploy the terminology correctly. You can produce output that looks like the work of someone with deep experience, and you can do all of that while having no independent capacity to evaluate whether what you just produced is actually right for the situation in front of you.

This is not a character flaw but a metacognitive failure, the condition of not knowing what you don’t yet know. The junior practitioner using an LLM to accelerate their access to field knowledge isn’t being lazy. In many cases, they are working hard and genuinely trying to develop. The problem is that Layer 1 fluency generates a confidence signal that isn’t calibrated to actual capability. The model doesn’t tell you when you’ve hit the edge of what it knows. It doesn’t flag the situations where the standard answer breaks down. It doesn’t know what it doesn’t know either, and neither do you yet, and that combination is where well-intentioned work quietly goes wrong.

The leveling effect is real, but the ceiling on it is lower than most people assume. What gets leveled is access to the knowledge layer. What doesn’t get leveled (what cannot be compressed or transferred through any tool) is the judgment architecture that determines what you do with that knowledge when the situation doesn’t follow the pattern.

The practitioners who understand this distinction will use AI to accelerate their development. The ones who don’t will use it to feel further along than they are, right up until the moment a genuinely novel problem requires something they haven’t built yet.

Where The Abdication Actually Happens

Let’s be precise about this, because the accusation of abdication usually gets thrown around in ways that are more emotional than useful.

Using AI at Layer 1 is not abdication. Letting a model handle competitive analysis synthesis, first-draft content frameworks, technical audit pattern recognition, or structured data generation is correct delegation, since these are retrievable tasks and doing them manually when a better tool exists isn’t intellectual virtue but inefficiency pretending to be rigor.
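To make the distinction concrete, consider the structured data example: the task is templatable end to end, which is exactly what makes it retrievable Layer 1 work rather than judgment-bearing work. Here is a minimal sketch (the page values are invented for illustration):

```python
# Hypothetical illustration: structured data generation is a templatable
# task, which is why delegating it is correct Layer 1 delegation.
# The headline, author, and date below are invented example values.
import json

def article_jsonld(headline: str, author: str, published: str) -> str:
    """Build a minimal schema.org Article JSON-LD block."""
    data = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "datePublished": published,  # ISO 8601 date string
    }
    return json.dumps(data, indent=2)

print(article_jsonld("Example Post", "Jane Doe", "2025-01-15"))
```

Given the same inputs, the output is the same every time. No judgment is being exercised, so none is being outsourced; that is the test for whether a task belongs at Layer 1.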

Abdication happens at a specific and different point: when you stop taking on the problems that would have built your Layer 3 judgment and start routing them directly to a model instead. The loss isn’t that the model’s output is bad; it’s that the attempt itself was the point. The struggle to formulate an answer to a hard problem, even an incomplete or wrong answer, is the mechanism by which judgment gets built. Hand that struggle off consistently, and you are not saving time but spending something you may not realize you’re spending until it’s gone.

This is the part of the conversation that doesn’t get said clearly enough: The low-consequence training repetitions are how you prepare for the high-consequence moments. A practitioner who has reasoned through hundreds of traffic anomalies, content decay patterns, and crawl architecture decisions (even inefficiently, even wrongly at first) has built something that cannot be replicated by having asked an LLM to reason through those same problems on their behalf, because the model’s reasoning is not your reasoning, just as watching someone else lift the weight does not build your muscle.

The senior practitioners who feel their position eroding right now are often misdiagnosing the threat. The threat isn’t that AI makes their knowledge less valuable, since genuine Layer 3 judgment is actually more valuable in an AI-saturated environment, not less, precisely because it becomes rarer as more people mistake Layer 1 fluency for the whole stack. The real threat is that the market hasn’t developed clean signals yet for distinguishing Layer 3 capability from Layer 1 fluency dressed up convincingly. It’s a signal problem that is temporary and will resolve itself in the most public and consequential ways possible – in front of clients, in front of leadership, in front of the situations where someone needs to make a call the model can’t make.

The answer for experienced practitioners is not to resist AI but to use it in ways that continue building Layer 3 rather than substituting for it. Use the model to go faster on Layer 1, and use the time that buys you to take on harder problems at Layers 2 and 3 than you could have reached before. The ceiling on your development just got higher; whether you climb toward it is a choice.

The answer for junior practitioners is harder but more important: Understand that the shortcut doesn’t shorten the path but changes the surface underfoot. You can move across the terrain faster with better tools, but the terrain still has to be crossed, and there is no prompt that builds the judgment architecture for you. Only doing the work, being wrong in situations that matter, and carrying those lessons forward builds it.

The Prerequisite

Critical thinking is not the alternative to AI use. Instead, it is the prerequisite for AI use that compounds.

Without it, you are operating entirely at Layer 1, fluent and fast and increasingly indistinguishable from everyone else who has access to the same tools you do, and everyone has access to the same tools you do. The tools are not the differentiator and never were, serving instead as a floor, and that floor is rising under everyone’s feet simultaneously.

What compounds is judgment. The accumulated capacity to ask better questions than the person next to you, to recognize the moment when the standard pattern breaks, to hold a strategic position steady when the data is ambiguous and the pressure is real. That capacity doesn’t live in the model but in the practitioner, built over time through deliberate practice under real conditions, and it is the only thing in The Judgment Stack that gets more valuable as the tools get better.

The interview rooms where qualified candidates go quiet when asked to reason out loud are not showing us a technology problem. They are showing us what happens when a generation of practitioners optimizes for Layer 1 output without building the infrastructure underneath it, accumulating the vocabulary without the architecture, and the fluency without the foundation.

The practitioners who will matter in three years are building that foundation right now, using every tool available to go faster at Layer 1 and using the time that buys them to go deeper at Layer 3 than was previously possible. They are not choosing between AI and thinking but using AI to think harder than they could before, and that is not a leveling effect but a compounding one … and compounding, as anyone who has spent serious time in this industry understands, is an advantage worth building.

This post was originally published on Duane Forrester Decodes.


Featured Image: Summit Art Creations/Shutterstock; Paulo Bobita/Search Engine Journal