The CMO-CTO Power Struggle: Solving The Web Effectiveness Stalemate via @sejournal, @billhunt

In many organizations, a quiet but costly stalemate exists between two powerful forces: the chief marketing officer (CMO) and the chief technology officer (CTO). At the heart of this tension lies a fundamental misalignment. It is not of intent, but of incentives, timelines, and definitions of success.

What should be a collaborative engine for digital growth instead becomes a friction point that stalls progress, frustrates teams, and undermines website performance.

The Paradox: Shared Mission, Divergent Metrics

The CMO and CTO should be natural allies. Marketing relies on infrastructure such as bandwidth, uptime, speed, and scalability to execute campaigns, scale content, and deliver engaging experiences. And the CTO’s success often hinges on the growth marketing drives: traffic spikes, conversions, and customer engagement that justify investment in infrastructure.

Yet, despite their interdependence, their teams often operate in conflict.

This friction often arises because:

  • Different Success Metrics: CTOs are measured by uptime, performance, security, and technical debt reduction; CMOs by campaign speed, reach, conversions, and engagement. What should be complementary can feel mutually exclusive when objectives aren’t aligned or shared.
  • Perceived Bottlenecks: CMOs may perceive technical roadmaps or risk-management procedures as hindering progress. At the same time, CTOs may see marketing priorities as “shiny objects” that risk stability or security – each side underestimating the complexity and importance of the other’s world.
  • Communication Gaps: Technical and marketing teams may lack routine, structured communication, leading to misalignment. Without early involvement, marketing might pursue tools or campaigns incompatible with the site’s architecture, while engineering might roll out upgrades that inadvertently hurt campaign performance or SEO.

The irony is apparent: Without robust, scalable, and secure infrastructure, growth collapses under its own weight; without ambitious, creative marketing, traffic and brand affinity stagnate despite technical readiness.

The Cost Of The Stalemate

This tension is not just internal politics; it’s a strategic risk. When the web becomes the battleground between growth and governance, the customer experience suffers:

  • Content takes months to publish.
  • SEO recommendations remain in limbo.
  • Pages break post-launch due to miscommunication.
  • Critical updates are missed, leading to security gaps or ranking drops.

Meanwhile, the executive team wonders why web performance is lagging despite strong talent on both sides.

Case In Point: Overcoming The “IT Line Of Death”

I was invited into a project by the company’s board of directors. After making my pitch, I felt like I had been given the golden ticket: the CEO told me I could have whatever I needed to improve search performance. But when I walked into the IT department, I was met with the harsh reality of the IT roadmap. The CTO informed me that every item on the list had similar C-level backing; despite an ever-growing list of approved critical actions, budgets and resources had not changed.

This was my introduction to the IT Line of Death – the fine line between what gets done and what gets ignored.

In the CTO’s attempt to be helpful, he told me there were only two options:

  • Get my requests prioritized over the others, or
  • Embed SEO fixes into existing IT priorities.

The only chance of success was to integrate SEO into as many of the existing projects as possible. That meant rethinking how we leveraged workflows, ownership structures, and business priorities. If SEO isn’t baked into the original blueprint and lacks executive support, it will always be an uphill battle.

Another Case: When Bandwidth Beats Visibility

At one B2B tech company, I was engaged to help increase traffic to the website. I started with my technical review and noticed that most of the site was blocked to web crawlers. The server team had done this deliberately because they were concerned that search engine spiders would consume too much bandwidth. Their KPI? An almost unrealistic “Nine Nines” uptime requirement.

Because uptime was their measure of success, any perceived risk to it, even from legitimate indexing activity, was blocked.

Meanwhile, the marketing team had a goal of exponential search growth. These conflicting KPIs put the teams in direct opposition. It took months of structured testing and validation to prove that crawl activity wouldn’t threaten system performance. Only after that were the blocks lifted, and search traffic began to climb.

The lesson: Unless there is a shared understanding of risk, value, and outcomes, the system defaults to self-protection over performance. And that stalls growth.

SEO As A Product: A Call For Deep Integration

In recent years, there has been a shift toward treating SEO as a product, which amplifies the need for proper integration between the CMO and CTO functions. Eli Schwartz’s Product-Led SEO framework recasts SEO as a product development process, not a marketing channel. This view demands a collaborative strategy, user-driven technical builds, and ongoing partnerships between engineering, SEO, and content teams.

When SEO is treated like a product:

  • It has a roadmap, not just a to-do list.
  • It gets budgeted and staffed accordingly.
  • It evolves continuously based on user feedback, search behavior, and business priorities.

This approach elevates SEO to its rightful place: a shared strategic function that requires co-ownership and integrated planning from both marketing and technology leaders.

Turning Friction Into Force

In “Who Owns Web Performance,” we identified the shared nature of visibility, speed, and conversion outcomes. And in “From Line Item to Leverage,” we explored how visibility creates compounding value. But that value doesn’t materialize unless technology and marketing work in tandem, and this starts with the CMO and CTO.

The most effective organizations recognize this symbiotic relationship and create mechanisms for true collaboration:

1. Joint Planning

Have CTOs and CMOs co-create roadmaps for major website initiatives. When both are in the room from the start, stability and scalability get built alongside creativity and agility.

2. Unified Dashboards

Develop shared KPIs that reflect both technical and marketing priorities. This might include:

  • Site speed + Core Web Vitals.
  • Conversion rates by traffic source.
  • Organic visibility + uptime.
  • Structured data health + content engagement.

This makes success a “both/and,” not an “either/or.”

3. Blended Teams

Create cross-functional squads or “growth pods” that combine engineering, SEO, design, and marketing talent. These integrated teams reduce siloed thinking and create tighter feedback loops.

4. Visibility As A Shared Objective

Search visibility, indexability, and performance shouldn’t belong to one department. They are shared outcomes of infrastructure, content, governance, and strategy. Establish shared accountability with Visibility SLAs and cross-team escalation paths.

Executive Mediation: The Role Of The CEO Or COO

Ultimately, resolving this power struggle often requires intervention from above. The chief executive officer, chief operating officer, or chief digital officer must set the tone that growth and resilience are co-requisites, not competing values.

This includes:

  • Setting expectations that speed must coexist with security.
  • Holding teams accountable for shared outcomes.
  • Resourcing integration – not just in tools, but in time and team alignment.

Web Infrastructure Is Growth Infrastructure

If there’s one takeaway from the CMO-CTO power struggle, it’s this:

Your website isn’t just a marketing channel. It’s a growth engine – and it needs to be treated as such.

When SEO, performance, indexability, and campaign agility are considered upstream – not after launch – you don’t just get faster launches; you get smarter outcomes. You get sites that rank, load quickly, deliver meaningful content, and convert effectively.

This is the web as strategic infrastructure. And it can only be built when marketing and technology align.

From Turf Wars To Transformation

As AI-driven search, multimodal discovery, and customer expectations evolve, the web is no longer just a marketing asset – it’s core infrastructure. It requires both creative fuel and technical architecture.

That means the CMO-CTO relationship must shift from tension to tandem.

Organizations that navigate this shift don’t just eliminate friction – they unlock performance.

Because when technology and marketing move in sync, the web becomes more than a channel. It becomes a competitive advantage.

Featured Image: Creativa Images/Shutterstock

AI SEO: How To Understand AI Mode Rankings via @sejournal, @martinibuster

A simplified explanation of how Google ranks content is that it’s based on understanding search queries and web pages, plus a number of external ranking signals. With AI Mode, that’s just the starting point for ranking websites. Even keywords are starting to go away, replaced by increasingly complex queries and even images. How do you optimize for that? The following steps can help answer that question.

Latent Questions Are A Profound Change To SEO

The word “latent” means something that exists but cannot be seen. When a user issues a complex query, the LLM must not only understand the query but also map out follow-up questions that a user might ask as part of an information journey about the topic. Those follow-up questions are the latent questions. Virtually every query contains latent questions.

Google’s Information Gain Patent

The issue of latent questions poses a new problem for SEO: How do you optimize for questions that are unknown? Optimizing for AI search means optimizing for the entire range of questions that are related to the initial or head query.

But even the concept of a head query is going away because users are now asking complex queries which demand complex answers. This is precisely why it may be useful for AI SEO purposes to optimize not just for one query but for the immediate information needs of the user.

How does Google understand the information need that’s hidden within a user’s query? The answer is found in Google’s Information Gain patent, which describes ranking a web page that is relevant for a query and then ranking other web pages that have different but related content.

Identify The Latent (Hidden) Questions

One way to look at AI search results is to break them down into the questions the AI’s answers are satisfying – that is, to identify the hidden query fan-out questions.

For example, if you ask Google’s AI Mode how to make pizza dough, it will generate a pizza dough recipe. The answer is synthesized from multiple queries.

There is a way to extract the hidden (latent) questions by using a Reverse Question Answering Prompt, which I’ll show below.

Here is an example of an initial query and the additional query fan-out questions that are involved:

Initial Query: How to make pizza dough?

Query Fan-Out Questions:

  • What ingredients are needed to make pizza dough?
  • How much pizza dough does this recipe make?
  • What indicates that the yeast is active?
  • How long should the dough be kneaded by hand?
  • How long should the dough be kneaded with a mixer?
  • How long should the dough rise during the first proofing?
  • How can you develop more complex flavor in the dough?
  • What should the oven temperature be for baking the pizza?
  • How long should the pizza bake?

Actual AI Mode Question And Answers

Here are some of the hidden query fan-out questions accompanied by the answers that Google’s AI Mode generated.

Question: What ingredients are needed to make pizza dough?
Answer: “1 and 1/3 cups warm water (around 105-110°F or 40-43°C); 1 tablespoon granulated sugar (or honey); 1 packet active dry or instant yeast (2 1/4 teaspoons); 3 1/2 to 4 cups all-purpose or bread flour, plus more for dusting; 2 tablespoons extra-virgin olive oil, plus more for greasing; 1 teaspoon salt.”

Question: How much pizza dough does this recipe make?
Answer: “This recipe makes enough dough for two 10-12 inch pizzas.”

Question: What indicates that the yeast is active?
Answer: “Cover and let it sit for about 5-10 minutes, until the mixture is foamy and bubbly. This indicates the yeast is active.”

Screenshot Of The Actual AI Mode Answer

Screenshot of the AI Mode answer for the pizza dough query.

Reverse Question Answering Prompt

You can use the Reverse Question Answering Prompt to identify the underlying questions in any AI Mode answer. You can even use a similar but more precise prompt to analyze your own content to identify what questions the document answers. It’s a good way to check if your content does or does not answer the questions you want it to answer.

Prompt To Extract Questions From AI Mode

Here is the prompt to use to extract the hidden questions within an AI Mode answer:

Analyze the document and extract a list of questions that are directly and completely answered by full sentences in the text. Only include questions if the document contains a full sentence or sentences that clearly answers it. Do not include any questions that are answered only partially, implicitly, or by inference.

For each question, ensure that it is a clear and concise restatement of the exact information present. This is a reverse question generation task: only use the content already present in the document.

For each question, also include the exact sentences from the document that answer it. Only generate questions that have a complete, direct answer in the form of a full sentence or sentences in the document.

Reverse Question Answering Analysis For Web Content

The previously described prompt can be used to extract the questions that are answered by your own or a competitor’s content. But it will not differentiate between the core search queries the document is relevant for and other questions that are ancillary to the main topic.

To do a Reverse Question Answering analysis with your own content, try this more precise variant of the prompt:

Analyze the document and extract a list of questions that are core to the document’s central topic and are directly and completely answered by full sentences in the text.

Only include questions if the document contains a full sentence or contiguous sentences that clearly answers it. Do not include any questions that are answered only partially, implicitly, or by inference. Crucially, exclude any questions about supporting anecdotes, personal asides, or general background information that is not the main subject of the document.

For each question, ensure that it is a clear and concise restatement of the exact information present. This is a reverse question generation task: only use the content already present in the document.

For each question, also include the exact sentences from the document that answer it. Only generate questions that have a complete, direct answer in the form of a full sentence or sentences in the document.

The above prompt is meant to emulate how an LLM or information retrieval system might extract the core questions that a web document answers, while ignoring the parts of the document that aren’t central to its informational purpose, such as tangential commentary that does not directly contribute to the document’s main topic or purpose.
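
If you want to run either prompt at scale rather than pasting documents into a chat window, here is a minimal Python sketch. It assumes the OpenAI Python client and a placeholder model name – illustrative choices, not something the article prescribes – and any chat-style LLM API would work the same way.

from openai import OpenAI

# Condensed version of the prompt above; paste the full text in practice.
REVERSE_QA_PROMPT = (
    "Analyze the document and extract a list of questions that are directly "
    "and completely answered by full sentences in the text. For each question, "
    "include the exact sentences from the document that answer it."
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_answered_questions(document_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"{REVERSE_QA_PROMPT}\n\nDocument:\n{document_text}",
        }],
    )
    return response.choices[0].message.content

# Usage: print(extract_answered_questions(open("page.txt").read()))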

Cultivate Being Mentioned On Other Sites

Something that is becoming increasingly apparent is that AI search tends to rank companies whose websites are recommended by other sites. Research by Ahrefs found a strong correlation between sites that appear in AI Overviews and branded mentions.

According to Ahrefs:

“So we looked at these factors that correlate with the amount of times a brand appears in AI overviews, tested tons of different things, and by far the strongest correlation, very, very strong correlation, almost 0.67, was branded web mentions.

So if your brand is mentioned in a ton of different places on the web, that correlates very highly with your brand being mentioned in lots of AI conversations as well.”

Read: Data Shows Brand Mentions Boost AI Search Rankings

This finding strongly suggests that visibility in AI search may depend less on backlinks and more on how often a brand is discussed across the web. AI models seem to learn which brands to recommend from how often those brands are mentioned across other sites, including sites like Reddit.

Post-Keyword Ranking Era

We are in a post-keyword ranking era. Google’s organic search was already using AI and a core topicality system to better understand queries and the topics that web pages were about. The big difference now is that Google’s AI Mode lets users search with long, complex, conversational queries – queries that aren’t necessarily answered by web pages optimized for keyword relevance rather than for what people are actually looking for.

Write About Topics

Writing about topics seems like a straightforward approach, but what it means depends on the context of the topic.

What “topic writing” proposes is that instead of writing about the keyword Blue Widget, the writer must write about the topic of Blue Widget.

The old way of SEO was to think about Blue Widget and all the associated keyword phrases:

  • How to make blue widgets
  • Cheap blue widgets
  • Best blue widgets

Images And Videos

The up-to-date way to write is to think in terms of answers and helpfulness. For example, do the images on a travel site communicate what a destination is about? Will a reader linger on the photo? On a product site, do the images communicate useful information that will help a consumer determine if something will fit and what it might look like on them?

Images and videos, if they’re helpful and answer questions, could become increasingly important as users begin to search with images and increasingly expect to see more videos in the search results, both short and long-form.

Featured Image by Shutterstock/Nithid

LLM Traffic Is Shrinking via @sejournal, @Kevin_Indig

LLM referral traffic has been growing +65% year-to-date. But we should assume 0 in the future.

LLM Referral Traffic Is Shrinking

LLM referral traffic in B2B grew +65.1% since January – but dropped -42.6% since July.

Image Credit: Kevin Indig

My December prediction of 50% organic by 2027 is dead:

  • In December 2024, I analyzed six B2B sites and found LLM referral traffic was growing at such a fast rate it would make up 50% of organic traffic in three years.
  • Today, I’m finding the monthly growth rate of LLM traffic dropped from 25.1% in 2024 to 10.4% in November 2025.
  • Even from January to July 2025, the average growth rate was lower (19.2%) than my projection. That’s fast, but not enough to reach 50% of organic traffic in three years.

LLM contribution to organic traffic grew from 0.14% in 2024 to 1.10% in 2025, which is more than I projected (0.79%).

Image Credit: Kevin Indig

But with organic traffic falling due to AI Overviews, this growth becomes meaningless.

Fewer Citations Despite Growing Usage

In August, several factors influenced LLM referral traffic:

  1. Seasonality: Siege Media documented that B2B sites lost LLM traffic in August due to vacation season.
  2. Router: GPT-5, which launched on August 7, has a router that picks the model. The router favors non-reasoning models, which show fewer citations and send less traffic out.
  3. Concentration: Josh from Profound found a higher concentration of referrals to Reddit and Wikipedia starting late July.

Business seasonality has a lower impact because neither ChatGPT (consumer focus) nor Claude (business focus) sees a decrease in site visits.

Image Credit: Kevin Indig

ChatGPT mentions, however, dropped by one-third in October and continue dropping in November.

Image Credit: Kevin Indig

Citations for large domains like Reddit or Wikipedia follow suit (based on Profound data).

Major sites see citation declines in September (Image Credit: Kevin Indig)

Conclusion: LLM visits are up, which removes seasonality as the dominant cause. The driver of lower referral traffic is ChatGPT, which shows fewer citations because of the model router.

Visibility Is The Real Price

Traffic was never the right way to value LLMs because LLMs make clicks redundant:

  • The AI Mode study I published last month validates that clicks only occurred for shopping-related tasks (zero-click share = ~100%).
  • Pew Research has found that only 1% of users click links in AI Overviews.

Focusing on traffic leads to disappointing results. ChatGPT is more like TikTok than Google Search. The currency of the AI world is visibility.

The good news: LLMs grow the pie. Semrush found people don’t use Google less often because they also use ChatGPT. If LLMs are additive to Google Search, the visibility surface grows even though clicks per source shrink. You have more places to be seen, fewer clicks per place.

But our success metrics need to change. Referral traffic works for neither ChatGPT nor Google, as AI Overviews and AI Mode swallow more clicks. Instead, we need to adopt visibility-first metrics.

Default To Zero LLM Traffic

  1. Track LLM and organic search seasonality for your vertical to measure the total pie of citations and make sense of drops/spikes.
  2. Monitor total citation and mention count to answer the question, “Are we growing because the market grows?” Lower citations/mentions mean fewer chances to influence purchase decisions.
  3. Prioritize brand mentions over citations in LLMs. Mentions without links drive familiarity and influence purchase decisions.
  4. Stop expecting (meaningful) LLM referral traffic. Budget for visibility.
  5. Invest resources where LLMs go to train: UGC and third-party reviews like Reddit, YouTube, review sites, community forums.

Featured Image: Paulo Bobita/Search Engine Journal

Ahrefs Data Shows Brand Mentions Boost AI Search Rankings via @sejournal, @martinibuster

The latest Ahrefs podcast shares data showing that brand mentions on third-party websites help improve visibility across AI search surfaces. What they found is that brand mentions correlate strongly with ranking better in AI search, indicating that we are firmly in a new era of off-page SEO.

Training Data Gets Cited

Tim Soulo, CMO of Ahrefs, said that off-page activity that earns mentions on other sites improves visibility in AI search results, both those based on training data and those drawing from live search results. The benefits of off-page SEO apply to both; the only difference is that training data doesn’t get into LLMs right away.

Tim recommends identifying where your industry gets mentioned:

“You just need to see like where your competitors are mentioned, where you are mentioned, where your industry is mentioned.

And you have to get mentions there because then if the AI chatbot would do a search and find those pages and create their answer based on what they see on those pages, this is one thing.

But if some of the AI providers will decide to retrain their entire model on a more recent snapshot of the web, they will use essentially the same pages.”

Tim cautioned that AI companies don’t continuously ingest new web data for training, and that there can be a lag of months before large language models receive fresh training data from the web.

Appear On Authoritative Websites

Although Tim did not mention specific tactics for obtaining brand mentions, in my opinion, off-page link-building strategies don’t have to change much to build brand mentions.

Tim underlined the importance of appearing on authoritative websites:

“So yeah, …essentially it’s not that you have to use different tactics for those things. You do the same thing, you appear like on credible websites, but yeah, let’s continue.”

The only thing that I would add is that authoritativeness in this situation means a site gets mentioned by AI search. But the other thing to think about is whether a site is simply the go-to for a particular kind of information; relevance matters, too.

Topicality Of Brand Mentions

The other thing that was discussed is the topicality of the brand mentions, meaning the context in which the brand is discussed. Ryan Law, Ahrefs’ Director of Content Marketing, said that the context of the brand mention is important, and I agree. You can’t always control the narrative, but that’s where old-fashioned PR outreach comes in, where you can include quotes and so on to build the right context.

Law explained:

“Well, that segues very nicely to what I think is probably the most useful discrete tactic you can do, and that is building off-site mentions.

A big part of how LLMs understand what your brand is about and when it should recommend it and the context it should talk about you is based on where you appear in its training data and where you appear on the web.

  • What topics are you commonly mentioned alongside?
  • What other brands are you mentioned alongside?

I think Patrick Stox has been referring to this as the era of off-page SEO. In some ways, the content on your own site is not as valuable as the content about you on other pages on the web.”

Law mentioned that these off-page mentions don’t have to be in the form of links in order to be useful for ranking in AI search.

Testing Shows Brand Mentions Are Important

Law went on to say that their data shows that brand mentions are important for ranking. He mentions a correlation coefficient of 0.67, which is a measure of how strongly two variables are related.

Here is the correlation coefficient scale:

  • 1.0 = perfect positive correlation (two things are related).
  • 0.0 = no correlation.
  • –1.0 = perfect negative correlation (for example, for every minute you drive, the distance remaining gets smaller).

So, a correlation coefficient of 0.67 means that there’s a strong relationship in what’s observed.
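
For intuition, here is a small Python sketch of what a correlation of roughly that strength looks like. The data below is synthetic and invented purely for illustration – it is not Ahrefs’ dataset.

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sites: brand mentions across the web vs. AI Overviews appearances.
# The relationship is real but noisy, which is what a ~0.67 correlation means.
mentions = rng.uniform(0, 1000, size=200)
appearances = 0.05 * mentions + rng.normal(0, 15, size=200)

r = np.corrcoef(mentions, appearances)[0, 1]
print(f"Pearson correlation: {r:.2f}")  # strong, but far from a perfect 1.0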

Law explained:

“And we did indeed test this with a bit of research.

So we looked at these factors that correlate with the amount of times a brand appears in AI overviews, tested tons of different things, and by far the strongest correlation, very, very strong correlation, almost 0.67, was branded web mentions.

So if your brand is mentioned in a ton of different places on the web, that correlates very highly with your brand being mentioned in lots of AI conversations as well.”

He goes on to recommend identifying industry domains that tend to get cited in AI search for your topics and try to get mentioned on those websites.

Law also recommended getting mentions on user-generated content sites like Reddit and Quora. Next, he recommended getting mentioned on review sites and in YouTube video transcripts, because YouTube videos are highly cited by AI search.

Ahrefs Brand Radar Tool

Lastly, they discussed their Ahrefs tool called Brand Radar that’s useful for identifying domains that are frequently mentioned in AI search surfaces.

Law explained:

“And obviously, we have a tool that does exactly that. It actually helps you find the most commonly cited domains.  …if you put in whatever niche you’re interested in, you can see not only the top domains that get mentioned most often across all of the thousands, hundreds of thousands, millions of conversations we have indexed. You can also see the individual pages that get most commonly mentioned.

Obviously, if you can get your brand on those pages, yeah, immediately your AI visibility is going to shoot up in a pretty dramatic way.”

Citations Are The New Backlinks

Tim Soulo called citations the new backlinks for the AI search era and recommended their Brand Radar tool for identifying where to get mentions. In my opinion, getting a brand mentioned anywhere that’s relevant to your users or customers could also be helpful for ranking in regular search as well as AI. (Read: Google’s Branded Search Patent.)

Watch the Ahrefs podcast starting at about the 6:30 minute mark:

How to Win in AI Search (Real Data, No Hype)

Regex in GSC Reveals ChatGPT Searches

ChatGPT increasingly queries Google and other search engines for answers to prompts. Atlas, ChatGPT’s browser, similarly searches Google for research.

Here’s how to identify that activity in Google Search Console.

Regex in Search Console

Agentic searches tend to use similar query patterns, which regular expressions can often detect.

Agentic queries (i) are usually longer than those of humans because prompts tend to be more detailed, and (ii) typically seek pricing info. Plus, searches from large language models often fan out to explore user feedback.

To use regular expressions in Search Console:

  • Go to “Performance” > “Add filter.”
  • Choose “Query” > “Custom (regex).”

Filter queries in Search Console with regular expressions.

Regex Patterns

Longer queries

ChatGPT queries are roughly five words on average, about 60% longer than traditional searches. But you can experiment with any length. For instance, this expression filters queries that contain more than 10 words.

([^" "]*\s){10,}?

Change “10” to “4” or “25” to find queries longer than 4 or 25 words.

Enter the regex, such as this example for queries longer than 10 words.
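
To sanity-check the pattern before relying on it, you can test it locally. Search Console uses RE2 regex syntax, but this particular pattern behaves the same way in Python’s re module. A minimal sketch:

import re

# Matches queries with more than 10 words: ten non-space runs, each followed
# by whitespace, which requires at least an 11th word after them.
LONG_QUERY = re.compile(r'([^" "]*\s){10,}?')

queries = [
    "best crm",
    "what is the best crm for a small b2b saas company with a remote sales team",
]
for q in queries:
    # Prints False for the first query, True for the second.
    print(bool(LONG_QUERY.search(q)), "-", q)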

Google Analytics 4 can identify the pages that receive the most traffic from ChatGPT. Search Console can then correlate those pages with queries likely generated by AI agents.

To find pages in GA4 that generate the most traffic from ChatGPT:

  • Click “Reports” > “Engagement”
  • Choose “Pages and Screens”
  • Click “Add filter”
  • Select “Session source/medium” (in the filter settings), select “Contains,” and type “ChatGPT.”
  • Click “Apply”

GA4 now filters pages by any source containing “chatgpt.” You can copy those URL paths and create a secondary filter to see only long-tail queries that the pages rank for.

Filter pages in GA4 for any source containing “chatgpt.”
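
If you prefer to pull this report programmatically, the GA4 Data API exposes the same filter. Here is a minimal sketch assuming the google-analytics-data Python client and a hypothetical property ID; adjust the metric and dates to taste.

from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    DateRange, Dimension, Filter, FilterExpression, Metric, RunReportRequest,
)

client = BetaAnalyticsDataClient()  # uses Application Default Credentials

request = RunReportRequest(
    property="properties/123456789",  # hypothetical GA4 property ID
    dimensions=[Dimension(name="pagePath")],
    metrics=[Metric(name="sessions")],
    date_ranges=[DateRange(start_date="28daysAgo", end_date="today")],
    # The same filter as the UI steps above: source/medium containing "chatgpt".
    dimension_filter=FilterExpression(
        filter=Filter(
            field_name="sessionSourceMedium",
            string_filter=Filter.StringFilter(
                value="chatgpt",
                match_type=Filter.StringFilter.MatchType.CONTAINS,
                case_sensitive=False,
            ),
        )
    ),
)

for row in client.run_report(request).rows:
    print(row.dimension_values[0].value, row.metric_values[0].value)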

Brand and transactional queries

LLMs often fan out to gather reviews of products and brands. The fan-outs can research and compare prices to include in answers based on the prompt.

You can see these queries in Search Console by using the following regex:

\b(review|reviews|reddit|rating|ratings|support|warranty|return policy|refund|complaint|feedback|scam|legit|trustworthy|experience|issues|buy|purchase|price|cost|cheap|discount|coupon|order|store|near|online|sale|affordable|available|in stock|best|quality|features|specifications|deal|shop|compare|vs|versus)\b

Prominence in listicles

When asked for product recommendations, LLMs typically fan out to “best of” listicles. Publishing articles listing seasonal and general “top products” could elevate visibility for your brand and products.

Here’s a regex to track your brand in listicles:

\b(best|top-rated|trusted|famous|top|most|perfect)\b

Find informational queries

Consumers prompt ChatGPT for instructions and answers. If it finds a solution, ChatGPT often cites the source. Here’s a regex to find likely URL citations for informational prompts:

\b(guide|tutorial|how to|step by step|tips|tricks|ways to|best way to|learn|help|explain|understand|examples|instruction|methods|meaning of|definition)\b
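
These patterns can also be applied offline to a query export from Search Console. A minimal Python sketch that buckets queries by likely intent, with the word lists condensed for brevity:

import re

PATTERNS = {
    "brand/transactional": re.compile(
        r"\b(review|reviews|reddit|rating|price|cost|buy|purchase"
        r"|discount|coupon|compare|vs|versus)\b", re.IGNORECASE),
    "listicle": re.compile(
        r"\b(best|top-rated|trusted|famous|top|most|perfect)\b", re.IGNORECASE),
    "informational": re.compile(
        r"\b(guide|tutorial|how to|step by step|tips|ways to|learn"
        r"|explain|examples|definition)\b", re.IGNORECASE),
}

def classify(query):
    # A query can land in more than one bucket.
    buckets = [name for name, rx in PATTERNS.items() if rx.search(query)]
    return buckets or ["other"]

for q in [
    "best running shoes for flat feet reddit",
    "how to fix a leaky faucet step by step",
    "acme widgets login",
]:
    print(classify(q), "-", q)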

Tools to Help

GSC Helper is a Chrome extension that lets you save regex patterns, search for any query directly within Search Console (instead of copy-pasting), and export filtered results into spreadsheets.

Better Regex in Search Console is another Chrome extension with pre-built regex patterns and features to create your own.

Google Discusses Digital PR Impact On AI Recommendations via @sejournal, @martinibuster

Google’s VP of Product for Google Search confirmed that PR activities may be helpful for ranking better in certain contexts and offered an explanation of how AI search works and what content creators should focus on to stay relevant to users.

PR Helps Sites Get Recommended By AI

Something interesting that was said in the podcast was that it could be beneficial to be mentioned by other sites if you want your site to be recommended by AI. Robby Stein didn’t say that this is a ranking factor. He said this in the context of showing how AI search works, saying that the behavior of AI is similar to how a human might research a question.

The context of Robby Stein’s answer was about what businesses should focus on to rank better in AI chat.

Stein’s answer implies the query fan-out technique, in which the AI answers a question by performing Google searches (“questions it issues”).

Here’s his answer:

“Yeah, interestingly, the AI thinks a lot like a person would in terms of the kinds of questions it issues. And so if you’re a business and you’re mentioned in top business lists or from a public article that lots of people end up finding, those kinds of things become useful for the AI to find.”

The podcast host, Marina Mogilko, interrupted his answer to remark that this is about investing in PR. And Robby Stein agreed.

He continued:

“So it’s not really different from what you would do in that regard. I think ultimately, how else are you going to decide what business to go to? Well, you’d want to understand that.”

So the point he’s making is that in order to understand whether a business should be recommended, the AI, like a human, would search Google to see which businesses are recommended by other sites. The podcast host connected that statement to PR, and Stein agreed. This aligns with anecdotal experiences where not just Google’s AI but also ChatGPT will answer recommendation-type queries with links to sites that recommend businesses. As the podcast host suggested and Stein seemed to agree, this raises the importance of PR work: getting sites to mention your business.

Mogilko then noted that her friends might not have seen the articles that were published as a result of PR activities but that she notices that the AI does see those mentions and that the AI uses them in answers.

Robby agreed with her, affirming her observation, saying:

“That’s actually a good way of thinking about it because the way I mentioned before how our AI models work, they’re issuing these Google searches as a tool.”

Content Best Practices Are Key To Ranking In AI

Stein continued his answer, shifting the topic over to what kind of content ranks well in an AI model. He said that the same best practices for making helpful and clear content also applies for ranking in AI.

Stein continued his answer:

“And so in the same way that you would optimize your website and think about how I make helpful, clear information for people? People search for a certain topic, my website’s really helpful for that. Think of an AI doing that search now. And then knowing for that query, here are the best websites given that question.

That’s now… will come into the context window of the model. And so when it renders a response and provides all of these links for you to go deeper, that website’s more likely to show up.

And so it’s a lot of that standard best practices around building great content really do apply in the AI age for sure.”

The takeaway here is that helpful and clear content is important for standard search, AI answers, and people.

The podcast host next asked Robby about reviews, candidly remarking that some people pay for reviews and asking how that would “affect the system.” Stein didn’t address the question about how paid reviews would affect AI answers, but he did circle back to affirming that AI behaves like a human might, implying that if you’re going to think about how the AI system approaches answering a question, think of it in terms of how a human could go about it.

Stein answered:

“It’s hard. I mean, the reviews, I think, again, it’s kind of like a person where like imagine something is scanning for information and trying to find things that are helpful. So it’s possible that if you have reviews that are helpful, it could come up.

But I think it’s tricky to say to pinpoint any one thing like that. I think ultimately it’s about these general best practices where you want is reliable. Kind of like if you were to Google something, what pages would show up at the top of that query? It’s still a good way of thinking about it.”

AI Visibility Overlaps With SEO

At this point, the host responded to Stein’s answer by asking if optimizing for AI is “basically the same as SEO?”

Stein answered that there’s an overlap with SEO, but that the questions are different between regular organic search and AI. The implication is that organic search tends to have keyword-based queries, and AI is conversational.

Here’s Stein’s answer:

“I think there’s a lot of overlap. I think maybe one added nuance is that the kinds of questions that people ask AI are increasingly complicated and they tend to be in different spaces.

…And so if you think about what people use AI for, a lot of it is how to for complicated things or for purchase decisions or for advice about life things.

So people who are creating content in those areas, like if I were them, I would be a student of understanding the use cases of AI and what are growing in those use cases.

And there’s been some studies that have done around how people use these products in AI.

Those are really interesting to understand.”

Stein advised content creators to study how people are using AI to find answers to specific questions. He seemed to put some emphasis on this, so it appears to be something important to pay attention to.

Understand How People Use AI

This next part changes direction to emphasize that search is transforming beyond just simple text search, saying that it is going multimodal. A modality is a computer science word that refers to a type of information such as text, images, speech, or video. This circles back to studying how users are interacting with AI, in this case expanding to include the modality of information.

The podcast host asked the natural follow-up question to what Stein previously said about the overlap with SEO, asking how business owners can understand what people are looking for and whether Google Trends is useful for this.

Stein affirmed that Google Trends is useful for this purpose.

He responded:

“Google Trends is a really useful thing. I actually think people really underutilize that. Like we have real-time information around exactly what’s trending. You can see keyword values.

I think also, you know, the ads has a really fantastic estimation too. Like as you’re booking ads, you can see kind of traffic estimates for various things. So there’s Google has a lot of tools across ads, across the search console and search trends to get information about what people are searching for.

And I think that’s going to increasingly be more interesting as, a lot more of people’s time and attention goes towards not just the way people use search too, but in these areas that are growing quickly, particularly these long specific questions people ask and multimodal, where they’re asking with images or they’re using voice to have live conversation.”

Stein’s response reflects that SEOs and businesses may want to go beyond keyword-based research toward also understanding intent across multiple ways in which users interact with AI. We’re in a moment of volatility where it’s becoming important to recognize the context and purpose in how people search.

The two takeaways that I think are important are:

  1. Long and specific questions
  2. Multimodal contexts

What makes that important is that Stein confirmed that these kinds of searches are growing quickly. Businesses and SEOs should, therefore, be thinking, will my business or client show up if a person searches with voice using a lot of specific details? Will they show up if people use images to search? Image SEO may be becoming increasingly important as more people transition to finding things using AI.

Google Wants To Provide More Information

The host followed up by asking if Google would be providing more information about how users are searching, and Stein confirmed that in the future that’s something they want to do, not just for advertisers but for everyone who is impacted by AI search.

He answered:

“I think down the road we want to get, provide a glimpse into what people are searching for broadly. Yeah. Not just advertisers too. Yeah, it could be forever for anyone.

But ultimately, I think more and more people are searching in these new ways and so the systems need to better reflect those over time.”

Watch the interview at about the 13:30 minute mark:

Featured Image by Shutterstock/Krot_Studio

How Agentic Browsers Will Change Digital Marketing via @sejournal, @DuaneForrester

The footprint of large language models keeps expanding. You see it in productivity suites, CRM, ERP, and now in the browser itself. When the browser thinks and acts, the surface you optimize for changes. That has consequences for how people find, decide, and buy.

Microsoft shows how quickly this footprint can spread across daily work. Microsoft says nearly 70% of the Fortune 500 now use Microsoft 365 Copilot. The company also reports momentum through 2025 customer stories and events. These numbers do not represent unique daily users across every product; rather, they signal reach into large enterprises where Microsoft already has distribution.

Google is pushing Gemini across Search, Workspace, and Cloud. Google highlights Gemini inside Search’s AI Mode and AI Overviews, and claims billions of monthly AI assists across Workspace. Google also points to customers putting Gemini to work across industries and reports average time savings in Workspace studies. In education, Google says Gemini for Education now reaches more than 10 million U.S. college students.

Salesforce and SAP are bringing agents into core enterprise flows. Salesforce announced Agentforce and the Agentic Enterprise, with updates in 2025 that focus on visibility and control for scaled agent deployments. SAP positioned Joule as its AI copilot and added collaborative AI agents across business processes at TechEd 2024, with ongoing releases in 2025.

And with all of that as the backdrop, should we be surprised that the browser is the next layer?

Agentic browsers (Image Credit: Duane Forrester)

What Is An Agentic Browser?

A traditional browser shows you pages and links. An agentic browser interprets the page, carries context, and can act on your behalf. It can read, synthesize, click, fill forms, and complete tasks. You ask for an outcome. It gets you there.

Perplexity’s Comet positions itself as an AI-first browser that works for you. Reuters covered its launch and the pitch to challenge Chrome’s dominance, and The Verge reports that Comet is now available to everyone for free, after a staged rollout.

Security has already surfaced as a real issue for agentic browsers. Brave’s research describes indirect prompt injection in Comet, and Guardio’s work and coverage in the trade press highlight the risks of agent-led flows being manipulated.

Now OpenAI has launched ChatGPT Atlas, a browser with ChatGPT at the core and an Agent Mode for task execution.

Why This Matters To Marketing

If the browser acts, people click less and complete more tasks in place. That compresses discovery and decision steps. It raises the bar for how your content gets selected, summarized, and executed against. Martech’s analysis points to a redefined search and discovery experience when browsers bring agentic and conversational layers to the fore.

You should expect four big shifts.

Search And Discovery

Agentic flows reduce list-based searching. The agent decides which sources to read, how to synthesize, and what to do with the result. Your goal shifts from ranking to getting selected by an agent that is optimizing for the user’s preferences and constraints. That may lower raw click volumes and raise the value of being the canonical source for a clear, task-oriented answer.

Content And Experience

Content needs to be agent-friendly. That means clear structure, strong headings, accurate metadata, concise summaries, and explicit steps. You are writing for two audiences. The human who skims. The agent that must parse, validate, and act. You also need task artifacts. Checklists. How-to flows. Short-form answers that are safe to act on. If your page is the long version, your agent-friendly artifact is the short version. Both matter.

CRM And First-Party Data

Agents may mediate more of the journey. You need earlier value exchanges to earn consent. You need clean APIs and structured data so agents can hand off context, initiate sessions, and trigger next best actions. You will also need to model events differently when some actions never hit your pages.

Attribution And Measurement

If an agent fills the cart or completes a form from the browser, you will not see traditional click paths. Define agent-mediated events. Track handoffs between browser agent and brand systems. Update your models so agent exposure and agent action can be credited. This is the same lesson marketers learned with assistants and chat surfaces. The browser now brings that dynamic to the mainstream.

What To Do Now

Start With Content

Audit your top 10 discovery and consideration assets. Tighten structure. Add short summaries and task snippets that an agent can lift safely. Add schema markup where it makes sense. Make dates and facts explicit. Your goal is clarity that a machine can parse and that a person can trust. Guidance on why this matters sits in the Martech analysis referenced above.

Build Better Machine Signals

Use schema.org where it helps understanding. Ensure feeds, sitemaps, Open Graph, and product data are complete and current. If you have APIs that expose inventory, pricing, appointments, or availability, document them clearly and make developer access straightforward.

Map Agent-First Journeys

Draft a simple flow for how your category works when the browser is the assistant. Query. Synthesis. Selection. Action. Handoff. Conversion. Then decide where you can add value. This is not only about SEO. It is about being callable by an agent to help someone finish a task with less friction.

Rethink Metrics

Define what counts as an agent impression and an agent conversion for your brand. Tag flows where the agent initiates the session. Set targets for assisted conversions that originate in agent environments. Treat this as a separate channel for planning.

Run Small Tests

Try optimizing one or two pages for agent selection and summarizability. Instrument the flows. If there are early integrations or pilots available with agent browsers, get on the list and learn fast. For competitive context, it is useful to watch how quickly Atlas and Comet gain traction relative to incumbent browsers. Sources on current market share are below.

Why Timing Matters

We have seen how fast browsers can grow when they meet a new need. Google launched Chrome in 2008. Within a year, it was already climbing the charts. Ars Technica covered Chrome’s 1.0 release on December 11, 2008. StatCounter Press said Chrome exceeded 20% worldwide in June 2011, up from 2.8% in June 2009. By May 2012, StatCounter reported Chrome overtook Internet Explorer for the first full month. Annual StatCounter data for 2012 shows Chrome at 31.42%, Internet Explorer at 26.47%, and Firefox at 18.88%.

Firefox had its own rapid start earlier in the 2000s. Mozilla announced 50 million Firefox downloads in April 2005 and 100 million by October 2005, less than a year after 1.0. Contemporary reporting placed Firefox at roughly 9 to 10% market share by late 2005 and 18% by mid-2008.

Microsoft Edge entered later. Edge originally shipped in 2015, then relaunched on Chromium in January 2020. Edge has fluctuated. Recent coverage says Edge lost share over the summer of 2025 on desktop, citing StatCounter.

For an executive snapshot of the current landscape, StatCounter’s September 2025 worldwide totals show Chrome at about 71.8%, Safari at about 13.9%, Edge at about 4.7%, Firefox at about 2.2%, Samsung Internet at about 1.9%, and Opera at about 1.7%.

What This History Tells Us

Each major browser shift came with a clear promise. Netscape made the web accessible. Internet Explorer bundled it with the operating system. Firefox made it safer and more private. Chrome made it faster and more reliable. Every breakthrough paired capability with trust. That pattern will repeat here.

Agentic browsers can only scale if they prove both utility and safety. They must handle tasks faster and more accurately than people, without introducing new risks. Security research around Comet shows what happens when that balance tips the wrong way. If users see agentic browsing as unpredictable or unsafe, adoption slows. If it saves them time and feels dependable, adoption accelerates. History shows that trust, not novelty, drives the curves that turn experiments into standards.

For marketers, that means your work will increasingly live inside systems where trust and clarity are prerequisites. Agents will need unambiguous facts, consistent markup, and licensing that spells out how your content can be reused. Brands that make that easy will be indexed, quoted, and recommended. Brands that make it hard will vanish from the new surface before they even know it exists.

How To Position Your Brand For Agentic Browsing

Keep your approach simple and disciplined. Make your best content easy to select, summarize, and act on. Structure it tightly, keep data fresh, and ensure everything you publish can stand on its own when pulled out of context. Give agents clean, accurate snippets they can carry forward without risk of misrepresentation.

Expose the data and signals that let agents work with you. APIs, feeds, and machine-readable product information reduce guesswork. If agents can confirm availability, pricing, or location from a trusted feed, your brand becomes a reliable component in the user’s automated flow. Pair that with clear permissions on how your data can be displayed or executed, so platforms have a reason to include you without fear of legal exposure.

Treat agent-mediated activity as its own marketing channel. Name it. Measure it. Fund it. You are early, so your metrics will change as you learn, but the act of measuring will force better questions about what visibility and conversion mean when browsers complete tasks for users. The first teams to formalize this channel will understand its economics long before competitors notice the traffic shift.

Finally, stay close to the platform evolution. Watch every release of OpenAI’s Atlas and Perplexity’s Comet. Track Google’s response as it blends Gemini deeper into Chrome and Search. The pace will feel familiar (like the late 2000s browser race), but the consequences will be larger. When the browser becomes an agent, it doesn’t just display the web; it intermediates it. Every business that relies on discovery, trust, or conversion will feel that change.

The Takeaway

Agentic browsers will not replace marketing, but they will reshape how attention, trust, and action flow online. The winners will be brands that think like system integrators (clear data, structured content, and dependable facts) because those are the materials agents build with. This is the early moment before the inflection point, the time to experiment while risk is low and visibility is still yours to claim.

History shows that when browsers evolve, the web follows. This time, the web won’t just render pages. It will think, decide, and act. Your job is to make sure that when it does, it acts in your favor.

Looking ahead, even a modest 10 to 15% adoption rate for agentic browsers within three years would represent one of the fastest paradigm shifts since Chrome’s launch. For marketers, that scale means the agent layer will become a measurable channel, and every optimization choice made now – how your data is structured, how your content is summarized, how trust is signaled – will compound its impact later.

This post was originally published on Duane Forrester Decodes.


Featured Image: Roman Samborskyi/Shutterstock

Anthropic Research Shows How LLMs Perceive Text via @sejournal, @martinibuster

Researchers from Anthropic investigated Claude 3.5 Haiku’s ability to decide when to break a line of text within a fixed width, a task that requires the model to track its position as it writes. The study yielded the surprising result that language models form internal patterns resembling the spatial awareness that humans use to track location in physical space.

Andreas Volpini tweeted about this paper and made an analogy to chunking content for AI consumption. In a broader sense, his comment works as a metaphor for how both writers and models navigate structure, finding coherence at the boundaries where one segment ends and another begins.

This research paper, however, is not about reading content but about generating text and identifying where to insert a line break in order to fit the text into an arbitrary fixed width. The purpose of doing that was to better understand what’s going on inside an LLM as it keeps track of text position, word choice, and line break boundaries while writing.

The researchers created an experimental task of generating text with a line break at a specific width. The purpose was to understand how Claude 3.5 Haiku decides on words to fit within a specified width and when to insert a line break, which required the model to track the current position within the line of text it is generating.

The experiment demonstrates how language models learn structure from patterns in text without explicit programming or supervision.

The Linebreaking Challenge

The linebreaking task requires the model to decide whether the next word will fit on the current line or if it must start a new one. To succeed, the model must learn the line width constraint (the rule that limits how many characters can fit on a line, like in physical space on a sheet of paper). To do this the LLM must track the number of characters written, compute how many remain, and decide whether the next word fits. The task demands reasoning, memory, and planning. The researchers used attribution graphs to visualize how the model coordinates these calculations, showing distinct internal features for the character count, the next word, and the moment a line break is required.
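
For contrast, here is the explicit greedy algorithm a programmer would write for the same task, as a minimal Python sketch. The point of the study is that the model accomplishes something like this without any explicit counter.

def break_lines(words, width):
    # Greedy line breaking: keep appending words until the next one would
    # push the line past the width limit, then start a new line.
    lines, current = [], ""
    for word in words:
        candidate = word if not current else current + " " + word
        if len(candidate) <= width:
            current = candidate
        else:
            lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines

print(break_lines("the quick brown fox jumps over the lazy dog".split(), 15))
# ['the quick brown', 'fox jumps over', 'the lazy dog']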

Continuous Counting

The researchers observed that Claude 3.5 Haiku represents line character counts not as counting step by step, but as a smooth geometric structure that behaves like a continuously curved surface, allowing the model to track position fluidly (on the fly) rather than counting symbol by symbol.

Something else that’s interesting is that they discovered the LLM had developed a boundary head (an “attention head”) that is responsible for detecting the line boundary. An attention mechanism weighs the importance of what is being considered (tokens). An attention head is a specialized component of the attention mechanism of an LLM. The boundary head, which is an attention head, specializes in the narrow task of detecting the end of line boundary.

The research paper states:

“One essential feature of the representation of line character counts is that the “boundary head” twists the representation, enabling each count to pair with a count slightly larger, indicating that the boundary is close. That is, there is a linear map QK which slides the character count curve along itself. Such an action is not admitted by generic high-curvature embeddings of the circle or the interval like the ones in the physical model we constructed. But it is present in both the manifold we observe in Haiku and, as we now show, in the Fourier construction. “

How Boundary Sensing Works

The researchers found that Claude 3.5 Haiku knows when a line of text is almost reaching the end by comparing two internal signals:

  1. How many characters it has already generated, and
  2. How long the line is supposed to be.

The aforementioned boundary attention heads decide which parts of the text to focus on. Some of these heads specialize in spotting when the line is about to reach its limit. They do this by slightly rotating or lining up the two internal signals (the character count and the maximum line width) so that when they nearly match, the model’s attention shifts toward inserting a line break.

The researchers explain:

“To detect an approaching line boundary, the model must compare two quantities: the current character count and the line width. We find attention heads whose QK matrix rotates one counting manifold to align it with the other at a specific offset, creating a large inner product when the difference of the counts falls within a target range. Multiple heads with different offsets work together to precisely estimate the characters remaining.”
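
Continuing the toy model from above (again our own illustration, with invented numbers rather than the paper’s measured parameters), a head with a built-in offset can rotate the line-width embedding back by that offset, so its attention score, an inner product, peaks exactly when the characters remaining equal the offset:

```python
import numpy as np

P = 64
theta = 2 * np.pi / P
embed = lambda c: np.array([np.cos(c * theta), np.sin(c * theta)])

def head_score(count: int, width: int, offset: int) -> float:
    """Inner product of the count embedding with the width embedding rotated
    back by this head's offset; it peaks when width - count == offset."""
    a = -offset * theta
    rot = np.array([[np.cos(a), -np.sin(a)],
                    [np.sin(a),  np.cos(a)]])
    return float(embed(count) @ (rot @ embed(width)))

width = 40
for count in (30, 33, 35, 37, 40):
    print(count, round(head_score(count, width, offset=5), 3))
# The score peaks at count = 35, i.e., exactly 5 characters before the
# boundary; heads with different offsets together estimate the remainder.
```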

Final Stage

At this stage of the experiment, the model has already determined how close it is to the line’s boundary and how long the next word will be. The last step is to use that information.

Here’s how it’s explained:

“The final step of the linebreak task is to combine the estimate of the line boundary with the prediction of the next word to determine whether the next word will fit on the line, or if the line should be broken.”

The researchers found that certain internal features in the model activate when the next word would cause the line to exceed its limit, effectively serving as boundary detectors. When that happens, the model raises the chance of predicting a newline symbol and lowers the chance of predicting another word. Other features do the opposite: they activate when the word still fits, lowering the chance of inserting a line break.

Together, these two forces, one pushing for a line break and one holding it back, balance out to make the decision.
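
As a rough, hand-built analogy (the feature weights below are invented, not measured from the model), the decision can be pictured as two rival features feeding one logit: an overflow feature that pushes the newline prediction up, and a still-fits feature that pushes it down.

```python
def newline_logit(chars_remaining: int, next_word_len: int) -> float:
    """Two opposing features balance into a single break/no-break signal."""
    overflow = max(0, next_word_len - chars_remaining)  # fires when the word won't fit
    fits     = max(0, chars_remaining - next_word_len)  # fires when it still fits
    return 2.0 * overflow - 1.0 * fits   # positive => favor predicting "\n"

print(newline_logit(chars_remaining=4, next_word_len=7))   # 6.0  -> break the line
print(newline_logit(chars_remaining=9, next_word_len=7))   # -2.0 -> keep writing
```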

Models Can Have Visual Illusions?

The next part of the research is kind of incredible because the researchers tested whether the model could be susceptible to visual illusions that would trip it up. They started with the idea that humans can be tricked by visual illusions that present a false perspective, making lines of the same length appear to be different lengths, one shorter than the other.

Screenshot Of A Visual Illusion

Two lines of equal length, each capped with arrowheads, pointing inward on one line and outward on the other, which creates the illusion that one line is longer than the other.

The researchers inserted artificial tokens, such as “@@,” to see how they disrupted the model’s sense of position. The insertions misaligned the internal patterns the model uses to keep track of position, much as visual illusions trick human perception, and the model’s sense of line boundaries shifted, showing that its perception of structure depends on context and learned patterns. Even though LLMs don’t see, disrupting the relevant attention heads distorts their internal organization in a way that resembles how humans misjudge what they see.

They explained:

“We find that it does modulate the predicted next token, disrupting the newline prediction! As predicted, the relevant heads get distracted: whereas with the original prompt, the heads attend from newline to newline, in the altered prompt, the heads also attend to the @@.”

They wondered whether there was something special about the @@ characters or whether any other random characters would disrupt the model’s ability to complete the task. So they ran a test with 180 different sequences and found that most of them did not disrupt the model’s ability to predict the line break point. Only a small group of code-related character sequences were able to distract the relevant attention heads and disrupt the counting process.

LLMs Have Visual-Like Perception For Text

The study shows how text-based features evolve into smooth geometric systems inside a language model. It also shows that models don’t only process symbols; they build perception-like maps from them. That part, about perception, is what I find really interesting about this research. The researchers keep circling back to analogies with human perception, and those analogies keep fitting what they see going on inside the LLM.

They write:

“Although we sometimes describe the early layers of language models as responsible for “detokenizing” the input, it is perhaps more evocative to think of this as perception. The beginning of the model is really responsible for seeing the input, and much of the early circuitry is in service of sensing or perceiving the text similar to how early layers in vision models implement low level perception.”

Then a little later they write:

“The geometric and algorithmic patterns we observe have suggestive parallels to perception in biological neural systems. …These features exhibit dilation—representing increasingly large character counts activating over increasingly large ranges—mirroring the dilation of number representations in biological brains. Moreover, the organization of the features on a low dimensional manifold is an instance of a common motif in biological cognition. While the analogies are not perfect, we suspect that there is still fruitful conceptual overlap from increased collaboration between neuroscience and interpretability.”

Implications For SEO?

Arthur C. Clarke wrote that any sufficiently advanced technology is indistinguishable from magic. I think that once you understand a technology, it becomes more relatable and less like magic. Not all knowledge has a utilitarian use, and I think understanding how an LLM perceives content is useful to the extent that it’s no longer magical. Will this research make you a better SEO? Perhaps not directly, but it deepens our understanding of how language models organize and interpret content structure, and that makes them less mysterious.

Read about the research here:

When Models Manipulate Manifolds: The Geometry of a Counting Task

Featured Image by Shutterstock/Krot_Studio

Measuring Visibility When Rankings Disappear [Webinar] via @sejournal, @hethr_campbell

Learn How to Track What Really Matters in AI Search

Tools like ChatGPT, Perplexity, and Google’s AI Mode no longer deliver ranked results; they deliver answers. So what happens when traditional SEO metrics no longer apply?

Join AJ Ghergich, Global VP of AI and Consulting Services at Botify, and Frank Vitovitch, VP of Solutions Consulting at Botify, for a live webinar that reveals how to measure visibility in the new search era.

Why Attend

This session will help you move beyond outdated ranking metrics and build smarter frameworks for measuring performance in AI search. You’ll walk away with a clear, data-driven approach to visibility that keeps your team ahead of change.

Register now to learn how to track success in AI search with confidence and clarity.

🛑 Can’t make it live? Register anyway and we’ll send you the on-demand recording.

Google Q3 Report: AI Mode, AI Overviews Lift Total Search Usage via @sejournal, @MattGSouthern

Google used its Q3 earnings call to argue that AI features are expanding search usage rather than cannibalizing it.

CEO Sundar Pichai described an “expansionary moment for Search,” adding that Google’s AI experiences “highlight the web” and send “billions of clicks to sites every day.”

Pichai said overall queries and commercial queries both grew year over year, and that the growth rate increased in Q3 versus Q2, largely driven by AI Overviews and AI Mode.

What Did Google Report In Its Q3 Earnings?

AI Mode & AI Overviews

Pichai reported “strong and consistent” week-over-week growth for AI Mode in the U.S., with queries doubling in the quarter.

He said Google rolled AI Mode out globally across 40 languages, reached over 75 million daily active users, and shipped more than 100 improvements in Q3.

He also said AI Mode is already driving “incremental total query growth for Search.”

Pichai reiterated that AI Overviews “drive meaningful query growth,” noting the effect was “even stronger” in Q3 and more pronounced among younger users.

Revenue: By The Numbers

Alphabet posted $102.3 billion in revenue, its first $100B quarter. “Google Search & other” revenue reached $56.6 billion, up from $49.4 billion a year earlier.
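
For context, that works out to roughly 14.6% year-over-year growth in search revenue: (56.6 - 49.4) / 49.4 ≈ 0.146.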

YouTube ads revenue reached $10.26 billion in Q3. Pichai said YouTube “has remained number one in streaming watch time in the U.S. for more than two years, according to Nielsen.”

Pichai added that in the U.S. “Shorts now earn more revenue per watch hour than traditional in-stream.”

The quarter also included a $3.5 billion European Commission fine that Alphabet notes when discussing margins. Excluding that charge, operating margin was 33.9%.

Why It Matters

Google is telling Wall Street that AI surfaces expand search rather than replace it. If that holds, the company has reason to put AI Mode and AI Overviews in front of more queries.

The near-term implication for marketers is a distribution shift inside Google, not a pullback from search.

What’s missing is as important as what was said. Google didn’t share outbound click share from AI experiences or new reporting to track them. Expect adoption to grow while measurement lags. Teams will be relying on their own analytics to judge impact.

The revenue backdrop supports continued investment. “Search & other” rose year over year and Google highlighted growth in commercial queries. Paid budgets are likely to remain with Google as AI-led sessions take up a larger share of usage.

Looking Ahead

Google plans to keep pushing AI-led search surfaces. Pichai said the company is “looking forward to the release of Gemini 3 later this year,” which would give AI Mode and AI Overviews a stronger model foundation if the timing holds.

Google described Chrome as “a browser powered by AI” with deeper integrations to Gemini and AI Mode and “more agentic capabilities coming soon.”

The company also raised 2025 capex guidance to $91–$93 billion to meet AI demand, which supports continued investment in search infrastructure and features.


Featured Image: Photo Agency/Shutterstock