How And Why Google Rewrites Your Hard-Earned Headlines

TL;DR

  1. Google can and does rewrite headlines and titles frequently. Almost anything on your page could be used.
  2. The title is not all that matters. The entirety of your page – from the title to the on-page content – should remove ambiguity.
  3. The title tag is the most important headline on the page. Stick to 12 words and 600 pixels to avoid truncation and maximize value from each word.
  4. Google uses three rough concepts – Semantic title and content alignment, satisfactory click behavior, and searcher intent alignment – for this.
Image Credit: Harry Clarkson-Bennett

This is based on the Google leak documentation and Shaun Anderson’s excellent article on title tag rewriting. I’ve jazzed it up to make it more news- and publisher-specific.

“On average, five times as many people read the headline as read the body copy.”
David Ogilvy

No idea if that’s true or not.

I’m sure it’s some age-old advertising BS. But alongside the featured image, the headline is our shop window. Headlines are the gatekeepers. They need to be clickable, work for humans and machines, and prioritize clarity.

So, when you’ve spent a long time crafting a headline for your own story, why-oh-why does Google mess you around?

I’m sure you get a ton of questions from other people in the SEO team and the wider newsroom (or the legal team) about this.

Something like:

Why is our on-page headline being pulled into the SERP?

Or

We can just have the same on-page headline and title tag, can’t we? Why does it matter?

You could rinse and repeat this conversation and theory for almost anything. Meta descriptions are the most obvious example, where some research shows they’re rewritten 70% of the time. The answer will, unfortunately, always be the same: because Google can, and does, do what it wants.

But it helps to know the what and the why when having these conversations.

Mark Williams-Cook and team did some research to show that up to 80% of meta descriptions were being rewritten and the rewriting increased traffic. Maybe the machine knows best after all.

Why Does Google Rewrite Title Tags?

The search giant uses document understanding, query matching, content rewriting, and user engagement data to determine when a title or H1 should be changed in SERPs.

It rewrites them because it knows, in real time, what best satisfies users. This is an area of search where we as publishers are at the bleeding edge. When you have access to that much data and you take a share of ad revenue, it would be a little obtuse not to optimize for clicks in real time.

Image Credit: Harry Clarkson-Bennett

Does Length Matter?

No innuendos, please; this is a professional newsletter.

Google’s official documentation doesn’t define a limit for title tags. I think it’s just based on the title becoming truncated. Given Google now rewrites so much, longer, more keyword-rich and descriptive titles could help with ranking in Top Stories and traditional search results.

According to Gary Illyes, there is real value in having longer title tags:

“The title tag (length), is an externally made-up metric. Technically there’s a limit, but it’s not a small number…

Try to keep it precise to the page, but I wouldn’t think about whether it’s long enough…”

Sara Taher ran some interesting analysis (albeit on evergreen content only) that showed the average title length falls between 42 and 46 characters. If titles are too long, Google will probably cut them off or rewrite them. Precision matters for evergreen search.
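
A quick way to sanity-check your titles against the limits discussed above. The 12-word guidance is from this article; the 60-character proxy for ~600 pixels is an assumption, since actual pixel width depends on the font and the characters used.

```python
# Hypothetical helper: flag titles likely to be truncated in the SERP.
# max_words follows the article's 12-word guidance; max_chars is an
# assumed character proxy for the ~600px display limit.

def title_truncation_risk(title: str, max_words: int = 12, max_chars: int = 60) -> bool:
    """Return True if the title is likely to be truncated."""
    return len(title.split()) > max_words or len(title) > max_chars

print(title_truncation_risk("Short, Precise Headline About One Topic"))  # False
```

Running this over a sitemap export is a cheap audit before worrying about anything more sophisticated.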

What Are The Key Determinants?

Based on the Google leak and Shaun’s analysis, I’d say there are three concepts Google uses to determine whether a title should be rewritten. I have made these names up, by the way, so feel free to use your own.

  • Semantic title and content alignment.
  • Satisfactory click behavior.
  • Searcher intent alignment.

Semantic Title And Content Alignment

This is undoubtedly the most prominent section. Your on-page content and title/headline have to align.

This is why clickbait content and content written directly for Google Discover is so risky. Because you’re writing a cheque that you can’t cash. Create content specifically for a platform like Discover, and you will erode your quality signals over time.

Image Credit: Harry Clarkson-Bennett

The titleMatchScore, h1ContentScore, and spammyTitleDetection attributes review the base quality of a headline based on the page’s content and query intent. Mismatched titles, headlines, and keyword-stuffed versions are, at best, rewritten.

At worst, they downgrade the quality of your site algorithmically.

The titleMatchAnchorText attribute ensures our title tags and header(s) are compared to internal and external anchors and evaluated against the hierarchy of the page (the headingHierarchyScore).

Finally, the “best” title is chosen from on-page elements via the snippetTitleExtraction. While Google primarily uses the title or H1 tag, any visible element can be used if it “best represents the page.”
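
The alignment idea above can be approximated crudely. This is NOT Google’s titleMatchScore, whose actual computation is unknown; it is an illustrative proxy that measures what share of the title’s meaningful words also appear in the body copy.

```python
# Illustrative proxy for title/content alignment (an assumption, not
# Google's leaked metric): the share of non-stopword title terms that
# also appear in the body copy.

def title_content_overlap(title: str, body: str) -> float:
    stopwords = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for"}
    title_terms = {w.lower().strip(".,!?") for w in title.split()} - stopwords
    body_terms = {w.lower().strip(".,!?") for w in body.split()}
    if not title_terms:
        return 0.0
    return len(title_terms & body_terms) / len(title_terms)

score = title_content_overlap(
    "Google Rewrites Title Tags",
    "Google often rewrites the title tags publishers write.",
)
print(round(score, 2))  # 1.0 — every title term appears in the body
```

A clickbait headline scored against unrelated body copy would land near zero, which is exactly the mismatch the leak suggests gets titles rewritten or downgraded.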

Satisfactory Click Behavior

Much more straightforward. This is exactly how Google uses user engagement signals (think of Navboost’s good vs. bad click signals) to cultivate the best SERP for a particular term and cohort of people.

Image Credit: Harry Clarkson-Bennett

The titleClickSatisfaction metric combines click data at a query level with on-page engagement data (think scroll depth, time on page, on-page interactions, pogo-sticking).

Ranking adjustments are made if Google believes the title used in the SERP is underperforming against your prior performance and the competition. So, the title you see could be one of many tests happening simultaneously, I suspect.

For those unfamiliar with Navboost, it is one of Google’s primary ranking engines. It’s based on user interaction signals, like clicks, hovers, scrolls, and swipes, over 13 months to refine rankings.

For news publishers, Glue helps rank content in real time for fresh events, alongside source- and page-level authority. It’s a fundamental part of how news SEO really works.

Searcher Intent Alignment

Searcher intent really matters when it comes to page titles. And Google knows this far better than we do. So, if the content on your page (headings, paragraphs, images, et al.) and the search intent aren’t reflected by your page title, it’s gone.

Image Credit: Harry Clarkson-Bennett

Once a page title has been identified as not fit for purpose, the pageTitleRewriter metric is designed to rewrite “unhelpful or misleading page titles.”

And page titles are rewritten at a query level. The queryIntentTitleAlignment measures how the page title aligns with searcher intent. Once this is established, the page alignment and query intent are reviewed to ensure the title best reflects the page at a query level.

Then the queryDependentTitleSelection adjusts the title based on the specifics of the search and searcher, primarily at the query and location level. The best contextual match is picked.

Suggestions For Publishers

Here’s what I’d suggest (in a vague order of precedence):

  1. Make your title stand out. Be clickable. Front-load entities. Use power words, numbers, or punctuation where applicable.
  2. Stick to 12 words and 600 pixels to avoid truncation and maximize value from each word.
  3. Your title tag should represent the content on your page effectively for people and machines.
  4. Avoid keyword stuffing. Entities in headlines = good. Search revolves around entities. People, places, and organizations are the bedrock of search and news in particular. Just don’t overdo it.
  5. Do not lean too heavily into clickbait headlines. There’s a temptation to do more for Discover at the minute. The headlines on that platform tend to sail a little too close to the clickbait wind.
  6. Make sure your title best reflects the user intent and keep things simple. The benefit of search is that people are directly looking for an answer. Titles don’t always have to be wildly clicky, especially with evergreen content. Simple, direct language helps pass titleLanguageClarity checks and reduces truncation.
  7. Utilize secondary (H2s) and tertiary (H3s) headings on your page. This has multiple benefits. A well broken-up page encourages quality user engagement. It increases the chances of your article ranking for longer-tail queries. And, it helps provide the relevant context to your page for Google.
  8. Monitor CTR and run headline testing on-site. If you have the capacity to run headline testing in real-time, fantastic. If not, I suggest taking headline and CTR data at scale and building a model that helps you understand what makes a headline clickable at a subfolder or topic level. Do emotional, first-person headlines with a front-loaded entity perform best in /politics, for example?
  9. Control your internal anchor text. Particularly important for evergreen content. But even with news, there are five headlines to pay attention to. And internal links (and their anchors) are a pivotal one. The matching anchor text reinforces trust in the topic.
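
Point 8 above can start much simpler than a full model. A minimal sketch, assuming a Search Console-style export of (page URL, clicks, impressions) rows; the column layout and URLs are assumptions for illustration.

```python
# Sketch of subfolder-level CTR aggregation, a starting point for the
# headline analysis suggested above. Input rows are assumed to be
# (url, clicks, impressions) tuples from a search analytics export.
from collections import defaultdict
from urllib.parse import urlparse

def ctr_by_subfolder(rows):
    """Aggregate clicks/impressions by top-level subfolder and return CTRs."""
    clicks = defaultdict(int)
    imps = defaultdict(int)
    for url, c, i in rows:
        path = urlparse(url).path.strip("/")
        folder = "/" + path.split("/")[0] if path else "/"
        clicks[folder] += c
        imps[folder] += i
    return {f: clicks[f] / imps[f] for f in imps if imps[f]}

rows = [
    ("https://example.com/politics/story-a", 120, 2000),
    ("https://example.com/politics/story-b", 80, 2000),
    ("https://example.com/sport/story-c", 40, 4000),
]
print(ctr_by_subfolder(rows))  # {'/politics': 0.05, '/sport': 0.01}
```

Once you can see CTR by section, layering in headline features (entity position, emotional words, length) is a natural next step.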

If you are looking into developing your Discover profile, I would recommend testing the OG title if you want to test “clickier” headlines that aren’t visible on page.

Final Thoughts

So, the goal isn’t just to have a well-crafted headline. The goal is to have a brilliant set of titles – clickable, entity- and keyword-rich, highly relevant. As Shaun says, it’s to create a constellation of signals – the title tag, the H1, the URL, the intro paragraph – that removes all ambiguity.

As ever, clicks are an immensely powerful signal. Google has more data points than I’ve had hot dinners, so it has a pretty good idea of what will do well. But real clicks can override this. The goldmineNavboostFactor is proof that click behavior influences which title is displayed.

The title tag is the most important headline on the page when it comes to search. More so than the H1. But they have to work together to draw people in and engage them instantly.

But it all matters. Removing ambiguity is always a good thing. Particularly in a world of AI slop.


This post was originally published on Leadership In SEO.


Featured Image: Billion Photos/Shutterstock

SEO Is Not A Tactic. It’s Infrastructure For Growth via @sejournal, @billhunt

In the age of AI, many companies still treat SEO as a bolt-on tactic, something to patch in after the website is designed, the content is written, and the campaigns are launched. As I explored in “Why Your SEO Isn’t Working – And It’s Not the Team’s Fault,” the real obstacles aren’t a lack of knowledge or talent. They’re embedded in how companies structure ownership, prioritize resources, and treat SEO as a tactic. It’s infrastructure. And unless it’s treated as such, most organizations will never realize their full growth potential.

Search is no longer about reacting to keywords; it’s about structuring your entire digital presence to be discoverable, interpretable, and aligned with the customer journey. When done right, SEO becomes the connective tissue across content, product, and performance marketing.

Effectively Engage Intent-Driven Prospects

As I first argued in my 1994 business school thesis, and still believe today, search is the best opportunity companies have to engage “interest-driven” prospects. These are people actively declaring their needs, preferences, and intentions via a search interface. All we have to do is listen and nurture them in their journey.

When organizations structure content and infrastructure to meet that demand, they not only reduce friction – they unlock scalable demand capture.

Search:

  • Works across the funnel: awareness, consideration, conversion.
  • Reduces customer acquisition cost (CAC) by meeting customers on their terms.
  • Surfaces unmet demand signals that never show up in customer relationship management (CRM).
  • Reveals how people describe, evaluate, and compare products.
  • Can be a cost-effective tactic for removing friction by matching sales and marketing content precisely with the needs of the person seeking it.

In short, SEO gives you real-time visibility into what people want and how to serve them better. But only if the business treats it as a growth engine – not a last-minute add-on.

Case In Point: Search Left Out Of The Business

In one engagement, we analyzed 2.8 million keywords for a large enterprise with a $50 million PPC budget. The goal? Understand how well they were showing up across the full buying journey. This was a significant data and mathematical problem. For each product or service, we identified the buyer’s journey from awareness to support. We then created a series of rules to develop and classify queries representing searchers in each phase.

We could easily see the query chains of users from their first discovery query all the way through the buy cycle until they were looking for support information. It wasn’t perfect, but it did capture over 100 patterns of content types sought in different phases. By monitoring these pages and user paths, we were better able to satisfy their information needs and convert them into customers.
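
The rule-based classification described above can be sketched in miniature. The phases and regex patterns here are invented for illustration; the real engagement used far more rules across the full buying journey.

```python
# Toy version of rule-based query classification by journey phase.
# Phase names and patterns are assumptions, not the client's actual rules.
import re

PHASE_RULES = [
    ("support", re.compile(r"\b(error|fix|troubleshoot|manual)\b")),
    ("implementation", re.compile(r"\b(specs?|btu|power|diagram|install)\b")),
    ("comparison", re.compile(r"\b(vs|versus|best|compare)\b")),
    ("discovery", re.compile(r"\b(what is|how does|why)\b")),
]

def classify_query(query: str) -> str:
    """Return the first journey phase whose pattern matches the query."""
    q = query.lower()
    for phase, pattern in PHASE_RULES:
        if pattern.search(q):
            return phase
    return "unclassified"

print(classify_query("server rack btu requirements"))  # implementation
```

At 2.8 million keywords, even a rule set this crude surfaces which phases have coverage and which are dark.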

We checked organic rank: If the page wasn’t in the top five or had a paid ad, we counted it as having no exposure. Once we had the full picture, we saw the dysfunction clearly:

  • In the critical early non-branded discovery phase, we had no presence for nearly 400 million queries related to technologies the company sold.
  • Even more shocking, we missed 93% of 130 million queries tied to implementation-specific searches – like power specs, BTU requirements, or images for engineering diagrams.

The content existed, but it was buried in PDFs or trapped in crawl-unfriendly support sections. These were highly motivated searchers building proposals or writing budget justifications. We were making it hard for them to find what they needed.

To build our business case for change, we took all of these queries and layered in marketing qualified lead (MQL) and sales qualified lead (SQL) metrics to quantify the potential missed opportunity. Using conservative assumptions to avoid executive panic, we demonstrated that this gap represented over $580 million in unrealized revenue.

This wasn’t a content gap – it was a mindset and infrastructure failure. Search wasn’t seen as a system. It wasn’t connected to growth.

SEO As Strategic Growth Infrastructure

Organic search had been siloed into a tactical role, and paid search was framed as an acquisition driver, both disconnected from each other and from how the business grows. The result? A website optimized for internal org charts, not for how customers think, search, and decide. This is where the true value of SEO as infrastructure comes into focus. It’s not just about saving money on media; it’s about building systems that align with the full buyer journey.

When SEO is embedded into product planning, content creation, and experience design, you don’t just show up more often. You present the right content at the right time to advance the user to the next step, whether that’s deeper research, a sales inquiry, or successful onboarding. This isn’t about creating more content. It’s about orchestrating a connected, intent-responsive experience that nurtures buyers across every phase of the journey. That’s the shift from SEO-as-tactic to SEO-as-infrastructure. When treated as infrastructure, SEO provides a high-leverage system that reveals market opportunities, drives persistent visibility, and reduces acquisition costs over time.

Done right, SEO delivers:

  • Scalable, evergreen visibility across product lines and geographies.
  • Lower marginal acquisition costs as rankings compound.
  • Faster adaptation to evolving user needs and market trends.
  • Systemic alignment between product, content, and experience.

Just like investing in cloud infrastructure enables engineering agility, investing in SEO infrastructure enables commercial agility, giving product, marketing, and sales teams the insight and systems to execute faster and smarter. I believe AI search results will act as a system-wide health check: It reveals messaging gaps, content blind spots, unclear product positioning, and even operational issues that frustrate customers. It’s the clearest signal you’ll ever get about what customers want and whether you’re delivering.

And as digital maturity rises, functions once seen as tactical, like SEO, are now key contributors to:

  • Operational leverage.
  • Customer acquisition.
  • Digital product-market fit.
  • Margin protection at scale.

Technical infrastructure is a key enabler of this shift. Sites that embed SEO principles into their content management system (CMS), development workflows, and indexing architecture aren’t just faster, they’re more findable, interpretable, and durable in an AI-shaped ecosystem. It’s the technical foundation that powers business visibility.

SEO is no longer just about rankings. It’s:

  • A lens into unmet customer demand.
  • A framework for reducing acquisition costs.
  • A lever for improving digital experiences.
  • A driver of compounding traffic and long-term growth.

This mirrors the broader theme in “Closing the Digital Performance Gap” – where we argue that digital systems like SEO must be treated as capital investments, not just marketing tactics. When commissioned correctly, SEO becomes an accelerant, not a dependency. Without that mindset shift at the executive level, web performance remains fragmented.

But Isn’t SEO Dead? Let’s Clear That Up

Yes, zero-click results are rising, especially for simple facts and generic queries. But that’s not where business growth happens. Most high-value customer journeys, especially in B2B, enterprise, or considered purchases, don’t end with a snippet. They involve exploration, comparison, and validation. They require depth. They demand trust. And they often result in a click. This is even more critical with AI search providing richer information.

The users who do click after scanning AI results are often more intent-driven, more informed, and further along in the buying process. That makes it more critical – not less – to ensure your site is structured to show up, be interpreted correctly, and deliver value when it matters most. SEO isn’t dead. Lazy SEO is. The fundamentals haven’t changed: Show up when it matters, deliver what people need, and reduce friction at every touchpoint. That’s not going away – no matter how AI evolves.

Final Thought

In “From Line Item to Leverage,” we made the case that digital infrastructure, when aligned to strategy, drives measurable shareholder impact. SEO is a prime example: It compounds over time, improves capital efficiency, and scales without inflating costs. To win in today’s environment, SEO must be commissioned like infrastructure: planned early, engineered with purpose, and connected to business strategy. Because the most significant growth levers are rarely flashy – they’re usually buried under decades of organizational neglect, waiting to be unlocked as a competitive advantage.

To achieve this, organizations must move beyond silos and recognize the chain reaction between searcher needs and business outcomes. That means understanding what potential customers want, ensuring that content exists in the correct format and mode, and making it discoverable and indexable.

Search marketing can be a cost-effective tactic for removing friction by matching sales and marketing content precisely with the needs of the person seeking it. In today’s AI-first environment, search becomes even more vital. It’s your early detection system for what customers care about – and the most capital-efficient lever you have to meet them there.


Featured Image: Master1305/Shutterstock

Why Some Brands Win in AI Overviews While Others Get Ignored [Webinar] via @sejournal, @hethr_campbell

Turn Reviews Into Real Visibility, Trust, and Conversions

Reviews are no longer just stars on a page. They are key trust signals that influence both humans and AI. With AI increasingly shaping which brands consumers trust, it is critical to know the review tactics that drive visibility, loyalty, and ROI.

Join our November 5, 2025 webinar to get a research-backed playbook that turns reviews and AI into measurable gains in search visibility, conversions, and credibility.

What You Will Learn

  • How trust signals like recency, authenticity, and response style influence rankings and conversions.
  • Where consumers are reading, leaving, and acting on reviews across Google, social media, and other platforms.
  • Proven frameworks for responding to reviews that build credibility, mitigate risks, and increase loyalty.

Why You Cannot Miss This Webinar

Based on a study of over 1,000 U.S. consumers, this session translates those insights into actionable frameworks to prove ROI, protect reputation, and strengthen client retention.

Register now to learn the latest AI and review tactics that help your brand get chosen and trusted.

🛑 Can’t make it live? Sign up anyway, and we will send you the on-demand recording.

Surfer SEO Acquired By Positive Group via @sejournal, @martinibuster

The French technology group Positive acquired Surfer, the popular content optimization tool. The acquisition helps Positive create a “full-funnel” brand visibility solution together with its marketing and CRM tools.

The acquisition of Surfer extends Positive’s reach from marketing software to AI-based brand visibility. Positive described the deal as part of a European AI strategy that supports jobs and protects data. Positive’s revenue has grown fivefold in the past five years, rising from €50 million to an expected €70 million in 2025.

Surfer SEO

Founded in 2017, Surfer developed SEO tools based on language models that help marketers improve visibility on both search engines and AI assistants, which have become a growing source of website traffic and customers.

Sign Of Broader Industry Trends

The acquisition shows that search optimization continues to be an important part of business marketing as AI search and chat play a larger role in how consumers learn about products, services, and brands. This deal enables Positive to offer AI-based visibility solutions alongside its CRM and automation products, expanding its technology portfolio.

What Acquisition Means For Customers

Positive Group, based in France, is a technology solutions company that develops digital tools for marketing, CRM, automation, and data management. It operates through several divisions: User (marketing and CRM), Signitic (email signatures), and now Surfer (AI search optimization). The company is majority-owned by its executives, employs about 400 people, and keeps its servers in France and Germany. Surfer, based in Poland, brings experience in AI content optimization and a strong presence in North America. Together, they combine infrastructure, market knowledge, and product development within one technology-focused group.

Lucjan Suski, CEO and co-founder of Surfer, commented:

“SEO is evolving fast, and it matters more than ever before. We help marketers win the AI SEO era. Positive helps them grow across every other part of their digital strategy. Together, we’ll give marketers the complete toolkit to lead across AI search, email marketing automation, and beyond.”

According to Mathieu Tarnus, Positive’s founding president, and Paul de Fombelle, its CEO:

“Artificial intelligence is at the heart of our value proposition. With the acquisition of Surfer, our customers are moving from optimizing their traditional SEO positioning to optimizing their brand presence in the responses provided by conversational AI assistants. Surfer stands out from established market players by directly integrating AI into content creation and optimization.”

The acquisition adds Surfer’s AI optimization capabilities to Positive’s product ecosystem, helping customers improve visibility in AI-generated answers. For both companies, the deal is an opportunity to expand their capabilities in AI-based brand visibility.

Featured Image by Shutterstock/GhoST RideR 98

Google Announces A New Era For Voice Search via @sejournal, @martinibuster

Google announced an update to its voice search, which changes how voice search queries are processed and then ranked. The new AI model uses speech as input for the search and ranking process, completely bypassing the stage where voice is converted to text.

The old system was called Cascade ASR, where a voice query is converted into text and then put through the normal ranking process. The problem with that method is that it’s prone to mistakes. The audio-to-text conversion process can lose some of the contextual cues, which can then introduce an error.

The new system is called Speech-to-Retrieval (S2R). It’s a neural network-based machine-learning model trained on large datasets of paired audio queries and documents. This training enables it to process spoken search queries (without converting them into text) and match them directly to relevant documents.

Dual-Encoder Model: Two Neural Networks

The system uses two neural networks:

  1. One of the neural networks, called the audio encoder, converts spoken queries into a vector-space representation of their meaning.
  2. The second network, the document encoder, represents written information in the same kind of vector format.

The two encoders learn to map spoken queries and text documents into a shared semantic space so that related audio and text documents end up close together according to their semantic similarity.

Audio Encoder

Speech-to-Retrieval (S2R) takes the audio of someone’s voice query and transforms it into a vector (numbers) that represents the semantic meaning of what the person is asking for.

The announcement uses the example of the famous painting The Scream by Edvard Munch. In this example, the spoken phrase “the scream painting” becomes a point in the vector space near information about Edvard Munch’s The Scream (such as the museum it’s at, etc.).

Document Encoder

The document encoder does a similar thing with text documents like web pages, turning them into their own vectors that represent what those documents are about.

During model training, both encoders learn together so that vectors for matching audio queries and documents end up near each other, while unrelated ones are far apart in the vector space.
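
The shared vector space described above can be made concrete with a toy example. The real encoders are large neural networks; here the vectors are hand-made so the geometry is visible, and retrieval is just nearest-neighbor search by cosine similarity.

```python
# Toy illustration of dual-encoder retrieval in a shared vector space.
# The embeddings below are invented for the example, not model output.
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Pretend the audio encoder mapped the spoken query "the scream painting"
# to this vector; the document encoder mapped pages to the vectors below.
audio_query = [0.9, 0.1, 0.0]
documents = {
    "munch-the-scream": [0.95, 0.05, 0.0],
    "van-gogh-sunflowers": [0.1, 0.9, 0.1],
    "weather-forecast": [0.0, 0.1, 0.95],
}

best = max(documents, key=lambda d: cosine(audio_query, documents[d]))
print(best)  # munch-the-scream
```

Training pushes matching query/document pairs close together, so this nearest-neighbor lookup is all retrieval needs at serving time.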

Rich Vector Representation

Google’s announcement says that the encoders transform the audio and text into “rich vector representations.” A rich vector representation is an embedding that encodes meaning and context from the audio and the text. It’s called “rich” because it contains the intent and context.

For S2R, this means the system doesn’t rely on keyword matching; it “understands” conceptually what the user is asking for. So even if someone says “show me Munch’s screaming face painting,” the vector representation of that query will still end up near documents about The Scream.

According to Google’s announcement:

“The key to this model is how it is trained. Using a large dataset of paired audio queries and relevant documents, the system learns to adjust the parameters of both encoders simultaneously.

The training objective ensures that the vector for an audio query is geometrically close to the vectors of its corresponding documents in the representation space. This architecture allows the model to learn something closer to the essential intent required for retrieval directly from the audio, bypassing the fragile intermediate step of transcribing every word, which is the principal weakness of the cascade design.”

Ranking Layer

S2R has a ranking process, just like regular text-based search. When someone speaks a query, the audio is first processed by the pre-trained audio encoder, which converts it into a numerical form (vector) that captures what the person means. That vector is then compared to Google’s index to find pages whose meanings are most similar to the spoken request.

For example, if someone says “the scream painting,” the model turns that phrase into a vector that represents its meaning. The system then looks through its document index and finds pages that have vectors with a close match, such as information about Edvard Munch’s The Scream.

Once those likely matches are identified, a separate ranking stage takes over. This part of the system combines the similarity scores from the first stage with hundreds of other ranking signals for relevance and quality in order to decide which pages should be ranked first.
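
The second-stage combination described above might look something like a weighted blend. The signal names and weights here are invented purely for the sketch; Google’s actual signals and how they are combined are not public.

```python
# Illustrative second-stage ranking: blend retrieval similarity with
# other (hypothetical) signals. Weights and signal names are assumptions.

def final_score(similarity: float, quality: float, freshness: float,
                w_sim: float = 0.6, w_q: float = 0.3, w_f: float = 0.1) -> float:
    """Weighted combination of first-stage similarity and other signals."""
    return w_sim * similarity + w_q * quality + w_f * freshness

candidates = {
    "page-a": final_score(0.92, 0.8, 0.5),
    "page-b": final_score(0.88, 0.9, 0.9),
}
print(max(candidates, key=candidates.get))  # page-b
```

The point of the sketch: a page with slightly lower raw similarity can still win once quality and freshness signals are folded in.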

Benchmarking

Google tested the new system against Cascade ASR and against a perfect-scoring version of Cascade ASR called Cascade Groundtruth. S2R beat Cascade ASR and very nearly matched Cascade Groundtruth. Google concluded that the performance is promising but that there is room for additional improvement.

Voice Search Is Live

Although the benchmarking revealed that there is some room for improvement, Google announced that the new system is live and in use in multiple languages, calling it a new era in search. The system is presumably used in English.

Google explains:

“Voice Search is now powered by our new Speech-to-Retrieval engine, which gets answers straight from your spoken query without having to convert it to text first, resulting in a faster, more reliable search for everyone.”

Read more:

Speech-to-Retrieval (S2R): A new approach to voice search

Featured Image by Shutterstock/ViDI Studio

Review Of AEO/GEO Tactics Leads To A Surprising SEO Insight via @sejournal, @martinibuster

GEO/AEO is criticized by SEOs who claim that it’s just SEO at best and unsupported lies at worst. Are SEOs right, or are they just defending their turf? Bing recently published a guide to AI search visibility that provides a perfect opportunity to test whether optimization for AI answers recommendations is distinct from traditional SEO practices.

Chunking Content

Some AEO/GEO optimizers say that it’s important to write content in chunks because that’s how AI and LLMs break up a page of content: into chunks. Bing’s guide to answer engine optimization, written by Krishna Madhavan, Principal Product Manager at Bing, echoes the concept of chunking.

Bing’s Madhavan writes:

“AI assistants don’t read a page top to bottom like a person would. They break content into smaller, usable pieces — a process called parsing. These modular pieces are what get ranked and assembled into answers.”

The thing that some SEOs tend to forget is that chunking content is not new. It’s been around for at least five years. Google introduced their passage ranking algorithm back in 2020. The passages algorithm breaks up a web page into sections to understand how the page and a section of it is relevant to a search query.

Google says:

“Passage ranking is an AI system we use to identify individual sections or “passages” of a web page to better understand how relevant a page is to a search.”

Google’s 2020 announcement described passage ranking in these terms:

“Very specific searches can be the hardest to get right, since sometimes the single sentence that answers your question might be buried deep in a web page. We’ve recently made a breakthrough in ranking and are now able to better understand the relevancy of specific passages. By understanding passages in addition to the relevancy of the overall page, we can find that needle-in-a-haystack information you’re looking for. This technology will improve 7 percent of search queries across all languages as we roll it out globally.”

As far as chunking is concerned, any SEO who has optimized content for Google’s Featured Snippets can attest to the importance of creating passages that directly answer questions. It’s been a fundamental part of SEO since at least 2014, when Google introduced Featured Snippets.
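
Passage-level scoring in the spirit described above can be sketched simply: split a page into paragraph "chunks" and score each against the query. Real systems use learned embeddings; term overlap keeps the mechanics visible. The page text and query are made up for illustration.

```python
# Sketch of passage ("chunk") selection: pick the paragraph with the
# most query-term overlap. A stand-in for learned relevance scoring.

def best_passage(page_text: str, query: str) -> str:
    """Return the paragraph sharing the most terms with the query."""
    passages = [p.strip() for p in page_text.split("\n\n") if p.strip()]
    q_terms = set(query.lower().split())

    def score(passage: str) -> int:
        return len(q_terms & set(passage.lower().split()))

    return max(passages, key=score)

page = (
    "Our newsletter covers many topics.\n\n"
    "To reset your router, hold the power button for ten seconds.\n\n"
    "Subscribe for weekly updates."
)
print(best_passage(page, "how to reset your router"))
# To reset your router, hold the power button for ten seconds.
```

This is the same intuition behind optimizing individual passages for Featured Snippets: the single paragraph that directly answers the question is what gets lifted.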

Titles, Descriptions, and H1s

The Bing guide to ranking in AI also states that descriptions, headings, and titles are important signals to AI systems.

I don’t think I need to belabor the point that descriptions, headings, and titles are fundamental elements of SEO. So again, there is nothing here to differentiate AEO/GEO from SEO.

Lists and Tables

Bing recommends bulleted lists and tables as a way to easily communicate complex information to users and search engines. This approach to organizing data is similar to an advanced SEO method called disambiguation: making the meaning and purpose of a web page as clear, and therefore as unambiguous, as possible.

Making a page less ambiguous can involve semantic HTML that clearly delineates which part of a web page is the main content (MC, in the parlance of Google’s third-party quality rater guidelines) and which parts are just advertisements, navigation, a sidebar, or the footer.

Another form of disambiguation is the proper use of HTML elements such as ordered lists (OL) and tables that communicate tabular data, such as product comparisons or a schedule of dates and times for an event.

HTML elements (like H, OL, and UL) give structure to on-page information, which is why it’s called structured information. Structured information and structured data are two different things: structured information is on the page and is seen in the browser and by crawlers, while structured data is metadata that only a bot will see.
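A minimal sketch makes the distinction concrete. The product names and values below are invented for illustration; the markup patterns themselves are standard HTML and schema.org JSON-LD:

```html
<!-- Structured information: visible on the page, in the browser, and to crawlers -->
<main>
  <h1>Dishwasher Comparison</h1>
  <ol>
    <li>Model A: 42 dB</li>
    <li>Model B: 48 dB</li>
  </ol>
</main>
<nav><!-- site navigation, clearly separated from the main content --></nav>
<footer><!-- footer links --></footer>

<!-- Structured data: metadata that only a bot will see -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Model A"
}
</script>
```

The semantic elements tell a machine which part of the page is the main content; the JSON-LD describes the page's subject without ever being rendered.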

There are studies suggesting that structured information helps AI Agents make sense of a web page, so I have to concede that structured information is particularly helpful to AI Agents in a unique way.

Question And Answer Pairs

Bing recommends Q&As, question-and-answer pairs that an AI can use directly. Bing’s Madhavan writes:

“Direct questions with clear answers mirror the way people search. Assistants can often lift these pairs word for word into AI-generated responses.”

This is a mix of passage ranking and the SEO practice of writing for featured snippets, where you pose a question and give the answer. Creating an entire page of questions and answers is a risky approach, but if it is genuinely useful and helpful, it may be worth doing.

Something to keep in mind is that Google’s systems treat content lacking unique insight on the same level as spam. Google also considers content created specifically for search engines to be low quality.

Anyone considering writing questions and answers on a web page for the purpose of AI SEO should first consider whether it’s useful for people and think deeply about the quality of the question-and-answer pairs. Otherwise, it’s just a page of rote made-for-search-engines content.

Be Precise With Semantic Clarity

Bing also recommends semantic clarity. This is also important for SEO. Madhavan writes:

  • “Write for intent, not just keywords. Use phrasing that directly answers the questions users ask.
  • Avoid vague language. Terms like innovative or eco mean little without specifics. Instead, anchor claims in measurable facts.
  • Add context. A product page should say “42 dB dishwasher designed for open-concept kitchens” instead of just “quiet dishwasher.”
  • Use synonyms and related terms. This reinforces meaning and helps AI connect concepts (quiet, noise level, sound rating).”

They also advise against abstract words like “next-gen” or “cutting edge” because they don’t really say anything. This is a big issue with AI-generated content, which tends to use abstract words that can be removed entirely without changing the meaning of the sentence or paragraph.

Lastly, they advise against decorative symbols, which is a good tip. Decorative symbols like the arrow (→) don’t really communicate anything semantically.

All of this advice is good. It’s good for SEO, good for AI, and like all the other AI SEO practices, there is nothing about it that is specific to AI.

Bing Acknowledges Traditional SEO

The funny thing about Bing’s guide to ranking better for AI is that it explicitly acknowledges that traditional SEO is what matters.

Bing’s Madhavan writes:

“Whether you call it GEO, AIO, or SEO, one thing hasn’t changed: visibility is everything. In today’s world of AI search, it’s not just about being found, it’s about being selected. And that starts with content.

…traditional SEO fundamentals still matter.”

AI Search Optimization = SEO

Google and Bing have incorporated AI into traditional search for about a decade. AI search ranking is not new, so it should not be surprising that SEO best practices align with ranking for AI answers. The same considerations also parallel how users interact with content.

Many SEOs are still stuck in the decades-old keyword-optimization paradigm, and these methods of disambiguation and precision may be new to them. So perhaps it’s a good thing for the broader SEO industry to catch up with these concepts for optimizing content and to recognize that there is no AEO/GEO; it’s still just SEO.

Featured Image by Shutterstock/Roman Samborskyi

Google Says What Content Gets Clicked On AI Overviews via @sejournal, @martinibuster

Google’s Liz Reid, Vice President of Search, recently said that AI Overviews shows what kind of content makes people click through to visit a site. She also said that Google expanded the concept of spam to include content that does not bring the creator’s perspective and depth.

People’s Preferences Drive What Search Shows

Liz Reid affirmed that user behavior tells Google what kinds of content people want to see, such as short-form videos. That behavior leads Google to show more of that content, and the system itself begins to learn and adjust to the kinds of content (forums, text, video, etc.) that users prefer.

She said:

“…we do have to respond to who users want to hear from, right? Like, we are in the business of both giving them high quality information, but information that they seek out. And so we have over time adjusted our ranking to surface more of this content in response to what we’ve heard from users.

…You see it from users, right? Like we do everything from user research to we run an experiment. And so you take feedback from what you hear, from research about what users want, you then test it out, and then you see how users actually act. And then based on how users act, the system then starts to learn and adjust as well.”

The important insight is that user preferences play an active role in shaping what appears in AI search results. Google’s ranking systems are designed to respond not just to quality but to the types of content users seek out and engage with. This means that shifts in user behavior related to content preferences directly influence what is surfaced. The system continuously adapts based on real-world feedback. The takeaway here is that SEOs and creators should actively gauge what kind of content users are engaging with and be ready to pivot in response to changes.

The conversation builds toward the point where Reid says exactly what kinds of content engage users, based on the feedback Google gets through user behavior.

AI-Generated Is Not Always Spam

Reid next turns to AI-generated content, essentially confirming that the bar Google uses to decide what’s high and low quality is agnostic to whether the content is created by a human or an AI.

She said:

“Now, AI generated content doesn’t necessarily equal spam.

But oftentimes when people are referring to it, they’re referring to the spam version of it, right? Or the phrase AI slop, right? This content that feels extremely low value across, okay? And we really want to make an effort that that doesn’t surface.”

Her point is pretty clear: all content is judged by the same standard. If content is judged to be low quality, that judgment is based on the merits of the content, not its origin.

People Click On Rich Content

At this point in the interview, Reid stops talking about low-quality content and turns to the kind of content that makes people click through to a website. She said user behavior tells Google that people don’t want superficial content: click patterns show that more people click through to content that has depth and expresses a unique perspective rather than mirroring what everyone else is saying. This is the kind of content that gets clicks in AI search.

Reid explained:

“But what we see is people want content from that human perspective. They want that sense of like, what’s the unique thing you bring to it, okay? And actually what we see on what people click on, on AI Overviews, is content that is richer and deeper, okay?

That surface-level AI generated content, people don’t want that because if they click on that, they don’t actually learn that much more than they previously got. They don’t trust the result anymore.

So what we see with AI Overviews is that we surface these sites and get fewer what we call bounce clicks. A bounce click is like you click on your site, Yeah, I didn’t want that, and you go back.

AI Overviews gives some content, and then we get to surface deeper, richer content, and we’ll look to continue to do that over time so that we really do get that creator content and not the AI generated.”

Reid’s comments indicate that content offering a distinct perspective or insight derived from experience performs better in click patterns than low-effort content. This suggests an intention within AI Overviews not to amplify generic output and to uprank content that demonstrates a firm knowledge of the topic.

Google’s Ranking Weights

Here’s an interesting part that explains what gets up-ranked and down-ranked, expressed in a way I’ve not seen before. Reid said that they’ve extended the concept of spam to also include content that repeats what’s already well known. She also said that they are giving more ranking weight to content that brings a unique perspective or expertise to the content.

Here Reid explains the downranking:

“Now, it is hard work, but we spend a lot of time and we have a lot of expertise built on this such that we’ve been able to take the spam rate of what actually shows up, down.

And as well as we’ve sort of expanded beyond this concept of spam to sort of low-value content, right? This content that doesn’t add very much, kind of tells you what everybody else knows, it doesn’t bring it…”

And this is the part where she says Google is giving more ranking weight to content that contains expertise:

“…and tried to up-weight more and more content specifically from someone who really went in and brought their perspective or brought their expertise, put real time and craft into the work.”

Takeaways

How To Get Upranked More Often In AI Overviews

1. Create “Richer and Deeper” Content

Reid said, “people want content from that human perspective. They want that sense of like, what’s the unique thing you bring to it, okay? And actually what we see on what people click on, on AI Overviews, is content that is richer and deeper, okay?”

Takeaway:
Publish content that shows original thought, unique insights, and depth rather than echoing what’s already widely said. In my opinion, using software that analyzes the content that’s already ranking, or using a skyscraper/10x content strategy, sets you up to do exactly the opposite of what Liz Reid is recommending. A creator will never express a unique insight by echoing what a competitor has already done.

2. Reflect Human Perspective

Reid said, “people want content from that human perspective. They want that sense of like, what’s the unique thing you bring to it.”

Takeaway: Incorporate your own analysis, experiences, or firsthand understanding so that the content is authentic and expresses expertise.

3. Demonstrate Expertise and Craft

Reid shared that Google is trying “to up-weight more and more content specifically from someone who really went in and brought their perspective or brought their expertise, put real time and craft into the work.”

Takeaway:
Effort, originality, and subject-matter knowledge are the qualities that Google is up-weighting to perform better within AI Overviews.

Reid draws a clear distinction between content that repeats what is already widely known and content that adds unique value through perspective or expertise. Google treats superficial content like spam and lowers its ranking weights to reduce its visibility, while actively “upweighting” content that demonstrates effort and insight, what she termed “craft.” Craft means skill, expertise, and mastery of something. The message here is that originality and actual expertise are important for ranking well, particularly in AI Overviews, and I would think the same applies to AI Mode.

Watch the interview from about the 18 minute mark:

Google Reminds SEOs How The URL Removals Tool Works via @sejournal, @martinibuster

Google’s John Mueller answered a question about removing hacked URLs that are showing in the index. He explained how to remove the sites from appearing in the search results and then discussed the nuances involved in dealing with this specific situation.

Removing Hacked Pages From Google’s SERPs

The person asking the question was a victim of the Japanese hack attack, so called because the attackers create hundreds or even thousands of rogue Japanese-language web pages. The person had dealt with the issue and removed the spammy infected web pages, leaving 404 pages that are still referenced in Google’s search results.

They now want to remove them from Google’s search index so that the site is no longer associated with those pages.

They asked:

“My site recently got a Japanese attack. However, I shifted that site to a new hosting provider and have removed all data from there.

However, the fact is that many Japanese URLs have been indexed.

So how do I deindex those thousands of URLs from my website?”

The question reflects a common problem in the aftermath of a Japanese hack attack, where hacked pages stubbornly remain indexed long after the pages were removed. This shows that site recovery is not complete once the malicious content is removed; Google’s search index needs to clear the pages, and that can take a frustratingly long time.

How To Remove Japanese Hack Attack Pages From Google

Google’s John Mueller recommended using the URL Removals Tool found in Search Console. Contrary to the implication inherent in the name of the tool, it doesn’t remove a URL from the search index; it just removes it from showing in Google’s search results faster if the content has already been removed from the site or blocked from Google’s crawler. Under normal circumstances, Google will remove a page from the search results after the page is crawled and noted to be blocked or gone (404 error response).

Three Prerequisites For The URL Removals Tool

For the removal to stick, at least one of the following should be true:

  1. The page is removed and returns a 404 or 410 server response code.
  2. The URL is blocked from indexing by a robots meta tag.
  3. The URL is prevented from being crawled by a robots.txt file.
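For reference, the three conditions look like this in practice. The directory name below is a hypothetical placeholder; use the actual path pattern the hack created:

```text
# Option 1: the server answers for the removed URL with
HTTP/1.1 404 Not Found      (or 410 Gone)

# Option 2: a robots meta tag in the page's <head> blocks indexing
<meta name="robots" content="noindex">

# Option 3: robots.txt blocks crawling of the affected paths
User-agent: *
Disallow: /hacked-directory/
```

Note that options 2 and 3 conflict with each other: if robots.txt blocks crawling, Google cannot see the noindex tag, so pick the mechanism that fits your situation.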

Google’s Mueller responded:

“You can use the URL removal tool in search console for individual URLs (also if the URLs all start with the same thing). I’d use that for any which are particularly visible (check the performance report, 24 hours).

This doesn’t remove them from the index, but it hides them within a day. If the pages are invalid / 404 now, they’ll also drop out over time, but the removal tool means you can stop them from being visible “immediately”. (Redirecting [or] 404 are both ok, technically a 404 is the right response code)”

Mueller clarified that the URL Removals Tool does not delete URLs from Google’s index but instead hides them from search results, faster than natural recrawling would. His explanation is a reminder that the tool has a temporary search visibility effect and is not a way to permanently remove a URL from Google’s index itself. The actual removal from the search index happens after Google verifies that the page is actually gone or blocked from crawling or indexing.

Featured Image by Shutterstock/Asier Romero

Your Brand Is Being Cited By AI. Here’s How To Measure It via @sejournal, @DuaneForrester

Search has never stood still. Every few years, a new layer gets added to how people find and evaluate information. Generative AI systems like ChatGPT, Copilot Search, and Perplexity haven’t replaced Google or Bing. They’ve added a new surface where discovery happens earlier, and where your visibility may never show up in analytics.

Call it Generative Engine Optimization, call it AI visibility work, or just call it the next evolution of SEO. Whatever the label, the work is already happening. SEO practitioners are already tracking citations, analyzing which content gets pulled into AI responses, and adapting strategies as these platforms evolve weekly.

This work doesn’t replace SEO; rather, it builds on top of it. Think of it as the “answer layer” above the traditional search layer. You still need structured content, clean markup, and good backlinks, among the other usual aspects of SEO. That’s the foundation assistants learn from. The difference is that assistants now re-present that information to users directly inside conversations, sidebars, and app interfaces.

If your work stops at traditional rankings, you’ll miss the visibility forming in this new layer. Tracking when and how assistants mention, cite, and act on your content is how you start measuring that visibility.


Image Credit: Duane Forrester

Perplexity explains that every answer it gives includes numbered citations linking to the original sources. OpenAI’s ChatGPT Search rollout confirms that answers now include links to relevant sites and supporting sources. Microsoft’s Copilot Search does the same, pulling from multiple sources and citing them inside a summarized response. And Google’s own documentation for AI Overviews makes it clear that eligible content can be surfaced inside generative results.

Each of these systems now has its own idea of what a “citation” looks like. None of them report it back to you in analytics.

That’s the gap. Your brand can appear in multiple generative answers without you knowing. These are the modern zero-click impressions that don’t register in Search Console. If we want to understand brand visibility today, we need to measure mentions, impressions, and actions inside these systems.

But there’s yet another layer of complexity here: content licensing deals. OpenAI has struck partnerships with publishers including the Associated Press, Axel Springer, and others, which may influence citation preferences in ways we can’t directly observe. Understanding the competitive landscape, not just what you’re doing, but who else is being cited and why, becomes essential strategic intelligence in this environment.

In traditional SEO, impressions and clicks tell you how often you appeared and how often someone acted. Inside assistants, we get a similar dynamic, but without official reporting.

  • Mentions are when your domain, name, or brand is referenced in a generative answer.
  • Impressions are when that mention appears in front of a user, even if they don’t click.
  • Actions are when someone clicks, expands, or copies the reference to your content.

These are not replacements for your SEO metrics. They’re early indicators that your content is trusted enough to power assistant answers.

If you read last week’s piece, where I discussed how 2026 is going to be an inflection year for SEOs, you’ll remember the adoption curve. During 2026, assistants are projected to reach around 1 billion daily active users, embedding themselves into phones, browsers, and productivity tools. But that doesn’t mean they’re replacing search. It means discovery is happening before the click. Measuring assistant mentions is about seeing those first interactions before the analytics data ever arrives.

Let’s be clear: traditional search is still the main driver of traffic. Google handles over 3.5 billion searches per day. Perplexity processed 780 million queries in May 2025, roughly what Google handles in about five hours.

The data is unambiguous. AI assistants are a small, fast-growing complement, not a replacement (yet).

But if your content already shows up in Google, it’s also being indexed and processed by the systems that train and quote inside these assistants. That means your optimization work already supports both surfaces. You’re not starting over. You’re expanding what you measure.

Search engines rank pages. Assistants retrieve chunks.

Ranking is an output-aligned process. The system already knows what it’s trying to show and chooses the best available page to match that intent. Retrieval, on the other hand, is pre-answer-aligned. The system is still assembling the information that will become the answer and that difference can change everything.

When you optimize for ranking, you’re trying to win a slot among visible competitors. When you optimize for retrieval, you’re trying to be included in the model’s working set before the answer even exists. You’re not fighting for position as much as you’re fighting for participation.

That’s why clarity, attribution, and structure matter so much more in this environment. Assistants pull only what they can quote cleanly, verify confidently, and synthesize quickly.
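The retrieval idea above can be sketched with a deliberately simple scoring function. Real assistants use vector embeddings and learned rankers, not bare word overlap; the chunks and query below are invented examples echoing Bing's dishwasher advice:

```python
import re

def retrieval_score(query: str, chunk: str) -> float:
    """Toy retrieval score: fraction of the query's words found in the chunk.

    A simplification for illustration only; production systems embed text
    into vectors and compare semantic similarity.
    """
    def words(text: str) -> set[str]:
        return set(re.findall(r"[a-z0-9]+", text.lower()))

    q, c = words(query), words(chunk)
    return len(q & c) / len(q)

chunks = [
    "Our dishwashers are innovative and eco-friendly.",              # vague
    "This 42 dB dishwasher is designed for open-concept kitchens.",  # specific
]
query = "quiet dishwasher for open-concept kitchens"

# The specific, unambiguous chunk wins a place in the working set.
best = max(chunks, key=lambda c: retrieval_score(query, c))
print(best)
```

Even this crude model shows why vague language loses: the chunk full of abstract claims shares no concrete terms with the query, so it never enters the answer's working set.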

When an assistant cites your site, it’s doing so because your content met three conditions:

  1. It answered the question directly, without filler.
  2. It was machine-readable and easy to quote or summarize.
  3. It carried provenance signals the model trusted: clear authorship, timestamps, and linked references.

Those aren’t new ideas. They’re the same best practices SEOs have worked with for years, just tested earlier in the decision chain. You used to optimize for the visible result. Now you’re optimizing for the material that builds the result.

One critical reality to understand: citation behavior is highly volatile. Content cited today for a specific query may not appear tomorrow for that same query. Assistant responses can shift based on model updates, competing content entering the index, or weighting adjustments happening behind the scenes. This instability means you’re tracking trends and patterns, not guarantees (not that rankings were ever guaranteed, but they are typically more stable). Set expectations accordingly.

Not all content has equal citation potential, and understanding this helps you allocate resources wisely. Assistants excel at informational queries (“how does X work?” or “what are the benefits of Y?”). They’re less relevant for transactional queries like “buy shoes online” or navigational queries like “Facebook login.”

If your content serves primarily transactional or branded navigational intent, assistant visibility may matter less than traditional search rankings. Focus your measurement efforts where assistant behavior actually impacts your audience and where you can realistically influence outcomes.

The simplest way to start is manual testing.

Run prompts that align with your brand or product, such as:

  • “What is the best guide on [topic]?”
  • “Who explains [concept] most clearly?”
  • “Which companies provide tools for [task]?”

Use the same query across ChatGPT Search, Perplexity, and Copilot Search. Document when your brand or URL appears in their citations or answers.

Log the results. Record the assistant used, the prompt, the date, and the citation link if available. Take screenshots. You’re not building a scientific study here; you’re building a visibility baseline.

Once you’ve got a handful of examples, start running the same queries weekly or monthly to track change over time.

You can even automate part of this. Some platforms now offer API access for programmatic querying, though costs and rate limits apply. Tools like n8n or Zapier can capture assistant outputs and push them to a Google Sheet. Each row becomes a record of when and where you were cited. (To be fair, it’s more complicated than 2 short sentences make it sound, but it’s doable by most folks, if they’re willing to learn some new things.)

Even if you stay fully manual in your approach, this is how you create your first “AI citation baseline” report.

But don’t stop at tracking yourself. Competitive citation analysis is equally important. Who else appears for your key queries? What content formats do they use? What structural patterns do their cited pages share? Are they using specific schema markup or content organization that assistants favor? This intelligence reveals what assistants currently value and where gaps exist in the coverage landscape.

We don’t have official impression data yet, but we can infer visibility.

  • Look at the types of queries where you appear in assistants. Are they broad, informational, or niche?
  • Use Google Trends to gauge search interest for those same queries. The higher the volume, the more likely users are seeing AI answers for them.
  • Track assistant responses for consistency. If you appear across multiple assistants for similar prompts, you can reasonably assume high impression potential.

Impressions here don’t mean analytics views. They mean assistant-level exposure: your content seen in an answer window, even if the user never visits your site.

Actions are the most difficult layer to observe, but not because assistant ecosystems hide all referrer data. The tracking reality is more nuanced than that.

Most AI assistants (Perplexity, Copilot, Gemini, and ChatGPT for paid users) do send referrer data, which appears in Google Analytics 4 as perplexity.ai / referral or chatgpt.com / referral. You can see these sources in your standard GA4 Traffic Acquisition reports.

The real challenges are:

Free-tier users don’t send referrers. Free ChatGPT traffic arrives as “Direct” in your analytics, making it impossible to distinguish from bookmark visits, typed URLs, or other referrer-less traffic sources.

No query visibility. Even when you see the referrer source, you don’t know what question the user asked the AI that led them to your site. Traditional search gives you some query data through Search Console. AI assistants don’t provide this.

Volume is still small but growing. AI referral traffic typically represents 0.5% to 3% of total website traffic as of 2025, making patterns harder to spot in the noise of your overall analytics.

Here’s how to improve tracking and build a clearer picture of AI-driven actions:

  1. Set up dedicated AI traffic tracking in GA4. Create a custom exploration or channel group using regex filters to isolate all AI referral sources in one view. Use a pattern like the excellent example in this Orbit Media article to capture traffic from major platforms ( ^https://(www.meta.ai|www.perplexity.ai|chat.openai.com|claude.ai|gemini.google.com|chatgpt.com|copilot.microsoft.com)(/.*)?$ ). This separates AI referrals from generic referral traffic and makes trends visible.
  2. Add identifiable UTM parameters when you control link placement: in content you share to AI platforms, in citations you can influence, or in public-facing URLs. Even platforms that send referrer data can benefit from UTM tagging for additional attribution clarity.
  3. Monitor “Direct” traffic patterns. Unexplained spikes in direct traffic, especially to specific landing pages that assistants commonly cite, may indicate free-tier AI users clicking through without referrer data.
  4. Track which landing pages receive AI traffic. In your AI traffic exploration, add “Landing page + query string” as a dimension to see which specific pages assistants are citing. This reveals what content AI systems find valuable enough to reference.
  5. Watch for copy-paste patterns in social media, forums, or support tickets that match your content language exactly. That’s a proxy for text copied from an assistant summary and shared elsewhere.
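As a sanity check on the regex approach in step 1, a short script can classify referrer URLs before you commit the pattern to GA4. The pattern below escapes the domain dots, which the quoted original omits (an unescaped "." still matches those domains, just more loosely); the test URLs are invented:

```python
import re

# AI-referrer pattern adapted from the GA4 channel-group example above.
AI_REFERRERS = re.compile(
    r"^https://(www\.meta\.ai|www\.perplexity\.ai|chat\.openai\.com"
    r"|claude\.ai|gemini\.google\.com|chatgpt\.com"
    r"|copilot\.microsoft\.com)(/.*)?$"
)

def is_ai_referral(referrer: str) -> bool:
    """Return True if the referrer URL comes from a known AI assistant."""
    return bool(AI_REFERRERS.match(referrer))

print(is_ai_referral("https://www.perplexity.ai/search?q=test"))  # True
print(is_ai_referral("https://chatgpt.com"))                      # True
print(is_ai_referral("https://www.google.com/"))                  # False
```

Testing the expression locally against real referrer strings from your logs avoids silently miscategorizing traffic once the filter is live.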

Each of these tactics helps you build a more complete picture of AI-driven actions, even without perfect attribution. The key is recognizing that some AI traffic is visible (paid tiers, most platforms), some is hidden (free ChatGPT), and your job is to capture as much signal as possible from both.

Machine-Validated Authority (MVA) isn’t visible to us, as it’s an internal trust signal used by AI systems to decide which sources to quote. What we can measure are the breadcrumbs that correlate with it:

  • Frequency of citation
  • Presence across multiple assistants
  • Stability of the citation source (consistent URLs, canonical versions, structured markup)

When you see repeat citations or multi-assistant consistency, you’re seeing a proxy for MVA. That consistency is what tells you the systems are beginning to recognize your content as reliable.

Perplexity reports almost 10 billion queries a year across its user base. That’s meaningful visibility potential even if it’s small compared to search.

Microsoft’s Copilot Search is embedded in Windows, Edge, and Microsoft 365. That means millions of daily users see summarized, cited answers without leaving their workflow.

Google’s rollout of AI Overviews adds yet another surface where your content can appear, even when no one clicks through. Their own documentation describes how structured data helps make content eligible for inclusion.

Each of these reinforces a simple truth: SEO still matters, but it now extends beyond your own site.

Start small. A basic spreadsheet is enough.

Columns:

  • Date.
  • Assistant (ChatGPT Search, Perplexity, Copilot).
  • Prompt used.
  • Citation found (yes/no).
  • URL cited.
  • Competitor citations observed.
  • Notes on phrasing or ranking position.

Add screenshots and links to the full answers for evidence. Over time, you’ll start to see which content themes or formats surface most often.
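The spreadsheet above is easy to keep as a plain CSV that any script can append to. This is a minimal sketch; the file name, URL, and observation values are hypothetical placeholders:

```python
import csv
from datetime import date

# Columns mirror the tracking spreadsheet described above.
LOG_FIELDS = [
    "date", "assistant", "prompt", "citation_found",
    "url_cited", "competitor_citations", "notes",
]

def log_citation(path: str, row: dict) -> None:
    """Append one observation to the citation-baseline CSV,
    writing the header first if the file is new/empty."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if f.tell() == 0:
            writer.writeheader()
        writer.writerow(row)

log_citation("ai_citations.csv", {
    "date": date.today().isoformat(),
    "assistant": "Perplexity",
    "prompt": "What is the best guide on log-file analysis?",
    "citation_found": "yes",
    "url_cited": "https://example.com/log-file-guide",
    "competitor_citations": "2",
    "notes": "cited second, after a competitor",
})
```

Each run appends one row, so the same function works whether you log observations by hand or from an automated n8n/Zapier-style workflow.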

If you want to automate, set up a workflow in n8n that runs a controlled set of prompts weekly and logs outputs to your sheet. Even partial automation will save time and let you focus on interpretation, not collection. Use this sheet and its data to augment what you can track in sources like GA4.

Before investing heavily in assistant monitoring, consider resource allocation carefully. If assistants represent less than 1% of your traffic and you’re a small team, extensive tracking may be premature optimization. Focus on high-value queries where assistant visibility could materially impact brand perception or capture early-stage research traffic that traditional search might miss.

Manual quarterly audits may suffice until the channel grows to meaningful scale. This is about building baseline understanding now so you’re prepared when adoption accelerates, not about obsessive daily tracking of negligible traffic sources.

Executives prefer dashboards to debates about visibility layers, so show them real-world examples. Put screenshots of your brand cited inside ChatGPT or Copilot next to your Search Console data. Explain that this is not a new algorithm update but a new front end for existing content. It’s up to you to help them understand this critical difference.

Frame it as additive reach. You’re showing leadership that the company’s expertise is now visible in new interfaces before clicks happen. That reframing keeps support for SEO strong and positions you as the one tracking the next wave.

It’s worth noting that citation practices exist within a shifting legal landscape. Publishers and content creators have raised concerns about copyright and fair use as AI systems train on and reproduce web content. Some platforms have responded with licensing agreements, while legal challenges continue to work through courts.

This environment may influence how aggressively platforms cite sources, which sources they prioritize, and how they balance attribution with user experience. The frameworks we build today should remain flexible as these dynamics evolve and as the industry establishes clearer norms around content usage and attribution.

AI assistant visibility is not yet a major traffic source. It’s a small but growing signal of trust.

By measuring mentions and citations now, you build an early-warning system. You’ll see when your content starts appearing in assistants long before any of your analytics tools do. This means that when 2026 arrives and assistants become a daily habit, you won’t be reacting to the curve. You’ll already have data on how your brand performs inside these new systems.
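The early-warning idea is easy to make concrete. Once you have a log of prompts and brand-mention flags (the row shape below is an assumption based on a simple tracking sheet), a few lines of Python summarize the mention rate per run, so a trend surfaces long before any analytics tool registers it.

```python
from collections import defaultdict

def mention_rate_by_date(rows):
    """rows: iterable of (date, prompt, answer, brand_mentioned) tuples,
    e.g. parsed from a tracking CSV. Returns {date: share of prompts in
    that run where the brand was mentioned}."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for run_date, _prompt, _answer, mentioned in rows:
        totals[run_date] += 1
        if mentioned == "yes":
            hits[run_date] += 1
    return {d: hits[d] / totals[d] for d in totals}

# Toy data: mention rate doubles week over week.
rows = [
    ("2025-11-03", "q1", "...", "no"),
    ("2025-11-03", "q2", "...", "yes"),
    ("2025-11-10", "q1", "...", "yes"),
    ("2025-11-10", "q2", "...", "yes"),
]
print(mention_rate_by_date(rows))
```

Plot that rate over time and the "slope change" described below is visible the week it starts, not the quarter after.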

Step back from the sheet-level data for a moment and the bigger picture is already clear: the growth has started, it’s explosive, and it’s about to change consumer behavior. So now is the moment to take that knowledge, apply it to your day-to-day work, and start planning for how those changes will affect it.

Traditional SEO remains your base layer. Generative visibility sits above it. Machine-Validated Authority lives inside the systems. Watching mentions, impressions, and actions is how we start making what’s in the shadows measurable.

We used to measure rankings because that’s what we could see. Today, we can measure retrieval for the same reason. This is just the next evolution of evidence-based SEO. Ultimately, you can’t fix what you can’t see. We cannot see how trust is assigned inside the system, but we can see the outputs of each system.

The assistants aren’t replacing search (yet). They’re simply showing you how visibility behaves when the click disappears. If you can measure where you appear in those layers now, you’ll know when the slope starts to change and you’ll already be ahead of it.

Featured Image: Anton Vierietin/Shutterstock


This post was originally published on Duane Forrester Decodes.

Google Says It Surfaces More Video, Forums, And UGC via @sejournal, @MattGSouthern

Google says it has adjusted rankings to surface more short-form video, forums, and user-generated content in response to how people search.

Liz Reid, VP and head of Google Search, discussed the changes in a Wall Street Journal Bold Names podcast interview.

What Reid Said

Reid described a shift in where people go for certain questions, especially among younger users:

“There’s a behavioral shift that is happening in conjunction with the move to AI, and that is a shift of who people are going to for a set of questions. And they are going to short-form video, they are going to forums, they are going to user-generated content a lot more than traditional sites.”

She added:

“We do have to respond to who users want to hear from. We are in the business of both giving them high quality information but information that they seek out. And so we have over time adjusted our ranking to surface more of this content in response to what we’ve heard from users.”

To illustrate the behavior change, she gave a lifestyle example:

“Where are you getting your cooking? Are you getting your cooking recipes from a newspaper? Are you getting your cooking recipes from YouTube?”

Reid also highlighted a pattern with search updates:

“One of the things that’s always true about Google Search is that you make changes and there are winners and losers. That’s true on any ranking update.”

Ads And Query Mix

Reid said the impact of AI Overviews on ads is offset by people running more searches overall:

“The revenue with AI Overviews has been relatively stable… some queries may get less clicks on ads, but also it grows overall queries so people do more searches. And so those two things end up balancing out.”

She noted most queries have no ads:

“Most queries don’t have any ads at all… that query is sort of unaffected by ads.”

Reid also described how lowering friction (e.g., Lens, multi-page answers via AI Overviews) increases total searches.

Attribution & Personalization

Reid highlighted work on link prominence and loyal-reader connections:

“We’ve started doing more with inline links that allows you to say according to so-and-so with a big link for whoever the so-and-so is… building both the brand, as well as the click through.”

Quality Signals & Low-Value Content

On quality and spam posture:

“We’ve… expanded beyond this concept of spam to sort of low-value content.”

She said richer, deeper material tends to drive the clicks from AI experiences.

How Google Tests Changes

Asked whether there is a “push” as well as a “pull,” Reid described the evaluate-and-learn loop:

“You take feedback from what you hear from research about what users want, you then test it out, and then you see how users actually act. And then based on how users act, the system then starts to learn and adjust as well.”

Why This Matters

In certain cases, your pages may face increased competition from forum threads and short videos.

That means improvements in quality and technical SEO alone might not fully account for traffic fluctuations if the distribution of formats has changed.

If hit by a Google update, teams should examine where visibility decreases and identify which query types are impacted. From there, determine if competing results have shifted to forum threads or short videos.

Open Questions

Reid didn’t provide timing for when the adjustments began or metrics indicating how much weighting changed.

It’s unclear which categories are most affected or whether the impact will expand further.

Looking Ahead

Reid’s comments confirm that Google has adjusted ranking to reflect evolving user behavior.

Given this, it makes sense to consider creating complementary formats like short videos while continuing to invest in in-depth expertise where traditional pages still win.


Featured Image: Michael Vi/Shutterstock