The 2025 SEO wrap-up: What we learned about search, content, and trust

SEO didn’t stand still in 2025. It didn’t reinvent itself either. It clarified what actually matters. If you followed The SEO Update by Yoast monthly webinars this year, you’ll recognize the pattern. Throughout 2025, our Principal SEOs, Carolyn Shelby and Alex Moss, cut through the noise to explain not just what was changing but why it mattered as AI-powered search reshaped visibility, trust, and performance. If you missed some sessions or want the full picture in one place, this wrap-up is for you. We’re looking back at how SEO evolved over the year, what those changes mean in practice, and what they signal going forward.

Key takeaways

  • In 2025, SEO shifted its focus from rankings to visibility management, as AI-driven search reshaped priorities
  • Key developments included the rise of AI Overviews, a shift from clicks to citations, and increased importance of clarity and trust
  • Brands needed to prioritize structured, credible content that AI systems could easily interpret to remain visible
  • By December, SEO had shifted to retrieval-focused strategies, where success rested on clarity, relevance, and E-E-A-T signals
  • Overall, 2025 clarified that the fundamentals still matter but emphasized the need for precision in content for AI-driven systems

SEO in 2025: month-by-month overview

Month | Key evolutions | Core takeaways
January | AI-powered, personalized search accelerated. Zero-click results increased. Brand signals, E-E-A-T, performance, and schema shifted from optimizations to requirements. | SEO expanded from ranking pages to representing trusted brands that machines can understand.
February | Massive AI infrastructure investments. AI Overviews pushed organic results down. Traffic dropped while brand influence and revenue held steady. | SEO outcomes can no longer be measured by traffic alone. Authority and influence matter more than raw clicks.
March | AI Overviews expanded as clicks declined. Brand mentions appeared to play a larger role in AI-driven citation and selection behavior than links alone. Search behavior grew despite fewer referrals. | Visibility fractured across systems. Trust and brand recognition became the differentiators for inclusion.
April | Schema and structure proved essential for AI interpretation. Multimodal and personalized search expanded. Zero-click behavior increased further. | SEO shifted from optimization to interpretation. Clarity and structure determine reuse.
May | Discovery spread beyond Google. AI Overviews reached mass adoption. Citations replaced visits as success signals. | SEO outgrew the SERP. Presence across platforms and AI systems became critical.
June – July | AI Mode became core to search. Ads entered AI answers. Indexing alone no longer guaranteed visibility. Reporting lagged behind reality. | Traditional SEO remained necessary but insufficient. Resilience and adaptability became essential.
August | Impressions rose while clicks continued to decline. The "great decoupling" became measurable. Zero-click behavior accelerated further. | Visibility without value became a real risk. SEO had to tie exposure to outcomes beyond sessions.
September | AI Mode neared default status. Legal, licensing, and attribution pressures intensified. Persona-based strategies gained relevance. | Control over visibility is no longer guaranteed. Trust and credibility are the only durable advantages.
October | Search Console data reset expectations. AI citations outweighed rankings. AI search became the destination. | SEO success depends on presence inside AI systems, not just SERP positions.
November | Structured content outperformed clever content. Citation pools stayed selective. AI-driven shopping and comparisons accelerated. | Clarity and structure beat scale. Authority decides inclusion.
December | SEO fully shifted to retrieval-based logic. AI systems extracted answers, not pages. E-E-A-T acted as a gatekeeper. | SEO evolved into visibility management for AI-driven search. Precision replaced volume.

January: SEO enters the age of representation

January set the tone for the year. Not through a single disruptive update, but through a clear signal that SEO was moving away from pure rankings toward something broader. Search was becoming more personalized, AI-driven, and selective about which sources it chose to surface. Visibility was no longer guaranteed just because you ranked well.

Do read: Perfect prompts: 10 tips for AI-driven SEO content creation

From the start of the year, it was clear that SEO in 2025 would reward brands that were trusted, technically sound, and easy for machines to understand.

What changed in January

Here are a few clear trends that began to shape how SEO worked in practice:

  • AI-powered search became more personalized: Search results reflected context more clearly, taking into account location, intent, and behavior. The same query no longer produced the same result for every user
  • Zero-click searches accelerated: More answers appeared directly in search results, reducing the need to click through, especially for informational and local queries
  • Brand signals and reviews gained weight: Search leaned more heavily on real-world trust indicators like brand mentions, reviews, and overall reputation
  • E-E-A-T became harder to ignore: Clear expertise, ownership, and credibility increasingly acted as filters, not just quality guidelines
  • The role of schema started to shift: Structured data mattered less for visual enhancements and more for helping machines understand content and entities
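Since schema's new job is helping machines understand entities rather than decorating snippets, it helps to see what that markup actually looks like. The sketch below builds a minimal schema.org Organization block of the kind these trends point to; the brand name and URLs are placeholders, not real entities, and real markup would carry whatever properties describe your actual business.

```python
import json

# A minimal, hypothetical schema.org Organization block: it tells machines
# who you are, what you're called, and where else you exist on the web.
# All names and URLs below are placeholders.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",
    "url": "https://www.example.com/",
    "logo": "https://www.example.com/logo.png",
    "sameAs": [
        "https://www.linkedin.com/company/example-brand",
        "https://x.com/examplebrand",
    ],
}

# Serialize as JSON-LD, ready to embed in a
# <script type="application/ld+json"> tag in the page head.
json_ld = json.dumps(organization, indent=2)
print(json_ld)
```

The `sameAs` links are what tie the entity together across platforms, which matters more as machines lean on brand recognition rather than individual pages.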

What to take away from January

January wasn’t about tactics. It was about direction.

SEO started rewarding clarity over cleverness. Brands over pages. Trust over volume. Performance over polish. If search engines were going to summarize, compare, and answer on your behalf, you needed to make it easy for them to understand who you are, what you offer, and why you are credible.

That theme did not fade as the year went on. It became the foundation for everything that followed.

Do check out the full recording of The SEO update by Yoast – January 2025 Edition webinar.

February: scale, money, and AI made the shift unavoidable

If January showed where search was heading, February showed how serious the industry was about getting there. This was the month where AI stopped feeling like a layer on top of search and started looking like the foundation underneath it.

Massive investments, changing SERP layouts, and shifting performance metrics all pointed to the same conclusion. Search was being rebuilt for an AI-first world.

What changed in February

As the month unfolded, the signs became increasingly difficult to ignore.

  • AI Overviews pushed organic results further down: AI Overviews appeared in a large share of problem-solving queries, favoring authoritative sources and summaries over traditional organic listings
  • Traffic declined while brand value increased: High-profile examples showed sessions dropping even as revenue grew. Visibility, influence, and brand trust started to matter more than raw sessions
  • AI referrals began to rise: Referral traffic from AI tools increased, while Google’s overall market share showed early signs of pressure. Discovery started spreading across systems, not just search engines

What to take away from February

February made January’s direction feel permanent.

When AI systems operate at this scale, they change how visibility works. Rankings still mattered, but they no longer told the full story. Authority, brand recognition, and trust increasingly influenced whether content was surfaced, summarized, or ignored.

The takeaway was clear. SEO could no longer be measured only by traffic. It had to be understood in terms of influence, representation, and relevance across an expanding search ecosystem.

Catch the full discussion in The SEO Update by Yoast – February 2025 Edition webinar recording.

March: visibility fractured, trust became the differentiator

By March, the effects of AI-driven search were no longer theoretical. The conversation shifted from how search was changing to who was being affected by it, and why.

This was the month where declining clicks, citation gaps, and publisher pushback made one thing clear. Search visibility was fragmenting across systems, and trust became the deciding factor in who stayed visible.

What changed in March

The developments in March added pressure to trends that had already been forming earlier in the year.

  • AI Overviews expanded while clicks declined: Studies showed that AI Overviews appeared more frequently, while click-through rates continued to decline. Visibility increasingly stopped at the SERP
  • Brand mentions mattered more than links alone: Citation patterns across AI platforms varied, but one signal stayed consistent. Brands mentioned frequently and clearly were more likely to surface
  • Search behavior continued to grow despite fewer clicks: Overall search volume increased year over year, showing that users weren’t searching less; they were just clicking less
  • AI search struggled with attribution and citations: Many AI-powered results failed to cite sources consistently, reinforcing the need for strong brand recognition rather than reliance on direct referrals
  • Search experiences became more fragmented: New entry points like Circle to Search and premium AI modes introduced additional layers to discovery, especially among younger users
  • Structured signals evolved for AI retrieval: Updates to robots meta tags, structured data for return policies, and “sufficient context” signals showed search engines refining how content is selected and grounded

Also read: Structured data with schema for search and AI
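One concrete example of the structured signals mentioned above: the robots meta directives (`max-snippet`, `max-image-preview`, `max-video-preview`) that tell search engines how much of a page they may reuse in previews and summaries. The sketch below composes such a tag; the specific values are illustrative, not recommendations.

```python
# Sketch: composing a robots meta tag with snippet-control directives.
# These directives are standard; the values chosen here are examples only.
directives = {
    "max-snippet": 160,          # allow up to 160 characters in text snippets
    "max-image-preview": "large",  # allow large image previews
    "max-video-preview": -1,     # no limit on video preview length
}

content = ", ".join(f"{key}:{value}" for key, value in directives.items())
meta_tag = f'<meta name="robots" content="{content}">'
print(meta_tag)
```

In an AI-first SERP, these directives are one of the few levers a publisher still holds over how much of their content gets reused without a click.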

What to take away from March

March exposed the tension at the heart of modern SEO.

Search demand was growing, but traditional traffic was shrinking. AI systems were answering more questions, but often without clear attribution. In that environment, being a recognizable, trusted brand mattered more than being the best-optimized page.

The implication was simple. SEO was no longer just about earning clicks. It was about earning inclusion, recognition, and trust across systems that don’t always send users back.

Watch the complete recording of The SEO Update by Yoast – March 2025 Edition.

April: machines started deciding how content is interpreted

By April, the focus shifted again. The question was no longer whether AI would shape search, but how machines decide what content means and when to surface it.

After March exposed visibility gaps and attribution issues, April zoomed in on interpretation. How AI systems read, classify, and extract information became central to SEO outcomes.

What changed in April

April brought clarity to how modern search systems process content.

  • Schema proved its value beyond rankings: Microsoft confirmed that schema markup helps large language models understand content. Bing Copilot used structured data to generate clearer, more reliable answers, reinforcing schema's role in interpretation rather than visual enhancement
  • AI-driven search became multimodal: Image-based queries expanded through Google Lens and Gemini, allowing users to search using photos and visuals instead of text alone
  • AI Overviews expanded during core updates: A noticeable surge in AI Overviews appeared during Google’s March core update, especially in travel, entertainment, and local discovery queries
  • Clicks declined as summaries improved: AI-generated content summaries reduced the need to click through, accelerating zero-click behavior across informational and decision-based searches
  • Content structure mattered more than special optimizations: Clear headings, lists, and semantic cues boosted readability and helped AI systems extract meaning. There were no shortcuts. Standard SEO best practices carried the weight

What to take away from April

April shifted SEO from optimization to interpretation.

Search engines and AI systems didn’t just look for relevance. They looked for clarity. Content that was well-structured, semantically clear, and grounded in real entities was easier to understand, summarize, and reuse.

The lesson was subtle but important. You didn’t need new tricks for AI search. You needed content that was easier for machines to read and harder to misinterpret.

Want the full context? Watch the complete The SEO Update by Yoast – April 2025 Edition webinar.

May: discovery spread beyond search engines

By May, it was no longer sufficient to discuss how search engines interpret content. The bigger question became where discovery was actually happening.

SEO started expanding beyond Google. Visibility fractured across platforms, AI tools, and ecosystems, forcing brands to think about presence rather than placement.

What changed in May

The month highlighted how search and discovery continued to decentralize.

  • Search behavior expanded beyond traditional search engines: Around 39% of consumers now use Pinterest as a search engine, with Gen Z leading adoption. Discovery increasingly happened inside platforms, not just through search bars
  • AI Overviews reached mass adoption: AI Overviews reportedly reached around 1.5 billion users per month and appeared in roughly 13% of searches, with informational queries driving most of that growth
  • Clicks continued to give way to citations: As AI summaries became more common, being referenced or cited mattered more than driving a visit, especially for top-of-funnel queries
  • AI-powered search diversified across tools: Chat-based search experiences added shopping, comparison, and personalization features, further shifting discovery away from classic result pages
  • Economic pressure on content ecosystems increased: Industry voices warned that widespread zero-click answers were starting to weaken the incentives for content creation across the web
  • Trust signals faced stricter scrutiny: Updated rater guidelines targeted fake authority, deceptive design patterns, and manufactured credibility

What to take away from May

May reframed SEO as a visibility problem, not a traffic problem.

When discovery happens across platforms, summaries, and AI systems, success depends on how clearly your content communicates meaning, credibility, and relevance. Rankings still mattered, but they were no longer the primary measure of success.

The message was clear. SEO had outgrown the SERP. Brands that focused on authenticity, semantic clarity, and structured information were better positioned to stay visible wherever search happened next.

Watch the full The SEO Update by Yoast – May 2025 Edition webinar to see all insights in context.

June – July: SEO adjusted to new constraints

By early summer, SEO entered a more uncomfortable phase. Visibility still mattered, but control over how and where content appeared became increasingly limited.

June and July were about adjustment. Search moved closer to AI assistants, ads blended into answers, and traditional SEO signals no longer guaranteed exposure across all search surfaces.

What changed in June and July

This period introduced some of the clearest operational shifts of the year.

  • AI Mode became a first-class search experience: AI Mode was rolled out more broadly, including incognito use, and began to merge into core search experiences. Search was no longer just results. It was conversation, summaries, and follow-ups
  • Ads entered AI-generated answers: Google introduced ads inside AI Overviews and began testing them in conversational AI Mode. Visibility now competes not only with other pages, but with monetized responses
  • Measurement lagged behind reality: Search Console confirmed AI Mode data would be included in performance reports, but without separate filters or APIs. Visibility changed faster than reporting tools could keep up
  • Citations followed platform-specific preferences: Different AI systems favored different sources. Some leaned heavily on encyclopedic content, others on community-driven platforms, reinforcing that one SEO strategy would not fit every system
  • Most AI-linked pages still ranked well organically: Around 97% of URLs referenced in AI Mode ranked in the top 10 organic results, showing that strong traditional SEO remained a prerequisite, even if it was no longer sufficient
  • Content had to resist summarization: Leaks and tests showed that some AI tools rarely surfaced links unless live search was triggered. Generic, easily summarized content became easier to replace
  • Infrastructure became an SEO concern again: AI agents increased crawl and request volume, pushing performance, caching, and server readiness back into focus
  • Search moved beyond text: Voice-based interactions, audio summaries, image-driven queries, and AI-first browsers expanded how users searched and consumed information

What to take away from June and July

This period forced a mindset shift.

SEO could no longer assume that ranking, indexing, or even traffic guaranteed visibility. AI systems decided when to summarize, when to cite, and when to bypass pages entirely. Ads, assistants, and alternative interfaces now sat between users and websites more often than before.

The conclusion was pragmatic. Strong fundamentals still mattered, but they weren't the finish line. SEO now required resilience: content that carries authority, resists simplification, loads fast, and stays relevant even when clicks don't follow.

By the end of July, one thing was clear. SEO wasn’t disappearing. It was operating under new constraints, and the rest of the year would test how well teams adapted to them.

Missed the session? You can watch the full The SEO Update by Yoast – June 2025 Edition recording here.

August: the gap between visibility and value widened

By August, SEO teams were staring at a growing disconnect. Visibility was increasing, but traditional outcomes were harder to trace back to it.

This was the month when the mechanics of AI-driven search became more transparent and more uncomfortable.

What changed in August

August surfaced the operational realities behind AI-powered discovery.

  • Impressions rose while clicks continued to decline: AI Overviews dominated the results, driving exposure without generating traffic. In some cases, conversions still improved, but attribution became harder to prove
  • The “great decoupling” became measurable: Visibility and performance stopped moving in sync. SEO teams saw growth in impressions even as sessions declined
  • Zero-click searches accelerated further: No-click behavior climbed toward 69%, reinforcing that many user journeys now ended inside search interfaces
  • AI traffic stayed small but influential: AI-driven referrals still accounted for under 1% of traffic for most sites, yet they shaped expectations around answers, speed, and convenience
  • Retrieval logic shifted toward context and intent: New retrieval approaches prioritized meaning, relationships, and query context over keyword matching

Must read: On-SERP SEO can help you battle zero-click results

What to take away from August

August made one thing unavoidable.

It reinforced the reality that SEO could no longer rely on traffic as the primary proof of value. Visibility still mattered, but only when paired with outcomes that could survive reduced clicks and blurred attribution.

The lesson was strategic. SEO needed to connect visibility to conversion, brand lift, or long-term trust, not just sessions. Otherwise, its impact would be increasingly hard to defend.

Didn’t catch the live session? You can still watch the full The SEO Update by Yoast – August 2025 Edition webinar.

September: control, attribution, and trust were renegotiated

September pushed the conversation further. It wasn’t just about declining clicks anymore. It was about who controlled discovery, attribution, and access to content.

This was the month where legal, technical, and strategic pressures collided.

What changed in September

September reframed SEO around governance and credibility.

  • AI Mode moved closer to becoming the default: Search experiences shifted toward AI-driven answers with conversational follow-ups and multimodal inputs
  • The decline of the open web was acknowledged publicly: Court filings and public statements confirmed what many publishers were already feeling. Traditional web traffic was under structural pressure
  • Legal scrutiny intensified: High-profile settlements and lawsuits highlighted growing challenges around training data, summaries, and lost revenue
  • Licensing entered the SEO conversation: New machine-readable licensing approaches emerged as early attempts to restore control and consent
  • Snippet visibility became a gateway signal: AI tools relied heavily on search snippets for real-time answers, making concise, extractable content more critical
  • Persona-based strategies gained traction: SEO began shifting from keyword targeting to persona-driven content aligned with how AI systems infer intent
  • Trust eroded around generic AI writing styles: Formulaic, overly polished AI content raised credibility concerns, reinforcing the need for editorial judgment
  • Measurement tools lost stability again: Changes to search parameters disrupted rank tracking, reminding teams that SEO reporting would remain volatile

What to take away from September

September forced SEO to grow up again.

Control over visibility, attribution, and content use was no longer guaranteed. Trust, clarity, and credibility became the only durable advantages in an ecosystem shaped by AI intermediaries.

The takeaway was sobering but useful. SEO could still drive value, but only when aligned with real user needs, strong brand signals, and content that earned its place in AI-driven answers.

Want to dig a little deeper? Watch the full The SEO Update by Yoast – September 2025 Edition webinar.

October: AI search became the destination

October marked a turning point in how SEO performance needed to be interpreted. The data didn’t just shift. It reset expectations entirely.

This was the month when SEO teams had to accept that AI-powered search was no longer a layer on top of results. It was becoming the place where searches ended.

What changed in October

October brought clarity, even if the numbers looked uncomfortable.

  • AI Mode reshaped user behavior: Around a third of searches now involve AI agents, with most sessions staying inside AI panels. Clicks became the exception, not the default
  • AI citations rivalled rankings: Visibility increasingly depended on whether content was selected, summarized, or cited by AI systems, not where it ranked
  • Search engines optimized for ideas, not pages: Guidance from search platforms reinforced that AI systems extract concepts and answers, not entire URLs
  • Metadata lost some direct control: Tests of AI-generated meta descriptions suggested that manual optimization would carry less influence over how content appears
  • Commerce and search continued to merge: AI-driven shopping experiences expanded, signaling that transactional intent would increasingly be handled inside AI interfaces

What to take away from October

October reframed SEO as presence within AI systems.

Traffic still mattered, but it was no longer the primary outcome. The real question became whether your content appeared at all inside AI-driven answers. Clarity, structure, and extractability replaced traditional ranking gains as the most reliable levers.

From this point on, SEO had to treat AI search as a destination, not just a gateway.

November: structure and credibility decided inclusion

If October reset expectations, November showed what actually worked.

This month narrowed the gap between theory and practice. It became clearer why some content consistently surfaced in AI results, while other content disappeared.

What changed in November

November focused on how AI systems select and trust sources.

  • Structured content outperformed clever content: Clear headings, predictable formats, and direct answers made it easier for AI systems to extract and reuse information
  • Schema supported understanding, not visibility alone: Structured data remained valuable, but only when paired with clean, readable on-page content
  • AI-driven shopping and comparisons accelerated: Product data quality, consistency, and accessibility directly influenced whether brands appeared in AI-assisted decision flows
  • Citation pools stayed selective: AI systems relied on a relatively small set of trusted sources, reinforcing the importance of brand recognition and authority
  • Search tooling evolved toward themes, not keywords: Grouped queries and topic-based insights replaced one-keyword performance views

What to take away from November

November made one thing clear. SEO wasn’t about producing more content or optimizing harder. It was about making content easier to understand and harder to ignore.

Clarity beat creativity. Structure beat scale. Authority determined whether content was reused at all.

This month quietly reinforced the fundamentals that would define SEO going forward.

For a complete breakdown, check out the full The SEO Update by Yoast – October and November 2025 Edition recording.

December: SEO moved from ranking to retrieval

December tied the entire year together.

Instead of introducing new disruptions, it clarified what 2025 had been building toward all along. SEO was no longer primarily about ranking pages. It was about enabling retrieval.

What changed in December

The year-end review highlighted the new reality of SEO.

  • Search systems retrieved answers, not pages: AI-driven search experiences pulled snippets, definitions, and summaries instead of directing users to full articles
  • Literal language still mattered: Despite advances in understanding, AI systems relied heavily on exact phrasing. Terminology choices directly affected retrieval
  • Content structure became mandatory: Front-loaded answers, short paragraphs, lists, and clear sections made content usable for AI systems
  • Relevance replaced ranking as the core signal: Being the clearest and most contextually relevant answer mattered more than traditional ranking factors
  • E-E-A-T acted as a gatekeeper: Recognized expertise, authorship, and trust signals determined whether content was eligible for reuse
  • Authority reduced AI errors: Strong credibility signals helped AI systems select more reliable sources and reduced hallucinated answers
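The structural habits above, particularly front-loaded answers and short paragraphs, can be checked mechanically. The snippet below is an illustrative sketch, not a real tool: a crude heuristic that flags whether an article opens with a paragraph short enough for an AI system to lift whole. The 40-word threshold is an assumption for illustration.

```python
# Illustrative heuristic (hypothetical, not an actual retrieval rule):
# AI systems tend to lift concise opening passages, so check whether the
# first paragraph is a short, direct statement rather than a slow wind-up.

def front_loads_answer(paragraphs: list[str], max_words: int = 40) -> bool:
    """Return True if the opening paragraph is short enough to lift whole."""
    if not paragraphs:
        return False
    first = paragraphs[0]
    return len(first.split()) <= max_words

article = [
    "Schema markup is structured data that helps machines interpret a page.",
    "A longer history of the format follows in the sections below...",
]
print(front_loads_answer(article))  # a short, direct opener passes
```

A real editorial workflow would layer on more signals (heading structure, terminology consistency), but the principle is the same: make the answer extractable before worrying about anything else.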

What to take away from December

December didn’t declare the end of SEO. It defined its next phase.

SEO matured into visibility management for AI-driven systems. Success depended on clarity, credibility, and structure, not shortcuts or volume. The fundamentals still worked, but only when applied with discipline.

By the end of 2025, the direction was clear. SEO didn’t get smaller. It got more precise.

Missed the session? You can watch the full The SEO Update by Yoast – December 2025 Edition recording here.

SEO evolved into visibility management for AI-driven search. Precision replaced volume.

2025 didn’t rewrite SEO. It clarified it.

Search moved from ranking pages to retrieving answers. From rewarding volume to rewarding clarity. From clicks to credibility. And from optimization tricks to systems-level understanding.

The fundamentals still matter. Technical health, helpful content, and strong SEO foundations are non-negotiable. But they are no longer the finish line. What separates visible brands from invisible ones now is how clearly their content can be understood, trusted, and reused by AI-driven search systems.

Going into 2026, the goal isn’t to outsmart search engines. It’s to make your expertise unmistakable. Write for humans, structure for machines, and build authority that holds up even when clicks don’t follow.

SEO didn’t get smaller this year. It got more precise. Stay with us for our 2026 verdict on where search goes next.

Google Reveals The Top Searches Of 2025 via @sejournal, @MattGSouthern

In 2025, Google’s AI tool Gemini topped global searches. People tracked cricket matches between India and England, looked up details on the new Pope, and searched for information about Iran and the TikTok ban. They followed LA fires and government shutdowns.

But between the headlines, they also looked up Pedro Pascal and Mikey Madison. They wanted to make hot honey and marry me chicken. They planned trips to Prague and Edinburgh. They searched for bookstores from Livraria Lello in Porto to Powell’s in Portland.

Google’s Year in Search tracks what spiked. These lists show queries that grew the fastest relative to 2024, ranging from breaking news to entertainment, sports, and lifestyle. Together, they present a picture of what captured attention throughout the year.

Top Searches Of 2025

Google’s AI assistant Gemini became the top trending search globally, showing how widely AI tools were embraced throughout the year. The rest of the top 10 was filled with sports, with cricket matches between India and England, the Club World Cup, and the Asia Cup capturing a lot of public interest.

The global top 10 trending searches were:

  1. Gemini
  2. India vs England
  3. Charlie Kirk
  4. Club World Cup
  5. India vs Australia
  6. Deepseek
  7. Asia Cup
  8. Iran
  9. iPhone 17
  10. Pakistan and India

The US list diverged from global trends, with Charlie Kirk at the top and entertainment properties ranking highly. KPop Demon Hunters secured the second position.

The US top 10 trending searches were:

  1. Charlie Kirk
  2. KPop Demon Hunters
  3. Labubu
  4. iPhone 17
  5. One Big Beautiful Bill Act
  6. Zohran Mamdani
  7. DeepSeek
  8. Government shutdown
  9. FIFA Club World Cup
  10. Tariffs

News & Current Events

Natural disasters and political events shaped what news topics people were searching for. The LA Fires, Hurricane Melissa, and the TikTok ban drew worldwide interest, while US searches most often focused on topics like the One Big Beautiful Bill Act and the government shutdown.

Global top 10:

  1. Charlie Kirk assassination
  2. Iran
  3. US Government Shutdown
  4. New Pope chosen
  5. LA Fires
  6. Hurricane Melissa
  7. TikTok ban
  8. Zohran Mamdani elected
  9. USAID
  10. Kamchatka Earthquake and Tsunami

US top 10:

  1. One Big Beautiful Bill Act
  2. Government shutdown
  3. Charlie Kirk assassination
  4. Tariffs
  5. No Kings protest
  6. Los Angeles fires
  7. New Pope chosen
  8. Epstein files
  9. U.S. Presidential Inauguration
  10. Hurricane Melissa

AI-Generated Content Leads US Trends

AI-generated content captured everyone’s attention in the US, with AI-created images and characters popping up all over different categories. The viral AI Barbie, AI action figures, and Ghibli-style AI art topped this year’s trends.

The top US trends included:

  1. AI action figure
  2. AI Barbie
  3. Holy airball
  4. AI Ghostface
  5. AI Polaroid
  6. Chicken jockey
  7. Bacon avocado
  8. Anxiety dance
  9. Unfortunately, I do love
  10. Ghibli

People

Music artists and political figures were among the most searched people worldwide. d4vd, Kendrick Lamar, and the newly elected Pope Leo XIV attracted the most international attention. In the US, searches mainly centered on political figures such as Zohran Mamdani and Karoline Leavitt.

Global top 10:

  1. d4vd
  2. Kendrick Lamar
  3. Jimmy Kimmel
  4. Tyler Robinson
  5. Pope Leo XIV
  6. Vaibhav Sooryavanshi
  7. Shedeur Sanders
  8. Bianca Censori
  9. Zohran Mamdani
  10. Greta Thunberg

US top 10:

  1. Zohran Mamdani
  2. Tyler Robinson
  3. d4vd
  4. Erika Kirk
  5. Pope Leo XIV
  6. Shedeur Sanders
  7. Bonnie Blue
  8. Karoline Leavitt
  9. Andy Byron
  10. Jimmy Kimmel

Entertainment

Actors

Breakthrough performances drove increased actor searches. Mikey Madison saw a spike in global searches after her acclaimed role in Anora, while Pedro Pascal led searches in the US.

Global top 5:

  1. Mikey Madison
  2. Lewis Pullman
  3. Isabela Merced
  4. Song Ji Woo
  5. Kaitlyn Dever

US top 5:

  1. Pedro Pascal
  2. Malachi Barton
  3. Walton Goggins
  4. Pamela Anderson
  5. Charlie Sheen

Movies

Anticipated franchise entries and original films topped movie searches. Anora was the top search globally, while KPop Demon Hunters gained US popularity alongside major releases such as The Minecraft Movie and Thunderbolts*.

Global top 5:

  1. Anora
  2. Superman
  3. Minecraft Movie
  4. Thunderbolts*
  5. Sinners

US top 5:

  1. KPop Demon Hunters
  2. Sinners
  3. The Minecraft Movie
  4. Happy Gilmore 2
  5. Thunderbolts*

Books

Contemporary romance and classic literature were the most searched genres. Colleen Hoover’s “Regretting You” and Rebecca Yarros’s “Onyx Storm” topped both global and US charts, while George Orwell’s “Animal Farm” and “1984” saw a resurgence in popularity.

Global top 10:

  1. Regretting You – Colleen Hoover
  2. Onyx Storm – Rebecca Yarros
  3. Lights Out – Navessa Allen
  4. The Summer I Turned Pretty – Jenny Han
  5. The Housemaid – Freida McFadden
  6. Frankenstein – Mary Shelley
  7. It – Stephen King
  8. Animal Farm – George Orwell
  9. The Witcher – Andrzej Sapkowski
  10. Diary Of A Wimpy Kid – Jeff Kinney

US top 10:

  1. Regretting You – Colleen Hoover
  2. Onyx Storm – Rebecca Yarros
  3. Lights Out – Navessa Allen
  4. The Summer I Turned Pretty – Jenny Han
  5. The Housemaid – Freida McFadden
  6. It – Stephen King
  7. Animal Farm – George Orwell
  8. The Great Gatsby – F. Scott Fitzgerald
  9. To Kill a Mockingbird – Harper Lee
  10. 1984 – George Orwell

Podcasts

Podcast searches were driven by political commentary and celebrity-hosted shows. The Charlie Kirk Show ranked first worldwide, while the sports podcast New Heights topped the US list, where Michelle Obama’s “IMO” also gained attention.

Global top 10:

  1. The Charlie Kirk Show
  2. New Heights
  3. This Is Gavin Newsom
  4. Khloé In Wonder Land
  5. Good Hang With Amy Poehler
  6. Candace
  7. The Meidastouch Podcast
  8. The Ruthless Podcast
  9. The Venus Podcast
  10. The Mel Robbins Podcast

US top 10:

  1. New Heights
  2. The Charlie Kirk Show
  3. IMO with Michelle Obama and Craig Robinson
  4. This Is Gavin Newsom
  5. Good Hang With Amy Poehler
  6. Khloé In Wonder Land
  7. The Severance Podcast
  8. The Rosary in a Year
  9. Unbothered
  10. The Bryce Crawford Podcast

Sports Events

International soccer and cricket tournaments attracted the most global sports searches. The FIFA Club World Cup, Asia Cup, and ICC Champions Trophy were the top interests worldwide, while in the US, searches centered on domestic events like the Ryder Cup and UFC championships.

Global top 10:

  1. FIFA Club World Cup
  2. Asia Cup
  3. ICC Champions Trophy
  4. ICC Women’s World Cup
  5. Ryder Cup
  6. EuroBasket
  7. Concacaf Gold Cup
  8. 4 Nations Face-Off
  9. UFC 313
  10. UFC 311

US top 10:

  1. Ryder Cup
  2. 4 Nations Face-Off
  3. UFC 313
  4. UFC 311
  5. College Football Playoff
  6. Super Bowl LX
  7. NBA Finals
  8. World Series
  9. Stanley Cup Finals
  10. March Madness

Lifestyle And Gaming

Anticipated game releases led search trends. ARC Raiders was the most-searched title globally, while Clair Obscur: Expedition 33 was the top search in the US, alongside popular titles such as Battlefield 6 and Hollow Knight: Silksong.

Global top 5 games:

  1. ARC Raiders
  2. Battlefield 6
  3. Strands
  4. Split Fiction
  5. Clair Obscur: Expedition 33

US top 5 games:

  1. Clair Obscur: Expedition 33
  2. Battlefield 6
  3. Hollow Knight: Silksong
  4. ARC Raiders
  5. The Elder Scrolls IV: Oblivion Remastered

Music (US Only)

Emerging artists and well-known musicians drove music searches. d4vd led musician searches, while Taylor Swift led song rankings with several tracks, including “Wood” and “The Fate of Ophelia.”

Top 5 musicians:

  1. d4vd
  2. KATSEYE
  3. Bad Bunny
  4. Sombr
  5. Doechii

Top 5 songs:

  1. Wood – Taylor Swift
  2. DtMF – Bad Bunny
  3. Golden – HUNTR/X
  4. The Fate of Ophelia – Taylor Swift
  5. Father Figure – Taylor Swift

Travel (US Only)

Major cities and popular European destinations drove travel itinerary searches. Boston, Seattle, and Tokyo led travel planning searches, while Prague and Edinburgh were notably popular for European trips.

Top 10 travel itinerary searches:

  1. Boston
  2. Seattle
  3. Tokyo
  4. New York
  5. Prague
  6. London
  7. San Diego
  8. Acadia National Park
  9. Edinburgh
  10. Miami

Google Maps

Google Maps data represents the most-searched locations on Maps in 2025.

Bookstores

Historic and iconic bookstores drew worldwide attention on Google Maps. Portugal’s Livraria Lello and Tokyo’s Animate Ikebukuro were the most searched internationally, while Powell’s City of Books in Portland ranked highest in US bookstore interest.

Global top 5:

  1. Livraria Lello, Porto District, Portugal
  2. animate Ikebukuro main store, Tokyo, Japan
  3. El Ateneo Grand Splendid, Buenos Aires, Argentina
  4. Shakespeare and Company, Île-de-France, France
  5. Libreria Acqua Alta, Veneto, Italy

US top 5:

  1. Powell’s City of Books, Portland, Oregon
  2. Strand Book Store, New York, New York
  3. The Last Bookstore, Los Angeles, California
  4. Kinokuniya New York, New York, New York
  5. Stanford University Bookstore, Stanford, California

Looking Back

That’s what caught attention in 2025. People searched for breaking news about natural disasters and political changes. They tracked sports tournaments and looked up new AI tools. They followed major world events.

And between those searches, they looked up actors after breakthrough performances, found recipes they saw on social feeds, and planned trips to places they’d been thinking about for years.

The trends don’t tell you what mattered most. They tell you what people were curious about when they had a spare moment, whether that was understanding a major news event or finding the perfect travel itinerary.

You can watch the full Google Year In Search video below:

The full Year in Search data is at trends.withgoogle.com/year-in-search/2025.

                More resources:

                Ironman, Not Superman via @sejournal, @DuaneForrester

I recently became frustrated while working with Claude, and it led me to an interesting exchange with the platform and, in turn, to examining my own expectations, actions, and behavior…and that was eye-opening. The short version is that I want to keep thinking of AI as an assistant, like a lab partner. In reality, it needs to be seen as a robot in the lab – capable of impressive things, given the right direction, but only within a solid framework. There are still so many things it’s not capable of, and we, as practitioners, sometimes forget this and make assumptions based on what we wish a platform were capable of, instead of grounding ourselves in the reality of its limits.

And while the capabilities of AI today are truly impressive, they pale in comparison to what people are capable of. Do we sometimes overlook this difference and ascribe human characteristics to AI systems? I bet we all have at one point or another. We’ve assumed accuracy and taken direction. We’ve taken for granted that “this is obvious” and expected the answer to include the obvious. And we’re upset when it fails us.

AI sometimes feels human in how it communicates, yet it does not behave like a human in how it operates. That gap between appearance and reality is where most confusion, frustration, and misuse of large language models actually begins. Research into human-computer interaction shows that people naturally anthropomorphize systems that speak, respond socially, or mirror human communication patterns.

                This is not a failure of intelligence, curiosity, or intent on the part of users. It is a failure of mental models. People, including highly skilled professionals, often approach AI systems with expectations shaped by how those systems present themselves rather than how they truly work. The result is a steady stream of disappointment that gets misattributed to immature technology, weak prompts, or unreliable models.

                The problem is none of those. The problem is expectation.

                To understand why, we need to look at two different groups separately. Consumers on one side, and practitioners on the other. They interact with AI differently. They fail differently. But both groups are reacting to the same underlying mismatch between how AI feels and how it actually behaves.

                The Consumer Side, Where Perception Dominates

                Most consumers encounter AI through conversational interfaces. Chatbots, assistants, and answer engines speak in complete sentences, use polite language, acknowledge nuance, and respond with apparent empathy. This is not accidental. Natural language fluency is the core strength of modern LLMs, and it is the feature users experience first.

When something communicates the way a person does, humans naturally assign it human traits. Understanding. Intent. Memory. Judgment. This tendency is well documented in decades of research on human-computer interaction and anthropomorphism. It is not a flaw. It is how people make sense of the world.

                From the consumer’s perspective, this mental shortcut usually feels reasonable. They are not trying to operate a system. They are trying to get help, information, or reassurance. When the system performs well, trust increases. When it fails, the reaction is emotional. Confusion. Frustration. A sense of having been misled.

                That dynamic matters, especially as AI becomes embedded in everyday products. But it is not where the most consequential failures occur.

                Those show up on the practitioner side.

                Defining Practitioner Behavior Clearly

                A practitioner is not defined by job title or technical depth. A practitioner is defined by accountability.

                If you use AI occasionally for curiosity or convenience, you are a consumer. If you use AI repeatedly as part of your job, integrate its output into workflows, and are accountable for downstream outcomes, you are a practitioner.

                That includes SEO managers, marketing leaders, content strategists, analysts, product managers, and executives making decisions based on AI-assisted work. Practitioners are not experimenting. They are operationalizing.

                And this is where the mental model problem becomes structural.

                Practitioners generally do not treat AI like a person in an emotional sense. They do not believe it has feelings or consciousness. Instead, they treat it like a colleague in a workflow sense. Often like a capable junior colleague.

                That distinction is subtle, but critical.

                Practitioners tend to assume that a sufficiently advanced system will infer intent, maintain continuity, and exercise judgment unless explicitly told otherwise. This assumption is not irrational. It mirrors how human teams work. Experienced professionals regularly rely on shared context, implied priorities, and professional intuition.

                But LLMs do not operate that way.

                What looks like anthropomorphism in consumer behavior shows up as misplaced delegation in practitioner workflows. Responsibility quietly drifts from the human to the system, not emotionally, but operationally.

                You can see this drift in very specific, repeatable patterns.

                Practitioners frequently delegate tasks without fully specifying objectives, constraints, or success criteria, assuming the system will infer what matters. They behave as if the model maintains stable memory and ongoing awareness of priorities, even when they know, intellectually, that it does not. They expect the system to take initiative, flag issues, or resolve ambiguities on its own. They overweight fluency and confidence in outputs while under-weighting verification. And over time, they begin to describe outcomes as decisions the system made, rather than choices they approved.

                None of this is careless. It is a natural transfer of working habits from human collaboration to system interaction.

                The issue is that the system does not own judgment.

                Why This Is Not A Tooling Problem

                When AI underperforms in professional settings, the instinct is to blame the model, the prompts, or the maturity of the technology. That instinct is understandable, but it misses the core issue.

                LLMs are behaving exactly as they were designed to behave. They generate responses based on patterns in data, within constraints, without goals, values, or intent of their own.

                They do not know what matters unless you tell them. They do not decide what success looks like. They do not evaluate tradeoffs. They do not own outcomes.

                When practitioners assign thinking tasks that still belong to humans, failure is not a surprise. It is inevitable.

                This is where thinking of Ironman and Superman becomes useful. Not as pop culture trivia, but as a mental model correction.

                Ironman, Superman, And Misplaced Autonomy

                Superman operates independently. He perceives the situation, decides what matters, and acts on his own judgment. He stands beside you and saves the day.

                That is how many practitioners implicitly expect LLMs to behave inside workflows.

                Ironman works differently. The suit amplifies strength, speed, perception, and endurance, but it does nothing without a pilot. It executes within constraints. It surfaces options. It extends capability. It does not choose goals or values.

                LLMs are Ironman suits.

                They amplify whatever intent, structure, and judgment you bring to them. They do not replace the pilot.

                Once you see that distinction clearly, a lot of frustration evaporates. The system stops feeling unreliable and starts behaving predictably, because expectations have shifted to match reality.

                Why This Matters For SEO And Marketing Leaders

                SEO and marketing leaders already operate inside complex systems. Algorithms, platforms, measurement frameworks, and constraints you do not control are part of daily work. LLMs add another layer to that stack. They do not replace it.

                For SEO managers, this means AI can accelerate research, expand content, surface patterns, and assist with analysis, but it cannot decide what authority looks like, how tradeoffs should be made, or what success means for the business. Those remain human responsibilities.

                For marketing executives, this means AI adoption is not primarily a tooling decision. It is a responsibility placement decision. Teams that treat LLMs as decision makers introduce risk. Teams that treat them as amplification layers scale more safely and more effectively.

                The difference is not sophistication. It is ownership.

                The Real Correction

                Most advice about using AI focuses on better prompts. Prompting matters, but it is downstream. The real correction is reclaiming ownership of thinking.

                Humans must own goals, constraints, priorities, evaluation, and judgment. Systems can handle expansion, synthesis, speed, pattern detection, and drafting.

                When that boundary is clear, LLMs become remarkably effective. When it blurs, frustration follows.

                The Quiet Advantage

                Here is the part that rarely gets said out loud.

                Practitioners who internalize this mental model consistently get better results with the same tools everyone else is using. Not because they are smarter or more technical, but because they stop asking the system to be something it is not.

                They pilot the suit, and that’s their advantage.

                AI is not taking control of your work. You are not being replaced. What is changing is where responsibility lives.

Treat AI like a person, and you will be disappointed. Treat it like a system, and you will be limited. Treat it like an Ironman suit, and YOU will be amplified.

                The future does not belong to Superman. It belongs to the people who know how to fly the suit.

                More Resources:


                This post was originally published on Duane Forrester Decodes.


                Featured Image: Corona Borealis Studio/Shutterstock

                SEO Pulse: AI Mode Hits 75M Users, Gemini 3 Flash Launches via @sejournal, @MattGSouthern

                In this week’s Pulse: updates include AI Mode’s growth and missing features, what Google’s latest model brings to search, and what drives citations across different AI experiences.

                Google’s Nick Fox confirmed that AI Mode has reached 75 million daily active users, but the personal context features promised at I/O are still in internal testing.

                Google launched Gemini 3 Flash with improved speed and performance. Ahrefs research showed AI Mode and AI Overviews cite different URLs.

                Here’s what matters for you that happened this week.

                Google’s AI Mode Hits 75M Daily Users, But Personal Context Still Delayed

                Google’s Nick Fox confirmed AI Mode has grown to 75 million daily active users worldwide, but acknowledged personal context features announced at I/O seven months ago remain in internal testing.

                Key Facts:

                In an interview on the AI Inside podcast, Fox said personal context features that would connect AI Mode to Gmail and other Google apps are “still to come” with no public timeline.

                AI Mode queries run two to three times longer than traditional searches. Google rolled out a preferred sources feature globally and announced improvements to links within AI experiences.

                Why This Matters

                The personal context delay affects how you should think about AI Mode optimization. If you’ve been preparing for a world where AI Mode knows users’ email confirmations and calendar entries, that world isn’t arriving soon. Currently, users manually add context to longer queries.

                That changes what you prioritize. Content still needs to answer the longer, more specific questions users are asking. But the automated personalization layer that might have made some informational queries feel self-contained inside Google’s interface isn’t active yet.

                The 75 million daily active user figure matters for traffic planning. AI Mode is no longer a small experiment. It’s a significant channel that’s still evolving. The query length data (two to three times longer than traditional searches) suggests users are having conversations rather than making quick lookups, which affects what content formats and depth work best.

                What People Are Saying

                AI Inside shared additional highlights on LinkedIn:

                “Nick Fox suggests that optimizing for Google’s AI experiences mirrors the approach for traditional search: building a great site with great content

                … focus on building for users and creating content that resonates with human readers.”

                Read our full coverage: Google’s AI Mode Personal Context Features “Still To Come”

                Google Launches Gemini 3 Flash With Faster Performance

                Google launched Gemini 3 Flash, its latest AI model focused on speed and efficiency, and immediately shipped it in search products.

                Key Facts:

Gemini 3 Flash delivers improved performance across benchmarks while maintaining faster response times than previous models. It’s now the default model in the Gemini app and in AI Mode for Search.

                Why SEOs Should Pay Attention

                Google’s shipping speed for Gemini 3 Flash suggests how AI model updates might flow into search products going forward. Rather than waiting months between model releases and search integration, you’re now dealing with immediate deployment of new models that can change how AI features behave.

Faster performance matters for user experience in AI Mode and AI Overviews, where latency affects whether people keep using these features or switch to traditional results. Faster models make longer multi-turn interactions more practical, potentially leading to more search sessions.

                What People Are Saying

                Robby Stein, SVP of Product for Google Search, posted about the rollout on LinkedIn:

                “3 Flash brings the incredible reasoning capabilities of Gemini 3 Pro, at the speed you expect of Search. So AI Mode better interprets your toughest, multi-layered questions – considering each of your constraints or requirements – and provides a visually digestible response along with helpful links to dive deeper on the web.”

                Rhiannon Bell, VP of user experience for Google Search, noted that this update brings Gemini 3 Pro to more users. Bell highlights the ability of 3 Pro to redesign search results:

“My team is constantly thinking about what “helpful” design means, and Gemini 3 Pro is allowing us to fundamentally re-architect what a helpful Search response looks like.”

                Hema Budaraju, vice president of product management for Search at Google, highlighted the “speed and smarts”:

“As product builders, we often need to balance speed and smarts. Today, we’re bringing that even closer together: Gemini 3 Flash is rolling out globally in Search as the new default model for AI Mode… We’re also putting our Pro models in more hands. Gemini 3 Pro is now available to everyone in the U.S.”

                Read our full coverage: Google Gemini 3 Flash Becomes Default In Gemini App & AI Mode

                AI Mode & AI Overviews Cite Same URLs Only 13.7% Of The Time

                Ahrefs analyzed 730,000 query pairs and found AI Mode and AI Overviews reach semantically similar conclusions 86% of the time, but cite the same specific URLs just 13.7% of the time.

                Key Facts:

                Ahrefs compared AI Mode and AI Overview responses across identical queries. While both experiences frequently agree on general information, they’re pulling that information from different sources.

                Why SEOs Should Pay Attention

                You’re dealing with a split optimization target. Getting cited in AI Overviews doesn’t automatically get you cited in AI Mode, even when both systems are answering the same query with similar information. These are two separate citation engines, not one system with different interfaces.

                If you track which AI experience appears for your target queries, you can focus citation efforts accordingly. For queries where AI Mode dominates, publishing frequency and content freshness may matter more. For queries where AI Overviews appear, authority signals and deep resource coverage may matter more.

                The 13.7% overlap suggests many sites will see uneven results across surfaces. You might do well in one experience without automatically carrying that visibility into the other.
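Ahrefs hasn’t published its exact methodology, but as a rough illustration, here is a minimal sketch of one way to quantify citation overlap between two AI surfaces, using Jaccard similarity over hypothetical URL lists:

```python
def citation_overlap(urls_a, urls_b):
    """Jaccard similarity between two sets of cited URLs."""
    a, b = set(urls_a), set(urls_b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Hypothetical citations for the same query on two surfaces
ai_mode = ["https://example.com/guide", "https://blog.example.org/post", "https://news.example.net/story"]
ai_overviews = ["https://example.com/guide", "https://docs.example.io/page"]

print(citation_overlap(ai_mode, ai_overviews))  # 1 shared URL out of 4 unique -> 0.25
```

Running a check like this across your own tracked queries would show whether your site exhibits the same low-overlap pattern Ahrefs found.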

                What People Are Saying

                Despina Gavoyannis, senior SEO specialist at Ahrefs, summarized the results on LinkedIn:

                “Only 13.7% citation overlap … 86% semantic similarity … In short, 9 out of 10 times, AI Mode and AI Overviews agreed on what to say; they just said it differently and cited different sources.”

                Read our full coverage: Google AI Mode & AI Overviews Cite Different URLs, Per Ahrefs Report

                Theme Of The Week: AI Search In Practice, Not Theory

                Each story this week shows AI search moving from promise to operational reality.

AI Mode’s 75 million daily users and the immediate Gemini 3 Flash deployment show that Google’s AI features are production systems at scale, not lab experiments. The personal context delay shows the gap between what was announced and what’s shipping. The citation study quantifies how these systems work differently despite appearing similar.

For you, this week is about treating AI search as current infrastructure rather than future speculation. Optimize for how AI Mode and AI Overviews work today: longer manual queries without personal context, immediate model updates that can change behavior, and separate optimization targets for each experience.

                The features Google promised at I/O aren’t here yet, but 75 million people are using what is here.

                Top Stories Of The Week:

                More Resources:


                Featured Image: Pixel-Shot/Shutterstock

                Search & Social: How To Engineer Cross-Channel Synergy via @sejournal, @rio_seo

                When your search and social strategies are intertwined, they work together like a well-oiled machine, and your search visibility can multiply.

For years, SEO and social media teams more often than not operated in silos, rarely engaging with each other, let alone working in tandem. SEO teams focused on optimizing for the latest Google algorithm update while social media teams worked earnestly to respond to brand mentions.

Today, these functions must move from parallel paths to genuine collaboration. Audience engagement on social platforms can influence how search engines interpret trust, authority, and relevance.

                Google’s Helpful Content evolution highlighted social platforms in the search engine results pages (SERPs). Discussion forums like Reddit and Quora often surface answers to queries at the top of the SERPs, especially answers that have plenty of comments and upvotes.

Modern marketing means SEO and social go hand in hand, building unified systems that maximize cross-channel amplification. Together, these two once-divergent roles work toward the same goals: ranking higher, improving brand recognition, and building a consistent story across every touchpoint.

                Why Search And Social Belong Together

Search and social belong together. They aren’t pursuing divergent tactics; they’re working in unison to compound your marketing and SEO efforts. The marriage of the two improves the customer experience from the first search to the reviews that inform the decision-making phase of the sales journey.

                Here’s what that synergy might look like in practice.

                1. Social Creates The Spark Of Discovery

A decade ago, traditional blue links reigned supreme. Today, social media is “top of the funnel” for organic search. According to GWI, nearly half (46%) of Gen Z turns to social media first when conducting product research. Not Google. But many of those users will later turn to search to validate and compare what they discovered on social media.

Social media content shouldn’t just be entertaining or chase the latest viral trend. It must answer the questions your customers are asking. Smart marketing leaders analyze trending social conversations to discover the queries and phrases people use around their products or services. They then work with SEO teams to optimize for those terms through visual and written content as well as back-end optimizations.

                Knowing that social sentiment is often the early determinant of rising search demand, it’s crucial for CMOs, SEOs, and social marketers alike to watch for engagement spikes around an emerging topic and create high-quality content quickly in order to turn buzz into business.

                2. Search Anchors And Sustains The Momentum

                Social engagement is fast and fickle. What’s trending one day is quickly forgotten the next. Search visibility, on the other hand, is a slow process that doesn’t happen overnight. Together, they create the right balance of speed and longevity. A social post may receive thousands of comments in a matter of hours, but an optimized landing page built on that same topic can rank and drive sales for years to come.

Consider Gong, which generates roughly 2.2 million visits a month from organic traffic, according to SimilarWeb. The company invests real effort into growing its LinkedIn presence. At the bottom of Gong’s blog posts, they don’t ask readers to navigate to a demo or a related blog post; they invite them to follow Gong on LinkedIn, and the effort is paying off.

Gong has 315,000 followers on LinkedIn. Its competitor, Chorus, has about a third of that. Gong also shares about 10-15 posts on its company page per week. The velocity has paid off: many of its posts receive thousands of interactions and hundreds of comments. This is the kind of momentum Google notices and favors, making the content more likely to be highlighted in the SERPs.

                3. Shared Data Creates Precision

When SEO and social data remain separated, it’s impossible to see the bigger picture and extract key takeaways. Integrating both data sets helps marketing leaders identify what’s working and what isn’t. It shows which content is delivering a return on investment and which should be repurposed. It surfaces patterns such as posts that earn high engagement but target low search volume, or blog posts that earn clicks but fail to get shared on social.

By cross-referencing these insights, teams gain a 360° view of their performance. That level of insight fuels smarter creative, better results, and higher ROI.
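As a toy illustration of that cross-referencing (the topic names and counts below are invented), a simple script can flag topics where one channel far outpaces the other; the 3x threshold is an arbitrary choice:

```python
# Hypothetical per-topic engagement counts from each channel
social = {"ai tools": 4200, "tariffs": 180, "travel itineraries": 950}
search = {"ai tools": 3900, "tariffs": 5400, "travel itineraries": 120}

def mismatches(social, search, ratio=3.0):
    """Flag topics where one channel outpaces the other by at least `ratio`."""
    flags = {}
    for topic in social.keys() & search.keys():
        s, q = social[topic], search[topic]
        if s >= ratio * q:
            flags[topic] = "high social, low search"
        elif q >= ratio * s:
            flags[topic] = "high search, low social"
    return flags

print(mismatches(social, search))
```

Topics flagged "high social, low search" are candidates for evergreen landing pages; "high search, low social" topics are candidates for social repurposing.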

                How To Engineer Cross-Channel Synergy

                Bridging the gap between SEO and social teams requires work. When two teams are accustomed to working independently, structure and strategy must come into play. Below are the five tactics to ensure cross-team synergy is as seamless as possible.

                1. Share Objectives

Merge SEO and social teams with intent, aligning on KPIs so that everyone is working toward the same goal. Joint goals, such as brand visibility and intent coverage, help teams come together to maximize organizational success.

                For example, both SEOs and social marketers should work towards visibility, tracking growth of branded keywords, hashtags, and mentions (both on social and search). Joint goals motivate teams to work closely together, turning to one another to pave the path towards success. This shared measurement philosophy removes team rivalry and breeds co-creators of growth.

                2. Plan Content Around Signals

                Building content around internal agendas rarely works well. Cross-channel listening opens the door to conversations content marketers often aren’t part of. Social media marketers use social listening to detect emotional signals (what people care about now), while SEOs analyze search data to discern what users will look for next. Merging the two enables content marketers to create click-worthy, relevant content that meets audiences exactly where interest turns into action.

                Forecasting content identifies future search demand by tracking early-stage social conversations, leading to a strategy that stays well ahead of your competitors.

                3. Implement A Content Relay System

                Top-performing brands treat search and social as relay partners. They work together for the greater good of the organization and embrace the team player ideology. Here’s how the content relay model works when implemented right:

                1. Social Spark: Social media teams create a thought leadership thread, poll, or conversation starter in hopes of attracting interest and engagement.
                2. Search Foundation: Based on the responses, social hands off those insights to content to produce a more detailed blog or landing page. SEO helps optimize the content to improve the chances of appearing in the SERPs.
                3. Social Reinforcement: Once the piece has been optimized for search, hand it back to social to share with audience-driven context (you asked, we answered/analyzed).
                4. Search Reinforcement: Embed high-performing social content (such as quotes, videos, or user-generated content) into pages for richer signals. Use structured data to tell search engines what the content is and how to index it.

                Every piece of content fuels another, creating a loop of engagement, validation, and authority that compounds across platforms and extends the content’s lifetime.
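Step 4 of the relay mentions using structured data to tell search engines what embedded social content is. One common pattern is JSON-LD using schema.org’s Article and VideoObject types. The sketch below generates that markup in Python; the headline, URLs, and dates are placeholders, not a real page.

```python
import json

# A hedged sketch of generating JSON-LD for a blog post that embeds a
# high-performing social video. schema.org's Article and VideoObject types
# are real; every value below is an illustrative placeholder.

article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "You Asked, We Analyzed: What Buyers Really Want",
    "author": {"@type": "Organization", "name": "Example Brand"},
    "video": {
        "@type": "VideoObject",
        "name": "Customer Q&A highlights",
        "description": "Top questions from our LinkedIn poll, answered.",
        "uploadDate": "2025-11-01",
        "thumbnailUrl": "https://example.com/thumb.jpg",
        "contentUrl": "https://example.com/qa-highlights.mp4",
    },
}

# Embed the output inside a <script type="application/ld+json"> tag on the page.
print(json.dumps(article, indent=2))
```

Generating the markup from a template like this keeps it consistent across pages and easy to validate with Google’s Rich Results Test.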

                4. Pair AI With Human Expertise

                AI isn’t a replacement for human creativity and expertise. It’s merely an aid to help power smarter business decisions. In the case of social media and search, AI-powered tools can be used to help analyze language consistency and detect sentiment shifts. For example, if users are consistently complaining about long wait times at your fast-food chain in Memphis, TN, AI can flag this as an issue that needs to be resolved before your reputation and bottom line suffer.

                Similarly, AI can also identify when your top-performing social post is driving branded search volume or when a keyword starts trending related to your products or services in user-generated content. Intelligent automation enables your team to be notified in real time, allowing you to strike while the iron is hot.

                5. Align Leadership And Cultural Change

                Marketing leaders must create environments where SEOs and social media team members understand why and how they’re working together. This might include:

                • Hosting bi-weekly meetings to bring both teams up to speed on shared goals and priorities.
                • Creating “bridge roles” like Audience Insights Manager.
                • Recognizing shared wins (e.g., content that ranked and went viral on TikTok).
                • Giving each team transparency into what the other is working on and toward.
                • Holding in-person team-building events so both teams can connect outside of work.

                A good company culture that fosters collaboration is imperative for team building, employee retention, and business success. When collaboration feels like extra work or leaves one team in the dark, performance and employee satisfaction suffer.

                6. Embrace An Ecosystem Mentality

                Once marketing leaders align data, culture, and goals, the organization’s ecosystem begins to operate like a living, breathing organism. Search informs social, social accelerates search, and together they improve the longevity of your business. In return, your business becomes more resilient to Google’s constant algorithm evolution, and strategy shifts from siloed, stagnant results to seamless execution.

                A Real-World Case: Social And Search Synergy In Action

                When I worked with a leading fast-casual Mexican restaurant, the business had inconsistent reviews across its hundreds of locations. We centralized customer feedback and identified common complaints and praise, which led to a revamped online reputation.

                Within just two months, according to our agency’s internal rating metrics, the chain’s average star rating rose from 4.2 to 4.4, five-star reviews increased by 32%, and no one-star reviews were left during that period. Positive feedback trends emerged almost immediately, signaling that local teams were acting on customer feedback faster and more diligently.

                The ripple effects reached both search and social ecosystems as improved reviews and higher star ratings typically lead to a boost in visibility in Google Search and Maps. Simultaneously, the same credibility fueled social proof across the brand’s social platforms, where patrons frequently leave both positive and negative feedback.

                Search visibility was boosted by review quality, and social visibility was enhanced by customer advocacy. Together, they created a unified trust signal that influenced consumer behavior across every touchpoint. That is the power of marrying search and social: a blissful union that drives favorable outcomes like visibility that converts.

                Future-Facing: The Algorithmic Convergence Of Search And Social

                We are now in an era where search and social converge effortlessly. TikTok is an influential discovery engine, while Google’s prominent AI Overviews pull in content that resembles social threads. Social content and discussion forums are now indexed prominently in the SERPs.

                SEO should maintain semantic and emotional consistency at every step of the digital buyer’s journey, across all channels.

                Marketing executives should ask themselves the following:

                • How do we establish a unified signal map? How does your audience move from discovery to intent? Which social triggers lead to which search behaviors?
                • How can we centralize our listening structure? Does our social listening platform allow us to integrate with our search analytics technology?
                • How can we create rapid-response workflows to capitalize on trending topics before our competitors do?
                • Do we need to reevaluate our reporting cadence? How do we move from channel-based reports to intent-based dashboards that track trending topics across platforms?
                • Are we relying too heavily on AI? Do we use human judgment to craft narratives that align with our brand’s voice and ethics?

                Search and social are no longer divergent roles that never speak to one another. They’re an integrated effort playing for the same team, each able to amplify the other into something bigger and better than either could achieve solo.

                Featured Image: SvetaZi/Shutterstock

                Core Web Vitals Champ: Open Source Versus Proprietary Platforms via @sejournal, @martinibuster

                The Core Web Vitals Technology Report by the open source HTTPArchive community ranks content management systems by how well they perform on Google’s Core Web Vitals (CWV). The November 2025 data shows a significant gap between platforms: 84.87% of sites on the highest-ranked CMS passed CWV, versus 46.28% on the lowest-ranked.

                What’s of interest this month is that the top three Core Web Vitals champs are all closed source proprietary platforms while the open source systems were at the bottom of the pack.

                Importance Of Core Web Vitals

                Core Web Vitals (CWV) are metrics created by Google to measure how fast, stable, and responsive a website feels to users. Websites that load quickly and respond smoothly keep visitors engaged and tend to perform better in terms of sales, reads, and ad impressions, while sites that fall short frustrate users, increase bounce rates, and underperform on business goals. CWV scores reflect the quality of the user experience and how a site performs under real-world conditions.
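A site “passes” Core Web Vitals when all three metrics are in the “good” range at the 75th percentile of real-user visits. The sketch below encodes Google’s published “good” cutoffs (LCP ≤ 2,500 ms, INP ≤ 200 ms, CLS ≤ 0.1); the sample input values are made up for illustration.

```python
# Google's published "good" thresholds, assessed at the 75th percentile.
THRESHOLDS = {"lcp_ms": 2500, "inp_ms": 200, "cls": 0.1}

def passes_cwv(p75_metrics):
    """Return True if every 75th-percentile metric meets its 'good' threshold."""
    return all(p75_metrics[name] <= limit for name, limit in THRESHOLDS.items())

# Hypothetical 75th-percentile field data for two sites.
fast_site = {"lcp_ms": 1800, "inp_ms": 150, "cls": 0.05}
slow_site = {"lcp_ms": 3200, "inp_ms": 150, "cls": 0.05}

print(passes_cwv(fast_site))  # True
print(passes_cwv(slow_site))  # False: LCP exceeds 2500 ms
```

A single failing metric is enough to fail the assessment, which is why ecosystems with heavy, unoptimized themes tend to score poorly even when two of the three metrics look fine.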

                How the Data Is Collected

                The CWV Technology Report combines two public datasets.

                • The Chrome UX Report (CrUX) uses data from Chrome users who opt in to share performance statistics as they browse. This reflects how real users experience websites.
                • The HTTP Archive runs lab-based tests that analyze how sites are built and whether they follow performance best practices.

                Together, these datasets provide a snapshot of how each content management system performs on Core Web Vitals.

                Ranking By November 2025 CWV Score

                Duda Is The Number One Ranked Core Web Vitals Champ

                Duda ranked first in November 2025, with 84.87% of sites built on the platform delivering a passing Core Web Vitals score. It was the only platform in this comparison where more than four out of five sites achieved a good CWV score. Duda has consistently ranked #1 for Core Web Vitals for several years now.

                Wix Ranked #2

                Wix ranked second, with 74.86% of sites passing CWV. While it trailed Duda by ten percentage points, Wix was just about four percentage points ahead of the third place CMS in this comparison.

                Squarespace Ranked #3

                Squarespace ranked third, at 70.39%. Its CWV pass rate placed it closer to Wix than to Drupal, maintaining a clear position in the top three ranked publishing platforms.

                Drupal Ranked #4

                Drupal ranked fourth, with 63.27% of sites passing CWV. That score put Drupal in the middle of the comparison, below the three proprietary site builders. This is a curious situation because the bottom three CMSes in this comparison are all open source platforms.

                Joomla Ranked #5

                Joomla ranked fifth, at 56.92%. While more than half of Joomla sites passed CWV, the platform remained well behind the top performers.

                WordPress Ranked Last At #6

                WordPress ranked last, with 46.28% of sites passing Core Web Vitals. Fewer than half of WordPress sites met the CWV thresholds in this snapshot. What’s notable about WordPress’s poor ranking is that it lags behind the fifth place Joomla by about ten percentage points. So not only is WordPress ranked last in this comparison, it’s decisively last.

                Why the Numbers Matter

                Core Web Vitals scores translate into measurable differences in how users experience websites. Platforms at the top of the ranking deliver faster and more stable experiences across a larger share of sites, while platforms at the bottom expose a greater number of users to slower and less responsive pages. The gap between Duda and WordPress in the November 2025 comparison was 38.59 percentage points, nearly 40 points.

                While an argument can be made that the WordPress ecosystem of plugins and themes is to blame for the low CWV scores, the fact remains that WordPress is dead last in this comparison. Perhaps WordPress needs to become more proactive about how themes and plugins perform, such as setting standards they must meet to earn a performance certification. That might push plugin and theme makers to prioritize performance.

                Do Content Management Systems Matter For Ranking?

                I have mentioned this before and will repeat it this month. There have been discussions and debates about whether the choice of content management system affects search rankings. Some argue that plugins and flexibility make WordPress easier to rank in Google. But the fact is that private platforms like Duda, Wix, and Squarespace have all focused on providing competitive SEO functionalities that automate a wide range of technical SEO tasks.

                Some people insist that Core Web Vitals make a significant contribution to their rankings and I believe them. But in general, the fact is that CWV performance is a minor ranking factor.

                Nevertheless, performance still matters for outcomes that are immediate and measurable, such as user experience and conversions, which means that the November 2025 HTTPArchive Technology Report should not be ignored.

                The HTTPArchive report is still available, but it will be going away and replaced very soon. I’ve tried the new report and, unless I missed something, it lacks a way to constrain the results by date.

                Featured Image by Shutterstock/Red Fox studio

                Google Says Ranking Systems Reward Content Made For Humans via @sejournal, @martinibuster

                Google’s Danny Sullivan discussed SEO and AI, observing that Google’s ranking systems are tuned for one thing, regardless of whether it’s classic search or AI search. What he talked about was optimizing for people, something I suspect the search marketing industry will increasingly be talking about.

                Nothing New You Need To Be Doing For AI Search

                The first thing Danny Sullivan discussed was that, despite the new search experiences powered by AI, there isn’t anything new that publishers need to be doing.

                John Mueller asked:

                “So everything kind of around AI, or is this really a new thing? It feels like these fads come and go. Is AI in fad? How do you think?”

                Danny Sullivan responded:

                “Oh gosh, my favorite thing is that we should be calling it LMNOPEO because there’s just so many acronyms for it. It’s GEO for generative engine optimization or AEO for answer engine optimization and AIEO. I don’t know. There’s so many different names for it.

                I used to write about SEO and search. I did that for like 20 years. And part of me is just so relieved. I don’t have to do that aspect of it anymore to try to keep up with everything that people are wondering about.

                And on the other hand, you still have to kind of keep up on it because we still try to explain to people what’s going on. And I think the good news is like, There’s not a lot you actually really need to be worrying about.

                It’s understandable. I think people keep having these questions, right? I mean, you see search formats changing, you see all sorts of things happening and you wonder, well, is there something new I should be doing? Totally get that.

                And remember, we, John and I and others, we all came together because we had this blog post we did in May, which we’ll drop a link to or we’ll point you to somehow to it, but it was… we were getting asked again and again, well, what should we be doing? What should we be thinking about?

                And we all put our heads together and we talked with the engineers and everything else. So we came up with nothing really that different.”

                Google’s Systems Are Tuned To Rank Human Optimized Content

                Danny Sullivan next turned to discussing what Google’s systems are designed to rank, which is content that satisfies humans. Robbie Stein, currently Vice President of Product for Google Search, recently discussed the signals Google uses to identify helpful content, discussing how human feedback contributes to helping ranking systems understand what helpful content looks like.

                While Danny didn’t get into exact details about the helpfulness signals the way Stein did, Danny’s comments confirmed the underlying point that Robbie Stein was making about how their systems are tuned to identify content that satisfies humans.

                Danny continued explaining what SEOs and creators should know about Google’s ranking systems. He began by acknowledging that it’s reasonable that people see a different search experience and conclude that they must be doing something different.

                He explained:

                “…I think people really see stuff and they think they want to be doing something different. …It is the natural reaction you have, but we talk about sort of this North Star or the point that you should be heading to.”

                Next he explained how all of Google’s ranking systems are engineered to rank content that was made for humans and specifically calls out content that is created for search engines as examples of what not to do.

                Danny continued his answer:

                “And when it comes to all of our ranking systems, it’s about how are we trying to reward content that we think is great for people, that it was written for human beings in mind, not written for search algorithms, not written for LLMs, not written for LMNO, PEO, whatever you want to call it.

                It’s that everything we do and all the things that we tailor and all the things that we try to improve, it’s all about how do we reward content that human beings find satisfying and say, that was what I was looking for, that’s what I needed. So if all of our systems are lining up with that, it’s that thing about you’re going to be ahead of it if you’re already doing that.

                To whereas the more you’re trying to… Optimize or GEO or whatever you think it is for a specific kind of system, the more you’re potentially going to get away from the main goal, especially if those systems improve and get better, then you’re kind of having to shift and play a lot of catch up.

                So, you know, we’re going to talk about some of that stuff here with the big caveat, we’re only talking about Google, right? That’s who we work for. So we don’t say what, anybody else’s AI search, chat search, whatever you want to kind of deal with and kind of go with it from there. But we’ll talk about how we look at things and how it works.”

                What Danny is clearly saying is that Google is tuned to rank content that’s written for humans and that optimizing for specific LLMs sets up a situation where it could backfire.

                Why Optimizing For LLMs Is Misguided

                Although Danny didn’t mention it, this is the right moment to point out that OpenAI, Perplexity, and Claude together account for less than 1% of referral traffic. So it’s clearly a mistake to optimize content for LLMs at the risk of losing significant traffic from search engines.

                Content that is genuinely satisfying to people remains aligned with what Google’s systems are built to reward.

                Why SEOs Don’t Believe Google

                Google’s insistence that their algorithms are tuned toward user satisfaction is not new. They have been saying it for over two decades, and over the years it has been a given that Google was overstating their technology. That is no longer the case.

                Arguably, since at least 2018’s Medic broad core update, Google has been making genuine strides toward actually delivering search results that are influenced by user behavior signals that guide Google’s machines toward understanding what kind of content people like, plus AI and neural networks that are better able to match content to a search query.

                If there is any doubt about this, check out the interview with Robbie Stein, where he explains exactly how human feedback, in aggregate, influences the search results.

                Is Human Optimized Content The New SEO?

                So now we are at a point where links are no longer the top ranking criterion. Google’s systems can understand queries and content and match one to the other. User behavior data, which has been part of Google’s algorithms since at least 2004, plays a strong role in helping Google understand what kinds of content satisfy users.

                It may be well past time for SEOs and creators to let go of the old SEO playbooks and start focusing on optimizing their websites for humans.

                Featured Image by Shutterstock/Bas Nastassia

                Who Benefits When The Line Between SEO And GEO Is Blurred via @sejournal, @DuaneForrester

                The search industry is entering a transition that many people still treat as a footnote. The systems consumers rely on are changing, and the way information is gathered, summarized, and delivered is changing with them. Yet the public messaging around what businesses should do sounds as familiar as ever. The narrative says the fundamentals are the same. The advice sounds the same. The expectations sound the same. The message is that SEO still covers everything that matters.

                But the behavior of the consumer says otherwise. The way modern systems retrieve and present information says otherwise. And the incentives of the companies that shape those systems explain why the narrative has not kept up with reality.

                This is not a story about conflict. It is not about calling out any company or naming any platform. It is about understanding why continuity messaging persists and why businesses cannot afford to take it at face value. The shift from a click-driven model to an answer-driven model is measurable, visible, and documented. The only question is who benefits when the line between SEO and GEO stays blurry, and who loses when it does.

                Image Credit: Duane Forrester

                The Shift Is Already Visible In The Data

                Let’s start with some data. Certainly not all the data, but some, at least. Bain & Company published research showing that about 80% of consumers who use search now rely on AI-written summaries for at least 40% of their queries. They also found that organic traffic across many categories has fallen by 15-25% because of this shift.

                Pew Research analyzed how people behave when AI summaries appear on the results page. Their findings show that people click traditional links in about 8% of visits when an AI summary is present. When the summary is absent, that number rises to roughly 15%.

                Ahrefs published a study showing that when AI summaries appear, the click-through rate of the top organic result drops by about 34%.

                Seer Interactive measured outcomes across thousands of queries and found a 61% decline in organic click-through on informational queries that surfaced an AI summary. Paid click-through dropped by 68% for the same class of queries.

                BrightEdge expanded the picture. They compared outputs across multiple AI answer engines and found that different systems disagree with each other about brand mentions roughly 62% of the time.

                These sources do not frame the shift as speculation. They show structural change. Consumers click less when AI summaries appear. They rely more on answer layers. They perform fewer traditional searches. And the systems producing those answers do not behave the same way.
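The studies above report absolute click rates; converting them to a relative decline makes the scale of the shift clearer. Using the Pew figures quoted above (roughly 8% clicks with an AI summary versus 15% without):

```python
# Converting the cited click rates into a relative decline.
# Inputs are the approximate Pew figures quoted above.

def relative_decline(without_summary, with_summary):
    """Percentage drop in click rate when an AI summary is present."""
    return (without_summary - with_summary) / without_summary * 100

drop = relative_decline(15.0, 8.0)
print(f"{drop:.1f}% fewer clicks when a summary appears")  # about 46.7%
```

A roughly 47% relative decline from the Pew data sits in the same range as the Ahrefs (34%) and Seer (61%) findings, which is why these independent studies read as corroborating rather than contradictory.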

                Given this, why is the message still that nothing significant has changed and that existing SEO practices still cover the full scope of visibility work?

                Continuity Is Not Accidental. It Is Incentivized

                The answer lies in incentives. Established platforms rely on a steady stream of aligned content that fits their current systems and supports the development of the answer structures they use today. They need predictability in that supply. If businesses abruptly redirected their focus toward optimizing for environments outside the classic ranking model, the flow of content into traditional indexing systems would change. Telling the world that the best path forward is to keep improving content in the same ways they always have offers stability. It reduces confusion. It keeps expectations manageable. And it slows the need for new measurement frameworks that reveal how much the system has shifted away from click-based visibility.

                Agencies and consultants also benefit when the line stays blurry. If GEO is described as nothing more than SEO with a different label, they can market the same playbooks with fewer operational changes. They do not need to retrain teams in retrieval-based behavior. They do not need to produce new deliverables or learn new data models. They can continue selling the same work, packaged for a new era, without changing the underlying skill set. For many firms, the incentives favor consistency rather than reinvention.

                Tool vendors tied to traditional SEO signals benefit from the same continuity. If GEO is framed as the same as SEO, the pressure to rebuild their systems around vector retrieval, chunk inspection, citation tracking, and cross-engine output analysis decreases. Re-architecting tools to support answer era optimization is expensive. Downplaying the distinction buys time.

                None of these incentives are wrong. They are normal. Every industry reacts this way when a shift threatens the established workflows, revenue models, and expectations. But these incentives explain why the message of continuity persists even when the data shows otherwise.

                This Is Where SEO And GEO Genuinely Overlap

                So, where does SEO end and GEO begin? The overlap is real. If your content is thin, outdated, or buried behind inaccessible structures, you will struggle everywhere. Technical fundamentals still matter. Clear writing still matters. Structured data still matters. Authority still matters. These are non-negotiable for both SEO and GEO.

                But the differences are too large to ignore. SEO focuses on pages and rankings. GEO focuses on fragments and retrieval. SEO aims to earn the click. GEO aims to earn presence inside the answer the consumer sees. SEO tracks impressions and click-through. GEO tracks citations, mentions, and answer share. SEO studies snippets. GEO studies how different systems pull, blend, and frame information. SEO treats the page as the unit of value. GEO treats the block as the unit of value.

                This Is Where The Work Begins To Diverge

                Modern answer engines retrieve specific content blocks, synthesize them, and present the result in compressed form. They may cite a source. They may not. They may mention a brand directly. They may not. They may surface a recommendation from a third party that never appears in traditional analytics. They may pull from locations you do not control.

                In this environment, the mechanics of visibility change. You now need to design content in discrete, self-contained blocks that can be safely lifted and reused. You need to make entity relationships, attributes, and actions machine-readable. You need to track how AI systems present your information across different platforms. You need to understand that retrieval behavior varies across systems and that answers diverge even when content remains the same. You also need metrics that describe visibility on surfaces where no click occurs.
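The “self-contained blocks” idea above can be made concrete with a toy chunker. Real retrieval pipelines use far more sophisticated segmentation; this sketch only illustrates the unit-of-value shift from page to block, keying each fragment to its nearest heading so it carries its own context when lifted.

```python
# A minimal sketch of splitting a page into self-contained, liftable blocks,
# keyed by their nearest '## ' heading. Illustrative only; real chunkers
# handle nesting, overlap, and token budgets.

def chunk_by_heading(markdown_text):
    """Group lines under their nearest '## ' heading."""
    blocks, heading, lines = [], "Intro", []
    for line in markdown_text.splitlines():
        if line.startswith("## "):
            if lines:
                blocks.append({"heading": heading, "text": "\n".join(lines).strip()})
            heading, lines = line[3:].strip(), []
        else:
            lines.append(line)
    if lines:
        blocks.append({"heading": heading, "text": "\n".join(lines).strip()})
    return blocks

page = """## What Is GEO
GEO optimizes content for answer engines.

## How It Differs From SEO
It targets retrieval of blocks, not ranking of pages.
"""

for block in chunk_by_heading(page):
    print(block["heading"], "->", block["text"])
```

The practical implication: if a paragraph only makes sense with three paragraphs of prior context, an answer engine that retrieves it in isolation will misrepresent you. Writing blocks that stand alone is the editorial counterpart of this chunking.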

                Consumer Behavior Explains The Rest

                Consumer behavior reinforces this need. Deloitte found that adoption of generative AI more than doubled year over year, and that 38% of consumers now use it for real tasks rather than experimentation.

                Recent 2025 consumer data shows that many people already rely on generative AI tools to find and understand information, not just to generate content or complete tasks. A nationally representative survey of more than 5,000 U.S. adults, conducted in April 2025 and published in June 2025, found that consumers are using AI tools for everyday information needs, including answering questions, explaining topics, and summarizing complex material.

                When people ask questions directly and trust the answer they receive, the role of the page shifts. The business still needs pages, but the consumer may never see them. The information is what matters. The structure is what matters. The clarity is what matters. The authority signal is what matters. The ability of the system to retrieve and use your content is what matters.

                Traffic Is No Longer A Reliable Proxy For Influence

                And humbly, I think we need to move past conversations like “this platform only sends one percent of my traffic, so it’s hard to justify the investment.” That framing assumes traffic is still the primary signal of influence. In an answer-driven environment, that assumption no longer holds. Consumers increasingly get what they need without ever visiting a site, even when that site’s information directly shaped the answer they trusted. A system may never deliver more than single-digit referral traffic, not because it lacks impact, but because consumer behavior has changed. The most meaningful new signals to watch are adoption, frequency of use, and the types of tasks people rely on each system for. Those metrics tell you where influence is forming, even when clicks never happen.

                This is why businesses cannot treat SEO and GEO as interchangeable. The fundamentals overlap, but the goals do not. SEO helps you win in ranking environments. GEO helps you stay visible in answer environments. SEO prepares your site for discovery. GEO prepares your information for use. SEO earns the visit. GEO earns the recommendation.

                When the line between SEO and GEO stays blurry, the incumbents benefit from stability. Agencies benefit from simplicity. Vendors benefit from delayed change. But the businesses relying on visibility lose clarity. They chase rankings that look strong while losing share in the answer layers on which their customers increasingly rely. They measure success by clicks even as those clicks decline. They optimize pages while the systems shaping decisions optimize information blocks.

                The shift does not replace SEO. It adds to it. It builds on it. It requires everything SEO already demands, plus new work that reflects how information is retrieved and used in modern systems. Leaders need clear definitions so they can plan effectively. The teams doing the work need clear expectations so they can build the right skills. And executives need accurate metrics so they can make informed decisions, with new metrics beyond the scope of the established SEO-centric data points we operate with today.

                Clarity, Not Comfort, Is The Real Advantage

                Clarity is the unlock. Not alarm. Not hype. Not denial. Just clarity. The industry is moving toward answer-driven discovery. The companies that understand this will position themselves to win across environments, not just inside a ranking model that served the last decade well. Visibility now lives in multiple layers. The business that adapts to those layers will own its share of attention. The ones that rely on continuity messaging will fall behind without realizing it until the results flatten.

The sands are shifting. The work is changing. And the businesses willing to see the difference between SEO and GEO will be the ones ready for the environments consumers increasingly trust. At some point in the near future, I expect platforms to start sharing AI-related data with businesses. We already see this beginning with third-party tool providers, many of whom are leaning into the change. Now we need the platforms themselves to share their first-party data with us. But until crucial questions around revenue generation, traffic delivery, and decision-making metrics are answered, we’ll be in flux.

                This post was originally published on Duane Forrester Decodes.


                Featured Image: Polinmrrr/Shutterstock

                The Future Of Content In An AI World: Provenance & Trust In Information

                When Emily Epstein shared her perspective on LinkedIn about how “people didn’t stop reading books when encyclopedias came out,” it sparked a conversation about the future of primary sources in an AI-driven world.

                In this episode, Katie Morton, Editor-in-Chief of Search Engine Journal, and Emily Anne Epstein, Director of Content at Sigma, dig into her post and unpack what AI really means for publishers, content creators, and marketers now that AI tools present shortcuts to knowledge.

                Their discussion highlights the importance of provenance, the layers involved in online knowledge acquisition, and the need for more transparent editorial standards.

                If you’re a content creator, this episode can help you gain insight into how to provide value as the competition for attention becomes a competition for trust.

                Watch the video or read the full transcript below:

                Katie Morton: Hello, everybody. I’m Katie Morton, Editor-in-Chief of Search Engine Journal, and today I’m sitting down with Emily Anne Epstein, Director of Content at Sigma. Welcome, Emily.

Emily Anne Epstein: Thanks so much. I’m so excited to be here.

                Katie: Me too. Thanks for chatting with me. So Emily wrote a really excellent post on LinkedIn that caught my attention. Emily, for our audience, would you mind summarizing that post for us?

                Emily: So this should feel both shocking and non-shocking to everybody. But the idea is, people didn’t stop reading books when encyclopedias came out. And this is a response to the hysteria that’s going on with the way AI tools are functioning as summarizing devices for complicated and complex situations. And so the idea is, just because there’s a shortcut now to acquiring knowledge, it doesn’t mean we’re getting rid of the need for primary sources and original sources.

                These two different types of knowledge acquisition exist together, and they layer on top of one another. You may start your book report with an encyclopedia or ChatGPT search, but what you find there doesn’t matter if you can’t back it up. You can’t just say in a book report, “I heard it in Encarta.” Where did the information come from? I think about the way this is going to transform search: There’s simply going to be layers now.

                Maybe start your search with an AI tool, but you’ll need to finish somewhere else that organizes primary sources, provides deeper analysis, and even shows contradictions that go into creating knowledge.

                Because a lot of what these synthesized summaries do is present a calm, “impartial” view of reality. But we all know that’s not true. All knowledge is biased in some way because it cannot be “all-containing.”

                The Importance Of Provenance

                Katie: I want to talk about something you mentioned in your LinkedIn post: provenance. What needs to happen, whether culturally, editorially, or socially, for “show me the source material” to become standard in AI-assisted search?

                With Wikipedia or encyclopedias, ideally, people should still cite the original source, go deeper into the analysis, and be able to say, “Here’s where this information came from.” How do we get there so people aren’t just skimming surface-level summaries and taking them as gospel?

Emily: First, people need to use these tools, and there needs to be a reckoning with how reliable they are. Thinking about provenance means thinking about knowledge acquisition as triangulation. When I was a journalist, you had to balance hearsay, direct quotes, press releases, and social media.

                You create your story from a variety of sources, so that way, you get something that’s in the middle and can explain multiple truths and realities. That comes from understanding that truth has never been linear, and reality is fracturing.

                What AI does, even more advanced than that, is deliver personalized responses. People are prompting their models differently, so we’re all working from different sets of information and getting different answers. Once reality is fractured to that degree, knowing where something comes from – the provenance – becomes essential for context.

                And triangulation won’t just be important for journalists; it’s going to be important for everyone because people make decisions based on the information that they receive.

                If you get bad inputs, you’ll get bad outputs, make bad decisions, and that affects everything from your work to your housing. People will need to triangulate a better version of reality that is more accurate than what they’re getting from the first person or the first tool they asked.

Creators: From Competing For Attention To Competing For Trust

                Katie: So if AI becomes the top layer in how people access information – designed to hold attention within its own ecosystem – what does that mean for content creators and publishers? It feels like they’re creating a commodity that AI then repackages as its own.

                How do you see that playing out for creators in terms of revenue and visibility?

Emily: Instead of competing for attention, creators and publishers will compete for trust. That means making editorial standards more transparent. They’re going to have to show the work that they’re doing. Because with most AI tools, you don’t see how they work; it’s a bit of a black box.

But if creators can serve as a “blockchain” (a verifiable ledger of information sources), showing their sources and methods, that will be their value.

                Think about photography. When it first came out, it was considered a science. People thought photos were pure fact. Then, darkroom techniques like dodging and burning or combining multiple exposures showed that photos could lie.

                And when photography became an art form, people realized that the photographer’s role was to provide a filter. That’s where we are with AI. There are filters on every piece of information that we receive.

                And those organizations that make their filter transparent are going to be more successful, and people will return to them because again, they’re getting better information. They know where it’s coming from, so they can make better decisions and live better lives.

                AI Hallucinations & Deepfakes

Emily: It was a shocking moment in the history of photography that people could lie with photographs. And that’s sort of where we are right now. Everybody is using AI, and we know there are hallucinations, but we have to understand that we cannot trust this tool, generally speaking, unless it shows its work.

Katie: And the risks are real. We’re already seeing AI voiceovers and video deepfakes mimicking creators, often without their consent.

                Inspiring People To Go Deeper

                Katie: In your post, you ended with “people still doing the work of deciding what’s enough.” In an attention economy of speed and convenience, how do we help people go deeper?

                Emily: The idea that people don’t want to go deeper flies in the face of Wikipedia holes. People start with summarized information, but then click a citation, keep going further, watch another show, keep digging.

People want more of what they want. If you give them a breadcrumb of fascinating information, they’ll want more of that. Knowledge acquisition has an emotional side. It gives you dopamine hits: “I found that, that’s for me.”

                And as content marketers, we have to provide that value for people where they say, ‘Wow, I am smarter because of this information. I like this brand because this brand has invested in my intelligence and my betterment.’

                And for content creators, that needs to be the gold star.

                Wrapping Up

                Katie: Right on. For those who want to follow your work, where can they find you?

Emily: I’m dialoguing and writing my thoughts on AI out loud and in public on LinkedIn. Come join me, and let’s think out loud together.

                Katie: Sounds great. And I’m always at searchenginejournal.com. Thank you so much, Emily, for taking the time today.

                Emily: Thank you!

                Featured Image: Paulo Bobita/Search Engine Journal

Google’s Robby Stein Names 5 SEO Factors For AI Mode

Robby Stein, Vice President of Product for Google Search, recently sat down for an interview in which he discussed how Google’s AI Mode handles quality, how Google evaluates helpfulness, and how it draws on its experience with search, including metrics like clicks, to identify helpful content. He also outlined five SEO-related quality factors used for AI Mode.

                How Google Controls Hallucinations

Stein answered a question about hallucinations, instances where an AI presents false information as fact. He said that the quality systems within AI Mode are based on everything Google has learned about quality from 25 years of experience with classic search. The systems that determine which links to show and whether content is good are encoded within the model and draw on Google’s experience with classic search.

                The interviewer asked:

                “These models are non-deterministic and they hallucinate occasionally… how do you protect against that? How do you make sure the core experience of searching on Google remains consistent and high quality?”

                Robby Stein answered:

                “Yeah, I mean, the good news is this is not new. While AI and generative AI in this way is frontier, thinking about quality systems for information is something that’s been happening for 20, 25 years.

                And so all of these AI systems are built on top of those. There’s an incredibly rigorous approach to understanding, for a given question, is this good information? Are these the right links? Are these the right things that a user would value?

                What’s all the signals and information that are available to know what the best things are to show someone. That’s all encoded in the model and how the model’s reasoning and using Google search as a tool to find you information.

                So it’s building on that history. It’s not starting from scratch because it’s able to say, oh, okay, Robbie wants to go on this trip and is looking up cool restaurants in some neighborhood.

                What are the things that people who are doing that have been relying on on Google for all these years? We kind of know what those resources are we can show you right there. And so I think that helps a lot.

                And then obviously the models, now that you release the constraint on layout, obviously the models over time have also become just better at instruction following as well. And so you can actually just define, hey, here are my primitives, here are my design guidelines. Don’t do this, do this.

                And of course it makes mistakes at times, but I think just the quality of the model has gotten so strong that those are much less likely to happen now.”

Stein’s explanation makes clear that AI Mode is encoded with everything learned from Google’s classic search systems rather than being rebuilt from scratch or breaking from them. The risk of hallucinations is managed by grounding AI answers in the same relevance, trust, and usefulness signals that have underpinned classic search for decades. Those signals continue to determine which sources are considered reliable and which information users have historically found valuable. Accuracy in AI search follows from that continuity, with model reasoning guided by longstanding search quality signals rather than operating independently of them.

                How Google Evaluates Helpfulness In AI Mode

                The next question is about the quality signals that Google uses within AI Mode. Robby Stein’s answer explains that the way AI Mode determines quality is very much the same as with classic search.

                The interviewer asked:

                “And Robbie, as search is evolving, as the results are changing and really, again, becoming dynamic, what signals are you looking at to know that the user is not only getting what they want, but that is the best experience possible for their search?”

                Stein answered:

                “Yeah, there’s a whole battery of things. I mean, we look at, like we really study helpfulness and if people find information helpful.

                And you do that through evaluating the content kind of offline with real people. You do that online by looking at the actual responses themselves.

                And are people giving us thumbs up and thumbs downs?

                Are they appreciating the information that’s coming?

                And then you kind of like, you know, are they using it more? Are they coming back? Are they voting with their feet because it’s valuable to you.

                And so I think you kind of triangulate, any one of those things can lead you astray.

                There’s lots of ways that, interestingly, in many products, if the product’s not working, you may also cause you to use it more.

                In search, it’s an interesting thing.

                We have a very specific metric that manages people trying to use it again and again for the same thing.

                We know that’s a bad thing because it means that they can’t find it.

                You got to be really careful.

                I think that’s how we’re building on what we’ve learned in search, that we really feel good that the things that we’re shipping are being found useful by people.”

                Stein’s answer shows that AI Mode evaluates success using the same core signals used for search quality, even as the interface becomes more dynamic. Usefulness is not inferred from a single engagement signal but from a combination of human evaluation, explicit feedback, and behavioral patterns over time.

Importantly, Stein notes that heavy usage, presumably within a single session, is not treated as success on its own, since repeated attempts to answer the same query indicate failure rather than satisfaction. The takeaway is that AI Mode’s success is judged by whether users are satisfied, using quality signals designed to detect friction and confusion as much as positive engagement. This carries continuity over from classic search rather than redefining what usefulness means.

                Five Quality Signals For AI Search

Lastly, Stein answered a question about how content ranks in AI search and whether SEO best practices still help. His answer includes five factors used to determine whether a website meets Google’s quality and helpfulness standards.

                Stein answered:

                “The core mechanic is the model takes your question and reasons about it, tries to understand what you’re trying to get out of this.

                It then generates a fan-out of potentially dozens of queries that are being Googled under the hood. That’s approximating what information people have found helpful for those questions.

                There’s a very strong association to the quality work we’ve done over 25 years.

                Is this piece of content about this topic?

                Has someone found it helpful for the given question?

                That allows us to surface a broader diversity of content than traditional Search, because it’s doing research for you under the hood.

                The short of it is the same things apply.

                1. Is your content directly answering the user’s question?
                2. Is it high quality?
                3. Does it load quickly?
                4. Is it original?
                5. Does it cite sources?

                If people click on it, value it, and come back to it, that content will rank for a given question and it will rank in the AI world as well.”
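The “fan-out” mechanic Stein describes, expanding one question into many sub-queries and surfacing content that performs well across them, can be sketched in a few lines. This is a purely illustrative toy, not Google’s implementation: the sub-query templates, the toy index, and the count-based scoring are all assumptions made up for the example.

```python
# Illustrative sketch only: NOT Google's implementation. It mimics the
# "query fan-out" idea: expand one question into several sub-queries,
# retrieve candidates for each, and rank documents that surface across
# multiple sub-queries higher.

def fan_out(question: str) -> list[str]:
    """Expand a question into hypothetical sub-queries.
    (A real system would use a language model for this step.)"""
    return [
        question,
        f"best {question}",
        f"{question} reviews",
    ]

def retrieve(query: str, index: dict[str, list[str]]) -> list[str]:
    """Toy retrieval: look up documents pre-associated with a query."""
    return index.get(query, [])

def answer_candidates(question: str, index: dict[str, list[str]]) -> list[str]:
    """Merge results across sub-queries; documents that appear for
    more sub-queries score higher."""
    scores: dict[str, int] = {}
    for q in fan_out(question):
        for doc in retrieve(q, index):
            scores[doc] = scores.get(doc, 0) + 1
    return sorted(scores, key=scores.get, reverse=True)

# Toy index mapping queries to documents.
index = {
    "tacos in austin": ["doc_a", "doc_b"],
    "best tacos in austin": ["doc_a", "doc_c"],
    "tacos in austin reviews": ["doc_a"],
}

print(answer_candidates("tacos in austin", index))
```

In this toy, `doc_a` wins because it is retrieved for all three sub-queries, which loosely mirrors Stein’s point that content people have found helpful across related questions gets surfaced, and why directly answering the user’s question matters in an AI-retrieval environment.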

Watch the interview starting at about the one-hour, twenty-three-minute mark: