AI Search Changes Everything – Is Your Organization Built To Compete? via @sejournal, @billhunt

Search has changed. Have you?

Search is no longer about keywords and rankings. It’s about relevance, synthesis, and structured understanding.

In the AI-powered era of Google AI Overviews, ChatGPT-style assistants, and concept-level rankings, traditional SEO tactics fall short.

Content alone won’t carry you. If your organization isn’t structurally and strategically aligned to compete in this new paradigm, you’re invisible even if you’re technically “ranking.”

This article builds on the foundation laid in my earlier article, "From Building Inspector To Commissioning Authority," where I argued that SEO must shift from reactive inspection to proactive orchestration.

It also builds upon my exploration of the real forces reshaping search, including the rise of Delphic Costs, where brands are extracted from the customer journey without attribution, and the organizational imperative to treat visibility as everyone’s responsibility, not just a marketing key performance indicator (KPI).

And increasingly, it's not just about your monetization. It's about the platform's.

The Three Shifts Reshaping Search

1. Google AI Overviews: The Answer Layer Supersedes The SERP

Google is bypassing traditional listings with AI-generated answers. These overviews synthesize facts, concepts, and summaries across multiple sources.

Your content may power the answer, but without attribution, brand visibility, or clicks. In this model, being the source is no longer enough; being the credited authority is the new battle.

2. Generative Assistants: New Gatekeepers To Discovery

Tools like ChatGPT, Perplexity, and Gemini collapse the search journey into a single query/answer exchange. They prioritize clarity, conceptual alignment, and structured authority.

They don’t care about the quantity of backlinks; they care about structured understanding. Organizations relying on domain authority or legacy SEO tactics are being leapfrogged by competitors who embrace AI-readable content.

3. Concept-Based Ranking: From Keywords To Entities And Context

Ranking is no longer determined by exact-match phrases. It’s determined by how well your content reflects and reinforces the concepts, entities, and context behind a query.

AI systems think in knowledge graphs, not spreadsheets. They interpret meaning through structured data, relationships between entities, and contextual signals.

These three shifts mean that success now depends on how well your organization can make its expertise machine-readable and contextually integrated into AI ecosystems.

A New Era Of Monetization And Data Harvesting

Search platforms have evolved from organizing information to owning outcomes. Their mission is no longer to guide users to your site; it’s to keep users inside their ecosystem.

The more they can answer in place, the more behavioral data they collect, and the more control they retain over monetization.

Today, your content competes not just with other brands but with the platforms themselves. They’re generating “synthetic content” derived from your data – packaged, summarized, and monetized within their interfaces.

As Dotdash Meredith CEO Neil Vogel put it: “We were in the business of arbitrage. We’d buy traffic for a dollar, monetize it for two. That game is over. We’re now in the business of high-quality content that platforms want to reward.”

Behavioral consequence: If your content can’t be reused, monetized, or trained against, it’s less likely to be shown.

Strategic move: Make your content AI-friendly, API-ready, and citation-worthy. Retain ownership of your core value. Structured licensing, schema, and source attribution matter more than ever.

This isn’t just about visibility. It’s about defensibility.

The Strategic Risks

Enterprises that treat search visibility as a content problem – not a structural one – are walking blind into four key risks:

  • Disintermediation: You lose traffic, attribution, and control when AI systems summarize your insights without directing users to you. In an AI-mediated search world, your value can be extracted while your brand is excluded.
  • Market Dilution: Nimbler competitors who better align with AI content requirements will surface more often, even if they have less experience or credibility. This creates a reverse trust dynamic: newcomers gain exposure by leveraging the machine’s strengths, while legacy players lose visibility.
  • Performance Blind Spots: Traditional KPIs no longer capture the real picture. Traffic may appear stable while influence and presence erode behind the scenes. Executive dashboards often miss this erosion because they’re still tuned to clicks, not concept penetration or AI inclusion.
  • Delphic Costs: This, as defined by Andrei Broder and Preston McAfee, refers to the expenses incurred when AI systems extract your expertise without attribution or downstream benefits, resulting in brand invisibility despite active contributions. Being referenced but not represented becomes a strategic liability.

Are You Built To Compete?

Here’s a five-pillar diagnostic framework to assess your organization’s readiness for AI search:

1. Content Structure

  • Do you use schema markup to define your content’s meaning?
  • Are headings, tables, lists, and semantic formats prioritized?
  • Is your content chunked in ways AI systems can easily digest?
  • Are your most authoritative explanations embedded in the page as clear, concise, answer-ready writing? (A sketch follows this list.)
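
To make the first and last checks concrete, here is a minimal sketch of an answer-ready, schema-annotated content chunk. The heading, copy, author, and URL are placeholder values, not a prescription:

    <article>
      <h2>What is relevance engineering?</h2>
      <!-- Lead with a direct, self-contained answer an AI system can lift cleanly -->
      <p>Relevance engineering is the practice of structuring content around the
      concepts and entities behind a query, not just its keywords.</p>
      <ul>
        <li>Map each page to one primary entity and its supporting concepts.</li>
        <li>Keep each answer chunk short, factual, and self-contained.</li>
      </ul>
    </article>
    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "What Is Relevance Engineering?",
      "author": { "@type": "Person", "name": "Jane Doe" },
      "mainEntityOfPage": "https://www.example.com/relevance-engineering"
    }
    </script>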

2. Relevance Engineering

  • Do you map queries to concepts and entities?
  • Is your content designed for entity resolution, not just keyword targeting? (See the sketch after this list.)
  • Are you actively managing topic clusters and knowledge structures?
  • Have you audited your internal linking and content silos to support knowledge graph connectivity?
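
For the entity-resolution question above, one hedged sketch is an Organization node whose sameAs links tie your brand to well-known entity records. Every name and URL here is a placeholder:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Organization",
      "name": "Example Corp",
      "url": "https://www.example.com",
      "sameAs": [
        "https://en.wikipedia.org/wiki/Example_Corp",
        "https://www.wikidata.org/wiki/Q0000000",
        "https://www.linkedin.com/company/example-corp"
      ]
    }
    </script>

The sameAs array is what lets a knowledge-graph-driven system resolve "Example Corp" the string to Example Corp the entity.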

3. Organizational Design (Shared Accountability)

  • Who owns “findability” in your organization?
  • Are SEO, content, product, and dev teams aligned around structured visibility?
  • Is there a commissioning authority that ensures strategy alignment from the start?
  • Do product launches and campaign rollouts include a visibility readiness review?
  • Are digital visibility goals embedded in executive and departmental KPIs?

In one example, a SaaS company I advised implemented monthly “findability sprints,” where product, dev, and content teams worked together to align schema, internal linking, and entity structure.

The result? A 30% improvement in AI-assisted surfacing – without publishing a single new page.

4. AI Feedback Loops

  • Are you tracking where and how your content appears in AI Overviews or assistants?
  • Do you have visibility into lost attribution or uncredited brand mentions?
  • Are you using tools or processes to monitor AI surface presence?
  • Have you incorporated AI visibility into your reporting cadence and strategic reviews?

5. Modern KPIs

  • Do your dashboards still prioritize traffic volume over influence?
  • Are you measuring presence in AI systems as part of performance?
  • Do your teams know what “visibility” actually means in an AI-dominant world?
  • Are your KPIs evolving to include citations, surface presence, and non-click influence metrics?

The Executive Mandate: From Visibility Theater To Strategic Alignment

Organizations must reframe search visibility as digital infrastructure, not a content marketing afterthought.

Just as commissioning authorities ensure a building functions as designed, your digital teams must be empowered to ensure your knowledge is discoverable, credited, and competitively positioned.

AI-readiness isn’t about writing more content. It’s about aligning people, process, and technology to match how AI systems access and deliver value. You can’t fix this with marketing alone. It requires a leadership-driven transformation.

Here’s how to begin:

  1. Reframe SEO as Visibility Engineering: Treat it as a cross-functional discipline involving semantics, structure, and systems design.
  2. Appoint a Findability or Answers Leader: This role connects the dots across content, code, schema, and reporting to ensure you are found and answering the market’s questions.
  3. Modernize Metrics: Track AI visibility, entity alignment, and concept-level performance – not just blue links.
  4. Run an AI Exposure Audit: Understand where you’re showing up, how you’re credited, and most critically, where and why you’re not. Just ask the AI system, and it will tell you exactly why you were not referenced.
  5. Reward Structural Alignment: Incentivize teams not just for publishing volume, but for findability performance. Celebrate contributions to visibility the same way you celebrate brand reach or campaign success. Make visibility a cross-team metric.

Final Thought: You Can’t Win If You’re Not Represented

AI is now the front end of discovery. If you’re not structured to be surfaced, cited, and trusted by machines, you’re losing silently.

You won’t fix this with a few blog posts or backlinks.

You fix it by building an organization designed to compete in the era of machine-mediated relevance.

This is your commissioning moment – not just to inspect the site after it’s built, but to orchestrate the blueprint from the start.

Welcome to the new search. Let’s build for it.

Featured Image: Master1305/Shutterstock

Earn 1,000+ Links & Boost Your SEO Visibility [Webinar] via @sejournal, @lorenbaker

Build the Authority You Need for AI-Driven Visibility

Struggling to get backlinks, even when your content is solid? 

You’re not alone. With Google’s AI Overviews and generative search dominating the results, traditional link-building tactics just don’t cut it anymore.

It’s time to earn the trust that boosts your brand’s visibility across Google, ChatGPT, and AI search engines.

Join Kevin Rowe, Founder & Head of Digital PR Strategy at PureLinq, on August 27, 2025, for an exclusive webinar. Learn the exact strategies Kevin’s team used to earn 1,000+ links and how you can replicate them without needing a massive budget or PR team.

What You’ll Learn:

  • How to identify media trends where your expertise is in demand.
  • The step-by-step process to create studies that can earn links on autopilot.
  • How to craft a story angle journalists will want to share.

Why This Webinar is Essential:

Earned links and citations are now key to staying visible in AI search results. This session will provide you with a proven, actionable playbook for boosting your SEO visibility and building the authority you need to succeed in this new era.

Register today to get the playbook for link-building success. Can’t attend live? Don’t worry, sign up anyway, and we’ll send you the full recording.

Is GEO the Same as SEO?

“Generative engine optimization” refers to tactics for increasing visibility in and traffic from AI answers. “Answer engine optimization” is synonymous with GEO, as are references to large language models, such as “LLM optimization.”

Whatever the name, optimizing for generative AI is different from traditional search engines. The distinction lies in the underlying technology:

  • LLM platforms don’t always perform a search to produce answers. The platforms use training data, which doesn’t typically have sources or URLs. It’s a knowledge base for accessing answers without necessarily knowing the origin.
  • Unlike search engines, LLMs don’t have an index or a cache of URLs. When they search, LLMs use external search engines, likely Google for ChatGPT.
  • After searching, AI crawlers go to the page, read it, and pull answers from it. AI crawlers are much less advanced than those of search engines and, accordingly, cannot render a page as easily as a Google crawler.

GEO-specific tactics include:

  • A brand in AI training data has long-term exposure in answers, but appearing in that data requires an approach beyond SEO. The keys are concise, relevant, problem-solving content on-site, and off-site exposure in reviews, forums, and other reputable mentions.
  • Being indexed by Google is more or less essential for AI answers, to a point. Additional optimization steps include (i) ensuring the site is accessible and crawlable by AI bots (see the robots.txt sketch after this list), (ii) structuring content to enable AI to pull answers easily, and (iii) optimizing for prompts, common needs, and, yes, keywords.
  • Keywords remain critical (and evolving) for GEO and SEO, although the former “fans out” to also answer likely follow-up prompts.
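
On point (i), access for AI bots is governed by robots.txt, just as it is for search engine crawlers. A rough sketch of the kinds of rules involved follows; the user-agent tokens are the publicly documented ones, but verify current names before relying on them:

    # Allow OpenAI's crawler to fetch pages
    User-agent: GPTBot
    Allow: /

    # Opt out of Gemini training and grounding without affecting Search indexing
    User-agent: Google-Extended
    Disallow: /

    # Common Crawl's bot, a frequent source of LLM training data
    User-agent: CCBot
    Allow: /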

SEO

Reliance on Google varies by the genAI platform. ChatGPT, again, taps Google’s index. AI Overviews mostly summarize top-ranking URLs for the initial and fan-out queries. Higher rankings in organic search will likely directly elevate visibility in AI Overviews and Perplexity.

Google Search remains the most powerful discovery and visibility engine. And a brand that ranks high in Google is typically prominent, which drives visibility in AI Answers. As such, organic rankings also drive AI indirectly, through brand signals.

GEO

Thus GEO and SEO overlap. Pages that rank highly in organic search results will almost certainly end up in training data with elevated chances of appearing in AI answers.

Yet for training data, AI platforms continuously crawl sites with their own, limited bots and those of third-party providers, such as Common Crawl.

Hence AI platforms crawl pages via two paths: from links in organic search results and independently with their own (or outsourced) bots.

GEO kicks in when the bots reach a page. The sophistication of AI crawlers is much less than Google’s. GEO requires concise, relevant page content that’s easily accessed, without JavaScript, and succinctly summarizes a need and then answers it directly.

Structured data markup, such as from Schema.org, likely helps, too.

In short, a GEO-ready page has a clear purpose and clear answers, easily crawled.
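
Put together, a GEO-ready page might be skeletonized like this: the need summarized first, the answer given directly after, everything readable as plain HTML with no JavaScript required. The topic and copy are purely illustrative:

    <main>
      <article>
        <h1>How to Descale a Coffee Machine</h1>
        <!-- Summarize the need, then answer it directly -->
        <p>Mineral buildup slows brewing and dulls flavor. Descaling removes it
        in about 30 minutes with a citric-acid solution.</p>
        <ol>
          <li>Empty the machine and fill the tank with the solution.</li>
          <li>Run one brew cycle, then two rinse cycles with fresh water.</li>
        </ol>
      </article>
    </main>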

Google Rolls Out ‘Preferred Sources’ For Top Stories In Search via @sejournal, @MattGSouthern

Google is rolling out a new setting that lets you pick which news outlets you want to see more often in Top Stories.

The feature, called Preferred Sources, is launching today in English in the United States and India, with broader availability in those markets over the next few days.

What’s Changing

Preferred Sources lets you choose one or more outlets that should appear more frequently when they have fresh, relevant coverage for your query.

Google will also show a dedicated "From your sources" section on the results page. You will still see reporting from other publications, so Top Stories remains a mix of outlets.

Google Product Manager Duncan Osborn says the goal is to help you “stay up to date on the latest content from the sites you follow and subscribe to.”

How To Turn It On

Image Credit: Google
  1. Search for a topic that is in the news.
  2. Tap the icon to the right of the Top stories header.
  3. Search for and select the outlets you want to prioritize.
  4. Refresh the results to see the updated mix.

You can update your selections at any time. If you previously opted in to the experiment through Labs, your saved sources will carry over.

In early testing through Labs, more than half of participants selected four or more sources. That suggests people value seeing a range of outlets while still leaning toward publications they trust.

Why It Matters

For publishers, Preferred Sources creates a direct way to encourage loyal readers to see more of your coverage in Search.

Loyal audiences are more likely to add your site as a preferred source, which can increase the likelihood of showing up for them when you have fresh, relevant reporting.

You can point your audience to the new setting and explain how to add your site to their list. Google has also published help resources for publishers that want to promote the feature to followers and subscribers.

This adds another personalization layer on top of the usual ranking factors. Google says you will still see a diversity of sources, and that outlets only appear more often when they have new, relevant content.

Looking Ahead

Preferred Sources fits into Google’s push to let you customize Search while keeping a variety of perspectives in Top Stories.

If you have a loyal readership, this feature is another reason to invest in retention and newsletters, and to make it easy for readers to follow your coverage on and off Search.

Is AI Cutting Into Your SEO Conversions? via @sejournal, @Kevin_Indig

Since March 2025 in the U.S. (and May elsewhere), many sites have noticed an uncomfortable pattern: organic conversions slipping.

It’s easy to blame falling traffic from Google’s intensified AI Overviews.

But purchase intent doesn’t just vanish. Does it?

If your conversions are holding steady, congratulations. If they’re not, the reasons may be more layered than you think.

In today’s Memo, I’m breaking down the five biggest forces I see behind SEO conversion declines across industries:

  • Loss of top-of-the-funnel (TOFU) traffic (and why it matters more than you thought).
  • Platform shifts pulling demand into other ecosystems.
  • Channel shifts from organic to paid search.
  • Attribution leakage that hides organic’s true impact.
  • Macro factors pressuring conversion rates.

I’ll also walk you through the signals to check, how to measure each, and – inside the premium section – the exact process I use to identify which drivers are hitting a site the hardest.

AI cutting SEO conversions (Image Credit: Kevin Indig)


How have your SEO conversions changed since Google intensified AI Overviews?

If they’ve grown – all the power to you!

If not, I’m seeing five underlying reasons that could be contributing to their decline across industry types:

  1. Loss of TOFU traffic.
  2. Platform shift.
  3. Channel shift.
  4. Attribution loss.
  5. Economic change.

Sites that are noticing an SEO conversion drop have seen it since 2025 (March in the U.S., May in other countries).

It’s logical to assume that the reason is a decline in organic traffic – makes sense – but purchase intent doesn’t just vanish.

Have your conversions gone to other sites, or could there be another explanation behind their decline?

Let’s dig in.

For decades, SEOs have created top-of-the-funnel content (like “what is X” or “why you need X”). This kind of content often has an unclear impact on the bottom line.

Now that organic clicks are dropping, conversions are dropping (to a lower degree) as well.

Was top-of-the-funnel content more impactful than we thought all along?

I raised that theory first in the AI Halftime Report:

AIOs are really mostly TOFU queries. In that case, TOFU content always had more impact on the bottom line than we were able to prove and we can expect the traffic decline to level off.

Or AIOs impact way more than MOFU and BOFU queries as well (which is what I think), and we’re in for a long decline of traffic.

If true, I expect revenue that’s attributed to organic search to decline at a lower rate – or not at all for certain companies – since purchase intent doesn’t just go away. Therefore, revenue results would relate more to our ability to influence purchase intent.

That’s where my concept of “Demand Activation” elasticity comes in.

In economics, price elasticity measures how much demand changes when prices change.

In marketing, Demand Activation elasticity describes how much eventual purchase behavior changes when you influence someone early in their journey.

Think about Demand Activation as how many potential customers you influence to buy from you.

If the “elasticity” is high, being visible at the very top of the funnel creates a disproportionate downstream impact on revenue, even if you can’t directly attribute it in analytics.

If this turns out to be correct, it’s an argument for earning AI visibility.

If Demand Activation has the impact I think it has, being visible in ChatGPT, AI Mode & Co. has stronger downstream effects than we can directly attribute. I’ve certainly seen more high-pipeline deals and purchases come from ChatGPT for some of my clients.

To illustrate the concept, let’s consider an economic example.

For a long time, I've been searching for an excuse to write about the economic impact if Germany were to open stores on Sundays: Would people buy more if they could, or would purchases simply spread out across more days?

Studies by the EHI, IFO, and IW Köln show that people in Germany would actually buy more if stores were open on Sundays, especially non-food items. [1, 2, 3]

Stores in Germany do open a few Sundays a year.

And during those rare occasions, people shop more, especially for impulse buys.

Some research suggests that it’s mainly driven by events and tourism in higher spend areas, but looking at EU neighbors with an open-Sunday policy, like the Netherlands, we see consistently higher incremental retail spend.

To bring it back to Search, exposure early on in the user journey (as in “more open Sundays”) might have a stronger downstream impact (like more top-of-funnel visits) than we thought. Therefore, it could be critical to be broadly visible in LLMs.

Signals To Check:

1. TOFU traffic decline vs. MOFU/BOFU.

  • How to measure: In Search Console, filter queries using a TOFU regex (remove branded terms; a sample pattern follows this list). Compare YOY clicks for TOFU vs. MOFU/BOFU.

2. Branded search volume change.

  • How to measure: Use Google Trends or a classic keyword tool (Ahrefs, Semrush) to track branded search volume over time. Correlate drops with TOFU traffic declines and conversions from organic.

3. Assisted conversions drop.

  • How to measure: In GA4 or another MTA model, compare YOY assisted conversions from organic search. A sharp drop suggests TOFU content was influencing downstream revenue.
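
For the TOFU regex in step 1, a hypothetical starting pattern for Search Console's custom (RE2) query filter could be the line below; tune the term list to your niche and exclude branded queries with a separate "doesn't match" filter:

    ^(what|why|how|who|guide|tutorial|tips|ideas|examples?)\b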

Another explanation is that conversions are happening more on other platforms instead of Google Search.

While Google’s ad market share has grown over the last five years, search behavior has diversified across multiple ecosystems:

  • TikTok, YouTube, Reddit, LinkedIn, Instagram, niche forums – all of which have their own “search” layers.
  • YouTube has long been the second-largest search engine in the world.
  • Reddit is now the second-largest site in the U.S. (only Wikipedia is bigger), and Google is surfacing Reddit content more prominently, except in ecommerce.

The biggest shift, however, may be to LLMs.

ChatGPT alone sees 2.5 billion prompts per day.[4] While many prompts are additive to Google Search and exploratory in intent, it’s unlikely there’s no overlap with purchase-driven queries.

Why this is happening now:

  • Google’s increased integration of Reddit results (high-trust user content) changes click patterns.
  • New LLM model releases (ChatGPT o3, Gemini 2.5) improve quality and speed, keeping users inside AI environments longer.
  • AI-first platforms are beginning to feel less “experimental” and more like a default research tool.
Image Credit: Kevin Indig

Signals To Check:

1. Referral traffic from non-Google search platforms.

  • How to measure: In GA4, track YOY referral traffic from YouTube, Reddit, TikTok, LinkedIn. See if gains coincide with Google organic losses.

2. Share of search activity across platforms.

  • How to measure: Use Similarweb, Statcounter, or GWI to compare platform-specific search volumes and market share over time.

3. Self-reported attribution.

  • How to measure: Ask users to fill out a short survey about where they first and last saw your brand after signing up or buying.

It is also possible that the clicks that would have gone to organic search are now going to paid search. The logic is simple:

When AI Overviews or zero-click results satisfy most of the informational need, the only prominent offers left are often ads.

If users still want to explore a product or service after reading an AIO, they might be more likely to click the sponsored result than scroll further to organic links.

The timing matches AIO rollout phases. If we see Google reporting strong Search revenue growth while organic traffic declines, it is a sign the demand has not disappeared – it is just being monetized differently.

Image Credit: Kevin Indig

Alphabet’s Q1 2025 10‑Q [5] reveals that paid clicks from Google Search either grew or hit 0% growth in the last 10 quarters, but never declined.

Impressions (from Google Network), on the other hand, saw the opposite trend.

Whenever paid impressions drop, paid clicks go up because lower ad inventory means advertisers need to pay more for traffic.

Q2 2025 earnings [6] highlighted that Search ad revenue grew 12% year‑over‑year. Industry benchmark data reveals that the average Google CPC in 2025 lands at $5.26 – up approximately 13% year-over-year.[7]

So, less ad inventory leads to higher CPCs and more paid clicks.

Since we don’t know how many AI Overviews Google shows ads for, we can’t say with certainty that more clicks are going to ads as a direct result, but the data does show that more clicks are going to ads.

Signals To Check:

1. Organic vs. paid search traffic share.

  • How to measure: In GA4, compare YOY sessions from organic search vs. paid search. Look for paid’s share increasing as organic drops.

2. Paid search impression and click growth.

  • How to measure: Pull impressions and clicks from your Google Ads account (or industry benchmarks) over the last 12 months and compare to pre-AIO periods.

3. CPC and CPA trends.

  • How to measure: In Google Ads or industry benchmarks, track YOY CPC and CPA changes in your vertical. Rising CPC with organic decline suggests a mix shift.

One (popular) possibility is that the influence of organic search has not changed much, but the way we measure it has.

Essentially, classic attribution methods are broken.

In the classic model, the path was:

Search → Click → Landing Page → Conversion

Now, the path may look more like:

Search (or AIO) → Brand Recall → Direct Visit → Conversion

AI Overviews answer the user’s question before the click, so when they are ready to buy, they bypass the search click entirely and go straight to the homepage or an app.

In analytics, that conversion shows up as direct traffic, not organic search.

Attribution leakage has always been a challenge for SEO, but AI-driven summaries and brand mentions make it worse.

Because SEO is a demand-capture channel, it often takes far more time between first and last touch to convert users than the default 90-day lookback window captures.

Often, the last touch goes to paid channels because advertising tips people over the edge.

Also, it’s not uncommon for users to switch devices during a purchase cycle, making attribution way harder. Lastly, most attribution tools are geared towards advertising.

If you only track last-click conversions, you may underestimate the true contribution of search visibility.

Signals To Check:

1. Direct conversions are up while organic conversions are down.

  • How to measure: In GA4, compare YOY direct channel conversions vs. organic. Look for inverse movement.

2. Branded search stable or rising.

  • How to measure: Use Google Trends or a keyword tool to track branded search queries. Stability with organic session decline suggests clicks are being skipped.

3. Multi-touch attribution still shows search influence.

  • How to measure: In GA4 (data-driven model) or a dedicated attribution tool, check if search remains a common first or assist touchpoint even when last-click conversions fall.

Are SEO conversion rates down because people simply have less money?

There is credible evidence that macro conditions in the U.S. are weighing on conversion rates:

1. Price sensitivity and promotion dependence

Adobe reports that shoppers were unusually price elastic during the holiday season of 2024.

A 1% drop in price produced a roughly 1.03% rise in demand, indicating elevated sensitivity to discounts. That effect implies conversions were heavily promotion-led.[8]

Adobe’s Digital Price Index shows online prices have fallen for 33 straight months through May 2025, suggesting merchants are discounting to stimulate demand.

Sustained discounting typically lifts conversions only when price cuts are material, and it compresses margins.[9]

2. Consumer caution and mix shift

Salesforce’s Shopping Index commentary notes U.S. shoppers “buying less,” prioritizing essentials, and trading down in 2025.

It also cites 0% U.S. ecommerce sales growth in Q1 2025, consistent with a softer propensity to purchase.[10]

Consumer confidence has improved slightly but remains soft relative to 2024, which tends to dampen conversion rates.[11]

3. Household finance constraints

The New York Fed reports total household debt at a record $18.39 trillion in Q2 2025, with delinquency rates up from earlier periods and credit card balances at $1.21 trillion.

Higher borrowing costs and rising delinquencies constrain checkout conversion, especially for lower-income cohorts.[12]

4. Observed conversion pressure in digital benchmarks

Contentsquare’s 2025 Digital Experience Benchmark finds online conversion rates fell 6.1% year over year, attributing much to experience friction.

In context with the macro signals above, this supports a broader environment where it is harder to turn visits into orders without heavier incentives.[13]

But…

Overall, U.S. ecommerce dollars are still growing in many periods, including +5.6% year-over-year in Q1 2025 and strong holiday spend, so demand has not collapsed.

Growth is being “bought” through price cuts and promotions, which can mask weaker underlying conversion propensity.[14, 15]

Also, you could argue that these economic conditions have been in place for a few years.

Why would they impact SEO conversions so much now?

Signals To Check:

1. Organic conversion rate trend vs. other channels.

  • Track monthly SEO conversion rates alongside paid search, direct, and email.
  • If all channels decline in parallel, macroeconomic pressure is a likely driver.
  • If organic drops disproportionately, AI Overviews are adding to the decline.

2. Correlation with economic indicators.

  • Compare organic CR trends to macro metrics like CPI, inflation rate, Consumer Confidence Index, and online price trends (Adobe DPI).
  • Look for statistically significant correlations, like CR rising when CPI falls or confidence increases.
  • If patterns are similar across Paid Search and Direct, macroeconomic factors are likely influencing purchase readiness.

3. Promotion elasticity

  • Measure CR lift during promotions vs. baseline for organic, paid, and direct traffic.
  • A bigger lift than in prior years – especially if mirrored across channels – indicates conversions are increasingly discount-driven, a sign of macro pressure.

If you’re experiencing a decline in SEO conversions in 2025, it’s likely not due to one specific reason.

In fact, it’s likely that all five options are playing into SEO conversion drops across the web.

To what degree each factor has an impact varies from site to site and industry to industry.

That’s why it’s so important to run the analysis I recommend in each section above for your own data.

AI Mode will intensify the downward trend of SEO conversions.

I don’t think SEO will decline to zero because a small fraction of people will still click, even in AI Mode.

And Google won’t show AI Mode everywhere, because adoption is generational (see the UX study of AIOs for more info).

I think AI Mode will launch at a broader scale (like showing up for more queries overall) when Google figures out monetization.

Plus, ChatGPT is not yet monetizing, so advertisers go to Google and Meta – for now. And that’s my hypothesis as to why Google Search is continuing to grow.

At least for the time being.

It’ll be interesting to see what happens in the coming months.


Featured Image: Paulo Bobita/Search Engine Journal

Google Says AI-Generated Content Should Be Human Reviewed via @sejournal, @martinibuster

Google’s Gary Illyes confirmed that AI content is fine as long as the quality is high. He said that “human created” isn’t precisely the right way to describe their AI content policy, and that a more accurate description would be “human curated.”

The questions were asked by Kenichi Suzuki in the context of an exclusive interview with Illyes.

AI Overviews and AI Mode Models

Kenichi asked about the AI models used for AI Overviews and AI Mode, and he answered that they are custom Gemini models.

Illyes answered:

“So as you noted, the the model that we use for AIO (for AI Overviews) and for AI mode is a custom Gemini model and that might mean that it was trained differently. I don’t know the exact details, how it was trained, but it’s definitely a custom model.”

Kenichi then asked if AI Overviews (AIO) and AI Mode use separate indexes for grounding.

Grounding is where an LLM will connect answers to a database or a search index so that answers are more reliable, truthful, and based on verifiable facts, helping to cut down on hallucinations. In the context of AIO and AI Mode, grounding generally happens with web-based data from Google’s index.

Suzuki asked:

“So, does that mean that AI Overviews and AI Mode use separate indexes for grounding?”

Google’s Illyes answered:

“As far as I know, Gemini, AI Overview and AI Mode all use Google search for grounding. So basically they issue multiple queries to Google Search and then Google Search returns results for that those particular queries.”

Kenichi was trying to get an answer regarding the Google Extended crawler, and Illyes’s response was to explain when the Google Extended crawler comes into play.

“So does that mean that the training data are used by AIO and AI Mode collected by regular Google and not Google Extended?”

And Illyes answered:

“You have to remember that when grounding happens, there’s no AI involved. So basically it’s the generation that is affected by the Google extended. But also if you disallow Google Extended then Gemini is not going to ground for your site.”

AI Content In LLMs And Search Index

The next question that Illyes answered was about whether AI content published online is polluting LLMs. Illyes said that this is not a problem with the search index, but it may be an issue for LLMs.

Kenichi’s question:

“As more content is created by AI, and LLMs learn from that content. What are your thoughts on this trend and what are its potential drawbacks?”

Illyes answered:

“I’m not worried about the search index, but model training definitely needs to figure out how to exclude content that was generated by AI. Otherwise you end up in a training loop which is really not great for for training. I’m not sure how much of a problem this is right now, or maybe because how we select the documents that we train on.”

Content Quality And AI-Generated Content

Suzuki then followed up with a question about content quality and AI.

He asked:

“So you don’t care how the content is created… so as long as the quality is high?”

Illyes confirmed that a leading consideration for LLM training data is content quality, regardless of how it was generated. He specifically cited the factual accuracy of the content as an important factor. Another factor he mentioned is that content similarity is problematic, saying that “extremely” similar content shouldn’t be in the search index.

He also said that Google essentially doesn’t care how the content is created, but with some caveats:

“Sure, but if you can maintain the quality of the content and the accuracy of the content and ensure that it’s of high quality, then technically it doesn’t really matter.

The problem starts to arise when the content is either extremely similar to something that was already created, which hopefully we are not going to have in our index to train on anyway.

And then the second problem is when you are training on inaccurate data and that is probably the riskier one because then you start introducing biases and they start introducing counterfactual data in your models.

As long as the content quality is high, which typically nowadays requires that the human reviews the generated content, it is fine for model training.”

Human Reviewed AI-Generated Content

Illyes continued his answer, this time focusing on AI-generated content that is reviewed by a human. He emphasizes human review not as something that publishers need to signal in their content, but as something that publishers should do before publishing the content.

Again, “human reviewed” does not mean adding wording on a web page that the content is human reviewed; that is not a trustworthy signal, and it is not what he suggested.

Here’s what Illyes said:

“I don’t think that we are going to change our guidance any time soon about whether you need to review it or not.

So basically when we say that it’s human, I think the word human created is wrong. Basically, it should be human curated. So basically someone had some editorial oversight over their content and validated that it’s actually correct and accurate.”

Takeaways

Google’s policy, as loosely summarized by Gary Illyes, is that AI-generated content is fine for search and model training if it is factually accurate, original, and reviewed by humans. This means that publishers should apply editorial oversight to validate the factual accuracy of content and to ensure that it is not “extremely” similar to existing content.

Watch the interview:

Featured Image by Shutterstock/SuPatMaN

Google Says AI-Generated Content Will Not Cause Ranking Penalty via @sejournal, @martinibuster

Google’s Gary Illyes recently answered the question of whether AI-generated images used together with “legit” content can impact rankings. Gary discussed whether they have an impact on SEO and called attention to a possible technical side effect involving server resources.

Does Google Penalize for AI-Generated Content?

How does Google react to AI image content when it’s encountered in the context of a web page? Google’s Gary Illyes answered that question within the context of a Q&A and offered some follow-up observations about how it could lead to extra traffic from Google Image Search. The question was asked at about the ten-minute mark of the interview conducted by Kenichi Suzuki and published on YouTube.

This is the question that was asked:

“Say if there’s a content that the content itself is legit, the sentences are legit but and also there are a lot of images which are relevant to the content itself, but all of them, let’s say all of them are generated by AI. Will that content or the overall site, is it going to be penalized or not?”

This is an important and reasonable question because Google ran an update about a year ago that appeared to de-rank low quality AI-generated content.

Google’s Gary Illyes’ answer was clear that AI-generated content will not result in penalization and that it has no direct impact on SEO.

He answered:

“No, no. So AI generated image doesn’t impact the SEO. Not direct.

So obviously when you put images on your site, you will have to sacrifice some resources to those images… But otherwise you are not going to, I don’t think that you’re going to see any negative impact from that.

If anything, you might get some traffic out of image search or video search or whatever, but otherwise it should just be fine.”

AI-Generated Content

Gary Illyes did not discuss authenticity; however, it’s a good thing to consider in the context of using AI-generated content. Authenticity is an important quality for users, especially in contexts where there is an expectation that an illustration is a faithful depiction of an actual outcome or product. For example, users expect product illustrations to accurately reflect the products they are purchasing and photos of food to reasonably represent the completed dishes after following the recipe instructions.

Google often says that content should be created for users and that many questions about SEO are adequately answered by the context of how users will react to it. Illyes did not reflect on any of that, but it is something that publishers should consider if they care about how content resonates with users.

Gary’s answer makes it clear that AI-generated content will not have a negative impact on SEO.

Featured Image by Shutterstock/Besjunior

Google Web Guide: How It’s Reshaping The SERP And What It Means For Your SEO Strategy via @sejournal, @cyberandy

For decades, the digital world has been defined by hyperlinks, a simple, powerful way to connect documents across a vast, unstructured library. Yet, the foundational vision for the web was always more ambitious.

It was a vision of a Semantic Web, a web where the relationships between concepts are as important as the links between pages, allowing machines to understand the context and meaning of information, not just index its text.

With its latest Search Labs experiment, Web Guide (that got me so excited), Google is taking an important step in this direction.

Google’s Web Guide is designed to make it easier to find information, not just webpages. It is positioned as an alternative to AI Mode and AI Overviews for tackling complex, multi-part questions or for exploring a topic from multiple angles.

Built using a customized version of the Gemini AI model, Web Guide organizes search results into helpful, easy-to-browse groups.

This is a pivotal moment. It signals that the core infrastructure of search is now evolving to natively support the principle of semantic understanding.

Web Guide represents a shift away from a web of pages and average rankings and toward a web of understanding and hyper-personalization.

This article will deconstruct the technology behind Web Guide, analyzing its dual impact on publishers and outlining a new playbook for the era of SEO, or Generative Engine Optimization (GEO) if you like.

I personally don’t see Web Guide as just another feature; I see it as a glimpse into the future of how knowledge shall be discovered and consumed.

How Google’s Web Guide Works: The Technology Behind The Hyper-Personalized SERP

At its surface, Google Web Guide is a visual redesign of the search results page. It replaces the traditional, linear list of “10 blue links” with a structured mosaic of thematic content.

For an exploratory search like [how to solo travel in Japan], a user might see distinct, expandable clusters for “comprehensive guides,” “personal experiences,” and “safety recommendations.”

This allows users to immediately drill down into the facet of their query that is most relevant to them.

But, the real revolution is happening behind the scenes. This curation is powered by a custom version of Google’s Gemini model, but the key to its effectiveness is a technique known as “query fan-out.”

When a user enters a query, the AI doesn’t just search for that exact phrase. Instead, it deconstructs the user’s likely intent into a series of implicit, more specific sub-queries, “fanning out” to search for them in parallel.

For the “solo travel in Japan” query, the fan-out might generate internal searches for “Japan travel safety for solo women,” “best blogs for Japan travel,” and “using the Japan Rail Pass.”

By casting this wider net, the AI gathers a richer, more diverse set of results. It then analyzes and organizes these results into the thematic clusters presented to the user. This is the engine of hyper-personalization.

The SERP is no longer a one-size-fits-all list; it’s a dynamically generated, personalized guide built to match the multiple, often unstated, intents of a specific user’s query. (Here is the early analysis I did by analyzing the network traffic – HAR file – behind a request.)

To visualize how this works in semantic terms, let’s consider the query “things to know about running on the beach,” which the AI breaks down into the following facets:

Screenshot from search for [things to know about running on the beach], Google, August 2025
Running on the beach fan-out (Image from author, August 2025)

The Web Guide UI is composed of several elements designed to provide a comprehensive and personalized experience:

  • Main Topic: The central theme or query that the user has entered.
  • Branches: The main categories of information generated in response to the user’s query. These branches are derived from various online sources to provide a well-rounded overview.
  • Sites: The specific websites from which the information is sourced. Each piece of information within the branches is attributed to its original source, including the entity name and a direct URL.

Let’s review Web Guide in the context of Google’s other AI initiatives.

  • AI Overviews – Primary function: generate a direct, synthesized answer at the top of the SERP. Core technology: generative AI, retrieval-augmented generation. Impact on web links: high negative impact; designed to reduce clicks by providing the answer directly. It is replacing featured snippets, as recently demonstrated by Sistrix for the UK market.
  • AI Mode – Primary function: provide a conversational, interactive, generative AI experience. Core technology: custom version of Gemini, query fan-out, chat history. Impact on web links: high negative impact; replaces traditional results with a generated response and mentions.
  • Web Guide – Primary function: organize and categorize traditional web link results. Core technology: custom version of Gemini, query fan-out. Impact on web links: moderate/uncertain impact; aims to guide clicks to more relevant sources.

Web Guide’s unique role is that of an AI-powered curator or librarian.

It adds a layer of AI organization while preserving the fundamental link-clicking experience, making it a strategically distinct and potentially less contentious implementation of AI in search.

The Publisher’s Conundrum: Threat Or Opportunity?

The central concern surrounding any AI-driven search feature is the potential for a severe loss of organic traffic, the economic lifeblood of most content creators. This anxiety is not speculative.

Cloudflare’s CEO has publicly criticized these moves as another step in “breaking publishers’ business models,” a sentiment that reflects deep apprehension across the digital content landscape.

This fear is contextualized by the well-documented impact of Web Guide’s sibling feature, AI Overviews.

A critical study by the Pew Research Center revealed that the presence of an AI summary at the top of a SERP dramatically reduces the likelihood that a user will click on an organic link, a nearly 50% relative drop in click-through rate in its analysis.

Google has mounted a vigorous defense, claiming it has “not observed significant drops in aggregate web traffic” and that the clicks that do come from pages with AI Overviews are of “higher quality.”

Amid this, Web Guide presents a more nuanced picture. There is a credible argument that, by preserving the link-clicking paradigm, it could be a more publisher-friendly application of AI.

Its “query fan-out” technique could benefit high-quality, specialized content that has struggled to rank for broad keywords.

In this optimistic view, Web Guide acts as a helpful librarian, guiding users to the right shelf in the library rather than just reading them a summary at the front desk.

However, even this more “link-friendly” approach cedes immense editorial control to an opaque algorithm, making the ultimate impact on net traffic uncertain to say the least.

The New Playbook: Building For The “Query Fan-Out”

The traditional goal of securing the No. 1 ranking for a specific keyword is rapidly becoming an outdated and insufficient goal.

In this new landscape, visibility is defined by contextual relevance and presence within AI-generated clusters. This requires a new strategic discipline: Generative Engine Optimization (GEO).

GEO expands the focus from optimizing for crawlers to optimizing for discoverability within AI-driven ecosystems.

The key to success in this new paradigm lies in understanding and aligning with the “query fan-out” mechanism.

Pillar 1: Build For The “Query Fan-Out” With Topical Authority

The most effective strategy is to pre-emptively build content that maps directly to the AI’s likely “fan-out” queries.

This means deconstructing your areas of expertise into core topics and constituent subtopics, and then building comprehensive content clusters that cover every facet of a subject.

This involves creating a central “pillar” page for a broad topic, which then links out to a “constellation” of highly detailed, dedicated articles that cover every conceivable sub-topic.

For “things to know about running on the beach,” (the example above) a publisher should create a central guide that links to individual, in-depth articles such as “The Benefits and Risks of Running on Wet vs. Dry Sand,” “What Shoes (If Any) Are Best for Beach Running?,” “Hydration and Sun Protection Tips for Beach Runners,” and “How to Improve Your Technique for Softer Surfaces.”

By creating and intelligently interlinking this content constellation, a publisher signals to the AI that their domain possesses comprehensive authority on the entire topic.

This dramatically increases the probability that when the AI “fans out” its queries, it will find multiple high-quality results from that single domain, making it a prime candidate to be featured across several of Web Guide’s curated clusters.

This strategy must be built upon Google’s established E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness) principles, which are amplified in an AI-driven environment.

Pillar 2: Master Technical & Semantic SEO For An AI Audience

While Google states there are no new technical requirements for AI features, the shift to AI curation elevates the importance of existing best practices.

  • Structured Data (Schema Markup): This is now more critical than ever. Structured data acts as a direct line of communication to AI models, explicitly defining the entities, properties, and relationships within your content. It makes content “AI-readable,” helping the system understand context with greater precision. This could mean the difference between being correctly identified as a “how-to guide” versus a “personal experience blog,” and thus being placed in the appropriate cluster (see the sketch after this list).
  • Foundational Site Health: The AI model needs to see a page the same way a user does. A well-organized site architecture, with clean URL structures that group similar topics into directories, provides strong signals to the AI about your site’s topical structure. Crawlability, a good page experience, and mobile usability are essential prerequisites for competing effectively.
  • Write with semiotics in mind: As Gianluca Fiorelli would say, focus on the signals behind the message. AI systems now rely on hybrid chunking; they break content into meaning-rich segments that combine text, structure, visuals, and metadata. The clearer your semiotic signals (headings, entities, structured data, images, and relationships), the easier it is for AI to interpret the purpose and context of your content. In this AI-gated search environment, meaning and context have become your new keywords.
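
As a hedged illustration of that “how-to guide” versus “personal experience blog” distinction, the page type can be made explicit in markup. Both snippets below use placeholder values:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "HowTo",
      "name": "How to Run on Soft Sand",
      "step": [
        { "@type": "HowToStep", "text": "Shorten your stride." },
        { "@type": "HowToStep", "text": "Land midfoot to reduce slippage." }
      ]
    }
    </script>
    <!-- Versus a first-person account, typed as a blog post -->
    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "BlogPosting",
      "headline": "What I Learned Running My First Beach Marathon"
    }
    </script>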

The Unseen Risks: Bias In The Black Box

A significant criticism of AI-driven systems like Web Guide lies in their inherent opacity. These “black boxes” pose a formidable challenge to accountability and fairness.

The criteria by which the Gemini model decides which categories to generate and which pages to include are not public, raising profound questions about the equity of the curation process.

There is a significant risk that the AI will not only reflect but also amplify existing societal and brand biases. A compelling way to test Web Guide’s fairness is to review how it handles complex issues.

Screenshot from search for [Are women more likely to be prescribed antidepressants for physical symptoms?], Google, August 2025

Medical diagnostic queries are complex and can easily reveal biases.

Screenshot from search for [Will AI eliminate most white-collar jobs?], Google, July 2025

Once again, user-generated content is surfaced, and it might not always strike the right balance between doom narratives and overly optimistic positions.

Since the feature is built upon these same core systems of traditional Search, it is highly probable that it will perpetuate existing biases.

Conclusion: The Age Of The Semantic AI-Curated Web

Google’s Web Guide is not a temporary UI update; it is a manifestation of a deeper, irreversible transformation in information discovery.

It represents Google’s attempt to navigate the passage between the old world of the open, link-based web and the new world of generative, answer-based AI.

The “query fan-out” mechanism is the key to understanding its impact and the new strategic direction. For all stakeholders, adaptation is not optional.

The strategies that guaranteed success in the past are no longer sufficient. The core imperatives are clear: Embrace topical authority as a direct response to the AI’s mechanics, master the principles of Semantic SEO, and prioritize the diversification of traffic sources. The era of the 10 blue links is over.

The era of the AI-curated “chunks” has begun, and success will belong to those who build a deep, semantic repository of expertise that AI can reliably understand, trust, and surface.


Featured Image: NicoElNino/Shutterstock

Why Semantic HTML Matters For SEO And AI

I’ve had this post in drafts for a while, mostly as a container for me to drop bits into for when I get time to expand it into a proper newsletter.

Then, my good friend Jono Alderson published his excellent piece on semantic HTML, and for a few weeks, I lost the will to complete mine.

But, I thought I should finish my version anyway, as my focus is slightly different and perhaps a bit more practical than Jono’s.

You should still definitely read Jono’s blog; it says all I want to say and more.

Semantic HTML

Let’s start with a quick overview of what semantic HTML is.

As the language upon which the web is built, HTML is a markup language that surrounds text to provide it with structure.

The <p> tag around a block of content indicates that it is a paragraph of text.

The <h1> tag around a sentence shows that it is the page's main heading.

The <ol> tag indicates the start of an ordered (usually numbered) list.

The <img> tag indicates you'll be loading an image onto the webpage. And so forth.

Semantic HTML was used to code every webpage.

Content was surrounded by specific tags that indicated what each bit of content was meant for, and then CSS was applied to make it look good. It wasn't perfect by any means, but it worked.

It also meant that you could look at the raw HTML source of a webpage and see what the page was trying to deliver, and how. The HTML signposted the structure and meaning of each bit of content on the page. You could see the purpose of the page just by looking at its code.

Then WYSIWYG editors and later JavaScript frameworks arrived on the scene, and HTML took a backseat. Instead of <p> and <h1>, we got endless nestings of <div> and <span> tags.

The end result is webpage HTML that lacks structure and has no meaning, until it is completely rendered in the browser and visually painted onto a screen. Only then will the user (and a machine system trying to emulate a user) understand what the page's purpose is.

It's why Google goes through the effort of rendering pages as part of its indexing process (even though it really doesn't want to).

We know Google doesn't usually have the time to render a news article before it needs to rank it in Top Stories and elsewhere. The raw HTML is therefore immensely important for news publishers.

Good HTML allows Google to effortlessly extract your article content and rank your story where it deserves in Google's ecosystem.

Semantic HTML is a key factor here. This is the reason why SEOs like me insist that an article's headline is wrapped in the <h1> heading tag, and that this is the only instance of <h1> on an article page.

The <h1> headline indicates a webpage's primary headline. It signposts where the article begins, so that Google can find the article content easily.

    Which HTML Tags Are Semantic?

    Beyond the

    heading tag, there are many other semantic HTML elements you can implement that allow Google to more easily extract and index your article content.

    In no particular order, the elements you should be using are:

• Paragraphs: Don’t use <div> and <br> tags to format the article into paragraphs. There’s been a tag for that for as long as HTML has existed, and it’s the <p> tag. Use it.

• Subheadings: Use <h2>/<h3>/<h4> subheading tags to give your page structure. Use subheadings in an article to preface specific sections of content in your article. Use subheadings for the headers above concrete structural elements, such as recommended articles.

• Images: Always use the <img> tag if you want to show an image that you’d like Google to see as well. Google explicitly recommends this.

• Clickable Links: When linking to another page, either internal or external, use the <a> tag with an “href” value containing the target URL. It’s the only kind of link that Google will definitely follow.

• Relational Links: The <link> tag allows you to create a relationship between the current URL and another URL. This can be a canonical page, a stylesheet, an alternative language version of the current page, etc.

• Lists: Bullet lists should use the <ul> tag, and numbered lists should use the <ol> tag. You can make them look however you want with CSS, but do use the list tags as the foundation.

• Emphasis: When you want to highlight a specific word or phrase, there are semantic HTML tags you should use for that: <em> for italics, and <strong> for bold.

All the above tags, with the exception of <link>, are intended for the content of the webpage, providing structure and meaning to the text.
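As a quick illustration of that last distinction, here are a few common <link> relationships as they might appear in a page’s <head> (the URLs are placeholders):

<!-- Hypothetical <link> examples: these describe relationships, not visible content. -->
<link rel="canonical" href="https://example.com/article">
<link rel="alternate" hreflang="de" href="https://example.com/de/article">
<link rel="stylesheet" href="/styles/main.css">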

There are additional semantic HTML tags that are intended to provide structure and meaning to the code of the page.

These tags allow Google to identify different elements on the page, such as the navigation vs. a sidebar, and process them accordingly.

Image: Semantic HTML page structure, from W3Schools.com.

• The <head> and <body> tags exist to separate the page’s metadata (in the <head>) from the actual content (in the <body>). Every HTML page starts with those two.

• <header> can be used to wrap around the header section of the page, where the logo, navigation, and other stylistic elements sit.

• <nav> should be used for your site’s main navigation. Mega menus, hamburger menus, top navigation links, whatever form your navigation takes, you should wrap it in the <nav> tag.

• You can use <section> tags to divide your page into multiple sections. One section could be the article; another could be the comments below the article.

• <article> is the tag that shows where the page’s actual main article text begins (including the headline). This is a very valuable tag for news publishers.

• With <aside> you can indicate blocks of content like a sidebar of trending stories, recommended articles, or the latest news.

• <footer> is used for, you guessed it, the footer of the webpage.

These structural semantic tags help search engines understand the purpose and value of each section of HTML.

They enable Google to rapidly index your content and process the different elements of your pages appropriately.
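As an illustration, here’s a minimal sketch of how these structural tags might fit together on a typical news article page (all names and content are placeholders):

<!-- Hypothetical page skeleton: each region is identifiable from its tag alone. -->
<body>
  <header>
    <nav>
      <a href="/news">News</a>
      <a href="/sport">Sport</a>
    </nav>
  </header>
  <article>
    <h1>Article Headline</h1>
    <p>The article text begins here.</p>
  </article>
  <aside>
    <h2>Trending Stories</h2>
    <ul>
      <li><a href="/story-1">Another story</a></li>
    </ul>
  </aside>
  <footer>
    <p>&copy; Example Publisher</p>
  </footer>
</body>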

There are many more semantic HTML tags at your disposal, for a wide variety of purposes. Chances are, there’s an HTML element for every imaginable use case.

Rather than cram your code full of <div> tags to make something happen, first see if there’s a proper HTML element that does the trick.

How Does It Help AI?

We know that LLMs like ChatGPT and Perplexity crawl the open web for training data, as well as for specific user queries that require content from the web.

What some of you may not know is that LLMs do not render JavaScript when they process webpages.

Google is the exception to the rule, as it has devoted a great deal of resources to rendering webpages as part of indexing.

Because Google’s Gemini is the only LLM built on Google’s index, Gemini is the only LLM that uses content from fully rendered webpages.

So, if you want to have any chance of showing up as a cited source in ChatGPT or Perplexity, you’d do well to ensure your complete page content is available in your raw, unrendered HTML.

Using semantic HTML to structure your code and provide meaning also helps these LLMs easily identify your core content.

It’s much simpler for ChatGPT to parse a few dozen semantic HTML tags rather than several hundred (or even thousand) nested <div> tags to find a webpage’s main content.
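To picture the difference, compare a hypothetical div-soup fragment with its semantic equivalent (class names and content are invented):

<!-- Div soup: nothing identifies the main content without rendering. -->
<div class="c-x1"><div class="c-x2">Mayor Opens New Library</div>
<div class="c-x3">The library opened on Tuesday.</div></div>

<!-- Semantic: the main content announces itself. -->
<article>
  <h1>Mayor Opens New Library</h1>
  <p>The library opened on Tuesday.</p>
</article>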

If and when the “agentic web” comes to life (I’m skeptical), semantic HTML is likely a crucial aspect of success.

With meaningless <div> and <span> tags, it’s much easier for an AI agent to misunderstand what actions it should perform.

When you use semantic HTML for things like buttons, links, and forms, the chances of an AI agent failing its task are much lower.

The meaning inherent in proper HTML tags will tell the AI agent where to go and what to do.
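For instance, compare a hypothetical script-driven control with its semantic equivalent (class names, URLs, and form fields are invented):

<!-- Ambiguous: nothing in the markup says this is a button, or what it does. -->
<div class="btn js-add-to-cart">Add to cart</div>

<!-- Self-describing: the tags declare the available actions and targets. -->
<form action="/cart/add" method="post">
  <input type="hidden" name="product-id" value="12345">
  <button type="submit">Add to cart</button>
</form>
<a href="/checkout">Proceed to checkout</a>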

What About Structured Data?

You may think that structured data has made semantic HTML obsolete.

After all, with structured data, you can provide machine systems with the necessary information about a page’s content and purpose in a simple machine-readable format.

This is true to an extent. However, structured data was never intended to replace semantic HTML. It serves an entirely different purpose.

Structured data has limitations that semantic HTML doesn’t have.

Structured data won’t tell a machine which button adds a product to a cart, what subheading precedes a critical paragraph of text, or which links the reader should click on for more information.

By all means, use structured data to enrich your pages and help machines understand your content. But you should also use semantic HTML for the same reasons.

Used together, semantic HTML and structured data are an unbeatable combination.
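Here’s a sketch of what that combination might look like on an article page, assuming schema.org’s NewsArticle type (the headline and date are placeholders):

<!-- Semantic HTML shows where the content lives on the page... -->
<article>
  <h1>Mayor Opens New Library</h1>
  <p>The city's new central library opened on Tuesday.</p>
</article>
<!-- ...while JSON-LD structured data describes what the page is about. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "headline": "Mayor Opens New Library",
  "datePublished": "2025-01-01"
}
</script>

The structured data describes the page; the semantic HTML locates the content within it. Each does a job the other can’t.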

Build Websites, Not Web Apps

I could go off on a 2,500-word rant about why we should be building websites instead of web apps, and how the appification of the web is anathema to the principles on which the World Wide Web was founded, but I’ll spare you that particular polemic.

Suffice it to say that web apps for content-delivery websites (like news sites) are almost always inferior to plain old-fashioned websites.

And websites are built, or should be, on HTML. Make use of all that HTML has to offer, and you’ll avoid 90% of the technical SEO pitfalls that web apps tend to faceplant themselves into.

That’s it for another edition. Thanks for reading and subscribing, and I’ll see you at the next one!


This post was originally published on SEO For Google News.



Google Is Testing An AI-Powered Finance Page via @sejournal, @martinibuster

Google announced that they’re testing a new AI-powered Google Finance tool. The new tool enables users to ask natural language questions about finance and stocks, get real-time information about financial and cryptocurrency topics, and access new charting tools that visualize the data.

Three Ways To Access Data

Google’s AI finance page offers three ways to explore financial data:

  1. Research
  2. Charting Tools
  3. Real-Time Data And News

Screenshot Of Google Finance

The screenshot above shows a watchlist panel on the left, a chart in the middle, a “latest updates” section beneath that, and a “research” section in the right-hand panel.

Research

The new finance page enables users to ask natural language questions about finance, including the stock market, and the AI will return comprehensive answers, plus links to the websites where the relevant answers can be found.

Closeup Screenshot Of Research Section

Charting Tools

Google’s finance page also features charting tools that enable users to visualize financial data.

According to Google:

“New, powerful charting tools will help you visualize financial data beyond simple asset performance. You can view technical indicators, like moving average envelopes, or adjust the display to see candlestick charts and more.”

Real-Time Data

The new finance page also provides real-time data and tools, enabling users to explore finance news, including cryptocurrency information. This part features a live news feed.

The AI-powered page will roll out over the next few weeks on Google.com/finance/.

Read more at Google:

We’re testing a new, AI-powered Google Finance.
