Core Update Done, GSC Bug Fixed, Mueller On Gurus – SEO Pulse via @sejournal, @MattGSouthern

Welcome to this week’s Pulse: the updates below affect when you can start analyzing core update performance, how much you can trust your impression data, and what Google’s CEO thinks AI will do to software security.

Here’s what matters for you and your work.

March 2026 Core Update Is Complete

Google’s March 2026 core update finished rolling out on April 8. The Google Search Status Dashboard confirms the completion.

Key facts: The rollout took 12 days, starting March 27 and finishing April 8. That’s within Google’s two-week estimate and faster than the December update, which took 18 days. Google called it “a regular update” and didn’t publish a companion blog post or new guidance. This was the third confirmed update in roughly five weeks, following the February Discover core update and the March spam update.

Why This Matters

You can now run a clean before-and-after comparison in Search Console. Google recommends waiting at least one full week after completion before drawing conclusions, which means mid-April is the earliest window for reliable analysis.

A ranking drop after a core update does not mean your site violated a policy. Core updates reassess content quality across the web. Some pages move up while others move down. Roger Montti, writing for Search Engine Journal, suggested the spam-then-core sequencing may not have been a coincidence, describing it as clearing the table before recalibrating quality signals.

What SEO Professionals Are Saying

Lily Ray, VP, SEO & AI Search at Amsive, noted on X that YouTube has gained visibility since the core update began rolling out:

“Just checked a client that ranked in AI Overviews last week and now the top 4 links in AI Overviews are all YouTube.

Let me guess: the core update was another way for Google to boost YouTube, like it did with the Discover core update.”

Aleyda Solís, SEO consultant and founder of Orainti, is running a poll on LinkedIn asking how the update impacted people’s websites. So far, most respondents say the impact was either positive or not noticeable.

Read our full coverage: Google Confirms March 2026 Core Update Is Complete

Google Fixes Search Console Bug That Inflated Impressions For Nearly A Year

Google confirmed a logging error in Search Console that over-reported impressions starting May 13, 2025. The company updated its Data Anomalies page on April 3 to acknowledge the issue.

Key facts: The bug ran for nearly 11 months before Google publicly acknowledged it. Clicks and other metrics were not affected. Google said the fix will roll out over the next several weeks, and sites may see a decrease in reported impressions during that period.

Why This Matters

If your impression numbers have looked unusually healthy since last May, this bug is likely part of the reason. The correction will change what your Performance report shows, but it will not change how your site actually performed in search. The impressions were logged incorrectly. Your actual visibility may not have changed.

Teams that reported impression-based metrics to clients or stakeholders since May were working with inflated numbers. Click data provides a cleaner signal for performance analysis while the fix rolls out. Treat May 13, 2025 as a data annotation point, similar to how you would mark an algorithm update date in your reporting.

What SEO Professionals Are Saying

Brodie Clark, independent SEO consultant, flagged the issue on March 30, four days before Google’s acknowledgment. He wrote:

“Heads-up: there is something bizarre going on with Google Search Console data right now.

Similar to the changes that came to light after the disabling of &num=100, impressions are again skyrocketing for specific surfaces on desktop.”

Clark documented impression spikes across merchant listings and Google Images filters on multiple ecommerce sites and called for the Search Console team to investigate.

Chris Long, co-founder of Nectiv, wrote on LinkedIn: “Holy moly SEOs. It turns out Google has been accidentally inflating impressions in Search Console reports for ALMOST A YEAR.” Long noted that Google did not indicate by how much impressions would decrease, and that the profiles he checked appeared stable so far.

Source: Google Data Anomalies in Search Console

Pichai Says AI Could ‘Break Pretty Much All Software’

Google CEO Sundar Pichai said AI models are “going to break pretty much all software out there” during a podcast conversation with Stripe CEO Patrick Collison. The interview covered AI infrastructure constraints and security risks.

Key facts: Pichai framed software security as a hidden constraint on AI deployment alongside memory supply and energy. When investor Elad Gil mentioned hearing that black market zero-day prices were falling because AI was increasing the supply of discoverable vulnerabilities, Pichai said he was “not at all surprised.”

Why This Matters

The security conversation may feel distant from daily SEO work, but it connects to the infrastructure your sites run on. If AI accelerates the pace at which vulnerabilities are found and exploited, the window between a flaw existing and an attacker using it gets shorter. That puts more pressure on maintaining current patches and auditing dependencies.

Pichai’s comments were conversational, not a formal Google policy statement. But they came from someone who oversees both the company’s AI models and its threat intelligence operation. Google’s threat teams have been warning about software security risks tied to faster vulnerability discovery.

Read our full coverage: Pichai Says AI Could ‘Break Pretty Much All Software’

Mueller Calls Self-Described SEO Gurus ‘Clueless Imposters’

Google’s John Mueller responded to a blog post by SEO professional Preeti Gupta about how the word “guru” is misused in the SEO industry. Mueller shared his view on Bluesky.

Key facts: Mueller wrote:

“To me, when someone self-declares themselves as an SEO guru, it’s an extremely obvious sign that they’re a clueless imposter. SEO is not belief-based, nobody knows everything, and it changes over time. You have to acknowledge that you were wrong at times, learn, and practice more.”

Gupta’s original post explained that in India the word guru carries deep cultural and spiritual meaning that is trivialized when SEO practitioners use it as a self-applied label.

Why This Matters

The core of what Mueller said is that SEO changes over time and that nobody has it all figured out.

Just look at what happened this week. Core updates continue to happen without a clear explanation of what changed. A basic logging bug in Search Console went unnoticed for nearly a year. The tools and signals we rely on every day are imperfect, and treating any methodology or perspective as settled knowledge is how mistakes get made.

Read Roger Montti’s full coverage: Google’s Mueller On SEO Gurus Who Are “Clueless Imposters”

Theme Of The Week: The Day-to-Day Work Continues

The speculation about where search is going has never been louder. But this week’s events were a core update finishing, a data bug getting patched, and a Google Search Advocate reminding people that nobody has all the answers.

The future Pichai describes may be coming, but it hasn’t arrived yet. Right now, the job is still reading your Search Console data, waiting for a core update to settle, and staying honest about what you do and do not know.

Mueller’s comment that SEO “is not belief-based” and “changes over time” is as good a summary of this week as any. Those who will succeed in the next version of search are probably the ones paying attention to this version first.



Featured Image: [Photographer]/Shutterstock

Google Answers If Outbound Links Pass “Poor Signals” via @sejournal, @martinibuster

Google’s John Mueller responded to a question about how Google treats outbound links from a site that has a link-related penalty. His answer suggests the situation may not work in the way many assume.

An SEO asked on Bluesky whether a site that has what they described as a “link penalty” could affect the value of outbound links. The question is somewhat vague because a link penalty can mean different things.

  • Was the site buying or building low quality inbound links?
  • Was the site selling links?
  • Was the site involved in some kind of link building scheme?

Despite the vagueness of the question, there’s a legitimate concern underlying it, which is about whether getting links from a site that lost rankings could also transfer harmful signals to other sites.

They asked:

“Hey @johnmu.com hypothetically speaking. If a site has a link penalty are the outbound links from that site devalued? Or do they have the ability to pass on poor signals.. ie bad neighbours?”

There are a number of link-related algorithms that I have written about in the past. And as often happens in SEO, other SEOs will pick up on what I wrote and paraphrase it without mentioning my article. Then someone else will paraphrase that, and after a couple of generations of that, there are some weird ideas circulating around.

Poor Signals AKA Link Cooties

If you really want to dig deep into link-related algorithms, I wrote a long and comprehensive article titled What Is Google’s Penguin Algorithm. Many of the research papers discussed in that article were never written about by anyone until I wrote about them. I strongly encourage you to read that article, but only if you’re ready to commit to a really deep dive into the topic.

Another one is about an algorithm that starts with a seed set of trusted sites; the further a site is from that seed set, the likelier that site is to be spam. That’s link distance ranking. Nobody had ever written about this link distance ranking patent until I wrote about it first. Over the years, other SEOs have written about it after reading my article, and though they don’t link to my article, they’re mostly paraphrasing what I wrote. You know how I can tell those SEOs copied my article? They use the phrase “link distance ranking,” a phrase that I invented. Yup! That phrase does not exist in the patent. I invented it, lol.
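To make the seed-set intuition concrete, here is a minimal, purely illustrative sketch in Python. The link graph, the seed set, and the site names are all invented for this example; the actual patent describes a far more sophisticated system, and nothing below is Google’s implementation.

```python
# Purely illustrative: a toy "distance from trusted seeds" walk over an
# invented link graph. Not Google's algorithm or data.
from collections import deque

# Hypothetical outbound-link graph: site -> sites it links to.
LINKS = {
    "trusted-news.example": ["guide.example", "blog.example"],
    "guide.example": ["blog.example", "spammy-directory.example"],
    "blog.example": ["guide.example"],
    "spammy-directory.example": ["casino-spam.example"],
    "casino-spam.example": ["spammy-directory.example"],
}
SEEDS = {"trusted-news.example"}  # hand-picked trusted starting points

def link_distance(seeds, links):
    """Breadth-first distance (in link hops) from the seed set to every reachable site."""
    dist = {site: 0 for site in seeds}
    queue = deque(seeds)
    while queue:
        site = queue.popleft()
        for target in links.get(site, []):
            if target not in dist:
                dist[target] = dist[site] + 1
                queue.append(target)
    return dist

for site, hops in sorted(link_distance(SEEDS, LINKS).items(), key=lambda kv: kv[1]):
    # The intuition: the farther a site sits from the trusted seeds,
    # the likelier it is to be low quality or spam.
    print(f"{site}: {hops} hop(s) from the trusted seed set")
```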

The other foundational article that I wrote is about Google’s Link Graph and how it plays into ranking web pages. Everything I write is easy to understand and is based on research papers and patents that I link to so that you can go and read them yourself.

The idea behind the research papers and patents is that there are ways to use the link relationships between sites to identify what a site is about, but also whether it’s in a spammy neighborhood, which means low-quality content and/or manipulated links.

The articles about Link Graphs and link distance ranking algorithms are the ones that are related to the question that was asked about outbound links passing on a negative signal. The thing about it is that those algorithms aren’t about passing a negative signal. They’re based on the intuition that good sites link to other good sites, and spammy sites tend to link to other spammy sites. There’s no outbound link cooties being passed from site to site.

So what probably happened is that one SEO copied my article, then added something to it, and fifty others did the same thing, and then the big takeaway ends up being about outbound link cooties. And that’s how we got to this point where someone’s asking Mueller if sites pass “poor signals” (link cooties) to the sites they link to.

Google May Ignore Links From Problematic Sites

Google’s John Mueller seemed unsure what the question meant, but he did confirm that Google basically just ignores low-quality links. In other words, there are no “link cooties” being passed from one site to another.

Mueller responded:

“I’m not sure what you mean with ‘has a link penalty’, but in general, if our systems recognize that a site links out in a way that’s not very helpful or aligned with our policies, we may end up ignoring all links out from that site. For some sites, it’s just not worth looking for the value in links.”

Mueller’s answer suggests that Google does not necessarily treat links from problematic sites as harmful but may instead choose to ignore them entirely. This means that rather than passing value or negative signals, those links may simply be excluded from consideration.

That doesn’t mean that links aren’t used to identify spammy sites. It just means that spamminess isn’t something that is passed from one site to another.

Ignoring Links Is Not The Same As Passing Negative Signals

The distinction about ignoring links is important because it separates two different ideas that are easily conflated.

  • One is that a link can lose value or be discounted.
  • The other is that a link can actively pass negative signals.

Mueller’s explanation aligns with the idea that Google simply ignores low-quality links altogether. In that case, the links are not contributing positively, but they are also not spreading a negative signal to other sites. They’re just ignored.

And that kind of aligns with the idea of something else that I was the first to write about, the Reduced Link Graph. A link graph is basically a map of the web created from all the link relationships from one page to another page. If you drop all the links that are ignored from that link graph, all the spammy sites drop out. That’s the reduced link graph.

Mueller cited two interesting factors for ignoring links: helpfulness and alignment with Google’s policies. The helpfulness part is a little vague, but it kind of makes sense.

Takeaways:

  • Links from problematic low quality sites may be ignored
  • Links don’t pass on “poor signals”
  • Negative signal propagation is almost certainly not a thing
  • Google’s systems appear to prioritize usefulness and policy alignment when evaluating links
  • If you write an article based on one of mine, link back to it. 🙂

Featured Image by Shutterstock/minifilm

Google March Core Update Left 4 Losers For Every Winner In Germany via @sejournal, @MattGSouthern

A SISTRIX analysis of German search data found far more losers than winners after Google’s March core update.

The analysis revealed 134 domains experiencing confirmed visibility losses and 32 with gains. SISTRIX determined these figures by examining 1,371 domains showing significant visibility changes, then applying filters such as a 52-week Visibility Index history, 30 days of daily data, and visual confirmation of each domain’s trend.

The update began rolling out on March 27 and was completed on April 8, 12 days after launch. It was the first broad core update of 2026 and arrived two days after Google finished the March 2026 spam update.

The SISTRIX data covers the German search market specifically. Results in other markets may differ.

What The Data Shows

Online shops accounted for the largest share of losers, with 39 of 134. Losses cut across verticals, hitting fashion (cecil.de, down 30%), electronics (media-dealer.de, down 37%), gardening (123zimmerpflanzen.de, down 27%), and B2B supply retailers. Larger German brands like notebooksbilliger.de and expert.de also declined, each losing about 11%.

Seven language and education tools lost visibility together, forming the most distinct cluster among the losers. verbformen.de fell 30%, bab.la dropped 22%, and korrekturen.de, studysmarter.de, linguee.de, openthesaurus.de, and reverso.net all declined by 7% to 15%. These sites offer conjugation tables, translations, synonyms, and study tools.

SISTRIX reports that recipe and food portals have faced pressure from Featured Snippets and, more recently, AI Overviews. The March update affected several of them. kuechengoetter.de lost 29%, schlemmer-atlas.de fell 25%, and eatsmarter.de dropped 18%. chefkoch.de, Germany’s largest recipe site, remained stable.

Among user-generated content platforms, gutefrage.net (Germany’s equivalent of Quora) lost about 24% of its visibility. SISTRIX noted that the site has been declining since mid-2025, when its Visibility Index peaked at 127. It was around 62 before this update and dropped to 47. x.com also fell 25% in German search visibility.

Who Gained

The 32 winners were dominated by official websites and established brands.

audible.de was the largest gainer at 172%, jumping from a Visibility Index of about 3 to over 8. ratiopharm.de gained 12%, commerzbank.de gained 11%, and government sites like hessen.de and arbeitsagentur.de gained 5-8%.

Four German airport websites grew in parallel. Stuttgart Airport rose 22%, Cologne-Bonn 18%, Hamburg 17%, and Munich 8%. SISTRIX described the airport gains as the clearest cluster signal among winners, which may point to a broader ranking pattern rather than isolated site-level changes.

chatgpt.com gained 32% and bing.com gained 19% in German search visibility, though both started from low baselines (Visibility Index under 5). SISTRIX attributed this more to rising demand for brand search than to algorithmic preference.

Why This Matters

The German data covers a single market, and SISTRIX’s methodology captures domains with a Visibility Index above 1, so smaller sites aren’t represented in this dataset. But the patterns are worth watching.

The language tool cluster is notable. Seven sites offering similar functionality all lost visibility at the same time. SISTRIX raises the question of whether these losses reflect Google devaluing such sites or a shift in user behavior as AI tools cover similar functions.

If you’re tracking your own site’s performance after the March core update, Google recommends waiting at least one full week after the update is complete before drawing conclusions. Your baseline period should be before March 27, compared with performance after April 8.

Looking Ahead

SISTRIX plans to publish additional market analyses. Their English-language core update tracking page covers UK and US radar data but hasn’t yet published the detailed winners-and-losers breakdown for those markets.

Google hasn’t commented on what specific changes the March 2026 core update made. As with all core updates, pages can move up or down as Google’s systems reassess quality across the web.


Featured Image: nitpicker/Shutterstock

What 400 Sites Reveal About Organic Traffic Gains via @sejournal, @MattGSouthern

An analysis of more than 400 websites by Zyppy founder Cyrus Shepard identifies five characteristics associated with whether a site gained or lost estimated organic traffic over the past 12 months.

Shepard classified sites by revisiting many of the same ones covered in Lily Ray’s December core update analysis, categorizing them by business model, content type, and other features, then measuring correlation with traffic changes. Traffic estimates come from third-party tools, not verified Search Console data.

Five features showed the strongest association with traffic gains, measured by Spearman correlation:

  1. Offers a Product or Service: 70% of winning sites offered their own product or service, compared to 34% of losing sites. Service-based offerings like subscriptions and digital goods performed well alongside physical products.
  2. Allows Task Completion: 83% of winners let users complete the task they searched for, versus 50% of losers. Sites don’t need to sell anything to score here.
  3. Proprietary Assets: 92% of winners owned something difficult to replicate, such as unique datasets, user-generated content, or specialized software. Among losers, that figure was 57%.
  4. Tight Topical Focus: Winners tended to cover a single narrow topic deeply. Shepard noted that a general “topical focus” classification showed no difference between winners and losers, but tightening the definition to single-topic depth revealed the pattern.
  5. Strong Brand: 32% of winners had high branded search volume relative to their overall traffic, compared to 16% of losers. Shepard measured brand strength by examining each site’s top 20 keywords for navigational branded terms using Ahrefs data.

The effects were additive. Sites with zero features had a 13.5% win rate. Sites with all five reached 69.7%.
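
For anyone who wants to run this kind of analysis on their own site list, here is a minimal sketch of the approach. The feature flags and traffic figures are randomly generated stand-ins, not Shepard’s dataset; the point is only to show how a binary site feature can be correlated with traffic change and summarized as a win rate.

```python
# Illustrative only: invented data, not the Zyppy study's dataset.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n_sites = 400

# Binary feature flag (1 = site has the feature), e.g., "offers a product or service".
offers_product = rng.integers(0, 2, n_sites)
# Hypothetical 12-month estimated traffic change in percent.
traffic_change = 20 * offers_product + rng.normal(0, 40, n_sites)

rho, p_value = spearmanr(offers_product, traffic_change)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.4f})")

# "Win rate" framing: share of sites that gained traffic, split by feature.
won = traffic_change > 0
for flag in (0, 1):
    rate = won[offers_product == flag].mean()
    print(f"feature={flag}: win rate {rate:.1%}")
```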

What Didn’t Correlate

The study also tested features Shepard expected to matter but found no correlation with traffic changes. These included first-hand experience, personal perspectives, user-generated content, community platforms, and uniqueness of information.

Shepard cautioned against misreading those findings.

He suggested these features may already be baked into Google’s algorithm from earlier updates, meaning they could still matter even if they don’t show differential results between winners and losers in this dataset.

Why This Matters

Shepard’s findings suggest that sites offering a product, completing a task, or owning harder-to-replicate assets were more likely to show estimated organic traffic gains in this dataset. The study puts specific numbers behind that pattern, though it doesn’t establish causation.

The additive pattern is the most useful finding for those evaluating their position. A site with one winning feature had a win rate (15%) roughly the same as a site with no winning features (13%). The gap only widened at three or more features.

Roger Montti’s analysis for Search Engine Journal in December identified related patterns from the other direction, noting that Google’s topical classifications have become more precise and that core updates sometimes correct over-ranking rather than penalizing sites.

Looking Ahead

The correlation values in this study are moderate (0.206–0.391), and the methodology relies on third-party traffic estimates rather than verified analytics. Correlation doesn’t establish causation.

Sites that offer products may perform better for reasons beyond Google’s ranking preferences, including higher return-visitor rates and more natural backlink profiles.

The full dataset is public, which means others can test these classifications against their own data.


Featured Image: Master1305/Shutterstock

Your Owned Content Is Losing To A Stranger’s Reddit Comment via @sejournal, @DuaneForrester

The next time you ask an AI what product to buy, which agency to hire, or which software platform actually works, pay attention to where the answer comes from. Increasingly, it does not come from the vendor’s own website. It comes from a stranger’s Reddit comment written eighteen months ago, upvoted 847 times by people who tried the thing themselves.

This is not an accident. It’s architecture.

The Reddit Effect

The financial architecture behind Reddit’s presence in AI answers became public in early 2024. Google signed an initial licensing agreement with Reddit worth a reported $60 million per year, with total disclosed licensing across multiple AI companies reaching $203 million. That arrangement gave Google real-time access to Reddit’s posts and comments for training its AI models and powering AI Overviews, and the terms are now being renegotiated upward. Reddit executives have said current agreements undervalue the platform’s discussions, which now fuel everything from ChatGPT to Google’s generative answers.

The citation data confirms how central Reddit has become. Between August 2024 and June 2025, Reddit was the most cited domain in both Google AI Overviews and Perplexity, and the second most cited source in ChatGPT, trailing only Wikipedia. In Google’s AI Overviews specifically, Reddit citations grew 450% between March and June 2025. A separate study from early 2024 found Reddit appearing in results more than 97% of the time for queries related to products and reviews.

Reddit’s visibility in traditional search has fluctuated over this period, with organic rankings dropping noticeably in early January 2025. But its foothold in the AI answer layer has proven more durable than its SERP position, because these are different systems pulling from the same data source. Reddit’s hold on the AI layer reflects something structural about the content itself, not just a licensing arrangement.

Why Community Signals Work For AI

To understand why community platforms have become load-bearing infrastructure for AI answers, you need to hold two ideas at once.

First, community signals enter AI systems through two distinct pathways, not one. In the parametric pathway, community content gets baked into model weights during training and becomes part of what the model knows before anyone types a query. In the retrieval pathway, community content gets pulled in real time through retrieval-augmented generation (RAG) when the model needs current, specific, or contested information. Brands absent from community platforms before a model’s training cutoff face a significantly harder problem than brands simply absent from recent crawls. They are invisible at both layers simultaneously.

Second, the quality filtering that community platforms apply, through upvotes, accepted answers, reply chains, and sustained engagement, functions as a proxy signal that training pipelines have learned to weight. OpenAI’s training data hierarchy explicitly places Reddit content with three or more upvotes at Tier 2, directly below Wikipedia and licensed publisher partners. A heavily upvoted Reddit thread is treated as more credible input than most published content on the open web, because it carries the accumulated validation of hundreds or thousands of independent human judgments.

When multiple independent voices converge on the same recommendation across a thread, that convergence pattern looks different to a retrieval system than a single authoritative publication making the same claim. It is the AI equivalent of a strong link graph, distributed and uncoordinated agreement that no single actor manufactured. About 48% of AI citations now come from community platforms like Reddit and YouTube, with 85% of brand mentions originating from third-party pages rather than owned domains. The model is telling you something about where it trusts the signal.

The Manipulation Risk

Any system that rewards community consensus will attract people who want to manufacture it, and this one is no exception. The SEO parallel is exact: The same logic that made link spam profitable for decades is now making fake community engagement attractive to anyone who understands how AI systems weigh these signals.

The Trap Plan incident in late 2025 is the clearest recent case study. A marketing firm posted approximately 100 fake organic comments promoting a game on Reddit, then published a blog post documenting the campaign’s approach. The screenshots circulated everywhere. The post was ultimately deleted, but the reputational damage was not. A thread naming the company indexed in Google and sat in search results alongside legitimate coverage, visible to every potential customer searching the brand.

The detection infrastructure is more robust than in the early link spam era. Reddit’s automated systems flag coordinated inauthentic behavior through patterns in posting timing, account age, karma accumulation, and comment structure, and moderator communities actively watch for coordinated campaigns. The community itself maintains a strong norm against manufactured consensus, and the backlash when a campaign is exposed tends to be proportional to how authentic it claimed to be.

There is also a structural dimension that goes beyond individual campaigns. Research by Originality.ai found that 15% of Reddit posts in 2025 were likely AI-generated, up from 13% in 2024. That is not just brands gaming the system. It is a broader contamination of the community signal itself, creating a feedback loop where AI trains on Reddit content that increasingly contains AI-generated material designed to look like human consensus. The argument for building authentic community presence now, before detection systems become more aggressive about filtering synthetic signals, is a strategic one, not a moral one. Manufactured signals degrade faster than authentic ones, and the penalty when they collapse is worse than the benefit while they worked.

What Brands Should Actually Do

The practical implication is not “post more on Reddit.” It is more precise than that.

Monitor brand mentions across Reddit, Stack Overflow, Quora, and review platforms not as a reputation exercise but as entity intelligence. The narrative that forms in community discussions, the specific language, the repeated associations, the persistent objections, is the narrative AI systems are more likely to reproduce than anything on your own website. If community threads consistently describe your enterprise product as “great for small teams,” that characterization will surface in AI answers regardless of how your positioning page reads.

Ensure subject matter experts are participating in relevant communities under their real identities, contributing answers to questions they actually know well. The upvote accumulation those answers generate is a durable quality signal that persists across training cycles. One genuinely helpful response in a relevant technical subreddit or a well-supported Stack Overflow answer does more long-term structural work than ten pieces of owned content, because it carries community validation that owned content cannot provide.

Create content that community members actively want to reference. Original research, specific benchmarks, documented case studies with real numbers, these are the formats that generate organic community citations, which in turn generate the kind of third-party mentions that AI systems treat as consensus rather than marketing. A practical rule of thumb that holds in community engagement generally: 80% of participation should contribute genuine value with no promotional intent, and the 20% that mentions your product should only appear when it is the honest answer to the question being asked.

Think of community presence as a context moat with a long construction timeline. Unlike most marketing assets, authentic community reputation compounds slowly and is genuinely difficult for competitors to replicate quickly. A brand that has been a good-faith participant in its relevant communities for two years has something that cannot be acquired in a quarter.

The Review Layer

Most brands managing reviews understand that aggregate star ratings affect purchase decisions. Fewer understand that the specific review content, the language customers use, the features they praise or criticize, the comparisons they draw to competitors, is increasingly the raw material for how AI describes your brand at the moment of recommendation.

The numbers make the stakes concrete. Domains with profiles on review platforms have three times higher chances of being chosen by ChatGPT as a source compared to sites without such presence. In a G2 survey of B2B software buyers in August 2025, 87% reported that AI chatbots are changing how they research products, and half now start their buying journey in an AI chatbot rather than Google, a 71% increase in just four months. When a procurement director asks an AI to recommend CRM options for a 50-person team, the answer draws from review platform content, not from vendor websites.

Here is where the landscape shifts in a way that most review management programs have not caught up with yet. Not all review platforms are accessible to AI retrieval systems, and the differences are significant.

A June 2025 analysis of 456,570 AI citations found that review platforms divide into three distinct categories based on crawler access policies. Platforms like Clutch and SourceForge allow full crawler access, and their content surfaces regularly in AI-generated answers. Platforms like G2 and Capterra operate with selective access that permits some retrieval. Major platforms (Yelp is an example) block AI crawlers at the robots.txt level, which means reviews written there, however numerous or positive, are structurally unavailable to AI retrieval at the point of recommendation.

The citation data reflects this directly. For Perplexity, 75% of review site citations in the software category come from G2. Clutch dominates AI citations in the agency and digital services category. The market prominence of a review platform and its accessibility to AI crawlers are different variables, and review management strategy that conflates them is directing effort toward platforms where the AI visibility signal cannot be retrieved regardless of review volume.

This is not an argument that major platform reviews are worthless. They still matter significantly for direct consumer decision-making, traditional search, and brand reputation overall. It is an argument that the AI visibility value of a review depends specifically on whether the platform permits retrieval, and that understanding has material consequences for where teams prioritize cultivating review volume when AI answer visibility is the goal.

One additional layer of complexity: robots.txt compliance among AI crawlers is not guaranteed. Analysis by Tollbit found that 13.26% of AI bot requests ignored robots.txt directives in Q2 2025, up from 3.3% in Q4 2024. The boundary between “blocked” and “accessible” is not as clean in practice as it is in policy. The implication is to treat your entire review footprint as potentially accessible to AI retrieval while being deliberate about which platforms receive active cultivation for AI visibility specifically.
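
If you want to check a platform’s stated policy yourself, Python’s standard-library robots.txt parser can report whether a given crawler token is permitted to fetch a URL. The user-agent strings below are commonly used AI crawler tokens and the domain is a placeholder; and, as the Tollbit data shows, a robots.txt directive is a policy statement, not a guarantee of compliance.

```python
from urllib.robotparser import RobotFileParser

# Common AI crawler user-agent tokens (verify against each platform's
# live robots.txt; these lists change over time).
AI_AGENTS = ["GPTBot", "Google-Extended", "PerplexityBot", "ClaudeBot"]

def check_access(domain: str, path: str = "/") -> None:
    """Print whether each AI crawler token is allowed to fetch the given path."""
    rp = RobotFileParser()
    rp.set_url(f"https://{domain}/robots.txt")
    rp.read()  # fetches and parses the live robots.txt
    for agent in AI_AGENTS:
        allowed = rp.can_fetch(agent, f"https://{domain}{path}")
        print(f"{domain}: {agent} is {'allowed' if allowed else 'blocked'} for {path}")

# Example usage; the domain is purely a placeholder.
check_access("www.example.com", "/reviews/")
```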

The Broader Picture

Community presence has always been a trust signal. What has changed is that the systems making purchase recommendations at scale are now reading those signals directly, at the platform level, and weighting them above the content brands produce about themselves.

SEO professionals who have spent years optimizing owned content for search visibility now face a layer of visibility that operates on fundamentally different inputs. The link-building parallel is not rhetorical. Just as the profession eventually accepted that links from authoritative external sources outweigh on-page optimization in many contexts, the community signal layer is demonstrating the same dynamic for AI-generated answers. Authority comes from outside the brand’s control, which means the work of building it looks less like content production and more like sustained, authentic participation in the places where buyers actually talk.

The brands that start building authentic community presence now are constructing a signal that compounds. Genuine community reputation is difficult to manufacture at scale, genuinely difficult for competitors to replicate quickly, and structurally favored by the same AI systems that are increasingly the first stop in the purchase journey. Later entrants will find it expensive to match.

If you want to learn more about topics like these, take a look at my newest book on Amazon: The Machine Layer: How to Stay Visible and Trusted in the Age of AI Search. It’s written to help you not only understand the topics I write about here, but also learn more about LLMs and consumer behavior and build ways to grow conversations within your organization. It can also serve as a workbook, with multiple frameworks included.



Featured Image: ginger_polina_bublik/Shutterstock; Paulo Bobita/Search Engine Journal

Why Your New SEO Vendor Can’t Build on a Broken Foundation via @sejournal, @TaylorDanRW

There is a common expectation in SEO that needs to be challenged, and it usually appears as soon as a new agency or consultant takes over performance.

Many businesses assume that fresh expertise should lead to quick wins, as if changing vendors resets everything and removes the issues that held performance back before. This view ignores how search works and overlooks the lasting impact of previous decisions.

Quick wins can still exist, but they should be seen as small steps rather than complete solutions. Changes such as improving page titles, updating content, or fixing isolated issues can lead to short-term improvements, but they do not address deeper problems.

Relying too heavily on these quick fixes can create a false sense of progress while leaving the core issues unresolved.

Inheriting History Is Never Starting Fresh

A new SEO vendor does not start with a clean slate, and they are never working in isolation from what came before. They inherit the full history of the website, including past strategies, technical decisions, and content choices, whether those were effective or not. That inherited position becomes the real starting point, and in many cases, it is far more restrictive than stakeholders expect.

Poor SEO does not simply fail to deliver results, as it often creates long-term problems that take time to fix. If a site has built up low-quality backlinks, published thin or duplicated content, or ignored technical issues, it develops a reputation that search engines take into account. This means that even strong improvements can take time to show results, as they must first counterbalance what already exists.

The impact of past decisions tends to build over time, shaping how a domain is viewed and ranked. Practices such as buying links at scale, creating large volumes of low-value pages, or focusing only on short-term gains often leave a lasting footprint. Search engines respond to this by becoming more cautious, which affects not just old content but also any new work that is introduced.

Technical debt is another major factor that often goes unnoticed until a new vendor begins to investigate properly. Many websites grow over time without clear structure or oversight, leading to issues such as broken internal links, inefficient crawl paths, duplicate content, and problems with how pages are rendered. These issues directly affect how search engines access and understand a site, which makes them a priority to fix before growth can happen.

Stabilization Comes Before Growth

The early stages of a new SEO engagement are usually focused on stabilizing the site rather than driving immediate growth. This involves detailed audits, identifying crawl and indexation issues, and making sure important pages are accessible and prioritized correctly. Although this work is essential, it does not always lead to instant improvements in rankings, which can be frustrating if expectations are not aligned.

Rebuilding trust is another key part of the process, especially if a site has used poor practices in the past. Search engines are designed to reward consistency and reliability over time, and trust cannot be restored through quick changes. It requires steady improvements in content quality, link profile, and overall site structure, supported by signals that show genuine value to users.

Brand Strength As A Limiting Factor

Brand strength also plays a larger role than many businesses realize, and its absence can limit SEO performance even when technical issues are addressed. Websites with little presence outside their own domain, few mentions across the web, and low branded search demand often struggle to compete. Search engines look for signals that a brand is recognized and trusted, which means visibility beyond the site itself matters.

A lack of investment in brand building creates additional work for any new SEO vendor, as they may need to introduce digital PR, content promotion, and strategies that increase visibility across relevant platforms. These efforts take time to build momentum and rarely deliver immediate results, which reinforces the need for a longer-term view.

Frustration often comes from the fact that much of the early work is not visible in the form of traffic or ranking gains. Audits, clean-up work, and structural improvements are not always obvious to stakeholders, but they are necessary to remove the barriers that limit performance. Without this foundation, any gains from new content or links are likely to be limited.

Accountability And Communication

Accountability still matters, and a new SEO vendor should be clear about what they are doing and why. They should explain the starting position, set realistic timelines, and outline the steps needed to improve performance. Clear communication helps build trust and ensures that stakeholders understand what progress looks like at each stage.

Realistic timelines are often longer than businesses expect, especially when there are significant issues to address. The first few months are usually focused on fixing problems and improving site health, followed by a period where early signals begin to improve. More noticeable growth in rankings and traffic often comes later, once the foundation is stronger and search engines respond to consistent improvements.

A shift in mindset is needed to get the most from SEO, moving away from the idea of quick fixes towards a more long-term approach. Past decisions, whether they involve shortcuts or a lack of investment, shape current performance and cannot be undone instantly. Accepting this reality allows businesses to focus on building sustainable growth rather than chasing immediate results.

The Long-Term View

Bringing in a new SEO vendor should be seen as the start of a process rather than the end of a problem. The best outcomes come from understanding the starting point, investing in the necessary work, and staying consistent over time. This approach creates the conditions for steady improvement rather than short bursts of activity that do not last.

The key point is that SEO reflects both history and current effort, and ignoring the past leads to unrealistic expectations. A new vendor can bring structure, expertise, and direction, but they cannot remove the impact of previous actions overnight. What they can do is build a stronger position over time, provided they are given the space and support to do it properly.



Featured Image: Anton Vierietin/Shutterstock

Google’s CEO Predicts Search Will Become An AI Agent Manager via @sejournal, @martinibuster

In a recent interview, Google’s CEO, Sundar Pichai, explained how search is changing in response to advances in AI. The discussion centered on a simple question: If AI can act, plan, and execute, then what role will search play in the future?

Information Queries May Become Agent AI Search

The interviewer asked whether search remains a product or becomes something else as AI systems begin handling tasks instead of returning results.

They asked:

“What do you view as a future of search? Is it a distribution mechanism? Is it a future product? Is it one of N ways people are going to interact with the world?”

Had Pichai been interviewed by members of the publishing and SEO community, his answer might have received some pushback. He answered that search does not get replaced, but continues to expand as new capabilities are introduced and user expectations change.

He said:

“I feel like in search, with every shift, you’re able to do more with it.

And we have to absorb those new capabilities and keep evolving the product frontier.

If it’s mobile, the product evolved pretty quickly, you’re getting out of a New York subway, you’re looking for web pages, you want to go somewhere, how do you find it? So you’re constantly shifting, people’s expectations shift, and you’re moving along.

If I fast forward, a lot of what are just information seeking queries will be agentic search. You will be completing tasks, you have many threads running.”

In the first example of a person coming out of a New York subway, yes, someone may search for a web page, but will Google show the user a web page or treat it like data by summarizing it?

The second example completely removes the user from search and inserts agents in the middle. That scenario implicitly treats web pages as data.

Will Search Exist In Ten Years?

Pichai was asked what the future of search will be like in ten years. His answer suggests that the future of search will involve many information-seeking queries being handled as tasks carried out by agentic AI systems. Furthermore, search will be more like an orchestration layer that sits between the user and AI agents.

The exact question he was asked is:

“Will search exist in ten years?”

Google’s CEO responded:

“It keeps evolving. Search would be an agent manager, right, in which you’re doing a lot of things.

I think in some ways, I use anti-gravity today, and you have a bunch of agents doing stuff.

And I can see search doing versions of those things, and you’re getting a bunch of stuff done.”

At this point, the interviewer tried to get Pichai to return to the question of the actual search paradigm, if that will exist in ten years. Pichai declined to expressly state whether the search paradigm will still exist.

He continued his answer:

“Today in AI mode in search, people do deep research queries. So that doesn’t quite fit the definition of what you’re saying. But kind of people adapted to that.

So I think people will do long-running tasks, can be asynchronous.”

What he described is a version of search that manages actions across multiple steps, where multiple processes can run at once instead of returning a single set of ranked results. And yet, it’s weirdly abstract because he’s talking about queries but fails to mention websites or web pages in that specific context.

What’s going on? His next answer brings it into sharper focus.

Who Is The Flea And Who Is The Dog?

The interviewer picked up on Pichai’s mention of adaptation, made an analogy to evolution, and then asked:

“It’s almost like, does that former version or paradigm eventually go away? And what was search becomes an agent and your future interface is an agent, and the search box in ten years or n years is no longer the–“

Pichai interrupted the interviewer to say that it’s no longer possible to look ahead five or ten years because the models are changing, what people do is rapidly changing, and given that pace, the only thing to do is to embrace it.

He explained:

“The form factor of devices are going to change. I/O is going to radically change. And so …I think you can paralyze yourself thinking ten years ahead. But we are fortunate to be in a moment where you can think a year ahead, and the curve is so steep. It’s exciting to just do that year ahead, right?

Whereas in the past, you may need to sit and envision five years out, unlike the models are going to be dramatically different in a year’s time.

…I think it’ll evolve, but it’s an expansionary moment. I think what a lot of people underestimate in these moments is, it feels so far from a zero-sum game to me, right? The value of what people are going to be able to do is also on some crazy curve, right?

I think the more you view it as a zero-sum game, it looks difficult. It can become a zero-sum game if you’re innovating or the product is not evolving.

But as long as you’re at the cutting edge of doing those things, and we’re doing both search and Gemini, and so they will overlap in certain ways. They will profoundly diverge in certain ways, right? And so I think it’s good to have both and embrace it.”

What Google’s CEO is doing is rejecting the possibility of becoming obsolete by deliberately focusing on competitive agility and embracing uncertainty as a strategic advantage.

That might work for Google, but what about websites?

I think businesses also need to embrace competitive agility and get out of the mindset of being fleas on the dog. And yet online businesses, publishers, and the SEO community are not the fleas here, because Google itself is the one feeding off the web’s content.

What About Websites?

The interview lasted for over an hour, and at no point did Pichai mention websites. He mentioned web pages twice, once as something to understand with technology and once in the example of a person emerging from a subway who is looking for a web page. In both of those instances, the context was not Google Search looking for or fetching a web page in response to a query.

Given that Google Search is used by billions of people every day, it’s a bit odd that websites aren’t mentioned at all by the CEO of the world’s most successful search engine.

Google Confirms March 2026 Core Update Is Complete via @sejournal, @MattGSouthern

Google’s March core update has finished rolling out, according to the Google Search Status Dashboard.

The dashboard updated at 6:12 AM PDT on April 8 with the completion note: “The rollout was complete as of April 8, 2026.” The update began on March 27 at 2:00 AM PT, making the total rollout 12 days.

That’s within Google’s original two-week estimate and faster than the December 2025 core update, which took 18 days.

What Google Said About This Update

Google called the March 2026 core update “a regular update designed to better surface relevant, satisfying content for searchers from all types of sites.”

The company didn’t publish a companion blog post or announce specific goals for this update. It also didn’t share new guidance with the completion notice.

Core updates involve broad changes to Google’s ranking systems. They aren’t targeted at specific types of content or policy violations. Pages can move up or down based on how the update reassesses quality across the web.

Three Updates In One Month

March was unusually active for Google’s ranking systems. The core update was the third confirmed update in roughly five weeks.

The February Discover core update finished rolling out on February 27 after 22 days. That was the first time Google publicly labeled a core update as Discover-only.

The March 2026 spam update rolled out and completed in under 20 hours on March 24-25. That was the shortest confirmed spam update in the dashboard’s history.

The core update followed two days later on March 27.

Roger Montti, writing for Search Engine Journal, noted that the spam-then-core sequencing may not have been a coincidence. He wrote that spam fighting is logically part of the broader quality reassessment in a core update, comparing it to “clearing the table” before recalibrating the core ranking signals.

How The Rollout Compared To Recent Core Updates

The March rollout was the second-shortest of the past five broad core updates.

Only the December 2024 update finished faster.

Why This Matters

The completed rollout means you can now compare pre-update and post-update performance in Search Console across a full window. Google recommends waiting at least one full week after completion before drawing conclusions from the data.

Your baseline period should be the weeks before March 27, compared against performance after April 8. Keep in mind that the March spam update completed on March 25, so any ranking changes between March 24-27 could be from either update.
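
If you pull that comparison programmatically, a minimal sketch against the Search Console API (via google-api-python-client) might look like the following. The property URL, credentials file, and exact date windows are placeholders to swap for your own; the windows here simply follow the before-March-27 and after-April-8 guidance above.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

SITE = "https://www.example.com/"  # placeholder Search Console property

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",  # placeholder credentials file
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("searchconsole", "v1", credentials=creds)

def window_totals(start_date: str, end_date: str) -> dict:
    """Sum clicks and impressions for the property over a date range."""
    body = {"startDate": start_date, "endDate": end_date, "dimensions": ["date"]}
    resp = service.searchanalytics().query(siteUrl=SITE, body=body).execute()
    rows = resp.get("rows", [])
    return {
        "clicks": sum(r["clicks"] for r in rows),
        "impressions": sum(r["impressions"] for r in rows),
    }

before = window_totals("2026-03-06", "2026-03-26")  # baseline: weeks before the rollout
after = window_totals("2026-04-09", "2026-04-29")   # post-update: after completion
print("before:", before)
print("after:", after)
```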

A drop in rankings after a core update doesn’t mean your site violated a policy. Core updates reassess content quality across the web, and some pages move up while others move down.

Looking Ahead

Google will likely continue making smaller, unannounced core updates between the larger confirmed rollouts. The company updated its core updates documentation in December to say that smaller core updates happen on an ongoing basis.


Featured Image: Rohit-Tripathi/Shutterstock

GEO Was Invented On Sand Hill Road

I’ve been putting this one off.

Not because the argument is hard to make – it isn’t – but because the behavior it’s about has been a fixture of the SEO industry for as long as I’ve worked in it. The shiny new object arrives, the FOMO kicks in, the conference decks update, and an entire professional class reshuffles its vocabulary to match whatever acronym landed that quarter. I wrote recently about how AI content scaling is just content spinning with better grammar – the tools change, the qualitative wall doesn’t. The acronym cycle runs on the same engine.

But this time, the shiny object didn’t emerge from practitioners observing a genuine shift and trying to name it. It was manufactured upstream – by venture capital, amplified by engagement farming, and adopted by professionals whose primary motivation isn’t “this is real” but “I can’t afford to look like I’m not keeping up.”

So here we are.

The Investment Thesis

In May 2025, Andreessen Horowitz published a blog post titled “How Generative Engine Optimization (GEO) Rewrites the Rules of Search.” It appeared in their enterprise newsletter, written by two a16z partners, Zach Cohen and Seema Amble. Public, on their website, available to anyone with a browser.

The post declared that the “$80 billion+ SEO market just cracked” and that “a new paradigm is emerging.” It name-dropped three GEO tools – Profound, Goodie, and Daydream – as platforms enabling brands to track how they appear in AI-generated responses. It described a future where GEO companies would “fine-tune their own models” and “own the loop” between insight and iteration. a16z promoted it across their social channels, including a post from the firm’s official account: “SEO is slowly losing its dominance. Welcome to GEO.”

Screenshot from X, April 2026

Also: a16z is an investor in Profound.

The blog post creates demand for the category. The category creates demand for the tools. The tools are in their portfolio. A sales funnel with a byline.

Marc Andreessen’s “Software is eating the world” wasn’t just an essay – it was a prospectus dressed in editorial clothing. The GEO post follows the same logic: identify the wave, position your bets as the inevitable response, publish the narrative that makes both feel like settled truth. Even sympathetic coverage noticed. The Alts.co write-up noted plainly that “a16z is drawing attention to GEO because it’s a chance to peddle/pump their own investments.”

What Happens When Nobody Checks The Source

Ten months later, in March 2026, someone on X described the blog post as “a 34-page internal memo” that a16z had “quietly published” and which had received only “200 views.” It cited a specific statistic: portfolio companies ranking No.1 on Google saw “a 34% drop in organic traffic in 12 months.” I’m not interested in the individual. This post is one of hundreds following the same pattern, and the pattern is what matters.

None of this is real.

The blog post isn’t 34 pages. It isn’t internal. It wasn’t quietly published. The specific opening line and the 34% stat don’t appear in the actual piece. You can verify this yourself right now.

This isn’t a16z’s doing. An engagement farmer found an old blog post and repackaged it with fictional scaffolding because that format performs better on social media. A “leaked internal memo” is sexier than a newsletter. “200 views” creates scarcity. Invented statistics create authority.

And it worked. People shared it, built threads around it, didn’t check whether the memo existed. Why would they? The narrative was too good.

Two independent forces – a VC firm doing standard narrative-building, and an engagement farmer doing standard engagement farming – converge on the same result. The VC seeds the category. The farmer, months later, independently amplifies a distorted version. Professionals absorb the distortion because nobody goes back to check the primary source.

Not coordination. Convergence. And a category becomes “real” without anybody establishing that it is.

The Willing Participants

VCs and engagement farmers can’t take all the credit. SEO professionals are the most culpable link in the chain.

One widely-shared post on X captures the mentality – and I’m citing the behavior, not the person, because this position is everywhere in the industry right now. The argument: Clients don’t want to hear that GEO is “just SEO repackaged.” Neither does your executive team. Tell them “it’s just SEO,” and you’ll be “perceived as a legacy outdated thinker.” You might even be “replaced by a GEO agency.” The conclusion: “whether you like it or not… it’s in your best interest to get aboard the AI train.”

Image Credit: Pedro Dias

The argument is not that GEO works. Not that it measures anything meaningful. Not that it produces better outcomes for clients. The argument is that if you don’t adopt the label, you will lose your job.

Ambulance chasing dressed as career advice.

And here’s what makes SEO professionals more culpable than the VCs or the engagement farmers: they don’t just absorb the fear. They market it. They repackage the anxiety about their own relevance and sell it downstream to clients and executives who are even less equipped to evaluate the claims. The VC creates the narrative. The engagement farmer amplifies it. The SEO professional walks into a client meeting and says, “You need a GEO strategy, or you’ll be invisible to AI,” knowing full well they can’t define what that means in terms the client could verify.

This is how SEO professionals undermine their own credibility. Not by being wrong about the technical shift, but by selling certainty they don’t have about a category they didn’t bother to verify, using someone else’s terminology to paper over their own lack of understanding.

Nobody held a gun to anyone’s head and said, “Put GEO on your LinkedIn headline.” SEO professionals are choosing to adopt terminology they haven’t evaluated, from sources they haven’t verified, for tools they can’t validate; and then surfing that same fear factor into client budgets. If the only way you can sell your expertise is by rebranding it every eighteen months, the problem isn’t the label. It’s the confidence.

The people most capable of evaluating whether GEO is a real discipline are the same people adopting it fastest. Every hour they spend chasing the vocabulary is an hour not spent building the understanding that would make them impossible to replace. I’ve written about how AI is hollowing out the junior pipeline: the apprenticeship layer where practitioners actually learn judgment. The acronym treadmill accelerates that. It replaces depth with breadth, understanding with terminology, and professional development with professional performance.

What’s Actually Underneath

Strip away the a16z framing, the fabricated memos, and the professional anxiety, and ask the boring question: what would you actually do differently if you took GEO seriously?

I’ve argued before that grounding is just retrieval: When an AI system cites a source, it’s running a search task, not exercising editorial judgment. Indexing, vector search, relevance scoring. The same principles we’ve been working with for two decades, with a generative interface on top. GEO isn’t a second discipline standing alongside SEO. It’s old retrieval visibility in a trench coat pretending to be two disciplines. And your data interpretation skills – perched comfortably atop Mount Dunning-Kruger – don’t trump the clear, demonstrable logic of how a retrieval engine works. If you can’t explain why a result appeared, you have no business selling a service that claims to optimize for it.
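To make that concrete, here is a deliberately toy sketch of the task shape underneath grounding: index some documents, score them for relevance against a query, and hand the top results to the generative layer as citations. The corpus, the bag-of-words scoring, and the function names below are all illustrative; production systems use learned embeddings and approximate nearest-neighbor indexes, but the pipeline is still retrieval.

```python
from collections import Counter
from math import sqrt

# Toy "index" standing in for crawled and indexed pages.
DOCS = {
    "doc-1": "waterproof running jacket lightweight shell for trail runners",
    "doc-2": "core update recovery guide for publishers and site owners",
    "doc-3": "how retrieval augmented generation grounds llm answers in sources",
}

def vectorize(text: str) -> Counter:
    """Bag-of-words stand-in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Relevance score between two term vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[tuple[str, float]]:
    """Score every indexed doc against the query and return the top-k sources."""
    q = vectorize(query)
    scored = [(doc_id, cosine(q, vectorize(text))) for doc_id, text in DOCS.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

# The top-scoring documents are what an answer engine hands to its
# generative layer as grounding sources, i.e., the citations you see.
print(retrieve("how does an llm ground its answers in retrieved sources"))
```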

The a16z post itself confirms this, perhaps accidentally. The advice it gives brands pursuing GEO is a greatest hits of SEO best practices: structured content, authoritative backlinks (rebranded as “earned media”), schema markup, topical authority. It even recommends “short, dense, citation-worthy paragraphs” and “specific claims with verifiable numbers” – which is, and I cannot stress this enough, just competent writing.

David McSweeney has been doing SEO since before some of these GEO startups’ founders graduated. He’s spent years writing about the same tactics now being repackaged under the GEO label (content freshness, digital PR, community participation, link building) and has the publication dates to prove it. His summary of the GEO pitch: take advantage of the fact that businesses don’t understand AI systems rely on traditional search, and extract more money from them.

Screenshot from X, April 2026

He called it the grift. I think that’s generous. A grift implies individual con artists. This is structural: a category manufactured at the top, distorted in the middle, and adopted at the bottom. Not because it describes anything new, but because the professional cost of ignoring it feels higher than the professional cost of pretending it’s real.

You’re Not In The Driver’s Seat

Your job as a competent professional is to understand what these abbreviations actually mean, where they come from, and what – if anything – they change about your work.

If you can explain to your clients and your leadership what AI systems actually do, how they retrieve information, what’s genuinely measurable, and what isn’t – you will never be in a reactive position. You will never be the person scrambling to add “GEO” to a slide deck because someone on X told you it was the future.

If instead you let yourself be dragged around by whatever narrative venture capitalists need you to believe this quarter, you will always be reacting. One blog post away from a strategy pivot. Buying tools sold by people who benefit from your insecurity. That’s a choice. Not a fate.

The underlying mechanics of how content gets discovered – search engine crawler, LLM grounding system, RAG pipeline – haven’t undergone a paradigm shift. The interface has shifted. Users get answers synthesized from sources rather than a list of links.

But “the interface changed” doesn’t sell software. “Everything you know is obsolete and you need our dashboard” does.

Follow The Money

a16z benefits because the GEO narrative creates demand for their portfolio companies. The tool startups benefit because the narrative creates their market. The engagement farmers benefit because fabricated memos drive impressions. The agencies that rebrand as “GEO specialists” benefit because they can charge more for the same services with a shinier label.

Who doesn’t benefit? The practitioners doing solid, foundational work. Those people don’t need a new acronym. They need the industry to stop mistaking marketing for methodology.

And the clients. The clients are where the fear chain terminates, and the invoices begin. A new line item for work that should have been happening already under the SEO retainer, or that can’t be reliably measured in the first place. The VC manufactures the category. The SEO professional absorbs it and marks it up. The client pays for it. A game of telephone where the bill lands on the last person in the room who doesn’t speak the language.

I’ve written separately about the measurement problem with these tools – the non-determinism, the gap between parametric and retrieved knowledge, the dashboards built on methodological sand. The tools a16z promotes in that blog post have the same structural limitations. The dashboards look great. The numbers move. Whether the numbers mean anything is a question nobody selling the dashboard has an incentive to answer.

Meanwhile, the actual crisis gets no airtime. Organic search traffic across major U.S. publishers dropped 42% after AI Overviews expanded. Rankings didn’t change. Traffic did. That’s the real problem. Not which three-letter acronym to put on your slide deck, but the fact that the economic model underpinning content production on the open web is breaking. GEO doesn’t address that. It doesn’t even pretend to. It just gives everyone something to be busy with while the floor drops out.

The cycle time is getting shorter. We went from “AEO” to “GEO” in about eighteen months. Give it another year, and there’ll be another acronym, another VC blog post, another fabricated memo, and another round of professionals trying to decide whether the latest three letters are worth putting on their LinkedIn headline.

Or you could just do good work and understand what you’re doing well enough to explain it without borrowed terminology. But I suppose that doesn’t have the same ring to it on a pitch deck.

This post was originally published on The Inference.


Featured Image: Summit Art Creations/Shutterstock

Why Product Feeds Shouldn’t Be The Most Ignored SEO System In Ecommerce

Most ecommerce brands obsess over category pages, backlinks, or product page optimizations, while their product feeds remain auto-generated and under-optimized. Product feeds act as the backbone of ecommerce site catalogs and have long been the sole remit of PPC teams, but in the new era of AI Search, this is changing.

Back in 2023, Search Console added enhancements to the Shopping tab listings report to help brands better understand how their products were being seen in Merchant Center.

We’ve also seen the emergence of OpenAI’s Product Feed specification as a specific requirement for allowing ChatGPT to accurately index and display products, although more recently we’ve seen announcements that OpenAI has ended Instant Checkout and is considering new directions.

These changes are pulling product feed visibility directly into the SEO performance ecosystem and repositioning feeds as general “search infrastructure,” not just “ads infrastructure.”

In this article, we’ll walk you through the value that product feeds can bring to businesses and how SEO aligns with this.

SEO’s Role In Product Feeds

In ecommerce, product feeds are often seen as “set it and forget it” assets, but treating these feeds as simply raw data is an immediate missed opportunity to boost visibility across organic search, shopping, and agentic commerce in the future.

While a standard product feed provides basic data to search bots, an optimized feed enhances attribute accuracy to ensure your products appear for high-intent search queries. By refining your product data, you bridge the gap between technical specs and consumer needs, increasing both visibility and click-through rates.

SEO can help to optimize feeds across four main pillars:

1. Semantic Query Mapping

SEOs don’t just use basic product names. They use consumer language built out of query mapping and intent-matching.

By front-loading titles with high-intent keywords and writing “long-tail” descriptions that include attributes like color, material, or use case, you make products more likely to appear where the user’s intent is highest.

Example:

Instead of: “Men’s Waterproof Jacket Black”

SEO-driven product feed title: “Brand X Men’s Waterproof Running Jacket – Black Lightweight Performance Shell”
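At catalog scale, titles like that are usually assembled from feed attributes rather than written by hand. Below is a minimal sketch of how that templating might look; the field names (brand, use_case, descriptors, and so on) are illustrative assumptions, not a feed specification.

```python
# Minimal sketch: assemble a keyword-layered title from feed attributes.
# Field names are illustrative, not a standard feed spec.
def build_title(item: dict) -> str:
    parts = [
        item.get("brand"),         # "Brand X"
        item.get("gender"),        # "Men's"
        item.get("attribute"),     # high-intent modifier, e.g. "Waterproof"
        item.get("use_case"),      # "Running"
        item.get("product_type"),  # "Jacket"
    ]
    title = " ".join(p for p in parts if p)
    descriptors = item.get("descriptors")  # "Black Lightweight Performance Shell"
    return f"{title} - {descriptors}" if descriptors else title

print(build_title({
    "brand": "Brand X",
    "gender": "Men's",
    "attribute": "Waterproof",
    "use_case": "Running",
    "product_type": "Jacket",
    "descriptors": "Black Lightweight Performance Shell",
}))
```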

2. Taxonomy Logic

Taxonomy is important to stop your products from being lost in the void. A misplaced product can quickly become a lost sale.

Refining categorization and product grouping stops specific products like “tactical hiking boots” from getting buried under generic categories like “general footwear.”

Building a logical hierarchy allows algorithms to crawl and understand the catalog with higher confidence about exactly who a product is targeting. All products within your feed will be automatically assigned a product category.

Getting your taxonomy right, along with accurate titles, descriptions, and GTIN information, helps ensure that products are correctly categorized under the [google_product_category] attribute.
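In practice, this often means mapping internal product types to explicit [google_product_category] values rather than leaving the assignment entirely to automation. The sketch below shows that mapping step in its simplest form; the taxonomy strings and field names are illustrative, so verify them against Google’s current published taxonomy before using them in a live feed.

```python
# Minimal sketch: map internal product types to an explicit
# [google_product_category] value instead of relying on auto-assignment.
# Taxonomy paths are illustrative; check Google's current taxonomy file
# for the exact strings before using them in a live feed.
CATEGORY_MAP = {
    "hiking boots": "Apparel & Accessories > Shoes",
    "running jacket": "Apparel & Accessories > Clothing > Outerwear > Coats & Jackets",
}

def assign_category(item: dict) -> dict:
    product_type = item.get("product_type", "").lower()
    for key, taxonomy_path in CATEGORY_MAP.items():
        if key in product_type:
            item["google_product_category"] = taxonomy_path
            return item
    # Fall back to a manual-review queue rather than a vague catch-all bucket.
    item["google_product_category"] = None
    item["needs_review"] = True
    return item

print(assign_category({"id": "sku-123", "product_type": "Tactical Hiking Boots"}))
```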

3. Structured Data

In Google Shopping, structured data acts as the anchor of “truth” that connects your website to your Merchant Center feed.

Structured data allows Google and other bots to directly pull product data from your HTML, creating a form of automated data validation. If, for example, your feed says a product is $50, but your schema says $60, Google will likely disapprove the listing.

In many cases, high-performing feeds rely on structured data to update price and availability in real time. If you run a flash sale, Google’s crawler can detect the change via schema and update your Shopping Ads, preventing “out of stock” clicks.

When it comes to agentic commerce, agents will query schema properties to see if your product fits the user’s specific constraints.

Structured data provides hard facts and allows agents to see if a product is “agent-ready” for checkout.
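One practical way to keep that anchor of truth intact is to generate the on-page Product markup from the same record that populates the feed, so price and availability cannot drift apart. The sketch below illustrates the idea using schema.org’s Product and Offer types; the feed field names are assumptions for the example.

```python
import json

# Minimal sketch: generate on-page Product JSON-LD from the same record
# that feeds Merchant Center, so price and availability stay in sync.
# Feed field names are illustrative; the schema.org properties are standard.
def product_jsonld(item: dict) -> str:
    data = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": item["title"],
        "brand": {"@type": "Brand", "name": item["brand"]},
        "gtin13": item["gtin"],
        "offers": {
            "@type": "Offer",
            "price": item["price"],
            "priceCurrency": item["currency"],
            "availability": "https://schema.org/InStock"
            if item["in_stock"] else "https://schema.org/OutOfStock",
        },
    }
    return json.dumps(data, indent=2)

print(product_jsonld({
    "title": "Brand X Men's Waterproof Running Jacket",
    "brand": "Brand X",
    "gtin": "0012345678905",
    "price": "50.00",
    "currency": "USD",
    "in_stock": True,
}))
```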

4. Analytical Review

With a highly analytical mindset that is always looking for opportunity, SEOs can help identify “ghost products” (items that sit in the feed but rarely get surfaced) and diagnose whether the issues come down to attributes, images, or descriptions, providing ongoing optimization recommendations.

As we move into an era of AI-driven discovery, the quality of a brand’s feed data can quickly become a reflection of a brand’s reputation.

By providing more context within the feed, you are more likely to see your brand get recommended in conversational search and show up in organic shopping.

What Ecommerce Brands Get Wrong With Product Feed Optimization

The majority of issues that we see in product feeds come from inconsistencies and a lack of depth within the feed.

From conversations with brand managers, this seems to stem from a lack of ownership of the channel and a lack of understanding of the impact these inconsistencies can have.

In some cases, products can be disapproved for inaccurate prices caused by inconsistencies between the feed and the landing page.

Other common issues include:

  • Auto-generated Shopify titles.
  • No keyword layering.
  • Inconsistent variants.
  • Missing GTIN/MPN.
  • Thin descriptions.
  • Feed data not aligned with on-page SEO.

This is where an SEO who is used to ongoing technical auditing and hygiene maintenance, and who understands the value of structured data and content for context, can be vital to product feed performance.
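A lightweight audit script can catch most of the issues listed above before Merchant Center flags them. The sketch below is a minimal illustration; the thresholds, heuristics, and field names are assumptions chosen for the example, not Merchant Center rules.

```python
# Minimal feed audit sketch. Thresholds and heuristics are illustrative
# starting points, not Merchant Center rules.
def audit_item(item: dict) -> list[str]:
    issues = []
    if not item.get("gtin") and not item.get("mpn"):
        issues.append("missing GTIN/MPN")
    if len(item.get("description", "")) < 160:
        issues.append("thin description")
    title = item.get("title", "")
    if len(title) < 25 or title.islower():  # crude proxy for a raw platform export
        issues.append("likely auto-generated or under-optimized title")
    if item.get("item_group_id") is None and item.get("color"):
        issues.append("variant attributes present but no item_group_id")
    return issues

feed = [
    {"id": "sku-1", "title": "black jacket", "description": "A jacket.", "color": "black"},
]
for item in feed:
    print(item["id"], audit_item(item))
```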

How Product Feeds Directly Impact Organic & AI Visibility

Quite simply, the more context you can provide in your product feed, the more chances you have of being shown or cited in traditional search and in AI engines.

If a product feed is missing critical attributes like size, color, material, compatibility, or use case, the product won’t just rank lower; it will become ineligible for more specific, high-intent queries.

As search queries grow longer and intent becomes more nuanced (e.g., searchers looking for “men’s waterproof trail running jacket black medium” rather than just “men’s trail running jacket”), feeds need to evolve past being simple descriptors.

They need to properly layer structured attributes that mirror how real customers search and filter online. The more complete the product feed, the more opportunities there will be for your products to appear everywhere from Shopping results to AI-generated citations.

What Product Feed Optimization Actually Looks Like

There are a few stages of product feed optimization that SEOs need to be both aware of and able to deliver.

Keyword & Intent Architecture

SEOs should approach product feeds the same way they approach category and content strategy.

Keyword research should be conducted at a product level, identifying high-intent modifiers such as size, material, compatibility, and demographic, and layering those attributes into both product titles and feed data.

Rather than relying on generic exports from Shopify or another ecommerce platform, product titles should reflect real organic search behavior around how customers actually query products.

Structured Data Alignment

SEOs should also make sure that feed attributes match on-page schema.

Keeping a close eye on Merchant Center for any potential issues, such as missing GTINs or prices not matching, and making any necessary adjustments to schema/structured data, will help to ensure that the feed is consistent and context is fully delivered to bots.
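Part of that alignment check can be automated by comparing each feed record against the values parsed out of the page’s Product JSON-LD. The sketch below illustrates the comparison; the field names are assumptions, and a real implementation would pull the schema from a crawl rather than a hard-coded dictionary.

```python
# Minimal consistency check between a feed record and the Product JSON-LD
# parsed from the landing page. Field names are illustrative assumptions.
def schema_mismatches(feed_item: dict, page_schema: dict) -> list[str]:
    offer = page_schema.get("offers", {})
    checks = {
        "price": (feed_item.get("price"), offer.get("price")),
        "gtin": (feed_item.get("gtin"), page_schema.get("gtin13")),
        "availability": (feed_item.get("availability"), offer.get("availability")),
    }
    return [
        f"{field}: feed={feed_value!r} schema={page_value!r}"
        for field, (feed_value, page_value) in checks.items()
        if feed_value != page_value
    ]

print(schema_mismatches(
    {"price": "50.00", "gtin": "0012345678905",
     "availability": "https://schema.org/InStock"},
    {"gtin13": "0012345678905",
     "offers": {"price": "60.00", "availability": "https://schema.org/InStock"}},
))
# -> ["price: feed='50.00' schema='60.00'"]
```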

Variant Consolidation Strategy

This leans heavily into faceted navigation – which ecommerce SEOs have been battling for years.

By determining when product variations should be grouped under a single parent entity versus a standalone URL, SEOs can have more control over any unnecessary duplication and cannibalization.

This can also help to protect crawl efficiency across large product catalogs and declutter product feeds.
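In the feed itself, this grouping typically runs through the item_group_id attribute, which ties size and color variants back to a single parent product. The sketch below shows the basic roll-up; the “standalone if ungrouped” rule is a deliberate simplification of a decision that usually needs human judgment.

```python
from collections import defaultdict

# Minimal sketch: roll variant rows up under a shared item_group_id so
# size/color variants don't compete as separate products. The fallback
# rule (standalone if ungrouped) is an illustrative simplification.
def group_variants(feed: list[dict]) -> dict[str, list[dict]]:
    groups = defaultdict(list)
    for item in feed:
        key = item.get("item_group_id") or item["id"]
        groups[key].append(item)
    return dict(groups)

feed = [
    {"id": "sku-1a", "item_group_id": "jacket-x", "color": "black", "size": "M"},
    {"id": "sku-1b", "item_group_id": "jacket-x", "color": "black", "size": "L"},
    {"id": "sku-2", "color": "red", "size": "M"},  # ungrouped: treated as standalone
]
for parent, variants in group_variants(feed).items():
    print(parent, [v["id"] for v in variants])
```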

Feed Health Monitoring

Similar to how SEOs regularly run technical crawls of websites to maintain hygiene and pick up any issues, SEOs should also treat feed governance as part of their regular checks.

This includes actively monitoring feed errors and addressing any Merchant Center issues that might limit visibility.

Prioritizing AI Search Readiness

A large opportunity for the future of search lies in agentic commerce, and product feeds align directly with it.

By ensuring feeds are clearly structured and contain complete and accurate attributes, SEOs can reinforce strong product entity signals and provide clarity, which AI systems rely on to determine what to display in comparisons and recommendations.

Final Thoughts

Product feeds are no longer just paid media assets; they are core search infrastructure that directly impacts organic shopping visibility and AI-driven discovery.

Even the strongest category pages can’t compensate for inconsistent or poorly structured data at scale.

As search becomes more conversational and comparative, structured product clarity is going to be the difference between brands that are cited and brands that are not.

Featured Image: Roman Samborskyi/Shutterstock