How To Win In Generative Engine Optimization (GEO) via @sejournal, @maltelandwehr

This post was sponsored by Peec.ai. The opinions expressed in this article are the sponsor’s own.

The first step of any good GEO campaign is creating something that LLM-driven answer machines actually want to link out to or reference.

GEO Strategy Components

Think of experiences you wouldn’t reasonably expect to find directly in ChatGPT or similar systems:

  • Engaging content like a 3D tour of the Louvre or a virtual reality concert.
  • Live data like prices, flight delays, available hotel rooms, etc. While LLMs can integrate this data via APIs, I see the opportunity to capture some of this traffic for the time being.
  • Topics that require E-E-A-T (experience, expertise, authoritativeness, trustworthiness).

LLMs cannot have first-hand experience. But users want it. LLMs are incentivized to reference sources that provide first-hand experience. That’s just one of the things to keep in mind, but what else?

We need to differentiate between two approaches: influencing foundational models versus influencing LLM answers through grounding. The first is largely out of reach for most creators, while the second offers real opportunities.

Influencing Foundational Models

Foundational models are trained on fixed datasets and can’t learn new information after training. For current models like GPT-4, it is too late – they’ve already been trained.

But this matters for the future: imagine a smart fridge stuck with o4-mini from 2025 that might – hypothetically – favor Coke over Pepsi. That bias could influence purchasing decisions for years!

Optimizing For RAG/Grounding

When LLMs can’t answer from their training data alone, they use retrieval augmented generation (RAG) – pulling in current information to help generate answers. AI Overviews and ChatGPT’s web search work this way.

As SEO professionals, we want three things:

  1. Our content gets selected as a source.
  2. Our content gets quoted most within those sources.
  3. Other selected sources support our desired outcome.

Concrete Steps To Succeed With GEO

Don’t worry: optimizing your content and brand mentions for LLMs isn’t rocket science. Plenty of traditional SEO methods still apply, along with a few new tactics you can incorporate into your workflow.

Step 1: Be Crawlable

Sounds simple but it is actually an important first step. If you aim for maximum visibility in LLMs, you need to allow them to crawl your website. There are many different LLM crawlers from OpenAI, Anthropic & Co.

Some of them behave so badly that they can trigger scraping and DDoS preventions. If you are automatically blocking aggressive bots, check in with your IT team and find a way to not block LLMs you care about.

If you use a CDN, like Fastly or Cloudflare, make sure LLM crawlers are not blocked by default settings.
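As a sketch, a robots.txt that explicitly allows major LLM crawlers might look like the following. The user-agent tokens shown are the ones vendors have published, but they change over time, so verify them against each vendor’s current documentation before relying on them:

```
# Allow common LLM crawlers (user-agent tokens as published by vendors;
# confirm current names before relying on them)
User-agent: GPTBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /
```

Remember that robots.txt only covers polite crawlers; firewall and CDN bot rules are enforced separately and need their own allowlist entries.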

Step 2: Continue Gaining Traditional Rankings

The most important GEO tactic is as simple as it sounds. Do traditional SEO. Rank well in Google (for Gemini and AI Overviews), Bing (for ChatGPT and Copilot), Brave (for Claude), and Baidu (for DeepSeek).

Step 3: Target the Query Fanout

The current generation of LLMs actually does a little more than simple RAG. They generate multiple queries. This is called query fanout.

For example, when I recently asked ChatGPT “What is the latest Google patent discussed by SEOs?”, it performed two web searches for “latest Google patent discussed by SEOs patent 2025 SEO forum” and “latest Google patent SEOs 2025 discussed”.

Advice: Check the typical query fanouts for your prompts and try to rank for those keywords as well.

Typical fanout patterns I see in ChatGPT are appending the term “forums” when I ask what people are discussing and appending “interview” when I ask questions related to a person. The current year (2025) is often added as well.

Beware: fanout patterns differ between LLMs and can change over time. Patterns we see today may not be relevant anymore in 12 months.
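As a toy illustration (not how any LLM actually implements fanout), the patterns described above can be sketched as a simple query-expansion function. The modifiers are just the ones observed in this article and will differ between systems and over time:

```python
def fanout_variants(prompt: str, year: int = 2025) -> list[str]:
    """Toy sketch of fanout-style query expansion.

    The modifiers mirror patterns observed in ChatGPT: appending the
    current year, 'forum' for discussion-style prompts, and
    'interview' for person-related prompts.
    """
    base = prompt.lower().rstrip("?")
    return [
        base,
        f"{base} {year}",       # the current year is often appended
        f"{base} forum",        # for "what are people discussing" prompts
        f"{base} interview",    # for person-related prompts
    ]

for query in fanout_variants("What is the latest Google patent discussed by SEOs?"):
    print(query)
```

In practice you would observe the real fanout queries in a GEO tool and build keyword targets from those, rather than guessing the modifiers.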

Step 4: Keep Consistency Across Your Brand Mentions

This is something simple everyone should do – both as a person and an enterprise. Make sure you are described consistently online. On X, LinkedIn, your own website, Crunchbase, GitHub – always describe yourself the same way.

If your X and LinkedIn profiles say you are a “GEO consultant for small businesses”, don’t change it to “AIO expert” on GitHub and “LLMO Freelancer” in your press releases.

I have seen people achieve positive results within a few days on ChatGPT and Google AI Overviews simply by having a consistent self-description across the web. The same principle applies to PR coverage.

Step 5: Avoid JavaScript

As an SEO, I always ask for as little JavaScript usage as possible. As a GEO practitioner, I demand it!

Most LLM crawlers cannot render JavaScript. If your main content is hidden behind JavaScript, you are out.

Step 6: Embrace Social Media & UGC

Unsurprisingly, LLMs seem to rely heavily on Reddit and Wikipedia. Both platforms offer user-generated content on virtually every topic. And thanks to multiple layers of community-driven moderation, a lot of junk and spam is already filtered out.

While both can be gamed, the average reliability of their content is still far better than on the internet as a whole. Both are also regularly updated.

Reddit also gives LLM labs insight into how people discuss topics online, what language they use to describe different concepts, and knowledge of obscure niche topics.

We can reasonably assume that moderated UGC on platforms like Reddit, Wikipedia, Quora, and Stack Overflow will stay relevant for LLMs.

I do not advocate spamming these platforms. However, if you can influence how you and competitors show up there, you might want to do so.

Step 7: Create For Machine-Readability & Quotability

Write content that LLMs understand and want to cite. No one has figured this one out perfectly yet, but here’s what seems to work:

  • Use declarative and factual language. Instead of writing “We are kinda sure this shoe is good for our customers”, write “96% of buyers self-report being happy with this shoe.”
  • Add schema markup. Its value has been debated many times, but Fabrice Canel (Principal Product Manager at Bing) recently confirmed that schema markup helps LLMs understand your content.
  • If you want to be quoted in an already existing AI Overview, write content of similar length to what is already there. While you should not just copy the current AI Overview, high cosine similarity helps. And for the nerds: yes, given normalization, you can of course use the dot product instead of cosine similarity.
  • If you use technical terms in your content, explain them. Ideally in a simple sentence.
  • Add summaries of long text paragraphs, lists of reviews, tables, videos, and other difficult-to-cite content formats.
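To make the cosine-similarity point concrete, here is a minimal sketch using bag-of-words counts. Real systems compare dense sentence embeddings rather than word counts, but the math is identical:

```python
import math
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between simple bag-of-words vectors.

    Production pipelines would embed both texts with a sentence-embedding
    model; the dot-product-over-norms formula stays the same.
    """
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

overview = "Schema markup helps LLMs understand your content."
candidate = "Schema markup helps language models understand page content."
print(cosine_similarity(overview, candidate))
```

If both vectors are normalized to unit length, the division is a no-op and the plain dot product gives the same score, which is the “for the nerds” point above.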

Step 8: Optimize your Content

The original GEO paper: “GEO: Generative Engine Optimization” (arXiv:2311.09735).

If we look at GEO: Generative Engine Optimization (arXiv:2311.09735), What Evidence Do Language Models Find Convincing? (arXiv:2402.11782v1), and similar scientific studies, the answer is clear: it depends!

To be cited for some topics in some LLMs, it helps to:

  • Add unique words.
  • Have pro/cons.
  • Gather user reviews.
  • Quote experts.
  • Include quantitative data and name your sources.
  • Use easy-to-understand language.
  • Write with positive sentiment.
  • Add product text with low perplexity (predictable and well-structured).
  • Include more lists (like this one!).

However, for other combinations of topics and LLMs, these measures can be counterproductive.

Until broadly accepted best practices evolve, the only advice I can give is do what is good for users and run experiments.

Step 9: Stick to the Facts

For over a decade, algorithms have extracted knowledge from text as triples like (Subject, Predicate, Object) — e.g., (Lady Liberty, Location, New York). A text that contradicts known facts may seem untrustworthy. A text that aligns with consensus but adds unique facts is ideal for LLMs and knowledge graphs.

So stick to the established facts. And add unique information.
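As an illustration of the triple idea (the facts and helper here are hypothetical, not any specific knowledge-graph system), a consumer of your content can store each statement as a (subject, predicate, object) triple and flag contradictions:

```python
# Hypothetical sketch: facts stored as (subject, predicate, object) triples,
# mirroring the (Lady Liberty, Location, New York) example above.
known_facts = {
    ("Lady Liberty", "Location", "New York"),
    ("Lady Liberty", "Material", "Copper"),
}

def contradicts(claim: tuple, facts: set) -> bool:
    """A claim contradicts the graph when the same subject/predicate
    pair already exists with a different object."""
    subject, predicate, obj = claim
    return any(
        s == subject and p == predicate and o != obj
        for s, p, o in facts
    )

print(contradicts(("Lady Liberty", "Location", "Paris"), known_facts))   # True: conflicts with consensus
print(contradicts(("Lady Liberty", "Height", "93 m"), known_facts))      # False: a new, unique fact
```

This is the mechanical reason “align with consensus, then add unique facts” works: conflicting triples look untrustworthy, while novel triples extend the graph.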

Step 10: Invest in Digital PR

Everything discussed here is not just true for your own website. It is also true for content on other websites. The best way to influence it? Digital PR!

The more and better coverage you can obtain for your brand, the more likely LLMs are to parrot it back to users.

I have even seen cases where advertorials were used as sources!

Concrete GEO Workflows To Try

Before I joined Peec AI, I was a customer. Here is how I used the tool – and how I advise our customers to use it.

Learn Who Your Competitors Are

Just like with traditional SEO, using a good GEO tool will often reveal unexpected competitors. Regularly look at a list of automatically identified competitors. For those who surprise you, check in which prompts they are mentioned. Then check the sources that led to their inclusion. Are you represented properly in these sources? If not, act!

Is a competitor referenced because of their PeerSpot profile but you have zero reviews there? Ask customers for a review.

Was your competitor’s CEO interviewed by a YouTuber? Try to get on that show as well. Or publish your own videos targeting similar keywords.

Is your competitor regularly featured on top 10 lists where you never make it to the top 5? Offer the publisher who created the list an affiliate deal they cannot decline. With the next content update, you’re almost guaranteed to be the new number one.

Understand the Sources

When performing search grounding, LLMs rely on sources.

Typical LLM Sources: Reddit & Wikipedia

Look at the top sources for a large set of relevant prompts. Ignore your own website and your competitors for a second. You might find some of these:

  • A community like Reddit or X. Become part of the community and join the discussion. X is your best bet to influence results on Grok.
  • An influencer-driven website like YouTube or TikTok. Hire influencers to create videos. Make sure to instruct them to target the right keywords.
  • An affiliate publisher. Buy your way to the top with higher commissions.
  • A news and media publisher. Buy an advertorial and/or target them with your PR efforts. In certain cases, you might want to contact their commercial content department.


Target Query Fanout

Once you have observed which searches are triggered by query fanout for your most relevant prompts, create content to target them.

On your own website. With posts on Medium and LinkedIn. With press releases. Or simply by paying for article placements. If it ranks well in search engines, it has a chance to be cited by LLM-based answer engines.

Position Yourself for AI-Discoverability

Generative Engine Optimization is no longer optional – it’s the new frontline of organic growth. At Peec AI, we’re building the tools to track, influence, and win in this new ecosystem.

We currently see clients growing their LLM traffic by 100% every 2 to 3 months – sometimes with up to 20x the conversion rate of typical SEO traffic!

Whether you’re shaping AI answers, monitoring brand mentions, or pushing for source visibility, now is the time to act. The LLMs consumers will trust tomorrow are being trained today.


Image Credits

Featured Image: Image by Peec.ai. Used with permission.

Google Warns: CSS Background Images Aren’t Indexed via @sejournal, @MattGSouthern

In a recent Search Off the Record podcast, Google’s Search Relations team cautioned developers against using CSS for all website images.

While CSS background images can enhance visual design, they’re invisible to Google Image Search. This could lead to missed opportunities in image indexing and search visibility.

Here’s what Google’s Search Advocates advise.

The CSS Image Problem

During the episode, John Mueller shared a recurring issue:

“I had someone ping me I think last week or a week before on social media: “It looks like my developer has decided to use CSS for all of the images because they believe it’s better.” Does this work?”

According to the Google team, this approach stems from a misunderstanding of how search engines interpret images.

When visuals are added via CSS background properties instead of standard HTML image tags, they may not appear in the page’s DOM, and therefore can’t be indexed.

As Martin Splitt explained:

“If you have a content image, if the image is part of the content… you want an img, an image tag or a picture tag that actually has the actual image as part of the DOM because you want us to see like ah so this page has this image that is not just decoration. It is part of the content and then image search can pick it up.”

Content vs. Decoration

The difference between a content image and a decorative image is whether it adds meaning or is purely cosmetic.

Decorative images, such as patterned backgrounds, atmospheric effects, or animations, can be safely implemented using CSS.

When the image conveys meaning or is referenced in the content, CSS is a poor fit.

Splitt offered the following example:

“If I have a blog post about this specific landscape and I want to like tell people like look at this amazing panoramic view of the landscape here and then it’s a background image… the problem is the content specifically references this image, but it doesn’t have the image as part of the content.”

In such cases, placing the image in HTML using the img or picture tag ensures it’s understood as part of the page’s content and eligible for indexing in Google Image Search.
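A minimal before/after sketch of the distinction (file names and class names are hypothetical): the first image is part of the DOM and eligible for image indexing, the second is styling only:

```html
<!-- Content image: part of the DOM, eligible for Google Image Search -->
<img src="/images/landscape-panorama.jpg"
     alt="Panoramic view of the landscape described in the post">

<!-- Decorative background: fine for cosmetics, invisible to image indexing -->
<style>
  .hero {
    background-image: url("/images/texture.png");
  }
</style>
<div class="hero">…</div>
```

The `alt` text on the content image also carries the accessibility benefit the team alludes to later.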

What Makes CSS Images Invisible?

Splitt explained why this happens:

“For a user looking at the browser, what are you talking about, Martin? The image is right there. But if you look at the DOM, it absolutely isn’t there. It is just a CSS thing that has been loaded to style the page.”

Because Google parses the DOM to determine content structure, images styled purely through CSS are often overlooked, especially if they aren’t included as actual HTML elements.

This distinction reflects a broader web development principle.

Splitt adds:

“There is ideally a separation between the way the site looks and what the content is.”

What About Stock Photos?

The team addressed the use of stock photos, which are sometimes added for visual appeal rather than original content.

Splitt says:

“The meaning is still like this image is not mine. It’s a stock image that we bought or licensed but it is still part of the content.”

While these images may not rank highly due to duplication, implementing them in HTML still helps ensure proper indexing and improves accessibility.

Why This Matters

The team highlighted several examples where improper implementation could reduce visibility:

  • Real estate listings: Home photos used as background images won’t show up in relevant image search queries.
  • News articles: Charts or infographics added via CSS can’t be indexed, weakening discoverability.
  • E-commerce sites: Product images embedded in background styles may not appear in shopping-related searches.

What To Do Next

Google’s comments indicate that you should follow these best practices:

  • Use HTML (img or picture) tags for any image that conveys content or is referenced on the page.
  • Reserve CSS backgrounds for decorative visuals that don’t carry meaning.
  • If users might expect to find an image via search, it should be in the HTML.
  • Proper implementation helps not only with SEO, but also with accessibility tools and screen readers.

Looking Ahead

Publishers should be mindful of how images are implemented.

While CSS is a powerful tool for design, using it to deliver content-related images may conflict with best practices for indexing, accessibility, and long-term SEO strategy.

Listen to the full podcast episode below:


Featured Image: Roman Samborskyi/Shutterstock

Google Confirms CSS Class Names Don’t Influence SEO via @sejournal, @MattGSouthern

In a recent episode of Google’s Search Off the Record podcast, Martin Splitt and John Mueller clarified how CSS affects SEO.

While some aspects of CSS have no bearing on SEO, others can directly influence how search engines interpret and rank content.

Here’s what matters and what doesn’t.

Class Names Don’t Matter For Rankings

One of the clearest takeaways from the episode is that CSS class names have no impact on Google Search.

Splitt stated:

“I don’t think it does. I don’t think we care because the CSS class names are just that. They’re just assigning a specific somewhat identifiable bit of stylesheet rules to elements and that’s it. That’s all. You could name them all “blurb.” It would not make a difference from an SEO perspective.”

Class names, they explained, are used only for applying visual styling. They’re not considered part of the page’s content. So they’re ignored by Googlebot and other HTML parsers when extracting meaningful information.

Even if you’re feeding HTML into a language model or a basic crawler, class names won’t factor in unless your system is explicitly designed to read those attributes.

Why Content In Pseudo Elements Is A Problem

While class names are harmless, the team warned about placing meaningful content in CSS pseudo elements like :before and :after.

Splitt stated:

“The idea again—the original idea—is to separate presentation from content. So content is in the HTML, and how it is presented is in the CSS. So with before and after, if you add decorative elements like a little triangle or a little dot or a little light bulb or like a little unicorn—whatever—I think that is fine because it’s decorative. It doesn’t have meaning in the sense of the content. Without it, it would still be fine.”

Adding visual flourishes is acceptable, but inserting headlines, paragraphs, or any user-facing content into pseudo elements breaks the core principle of web development.

That content becomes invisible to search engines, screen readers, and any other tools that rely on parsing the HTML directly.

Mueller shared a real-world example of how this can go wrong:

“There was once an escalation from the indexing team that said we should contact the site and tell them to stop using before and after… They were using the before pseudo class to add a number sign to everything that they considered hashtags. And our indexing system was like, it would be so nice if we could recognize these hashtags on the page because maybe they’re useful for something.”

Because the hashtag symbols were added via CSS, they were never seen by Google’s systems.

Splitt tested it live during the recording and confirmed:

“It’s not in the DOM… so it doesn’t get picked up by rendering.”
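The hashtag anecdote maps to CSS like this (a hypothetical reconstruction of the pattern described, not the actual site’s stylesheet):

```css
/* Fine: purely decorative, nothing meaningful is lost without it */
.callout::before {
  content: "★ ";
}

/* Problematic: the '#' carries user-facing meaning, but it never
   appears in the DOM text, so indexing systems cannot see it */
.hashtag::before {
  content: "#";
}
```

Anything a parser of the raw HTML should be able to read belongs in the markup itself, not in a `content` property.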

Oversized CSS Can Hurt Performance

The episode also touched on performance issues related to bloated stylesheets.

According to data from the HTTP Archive’s 2022 Web Almanac, the median size of a CSS file had grown to around 68 KB for mobile and 72 KB for desktop.

Mueller stated:

“The Web Almanac says every year we see CSS grow in size, and in 2022 the median stylesheet size was 68 kilobytes or 72 kilobytes. … They also mentioned the largest one that they found was 78 megabytes. … These are text files.”

That kind of bloat can negatively impact Core Web Vitals and overall user experience, which are two areas that do influence rankings. Frameworks and prebuilt libraries are often the cause.

While developers can mitigate this with minification and unused rule pruning, not everyone does. This makes CSS optimization a worthwhile item on your technical SEO checklist.

Keep CSS Crawlable

Despite CSS’s limited role in ranking, Google still recommends making CSS files crawlable.

Mueller joked:

“Google’s guidelines say you should make your CSS files crawlable. So there must be some kind of magic in there, right?”

The real reason is more technical than magical. Googlebot uses CSS files to render pages the way users would see them.

Blocking CSS can affect how your pages are interpreted, especially for layout, mobile-friendliness, or elements like hidden content.

Practical Tips For SEO Pros

Here’s what this episode means for your SEO practices:

  • Stop optimizing class names: Keywords in CSS classes won’t help your rankings.
  • Check pseudo elements: Any real content, like text meant to be read, should live in HTML, not in :before or :after.
  • Audit stylesheet size: Large CSS files can hurt page speed and Core Web Vitals. Trim what you can.
  • Ensure CSS is crawlable: Blocking stylesheets may disrupt rendering and impact how Google understands your page.

The team also emphasized the importance of using proper HTML tags for meaningful images:

“If the image is part of the content and you’re like, ‘Look at this house that I just bought,’ then you want an img, an image tag or a picture tag that actually has the actual image as part of the DOM because you want us to see like, ah, so this page has this image that is not just decoration.”

Use CSS for styling and HTML for meaning. This separation helps both users and search engines.

Listen to the full podcast episode below:

Google Clarifies Structured Data Rules For Returns & Loyalty Programs via @sejournal, @MattGSouthern

Google has updated its structured data documentation to clarify how merchants should implement markup for return policies and loyalty programs.

The updates aim to reduce confusion and ensure compatibility with Google Search features.

Key Changes In Return Policy Markup

The updated documentation clarifies that only a limited subset of return policy data is supported at the product level.

Google now explicitly states that comprehensive return policies must be defined using the MerchantReturnPolicy type under the Organization markup. This ensures a consistent policy across the full catalog.

In contrast, product-level return policies, defined under Offer, should be used only for exceptions, and they support fewer properties.

Google explains in its return policy documentation:

“Product-level return policies support only a subset of the properties available for merchant-level return policies.”
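A minimal merchant-level sketch, assuming a hypothetical store with a 30-day free mail-in return policy (all values are placeholders; check Google’s structured data reference for the full supported property list):

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Store",
  "hasMerchantReturnPolicy": {
    "@type": "MerchantReturnPolicy",
    "applicableCountry": "US",
    "returnPolicyCategory": "https://schema.org/MerchantReturnFiniteReturnWindow",
    "merchantReturnDays": 30,
    "returnMethod": "https://schema.org/ReturnByMail",
    "returnFees": "https://schema.org/FreeReturn"
  }
}
```

Defined once at the Organization level like this, the policy applies across the catalog; Offer-level MerchantReturnPolicy markup is then reserved for the exceptions.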

Loyalty Program Markup Must Be Separate

For loyalty programs, Google now emphasizes that the MemberProgram structured data must be defined under the Organization markup, either on a separate page or in Merchant Center.

While loyalty benefits like member pricing and points can still be referenced at the product level via UnitPriceSpecification, the program structure itself must be maintained separately.

Google notes in the loyalty program documentation:

“To specify the loyalty benefits… separately add UnitPriceSpecification markup under your Offer structured data markup.”

What’s Not Supported

Google’s documentation now states that shipping discounts and extended return windows offered as loyalty perks aren’t supported in structured data.

While merchants may still offer these benefits, they won’t be eligible for enhanced display in Google Search results.

This is particularly relevant for businesses that advertise such benefits prominently within loyalty programs.

Why It Matters

The changes don’t introduce new capabilities, but they clarify implementation rules that have been inconsistently followed or interpreted.

Merchants relying on offer-level markup for return policies or embedding loyalty programs directly in product offers may need to restructure their data.

Here are some next steps to consider:

  • Audit existing markup to ensure return policies and loyalty programs are defined at the correct levels.
  • Use product-level return policies only when needed, such as for exceptions.
  • Separate loyalty program structure from loyalty benefits, using MemberProgram under Organization, and validForMemberTier under Offer.

Staying compliant with these updated guidelines ensures eligibility for structured data features in Google Search and Shopping.


Featured Image: Roman Samborskyi/Shutterstock

Google: Many Top Sites Have Invalid HTML And Still Rank via @sejournal, @MattGSouthern

A recent discussion on Google’s Search Off the Record podcast challenges long-held assumptions about technical SEO, revealing that most top-ranking websites don’t use valid HTML.

Despite these imperfections, they continue to rank well in search results.

Search Advocate John Mueller and Developer Relations Engineer Martin Splitt referenced a study by former Google webmaster Jens Meiert, which found that only one homepage among the top 200 websites passed HTML validation tests.

Mueller highlighted:

“0.5% of the top 200 websites have valid HTML on their homepage. One site had valid HTML. That’s it.”

He described the result as “crazy,” noting that the study surprised even developers who take pride in clean code.

Mueller added:

“Search engines have to deal with whatever broken HTML is out there. It doesn’t have to be perfect, it’ll still work.”

When HTML Errors Matter

While most HTML issues are tolerated, certain technical elements, such as metadata, must be correctly implemented.

Splitt said:

“If something is written in a way that isn’t HTML compliant, then the browser will make assumptions.”

That usually works fine for visible content, but can fail “catastrophically” when it comes to elements that search engines rely on.

Mueller said:

“If [metadata] breaks, then it’s probably not going to do anything in your favor.”

SEO Is Not A Technical Checklist

Google also challenged the notion that SEO is a box-ticking exercise for developers.

Mueller said:

“Sometimes SEO is also not so much about purely technical things that you do, but also kind of a mindset.”

Splitt said:

“Am I using the terminology that my potential customers would use? And do I have the answers to the things that they will ask?”

Naming things appropriately, he said, is one of the most overlooked SEO skills and often more important than technical precision.

Core Web Vitals and JavaScript

Two recurring sources of confusion, Core Web Vitals and JavaScript, were also addressed.

Core Web Vitals

The podcast hosts reiterated that good Core Web Vitals scores don’t guarantee better rankings.

Mueller said:

“Core Web Vitals is not the solution to everything.”

Mueller added:

“Developers love scores… it feels like ‘oh I should like maybe go from 85 to 87 and then I will rank first,’ but there’s a lot more involved.”

JavaScript

On the topic of JavaScript, Splitt said that while Google can process it, implementation still matters.

Splitt said:

“If the content that you care about is showing up in the rendered HTML, you’ll be fine generally speaking.”

Splitt added:

“Use JavaScript responsibly and don’t use it for everything.”

Misuse can still create problems for indexing and rendering, especially if assumptions are made without testing.

What This Means

The key takeaway from the podcast is that technical perfection isn’t 100% necessary for SEO success.

While critical elements like metadata must function correctly, the vast majority of HTML validation errors won’t prevent ranking.

As a result, developers and marketers should be cautious about overinvesting in code validation at the expense of content quality and search intent alignment.

Listen to the full podcast episode below:

Google’s ‘srsltid’ Parameter Appears In Organic URLs, Creating Confusion via @sejournal, @MattGSouthern

Google’s srsltid parameter, originally meant for product tracking, is now showing on blog pages and homepages, creating confusion among SEO pros.

Per a recent Reddit thread, people are seeing the parameter attached not just to product pages, but also to blog posts, category listings, and homepages.

Google Search Advocate John Mueller responded saying, “it doesn’t cause any problems for search.” However, it may still raise more questions than it answers.

Here’s what you need to know.

What Is the srsltid Parameter Supposed to Do?

The srsltid parameter is part of Merchant Center auto-tagging. It’s designed to help merchants track conversions from organic listings connected to their product feeds.

When enabled, the parameter is appended to URLs shown in search results, allowing for better attribution of downstream behavior.

A post on Google’s Search Central community forum clarifies that these URLs aren’t indexed.

As Product Expert Barry Hunter (not affiliated with Google) explained:

“The URLs with srsltid are NOT really indexed. The param is added dynamically at runtime. That’s why they don’t show as indexed in Search Console… but they may appear in search results.”

While it’s true the URLs aren’t indexed, they’re showing up in indexed pages reported by third-party tools.

Why SEO Pros Are Confused

Despite Google’s assurances, the real-world impact of srsltid is causing confusion for these reasons:

  • Inflated URL counts: Tools often treat URLs with unique parameters as separate pages. This inflates site page counts and can obscure crawl reports or site audits.
  • Data fragmentation: Without filtering, analytics platforms like GA4 split traffic between canonical and parameterized URLs, making it harder to measure performance accurately.
  • Loss of visibility in Search Console: As documented in a study by Oncrawl, sites saw clicks and impressions for srsltid URLs drop to zero around September, even though those pages still appeared in search results.
  • Unexpected reach: The parameter is appearing on pages beyond product listings, including static pages, blogs, and category hubs.

Oncrawl’s analysis also found that Googlebot crawled 0.14% of pages with the srsltid parameter, suggesting minimal crawling impact.

Can Anything Be Done?

Google hasn’t indicated any rollback or revision to how srsltid works in organic results. But you do have a few options depending on how you’re affected.

Option 1: Disable Auto-Tagging

You can turn off Merchant Center auto-tagging by navigating to Tools and settings > Conversion settings > Automatic tagging. Switching to UTM parameters can provide greater control over traffic attribution.

Option 2: Keep Auto-Tagging, Filter Accordingly

If you need to keep auto-tagging active:

  • Ensure all affected pages have correct canonical tags.
  • Configure caching systems to ignore srsltid as a cache key.
  • Update your analytics filters to exclude or consolidate srsltid traffic.
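For the canonical-tag step, a single link element on each affected page tells Google (and most tools that respect canonicals) to attribute parameterized variants back to the clean URL. The domain and path here are placeholders:

```html
<link rel="canonical" href="https://www.example.com/products/blue-widget/">
```

With this in place, `…/blue-widget/?srsltid=…` variants consolidate to the canonical URL rather than being counted as separate pages.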

Blocking the parameter in robots.txt won’t prevent the URLs from appearing in search results, as they’re added dynamically and not crawled directly.

What This Means

The srsltid parameter may not affect rankings, but its indirect impact on analytics and reporting is being felt.

When performance reporting shifts without explanation, SEO pros need to provide answers. Understanding how srsltid works, and how it doesn’t, helps mitigate confusion.

Staying informed, filtering correctly, and communicating with stakeholders are the best options for navigating this issue.


Featured Image: Roman Samborskyi/Shutterstock

The Smart SEO Team’s Guide To Timing & Executing A Large-Scale Site Migration via @sejournal, @inmotionhosting

This post was sponsored by InMotion Hosting. The opinions expressed in this article are the sponsor’s own.

We’ve all felt it: that sinking feeling in your stomach when your site starts crawling instead of sprinting.

Page speed reports start flashing red. Search Console is flooding your inbox with errors.

You know it’s time for better hosting, but here’s the thing: moving a large website without tanking your SEO is like trying to change tires while your car is still moving.

We’ve seen too many migrations go sideways, which is why we put together this guide.

Let’s walk through a migration plan that works. One that’ll future-proof your site without disrupting your rankings or overburdening your team.

Free Website Migration Checklist

Step 1: Set Your Performance Goals & Audit Your Environment

Establish Performance Benchmarks

Before you touch a single line of code, you need benchmarks. Think of these as your “before” pictures in a website makeover.

If you skip this step, you’ll regret it later. How will you know if your migration was successful if you don’t know where you started?

Gather your current page speed numbers, uptime percentages, and server response times. These will serve as proof that the migration was worth it.

Document Current Site Architecture

Next, let’s identify what’s working for your site and what’s holding it back. Keep a detailed record of your current setup, including your content management system (CMS), plugins, traffic patterns, and peak periods.

Large sites often have unusual, hidden connections that only reveal themselves at the worst possible moments during migrations. Trust us, documenting this now prevents those 2 AM panic attacks later.

Define Your Website Migration Goals

Let’s get specific about what success looks like. Saying “we want the site to be faster” is like saying “we want more leads.” It sounds great, but how do you measure it?

Aim for concrete targets, such as:

  • Load times under 2 seconds on key pages (we like to focus on product pages first).
  • 99.99% uptime guarantees (because every minute of downtime is money down the drain).
  • Server response times under 200ms.
  • 30% better crawl efficiency (so Google sees your content updates).

We recommend running tests with Google Lighthouse and GTmetrix at different times of day. You’d be surprised how performance can vary between your morning coffee and afternoon slump.

Your top money-making pages deserve special attention during migration, so keep tabs on those.

Step 2: Choose The Right Hosting Fit

Not all hosting options can handle the big leagues.

We’ve seen too many migrations fail because someone picked a hosting plan better suited for a personal blog than an enterprise website.

Match Your Needs To Solutions

Let’s break down what we’ve found works best.

Managed VPS is excellent for medium-sized sites. If you’re receiving 100,000 to 500,000 monthly visitors, this might be your sweet spot. You’ll have the control you need without the overkill.

Dedicated servers are what we recommend for the major players. If you’re handling millions of visitors or running complex applications, this is for you.

What we appreciate about dedicated resources is that they eliminate the “noisy neighbor” problem, where someone else’s traffic spike can tank your performance. Enterprise sites on dedicated servers load 40-60% faster and rarely experience those resource-related outages.

WordPress-optimized hosting is ideal if you’re running WordPress. These environments come pre-tuned with built-in caching and auto-updates. Why reinvent the wheel, right?

Understand The Must-Have Features Checklist

Let’s talk about what your web hosting will need for SEO success.

NVMe SSDs are non-negotiable these days. They’re about six times faster than regular storage for database work, and you’ll feel the difference immediately.

A good CDN is essential if you want visitors from different regions to have the same snappy experience. Server-level caching makes a huge difference, as it reduces processing work and speeds up repeat visits and search crawls.

Illustration showing how caching works on a website. Image created by InMotion Hosting, June 2025.

Staging environments aren’t optional for big migrations. They’re your safety net. Keep in mind that emergency fixes can cost significantly more than setting up staging beforehand.

And please ensure you have 24/7 migration support from actual humans. Not chatbots, real engineers who answer the phone when things go sideways at midnight.

Key Considerations for Growth

Think about where your site is headed, not just where it is now.

Are you launching in new markets? Planning a big PR push? Your hosting should handle growth without making you migrate again six months later.

One thing that often gets overlooked: redirect limits. Many platforms cap at 50,000-100,000 redirects, which sounds like a lot until you’re migrating a massive product catalog.

Step 3: Prep for Migration – The Critical Steps

Preparation separates smooth migrations from disasters. This phase makes or breaks your project.

Build Your Backup Strategy

First things first: backups, backups, backups. We’re talking complete copies of both files and databases.

Don’t dump everything into one giant folder labeled “Site Stuff.” Organize backups by date and type, and include the entire file system, database exports, configuration files, SSL certificates, and everything else.

Here’s a mistake we see all the time: not testing the restore process before migration day. A backup you can’t restore is wasted server space. Always conduct a test restore on a separate server to ensure everything works as expected.

Set Up the New Environment and Test in Staging

Your new hosting environment should closely mirror your production environment. Match PHP versions, database settings, security rules, everything. This isn’t the time to upgrade seven different things at once (we’ve seen that mistake before).

Run thorough pre-launch tests on staging. Check site speed on different page types. Pull out your phone and verify that the mobile display works.

Use Google’s testing tools to confirm that your structured data remains intact. The goal is no surprises on launch day.

Map Out DNS Cutover and Minimize TTL for a Quick Switch

DNS strategy might sound boring, but it can make or break your downtime window.

Here’s what works: reduce your TTL to 300 seconds (5 minutes) or less about 48 hours before migration. This makes DNS changes propagate quickly when you flip the switch.

Have all your DNS records prepared in advance: A records, CNAMEs for subdomains, MX records for email, and TXT records for verification. Keep a checklist and highlight the mission-critical ones that would cause panic if forgotten.
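
As a rough illustration of that checklist, a BIND-style zone fragment with TTLs already lowered for cutover might look like this (the names, IPs, and record values are placeholders, not a recommended configuration):

```zone
; Hypothetical zone fragment ~48 hours before cutover:
; every record's TTL is lowered to 300 seconds.
$TTL 300
example.com.   300  IN  A      203.0.113.10        ; old origin, changes on launch day
www            300  IN  CNAME  example.com.
example.com.   300  IN  MX     10 mail.example.com.
example.com.   300  IN  TXT    "v=spf1 include:_spf.example.com ~all"
```

After the migration has settled, TTLs can be raised again to reduce resolver load.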

Freeze Non-Essential Site Updates Before Migration

This might be controversial, but we’re advocates for freezing all content and development changes for at least 48 hours before migration.

The last thing you need is someone publishing a new blog post right as you’re moving servers.

You can use this freeze time for team education. It’s a perfect moment to run workshops on technical SEO or explain how site speed affects rankings. Turn downtime into learning time.

Step 4: Go-Live Without the Guesswork

Migration day! This is where all your planning pays off, or where you realize what you forgot.

Launch Timing Is Everything

Choose your timing carefully. Aim for the window when your traffic is typically lowest.

For global sites, consider the “follow-the-sun” approach. This means migrating region by region during their lowest traffic hours. While it takes longer, it dramatically reduces risk.

Coordinate Your Teams

Clear communication is everything. Everyone should know exactly what they’re doing and when.

Define clear go/no-go decision points. Who makes the call if something looks off? What’s the threshold for rolling back vs. pushing through?

Having these conversations before you’re in the middle of a migration saves a ton of stress.

Live Performance Monitoring

Once you flip the switch, monitoring becomes your best friend. Here are the key items to monitor:

  • Watch site speed across different page types and locations.
  • Set up email alerts for crawl errors in Search Console.
  • Monitor 404 error rates and redirect performance.

Sudden spikes in 404 errors or drops in speed need immediate attention. They’re usually signs that something didn’t migrate correctly.

The faster you catch these issues, the less impact they’ll have on your rankings.

Post-Migration Validation

After launch, run through a systematic checklist:

  • Test redirect chains (we recommend Screaming Frog for this).
  • Make sure internal links work.
  • Verify your analytics tracking (you’d be surprised how often this breaks).
  • Check conversion tracking.
  • Validate SSL certificates.
  • Watch server logs for crawl issues.

One step people often forget: resubmitting your sitemap in Search Console as soon as possible. This helps Google discover your new setup faster.

Even with a perfect migration, most large sites take 3-6 months for complete re-indexing, so patience is key.

Step 5: Optimize, Tune, and Report: How To Increase Wins

The migration itself is just the beginning. Post-migration tuning is where the magic happens.

Fine-Tune Your Configuration

Now that you’re observing real traffic patterns, you can optimize your setup.

Start by enhancing caching rules based on actual user behavior. Adjust compression settings, and optimize those database queries that seemed fine during testing but are sluggish in production.

Handling redirects at the server level, rather than through plugins or CMS settings, is faster and reduces server load.
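
As a sketch of what server-level redirects can look like, here is a hypothetical nginx fragment (the paths are invented; an Apache or .htaccess equivalent would work similarly):

```nginx
# Hypothetical sketch: 301 redirects handled by nginx itself,
# so no CMS or plugin code runs for redirected requests.
map $request_uri $redirect_target {
    default              "";
    /old-product         /new-product;
    /old-category/       /new-category/;
}

server {
    # issue the redirect before the request ever reaches the application
    if ($redirect_target) {
        return 301 $redirect_target;
    }
}
```

A map lookup like this stays fast even with very large redirect lists.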

Automate Performance Monitoring

Set up alerts for issues before they become problems. We recommend monitoring:

  • Page speed drops of more than 10%.
  • Uptime dips.
  • Changes in crawl rates.
  • Spikes in server resource usage.
  • Organic traffic drops of more than 20%.
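
A threshold rule like the page speed and organic traffic bullets can be sketched in a few lines of code. This is a simplified illustration; the 10% and 20% thresholds come from the list above, and everything else (function name, metric values) is hypothetical:

```python
def should_alert(baseline: float, current: float, drop_threshold: float) -> bool:
    """Flag a metric that has fallen more than drop_threshold below its baseline."""
    return current < baseline * (1 - drop_threshold)

# Page speed score down from 90 to 75 is a >10% drop: alert.
print(should_alert(90, 75, 0.10))        # True

# Organic traffic down from 10,000 to 8,500 sessions is a 15% drop,
# still inside the 20% tolerance: no alert.
print(should_alert(10_000, 8_500, 0.20))  # False
```

In practice the same check would run on a schedule against your monitoring data rather than hardcoded numbers.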

Automation saves you from constantly checking dashboards, allowing you to focus on improvements instead of firefighting.

Analyze for SEO Efficiency

Server logs tell you a lot about how well your migration went from an SEO perspective. Look for fewer crawl errors, faster Googlebot response times, and better crawl budget usage.

Improvements in crawl efficiency mean Google can discover and index your new content much faster.

Measure and Report Success

Compare your post-migration performance to those baseline metrics you wisely collected.

When showing results to executives, connect each improvement to business outcomes. For example:

  • “Faster pages reduced our bounce rate by 15%, which means more people are staying on the site.”
  • “Better uptime means we’re not losing sales during peak hours.”
  • “Improved crawl efficiency means our new products get indexed faster.”

Pro tip: Build easy-to-read dashboards that executives can access at any time. This helps build confidence and alleviate concerns.

Ready to Execute Your High-Performance Migration?

You don’t need more proof that hosting matters. Every slow page load and server hiccup already demonstrates it. What you need is a plan that safeguards your SEO investment while achieving tangible improvements.

This guide provides you with that playbook. You now know how to benchmark, choose the right solutions, and optimize for success.

This approach can be applied to sites of all sizes, ranging from emerging e-commerce stores to large enterprise platforms. The key lies in preparation and partnering with the right support team.

If you’re ready to take action, consider collaborating with a hosting provider that understands the complexities of large-scale migrations. Look for a team that manages substantial redirect volumes and builds infrastructure specifically for high-traffic websites. Your future rankings will thank you!

Image Credits

Featured Image: Image by InMotion Hosting. Used with permission.

In-Post Image: Images by InMotion Hosting. Used with permission.

How Much Code Should SEO Pros Know? Google Weighs In via @sejournal, @MattGSouthern

Google’s Martin Splitt and Gary Illyes recently addressed a common question in search marketing: how technical do SEO professionals need to be?

In a Search Off the Record podcast, they offered guidance on which technical skills are helpful in SEO and discussed the long-standing friction between developers and SEO professionals.

Splitt noted:

“I think in order to optimize a system or work with a system so deeply like SEOs do, you have to understand some of the characteristics of the system.”

However, he clarified that strong coding skills aren’t a requirement for doing effective SEO work.

The Developer-SEO Divide

Splitt, who regularly speaks at both developer and SEO events, acknowledged that the relationship between these groups can sometimes be difficult.

Splitt says:

“Even if you go to a developer event and talk about SEO, it is a strained relationship you’re entering.”

He added that developers often approach SEO conversations with skepticism, even when they come from someone with a developer background.

This disconnect can cause real-world problems.

Illyes shared an example of a large agency that added a calendar plugin across multiple websites, unintentionally generating “100 million URLs.” Google began crawling all of them, creating a major crawl budget issue.

What SEO Pros Need To Know

Rather than recommending that SEO professionals learn to code, Splitt advises understanding how web technologies function.

Splitt states:

“You should understand what is a header, how does HTTPS conceptually work, what’s the certificate, how does that influence how the connection works.”

He also advised being familiar with the differences between web protocols, such as HTTP/2 and HTTP/1.1.

While SEO pros don’t need to write in programming languages like C, C++, or JavaScript, having some awareness of how JavaScript affects page rendering can be helpful.

Context Matters: Not All SEOs Need The Same Skills

Google also pointed out that SEO is a broad discipline, and the amount of technical knowledge needed can vary depending on your focus.

Splitt gave the example of international SEO. He initially said these specialists might not need technical expertise, but later clarified that internationalization often includes technical components too.

“SEO is such a broad field. There are people who are amazing at taking content international… they specialize on a much higher layer as in like the content and the structure and language and localization in different markets.”

Still, he emphasized that people working in more technical roles, or in generalist positions, should aim to understand development concepts.

What This Means

Here’s what the discussion means for SEO professionals:

  • Technical understanding matters, but being able to code is not always necessary. Knowing HTTP protocols, HTML basics, and how JavaScript interacts with pages can go a long way.
  • Your role defines your needs. If you’re working on content strategy or localization, deep technical knowledge might not be essential. But if you’re handling site migrations or audits, that knowledge becomes more critical.
  • Context should guide your decisions. Applying advice without understanding the “why” can lead to problems. SEO isn’t one-size-fits-all.
  • Cross-team collaboration is vital. Google’s comments suggest there’s still a divide between developers and SEO teams. Improving communication between these groups could prevent technical missteps that affect rankings.

Looking Ahead

As websites become more complex and JavaScript frameworks continue to grow, technical literacy will likely become more important.

Google’s message is clear: SEOs don’t need to become developers, but having a working understanding of how websites function can make you far more effective.

For companies, closing the communication gap between development and marketing remains a key area of opportunity.

Listen to the full podcast episode below:


Featured Image: Roman Samborskyi/Shutterstock

See What AI Sees: AI Mode Killed the Old SEO Playbook — Here’s the New One via @sejournal, @mktbrew

This post was sponsored by MarketBrew. The opinions expressed in this article are the sponsor’s own.

Is Google using AI to censor thousands of independent websites?

Wondering why your traffic has suddenly dropped, even though you’re doing SEO properly?

From letters to the FTC describing a systematic dismantling of the open web by Google, to SEO professionals who may not realize their strategies no longer make an impact, these changes represent a wholesale re-architecting of the web’s entire incentive structure.

It’s time to adapt.

While some were warning about AI passage retrieval and vector scoring, the industry largely stuck to legacy thinking. SEOs continued to focus on E-E-A-T, backlinks, and content refresh cycles, assuming that if they simply improved quality, recovery would come.

But the rules had changed.

Google’s Silent Pivot: From Keywords to Embedding Vectors

In late 2023 and early 2024, Google began rolling out what it now refers to as AI Mode.

What Is Google’s AI Mode?

AI Mode breaks content into passages, embeds those passages into a multi-dimensional vector space, and compares them directly to queries using cosine similarity.

In this new model, relevance is determined geometrically rather than lexically. Instead of ranking entire pages, Google evaluates individual passages. The most relevant passages are then surfaced in a ChatGPT-like interface, often without any need for users to click through to the source.

Beneath this visible change is a deeper shift: content scoring has become embedding-first.

What Are Embedding Vectors?

Embedding vectors are mathematical representations of meaning. When Google processes a passage of content, it converts that passage into a vector, a list of numbers that captures the semantic context of the text. These vectors exist in a multi-dimensional space where the distance between vectors reflects how similar the meanings are.

Instead of relying on exact keywords or matching phrases, Google compares the embedding vector of a search query to the embedding vectors of individual passages. This allows it to identify relevance based on deeper context, implied meaning, and overall intent.

Traditional SEO practices like keyword targeting and topical coverage do not carry the same weight in this system. A passage does not need to use specific words to be considered relevant. What matters is whether its vector lands close to the query vector in this semantic space.
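
The comparison described above is plain vector math. Here is a minimal sketch using toy three-dimensional vectors; real embedding models produce vectors with hundreds of dimensions, and the numbers below are invented for illustration:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy embeddings: the query and passage_a point in similar directions,
# passage_b does not, so passage_a would be judged more relevant,
# regardless of which exact words each passage uses.
query     = [0.9, 0.1, 0.3]
passage_a = [0.8, 0.2, 0.4]
passage_b = [0.1, 0.9, 0.2]

print(cosine_similarity(query, passage_a) > cosine_similarity(query, passage_b))  # True
```

The geometry, not the vocabulary, decides which passage lands closer to the query.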

How Are Embedding Vectors Different From Keywords?

Keywords focus on exact matches. Embedding vectors focus on meaning.

Traditional SEO relied on placing target terms throughout a page. But Google’s AI Mode now compares the semantic meaning of a query and a passage using embedding vectors. A passage can rank well even if it doesn’t use the same words, as long as its meaning aligns closely with the query.

This shift has made many SEO strategies outdated. Pages may be well-written and keyword-rich, yet still underperform if their embedded meaning doesn’t match search intent.

What SEO Got Wrong & What Comes Next

The story isn’t just about Google changing the game, it’s also about how the SEO industry failed to notice the rules had already shifted.

Don’t: Misread the Signals

As rankings dropped, many teams assumed they’d been hit by a quality update or core algorithm tweak. They doubled down on familiar tactics: improving E-E-A-T signals, updating titles, and refreshing content. They pruned thin pages, boosted internal links, and ran audits.

But these efforts were based on outdated models. They treated the symptom, visibility loss, not the cause: semantic drift.

Semantic drift happens when your content’s vector no longer aligns with the evolving vector of search intent. It’s invisible to traditional SEO tools because it occurs in latent space, not your HTML.

No amount of backlinks or content tweaks can fix that.

This wasn’t just platform abuse. It was also a strategic oversight.

Many SEO teams believed that doing what Google said, improving helpfulness, pruning content, and writing for humans, would be enough.

That promise collapsed under AI scrutiny.

But we’re not powerless.

Don’t: Fall Into The Trap of Compliance

Google told the industry to “focus on helpful content,” and SEOs listened, but through a lexical lens. They optimized for tone, readability, and FAQs.

But “helpfulness” was being determined mathematically by whether your vectors aligned with the AI’s interpretation of the query.

Thousands of reworked sites still dropped in visibility. Why? Because while polishing copy, their teams never asked: Does this content geometrically align with search intent?

Do: Optimize For Data, Not Keywords

The new SEO playbook begins with a simple truth: you are optimizing for math, not words.

The New SEO Playbook: How To Optimize For AI-Powered SERPs

Here’s what we now know:

  1. AI Mode is real and measurable.
    You can calculate embedding similarity.
    You can test passages against queries.
    You can visualize how Google ranks.
  2. Content must align semantically, not just topically.
    Two pages about “best hiking trails” may be lexically similar, but if one focuses on family hikes and the other on extreme terrain, their vectors diverge.
  3. Authority still matters, but only after similarity.
    The AI Mode fan-out selects relevant passages first. Authority reranking comes later.
    If you don’t pass the similarity threshold, your authority won’t matter.
  4. Passage-level optimization is the new frontier.
    Optimizing entire pages isn’t enough. Each chunk of content must pull semantic weight.
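
As a rough sketch of the selection order described in points 1 through 3, the pipeline below gates on similarity first and only then reranks by authority. The threshold, field names, and authority scores are invented for illustration, not taken from any documented Google system:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def fan_out(passages: list[dict], query_vec: list[float], sim_threshold: float = 0.6) -> list[dict]:
    # 1) similarity gate: a passage must clear the threshold to be considered at all
    survivors = [p for p in passages if cosine(p["vec"], query_vec) >= sim_threshold]
    # 2) only the survivors are reranked by an authority signal
    return sorted(survivors, key=lambda p: p["authority"], reverse=True)

passages = [
    {"id": "high-authority, off-topic",   "vec": [0.0, 1.0], "authority": 0.9},
    {"id": "relevant, modest authority",  "vec": [0.9, 0.1], "authority": 0.4},
]

# The query points along the first axis; the off-topic passage never
# gets the chance to use its authority score.
print([p["id"] for p in fan_out(passages, [1.0, 0.0])])  # ['relevant, modest authority']
```

The point of the sketch: authority is a tiebreaker among relevant passages, never a substitute for relevance.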

How Do I Track Google AI Mode Data To Improve SERP Visibility?

It depends on your goals. For success in SERPs, you need tools that not only show you visibility data, but also show you how to get there.

Profound was one of the first tools to measure whether content appeared inside large language models, essentially offering a visibility check for LLM inclusion. It gave SEOs early signals that AI systems were beginning to treat search results differently, sometimes surfacing pages that never ranked traditionally. Profound made it clear: LLMs were not relying on the same scoring systems that SEOs had spent decades trying to influence.

But Profound stopped short of offering explanations. It told you if your content was chosen, but not why. It didn’t simulate the algorithmic behavior of AI Mode or reveal what changes would lead to better inclusion.

That’s where simulation-based platforms came in.

Market Brew approached the challenge differently. Instead of auditing what was visible inside an AI system, it reconstructed the inner logic of those systems, building search engine models that mirror Google’s evolution toward embeddings and vector-based scoring. These platforms didn’t just observe the effects of AI Mode; they recreated its mechanisms.

As early as 2023, Market Brew had already implemented:

  • Passage segmentation that divides page content into consistent ~700-character blocks.
  • Embedding generation using Sentence-BERT to capture the semantic fingerprint of each passage.
  • Cosine similarity calculations to simulate how queries match specific blocks of content, not just the page as a whole.
  • Thematic clustering algorithms, like Top Cluster Similarity, to determine which groupings of passages best aligned with a search intent.
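
The segmentation step in the first bullet is straightforward to sketch. The 700-character block size comes from the list above; the embedding step that would follow (using a model such as Sentence-BERT) is omitted here, and the helper name is invented:

```python
def segment_passages(text: str, block_size: int = 700) -> list[str]:
    """Split page text into consistent fixed-size character blocks."""
    return [text[i:i + block_size] for i in range(0, len(text), block_size)]

page_text = "word " * 500           # 2,500 characters of placeholder content
blocks = segment_passages(page_text)
print([len(b) for b in blocks])     # [700, 700, 700, 400]
```

Each resulting block would then be embedded and scored against the query vector independently, rather than the page being scored as a whole.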

🔍 Market Brew Tutorial: Mastering the Top Cluster Similarity Ranking Factor | First Principles SEO

This meant users could test a set of prompts against their content and watch the algorithm think, block by block, similarity score by score.

Where Profound offered visibility, Market Brew offered agency.

Instead of asking “Did I show up in an AI overview?”, simulation tools helped SEOs ask, “Why didn’t I?” and, more importantly, “What can I change to improve my chances?”

By visualizing AI Mode behavior before Google ever acknowledged it publicly, these platforms gave early adopters a critical edge. The SEOs using them didn’t wait for traffic to drop before acting, they were already optimizing for vector alignment and semantic coverage long before most of the industry knew it mattered.

And in an era where rankings hinge on how well your embeddings match a user’s intent, that head start has made all the difference.

Visualize AI Mode Coverage. For Free.

SEO didn’t die. It transformed, from art into applied geometry.

AI Mode Visualizer Tutorial

To help SEOs adapt to this AI-driven landscape, Market Brew has just announced the AI Mode Visualizer, a free tool that simulates how Google’s AI Overviews evaluate your content:

  • Enter a page URL.
  • Input up to 10 search prompts or generate them automatically from a single master query using LLM-style prompt expansion.
  • See a cosine similarity matrix showing how each 700-character content chunk of your page aligns with each intent.
  • Click any score to view exactly which passage matched, and why.

🔗 Try the AI Mode Visualizer

This is the only tool that lets you watch AI Mode think.

Two Truths, One Future

Nate Hake is right: Google restructured the game. The data reflects an industry still catching up to the new playbook.

Because two things can be true:

  • Google may be clearing space for its own services, ad products, and AI monopolies.
  • And many SEOs are still chasing ghosts in a world governed by geometry.

It’s time to move beyond guesses.

If AI Mode is the new architecture of search, we need tools that expose how it works, not just theories about what changed.

We first brought you this story in early 2024, before AI Overviews had a name, explaining how embeddings and vector scoring would reshape SEO.

Tools like the AI Mode Visualizer offer a rare chance to see behind the curtain.

Use it. Test your assumptions. Map the space between your content and modern relevance.

Search didn’t end.

But the way forward demands new eyes.


Image Credits

Featured Image: Image by MarketBrew. Used with permission.