In a recent episode of Google’s Search Off the Record podcast, Martin Splitt and John Mueller discussed when lazy loading helps and when it can slow pages.
Splitt used a real-world example from developers.google.com to illustrate a common pitfall: when a CMS makes every image lazy by default, above-the-fold visuals get lazy-loaded too, which can delay Largest Contentful Paint (LCP).
Splitt said:
“The content management system that we are using for developers.google.com … defaults all images to lazy loading, which is not great.”
Splitt used the example to explain why lazy-loading hero images is risky: you tell the browser to wait on the most visible element, which can push back LCP and cause layout shifts if dimensions aren’t set.
Splitt said:
“If you are using lazy loading on an image that is immediately visible, that is most likely going to have an impact on your largest contentful paint. It’s like almost guaranteed.”
How Lazy Loading Delays LCP
LCP measures the moment the largest text or image in the initial viewport is painted.
Normally, the browser’s preload scanner finds that hero image early and fetches it with high priority so it can paint fast.
When you add loading="lazy" to that same hero, you change the browser’s scheduling:
The image is treated as lower priority, so other resources start first.
The browser waits until layout and other work progress before it requests the hero image.
The hero then competes for bandwidth after scripts, styles, and other assets have already queued.
That delay shifts the paint time of the largest element later, which increases your LCP.
On slow networks or CPU-limited devices, the effect is more noticeable. If width and height are missing, the late image can also nudge layout and feel “jarring.”
SEO Risk With Some Libraries
Browsers now support a built-in loading attribute for images and iframes, which removes the need for heavy JavaScript in standard scenarios. WordPress adopted native lazy loading by default, helping it spread.
Splitt said:
“Browsers got a native attribute for images and iframes, the loading attribute … which makes the browser take care of the lazy loading for you.”
Older or custom lazy-loading libraries can hide image URLs in nonstandard attributes. If the real URL never lands in src or srcset in the HTML Google renders, images may not get picked up for indexing.
Splitt said:
“We’ve seen multiple lazy loading libraries … that use some sort of data-source attribute rather than the source attribute… If it’s not in the source attribute, we won’t pick it up if it’s in some custom attribute.”
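To make the difference concrete, here is a minimal sketch of the two patterns (the data-src attribute name is just one common library convention; file names are placeholders):

    <!-- Problematic: the real URL lives in a custom attribute Google won't read -->
    <img data-src="/images/product.jpg" class="lazyload" alt="Product photo">

    <!-- Safe: the real URL is in src, with native lazy loading -->
    <img src="/images/product.jpg" loading="lazy" alt="Product photo">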
How To Check Your Pages
Use Search Console’s URL Inspection to review the rendered HTML and confirm that above-the-fold images and lazy-loaded modules resolve to standard attributes. Avoid relying on the screenshot.
Splitt advised:
“If the rendered HTML looks like it contains all the image URLs in the source attribute of an image tag … then you will be fine.”
Ranking Impact
Splitt framed ranking effects as modest. Core Web Vitals contribute to ranking, but he called it “a tiny minute factor in most cases.”
What You Should Do Next
Keep hero and other above-the-fold images eager with width and height set (see the sketch after this list).
Use native loading="lazy" for below-the-fold images and iframes.
If you rely on a library for previews, videos, or dynamic sections, make sure the final markup exposes real URLs in standard attributes, and confirm in rendered HTML.
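As a minimal sketch of the first two points (file names and dimensions are illustrative; fetchpriority is optional but can help the hero paint sooner):

    <!-- Above the fold: load eagerly, with dimensions to prevent layout shift -->
    <img src="/images/hero.jpg" width="1200" height="600" fetchpriority="high" alt="Hero image">

    <!-- Below the fold: let the browser defer it -->
    <img src="/images/footer-banner.jpg" width="800" height="400" loading="lazy" alt="Footer banner">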
Looking Ahead
Lazy loading is useful when applied selectively. Treat it as an opt-in for noncritical content.
Verify your implementation with rendered HTML, and watch how your LCP trends over time.
How to make your content visible in the age of AI search
So, what exactly is LLM Optimization? Well, the answer to that question depends on who you ask. For example, if you ask a machine learning engineer, they’ll tell you it’s all about tweaking prompts and token limits to get better performance from a large language model. In fact, Iguazio actually defines LLM optimization as improving the way models respond, which means smarter, faster, and with more contextual recognition.
If, on the other hand, you are a content strategist or SEO enthusiast, LLM optimization will mean something completely different to you and that is making sure that your content shows up in AI-generated search results. And, that needs to be true no matter whether you’re talking to ChatGPT, searching with Perplexity, or scanning Google’s new AI Mode for answers. Some call this ChatGPT SEO or Generative Engine Optimization.
So, if you fall into the latter of those two groups, i.e., the people who want their content and product pages to be seen and clicked, then this article is for you. And, if you’d like to read on, we’ll show you why LLM optimization in an AI-search landscape isn’t some sort of luxury option; it’s an absolute necessity.
What are LLMs and why should you care?
AI engineers train large language models (LLMs) on huge amounts of text and data to generate answers, summaries, code, and human-like language. They’ve read everything (not just the classics), and that includes blogs, news articles, and your website.
The reason that’s important is that LLMs don’t crawl your website in real time like search engines do. Instead, they read it, learn from it, and, when someone asks them a question, try to recall what they saw and rephrase it into an answer. If your site shows up in the answer, great; if not, you’ve got a visibility problem.
The new way of searching
Search is not just about Google anymore. Nor has any single alternative come to dominate, which leaves us with a rather messy mix of Perplexity answers, ChatGPT chats, Gemini summaries, and voice assistants reading out answers while we try to do two things at once.
In short, people aren’t just searching, they’re conversing and if your content can’t hold its own in this environment then you’re missing out on visibility, traffic, and the ability to build trust. We’ll walk you through exactly how to fix that.
SEO vs. GEO vs. AEO vs. LLMO: Are we just rebranding SEO?
If you’ve been wondering whether you now need four different strategies for SEO (Search Engine Optimization), GEO (Generative Engine Optimization), AEO (Answer Engine Optimization), and LLMO (Large Language Model Optimization), relax, it’s not as big a deal as you might think. You see, despite all the buzzwords, the core of optimization hasn’t changed much.
All four terms point to the same central goal: making your content more findable, quotable, and credible in machine-generated output regardless of whether that comes from Google’s AI Overviews, ChatGPT, or an answer box on Bing.
So, should you overhaul your entire content strategy to ‘do LLMO’?
Not really. At least, not yet.
Most of what boosts your presence in LLMs is already what SEO professionals have been doing for years. Structured content, semantic clarity, topical authority, entity association, clean internal linking, it’s all classic SEO.
Where they slightly diverge:
SEO (Search Engine Optimization): Relies on backlinks and site architecture to establish authority.
GEO (Generative Engine Optimization): Puts extra emphasis on unlinked brand mentions and semantic association.
AEO (Answer Engine Optimization): Focuses on being the single best, most concise, and sourceable response to a specific query.
LLMO (Large Language Model Optimization): Leans into optimizing content not just for people or search crawlers, but for LLMs reading in chunks, skipping JavaScript, and relying on embeddings and grounding datasets.
But the thing is: you don’t need four different playbooks. All you need is one solid SEO foundation. In fact, this point is backed up by Google’s Gary Illyes who confirmed that AI Search does not require specialized optimization, saying that “AI SEO” is not necessary and that standard SEO is all that is needed for both AI Overviews and AI Mode.
In practice, that means a few small shifts of emphasis:
Focus more on entity mentions, not just links.
Treat your core site pages (home, pricing, about) and PDFs as important LLM fuel.
Remember that AI crawlers don’t render JavaScript, so client-side content might be invisible.
Think about how LLMs process structure (chunking, context, citations), not just how humans skim it.
So, if you’ve already been investing in foundational SEO, you’re already doing most of what GEO, AEO, and LLMO are all about. That’s why not every new acronym demands a rethink of your efforts. Sometimes, it’s just SEO.
Key LLM SEO optimization techniques
Now that we know LLMs aren’t crawling our sites but understanding them, we need to think a little differently about how we create and construct content. This is not about cramming in keywords or trying to play the algorithm; it’s about clarity, structure, and credibility, because these are the things LLMs care about when deciding what to quote, summarize, or ignore. Below are some techniques that will help your content stay visible now that people are using generative search.
The bar has been raised on the quality of content
LLMs love clarity. The more natural and specific your language is, the easier it is for them to understand and reuse your content. That means cutting jargon, avoiding ambiguity, and writing like you’re explaining something to a colleague.
For example:
Don’t Say:
“Our innovative tool revolutionizes the digital landscape for modern businesses.”
Instead Say:
“The Yoast SEO plugin for WordPress helps businesses to improve their website’s visibility and appear in search results.”
Use Structured, Chunked Formatting
Chunked formatting means breaking your content into small pieces (chunks) of information that are easy to understand and remember. LLMs tend to prioritize the most easily digestible content construction, which means your headings, bullet points, and clearly defined sections must do a lot of heavy lifting. Not only does organizing your content like this help people skim-read, but it also helps machines understand what each section is about.
Structuring your content like this will help, as in the sketch after this list:
Write clear, descriptive H2s and H3s
Use bullet points that can provide standalone value
Include summaries and tables to give quick overviews
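As a rough sketch of that structure in HTML (headings and copy are placeholders):

    <h2>What is chunked formatting?</h2>
    <p>Chunked formatting breaks content into small, self-contained pieces.</p>
    <ul>
      <li>Each bullet makes one standalone point.</li>
      <li>Each point can be quoted without extra context.</li>
    </ul>
    <h3>Quick summary</h3>
    <p>Short sections with descriptive headings help humans skim and machines parse.</p>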
Be Factual, Transparent, and Authoritative
Just like Google, LLMs need to trust that your content is reliable before they start taking you seriously. This means you need to show your work, name your sources, reveal authors, and follow the principles of E-E-A-T: Experience, Expertise, Authoritativeness, and Trustworthiness.
Include an author bio and credentials if possible, with links to full author pages and social profiles
Name your sources when you use claims or statistics
Share real experiences if possible: “As a small business owner…”
The more real, relatable and trustworthy your content looks, the more AI will like it.
Optimize for Summarization
LLMs won’t quote your entire blog post; they’ll only use snippets. Your job is to make those snippets irresistible. Start with strong lead sentences so that each paragraph begins with a clear point followed by context. Also, it’s a good idea to front-load your content. Don’t save your best bits for the end.
As a reminder:
Start each section with what you want the key takeaway to be
Keep paragraphs short and self-contained
Create standalone summary paragraphs, as these often get quoted in AI-generated answers
Use Schema
Behind every great summary is a structured content model. That’s where Schema markup comes in: it describes your content in a standardized vocabulary that machines can reliably parse.
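As a minimal illustration, a blog post might carry Article markup like this (values are placeholders; plugins such as Yoast SEO generate comparable JSON-LD automatically):

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "What is LLM optimization?",
      "datePublished": "2025-06-01",
      "dateModified": "2025-06-15",
      "author": {
        "@type": "Person",
        "name": "Jane Doe"
      }
    }
    </script>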
Once you’ve got the basics completed, like clear writing, structure and trust signals, there’s still more you can do to give your content the best shot at visibility. These bonus strategies focus on how to make your site even more AI-friendly by anticipating how LLMs interpret and reuse information.
Use Explicit Context and Clear Language
Humans have an incredible ability to ‘fill in the blanks’ and still ‘get the message’, even when the information they receive is vague or unclear. LLMs, on the other hand… well, let’s just say that inferring meaning from vague references doesn’t come naturally to them.
In any case, the point is that if your article mentions “this tool” or “our product” without any context, an LLM might miss the connection entirely. The result? You’re left out of the answer, even if you’re the best source.
So, to give your content the clarity it deserves:
Use the full product or brand name, like “Yoast SEO plugin for WordPress,” not just “Yoast”
Define technical or niche terms before using them
Avoid vague language (“this page,” “the above section,” “click here”)
You don’t need to be repetitive, but you do need to be explicit rather than implicit.
Leverage FAQs and Conversational Formats
LLMs love FAQs because they’re direct, predictable, and easy to quote. They closely match real user intent and provide high-value snippets that tools like Perplexity and Gemini can pull from without much guesswork.
That said, there’s an important limitation to keep in mind if you’re using the Yoast SEO FAQ block in Gutenberg:
You cannot use H2 or H3 heading tags inside the FAQ block. The block creates its own question-answer formatting using custom HTML, which is great for structured data (FAQPage schema), but it doesn’t support native heading tags, which limits your ability to optimize for AI readability and skimmability.
So, if your goal is to appear in AI-generated summaries or answer boxes, where headings like “What is LLM SEO?” make it easy for AI to quote your content, you might be better off using manual formatting.
Here’s how to get the best of both worlds:
Step 1: Use H2 or H3 tags for each question (e.g., “What is llms.txt?”) and write a clear, short answer beneath it. This improves LLM visibility but doesn’t generate structured FAQ schema.
Step 2: Use the Yoast FAQ block for schema support, but know that it won’t give you a proper heading structure.
Ultimately, the more your FAQs resemble natural, searchable questions — and are structured in a way that both humans and AI can easily parse — the more likely they are to be featured in answers.
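A minimal sketch of the manual approach from Step 1 (question and answer are placeholders):

    <h3>What is llms.txt?</h3>
    <p>llms.txt is a proposed file that gives AI systems guidance about a site's content.</p>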
Enhance Trust with Freshness Signals
Just like search engines, some LLMs give preference to newer content, but remember that freshness needs to be signaled in a way machines can read.
Older content can be overlooked. Worse, it can be quoted incorrectly if something has changed since you last hit publish.
Make sure your pages include:
A clear “last updated” timestamp
Regular reviews for accuracy
Changelogs or update notes if applicable (especially for software or plugin content)
It doesn’t have to be complicated, even a simple “Last updated: June 2025” can help both readers and AI systems trust that your content is current.
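In HTML, that can be as simple as a visible date wrapped in a time element so machines can parse it too (the date is a placeholder):

    <p>Last updated: <time datetime="2025-06-12">June 12, 2025</time></p>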
Today, we’re entering a phase where who wrote your content is just as important as what it says. That means you need to highlight author visibility and put effort into signaling real-world experience.
Use Person schema to formally associate the content with a specific individual (see the sketch after this list)
Weave in relevant experience (“As an SEO consultant who works with SaaS brands…”)
Remember, LLMs are more likely to trust, quote, and amplify expert-authored content.
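A minimal Person sketch (name, title, and URLs are placeholders) that can be referenced as the author in your Article markup:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Person",
      "name": "Jane Doe",
      "jobTitle": "SEO Consultant",
      "url": "https://example.com/about/jane-doe",
      "sameAs": [
        "https://www.linkedin.com/in/janedoe",
        "https://x.com/janedoe"
      ]
    }
    </script>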
Use Internal Linking Strategically
Think of internal linking as your site’s nervous system. It helps both humans and LLMs understand what’s important, how topics relate, and where to go next.
But internal linking isn’t just about SEO hygiene anymore — it’s also a way to establish topic authority and help LLMs build a map of your expertise.
Do:
Cluster related articles together (e.g., link from “LLM Optimization” to “Schema Markup for SEO”)
Use descriptive anchor text like “read our full guide to Schema markup,” not just “click here”
Ensure every piece of content supports a broader narrative
The role of llms.txt: giving AI search all the right signals
Now let’s talk about one of the most recent developments in LLM visibility; a little file called llms.txt.
Think of it as a sibling to robots.txt, but instead of guiding search engines, it tells AI tools how they’re allowed to interact with your content. Note: llms.txt is still an evolving standard, and support across AI tools may vary, but it’s a smart step toward asserting control.
With llms.txt, you can:
Define how your content may be reused or summarized
Set clear expectations around attribution and licensing
It’s not just about protection, it’s about being proactive as AI usage accelerates.
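The format is not yet standardized; one widely cited proposal (llmstxt.org) uses a simple markdown file at the site root that curates your key content for AI systems. A minimal sketch, with placeholder names and URLs:

    # Example Company
    > Example Company makes the Example SEO plugin for WordPress.

    ## Key pages
    - [Pricing](https://example.com/pricing): plans and licensing
    - [Docs](https://example.com/docs): product documentation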
LLM Optimization and SEO are part of the same family, but they serve different functions and require slightly different thinking.
Let’s compare:
Traditional SEO: crawled and ranked by bots; emphasizes keywords; optimizes for SERPs.
LLM Optimization: read, remembered, and reused by AIs; emphasizes context and clarity; optimizes for AI-generated summaries and answers.
The takeaway? You can’t ignore either. One brings traffic; the other boosts brand visibility within AI responses.
And considering that 42% of users now start their research with an LLM (not Google), you’ll want to be found in both places.
Common Mistakes to Avoid
Even well-meaning content creators fall into traps. So, take a look at the tips below to avoid any mishaps that could damage your LLM visibility:
Writing like a robot or allowing a robot to write for you (ironically, not appreciated by robots)
Leaving your content undated and unchanged for years
Publishing posts without any author information or editorial standards
Ignoring internal links or leaving orphaned pages
Using vague headings or anchor text like “read more” or “this article”
If your content looks generic, outdated, or anonymous, it won’t earn any trust. And, without trust, it won’t get quoted.
Tools and Resources to Get Started
Search used to be about visibility within SERPs. But now, it’s also about being seen in summaries, answers, snippets, and chats. LLMs aren’t just shaping the future of search; they’re shaping how your brand is perceived to both humans and robots alike.
To stand out:
Write with clarity and context
Structure for humans and machines
Cite your expertise and show your authors
Use tools like Yoast and llms.txt to signal your intent
Future-proof your visibility with Yoast SEO. From llms.txt integration to schema support, Yoast gives you all the tools you need to speak AI’s language and dominate both generative answers and search engines. Get started with Yoast SEO Premium now and make it easy for AI to say something accurate, useful, and… ideally, about you.
Brendan Reid
Brendan is a seasoned writer with a particular interest in SMEs. What he really enjoys is being able to provide real, actionable steps that can be taken today to start making business better for everyone.
Over the past decade, digital marketers have witnessed a dramatic shift in how search budgets are allocated. Companies once funded SEO teams alongside PPC teams, but a PPC-first approach has come to dominate the inbound marketing space.
Where Have SEO Budgets Gone?
Today, more than $150 billion is spent annually on paid search in the United States alone, while only $50 billion is invested in SEO.
With Google Ads, every dollar has a direct, reportable outcome:
Impressions.
Clicks.
Conversions.
SEO, by contrast, has long been a black box.
As a result, agencies and the clients that hire them followed the money, even when SEO delivered better results.
PPC’s Direct Attribution Makes PPC Look More Important, But SEO Still Dominates
Hard facts:
SEO drives 5x more traffic than PPC.
Companies pay 3x more on PPC than SEO.
You Can Now Trace ROI Back To SEO
As a result, many SEO professionals and agencies want a way back to organic. Now, there is one, and it’s powered by attribution.
Attribution Is the Key to Measurable SEO Performance
Instead of sitting on the edge of the search engine’s black box, guessing what might happen, we can now go inside the SEO black box, to simulate how the algorithms behave, factor by factor, and observe exactly how rankings react to each change.
With this model in place, you are no longer stuck saying “trust us.”
You can say, “Here’s what we changed. Here’s how rankings moved. Here’s the value of that movement.” Whether the change was a new internal link structure or a content improvement, it’s now visible, measurable, and attributable.
For the first time, SEO teams have a way to communicate performance in terms executives understand: cause, effect, and value.
This transparency is changing the way agencies operate. It turns SEO into a predictable system, not a gamble. And it arms client-facing teams with the evidence they need to justify the budget, or win it back.
How Agencies Are Replacing PPC With Measurable Organic SEO
For agencies, attribution opens the door to something much bigger than better reporting; it enables a completely new kind of offering: performance-based SEO.
Traditionally, SEO services have been sold as retainers or hourly engagements. Clients pay for effort, not outcomes. With attribution, agencies can now flip that model and say: You only pay when results happen.
Enter Market Brew’s AdShift feature, which models this value and success.
The AdShift tool starts with a keyword and discovers the competing URLs for that keyword’s top clustered similarities: your own website plus up to four top-ranking competitors.
AdShift averages CPC and search volume across all keywords and URLs, giving you a reliable, market-wide estimate of the monthly PPC investment your brand would need to rank #1.
AdShift then calculates the percentage of your PPC spend that can be replaced to fund SEO. This allows you to model your own Performance Plan, with variable discounts on Market Brew license fees, always priced at less than 50% of the PPC fee for the clicks replaced by new SEO traffic.
Finally, AdShift simulates the selected PPC replacement plan option, based on your keyword footprint, so you can instantly see the savings from the associated Performance Plans.
That’s the heart of the PPC replacement plan: a strategy you can use to gradually shift clients’ paid search budgets into measurable, performance-based SEO.
What Is A PPC Replacement Plan? Trackable SEO.
A PPC replacement plan is a strategy in which agencies gradually shift their clients’ paid search budgets into organic investments, with measurable outcomes and shared performance incentives.
Here’s how it works:
Benchmark Paid Spend: Identify the current Google Ads budget, e.g., $10,000 per month or $120,000 per year.
Forecast Organic Value: Use search engine modeling to predict the lift in organic traffic from specific SEO tasks.
Execute & Attribute: Complete tasks and monitor real-time changes in rankings and traffic.
Charge on Impact: Instead of billing for time, bill for results, often at a fraction of the client’s former ad spend.
This is not about replacing all paid spend.
Branded queries and some high-value targets may remain in PPC. But for the large, expensive middle of the keyword funnel, agencies can now offer a smarter path: predictable, attributable organic results, at a lower cost-per-click, with better margins.
And most importantly, instead of lining Google’s pockets with PPC revenue, your investments begin to fuel both organic and LLM searches!
Real-World Proof That SEO Attribution Works
Agencies exploring this new attribution-powered model aren’t just intrigued … they’re energized. For many, it’s the first time in years that SEO feels like a strategic growth engine, not just a checklist of deliverables.
“We’ve pitched performance SEO to three clients this month alone,” said one digital strategy lead. “The ability to tie ranking improvements to specific tasks changed the entire conversation.”
“Instead of walking into meetings looking to justify an SEO retainer, we enter with a blueprint representing a SEO/GEO/AEO Search Engine’s ‘digital twin’ with the AI-driven tasks that show exactly what needs to be changed and the rankings it produces. Clients don’t question the value … they ask what’s next.”
Several agencies report that new business wins are increasing simply because they offer something different. While competitors stick to vague SEO promises or expensive PPC management, partners leveraging attribution offer clarity, accountability, and control.
And when the client sees that they’re paying less and getting more, it’s not a hard sell, it’s a long-term relationship.
A Smarter, More Profitable Model for Agencies and SEOs
The traditional agency model in search has become a maze of expectations.
Managing paid search may deliver short-term wins, but it descends into a bidding war that only those with the biggest budgets can win. SEO, meanwhile, has often felt like a thankless task … necessary but underappreciated, valuable but difficult to prove.
Attribution changes that.
For agencies, this is a path back to profitability and positioning. With attribution, you’re not just selling effort … you’re selling outcomes. And because the work is modeled and measured in advance, you can confidently offer performance plans that are both client-friendly and agency-profitable.
For SEOs, this is about getting the credit they deserve. Attribution allows practitioners to demonstrate their impact in concrete terms. Rankings don’t just move … they move because of you. Traffic increases aren’t vague … they’re connected to your specific strategies.
Now, you can show this.
Most importantly, this approach rebuilds trust.
Clients no longer have to guess what’s working. They see it. In dashboards, in forecasts, in side-by-side comparisons of where they were and where they are now. It restores SEO to a place of clarity and control where value is obvious, and investment is earned.
The industry has been waiting for this. And now, it’s here.
From PPC Dependence to Organic Dominance — Now Backed by Data
Search budgets have long been upside down, pouring billions into paid clicks that capture a mere fraction of user attention, while underfunding the organic channel that delivers lasting value.
Why? Because SEO lacked attribution.
That’s no longer the case.
Today, agencies and SEO professionals have the tools to prove what works, forecast what’s next, and get paid for the real value they deliver. It’s a shift that empowers agencies to move beyond bidding-war PPC management and into a lower cost & higher ROAS, performance-based SEO.
This isn’t just a new service model; it’s a rebalancing of power in search.
Organic is back. It’s measurable. It’s profitable. And it’s ready to take center stage again.
The only question is: will you be the agency or brand that leads the shift or watch as others do it first?
Bing has updated its sitemap guidance with a renewed focus on the lastmod tag, highlighting its role in AI-powered search to determine which pages need to be recrawled.
While real-time tools like IndexNow offer faster updates, Bing says accurate lastmod values help keep content discoverable, especially on frequently updated or large-scale sites.
Bing Prioritizes lastmod For Recrawling
Bing says the lastmod field in your sitemap is a top signal for AI-driven indexing. It helps determine whether a page needs to be recrawled or can be skipped.
To make it work effectively, use ISO 8601 format with both date and time (e.g. 2004-10-01T18:23:17+00:00). That level of precision helps Bing prioritize crawl activity based on actual content changes.
Avoid setting lastmod to the time your sitemap was generated, unless the page was truly updated.
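A minimal sitemap entry following that guidance might look like this (URL and timestamp are placeholders):

    <url>
      <loc>https://example.com/blog/llm-seo-guide</loc>
      <lastmod>2025-08-04T09:30:00+00:00</lastmod>
    </url>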
Bing also confirmed that changefreq and priority tags are ignored and no longer affect crawling or ranking.
Submission & Verification Tips
Bing recommends submitting your sitemap in one of two ways:
Reference it in your robots.txt file
Submit it via Bing Webmaster Tools
Once submitted, Bing fetches the sitemap immediately and rechecks it daily.
You can verify whether it’s working by checking the submission status, last read date, and any processing errors in Bing Webmaster Tools.
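For the robots.txt route, a single line pointing at the sitemap is enough (URL is a placeholder):

    Sitemap: https://example.com/sitemap.xml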
Combine With IndexNow For Better Coverage
To increase the chances of timely indexing, Bing suggests combining sitemaps with IndexNow.
While sitemaps give Bing a full picture of your site, IndexNow allows real-time URL-level updates—useful when content changes frequently.
The Bing team states:
“By combining sitemaps for comprehensive site coverage with IndexNow for fast, URL-level submission, you provide the strongest foundation for keeping your content fresh, discoverable, and visible.”
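Mechanically, an IndexNow submission can be as simple as an HTTP GET with the changed URL and your API key (values are placeholders; see indexnow.org for the full protocol):

    https://api.indexnow.org/indexnow?url=https://example.com/updated-page&key=YOUR_INDEXNOW_KEY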
Sitemaps at Massive Scale
If you manage a large website, Bing’s sitemap capacity limits are worth your attention:
Up to 50,000 URLs per sitemap
50,000 sitemaps per index file
2.5 billion URLs per index
Multiple index files support indexing up to 2.5 trillion URLs
That makes the standard sitemap protocol scalable enough even for enterprise-level ecommerce or publishing platforms.
Fabrice Canel and Krishna Madhavan of Microsoft AI, Bing, noted that using these limits to their full extent helps ensure content remains discoverable in AI search.
Why This Matters
As search becomes more AI-driven, accurate crawl signals matter more.
Bing’s reliance on sitemaps, especially the lastmod field, shows that basic technical SEO practices still matter, even as AI reshapes how content is surfaced.
For large sites, Bing’s support for trillions of URLs offers scalability. For everyone else, the message is simpler: keep your sitemaps clean, accurate, and updated in real time. This gives your content the best shot at visibility in AI search.
This post was sponsored by Peec.ai. The opinions expressed in this article are the sponsor’s own.
The first step of any good GEO campaign is creating something that LLM-driven answer machines actually want to link out to or reference.
GEO Strategy Components
Think of experiences you wouldn’t reasonably expect to find directly in ChatGPT or similar systems:
Engaging content like a 3D tour of the Louvre or a virtual reality concert.
Live data like prices, flight delays, available hotel rooms, etc. While LLMs can integrate this data via APIs, I see the opportunity to capture some of this traffic for the time being.
Topics that require EEAT (experience, expertise, authoritativeness, trustworthiness).
LLMs cannot have first-hand experience. But users want it. LLMs are incentivized to reference sources that provide first-hand experience. That’s just one of the things to keep in mind, but what else?
We need to differentiate between two approaches: influencing foundational models versus influencing LLM answers through grounding. The first is largely out of reach for most creators, while the second offers real opportunities.
Influencing Foundational Models
Foundational models are trained on fixed datasets and can’t learn new information after training. For current models like GPT-4, it is too late – they’ve already been trained.
But this matters for the future: imagine a smart fridge stuck with o4-mini from 2025 that might – hypothetically – favor Coke over Pepsi. That bias could influence purchasing decisions for years!
Optimizing For RAG/Grounding
When LLMs can’t answer from their training data alone, they use retrieval augmented generation (RAG) – pulling in current information to help generate answers. AI Overviews and ChatGPT’s web search work this way.
As SEO professionals, we want three things:
Our content gets selected as a source.
Our content gets quoted most within those sources.
Other selected sources support our desired outcome.
Concrete Steps To Succeed With GEO
Don’t worry, it doesn’t take rocket science to optimize your content and brand mentions for LLMs. Actually, plenty of traditional SEO methods still apply, with a few new SEO tactics you can incorporate into your workflow.
Step 1: Be Crawlable
It sounds simple, but it’s actually an important first step. If you aim for maximum visibility in LLMs, you need to allow them to crawl your website. There are many different LLM crawlers from OpenAI, Anthropic & Co.
Some of them behave so badly that they can trigger scraping and DDoS preventions. If you are automatically blocking aggressive bots, check in with your IT team and find a way to not block LLMs you care about.
If you use a CDN, like Fastly or Cloudflare, make sure LLM crawlers are not blocked by default settings.
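As a sketch, a robots.txt that explicitly allows a few well-known LLM crawlers might look like this (user-agent tokens change over time, so verify them against each vendor's documentation):

    User-agent: GPTBot
    Allow: /

    User-agent: ClaudeBot
    Allow: /

    User-agent: PerplexityBot
    Allow: /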
Step 2: Continue Gaining Traditional Rankings
The most important GEO tactic is as simple as it sounds. Do traditional SEO. Rank well in Google (for Gemini and AI Overviews), Bing (for ChatGPT and Copilot), Brave (for Claude), and Baidu (for DeepSeek).
Step 3: Target the Query Fanout
The current generation of LLMs actually does a little more than simple RAG. They generate multiple queries. This is called query fanout.
For example, when I recently asked ChatGPT “What is the latest Google patent discussed by SEOs?”, it performed two web searches for “latest Google patent discussed by SEOs patent 2025 SEO forum” and “latest Google patent SEOs 2025 discussed”.
Advice: Check the typical query fanouts for your prompts and try to rank for those keywords as well.
Typical fanout-patterns I see in ChatGPT are appending the term “forums” when I ask what people are discussing and appending “interview” when I ask questions related to a person. The current year (2025) is often added as well.
Beware: fanout patterns differ between LLMs and can change over time. Patterns we see today may not be relevant anymore in 12 months.
Step 4: Keep Consistency Across Your Brand Mentions
This is something simple everyone should do – both as a person and an enterprise. Make sure you are consistently described online. On X, LinkedIn, your own website, Crunchbase, Github – always describe yourself the same way.
If your X and LinkedIn profiles say you are a “GEO consultant for small businesses”, don’t change it to “AIO expert” on Github and “LLMO Freelancer” in your press releases.
I have seen people achieve positive results within a few days on ChatGPT and Google AI Overviews by simply having a consistent self description across the web. This also applies to PR coverage – the more and better coverage you can obtain for your brand, the more likely LLMs are to parrot it back to users.
Step 5: Avoid JavaScript
As an SEO, I always ask for as little JavaScript usage as possible. As a GEO, I demand it!
Most LLM crawlers cannot render JavaScript. If your main content is hidden behind JavaScript, you are out.
Step 6: Embrace Social Media & UGC
Unsurprisingly, LLMs seem to rely on Reddit and Wikipedia a lot. Both platforms offer user-generated content on virtually every topic. And thanks to multiple layers of community-driven moderation, a lot of junk and spam is already filtered out.
While both can be gamed, the average reliability of their content is still far better than on the internet as a whole. Both are also regularly updated.
Reddit also provides LLM labs with data on how people discuss topics online, what language they use to describe different concepts, and knowledge of obscure niche topics.
We can reasonably assume that moderated UGC found on platforms like Reddit, Wikipedia, Quora, and Stack Overflow will stay relevant for LLMs.
I do not advocate spamming these platforms. However, if you can influence how you and competitors show up there, you might want to do so.
Step 7: Create For Machine-Readability & Quotability
Write content that LLMs understand and want to cite. No one has figured this one out perfectly yet, but here’s what seems to work:
Use declarative and factual language. Instead of writing “We are kinda sure this shoe is good for our customers”, write “96% of buyers have self-reported to be happy with this shoe.”
Add schema. It has been debated many times, but recently Fabrice Canel (Principal Product Manager at Bing) confirmed that schema markup helps LLMs understand your content (see the sketch after this list).
If you want to be quoted in an already existing AI Overview, have content with similar length to what is already there. While you should not just copy the current AI Overview, high cosine similarity helps. And for the nerds: yes, given normalization, you can of course use the dot product instead of cosine similarity.
If you use technical terms in your content, explain them. Ideally in a simple sentence.
Add summaries of long text paragraphs, lists of reviews, tables, videos, and other types of difficult-to-cite content formats.
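As one hedged illustration of the schema point above, a product page could expose reviews and quantitative data in markup like this (values are placeholders):

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Product",
      "name": "Example Trail Running Shoe",
      "description": "96% of buyers self-reported being happy with this shoe.",
      "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.8",
        "reviewCount": "1250"
      }
    }
    </script>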
Step 8: Optimize Your Content
If we look at GEO: Generative Engine Optimization (arXiv:2311.09735), What Evidence Do Language Models Find Convincing? (arXiv:2402.11782v1), and similar scientific studies, the answer is clear: it depends!
To be cited for some topics in some LLMs, it helps to:
Add unique words.
Have pro/cons.
Gather user reviews.
Quote experts.
Include quantitative data and name your sources.
Use easy-to-understand language.
Write with positive sentiment.
Add product text with low perplexity (predictable and well-structured).
Include more lists (like this one!).
However, for other combinations of topics and LLMs, these measures can be counterproductive.
Until broadly accepted best practices evolve, the only advice I can give is do what is good for users and run experiments.
Step 9: Stick to the Facts
For over a decade, algorithms have extracted knowledge from text as triples like (Subject, Predicate, Object) — e.g., (Lady Liberty, Location, New York). A text that contradicts known facts may seem untrustworthy. A text that aligns with consensus but adds unique facts is ideal for LLMs and knowledge graphs.
So stick to the established facts. And add unique information.
Step 10: Invest in Digital PR
Everything discussed here is not just true for your own website. It is also true for content on other websites. The best way to influence it? Digital PR!
The more and better coverage you can obtain for your brand, the more likely LLMs are to parrot it back to users.
I have even seen cases where advertorials were used as sources!
Concrete GEO Workflows To Try
Before I joined Peec AI, I was a customer. Here is how I used the tool – and how I advise our customers to use it.
Learn Who Your Competitors Are
Just like with traditional SEO, using a good GEO tool will often reveal unexpected competitors. Regularly look at a list of automatically identified competitors. For those who surprise you, check in which prompts they are mentioned. Then check the sources that led to their inclusion. Are you represented properly in these sources? If not, act!
Is a competitor referenced because of their PeerSpot profile but you have zero reviews there? Ask customers for a review.
Was your competitor’s CEO interviewed by a YouTuber? Try to get on that show as well. Or publish your own videos targeting similar keywords.
Is your competitor regularly featured on top 10 lists where you never make it to the top 5? Offer the publisher who created the list an affiliate deal they cannot decline. With the next content update, you’re almost guaranteed to be the new number one.
Understand the Sources
When performing search grounding, LLMs rely on sources.
Look at the top sources for a large set of relevant prompts. Ignore your own website and your competitors for a second. You might find some of these:
A community like Reddit or X. Become part of the community and join the discussion. X is your best bet to influence results on Grok.
An influencer-driven website like YouTube or TikTok. Hire influencers to create videos. Make sure to instruct them to target the right keywords.
An affiliate publisher. Buy your way to the top with higher commissions.
A news and media publisher. Buy an advertorial and/or target them with your PR efforts. In certain cases, you might want to contact their commercial content department.
Once you have observed which searches are triggered by query fanout for your most relevant prompts, create content to target them.
On your own website. With posts on Medium and LinkedIn. With press releases. Or simply by paying for article placements. If it ranks well in search engines, it has a chance to be cited by LLM-based answer engines.
Position Yourself for AI-Discoverability
Generative Engine Optimization is no longer optional – it’s the new frontline of organic growth. At Peec AI, we’re building the tools to track, influence, and win in this new ecosystem. We currently see clients growing their LLM traffic by 100% every two to three months, sometimes with up to 20x the conversion rate of typical SEO traffic!
Whether you’re shaping AI answers, monitoring brand mentions, or pushing for source visibility, now is the time to act. The LLMs consumers will trust tomorrow are being trained today.
In a recent Search Off the Record podcast, Google’s Search Relations team cautioned developers against using CSS for all website images.
While CSS background images can enhance visual design, they’re invisible to Google Image Search. This could lead to missed opportunities in image indexing and search visibility.
Here’s what Google’s Search Advocates advise.
The CSS Image Problem
During the episode, John Mueller shared a recurring issue:
“I had someone ping me I think last week or a week before on social media: “It looks like my developer has decided to use CSS for all of the images because they believe it’s better.” Does this work?”
According to the Google team, this approach stems from a misunderstanding of how search engines interpret images.
When visuals are added via CSS background properties instead of standard HTML image tags, they may not appear in the page’s DOM, and therefore can’t be indexed.
As Martin Splitt explained:
“If you have a content image, if the image is part of the content… you want an img, an image tag or a picture tag that actually has the actual image as part of the DOM because you want us to see like ah so this page has this image that is not just decoration. It is part of the content and then image search can pick it up.”
Content vs. Decoration
The difference between a content image and a decorative image is whether it adds meaning or is purely cosmetic.
Decorative images, such as patterned backgrounds, atmospheric effects, or animations, can be safely implemented using CSS.
When the image conveys meaning or is referenced in the content, CSS is a poor fit.
Splitt offered the following example:
“If I have a blog post about this specific landscape and I want to like tell people like look at this amazing panoramic view of the landscape here and then it’s a background image… the problem is the content specifically references this image, but it doesn’t have the image as part of the content.”
In such cases, placing the image in HTML using the img or picture tag ensures it’s understood as part of the page’s content and eligible for indexing in Google Image Search.
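Here is a minimal sketch of the two patterns (file names are placeholders):

    <!-- Content image: present in the DOM, eligible for Google Image Search -->
    <img src="/images/landscape-panorama.jpg" alt="Panoramic view of the landscape">

    <!-- Decorative background: styling only, invisible to image indexing -->
    <div class="hero" style="background-image: url('/images/texture.png');"></div>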
What Makes CSS Images Invisible?
Splitt explained why this happens:
“For a user looking at the browser, what are you talking about, Martin? The image is right there. But if you look at the DOM, it absolutely isn’t there. It is just a CSS thing that has been loaded to style the page.”
Because Google parses the DOM to determine content structure, images styled purely through CSS are often overlooked, especially if they aren’t included as actual HTML elements.
This distinction reflects a broader web development principle.
Splitt adds:
“There is ideally a separation between the way the site looks and what the content is.”
What About Stock Photos?
The team addressed the use of stock photos, which are sometimes added for visual appeal rather than original content.
Splitt says:
“The meaning is still like this image is not mine. It’s a stock image that we bought or licensed but it is still part of the content.”
While these images may not rank highly due to duplication, implementing them in HTML still helps ensure proper indexing and improves accessibility.
Why This Matters
The team highlighted several examples where improper implementation could reduce visibility:
Real estate listings: Home photos used as background images won’t show up in relevant image search queries.
News articles: Charts or infographics added via CSS can’t be indexed, weakening discoverability.
E-commerce sites: Product images embedded in background styles may not appear in shopping-related searches.
What To Do Next
Google’s comments indicate that you should follow these best practices:
Use HTML (img or picture) tags for any image that conveys content or is referenced on the page.
Reserve CSS backgrounds for decorative visuals that don’t carry meaning.
If users might expect to find an image via search, it should be in the HTML.
Proper implementation helps not only with SEO, but also with accessibility tools and screen readers.
Looking Ahead
Publishers should be mindful of how images are implemented.
While CSS is a powerful tool for design, using it to deliver content-related images may conflict with best practices for indexing, accessibility, and long-term SEO strategy.
In a recent episode of Google’s Search Off the Record podcast, Martin Splitt and John Mueller clarified how CSS affects SEO.
While some aspects of CSS have no bearing on SEO, others can directly influence how search engines interpret and rank content.
Here’s what matters and what doesn’t.
Class Names Don’t Matter For Rankings
One of the clearest takeaways from the episode is that CSS class names have no impact on Google Search.
Splitt stated:
“I don’t think it does. I don’t think we care because the CSS class names are just that. They’re just assigning a specific somewhat identifiable bit of stylesheet rules to elements and that’s it. That’s all. You could name them all “blurb.” It would not make a difference from an SEO perspective.”
Class names, they explained, are used only for applying visual styling. They’re not considered part of the page’s content. So they’re ignored by Googlebot and other HTML parsers when extracting meaningful information.
Even if you’re feeding HTML into a language model or a basic crawler, class names won’t factor in unless your system is explicitly designed to read those attributes.
Why Content In Pseudo Elements Is A Problem
While class names are harmless, the team warned about placing meaningful content in CSS pseudo elements like :before and :after.
Splitt stated:
“The idea again—the original idea—is to separate presentation from content. So content is in the HTML, and how it is presented is in the CSS. So with before and after, if you add decorative elements like a little triangle or a little dot or a little light bulb or like a little unicorn—whatever—I think that is fine because it’s decorative. It doesn’t have meaning in the sense of the content. Without it, it would still be fine.”
Adding visual flourishes is acceptable, but inserting headlines, paragraphs, or any user-facing content into pseudo elements breaks the core principle of web development.
That content becomes invisible to search engines, screen readers, and any other tools that rely on parsing the HTML directly.
Mueller shared a real-world example of how this can go wrong:
“There was once an escalation from the indexing team that said we should contact the site and tell them to stop using before and after… They were using the before pseudo class to add a number sign to everything that they considered hashtags. And our indexing system was like, it would be so nice if we could recognize these hashtags on the page because maybe they’re useful for something.”
Because the hashtag symbols were added via CSS, they were never seen by Google’s systems.
Splitt tested it live during the recording and confirmed:
“It’s not in the DOM… so it doesn’t get picked up by rendering.”
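A minimal sketch of the distinction, echoing the hashtag case Mueller described (selectors are illustrative):

    /* Fine: purely decorative flourish */
    .callout::before { content: "▸ "; }

    /* Risky: meaningful characters injected via CSS never reach the DOM */
    .hashtag::before { content: "#"; }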
Oversized CSS Can Hurt Performance
The episode also touched on performance issues related to bloated stylesheets.
According to data from the HTTP Archive’s 2022 Web Almanac, the median size of a CSS file had grown to around 68 KB for mobile and 72 KB for desktop.
Mueller stated:
“The Web Almanac says every year we see CSS grow in size, and in 2022 the median stylesheet size was 68 kilobytes or 72 kilobytes. … They also mentioned the largest one that they found was 78 megabytes. … These are text files.”
That kind of bloat can negatively impact Core Web Vitals and overall user experience, which are two areas that do influence rankings. Frameworks and prebuilt libraries are often the cause.
While developers can mitigate this with minification and unused rule pruning, not everyone does. This makes CSS optimization a worthwhile item on your technical SEO checklist.
Keep CSS Crawlable
Despite CSS’s limited role in ranking, Google still recommends making CSS files crawlable.
Mueller joked:
“Google’s guidelines say you should make your CSS files crawlable. So there must be some kind of magic in there, right?”
The real reason is more technical than magical. Googlebot uses CSS files to render pages the way users would see them.
Blocking CSS can affect how your pages are interpreted, especially for layout, mobile-friendliness, or elements like hidden content.
Practical Tips For SEO Pros
Here’s what this episode means for your SEO practices:
Stop optimizing class names: Keywords in CSS classes won’t help your rankings.
Check pseudo elements: Any real content, like text meant to be read, should live in HTML, not in :before or :after.
Audit stylesheet size: Large CSS files can hurt page speed and Core Web Vitals. Trim what you can.
Ensure CSS is crawlable: Blocking stylesheets may disrupt rendering and impact how Google understands your page.
The team also emphasized the importance of using proper HTML tags for meaningful images:
“If the image is part of the content and you’re like, ‘Look at this house that I just bought,’ then you want an img, an image tag or a picture tag that actually has the actual image as part of the DOM because you want us to see like, ah, so this page has this image that is not just decoration.”
Use CSS for styling and HTML for meaning. This separation helps both users and search engines.
Google has updated its structured data documentation to clarify how merchants should implement markup for return policies and loyalty programs.
The updates aim to reduce confusion and ensure compatibility with Google Search features.
Key Changes In Return Policy Markup
The updated documentation clarifies that only a limited subset of return policy data is supported at the product level.
Google now explicitly states that comprehensive return policies must be defined using the MerchantReturnPolicy type under the Organization markup. This ensures a consistent policy across the full catalog.
In contrast, product-level return policies, defined under Offer, should be used only for exceptions and support fewer properties.
“Product-level return policies support only a subset of the properties available for merchant-level return policies.”
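For illustration, a merchant-level policy under Organization might look like this hedged sketch (values are placeholders; consult Google’s documentation for the full supported property list):

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Organization",
      "name": "Example Store",
      "hasMerchantReturnPolicy": {
        "@type": "MerchantReturnPolicy",
        "applicableCountry": "US",
        "returnPolicyCategory": "https://schema.org/MerchantReturnFiniteReturnWindow",
        "merchantReturnDays": 30,
        "returnMethod": "https://schema.org/ReturnByMail",
        "returnFees": "https://schema.org/FreeReturn"
      }
    }
    </script>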
Loyalty Program Markup Must Be Separate
For loyalty programs, Google now emphasizes that the MemberProgram structured data must be defined under the Organization markup, either on a separate page or in Merchant Center.
While loyalty benefits like member pricing and points can still be referenced at the product level via UnitPriceSpecification, the program structure itself must be maintained separately.
“To specify the loyalty benefits… separately add UnitPriceSpecification markup under your Offer structured data markup.”
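As a hedged sketch of that separation (property names follow Google’s loyalty program documentation, but values are placeholders, so verify against the current docs), the program itself lives under Organization:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Organization",
      "name": "Example Store",
      "hasMemberProgram": {
        "@type": "MemberProgram",
        "name": "Example Rewards",
        "hasTiers": {
          "@type": "MemberProgramTier",
          "@id": "https://example.com/#rewards-gold",
          "name": "Gold"
        }
      }
    }
    </script>

The member price itself would then sit on the product’s Offer as a UnitPriceSpecification whose validForMemberTier references the tier’s @id.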
What’s Not Supported
Google’s documentation now states that shipping discounts and extended return windows offered as loyalty perks aren’t supported in structured data.
While merchants may still offer these benefits, they won’t be eligible for enhanced display in Google Search results.
This is particularly relevant for businesses that advertise such benefits prominently within loyalty programs.
Why It Matters
The changes don’t introduce new capabilities, but they clarify implementation rules that have been inconsistently followed or interpreted.
Merchants relying on offer-level markup for return policies or embedding loyalty programs directly in product offers may need to restructure their data.
Here are some next steps to consider:
Audit existing markup to ensure return policies and loyalty programs are defined at the correct levels.
Use product-level return policies only when needed, such as for exceptions.
Separate loyalty program structure from loyalty benefits, using MemberProgram under Organization, and validForMemberTier under Offer.
Staying compliant with these updated guidelines ensures eligibility for structured data features in Google Search and Shopping.