WordPress AI Engine Plugin Vulnerability Affects Up To 100,000 Websites via @sejournal, @martinibuster

A security advisory, the fourth this month, was issued for the AI Engine WordPress plugin, which is installed on over 100,000 websites. Rated 8.8 out of 10, the vulnerability enables attackers with only subscriber-level authentication to upload malicious files when the REST API is enabled.

AI Engine Plugin: Fifth Vulnerability In 2025

This is the fourth vulnerability discovered in the AI Engine plugin in July, following the first one of the year discovered in June, making a total of five vulnerabilities discovered in the plugin so far in 2025. There were nine vulnerabilities discovered in 2024, one of which was rated 9.8 because it enabled unauthenticated attackers to upload malicious files, plus another rated 9.1 that also enabled arbitrary uploads.

Authenticated (Subscriber+) Arbitrary File Upload

The latest vulnerability enables authenticated attackers to upload arbitrary files. What makes this exploit more dangerous is that it requires only subscriber-level authentication to take advantage of the security weakness. That isn’t as bad as a vulnerability that requires no authentication at all, but it’s still rated 8.8 on a scale of 1 to 10.

Wordfence describes the vulnerability as being due to missing file type validation in a function related to the REST API in versions 2.9.3 and 2.9.4.

File type validation is a security measure typically used within WordPress to make sure that the content of a file matches the type of file being uploaded to the website.
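To illustrate the concept, here is a minimal Python sketch of a file type check (purely illustrative; the plugin itself is written in PHP, and the function names and allowlist below are assumptions, not the plugin’s actual code). The idea is to pair an extension allowlist with a magic-byte check so a PHP payload can’t slip through under an image extension:

```python
import os

# Extension -> expected leading bytes ("magic numbers") for allowed image types
ALLOWED_TYPES = {
    ".png": b"\x89PNG\r\n\x1a\n",
    ".gif": b"GIF8",
    ".jpg": b"\xff\xd8\xff",
}

def is_valid_upload(filename: str, data: bytes) -> bool:
    """Accept a file only if its extension is on the allowlist AND the
    file content begins with the matching magic bytes."""
    ext = os.path.splitext(filename.lower())[1]
    magic = ALLOWED_TYPES.get(ext)
    if magic is None:
        return False  # e.g. a .php upload is rejected outright
    return data.startswith(magic)
```

A `.php` script renamed to `.png` fails the magic-byte check, and a raw `.php` upload fails the allowlist; checks of this kind are what the vulnerable versions reportedly lacked in a REST API upload path.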

According to Wordfence:

“This makes it possible for authenticated attackers, with Subscriber-level access and above, to upload arbitrary files on the affected site’s server when the REST API is enabled, which may make remote code execution possible.”

Users of the AI Engine plugin are advised to update to version 2.9.5 or later.

The plugin changelog for version 2.9.5 shares what was updated:

“Fix: Resolved a security issue related to SSRF by validating URL schemes in audio transcription and sanitizing REST API parameters to prevent API key misuse.

Fix: Corrected a critical security vulnerability that allowed unauthorized file uploads by adding strict file type validation to prevent PHP execution.”

Featured Image by Shutterstock/Jiri Hera

B2B Marketing Is Starting to Look a Lot Like B2C (And It’s Working) via @sejournal, @MattGSouthern

B2B marketers are taking a page from the B2C playbook and seeing real results.

According to LinkedIn’s B2B Marketing Benchmark Report, strategies once considered too informal for business audiences, like short-form video and influencer collabs, are now central to building trust and driving growth.

The study, based on responses from 1,500 senior marketers across six countries, found that 94% believe trust is the key to success in B2B.

But many brands are moving away from traditional lead-gen tactics and turning instead to emotionally resonant content and credible voices.

Lee Moskowitz, Growth Marketer and Podcast Host at Lee2B, is quoted in the report:

“We’re in an era of ‘AI slop,’ long sales cycles and growing buying committees. Brands need to build trust, prove their expertise and earn their place in the buying process.”

This shift toward more consumer-style tactics is evident in the adoption of video content across B2B teams.

B2B Video Marketing Hits a Tipping Point

Video is now foundational to B2B marketing, with 78% of marketers including it in their programs and over half planning to increase investments in the coming year.

Screenshot from: youtube.com/@LinkedInMktg, July 2025.

The most successful teams aren’t using video in isolation; they’re building multi-channel strategies that map to different funnel stages.

According to LinkedIn’s data, marketers with a video strategy are:

  • 2.2x more likely to say their brand is well trusted
  • 1.8x more likely to say their brand is well known

Popular formats include short-form social clips, brand storytelling, and customer testimonials. Content types long associated with B2C engagement are now proving effective in B2B.

Screenshot from: linkedin.com/business/marketing/blog/marketing-collective/2025-b2b-marketing-benchmar-the-video-influence-effect-starts-with-trust, July 2025.

AJ Wilcox, founder of B2Linked, states in the report:

“Capturing that major B2B deal requires trust, and nothing builds trust faster than personal video content. I feel more trusting of a brand after watching a 1-min clip of their founder talking than if I read five of their blog posts.”

B2B Influencer Marketing Moves Into the Mainstream

Fifty-five percent of marketers in the study said they now work with influencers. The top reasons include trust, authenticity, and credibility.

B2B influencers are typically subject matter experts, practitioners, or respected voices in their fields. And their impact appears to be tied to business outcomes: 84% of marketers using influencer marketing expect budget increases next year, compared to just 58% of non-users.

Brendan Gahan, CEO and Co-Founder of Creator Authority, states:

“This feels like a YouTube moment. LinkedIn is entering that same phase now. It already generates more weekly comments than Reddit. Its creator ecosystem is thriving and growing fast.”

Buyers trust people they relate to. Marketers are shifting their influencer strategies to reflect that, prioritizing alignment and authority over follower counts.

Screenshot from: linkedin.com/business/marketing/blog/marketing-collective/2025-b2b-marketing-benchmar-the-video-influence-effect-starts-with-trust, July 2025.

What This Means

Trust signals are becoming more important across the board, especially as search engines continue to emphasize experience, expertise, authoritativeness, and trustworthiness (E-E-A-T). Relying on blog posts alone may no longer be enough to demonstrate what your brand stands for.

Video gives you a more personal, credible way to show expertise, whether it’s a founder explaining your product or a customer sharing their experience.

For long sales cycles and complex buying decisions, what’s working now looks a lot more human: authentic voices, visible experts, and content that’s easy to connect with.


Featured Image: Roman Samborskyi/Shutterstock

Why Your SEO Isn’t Working, And It’s Not The Team’s Fault via @sejournal, @billhunt

Over the years, I have been asked to audit numerous enterprise search programs and help transform them into world-class solutions.

Time and again, I found that the SEO teams were smart, capable, and executing the playbook, but the results weren’t materializing.

Rankings were volatile. Organic traffic plateaued. The executive team grew frustrated. Eventually, someone asked the inevitable: “Is our SEO team underperforming?”

Most of the time, the answer was no. The team wasn’t failing; the system around them was.

This article explores the structural, organizational, and leadership-level reasons why SEO fails inside even the most sophisticated enterprises.

Spoiler: It has little to do with keyword research or broken links, and everything to do with the invisible walls that constrain real performance.

It builds on themes from my article, “The New Role Of SEO In The Age Of AI,” where I explore how SEO is evolving into a broader organizational discipline, one rooted in systems thinking, structured content, and strategic alignment.

Misdiagnosing The Problem: SEO As A Siloed Function

In most companies, SEO is still viewed as a tactical function buried within marketing. It’s rarely integrated into upstream product planning, development processes, or digital governance.

So, when organic traffic and performance lag, leadership looks at the SEO team’s workflows, agency partners, or performance dashboard, but not at the system that surrounds them.

That’s like blaming the pit crew when the car hasn’t been upgraded in years.

5 Structural Reasons SEO Doesn’t Deliver

And now, in the AI era, there’s a new layer of complexity: the platform itself may be working against you.

Generative engines and search assistants are not just routing traffic; they’re rewriting how discovery happens.

If your content isn’t structured to be consumed and credited by AI, then even the best efforts by your SEO team won’t yield results.

Visibility isn’t just earned through optimization; it’s granted by systems trained to synthesize, summarize, and, sometimes, sidestep attribution entirely.

Here are the most common issues I see inside underperforming organizations:

1. No Executive Ownership Of Visibility

Every SEO team has the all-too-common story of being uninformed about a technical or content update until after it has already occurred, and then being expected to recover the lost performance magically.

That wasn’t an isolated oversight; it was an artifact of a siloed organization that didn’t truly value SEO.

When significant changes to the site’s architecture, platforms, or content workflows occur without input from search specialists, visibility suffers, regardless of the team’s skill level.

SEO success often hinges on decisions made far outside the SEO team’s control: site architecture, content management system (CMS) capabilities, translation workflows, and legal restrictions.

If no one at the leadership level owns findability as an outcome, SEO efforts get buried under technical debt and decision inertia.

2. Misaligned Incentives

SEO is a long-game discipline, but quarterly performance, traffic deltas, and campaign outcomes are the metrics most teams focus on.

When teams are rewarded for volume, not visibility, they focus on what’s easy to publish, not what’s hard to get discovered.

3. Content Without Strategy

In today’s search landscape, content must not only be helpful, but it must also be interpretable by machines. AI systems increasingly determine what gets surfaced, cited, or synthesized into answers.

If your content lacks structure, clarity, or semantic relevance, it may never reach the end user. This isn’t a failure of effort; it’s a failure to adapt to how visibility is brokered in an AI-first environment.

Companies often produce massive volumes of content with little to no strategy for discoverability, relevance, or user need.

One of the biggest mindset shifts needed is moving from “just accurate” to “genuinely helpful” content: information that not only ranks but also resolves a user’s need, aligns with their search intent, and builds trust across formats and platforms.

If content isn’t structured for AI interpretation, indexed efficiently, or mapped to actual search behavior, it’s noise, not value.

4. Tech Bottlenecks And CMS Handcuffs

The SEO team may know what needs to be fixed, but can’t implement changes due to rigid CMS limitations, lack of dev resources, or cross-team politics.

SEO becomes a report generator, not a performance enabler.

5. Lack Of A Visibility Operating Model

Few organizations have a system for aligning product, content, UX, dev, and analytics around shared visibility goals.

Without a repeatable model and clearly identified roles, data handoffs, and escalation paths, SEO success is ad hoc and unsustainable.

It’s Not A Talent Problem. It’s A Systems Problem

Most SEO teams are aware of what needs to happen. But, unless they’re empowered structurally — with access, authority, and allies — they’re set up to fail.

It’s like asking a builder to construct a skyscraper with no blueprints, no shared plan, and no ability to move materials.

When executives recognize this as a systems issue, not a personnel one, transformation becomes possible.

What The C-Suite Should Be Asking Instead

Rather than “Why isn’t our SEO working?” leadership should be asking:

  • Who owns visibility at the organizational level?
  • Do our teams have a shared model for findability?
  • Are we rewarding the behaviors that lead to durable visibility, or just short-term volume?
  • Can our content and site architecture be understood by AI engines, as well as by humans?
  • Are our internal key performance indicators (KPIs) aligned with these new external discovery realities?

Reframing SEO As Infrastructure, Not Just A Channel

Modern SEO now sits at the intersection of content strategy, data modeling, and AI accessibility.

If you’re not designing your digital presence to be ingested by large language models or cited by answer engines, you’re ceding control to the platforms.

You’re optimizing for a web that no longer exists, and leaving performance on the table for competitors who’ve embraced AI-mode discoverability.

The most successful organizations treat SEO like digital infrastructure, a foundational capability embedded into everything from product design to knowledge management.

They invest in:

  • Schema and structured data governance.
  • Visibility Service Level Agreements (SLAs) across departments.
  • Shared taxonomies and content architectures.
  • Measurement frameworks that include AI surfacing and non-click impact.
  • Collaboration and knowledge sharing.

Final Thought: Clear The Path, Then Judge Performance

If your SEO isn’t delivering, don’t start by blaming the team. Start by auditing the system around them. Fix the structural blockers. Build the operating model. Assign executive ownership.

Then, and only then, can you ask whether the team is performing, because even the best F1 driver can’t win a race if the vehicle they’ve been given is unreliable, outdated, or built without alignment among its systems.


Editor’s note: This article is the first in a series from Bill Hunt set to be published monthly. Each article will build on the others.

The series offers a clear, differentiated voice to speak the language of senior leadership while honoring the technical integrity of search.



Featured Image: Zamrznuti tonovi/Shutterstock

From B2B & B2C To B2Me: How AI Is Revealing The True Potential Of Individual-Centric Marketing via @sejournal, @purnavirji

A few weeks ago, I fell down a rabbit hole of cottagecore TikTok and Japanese jazz-funk from the ’70s. I didn’t search for it. I didn’t ask for it. But, somehow, my For You Page and Spotify knew. They knew before I did.

That’s the power of what I call B2Me, from broad strokes to a segment of one. And it’s changing everything.

As marketers, we’re moving from static personas to living identity graphs. As audiences, we’ve gone from craving options to craving intuition. We want brands that just get us.

Picture ads that shift based on your inferred mood, product recommendations that feel like they were plucked straight from your subconscious, content around what you were only just thinking about.

We’re marketing to real people in real time. And the brands that get it right, get rewarded with clicks, loyalty, and trust.

Demographics Were Always Broken (AI Just Made It Obvious)

For decades, we, marketers, clung to personas. Those convenient, yet ultimately flawed, cardboard cutouts like “Marketing Mike,” who supposedly loved artisanal everything, skateboarded to work, and breakfasted on avocado toast.

Meanwhile, the real Mike was out buying a motorcycle, years past his skateboarding phase, and eating gas station hotdogs.

“Women aged 25-34 with college degrees who live in New York and work in marketing” tells you nothing about what Natasha actually wants, what she’s struggling with, or what would make her say yes.

For too long, we’ve marketed to people who look like our customers instead of those who act like them.

Even today, many companies claiming “personalized marketing” are still relying on a demographic infrastructure from 2019, if not earlier. It’s a bit like driving forward while looking in the rearview mirror.

Demographics were always stereotypes in a data suit. AI strips that away and sees the person underneath.

That’s the essence of B2Me marketing: connecting with individuals based on observed behavior, not assumed demographics.

Decisions happen in fleeting, emotional moments. AI recognizes intent in real time, often before we do.

When was the last time an algorithm recommended something you didn’t know you wanted, but it was exactly what you wanted? Creepy? Maybe. Useful? Yes.

That’s the emotional layer AI is tapping into. It’s going beyond tracking behavior to interpreting intent. Frustration. Curiosity. Readiness. These are signals. And our job as marketers is to listen when they’re telling us, often without saying a word.

What True B2Me Looks Like

Coca-Cola tested this in Saudi Arabia. Instead of targeting “Millennials,” its AI agent analyzed millions of social posts across platforms like TikTok and LinkedIn, identifying people expressing cravings for fast food.

It then delivered 828,000 personalized coupon ads for discounted Coke products – 20,000 of which were clicked on – all without human intervention.

Overall, it executed roughly 8 million autonomous actions on behalf of its marketing team. That’s behavioral precision at unprecedented scale.

Meanwhile, a project management software company I observed found that its highest-converting customers weren’t the enterprise IT directors its demographic models targeted.

It was mid-level operations managers, the ones actually wrestling with the workflows. They weren’t filling out forms. But, they were driving the deals. The invisible layer of influence was profound.

B2Me strategies create compounding advantages. Each interaction refines AI’s understanding of individual patterns, leading to more precise future targeting. This can translate to:

  • Faster, more accurate intent recognition.
  • Superior message-market fit.
  • Measurably higher conversion rates.
  • Enhanced customer lifetime value.

Why Most “B2Me” Efforts Fail

Because they’re not really B2Me. They’re just demographic micro-segmentation with fancier plumbing.

I watched a SaaS company spend six months building an “AI-powered individual targeting system.” Its big breakthrough? Sending different subject lines to “Marketing Managers” versus “Marketing Directors.”

That’s not B2Me. That’s lipstick on a persona.

True B2Me watches behavior. It asks: What are they doing? What are they feeling? What are they trying to solve? And it zeroes in on the behavioral patterns that predict buying intent.

B2Me thrives on living identity graphs that continuously evolve based on what individuals consume, click, purchase, and how they navigate content.

Salesforce, through its focus on comprehensive customer data within frameworks like Customer 360, enables businesses to leverage behavioral signals, such as rapid tool adoption or shifts in company structure, to identify opportunities for digital transformation and improve targeting effectiveness.

These “digital transformation stress signals” convert significantly higher than demographic targeting, regardless of company size.

3 Ways To Implement B2Me

1. Target Behavior, Not Job Titles

Traditional: “Target CISOs at Fortune 500 companies.”

B2Me: “Target individuals researching security compliance solutions.”

Job titles aren’t always accurate predictors of buying behavior. Your best prospects might not match your ideal customer profile (ICP) on paper, but they’re showing you who they are through their actions.

2. Time Messages To Emotional States

AI’s true power lies in its ability to detect human intent and emotional states.

It can sense things like frustration (rapid scrolling, quick exits), curiosity (deep engagement, repeated visits), and buying readiness (pricing page visits, competitor research). This goes beyond what someone does to how they do it.

HubSpot’s platform and integrations support outreach timing based on behavioral frustration signals such as prospects engaging with content about data migration headaches or sales team bottlenecks.

3. Predict Needs Before Searches

Zoom capitalized on early “remote work scaling signals,” i.e., companies actively researching collaboration tools, posting jobs for distributed teams, and consuming work-from-home content, to scale rapidly during the pandemic.

This foresight allowed it to engage prospects and capture demand before competitors even fully recognized the shift.

Getting Started

1. Map Real Customer Behavior

Begin by auditing your current targeting. Most companies, from my observation, are still operating at 80% demographics, 20% behavior. It’s time to work on inverting that ratio.

Document what your actual best customers do before they buy:

  • What content truly resonates?
  • What questions consistently emerge during sales conversations?
  • What research triggers precede their engagement?
  • What are their preferred engagement channels?

2. Build Behavioral Audiences

Build behavioral audiences using the tools you already have in your search and social platforms.

These platforms are already prioritizing behavioral signals over static demographics, so lean into their capabilities.

Brand Still Wins

AI can distill patterns, but it can’t feel. It segments behavior, but it doesn’t grasp human motivation. It predicts clicks, but it can’t forge connection.

This is where brand is essential. It can serve as a definitive advantage in AI-mediated decisions.

When someone asks an AI assistant for customer relationship management (CRM) recommendations, which brands show up? And more importantly, how are they described?

You’re not just competing for human memory anymore. You’re competing for AI memory. And your brand is the shortcut.

When an AI recommends brands, it’s synthesizing reputation and consistency across thousands of complex touchpoints.

We can’t talk about brand without talking about trust.

We’ve always said “trust matters.” Now, AI exposes what trust really is: the gap between what you can do and what you should do.

Remember that Coca-Cola campaign? Millions of social posts analyzed and 828,000 personalized coupons delivered autonomously. Impressive results … and also a few debates about “surveillance marketing.”

AI exposes where trust was always fragile. Take surge pricing. AI can adjust rates based on your browser history, your device, even your cursor hesitation.

But, when customers notice? “Smart” becomes “sneaky.” Trust evaporates. Remember, trust isn’t a feature you add later. It’s the foundation.

The Right People At The Right Time With The Right Message

B2Me is about fundamentally better understanding your customer. AI can help us see patterns. But, only we can make meaning. Only we can build trust. Only we can decide what matters.

B2Me is empathy at scale, helping you see people, not personas. It empowers you to show up in the moments that matter, even the ones we’ll never see.

B2Me bridges the gap between what’s technically possible and what’s strategically smart.

You don’t need to have it all figured out tomorrow. You just need to start. And start by remembering that the most powerful force in marketing is still a thinking human.



Featured Image: Paulo Bobita/Search Engine Journal

Research Shows Differences In ChatGPT And Google AIO Answers via @sejournal, @martinibuster

New research from enterprise search marketing platform BrightEdge discovered differences in how Google and ChatGPT surface content. These differences matter to digital marketers and content creators because they show how content is recommended by each system. Recognizing the split enables brands to adapt their content strategies to stay relevant across both platforms.

BrightEdge’s findings were surfaced through an analysis of B2B technology, education, healthcare, and finance queries. It’s possible to cautiously extrapolate the findings to other niches where there could be divergences in how Google and ChatGPT respond, but that’s highly speculative, so this article won’t do that.

Core Differences: Task Vs. Information Orientation

BrightEdge’s research discovered that ChatGPT and Google AI Overviews take two different approaches to helping users take action. ChatGPT is more likely to recommend tools and apps, behaving in the role of a guide for making immediate decisions. Google provides informational content that encourages users to read before acting. This difference matters for SEO because it enables content creators and online stores to understand how their content is processed and presented to users of each system.

BrightEdge explains:

“In task-oriented prompts, ChatGPT overwhelmingly suggests tools and apps directly, while Google continues to link to informational content. While Google thrives as a research assistant, ChatGPT acts like a trusted coach for decision making, and that difference shapes which tool users instinctively choose for different needs.”

Divergence On Action-Oriented Queries

ChatGPT and Google tend to show similar kinds of results when users are querying for comparisons, but the results begin to diverge when the user intent implies they want to act. BrightEdge found that prompts about credit card comparisons or learning platforms generated similar kinds of results.

Questions with an action intent, like “how to create a budget” or “learn Python,” lead to different answers. ChatGPT appears to treat action intent prompts as requiring a response with tools, while Google treats them as requiring information.

BrightEdge notes that healthcare has the highest rate of divergence:

“At 62% divergence, healthcare demonstrates the most significant split between platforms.

  • When prompts pertain to symptoms or medical information, both ChatGPT and Google will mention the CDC and The Mayo Clinic.
  • However, when prompted to help with things like “How to find a doctor,” ChatGPT pushes users towards Zocdoc, while Google points to hospital directories.”

The B2B technology niche has the second-highest level of divergence:

“With 47% divergence, B2B tech shows substantial platform differences.

  • When comparing technology, such as cloud platforms, both suggest AWS and Azure.
  • When asked “How to deploy things (such as specific apps),” ChatGPT relies on tools like Kubernetes and the AWS CLI, while Google offers tutorials and Stack Overflow.”

Education follows closely behind B2B technology:

“At 45% divergence, education follows the same trend.

  • When comparing “Best online learning platforms,” both platforms surface Coursera, EdX, and LinkedIn Learning.
  • When a user’s prompt pertains to learning a skill such as “How to learn Python,” ChatGPT recommends Udemy, whereas Google directs users to user-generated content hubs like GitHub and Medium.”

Finance shows the lowest levels of divergence, at 39%.

BrightEdge concludes that this represents a “fundamental shift” in how AI platforms interpret intent, which means that marketers need to examine the intent behind the search results for each platform and make content strategy decisions based on that research.

Tools Versus Topics

BrightEdge uses the example of the prompt “What are some resources to help plan for retirement?” to show how Google and ChatGPT differ. ChatGPT offers calculators and tools that users can act on, while Google suggests topics for further reading.

Screenshot Of ChatGPT Responding With Financial Tools

There’s a clear difference in the search experience for users. Marketers, SEOs, and publishers should consider how to meet both types of expectations: practical, action-based responses from ChatGPT and informational content from Google.

Takeaways

  • Split In User Intent Interpretation:
    Google interprets queries as requests for information, while ChatGPT tends to interpret many of the same queries as a call for action that’s solved by tools.
  • Platform Roles:
    ChatGPT behaves like a decision-making coach, while Google acts as a research assistant.
  • Domain-Specific Differences:
    Healthcare has the highest divergence (62%), especially in task-based queries like finding a doctor.
    B2B Technology (47%) and Education (45%) also show significant splits in how guidance is delivered.
    Finance shows the least divergence (39%) in how results are presented.
  • Tools vs. Topics:
    ChatGPT recommends actionable resources; Google links to authoritative explainer content.
  • SEO Insight:
    Content strategies must reflect each platform’s interpretation of intent. For example, creating actionable responses for ChatGPT and comprehensive informational content for Google. This may even mean creating and promoting a useful tool that can surface in ChatGPT.

BrightEdge’s research shows that, for some queries, Google and ChatGPT interpret the same user intent in profoundly different ways. While Google treats action-oriented queries as a prompt to deliver informational content, ChatGPT responds by recommending tools and services users can immediately act on. This divergence underscores the need for marketers and content creators to understand when ChatGPT delivers actionable responses so they can create platform-specific content and web experiences.

Read the original research:

Brand Visibility: ChatGPT and Google AI Approaches by Industry

Featured Image by Shutterstock/wenich_mit

How To Win In Generative Engine Optimization (GEO) via @sejournal, @maltelandwehr

This post was sponsored by Peec.ai. The opinions expressed in this article are the sponsor’s own.

The first step of any good GEO campaign is creating something that LLM-driven answer machines actually want to link out to or reference.

GEO Strategy Components

Think of experiences you wouldn’t reasonably expect to find directly in ChatGPT or similar systems:

  • Engaging content like a 3D tour of the Louvre or a virtual reality concert.
  • Live data like prices, flight delays, available hotel rooms, etc. While LLMs can integrate this data via APIs, I see the opportunity to capture some of this traffic for the time being.
  • Topics that require EEAT (experience, expertise, authoritativeness, trustworthiness).

LLMs cannot have first-hand experience. But users want it. LLMs are incentivized to reference sources that provide first-hand experience. That’s just one of the things to keep in mind, but what else?

We need to differentiate between two approaches: influencing foundational models versus influencing LLM answers through grounding. The first is largely out of reach for most creators, while the second offers real opportunities.

Influencing Foundational Models

Foundational models are trained on fixed datasets and can’t learn new information after training. For current models like GPT-4, it is too late – they’ve already been trained.

But this matters for the future: imagine a smart fridge stuck with o4-mini from 2025 that might – hypothetically – favor Coke over Pepsi. That bias could influence purchasing decisions for years!

Optimizing For RAG/Grounding

When LLMs can’t answer from their training data alone, they use retrieval augmented generation (RAG) – pulling in current information to help generate answers. AI Overviews and ChatGPT’s web search work this way.

As SEO professionals, we want three things:

  1. Our content gets selected as a source.
  2. Our content gets quoted most within those sources.
  3. Other selected sources support our desired outcome.

Concrete Steps To Succeed With GEO

Don’t worry, it doesn’t take rocket science to optimize your content and brand mentions for LLMs. Actually, plenty of traditional SEO methods still apply, with a few new SEO tactics you can incorporate into your workflow.

Step 1: Be Crawlable

Sounds simple but it is actually an important first step. If you aim for maximum visibility in LLMs, you need to allow them to crawl your website. There are many different LLM crawlers from OpenAI, Anthropic & Co.

Some of them behave so badly that they can trigger scraping and DDoS preventions. If you are automatically blocking aggressive bots, check in with your IT team and find a way to not block LLMs you care about.

If you use a CDN like Fastly or Cloudflare, make sure LLM crawlers are not blocked by default settings.
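One way to sanity-check this step is to parse your robots.txt against the user agents of known LLM crawlers (GPTBot, ClaudeBot, PerplexityBot, Google-Extended). A minimal sketch using Python’s standard library – the sample robots.txt below is hypothetical, and the crawler list is a non-exhaustive sample:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt that blocks one LLM crawler but allows the rest.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow:
"""

# A sample of known LLM crawler user agents (non-exhaustive).
LLM_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

def blocked_crawlers(robots_txt: str, path: str = "/") -> list[str]:
    """Return the LLM crawlers that may not fetch `path` per robots.txt."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return [ua for ua in LLM_CRAWLERS if not parser.can_fetch(ua, path)]

print(blocked_crawlers(ROBOTS_TXT))  # → ['GPTBot']
```

Running this against your live robots.txt (via `RobotFileParser.set_url` and `read`) tells you which LLMs you are shutting out before you involve your IT team.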

Step 2: Continue Gaining Traditional Rankings

The most important GEO tactic is as simple as it sounds. Do traditional SEO. Rank well in Google (for Gemini and AI Overviews), Bing (for ChatGPT and Copilot), Brave (for Claude), and Baidu (for DeepSeek).

Step 3: Target the Query Fanout

The current generation of LLMs actually does a little more than simple RAG. They generate multiple queries. This is called query fanout.

For example, when I recently asked ChatGPT “What is the latest Google patent discussed by SEOs?”, it performed two web searches for “latest Google patent discussed by SEOs patent 2025 SEO forum” and “latest Google patent SEOs 2025 discussed”.

Advice: Check the typical query fanouts for your prompts and try to rank for those keywords as well.

Typical fanout patterns I see in ChatGPT are appending the term “forums” when I ask what people are discussing and appending “interview” when I ask questions related to a person. The current year (2025) is often added as well.

Beware: fanout patterns differ between LLMs and can change over time. Patterns we see today may not be relevant anymore in 12 months.
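The patterns above can be illustrated with a toy generator. The rewrite rules here are hypothetical – real fanout behavior varies by LLM and drifts over time, so treat this purely as a mental model, not a reverse-engineered spec:

```python
# Toy illustration of query fanout: one prompt expands into several
# search queries. The rules below are hypothetical examples of the
# patterns described in the text, not actual LLM behavior.
def fanout_variants(prompt: str, year: int = 2025) -> list[str]:
    variants = [prompt]
    lowered = prompt.lower()
    if "discuss" in lowered:                 # discussion prompts often gain "forum"
        variants.append(f"{prompt} forum")
    if "who is" in lowered:                  # person prompts often gain "interview"
        variants.append(f"{prompt} interview")
    variants.append(f"{prompt} {year}")      # the current year is often appended
    return variants

print(fanout_variants("latest Google patent discussed by SEOs"))
```

If you log the actual searches your target LLM fires for your key prompts, you can build a real version of this table and rank for those derived keywords.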

Step 4: Keep Consistency Across Your Brand Mentions

This is something simple everyone should do – both as a person and an enterprise. Make sure you are consistently described online. On X, LinkedIn, your own website, Crunchbase, GitHub – always describe yourself the same way.

If your X and LinkedIn profiles say you are a “GEO consultant for small businesses”, don’t change it to “AIO expert” on GitHub and “LLMO Freelancer” in your press releases.

I have seen people achieve positive results within a few days on ChatGPT and Google AI Overviews by simply having a consistent self-description across the web. This also applies to PR coverage – the more and better coverage you can obtain for your brand, the more likely LLMs are to parrot it back to users.

Step 5: Avoid JavaScript

As an SEO, I always ask for as little JavaScript usage as possible. As a GEO professional, I demand it!

Most LLM crawlers cannot render JavaScript. If your main content is hidden behind JavaScript, you are out.

Step 6: Embrace Social Media & UGC

Unsurprisingly, LLMs seem to rely heavily on Reddit and Wikipedia. Both platforms offer user-generated content on virtually every topic. And thanks to multiple layers of community-driven moderation, a lot of junk and spam is already filtered out.

While both can be gamed, the average reliability of their content is still far better than on the internet as a whole. Both are also regularly updated.

Reddit also gives LLM labs insight into how people discuss topics online, what language they use to describe different concepts, and knowledge of obscure niche topics.

We can reasonably assume that moderated UGC on platforms like Reddit, Wikipedia, Quora, and Stack Overflow will stay relevant for LLMs.

I do not advocate spamming these platforms. However, if you can influence how you and competitors show up there, you might want to do so.

Step 7: Create For Machine-Readability & Quotability

Write content that LLMs understand and want to cite. No one has figured this one out perfectly yet, but here’s what seems to work:

  • Use declarative and factual language. Instead of writing “We are kinda sure this shoe is good for our customers”, write “96% of buyers self-reported being happy with this shoe.”
  • Add schema. It has been debated many times. Recently, Fabrice Canel (Principal Product Manager at Bing) confirmed that schema markup helps LLMs to understand your content.
  • If you want to be quoted in an already existing AI Overview, have content with similar length to what is already there. While you should not just copy the current AI Overview, having high cosine similarity helps. And for the nerds: yes, given normalization, you can of course use the dot product instead of cosine similarity.
  • If you use technical terms in your content, explain them. Ideally in a simple sentence.
  • Add summaries of long text paragraphs, lists of reviews, tables, videos, and other types of difficult-to-cite content formats.

Step 8: Optimize Your Content

The original GEO paper: GEO: Generative Engine Optimization (arXiv:2311.09735)

If we look at GEO: Generative Engine Optimization (arXiv:2311.09735), What Evidence Do Language Models Find Convincing? (arXiv:2402.11782v1), and similar scientific studies, the answer is clear: it depends!

To be cited for some topics in some LLMs, it helps to:

  • Add unique words.
  • Have pro/cons.
  • Gather user reviews.
  • Quote experts.
  • Include quantitative data and name your sources.
  • Use easy-to-understand language.
  • Write with positive sentiment.
  • Add product text with low perplexity (predictable and well-structured).
  • Include more lists (like this one!).

However, for other combinations of topics and LLMs, these measures can be counterproductive.

Until broadly accepted best practices evolve, the only advice I can give is do what is good for users and run experiments.

Step 9: Stick to the Facts

For over a decade, algorithms have extracted knowledge from text as triples like (Subject, Predicate, Object) — e.g., (Lady Liberty, Location, New York). A text that contradicts known facts may seem untrustworthy. A text that aligns with consensus but adds unique facts is ideal for LLMs and knowledge graphs.

So stick to the established facts. And add unique information.
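The triple idea above can be sketched in a few lines. This is a deliberately tiny toy consistency check – real knowledge graphs hold billions of triples and use far more sophisticated entity resolution, and the facts and predicates here are illustrative:

```python
# Toy knowledge check: represent facts as (subject, predicate, object)
# triples and classify new statements against the established set.
KNOWN_FACTS = {
    ("Lady Liberty", "location", "New York"),
}

def check_triple(subject: str, predicate: str, obj: str) -> str:
    """Classify a triple as consistent, contradiction, or novel."""
    if (subject, predicate, obj) in KNOWN_FACTS:
        return "consistent"
    # Same subject and predicate but a different object contradicts consensus.
    if any(s == subject and p == predicate for s, p, _ in KNOWN_FACTS):
        return "contradiction"
    return "novel"  # unique information -- ideal to add alongside the facts

print(check_triple("Lady Liberty", "location", "New Jersey"))  # → contradiction
print(check_triple("Lady Liberty", "height_m", "93"))          # → novel
```

Content that scores “consistent” plus “novel” – aligned with consensus while adding unique facts – is exactly the profile described above.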

Step 10: Invest in Digital PR

Everything discussed here is not just true for your own website. It is also true for content on other websites. The best way to influence it? Digital PR!

The more and better coverage you can obtain for your brand, the more likely LLMs are to parrot it back to users.

I have even seen cases where advertorials were used as sources!

Concrete GEO Workflows To Try

Before I joined Peec AI, I was a customer. Here is how I used the tool – and how I advise our customers to use it.

Learn Who Your Competitors Are

Just like with traditional SEO, using a good GEO tool will often reveal unexpected competitors. Regularly look at a list of automatically identified competitors. For those who surprise you, check in which prompts they are mentioned. Then check the sources that led to their inclusion. Are you represented properly in these sources? If not, act!

Is a competitor referenced because of their PeerSpot profile but you have zero reviews there? Ask customers for a review.

Was your competitor’s CEO interviewed by a YouTuber? Try to get on that show as well. Or publish your own videos targeting similar keywords.

Is your competitor regularly featured on top 10 lists where you never make it to the top 5? Offer the publisher who created the list an affiliate deal they cannot decline. With the next content update, you’re almost guaranteed to be the new number one.

Understand the Sources

When performing search grounding, LLMs rely on sources.

Typical LLM Sources: Reddit & Wikipedia

Look at the top sources for a large set of relevant prompts. Ignore your own website and your competitors for a second. You might find some of these:

  • A community like Reddit or X. Become part of the community and join the discussion. X is your best bet to influence results on Grok.
  • An influencer-driven website like YouTube or TikTok. Hire influencers to create videos. Make sure to instruct them to target the right keywords.
  • An affiliate publisher. Buy your way to the top with higher commissions.
  • A news and media publisher. Buy an advertorial and/or target them with your PR efforts. In certain cases, you might want to contact their commercial content department.

You can also check out this in-depth guide on how to deal with different kinds of source domains.

Target Query Fanout

Once you have observed which searches are triggered by query fanout for your most relevant prompts, create content to target them.

On your own website. With posts on Medium and LinkedIn. With press releases. Or simply by paying for article placements. If it ranks well in search engines, it has a chance to be cited by LLM-based answer engines.

Position Yourself for AI-Discoverability

Generative Engine Optimization is no longer optional – it’s the new frontline of organic growth. At Peec AI, we’re building the tools to track, influence, and win in this new ecosystem. We currently see clients growing their LLM traffic by 100% every two to three months – sometimes with up to 20x the conversion rate of typical SEO traffic!

Whether you’re shaping AI answers, monitoring brand mentions, or pushing for source visibility, now is the time to act. The LLMs consumers will trust tomorrow are being trained today.


Image Credits

Featured Image: Image by Peec.ai. Used with permission.

What you may have missed about Trump’s AI Action Plan

A number of the executive orders and announcements coming from the White House since Donald Trump returned to office have painted an ambitious vision for America’s AI future—crushing competition with China, abolishing “woke” AI models that suppress conservative speech, jump-starting power-hungry AI data centers. But the details have been sparse. 

The White House’s AI Action Plan, released last week, is meant to fix that. Many of the points in the plan won’t come as a surprise, and you’ve probably heard of the big ones by now. Trump wants to boost the buildout of data centers by slashing environmental rules; withhold funding from states that pass “burdensome AI regulations”; and contract only with AI companies whose models are “free from top-down ideological bias.”

But if you dig deeper, certain parts of the plan that didn’t pop up in any headlines reveal more about where the administration’s AI plans are headed. Here are three of the most important issues to watch. 

Trump is escalating his fight with the Federal Trade Commission

When Americans get scammed, they’re supposed to be helped by the Federal Trade Commission. As I wrote last week, the FTC under President Biden increasingly targeted AI companies that overhyped the accuracy of their systems, as well as deployments of AI it found to have harmed consumers. 

The Trump plan vows to take a fresh look at all the FTC actions under the previous administration as part of an effort to get rid of “onerous” regulation that it claims is hampering AI’s development. The administration may even attempt to repeal some of the FTC’s actions entirely. This would weaken a major AI watchdog agency, but it’s just the latest in the Trump administration’s escalating attacks on the FTC. Read more in my story.

The White House is very optimistic about AI for science

The opening to the AI Action Plan describes a future where AI is doing everything from discovering new materials and drugs to “unraveling ancient scrolls once thought unreadable” to making breakthroughs in science and math.

That type of unbounded optimism about AI for scientific discovery echoes what tech companies are promising. Some of that optimism is grounded in reality: AI’s role in predicting protein structures has indeed led to material scientific wins (and just last week, Google DeepMind released a new AI meant to help interpret ancient Latin engravings). But the idea that large language models—essentially very good text prediction machines—will act as scientists in their own right has less merit so far. 

Still, the plan shows that the Trump administration wants to award money to labs trying to make it a reality, even as it has worked to slash the funding the National Science Foundation makes available to human scientists, some of whom are now struggling to complete their research. 

And some of the steps the plan proposes are likely to be welcomed by researchers, like funding to build AI systems that are more transparent and interpretable.

The White House’s messaging on deepfakes is confused

Compared with President Biden’s executive orders on AI, the new action plan is mostly devoid of anything related to making AI safer. 

However, there’s a notable exception: a section in the plan that takes on the harms posed by deepfakes. In May, Trump signed legislation to protect people from nonconsensual sexually explicit deepfakes, a growing concern for celebrities and everyday people alike as generative video gets more advanced and cheaper to use. The law had bipartisan support.

Now, the White House says it’s concerned about the issues deepfakes could pose for the legal system. For example, it says, “fake evidence could be used to attempt to deny justice to both plaintiffs and defendants.” It calls for new standards for deepfake detection and asks the Department of Justice to create rules around it. Legal experts I’ve spoken with are more concerned with a different problem: Lawyers are adopting AI models that make errors such as citing cases that don’t exist, which judges may not catch. This is not addressed in the action plan. 

It’s also worth noting that just days before releasing a plan that targets “malicious deepfakes,” President Trump shared a fake AI-generated video of former president Barack Obama being arrested in the Oval Office.

Overall, the AI Action Plan affirms what President Trump and those in his orbit have long signaled: It’s the defining social and political weapon of our time. They believe that AI, if harnessed correctly, can help them win everything from culture wars to geopolitical conflicts. The right AI, they argue, will help defeat China. Government pressure on leading companies can force them to purge “woke” ideology from their models. 

The plan includes crowd-pleasers—like cracking down on deepfakes—but overall, it reflects how tech giants have cozied up to the Trump administration. The fact that it contains almost no provisions challenging their power shows how their investment in this relationship is paying off.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

This startup wants to use the Earth as a massive battery

The Texas-based startup Quidnet Energy just completed a test showing it can store energy for up to six months by pumping water underground.

Using water to store electricity is hardly a new concept—pumped hydropower storage has been around for over a century. But the company hopes its twist on the technology could help bring cheap, long-duration energy storage to new places.

In traditional pumped hydro storage facilities, electric pumps move water uphill, into a natural or manmade body of water. Then, when electricity is needed, that water is released and flows downhill past a turbine, generating electricity. Quidnet’s approach instead pumps water down into impermeable rock formations and keeps it under pressure so it flows up when released. “It’s like pumped hydro, upside down,” says CEO Joe Zhou.

Quidnet started a six-month test of its technology in late 2024, pressurizing the system. In June, the company was able to discharge 35 megawatt-hours of energy from the well. There was virtually no self-discharge, meaning no energy loss, Zhou says.

Inexpensive forms of energy storage that can store electricity for weeks or months could help inconsistent electricity sources like wind and solar go further for the grid. And Quidnet’s approach, which uses commercially available equipment, could be deployed quickly and qualify for federal tax credits to help make it even cheaper.

However, there’s still a big milestone ahead: turning the pressurized water back into electricity. The company is currently building a facility with the turbines and support equipment to do that—all the components are available to purchase from established companies. “We don’t need to invent new things based on what we’ve already developed today,” Zhou says. “We can now start just deploying at very, very substantial scales.”

That process will come with energy losses. Energy storage systems are typically measured by their round-trip efficiency: how much of the electricity that’s put into the system is returned at the end as electricity. Modeling suggests that Quidnet’s technology could reach a maximum efficiency of about 65%, Zhou says, though some design choices made to optimize for economics will likely cause the system to land at roughly 50%.

That’s less efficient than lithium-ion batteries, but long-duration systems, if they’re cheap enough, can operate at low efficiencies and still be useful for the grid, says Paul Denholm, a senior research fellow at the National Renewable Energy Laboratory.

“It’s got to be cost-competitive; it all comes down to that,” Denholm says.

Lithium-ion batteries, the fastest-growing technology in energy storage, are the target that new forms of energy storage, like Quidnet’s, must chase. Lithium-ion batteries are about 90% cheaper today than they were 15 years ago. They’ve become a price-competitive alternative to building new natural-gas plants, Denholm says.

When it comes to competing with batteries, one potential differentiator for Quidnet could be government subsidies. While the Trump administration has clawed back funding for clean energy technologies, there’s still an energy storage tax credit, though recently passed legislation added new supply chain restrictions.

Starting in 2026, new energy storage facilities hoping to qualify for tax credits will need to prove that at least 55% of the value of a project’s materials are not from foreign entities of concern. That rules out sourcing batteries from China, which dominates battery production today. Quidnet has a “high level of domestic content” and expects to qualify for tax credits under the new rules, Zhou says.

The facility Quidnet is building is a project with utility partner CPS Energy, and it should come online in early 2026. 

The Download: how to store energy underground, and what you may not know about Trump’s AI Action Plan

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

This startup wants to use the Earth as a massive battery

Texas-based startup Quidnet Energy just completed a test showing it can store energy for up to six months by pumping water underground.

Using water to store electricity is hardly a new concept—pumped hydropower storage has been around for over a century. But the company hopes its twist on the technology could help bring cheap, long-duration energy storage to new places. Read the full story.

—Casey Crownhart

What you may have missed about Trump’s AI Action Plan

The executive orders and announcements coming from the White House since Donald Trump returned to office have painted an ambitious vision for America’s AI future, but the details have been sparse. 

The White House’s AI Action Plan, released last week, is meant to fix that. Trump wants to boost the buildout of data centers by slashing environmental rules; withhold funding from states that pass “burdensome AI regulations”; and contract only with AI companies whose models are “free from top-down ideological bias.”

But if you dig deeper, certain parts of the plan that didn’t pop up in any headlines reveal more about where the administration’s AI plans are headed. Here are three of the most important issues to watch.

—James O’Donnell

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Democrats aren’t happy about Trump’s China chip U-turn
They’re worried about the security implications of approving Nvidia chip exports. (WP $)
+ They claim the Trump administration is using export controls as a bargaining chip. (The Hill)
+ Meanwhile, both parties are planning new bills targeting China. (Reuters)

2 US tariffs are at their highest level since before WWII
Trump’s tariff wall appears likely to trigger a global reordering of trade. (FT $)
+ But who picks up the bill? (The Guardian)
+ Sweeping tariffs could threaten the US manufacturing rebound. (MIT Technology Review)

3 Utility companies want Big Tech to pay more for their data centers
Otherwise, rates may end up rising for regular customers. (WSJ $)
+ The data center boom in the desert. (MIT Technology Review)

4 Citizen science is on the rise across the US
Platform iNaturalist is playing a key role in helping to identify new species. (NYT $)
+ How nonprofits and academia are stepping up to salvage US climate programs. (MIT Technology Review)

5 Anthropic is cracking down on Claude power users
Some of its customers are running its AI coding tool 24/7. (TechCrunch)
+ That’s seriously bad news for the environment. (Engadget)
+ We did the math on AI’s energy footprint. Here’s the story you haven’t heard. (MIT Technology Review)

6 MAHA might resurrect psychedelic therapy
Last year, the FDA rejected MDMA therapy. Now, it might get thrown a lifeline. (Wired $)
+ People are using AI to ‘sit’ with them while they trip on psychedelics. (MIT Technology Review)

7 Waymo is launching its robotaxi service in Dallas
In a new partnership with car rental firm Avis, not Uber. (Reuters)
+ It’s expanding steadily, unlike its rival Tesla. (Forbes $)

8 How a promising young coder wound up at DOGE
Luke Farritor has assessed, slashed, and dismantled at least 10 departments. (Bloomberg $)
+ The foundations of America’s prosperity are being dismantled. (MIT Technology Review)

9 This Californian startup’s robot kills fish the Japanese way 🐟
The method is considered the most humane way to kill them. (Semafor)

10 AI is making online shopping hyper-personalized 🛍
By serving up results for searches like “revenge dress to wear to a party in Sicily.” (CNN)

Quote of the day

“Now I’ll click the ‘Verify you are human’ checkbox…this step is necessary to prove I’m not a bot.”

—OpenAI’s new ChatGPT Agent explains how it passes a common internet security checkpoint designed to catch bots just like it, Ars Technica reports.

One more thing

How gamification took over the world

It’s a thought that occurs to every video-game player at some point: What if the weird, hyper-focused state I enter when playing in virtual worlds could somehow be applied to the real one?

Often pondered during especially challenging or tedious tasks in meatspace (writing essays, say, or doing your taxes), it’s an eminently reasonable question to ask. Life, after all, is hard. And while video games are too, there’s something almost magical about the way they can promote sustained bouts of superhuman concentration and resolve.

For some, this phenomenon leads to an interest in flow states and immersion. For others, it’s simply a reason to play more games. For a handful of consultants, startup gurus, and game designers in the late 2000s, it became the key to unlocking our true human potential. But instead of liberating us, gamification turned out to be just another tool for coercion, distraction, and control. Read the full story.

—Bryan Gardiner

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ USPS is taking votes from the public to bring back their favorite stamps (thanks Amy!)
+ Here’s how to make your morning toast that bit more interesting.
+ The long-awaited Madonna biopic is still happening, apparently.
+ Bad news for matcha fans—there’s a global shortage 🍵

OpenAI is launching a version of ChatGPT for college students

OpenAI is launching Study Mode, a version of ChatGPT for college students that it promises will act less like a lookup tool and more like a friendly, always-available tutor. It’s part of a wider push by the company to get AI more embedded into classrooms when the new academic year starts in September.

A demonstration for reporters from OpenAI showed what happens when a student asks Study Mode about an academic subject like game theory. The chatbot begins by asking what the student wants to know and then attempts to build an exchange, where the pair work methodically toward the answer together. OpenAI says the tool was built after consulting with pedagogy experts from over 40 institutions.

A handful of college students who were part of OpenAI’s testing cohort—hailing from Princeton, Wharton, and the University of Minnesota—shared positive reviews of Study Mode, saying it did a good job of checking their understanding and adapting to their pace.

The learning approaches that OpenAI has programmed into Study Mode, which are based partially on Socratic methods, appear sound, says Christopher Harris, an educator in New York who has created a curriculum aimed at AI literacy. They might grant educators more confidence about allowing, or even encouraging, their students to use AI. “Professors will see this as working with them in support of learning as opposed to just being a way for students to cheat on assignments,” he says.

But there’s a more ambitious vision behind Study Mode. As demonstrated in OpenAI’s recent partnership with leading teachers’ unions, the company is currently trying to rebrand chatbots as tools for personalized learning rather than cheating. Part of this promise is that AI will act like the expensive human tutors that currently only the most well-off students’ families can typically afford.

“We can begin to close the gap between those with access to learning resources and high-quality education and those who have been historically left behind,” says OpenAI’s head of education, Leah Belsky.

But painting Study Mode as an education equalizer obfuscates one glaring problem. Underneath the hood, it is not a tool trained exclusively on academic textbooks and other approved materials—it’s more like the same old ChatGPT, tuned with a new conversation filter that simply governs how it responds to students, encouraging fewer answers and more explanations. 

This AI tutor, therefore, more resembles what you’d get if you hired a human tutor who has read every required textbook, but also every flawed explanation of the subject ever posted to Reddit, Tumblr, and the farthest reaches of the web. And because of the way AI works, you can’t expect it to distinguish right information from wrong. 

Professors encouraging their students to use it run the risk of it teaching them to approach problems in the wrong way—or worse, being taught material that is fabricated or entirely false. 

Given this limitation, I asked OpenAI if Study Mode is limited to particular subjects. The company said no—students will be able to use it to discuss anything they’d normally talk to ChatGPT about. 

It’s true that access to human tutors—which for certain subjects can cost upward of $200 an hour—is typically for the elite few. The notion that AI models can spread the benefits of tutoring to the masses holds an allure. Indeed, it is backed up by at least some early research that shows AI models can adapt to individual learning styles and backgrounds.

But this improvement comes with a hidden cost. Tools like Study Mode, at least for now, take a shortcut by using large language models’ humanlike conversational style without fixing their inherent flaws. 

OpenAI also acknowledges that this tool won’t prevent a student who’s frustrated and wants an answer from simply going back to normal ChatGPT. “If someone wants to subvert learning, and sort of get answers and take the easier route, that is possible,” Belsky says. 

However, one thing going for Study Mode, the students say, is that it’s simply more fun to study with a chatbot that’s always encouraging you along than to stare at a textbook on Bayes’ theorem for the hundredth time. “It’s like the reward signal of like, oh, wait, I can learn this small thing,” says Maggie Wang, a student from Princeton who tested it. The tool is free for now, but Praja Tickoo, a student from Wharton, says it wouldn’t have to be for him to use it. “I think it’s absolutely something I would be willing to pay for,” he says.