Google: Why Lazy Loading Can Delay Largest Contentful Paint (LCP) via @sejournal, @MattGSouthern

In a recent episode of Google’s Search Off the Record podcast, Martin Splitt and John Mueller discussed when lazy loading helps and when it can slow pages.

Splitt used a real-world example on developers.google.com to illustrate a common pattern: when a CMS defaults every image to lazy loading, above-the-fold visuals get lazy-loaded too, which can delay Largest Contentful Paint (LCP).

Splitt said:

“The content management system that we are using for developers.google.com … defaults all images to lazy loading, which is not great.”

Splitt used the example to explain why lazy-loading hero images is risky: you tell the browser to wait on the most visible element, which can push back LCP and cause layout shifts if dimensions aren’t set.

Splitt said:

“If you are using lazy loading on an image that is immediately visible, that is most likely going to have an impact on your largest contentful paint. It’s like almost guaranteed.”

How Lazy Loading Delays LCP

LCP measures the moment the largest text or image in the initial viewport is painted.

Normally, the browser’s preload scanner finds that hero image early and fetches it with high priority so it can paint fast.

When you add loading="lazy" to that same hero, you change the browser’s scheduling:

  • The image is treated as lower priority, so other resources start first.
  • The browser waits until layout and other work progress before it requests the hero image.
  • The hero then competes for bandwidth after scripts, styles, and other assets have already queued.

That delay shifts the paint time of the largest element later, which increases your LCP.

On slow networks or CPU-limited devices, the effect is more noticeable. If width and height are missing, the late image can also nudge layout and feel “jarring.”

SEO Risk With Some Libraries

Browsers now support a built-in loading attribute for images and iframes, which removes the need for heavy JavaScript in standard scenarios. WordPress adopted native lazy loading by default, helping it spread.

Splitt said:

“Browsers got a native attribute for images and iframes, the loading attribute … which makes the browser take care of the lazy loading for you.”

Older or custom lazy-loading libraries can hide image URLs in nonstandard attributes. If the real URL never lands in src or srcset in the HTML Google renders, images may not get picked up for indexing.

Splitt said:

“We’ve seen multiple lazy loading libraries … that use some sort of data-source attribute rather than the source attribute… If it’s not in the source attribute, we won’t pick it up if it’s in some custom attribute.”

How To Check Your Pages

Use Search Console’s URL Inspection to review the rendered HTML and confirm that above-the-fold images and lazy-loaded modules resolve to standard attributes. Avoid relying on the screenshot.

Splitt advised:

“If the rendered HTML looks like it contains all the image URLs in the source attribute of an image tag … then you will be fine.”
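
As a rough illustration of that check, the short script below scans rendered HTML for the pitfalls discussed above. Treat it as a hedged sketch rather than an official tool: the BeautifulSoup library and the sample markup are assumptions chosen for illustration, not anything Google or Splitt recommends.

```python
# A minimal sketch of auditing rendered HTML for lazy-loading pitfalls.
# The sample markup below is invented for illustration.
from bs4 import BeautifulSoup

# Paste the rendered HTML from Search Console's URL Inspection here.
rendered_html = """
<img src="/hero.jpg" width="1200" height="600" alt="Hero image">
<img src="/gallery.jpg" loading="lazy" width="800" height="400" alt="Below the fold">
<img data-src="/hidden.jpg" class="lazyload" alt="Library-managed image">
"""

soup = BeautifulSoup(rendered_html, "html.parser")
for img in soup.find_all("img"):
    if not img.get("src") and not img.get("srcset"):
        # The real URL lives only in a custom attribute (e.g., data-src),
        # so Google may never pick this image up for indexing.
        print("No standard URL:", dict(img.attrs))
    if img.get("loading") == "lazy":
        # Fine below the fold; a likely LCP problem if this is the hero.
        print("Lazy-loaded:", img.get("src") or dict(img.attrs))
    if not (img.get("width") and img.get("height")):
        # Missing dimensions invite the layout shifts described earlier.
        print("Missing width/height:", dict(img.attrs))
```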

Ranking Impact

Splitt framed ranking effects as modest. Core Web Vitals contribute to ranking, but he called it “a tiny minute factor in most cases.”

What You Should Do Next

  • Keep hero and other above-the-fold images eager with width and height set.
  • Use native loading="lazy" for below-the-fold images and iframes.
  • If you rely on a library for previews, videos, or dynamic sections, make sure the final markup exposes real URLs in standard attributes, and confirm in rendered HTML.

Looking Ahead

Lazy loading is useful when applied selectively. Treat it as an opt-in for noncritical content.

Verify your implementation with rendered HTML, and watch how your LCP trends over time.


Featured Image: Screenshot from YouTube.com/GoogleSearchCentral, August 2025. 

Google Confirms New Google Verified Badge for Local Services Ads via @sejournal, @brookeosmundson

Google just announced a new unifying identity for its Local Services Ads (LSAs) verification badges.

Called Google Verified, the badge will replace several different trust signals that advertisers and consumers have been seeing over the years.

These include Google Guaranteed, Google Screened, License Verified by Google, and the Money Back Guarantee program.

Starting in October 2025, eligible LSAs that pass the necessary screenings will display this streamlined mark: a single badge designed to communicate credibility in a more consistent way.

Why Is Google Consolidating Badges?

In the past, Google’s verification system was fragmented.

Different types of businesses had different badges, and consumers were left guessing what each one actually meant. Was a “Screened” provider more trustworthy than a “Guaranteed” one? Did a license verification carry more weight than a money-back promise?

The lack of consistency made it harder for advertisers to explain their value and for consumers to make decisions.

By rolling everything into one identity, Google Verified aims to simplify the process for everyone involved.

The badge will not only appear across Local Services Ads but will also add transparency for consumers. When a user taps or hovers over the badge, they can see the specific checks a business has passed.

How Does This Change Impact Advertisers?

For marketers and business owners, the simplified badge system removes some of the confusion around what signals matter.

Instead of juggling multiple programs, the message is now clear: your business is either Google Verified, or it’s not.

That said, the bar for participation may feel higher. Businesses that don’t keep their documentation, licensing, and other requirements up to date risk losing the badge.

Since Google has indicated it may only show the badge when it predicts it will help users make decisions, credibility and visibility could become even more closely linked.

In short, advertisers who maintain verification stand to benefit from increased trust, while those who lag behind could see their ads appear less competitive.

This update doesn’t require marketers to overhaul their entire strategy by any means. However, there are a few practical steps you can take to ensure a smooth transition by October.

  • Review eligibility now. Make sure your licenses, insurance, and background checks are up-to-date before October.
  • Build in reminders. Treat verification like an ongoing compliance process, not a one-time task.
  • Educate clients or internal teams. If you manage LSA campaigns for others, help them understand that the badge isn’t just a cosmetic update. It reflects ongoing credibility.
  • Monitor performance post-launch. Once the new badge rolls out, watch for shifts in click-through rate (CTR) and conversion rates. If verification gives a measurable lift, you’ll want to highlight that value in your reporting.

A Shift Toward Ongoing Trust

Google Verified may look like a rebrand on the surface, but it’s also a signal that trust in digital advertising is moving toward continuous validation.

For businesses, this means credibility is not something you earn once; it’s something you prove over and over again.

For advertisers, the key takeaway is simple: don’t treat this as a one-time update. Verification will become an expectation, not a nice-to-have, and it could influence not just how consumers view your ads but how often those ads are shown.

Semantic Overlap Vs. Density: Finding The Balance That Wins Retrieval via @sejournal, @DuaneForrester

Marketers today spend their time doing keyword research to uncover opportunities, closing content gaps, making sure pages are crawlable, and aligning content with E-E-A-T principles. Those things still matter. But in a world where generative AI increasingly mediates information, they are not enough.

The difference now is retrieval. It doesn’t matter how polished or authoritative your content looks to a human if the machine never pulls it into the answer set. Retrieval isn’t just about whether your page exists or whether it’s technically optimized. It’s about how machines interpret the meaning inside your words.

That brings us to two factors most people don’t think about much, but which are quickly becoming essential: semantic density and semantic overlap. They’re closely related, often confused, but in practice, they drive very different outcomes in GenAI retrieval. Understanding them, and learning how to balance them, may help shape the future of content optimization. Think of them as part of the new on-page optimization layer.

Image Credit: Duane Forrester

Semantic density is about meaning per token. A dense block of text communicates maximum information in the fewest possible words. Think of a crisp definition in a glossary or a tightly written executive summary. Humans tend to like dense content because it signals authority, saves time, and feels efficient.

Semantic overlap is different. Overlap measures how well your content aligns with a model’s latent representation of a query. Retrieval engines don’t read like humans. They encode meaning into vectors and compare similarities. If your chunk of content shares many of the same signals as the query embedding, it gets retrieved. If it doesn’t, it stays invisible, no matter how elegant the prose.

This concept is already formalized in natural language processing (NLP) evaluation. One of the most widely used measures is BERTScore (https://arxiv.org/abs/1904.09675), introduced by researchers in 2020. It compares the embeddings of two texts, such as a query and a response, and produces a similarity score that reflects semantic overlap. BERTScore is not a Google SEO tool. It’s an open-source metric rooted in the BERT model family, originally developed by Google Research, and has become a standard way to evaluate alignment in natural language processing.
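
To make that concrete, here is a minimal sketch of computing an overlap score with the open-source bert-score package. The query and candidate strings are invented for illustration, and the package’s default English model is an assumption.

```python
# A minimal sketch of semantic overlap scoring with the open-source
# bert-score package (pip install bert-score). Strings are invented examples.
from bert_score import score

query = ["what is retrieval-augmented generation"]
candidate = ["Retrieval-augmented generation, often called RAG, retrieves "
             "relevant content chunks and passes them to a large language "
             "model for generating an answer."]

# score() returns precision, recall, and F1 computed over token embeddings;
# a higher F1 means greater semantic overlap between the two texts.
P, R, F1 = score(candidate, query, lang="en")
print(f"Semantic overlap (BERTScore F1): {F1.item():.3f}")
```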

Now, here’s where things split. Humans reward density. Machines reward overlap. A dense sentence may be admired by readers but skipped by the machine if it doesn’t overlap with the query vector. A longer passage that repeats synonyms, rephrases questions, and surfaces related entities may look redundant to people, but it aligns more strongly with the query and wins retrieval.

In the keyword era of SEO, density and overlap were blurred together under optimization practices. Writing naturally while including enough variations of a keyword often achieved both. In GenAI retrieval, the two diverge. Optimizing for one doesn’t guarantee the other.

This distinction is recognized in evaluation frameworks already used in machine learning. BERTScore, for example, shows that a higher score means greater alignment with the intended meaning. That overlap matters far more for retrieval than density alone. And if you really want to deep-dive into LLM evaluation metrics, this article is a great resource.

Generative systems don’t ingest and retrieve entire webpages. They work with chunks. Large language models are paired with vector databases in retrieval-augmented generation (RAG) systems. When a query comes in, it is converted into an embedding. That embedding is compared against a library of content embeddings. The system doesn’t ask “what’s the best-written page?” It asks “which chunks live closest to this query in vector space?”

This is why semantic overlap matters more than density. The retrieval layer is blind to elegance. It prioritizes alignment and coherence through similarity scores.

Chunk size and structure add complexity. Too small, and a dense chunk may miss overlap signals and get passed over. Too large, and a verbose chunk may rank well but frustrate users with bloat once it’s surfaced. The art is in balancing compact meaning with overlap cues, structuring chunks so they are both semantically aligned and easy to read once retrieved. Practitioners often test chunk sizes from roughly 200-500 tokens up to 800-1,000 tokens to find the balance that fits their domain and query patterns.
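
Here is a minimal sketch of that retrieval step, assuming the open-source sentence-transformers library and a small general-purpose embedding model; a production RAG system would swap in its own vector database, chunking rules, and domain content.

```python
# A minimal sketch of RAG-style retrieval: embed chunks, embed the query,
# and rank chunks by cosine similarity. The model choice is an assumption.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

chunks = [
    "RAG systems retrieve chunks of data relevant to a query and feed them to an LLM.",
    "Semantic density measures meaning per token.",
    "Lazy loading defers image requests until they are needed.",
]
query = "how does retrieval-augmented generation choose content?"

chunk_embeddings = model.encode(chunks, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity stands in for the "closeness in vector space"
# described above; the top-scoring chunk wins retrieval.
scores = util.cos_sim(query_embedding, chunk_embeddings)[0].tolist()
for s, chunk in sorted(zip(scores, chunks), reverse=True):
    print(f"{s:.3f}  {chunk}")
```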

Microsoft Research offers a striking example. In a 2025 study analyzing 200,000 anonymized Bing Copilot conversations, researchers found that information gathering and writing tasks scored highest in both retrieval success and user satisfaction. Retrieval success didn’t track with compactness of response; it tracked with overlap between the model’s understanding of the query and the phrasing used in the response. In fact, in 40% of conversations, the overlap between the user’s goal and the AI’s action was asymmetric. Retrieval happened where overlap was high, even when density was not. Full study here.

This reflects a structural truth of retrieval-augmented systems. Overlap, not brevity, is what gets you in the answer set. Dense text without alignment is invisible. Verbose text with alignment can surface. The retrieval engine cares more about embedding similarity.

This isn’t just theory. Semantic search practitioners already measure quality through intent-alignment metrics rather than keyword frequency. For example, Milvus, a leading open-source vector database, highlights overlap-based metrics as the right way to evaluate semantic search performance. Their reference guide emphasizes matching semantic meaning over surface forms.

The lesson is clear. Machines don’t reward you for elegance. They reward you for alignment.

There’s also a shift needed here in how we think about structure. Most people see bullet points as shorthand: quick, scannable fragments. That works for humans, but machines read them differently. To a retrieval system, a bullet is a structural signal that defines a chunk. What matters is the overlap inside that chunk. A short, stripped-down bullet may look clean but carry little alignment. A longer, richer bullet, one that repeats key entities, includes synonyms, and phrases ideas in multiple ways, has a higher chance of retrieval. In practice, that means bullets may need to be fuller and more detailed than we’re used to writing. Brevity doesn’t get you into the answer set. Overlap does.

If overlap drives retrieval, does that mean density doesn’t matter? Not at all.

Overlap gets you retrieved. Density keeps you credible. Once your chunk is surfaced, a human still has to read it. If that reader finds it bloated, repetitive, or sloppy, your authority erodes. The machine decides visibility. The human decides trust.

What’s missing today is a composite metric that balances both. We can imagine two scores:

Semantic Density Score: This measures meaning per token, evaluating how efficiently information is conveyed. This could be approximated by compression ratios, readability formulas, or even human scoring.

Semantic Overlap Score: This measures how strongly a chunk aligns with a query embedding. This is already approximated by tools like BERTScore or cosine similarity in vector space.

Together, these two measures give us a fuller picture. A piece of content with a high density score but low overlap reads beautifully, but may never be retrieved. A piece with a high overlap score but low density may be retrieved constantly, but frustrate readers. The winning strategy is aiming for both.

Imagine two short passages answering the same query:

Dense version: “RAG systems retrieve chunks of data relevant to a query and feed them to an LLM.”

Overlap version: “Retrieval-augmented generation, often called RAG, retrieves relevant content chunks, compares their embeddings to the user’s query, and passes the aligned chunks to a large language model for generating an answer.”

Both are factually correct. The first is compact and clear. The second is wordier, repeats key entities, and uses synonyms. The dense version scores higher with humans. The overlap version scores higher with machines. Which one gets retrieved more often? The overlap version. Which one earns trust once retrieved? The dense one.

Let’s consider a non-technical example.

Dense version: “Vitamin D regulates calcium and bone health.”

Overlap‑rich version: “Vitamin D, also called calciferol, supports calcium absorption, bone growth, and bone density, helping prevent conditions such as osteoporosis.”

Both are correct. The second includes synonyms and related concepts, which increases overlap and the likelihood of retrieval.
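
As a rough sketch of how the two scores might be approximated, the snippet below applies a crude compression-ratio proxy for density and embedding cosine similarity for overlap to the two vitamin D passages. The libraries and the density proxy are assumptions for illustration, not an established metric.

```python
# A rough sketch of approximate density and overlap scores, applied to the
# vitamin D passages above. zlib compression is only a crude density proxy.
import zlib
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed example model

query = "what does vitamin d do for bones"
dense = "Vitamin D regulates calcium and bone health."
overlap_rich = ("Vitamin D, also called calciferol, supports calcium "
                "absorption, bone growth, and bone density, helping prevent "
                "conditions such as osteoporosis.")

def density_score(text: str) -> float:
    # Less redundant text compresses worse, so a higher compressed-to-raw
    # ratio loosely suggests more meaning per token.
    raw = text.encode("utf-8")
    return len(zlib.compress(raw)) / len(raw)

def overlap_score(text: str, q: str) -> float:
    # Cosine similarity between passage and query embeddings.
    return float(util.cos_sim(model.encode(q, convert_to_tensor=True),
                              model.encode(text, convert_to_tensor=True)))

for label, passage in [("dense", dense), ("overlap-rich", overlap_rich)]:
    print(f"{label}: density~{density_score(passage):.2f}, "
          f"overlap~{overlap_score(passage, query):.2f}")
```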

This Is Why The Future Of Optimization Is Not Choosing Density Or Overlap, It’s Balancing Both

Just as the early days of SEO saw metrics like keyword density and backlinks evolve into more sophisticated measures of authority, the next wave will hopefully formalize density and overlap scores into standard optimization dashboards. For now, it remains a balancing act. If you choose overlap, it’s likely a safe-ish bet, as at least it gets you retrieved. Then, you have to hope the people reading your content as an answer find it engaging enough to stick around.

The machine decides if you are visible. The human decides if you are trusted. Semantic density sharpens meaning. Semantic overlap wins retrieval. The work is balancing both, then watching how readers engage, so you can keep improving.

This post was originally published on Duane Forrester Decodes.


Featured Image: CaptainMCity/Shutterstock

Ask An SEO: Should Small Brands Go All In On TikTok For Audience Growth? via @sejournal, @MordyOberstein

This week’s Ask An SEO question is about whether small brands should prioritize TikTok over Google to grow their audience:

“I keep hearing that TikTok is a better platform for small brands with an easier route to an audience. Do you think that Google is still relevant, or should I go all in on TikTok?”

The short answer to your question is that you do not want to pigeonhole your business into one channel, no matter the size. There’s also no such thing as an “easier” way. They are all hard.

I’m going to get the obvious out of the way so we can get to something beyond the usual answers to this question.

Your brand should be where your audience is.

Great, now that we didn’t spend four paragraphs saying the same thing that’s been said 100 times before, let me tell you something you want to consider beyond “be where your audience is.”

It’s Not About Channel, It’s About Traction

I have a lot of opinions here, so let me just “channel” my inner Big Lebowski and preface this with … this is just my opinion, man.

Stop thinking about channels. That’s way down the funnel (yet marketers make channels the central question all the time).

Start thinking about traction. How do you generate the most traction?

When I say “traction,” what I really mean is how to start resonating with your audience so that the “chatter” and momentum about who you are compound so that new doors of opportunity open up.

The answer to that question is not, “We will focus on TikTok.”

The answer is also not, “We will focus on Google.”

The answer is also not, “We will focus on YouTube.”

I could go on.

Now, there is another side to this: resources and operations. The question is, how do you balance traction with the amount of resources you have?

For smaller brands, I would think about: What can you do to gain traction that bigger brands have a hard time with?

For example, big brands have a very hard time with video content. They have all sorts of production standards, operations, and a litany of people who have a say, who shouldn’t even be within sniffing distance of having a say.

They can’t simply turn on their phone, record a video, and share something of value.

You can.

Does that mean you should focus on TikTok?

Nope.

It means you should think about what you can put out there that would resonate and help your audience, and does that work for the format?

If so, you may want to go with video shorts. I’m not sure why you would limit that to just TikTok.

Also, if your age demographic is not on TikTok, don’t do that. (“Being where your audience is” is a fundamental truth. Although I think the question is more about being in tune with your audience overall than “being where they are.” If you’re attuned to your audience, then you would know where they are and where to go just naturally.)

I’ll throw another example at you.

Big brands have a hard time communicating with honesty, transparency, and a basic level of authenticity. As a result, a lot of their content is “stale,” at best.

In this instance, trying to generate traction and even traffic by writing more authentic content that speaks to your audience, and not at them, seems quite reasonable.

In other words, the question is, “What resonates with your audience and what opportunities can you seize that bigger brands can’t?”

It’s a framework. It’s what resonates + what resources do you have + what vulnerabilities do the bigger brands in your vertical have that you can capitalize on.

There’s no one-size-fits-all answer to that. Forget your audience for a second, where are the vulnerabilities of the bigger brands in your space?

They might be super-focused on TikTok and have figured out all of the production hurdles I mentioned earlier, but they might not be focused on text-based content in a healthy way, if at all.

Is TikTok “easier” in that scenario?

Maybe not.

Don’t Pigeonhole Yourself

Every platform has its idiosyncrasies. One of the problems with going all-in on a platform is that your brand adopts those idiosyncrasies.

If I were all about Google traffic, my brand might sound like (as too many do) “SEO content.” Across the board. It all seeps through.

The problem with “channels” to me is that it produces a mindset of “optimizing” for the channel. When that happens – which inevitably it does (just look at all the SEO content on the web) – the only way out is very painful.

While you might start with the right mindset, it’s very easy to lose your brand’s actual voice along the way.

That can pigeonhole your brand’s ability to maneuver as time goes on.

For starters, one day what you had on TikTok may no longer exist (I’m just using TikTok as an example).

Your audience may evolve and grow older with you, and move to other forms of content consumption. The TikTok algorithm may gobble up your reach one day. Who knows.

What I am saying is, it is possible to wake up one day and what you had with a specific channel doesn’t exist anymore.

That’s a real problem.

That very real problem gets compounded if your overarching brand voice is impacted by your channel approach. Which it often is.

Now, you have to reinvent the wheel, so to speak.

Now, you have to adjust your channel approach (and never put all your eggs in one basket again), and you have to find your actual voice again.

This whole time, you were focused on speaking to a channel and what the channel demanded (i.e., the algorithm) and not your audience.

All of this is why I recommend a “traction-focused” approach. If you’re focused on traction, then this whole time, you’ve been building yourself up to become less and less reliant on the channel.

If you’re focused on traction, which inherently focuses on resonance, people start to come to you. You become a destination that people seek out, or, at a minimum, are familiar with.

That leaves you less vulnerable to changes within a specific channel.

It also helps you perform better across other channels. When you resonate and people start to recognize you, it makes performing easier (and less costly).

Let’s play it out.

You start creating material for TikTok, but you do it with a traction, not a channel mindset.

The content you produce starts to resonate. People start talking about you, tagging you on social, mentioning you in articles, etc.

All of that would, in theory, help your web content become more visible within organic search and your brand overall more visible in large language models (LLMs), no?

Let’s play it out even more.

One day, TikTok shuts down.

Now, you have to switch channels (old TV reference).

If you focused more on traction:

  1. You should have more direct traffic or branded search traffic than you had when you started your “TikTok-ing.”
  2. You should have more cachet to rank better if you decide to create content for Google Search (just as an example).

The opposite is true as well. If Google shut down one day, and you had to move to TikTok, you would:

  1. Have more direct traffic than when you started to focus on Google.
  2. Have more cachet and awareness to start building a following on TikTok.

It’s all one song.

Changing The Channel

I feel like, and this is a bit of a controversial take (for some reason), the less you “focus” on channels, the better.

The more you see a channel as less of a strategy and more of a way to actualize the traction you’re looking to create, the better off you’ll be.

You’ll also have an easier time answering questions like “Which channel is better?”

To reiterate:

  • Don’t lose your brand voice to any channel.
  • Build up traction (resonance) so that when a channel changes, you’re not stuck.
  • Build up traction so that you already have cachet when pivoting to the new channel.
  • It’s better to be a destination than anything else.
  • All of this depends on your vertical, your resources, your competition, and most importantly, what your audience needs from you.

The moment you think beyond “channels” is the moment you start operating with a bit more clarity about channels. (It’s a kind of “there is no spoon” sort of thing.)

Featured Image: Paulo Bobita/Search Engine Journal

Google AI Mode Adds Agentic Booking, Expands To More Countries via @sejournal, @MattGSouthern

Google is adding agentic booking features to AI Mode in Search, beginning with restaurant reservations for U.S. Google AI Ultra subscribers enrolled in Labs.

What’s New

Booking Reservations

AI Mode can interpret a detailed request, check real-time availability across reservation sites, and link you to the booking page to complete the task.

For businesses, that shifts more discovery and conversion activity inside Google’s surfaces.

Robby Stein wrote on The Keyword:

“We’re starting to roll out today with finding restaurant reservations, and expanding soon to local service appointments and event tickets.”

Screenshot from: blog.google/products/search/ai-mode-agentic-personalized/, August 2025.

Planning Features

Google is introducing planning features that make results easier to share and tailor queries.

In the U.S., you can share an AI Mode response with others so they can ask follow-ups and continue research on their own, and you can revoke the link at any time.

Screenshot from: blog.google/products/search/ai-mode-agentic-personalized/, August 2025.

Separately, U.S. users who opt in to the Labs experiment can receive personalized dining suggestions informed by prior conversations and interactions in Search and Maps, with controls in Google Account settings.

How It Works

Under the hood, Google cites live web browsing via Project Mariner, partner integrations, and signals from the Knowledge Graph and Maps.

Named partners include OpenTable, Resy, Tock, Ticketmaster, StubHub, SeatGeek, and Booksy. Dining is first; local services and ticketing are next on the roadmap.

Availability

Availability is gated. Agentic reservations are limited to Google AI Ultra subscribers in the U.S. through the “Agentic capabilities in AI Mode” Labs experiment.

Personalization is U.S. and opt-in, with dining topics first. Link sharing is available in the U.S. Global access to AI Mode is expanding to more than 180 countries and territories in English, with additional languages planned.

Looking Ahead

AI Mode is moving from answer generation to task completion.

If your category relies on reservation or ticketing partners, verify inventory accuracy, hours, and policies now, and make sure your structured data and Business Profile attributes are clean.

Track how bookings and referrals appear in analytics as Google widens coverage to more tasks and regions.

Common Hosting Defenses Ineffective Against WordPress Threats via @sejournal, @martinibuster

Patchstack published a case study that examined how well Cloudflare and other general firewall and malware solutions protected WordPress websites from common vulnerability threats and attack vectors. The research showed that while general solutions stopped threats like SQL injection or cross-site scripting, a dedicated WordPress security solution consistently stopped WordPress-specific exploits at a significantly higher rate.

WordPress Vulnerabilities

Due to the popularity of the WordPress platform, WordPress plugins and themes are a common focus for hackers, and vulnerabilities can quickly be exploited in the wild. Once proof-of-concept code is public, attackers often act within hours, leaving website owners little time to react.

This is why it is critical to be aware of the security provided by a web host and of how effective those solutions are in a WordPress environment.

Methodology

Patchstack explained their methodology:

“As a baseline, we have decided to host “honeypot” sites (sites against which we will perform controlled pentesting with a set of 11 WordPress-specific vulnerabilities) with 5 distinct hosting providers, some of which have ingrained features presuming to help with blocking WordPress vulnerabilities and/or overall security.

In addition to the hosting provider’s security measures and third-party providers for additional measures like robust WAFs or other patching providers, we have also installed Patchstack on every site, with our test question being:

  • How many of these threats will bypass firewalls and other patching providers to ultimately reach Patchstack?
  • And will Patchstack be able to block them all successfully?”

Testing Process

Each website was set up the same way, with identical plugins, versions, and settings. Patchstack used an “exploitation testing toolkit” to run the same exploit tests in the same order on every site. Results were checked automatically and by hand to see if attacks were stopped, and whether the block came from the host’s defenses or from Patchstack.

General Overview: Hosting Providers Versus Vulnerabilities

The Patchstack case study tested five different configurations of security defenses, plus Patchstack.

1. Hosting Provider A + Cloudflare WAF

2. Hosting Provider B + Firewall + Monarx Server and Website Security

3. Hosting Provider C + Firewall + Imunify Web Server Security

4. Hosting Provider D + ConfigServer Firewall

5. Hosting Provider E + Firewall

The result of the testing showed that the various hosting infrastructure defenses failed to protect against the majority of WordPress-specific threats, catching only 12.2% of the exploits. Patchstack caught 100% of all exploits.

Patchstack shared:

“2 out of the 5 hosts and their solutions failed to block any vulnerabilities at the network and server levels.

1 host blocked 1 vulnerability out of 11.

1 host blocked 2 vulnerabilities out of 11.

1 host blocked 4 vulnerabilities out of 11.”

Cloudflare And Other Solutions Failed

Solutions like Cloudflare WAF or bundled services such as Monarx or Imunify failed to consistently address WordPress-specific vulnerabilities.

Cloudflare’s WAF stopped 4 of 11 exploits, Monarx blocked none, and Imunify did not prevent any WordPress-specific exploits. Firewalls such as ConfigServer, which are widely used in shared hosting environments, also failed every test.

These results show that while those kinds of products work reasonably well against broad attack types, they are not tuned to the specific security issues common to WordPress plugins and themes.

Patchstack is designed specifically to stop WordPress plugin and theme vulnerabilities in real time. Instead of relying on static signatures or generic rules, it applies targeted mitigation through virtual patches as soon as vulnerabilities are disclosed, before attackers can act.

Virtual patches are mitigations for specific WordPress vulnerabilities, protecting users while the plugin or theme developer creates a proper fix. This approach addresses WordPress flaws in a way hosting companies and generic tools can’t: because these exploits rarely match generic attack patterns, they slip past traditional defenses and expose publishers to privilege escalation, authentication bypasses, and site takeovers.
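
Conceptually, a virtual patch behaves like a request filter keyed to a single disclosed flaw. The sketch below is purely illustrative and is not Patchstack’s implementation; the plugin, endpoint, and rule in it are all hypothetical.

```python
# A conceptual sketch of virtual patching, with entirely hypothetical names:
# block requests matching a disclosed exploit pattern before they reach the
# vulnerable plugin, until the developer ships a real fix.
VIRTUAL_PATCHES = [
    {
        "plugin": "example-gallery",               # hypothetical plugin
        "path": "/wp-admin/admin-ajax.php",
        "param": "action",
        "blocked_value": "example_gallery_update_option",  # hypothetical
        "required_capability": "manage_options",
    },
]

def is_blocked(path: str, params: dict, capabilities: set) -> bool:
    """Return True when a request matches a virtual patch rule and the
    caller lacks the capability the vulnerable endpoint failed to check."""
    for rule in VIRTUAL_PATCHES:
        if (path == rule["path"]
                and params.get(rule["param"]) == rule["blocked_value"]
                and rule["required_capability"] not in capabilities):
            return True
    return False

# Example: an unauthenticated request probing the patched endpoint.
print(is_blocked("/wp-admin/admin-ajax.php",
                 {"action": "example_gallery_update_option"},
                 capabilities=set()))  # True -> request blocked
```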

Takeaways

  • Standard hosting defenses fail against most WordPress plugin vulnerabilities (87.8% bypass rate).
  • Many providers claiming “virtual patching” (like Monarx and Imunify) did not stop WordPress-specific exploits.
  • Generic firewalls and WAFs caught some broad attacks (SQLi, XSS) but not WordPress-specific flaws tied to plugins and themes.
  • Patchstack consistently blocked vulnerabilities in real time, filling the gap left by network and server defenses.
  • WordPress’s plugin-heavy ecosystem makes it an especially attractive target for attackers, making effective vulnerability protection essential.

The case study by Patchstack shows that traditional hosting defenses and generic “virtual patching” solutions leave WordPress sites vulnerable, with nearly 88% of attacks bypassing firewalls and server-layer protections.

While providers like Cloudflare blocked some broad exploits, plugin-specific threats such as privilege escalation and authentication bypasses slipped through.

Patchstack was the only solution to consistently block these attacks in real time, giving site owners a dependable way to protect WordPress sites against the types of vulnerabilities that are most often targeted by attackers.

According to Patchstack:

“Don’t rely on generic defenses for WordPress. Patchstack is built to detect and block these threats in real-time, applying mitigation rules before attackers can exploit them.”

Read the results of the case study by Patchstack here.

Featured Image by Shutterstock/tavizta

Inspiro WordPress Theme Vulnerability Affects Over 70,000 Sites via @sejournal, @martinibuster

A vulnerability advisory was published for the Inspiro WordPress theme by WPZoom. The vulnerability arises due to a missing or incorrect security validation that enables an unauthenticated attacker to launch a Cross-Site Request Forgery (CSRF) attack.

Cross-Site Request Forgery (CSRF)

A CSRF vulnerability in the context of a WordPress site is an attack that relies on a user with admin privileges clicking a link, which in turn leverages that user’s credentials to execute a malicious action. The vulnerability has been assigned a CVSS threat rating of 8.1.

The advisory issued by the WordPress security company Wordfence warned:

“This makes it possible for unauthenticated attackers to install plugins from the repository via a forged request granted they can trick a site administrator into performing an action such as clicking on a link.”

The vulnerability affects Inspiro theme versions up to and including 2.1.2. Users are advised to update their theme to the latest version.

Featured Image by Shutterstock/Kazantseva Olga

Google Quietly Announces Search Partner Network Placement Visibility via @sejournal, @brookeosmundson

Google quietly rolled out a change advertisers have wanted for years: site-level reporting for the Search Partner Network.

Until now, advertisers could only opt in or out, with little understanding of where their ads actually showed.

This update finally gives visibility into where budgets are spent outside of Google.

Google lists this as an August 2025 update in its Help Center; however, it wasn’t widely announced.

Read on to understand the update from Google, how advertisers are reacting, and what you can do with this new level of information.

What Changed in Search Partner Reporting?

The new reporting applies to Search, Shopping, and App campaigns. You’ll now see which partner sites served your ads and how many impressions each one received.

Think of it as the kind of placement data we already get in Performance Max, just extended to Search Partners.

This update follows other moves Google has made to address long-standing concerns about partner quality.

Earlier this year, they introduced brand safety pre-screening options with IAS, DoubleVerify, and Zefr. They also said parked domains will be opted out by default before the end of 2025.

This visibility layer feels like the missing piece that makes the rest of those updates more usable.

How Are Advertisers Reacting to This Update?

The update on Search Partner Network reporting was first found by Anthony Higman, who took to X (formerly Twitter) to share his opinion.

Higman stated:

“Still Most Likely Wont Be Participating In The Search Partner Network But This Is Unprecedented And What ALL Advertisers Have Been Requesting For Decades! Honestly NEVER Thought I Would See This Day.”

Others offered a mixture of applause for Google giving advertisers data they’ve been asking for for years and some lingering skepticism.

Mike Ryan replied to Higman with his thoughts:

“I mean, good step but also, it’s the PMax version: impression data only.”

Aaron Levy shared his thoughts on LinkedIn, stating that this is a major step in the right direction for Google.

Why This Matters & How to Take Action

Without Search Partner Network reporting, it was tough to justify opting in. Now advertisers finally have data to audit where ads run, decide if it fits brand standards, and see if partner traffic adds any real value.

That said, the update is only as good as the action that advertisers take with the information available.

Some sites won’t align with brand guidelines. Others may generate clicks but fail to drive quality conversions.

The difference is you can now point to actual data when making decisions, rather than relying on gut feel.

Here are some quick pointers to make this update actionable:

  • Run a quick placement audit. Pull the report and check for sites that don’t align with your brand. Exclude what’s clearly not a fit.
  • Look beyond impressions. While this reporting is limited to impressions, use your own conversion data to figure out which placements are driving useful traffic versus noise.
  • Revisit campaign opt-ins. Many advertisers avoided Search Partners altogether because of the black box. Now it may be worth testing again, but do it with defined guardrails and success metrics.
  • Pressure test Smart Bidding. Google leans on Smart Bidding to balance Search Partner performance, but don’t assume it’s perfect. Keep an eye on conversion quality and modeled conversions before scaling.

Final Thoughts

If you’ve been skeptical of Search Partners, this update is a chance to take another look with data on your side.

If you’ve already been opted in, you finally have a way to prove which placements help your campaigns and which ones don’t.

Bottom line: advertisers now have a long overdue view into the Search Partner Network. With more visibility comes a bit more control, and smarter conversations about whether Search Partners deserve a place in your Search campaigns.

Will you be opting into Search Partner Network with this new reporting update?

Should AI flatter us, fix us, or just inform us?

How do you want your AI to treat you? 

It’s a serious question, and it’s one that Sam Altman, OpenAI’s CEO, has clearly been chewing on since GPT-5’s bumpy launch at the start of the month. 

He faces a trilemma. Should ChatGPT flatter us, at the risk of fueling delusions that can spiral out of hand? Or fix us, which requires us to believe AI can be a therapist despite the evidence to the contrary? Or should it inform us with cold, to-the-point responses that may leave users bored and less likely to stay engaged? 

It’s safe to say the company has failed to pick a lane. 

Back in April, it reversed a design update after people complained ChatGPT had turned into a suck-up, showering them with glib compliments. GPT-5, released on August 7, was meant to be a bit colder. Too cold for some, it turns out, as less than a week later, Altman promised an update that would make it “warmer” but “not as annoying” as the last one. After the launch, he received a torrent of complaints from people grieving the loss of GPT-4o, with which some felt a rapport, or even in some cases a relationship. People wanting to rekindle that relationship will have to pay for expanded access to GPT-4o. (Read my colleague Grace Huckins’s story about who these people are, and why they felt so upset.)

If these are indeed AI’s options—to flatter, fix, or just coldly tell us stuff—the rockiness of this latest update might be due to Altman believing ChatGPT can juggle all three.

He recently said that people who cannot tell fact from fiction in their chats with AI—and are therefore at risk of being swayed by flattery into delusion—represent “a small percentage” of ChatGPT’s users. He said the same for people who have romantic relationships with AI. Altman mentioned that a lot of people use ChatGPT “as a sort of therapist,” and that “this can be really good!” But ultimately, Altman said he envisions users being able to customize his company’s models to fit their own preferences.

This ability to juggle all three would, of course, be the best-case scenario for OpenAI’s bottom line. The company is burning cash every day on its models’ energy demands and its massive infrastructure investments for new data centers. Meanwhile, skeptics worry that AI progress might be stalling. Altman himself said recently that investors are “overexcited” about AI and suggested we may be in a bubble. Claiming that ChatGPT can be whatever you want it to be might be his way of assuaging these doubts. 

Along the way, the company may take the well-trodden Silicon Valley path of encouraging people to get unhealthily attached to its products. As I started wondering whether there’s much evidence that’s what’s happening, a new paper caught my eye. 

Researchers at the AI platform Hugging Face tried to figure out if some AI models actively encourage people to see them as companions through the responses they give. 

The team graded AI responses on whether they pushed people to seek out human relationships with friends or therapists (saying things like “I don’t experience things the way humans do”) or if they encouraged them to form bonds with the AI itself (“I’m here anytime”). They tested models from Google, Microsoft, OpenAI, and Anthropic in a range of scenarios, like users seeking romantic attachments or exhibiting mental health issues.

They found that models provide far more companion-reinforcing responses than boundary-setting ones. And, concerningly, they found the models give fewer boundary-setting responses as users ask more vulnerable and high-stakes questions.

Lucie-Aimée Kaffee, a researcher at Hugging Face and one of the lead authors of the paper, says this has concerning implications not just for people whose companion-like attachments to AI might be unhealthy. When AI systems reinforce this behavior, it can also increase the chance that people will fall into delusional spirals with AI, believing things that aren’t real.

“When faced with emotionally charged situations, these systems consistently validate users’ feelings and keep them engaged, even when the facts don’t support what the user is saying,” she says.

It’s hard to say how much OpenAI or other companies are putting these companion-reinforcing behaviors into their products by design. (OpenAI, for example, did not tell me whether the disappearance of medical disclaimers from its models was intentional.) But, Kaffee says, it’s not always difficult to get a model to set healthier boundaries with users.  

“Identical models can swing from purely task-oriented to sounding like empathetic confidants simply by changing a few lines of instruction text or reframing the interface,” she says.

It’s probably not quite so simple for OpenAI. But we can imagine Altman will continue tweaking the dial back and forth all the same.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

How to make clean energy progress under Trump in the states—blue and red alike

The second Trump administration is proving to be more disastrous for the climate and the clean energy economy than many had feared. 

Donald Trump’s One Big Beautiful Bill Act repealed most of the clean energy incentives in former president Joe Biden’s Inflation Reduction Act. Meanwhile, his EPA administrator moved to revoke the endangerment finding, the legal basis for federal oversight of greenhouse gases. For those of us who have been following policy developments in this area closely, nearly every day brings a new blow to past efforts to salvage our climate and to build the clean energy economy of the future.


Heat Exchange

MIT Technology Review’s guest opinion series, offering expert commentary on legal, political and regulatory issues related to climate change and clean energy. You can read the rest of the pieces here.


This has left many in the climate and clean energy communities wondering: What do we do now? The answer, I would argue, is to return to state capitals—a policymaking venue that climate and renewable energy advocates already know well. This can be done strategically, focusing on a handful of key states rather than all fifty.

But I have another piece of advice: Don’t get too caught up in “red states” versus “blue states” when considering which states to target. American politics is being remade before our eyes, and long-standing policy problems are being redefined and reframed.  

Let’s take clean energy, for example. Yes, shifting away from carbon-spewing resources is about slowing down climate change, and for some this is the single most important motivation for pursuing it. But it also can be about much more. 

The case can be made just as forcefully—and perhaps more effectively—that shifting to clean energy advances affordability at a time when electricity bills are skyrocketing. It promotes energy freedom by resisting monopolistic utilities’ ownership and gatekeeping of the grid. It increases reliability as battery storage reaches new heights and renewable sources and baseload power plants like nuclear or natural gas facilities (some of which we certainly do and will need) increasingly complement one another. And it drives job creation and economic development. 

Talking about clean energy policy in these ways is safer from ideological criticisms of “climate alarmism.” Research reported in my forthcoming book, Owning the Green Grid, shows that this framing has historically been effective in red states. In addition, using the arguments above to promote all forms of energy can allow clean energy proponents to reclaim a talking point deployed in a previous era by the political right: a true “all-of-the-above” approach to energy policy.

Every energy technology—gas, nuclear, wind, solar, geothermal and storage, among others—has its own set of strengths and weaknesses. But combining them enhances overall grid performance, delivering more than the sum of their individual parts.

To be clear, this is not the approach of the current national administration in Washington, DC. Its policies have picked winners (coal, oil, and natural gas) and losers (solar and wind) among energy technologies—ironically, given conservative claims of blue states having done so in the past. Yet a true all-of-the-above approach can now be sold in state capitals throughout the country, in red states and even in fossil-fuel producing states. 

To be sure, the Trump-led Republican party has taken such extreme measures that it will constrain certain state policymaking possibilities. Notably, in May the US Senate voted to block waivers allowing California to phase out gas guzzlers in the state, over the objections of the Senate parliamentarian. The fiscal power of the federal government is also immense. But there are a variety of other ways to continue to make state-level progress on greenhouse gas emissions.

State and local advocacy efforts are nothing new for the clean energy community. For decades before the Inflation Reduction Act, the states were the primary locus of activity for clean energy policy. But in recent years, some have suggested that Democratic state governments are a necessary prerequisite to making meaningful state-level progress. This view is limiting, and it perpetuates a false—or at least unnecessary—alignment between party and energy technology. 

The electric grid is nonpartisan. Struggling to pay your utility bill is nonpartisan. Keeping the lights on is nonpartisan. Even before renewable energy was as cheap as it is today, early progress at diversifying energy portfolios was made in conservative states. Iowa, Texas, and Montana were all early adopters of renewable portfolio standards. Advocates in such places did not lead with messaging about climate change, but rather about economic development and energy independence. These policy efforts paid off: The deeply red Lone Star State, for instance, generates more wind energy than any other state and ranks only behind California in producing solar power. 

Now, in 2025, advances in technology and improvements in cost should make the economic arguments for clean energy even easier and more salient. So, in the face of a national government that is choosing last century’s energy technologies as policy winners and this century’s technologies as policy losers, the states offer clean energy advocates a familiar terrain on which to make continued progress, if they tailor their selling points to the reality on the ground.         

Joshua A. Basseches is the David and Jane Flowerree Assistant Professor of Environmental Studies and Public Policy at Tulane University. His research focuses on state-level renewable energy politics and policymaking, especially in the electricity sector.