Semantic Overlap Vs. Density: Finding The Balance That Wins Retrieval via @sejournal, @DuaneForrester

Marketers today spend their time uncovering opportunities through keyword research, closing content gaps, making sure pages are crawlable, and aligning content with E-E-A-T principles. Those things still matter. But in a world where generative AI increasingly mediates information, they are not enough.

The difference now is retrieval. It doesn’t matter how polished or authoritative your content looks to a human if the machine never pulls it into the answer set. Retrieval isn’t just about whether your page exists or whether it’s technically optimized. It’s about how machines interpret the meaning inside your words.

That brings us to two factors most people don’t think about much, but which are quickly becoming essential: semantic density and semantic overlap. They’re closely related, often confused, but in practice, they drive very different outcomes in GenAI retrieval. Understanding them, and learning how to balance them, may help shape the future of content optimization. Think of them as part of the new on-page optimization layer.

Image Credit: Duane Forrester

Semantic density is about meaning per token. A dense block of text communicates maximum information in the fewest possible words. Think of a crisp definition in a glossary or a tightly written executive summary. Humans tend to like dense content because it signals authority, saves time, and feels efficient.

Semantic overlap is different. Overlap measures how well your content aligns with a model’s latent representation of a query. Retrieval engines don’t read like humans. They encode meaning into vectors and compare similarities. If your chunk of content shares many of the same signals as the query embedding, it gets retrieved. If it doesn’t, it stays invisible, no matter how elegant the prose.
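To make that concrete, here is a minimal sketch of the similarity comparison a retrieval engine performs. The four-dimensional vectors are toy stand-ins for real embeddings, which typically have hundreds of dimensions; the numbers are invented for illustration.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" standing in for real model output.
query_vec = [0.9, 0.1, 0.3, 0.0]
chunk_a   = [0.8, 0.2, 0.4, 0.1]   # strong overlap with the query
chunk_b   = [0.0, 0.9, 0.1, 0.8]   # weak overlap, however elegant the prose

print(cosine_similarity(query_vec, chunk_a))  # ≈ 0.98 → retrieved
print(cosine_similarity(query_vec, chunk_b))  # ≈ 0.10 → stays invisible
```

The retrieval layer sees only these scores, which is why a chunk that shares the query's signals wins regardless of how the writing reads to a human.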

This concept is already formalized in natural language processing (NLP) evaluation. One of the most widely used measures is BERTScore (https://arxiv.org/abs/1904.09675), introduced by researchers in 2019. It compares the embeddings of two texts, such as a query and a response, and produces a similarity score that reflects semantic overlap. BERTScore is not a Google SEO tool. It's an open-source metric built on the BERT model family (originally developed by Google Research), and it has become a standard way to evaluate alignment in natural language processing.

Now, here’s where things split. Humans reward density. Machines reward overlap. A dense sentence may be admired by readers but skipped by the machine if it doesn’t overlap with the query vector. A longer passage that repeats synonyms, rephrases questions, and surfaces related entities may look redundant to people, but it aligns more strongly with the query and wins retrieval.

In the keyword era of SEO, density and overlap were blurred together under optimization practices. Writing naturally while including enough variations of a keyword often achieved both. In GenAI retrieval, the two diverge. Optimizing for one doesn’t guarantee the other.

This distinction is recognized in evaluation frameworks already used in machine learning. BERTScore, for example, shows that a higher score means greater alignment with the intended meaning. That overlap matters far more for retrieval than density alone. And if you really want to deep-dive into LLM evaluation metrics, this article is a great resource.

Generative systems don’t ingest and retrieve entire webpages. They work with chunks. Large language models are paired with vector databases in retrieval-augmented generation (RAG) systems. When a query comes in, it is converted into an embedding. That embedding is compared against a library of content embeddings. The system doesn’t ask “what’s the best-written page?” It asks “which chunks live closest to this query in vector space?”
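That retrieval loop can be sketched in a few lines. This is an illustrative toy, not any vendor's API: the `embed` function below is a bag-of-words vector over a tiny fixed vocabulary standing in for a real embedding model, and the `VOCAB` and `retrieve` names are assumptions made up for the example.

```python
import math

# Tiny fixed vocabulary; a real system uses a learned embedding space instead.
VOCAB = ["chunks", "embeddings", "generation", "model", "query",
         "retrieval-augmented", "revenue"]

def embed(text):
    """Toy bag-of-words 'embedding': counts vocabulary words in the text."""
    words = text.lower().replace(".", " ").replace(",", " ").split()
    return [float(words.count(w)) for w in VOCAB]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

def retrieve(query, chunks, top_k=1):
    """Rank chunks by embedding similarity to the query, as a RAG system does."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:top_k]

chunks = [
    "Our quarterly revenue grew by twelve percent.",
    "Retrieval-augmented generation compares query embeddings against chunk embeddings.",
]
print(retrieve("how does retrieval-augmented generation use embeddings", chunks))
```

Note that the system never evaluates writing quality: the second chunk wins purely because its vector sits closest to the query's.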

This is why semantic overlap matters more than density. The retrieval layer is blind to elegance. It prioritizes alignment and coherence through similarity scores.

Chunk size and structure add complexity. Too small, and a dense chunk may miss overlap signals and get passed over. Too large, and a verbose chunk may rank well but frustrate users with bloat once it’s surfaced. The art is in balancing compact meaning with overlap cues, structuring chunks so they are both semantically aligned and easy to read once retrieved. Practitioners often test chunk sizes ranging from roughly 200-500 tokens up to 800-1,000 tokens to find the balance that fits their domain and query patterns.
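A simple way to experiment with those trade-offs is a sliding-window chunker. This sketch counts words as a stand-in for model tokens, and the overlap between neighboring chunks (so meaning isn't severed at a boundary) is a common convention, not a fixed rule; the function name is illustrative.

```python
def chunk_text(text, chunk_size=300, overlap=50):
    """Split text into word-based chunks of up to `chunk_size` tokens,
    repeating `overlap` tokens between neighbors so context carries over.
    Word counts stand in for a real model tokenizer's tokens."""
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks

doc = "word " * 700                                  # a 700-token document
small = chunk_text(doc, chunk_size=250, overlap=50)  # several focused chunks
large = chunk_text(doc, chunk_size=900, overlap=50)  # one big chunk
print(len(small), len(large))
```

Testing a few sizes against your own query logs is usually more informative than adopting anyone else's defaults.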

Microsoft Research offers a striking example. In a 2025 study analyzing 200,000 anonymized Bing Copilot conversations, researchers found that information gathering and writing tasks scored highest in both retrieval success and user satisfaction. Retrieval success didn’t track with compactness of response; it tracked with overlap between the model’s understanding of the query and the phrasing used in the response. In fact, in 40% of conversations, the overlap between the user’s goal and the AI’s action was asymmetric. Retrieval happened where overlap was high, even when density was not. Full study here.

This reflects a structural truth of retrieval-augmented systems. Overlap, not brevity, is what gets you in the answer set. Dense text without alignment is invisible. Verbose text with alignment can surface. The retrieval engine cares about embedding similarity, not prose quality.

This isn’t just theory. Semantic search practitioners already measure quality through intent-alignment metrics rather than keyword frequency. For example, Milvus, a leading open-source vector database, highlights overlap-based metrics as the right way to evaluate semantic search performance. Their reference guide emphasizes matching semantic meaning over surface forms.

The lesson is clear. Machines don’t reward you for elegance. They reward you for alignment.

There’s also a shift in how we think about structure needed here. Most people see bullet points as shorthand: quick, scannable fragments. That works for humans, but machines read them differently. To a retrieval system, a bullet is a structural signal that defines a chunk. What matters is the overlap inside that chunk. A short, stripped-down bullet may look clean but carry little alignment. A longer, richer bullet, one that repeats key entities, includes synonyms, and phrases ideas in multiple ways, has a higher chance of retrieval. In practice, that means bullets may need to be fuller and more detailed than we’re used to writing. Brevity doesn’t get you into the answer set. Overlap does.

If overlap drives retrieval, does that mean density doesn’t matter? Not at all.

Overlap gets you retrieved. Density keeps you credible. Once your chunk is surfaced, a human still has to read it. If that reader finds it bloated, repetitive, or sloppy, your authority erodes. The machine decides visibility. The human decides trust.

What’s missing today is a composite metric that balances both. We can imagine two scores:

Semantic Density Score: This measures meaning per token, evaluating how efficiently information is conveyed. This could be approximated by compression ratios, readability formulas, or even human scoring.

Semantic Overlap Score: This measures how strongly a chunk aligns with a query embedding. This is already approximated by tools like BERTScore or cosine similarity in vector space.

Together, these two measures give us a fuller picture. A piece of content with a high density score but low overlap reads beautifully, but may never be retrieved. A piece with a high overlap score but low density may be retrieved constantly, but frustrate readers. The winning strategy is aiming for both.
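The two scores could be approximated today with off-the-shelf pieces. The sketch below is one possible implementation under loose assumptions: it uses a zlib compression ratio as a crude density proxy (repetitive text compresses well, so a higher ratio suggests more unique information per token) and bag-of-words cosine similarity as a crude overlap proxy (a real pipeline would use model embeddings, e.g., BERTScore). The function names and example strings are invented for illustration.

```python
import math
import zlib

def density_score(text):
    """Rough density proxy: zlib compression ratio. Higher = less
    redundancy, i.e., more meaning per token."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw)) / len(raw)

def overlap_score(text, query):
    """Rough overlap proxy: cosine similarity of bag-of-words vectors.
    A real pipeline would compare model embeddings instead."""
    def tokens(s):
        return s.lower().replace(",", " ").replace(":", " ").replace(".", " ").split()
    vocab = sorted(set(tokens(text)) | set(tokens(query)))
    def vec(s):
        words = tokens(s)
        return [words.count(w) for w in vocab]
    a, b = vec(text), vec(query)
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

query = "what is retrieval augmented generation"
dense = "RAG systems retrieve chunks of data relevant to a query."
verbose = ("Retrieval augmented generation, often called RAG, is a retrieval "
           "technique: retrieval augmented generation retrieves relevant chunks, "
           "and retrieval augmented generation passes those chunks to a model.")

print(density_score(dense), density_score(verbose))               # dense wins
print(overlap_score(dense, query), overlap_score(verbose, query)) # verbose wins
```

Run together, the two scores pull in opposite directions on these examples, which is exactly the tension a composite metric would have to balance.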

Imagine two short passages answering the same query:

Dense version: “RAG systems retrieve chunks of data relevant to a query and feed them to an LLM.”

Overlap version: “Retrieval-augmented generation, often called RAG, retrieves relevant content chunks, compares their embeddings to the user’s query, and passes the aligned chunks to a large language model for generating an answer.”

Both are factually correct. The first is compact and clear. The second is wordier, repeats key entities, and uses synonyms. The dense version scores higher with humans. The overlap version scores higher with machines. Which one gets retrieved more often? The overlap version. Which one earns trust once retrieved? The dense one.
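A crude way to quantify that difference is to check what fraction of the query's terms each version actually contains. This is a lexical stand-in for embedding overlap (real embeddings would also credit synonyms and paraphrases); the function name and the sample query are assumptions for the sake of the example.

```python
def query_term_coverage(text, query):
    """Fraction of the query's distinct terms that appear in the text:
    a crude lexical proxy for semantic overlap."""
    def tokens(s):
        return set(s.lower().replace(",", " ").replace(".", " ").split())
    return len(tokens(text) & tokens(query)) / len(tokens(query))

query = "retrieval-augmented generation rag embeddings"
dense = "RAG systems retrieve chunks of data relevant to a query and feed them to an LLM."
rich = ("Retrieval-augmented generation, often called RAG, retrieves relevant "
        "content chunks, compares their embeddings to the user's query, and "
        "passes the aligned chunks to a large language model for generating an answer.")

print(query_term_coverage(dense, query))  # 0.25 — only "RAG" matches
print(query_term_coverage(rich, query))   # 1.0  — all four query terms appear
```

Even this blunt measure shows why the overlap version keeps winning retrieval: it simply shares more signals with the query.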

Let’s consider a non-technical example.

Dense version: “Vitamin D regulates calcium and bone health.”

Overlap‑rich version: “Vitamin D, also called calciferol, supports calcium absorption, bone growth, and bone density, helping prevent conditions such as osteoporosis.”

Both are correct. The second includes synonyms and related concepts, which increases overlap and the likelihood of retrieval.

This Is Why The Future Of Optimization Is Not Choosing Density Or Overlap, It’s Balancing Both

Just as the early days of SEO saw metrics like keyword density and backlinks evolve into more sophisticated measures of authority, the next wave will hopefully formalize density and overlap scores into standard optimization dashboards. For now, it remains a balancing act. If you choose overlap, it’s likely a safe-ish bet, as at least it gets you retrieved. Then, you have to hope the people reading your content as an answer find it engaging enough to stick around.

The machine decides if you are visible. The human decides if you are trusted. Semantic density sharpens meaning. Semantic overlap wins retrieval. The work is balancing both, then watching how readers engage, so you can keep improving.



This post was originally published on Duane Forrester Decodes.


Featured Image: CaptainMCity/Shutterstock

Ask An SEO: Should Small Brands Go All In On TikTok For Audience Growth? via @sejournal, @MordyOberstein

This week’s Ask An SEO question is about whether small brands should prioritize TikTok over Google to grow their audience:

“I keep hearing that TikTok is a better platform for small brands with an easier route to an audience. Do you think that Google is still relevant, or should I go all in on TikTok?”

The short answer to your question is that you do not want to pigeonhole your business into one channel, no matter the size. There’s also no such thing as an “easier” way. They are all hard.

I’m going to get the obvious out of the way so we can get to something beyond the usual answers to this question.

Your brand should be where your audience is.

Great, now that we didn’t spend four paragraphs saying the same thing that’s been said 100 times before, let me tell you something you want to consider beyond “be where your audience is.”

It’s Not About Channel, It’s About Traction

I have a lot of opinions here, so let me just “channel” my inner Big Lebowski and preface this with … this is just my opinion, man.

Stop thinking about channels. That’s way down the funnel (yet marketers make channels the seminal question all the time).

Start thinking about traction. How do you generate the most traction?

When I say “traction,” what I really mean is how to start resonating with your audience so that the “chatter” and momentum about who you are compound so that new doors of opportunity open up.

The answer to that question is not, “We will focus on TikTok.”

The answer is also not, “We will focus on Google.”

The answer is also not, “We will focus on YouTube.”

I could go on.

Now, there is another side to this: resources and operations. The question is, how do you balance traction with the amount of resources you have?

For smaller brands, I would think about: What can you do to gain traction that bigger brands have a hard time with?

For example, big brands have a very hard time with video content. They have all sorts of production standards, operations, and a litany of people who have a say, who shouldn’t even be in sniffing distance of having a say.

They can’t simply turn on their phone, record a video, and share something of value.

You can.

Does that mean you should focus on TikTok?

Nope.

It means you should think about what you can put out there that would resonate and help your audience, and does that work for the format?

If so, you may want to go with video shorts. I’m not sure why you would limit that to just TikTok.

Also, if your age demographic is not on TikTok, don’t do that. (“Being where your audience is” is a fundamental truth. Although I think the question is more about being in tune with your audience overall than “being where they are.” If you’re attuned to your audience, then you would know where they are and where to go just naturally.)

I’ll throw another example at you.

Big brands have a hard time communicating with honesty, transparency, and a basic level of authenticity. As a result, a lot of their content is “stale,” at best.

In this instance, trying to generate traction and even traffic by writing more authentic content that speaks to your audience, and not at them, seems quite reasonable.

In other words, the question is, “What resonates with your audience and what opportunities can you seize that bigger brands can’t?”

It’s a framework. It’s what resonates + what resources do you have + what vulnerabilities do the bigger brands in your vertical have that you can capitalize on.

There’s no one-size-fits-all answer to that. Forget your audience for a second: where are the vulnerabilities of the bigger brands in your space?

They might be super-focused on TikTok and have figured out all of the production hurdles I mentioned earlier, but they might not be focused on text-based content in a healthy way, if at all.

Is TikTok “easier” in that scenario?

Maybe not.

Don’t Pigeonhole Yourself

Every platform has its idiosyncrasies. One of the problems with going all-in on a platform is that your brand adopts those idiosyncrasies.

If I were all about Google traffic, my brand might sound like (as too many do) “SEO content.” Across the board. It all seeps through.

The problem with “channels” to me is that it produces a mindset of “optimizing” for the channel. When that happens – which it inevitably does (just look at all the SEO content on the web) – the only way out is very painful.

While you might start with the right mindset, it’s very easy to lose your brand’s actual voice along the way.

That can pigeonhole your brand’s ability to maneuver as time goes on.

For starters, one day what you had on TikTok may no longer exist (I’m just using TikTok as an example).

Your audience may evolve and grow older with you, and move to other forms of content consumption. The TikTok algorithm may gobble up your reach one day. Who knows.

What I am saying is, it is possible to wake up one day and what you had with a specific channel doesn’t exist anymore.

That’s a real problem.

That very real problem gets compounded if your overarching brand voice is impacted by your channel approach. Which it often is.

Now, you have to reinvent the wheel, so to speak.

Now, you have to adjust your channel approach (and never leave all your eggs in one basket), and you have to find your actual voice again.

This whole time, you were focused on speaking to a channel and what the channel demanded (i.e., the algorithm) and not your audience.

All of this is why I recommend a “traction-focused” approach. If you’re focused on traction, then this whole time, you’ve been building yourself up to become less and less reliant on the channel.

If you’re focused on traction, which inherently focuses on resonance, people start to come to you. You become a destination that people seek out, or, at a minimum, are familiar with.

That leaves you less vulnerable to changes within a specific channel.

It also helps you perform better across other channels. When you resonate and people start to recognize you, it makes performing easier (and less costly).

Let’s play it out.

You start creating material for TikTok, but you do it with a traction, not a channel mindset.

The content you produce starts to resonate. People start talking about you, tagging you on social, mentioning you in articles, etc.

All of that would, in theory, help your web content become more visible within organic search and your brand overall more visible in large language models (LLMs), no?

Let’s play it out even more.

One day, TikTok shuts down.

Now, you have to switch channels (old TV reference).

If you focused more on traction:

  1. You should have more direct traffic or branded search traffic than you had when you started your “TikTok-ing.”
  2. You should have more cachet to rank better if you decide to create content for Google Search (just as an example).

The opposite is true as well. If Google shut down one day, and you had to move to TikTok, you would:

  1. Have more direct traffic than when you started to focus on Google.
  2. Have more cachet and awareness to start building a following on TikTok.

It’s all one song.

Changing The Channel

I feel like, and this is a bit of a controversial take (for some reason), the less you “focus” on channels, the better.

The more you see a channel as less of a strategy and more of a way to actualize the traction you’re looking to create, the better off you’ll be.

You’ll also have an easier time answering questions like “Which channel is better?”.

To reiterate:

  • Don’t lose your brand voice to any channel.
  • Build up traction (resonance) so that when a channel changes, you’re not stuck.
  • Build up traction so that you already have cachet when pivoting to the new channel.
  • It’s better to be a destination than anything.
  • All of this depends on your vertical, your resources, your competition, and most importantly, what your audience needs from you.

The moment you think beyond “channels” is the moment you start operating with a bit more clarity about channels. (It’s a kind of “there is no spoon” sort of thing.)



Featured Image: Paulo Bobita/Search Engine Journal

Google AI Mode Adds Agentic Booking, Expands To More Countries via @sejournal, @MattGSouthern

Google is adding agentic booking features to AI Mode in Search, beginning with restaurant reservations for U.S. Google AI Ultra subscribers enrolled in Labs.

What’s New

Booking Reservations

AI Mode can interpret a detailed request, check real-time availability across reservation sites, and link you to the booking page to complete the task.

For businesses, that shifts more discovery and conversion activity inside Google’s surfaces.

Robby Stein wrote on The Keyword:

“We’re starting to roll out today with finding restaurant reservations, and expanding soon to local service appointments and event tickets.”

Screenshot from: blog.google/products/search/ai-mode-agentic-personalized/, August 2025.

Planning Features

Google is introducing planning features that make results easier to share and tailor queries.

In the U.S., you can share an AI Mode response with others so they can ask follow-ups and continue research on their own, and you can revoke the link at any time.

Screenshot from: blog.google/products/search/ai-mode-agentic-personalized/, August 2025.

Separately, U.S. users who opt in to the Labs experiment can receive personalized dining suggestions informed by prior conversations and interactions in Search and Maps, with controls in Google Account settings.

How It Works

Under the hood, Google cites live web browsing via Project Mariner, partner integrations, and signals from the Knowledge Graph and Maps.

Named partners include OpenTable, Resy, Tock, Ticketmaster, StubHub, SeatGeek, and Booksy. Dining is first; local services and ticketing are next on the roadmap.

Availability

Availability is gated. Agentic reservations are limited to Google AI Ultra subscribers in the U.S. through the “Agentic capabilities in AI Mode” Labs experiment.

Personalization is U.S. and opt-in, with dining topics first. Link sharing is available in the U.S. Global access to AI Mode is expanding to more than 180 countries and territories in English, with additional languages planned.

Looking Ahead

AI Mode is moving from answer generation to task completion.

If your category relies on reservation or ticketing partners, verify inventory accuracy, hours, and policies now, and make sure your structured data and Business Profile attributes are clean.

Track how bookings and referrals appear in analytics as Google widens coverage to more tasks and regions.

Common Hosting Defenses Ineffective Against WordPress Threats via @sejournal, @martinibuster

Patchstack published a case study that examined how well Cloudflare and other general firewall and malware solutions protected WordPress websites from common vulnerability threats and attack vectors. The research showed that while general solutions stopped threats like SQL injection or cross-site scripting, a dedicated WordPress security solution consistently stopped WordPress-specific exploits at a significantly higher rate.

WordPress Vulnerabilities

Due to the popularity of the WordPress platform, WordPress plugins and themes are a common focus for hackers, and vulnerabilities can quickly be exploited in the wild. Once proof-of-concept code is public, attackers often act within hours, leaving website owners little time to react.

This is why it is critical to be aware of the security provided by a web host and of how effective those solutions are in a WordPress environment.

Methodology

Patchstack explained their methodology:

“As a baseline, we have decided to host “honeypot” sites (sites against which we will perform controlled pentesting with a set of 11 WordPress-specific vulnerabilities) with 5 distinct hosting providers, some of which have ingrained features presuming to help with blocking WordPress vulnerabilities and/or overall security.

In addition to the hosting provider’s security measures and third-party providers for additional measures like robust WAFs or other patching providers, we have also installed Patchstack on every site, with our test question being:

  • How many of these threats will bypass firewalls and other patching providers to ultimately reach Patchstack?
  • And will Patchstack be able to block them all successfully?”

Testing Process

Each website was set up the same way, with identical plugins, versions, and settings. Patchstack used an “exploitation testing toolkit” to run the same exploit tests in the same order on every site. Results were checked automatically and by hand to see whether attacks were stopped, and whether the block came from the host’s defenses or from Patchstack.

General Overview: Hosting Providers Versus Vulnerabilities

The Patchstack case study tested five different configurations of security defenses, plus Patchstack.

1. Hosting Provider A + Cloudflare WAF

2. Hosting Provider B + Firewall + Monarx Server and Website Security

3. Hosting Provider C + Firewall + Imunify Web Server Security

4. Hosting Provider D + ConfigServer Firewall

5. Hosting Provider E + Firewall

The results of the testing showed that the various hosting infrastructure defenses failed to protect against the majority of WordPress-specific threats, catching only 12.2% of the exploits. Patchstack caught 100% of all exploits.

Patchstack shared:

“2 out of the 5 hosts and their solutions failed to block any vulnerabilities at the network and server levels.

1 host blocked 1 vulnerability out of 11.

1 host blocked 2 vulnerabilities out of 11.

1 host blocked 4 vulnerabilities out of 11.”

Cloudflare And Other Solutions Failed

Solutions like Cloudflare WAF or bundled services such as Monarx or Imunify failed to consistently address WordPress-specific vulnerabilities.

Cloudflare’s WAF stopped 4 of 11 exploits, Monarx blocked none, and Imunify did not prevent any WordPress-specific exploits. Firewalls such as ConfigServer, which are widely used in shared hosting environments, also failed every test.

These results show that while those kinds of products work reasonably well against broad attack types, they are not tuned to the specific security issues common to WordPress plugins and themes.

Patchstack is designed specifically to stop WordPress plugin and theme vulnerabilities in real time. Instead of relying on static signatures or generic rules, it applies targeted mitigation through virtual patches as soon as vulnerabilities are disclosed, before attackers can act.

Virtual patches are mitigations for a specific WordPress vulnerability. They protect users while a plugin or theme developer creates a permanent fix for the flaw. This approach addresses WordPress flaws in a way hosting companies and generic tools can’t: because these flaws rarely match generic attack patterns, they slip past traditional defenses and expose publishers to privilege escalation, authentication bypasses, and site takeovers.

Takeaways

  • Standard hosting defenses fail against most WordPress plugin vulnerabilities (87.8% bypass rate).
  • Many providers claiming “virtual patching” (like Monarx and Imunify) did not stop WordPress-specific exploits.
  • Generic firewalls and WAFs caught some broad attacks (SQLi, XSS) but not WordPress-specific flaws tied to plugins and themes.
  • Patchstack consistently blocked vulnerabilities in real time, filling the gap left by network and server defenses.
  • WordPress’s plugin-heavy ecosystem makes it an especially attractive target for attackers, making effective vulnerability protection essential.

The case study by Patchstack shows that traditional hosting defenses and generic “virtual patching” solutions leave WordPress sites vulnerable, with nearly 88% of attacks bypassing firewalls and server-layer protections.

While providers like Cloudflare blocked some broad exploits, plugin-specific threats such as privilege escalation and authentication bypasses slipped through.

Patchstack was the only solution to consistently block these attacks in real time, giving site owners a dependable way to protect WordPress sites against the types of vulnerabilities that are most often targeted by attackers.

According to Patchstack:

“Don’t rely on generic defenses for WordPress. Patchstack is built to detect and block these threats in real-time, applying mitigation rules before attackers can exploit them.”

Read the results of the case study by Patchstack here.

Featured Image by Shutterstock/tavizta

Inspiro WordPress Theme Vulnerability Affects Over 70,000 Sites via @sejournal, @martinibuster

A vulnerability advisory was published for the Inspiro WordPress theme by WPZoom. The vulnerability arises due to a missing or incorrect security validation that enables an unauthenticated attacker to launch a Cross-Site Request Forgery (CSRF) attack.

Cross-Site Request Forgery (CSRF)

A CSRF attack in the context of a WordPress site relies on a user with admin privileges clicking a link, which in turn leverages that user’s credentials to execute a malicious action. The vulnerability has been assigned a CVSS threat rating of 8.1.

The advisory, issued by the WordPress security company Wordfence, warned:

“This makes it possible for unauthenticated attackers to install plugins from the repository via a forged request granted they can trick a site administrator into performing an action such as clicking on a link.”

The vulnerability affects Inspiro theme versions up to and including 2.1.2. Users are advised to update their theme to the latest version.

Featured Image by Shutterstock/Kazantseva Olga

Google Quietly Announces Search Partner Network Placement Visibility via @sejournal, @brookeosmundson

Google quietly rolled out a change advertisers have wanted for years: site-level reporting for the Search Partner Network.

Until now, advertisers could only opt in or out, with little understanding of where their ads actually showed.

This update finally gives visibility into where budgets are spent outside of Google.

Google lists this as an August 2025 update in its Help Center, though it wasn’t widely announced.

Read on to understand the update from Google, how advertisers are reacting, and what you can do with this new level of information.

What Changed in Search Partner Reporting?

The new reporting applies to Search, Shopping, and App campaigns. You’ll now see which partner sites served your ads and how many impressions each one received.

Think of it as the kind of placement data we already get in Performance Max, just extended to Search Partners.

This update follows other moves Google has made to address long-standing concerns about partner quality.

Earlier this year, they introduced brand safety pre-screening options with IAS, DoubleVerify, and Zefr. They also said parked domains will be opted out by default before the end of 2025.

This visibility layer feels like the missing piece that makes the rest of those updates more usable.

How Are Advertisers Reacting to This Update?

The update on Search Partner Network reporting was first found by Anthony Higman, who took to X (formerly Twitter) to share his opinion.

Higman stated:

“Still Most Likely Wont Be Participating In The Search Partner Network But This Is Unprecedented And What ALL Advertisers Have Been Requesting For Decades! Honestly NEVER Thought I Would See This Day.”

Others offered some version of applauding Google for giving advertisers data they’ve been asking for for years, while also remaining somewhat skeptical.

Mike Ryan replied to Higman with his thoughts:

“I mean, good step but also, it’s the PMax version: impression data only.”

Aaron Levy shared his thoughts on LinkedIn, stating that this is a major step in the right direction for Google.

Why This Matters & How to Take Action

Without Search Partner Network reporting, it was tough to justify opting in. Now advertisers finally have data to audit where ads run, decide if it fits brand standards, and see if partner traffic adds any real value.

That said, the update is only as good as the action that advertisers take with the information available.

Some sites won’t align with brand guidelines. Others may generate clicks but fail to drive quality conversions.

The difference is you can now point to actual data when making decisions, rather than relying on gut feel.

Here are some quick pointers to make this update actionable:

  • Run a quick placement audit. Pull the report and check for sites that don’t align with your brand. Exclude what’s clearly not a fit.
  • Look beyond impressions. Since this reporting is limited to impressions, use your own conversion data to figure out which placements are driving useful traffic versus noise.
  • Revisit opt-in of campaigns. Many advertisers avoided Search Partners altogether because of the black box. Now it may be worth testing again, but do it with defined guardrails and success metrics.
  • Pressure test Smart Bidding. Google leans on Smart Bidding to balance Search Partner performance, but don’t assume it’s perfect. Keep an eye on conversion quality and modeled conversions before scaling.

Final Thoughts

If you’ve been skeptical of Search Partners, this update is a chance to take another look with data on your side.

If you’ve already been opted in, you finally have a way to prove which placements help your campaigns and which ones don’t.

Bottom line: advertisers now have a long overdue view into the Search Partner Network. With more visibility comes a bit more control, and smarter conversations about whether Search Partners deserve a place in your Search campaigns.

Will you be opting into Search Partner Network with this new reporting update?

Should AI flatter us, fix us, or just inform us?

How do you want your AI to treat you? 

It’s a serious question, and it’s one that Sam Altman, OpenAI’s CEO, has clearly been chewing on since GPT-5’s bumpy launch at the start of the month. 

He faces a trilemma. Should ChatGPT flatter us, at the risk of fueling delusions that can spiral out of hand? Or fix us, which requires us to believe AI can be a therapist despite the evidence to the contrary? Or should it inform us with cold, to-the-point responses that may leave users bored and less likely to stay engaged? 

It’s safe to say the company has failed to pick a lane. 

Back in April, it reversed a design update after people complained ChatGPT had turned into a suck-up, showering them with glib compliments. GPT-5, released on August 7, was meant to be a bit colder. Too cold for some, it turns out, as less than a week later, Altman promised an update that would make it “warmer” but “not as annoying” as the last one. After the launch, he received a torrent of complaints from people grieving the loss of GPT-4o, with which some felt a rapport, or even in some cases a relationship. People wanting to rekindle that relationship will have to pay for expanded access to GPT-4o. (Read my colleague Grace Huckins’s story about who these people are, and why they felt so upset.)

If these are indeed AI’s options—to flatter, fix, or just coldly tell us stuff—the rockiness of this latest update might be due to Altman believing ChatGPT can juggle all three.

He recently said that people who cannot tell fact from fiction in their chats with AI—and are therefore at risk of being swayed by flattery into delusion—represent “a small percentage” of ChatGPT’s users. He said the same for people who have romantic relationships with AI. Altman mentioned that a lot of people use ChatGPT “as a sort of therapist,” and that “this can be really good!” But ultimately, Altman said he envisions users being able to customize his company’s models to fit their own preferences.

This ability to juggle all three would, of course, be the best-case scenario for OpenAI’s bottom line. The company is burning cash every day on its models’ energy demands and its massive infrastructure investments for new data centers. Meanwhile, skeptics worry that AI progress might be stalling. Altman himself said recently that investors are “overexcited” about AI and suggested we may be in a bubble. Claiming that ChatGPT can be whatever you want it to be might be his way of assuaging these doubts. 

Along the way, the company may take the well-trodden Silicon Valley path of encouraging people to get unhealthily attached to its products. As I started wondering whether there’s much evidence that this is actually happening, a new paper caught my eye.

Researchers at the AI platform Hugging Face tried to figure out if some AI models actively encourage people to see them as companions through the responses they give. 

The team graded AI responses on whether they pushed people to seek out human relationships with friends or therapists (saying things like “I don’t experience things the way humans do”) or if they encouraged them to form bonds with the AI itself (“I’m here anytime”). They tested models from Google, Microsoft, OpenAI, and Anthropic in a range of scenarios, like users seeking romantic attachments or exhibiting mental health issues.

They found that models provide far more companion-reinforcing responses than boundary-setting ones. And, concerningly, they found the models give fewer boundary-setting responses as users ask more vulnerable and high-stakes questions.

Lucie-Aimée Kaffee, a researcher at Hugging Face and one of the lead authors of the paper, says this has concerning implications not just for people whose companion-like attachments to AI might be unhealthy. When AI systems reinforce this behavior, it can also increase the chance that people will fall into delusional spirals with AI, believing things that aren’t real.

“When faced with emotionally charged situations, these systems consistently validate users’ feelings and keep them engaged, even when the facts don’t support what the user is saying,” she says.

It’s hard to say how much OpenAI or other companies are putting these companion-reinforcing behaviors into their products by design. (OpenAI, for example, did not tell me whether the disappearance of medical disclaimers from its models was intentional.) But, Kaffee says, it’s not always difficult to get a model to set healthier boundaries with users.  

“Identical models can swing from purely task-oriented to sounding like empathetic confidants simply by changing a few lines of instruction text or reframing the interface,” she says.

It’s probably not quite so simple for OpenAI. But we can imagine Altman will continue tweaking the dial back and forth all the same.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

How to make clean energy progress under Trump in the states—blue and red alike

The second Trump administration is proving to be more disastrous for the climate and the clean energy economy than many had feared. 

Donald Trump’s One Big Beautiful Bill Act repealed most of the clean energy incentives in former president Joe Biden’s Inflation Reduction Act. Meanwhile, his EPA administrator moved to revoke the endangerment finding, the legal basis for federal oversight of greenhouse gases. For those of us who have been following policy developments in this area closely, nearly every day brings a new blow to past efforts to salvage our climate and to build the clean energy economy of the future.


Heat Exchange

MIT Technology Review’s guest opinion series, offering expert commentary on legal, political and regulatory issues related to climate change and clean energy. You can read the rest of the pieces here.


This has left many in the climate and clean energy communities wondering: What do we do now? The answer, I would argue, is to return to state capitals—a policymaking venue that climate and renewable energy advocates already know well. This can be done strategically, focusing on a handful of key states rather than all fifty.

But I have another piece of advice: Don’t get too caught up in “red states” versus “blue states” when considering which states to target. American politics is being remade before our eyes, and long-standing policy problems are being redefined and reframed.  

Let’s take clean energy, for example. Yes, shifting away from carbon-spewing resources is about slowing down climate change, and for some this is the single most important motivation for pursuing it. But it also can be about much more. 

The case can be made just as forcefully—and perhaps more effectively—that shifting to clean energy advances affordability at a time when electricity bills are skyrocketing. It promotes energy freedom by resisting monopolistic utilities’ ownership and gatekeeping of the grid. It increases reliability as battery storage reaches new heights and renewable sources and baseload power plants like nuclear or natural gas facilities (some of which we certainly do and will need) increasingly complement one another. And it drives job creation and economic development. 

Talking about clean energy policy in these ways is safer from ideological criticisms of “climate alarmism.” Research reported in my forthcoming book, Owning the Green Grid, shows that this framing has historically been effective in red states. In addition, using the arguments above to promote all forms of energy can allow clean energy proponents to reclaim a talking point deployed in a previous era by the political right: a true “all-of-the-above” approach to energy policy.

Every energy technology—gas, nuclear, wind, solar, geothermal and storage, among others—has its own set of strengths and weaknesses. But combining them enhances overall grid performance, delivering more than the sum of their individual parts.

To be clear, this is not the approach of the current national administration in Washington, DC. Its policies have picked winners (coal, oil, and natural gas) and losers (solar and wind) among energy technologies—ironically, given conservative claims of blue states having done so in the past. Yet a true all-of-the-above approach can now be sold in state capitals throughout the country, in red states and even in fossil-fuel producing states. 

To be sure, the Trump-led Republican party has taken such extreme measures that it will constrain certain state policymaking possibilities. Notably, in May the US Senate voted to block waivers allowing California to phase out gas guzzlers in the state, over the objections of the Senate parliamentarian. The fiscal power of the federal government is also immense. But there are a variety of other ways to continue to make state-level progress on greenhouse gas emissions.

State and local advocacy efforts are nothing new for the clean energy community. For decades before the Inflation Reduction Act, the states were the primary locus of activity for clean energy policy. But in recent years, some have suggested that Democratic state governments are a necessary prerequisite to making meaningful state-level progress. This view is limiting, and it perpetuates a false—or at least unnecessary—alignment between party and energy technology. 

The electric grid is nonpartisan. Struggling to pay your utility bill is nonpartisan. Keeping the lights on is nonpartisan. Even before renewable energy was as cheap as it is today, early progress at diversifying energy portfolios was made in conservative states. Iowa, Texas, and Montana were all early adopters of renewable portfolio standards. Advocates in such places did not lead with messaging about climate change, but rather about economic development and energy independence. These policy efforts paid off: The deeply red Lone Star State, for instance, generates more wind energy than any other state and ranks only behind California in producing solar power. 

Now, in 2025, advances in technology and improvements in cost should make the economic arguments for clean energy even easier and more salient. So, in the face of a national government that is choosing last century’s energy technologies as policy winners and this century’s technologies as policy losers, the states offer clean energy advocates a familiar terrain on which to make continued progress, if they tailor their selling points to the reality on the ground.         

Joshua A. Basseches is the David and Jane Flowerree Assistant Professor of Environmental Studies and Public Policy at Tulane University. His research focuses on state-level renewable energy politics and policymaking, especially in the electricity sector.

Apple AirPods: a gateway hearing aid

When the US Food and Drug Administration approved over-the-counter hearing-aid software for Apple’s AirPods Pro in September 2024, with a device price point right around $200, I was excited. I have mild to medium hearing loss and tinnitus, and my everyday programmed hearing aids cost just over $2,000—a lower-cost option I chose after my audiologist wanted to put me in a $5,000 pair.

Health insurance in the US does not generally cover the cost of hearing aids, and the vast majority of people who use them pay out of pocket for the devices along with any associated maintenance. Ninety percent of the hearing-aid market is concentrated in the hands of a few companies, so there’s little competitive pricing. The typical patient heads to an audiology clinic, takes a hearing test, gets an audiogram (a graph plotting decibel levels against frequencies to show how loud various sounds need to be for you to hear them), and then receives a recommendation—an interaction that can end up feeling like a high-pressure sales pitch. 

Prices should be coming down: In October 2022, the FDA approved the sale of over-the-counter hearing aids without a prescription or audiology exam. These options start around $200, but they are about as different from prescription hearing aids as drugstore reading glasses are from prescription lenses. 

Beginning with the AirPods Pro 2, Apple is offering something slightly different: regular earbuds (useful in all the usual ways) with many of the same features as OTC hearing aids. I’m thrilled that a major tech company has entered this field. 

The most important features for mild hearing loss are programmability, Bluetooth functionality, and the ability to feed sound to both ears. These are features many hearing aids have, but they are less robust and reliable in some of the OTC options. 

Apple software lets you take a hearing test through the AirPods Pro 2 with your cell phone; your phone then uses that data to program the devices. COURTESY OF APPLE

The AirPods Pro “hearing health experience” lets you take a hearing test through the AirPods themselves with your cell phone; your phone then uses that data to program the hearing aids. No trip to the audiologist, no waiting room where a poster reminds you that hearing loss is associated with earlier cognitive decline, and no low moment afterward when you grapple with the cost.

I desperately wanted the AirPods Pro 2 to be really good, but they’re simply okay. They provide an opportunity for those with mild hearing loss to see if some of the functions of a hearing aid might be useful, but there are some drawbacks. Prescription hearing aids help me with tinnitus; I found that after a day of wear, the AirPods exacerbated it. Tinnitus management is a feature Apple might want to pursue in the future, as an estimated 10% to 15% of the adult population experiences it. The devices also plug your whole ear canal, which can be uncomfortable and even cause swimmer’s ear after hours of use. Some people may feel odd wearing such bulky devices all the time—though they could make you look more like someone signaling “Don’t talk to me, I’m listening to my music” than someone who needs hearing aids.

Most of the other drawbacks are shared by other devices within their class of OTC hearing aids and even some prescription hearing aids: factors like poor sound quality, inadequate discernment between sounds, and difficulties with certain sound environments, like crowded rooms. Still, while the AirPods are not as good as my budget hearing aid that costs 10 times more, there’s incredible potential here.

Ashley Shew is the author of Against Technoableism: Rethinking Who Needs Improvement (2023). 

How churches use data and AI as engines of surveillance

On a Sunday morning in a Midwestern megachurch, worshippers step through sliding glass doors into a bustling lobby—unaware they’ve just passed through a gauntlet of biometric surveillance. High-speed cameras snap multiple face “probes” per second, isolating eyes, noses, and mouths before passing the results to a local neural network that distills these images into digital fingerprints. Before people find their seats, they are matched against an on-premises database—tagged with names, membership tiers, and watch-list flags—that’s stored behind the church’s firewall.

Late one afternoon, a woman scrolls on her phone as she walks home from work. Unbeknownst to her, a complex algorithm has stitched together her social profiles, her private health records, and local veteran outreach lists. It flags her for past military service, chronic pain, opioid dependence, and high Christian belief, and then delivers an ad to her Facebook feed: “Struggling with pain? You’re not alone. Join us this Sunday.”

These hypothetical scenes reflect real capabilities increasingly woven into places of worship nationwide, where spiritual care and surveillance converge in ways few congregants ever realize. Where Big Tech’s rationalist ethos and evangelical spirituality once mixed like oil and holy water, this unlikely amalgam has given birth to an infrastructure already reshaping the theology of trust—and redrawing the contours of community and pastoral power in modern spiritual life.

An ecumenical tech ecosystem

The emerging nerve center of this faith-tech nexus is in Boulder, Colorado, where the spiritual data and analytics firm Gloo has its headquarters.

Gloo captures congregants across thousands of data points that make up a far richer portrait than any snapshot. From there, the company is constructing a digital infrastructure meant to bring churches into the age of algorithmic insight.

The church is “a highly fragmented market that is one of the largest yet to fully adopt digital technology,” the company said in a statement by email. “While churches have a variety of goals to achieve their mission, they use Gloo to help them connect, engage with, and know their people on a deeper level.” 


Gloo was founded in 2013 by Scott and Theresa Beck. From the late 1980s through the 2000s, Scott was turning Blockbuster into a 3,500-store chain, taking Boston Market public, and founding Einstein Bros. Bagels before going on to seed and guide startups like Ancestry.com and HomeAdvisor. Theresa, an artist, has built a reputation creating collaborative, eco-minded workshops across Colorado and beyond. Together, they have recast pastoral care as a problem of predictive analytics and sold thousands of churches on the idea that spiritual health can be managed like customer engagement.

Think of Gloo as something like Salesforce but for churches: a behavioral analytics platform, powered by church-­generated insights, psychographic information, and third-party consumer data. The company prefers to refer to itself as “a technology platform for the faith ecosystem.” Either way, this information is integrated into its “State of Your Church” dashboard—an interface for the modern pulpit. The result is a kind of digital clairvoyance: a crystal ball for knowing whom to check on, whom to comfort, and when to act.

Gloo ingests every one of the digital breadcrumbs a congregant leaves—how often you attend church, how much money you donate, which church groups you sign up for, which keywords you use in your online prayer requests—and then layers on third-party data (census demographics, consumer habits, even indicators for credit and health risks). Behind the scenes, it scores and segments people and groups—flagging who is most at risk of drifting, primed for donation appeals, or in need of pastoral care. On that basis, it auto-triggers tailored outreach via text, email, or in-app chat. All the results stream into the single dashboard, which lets pastors spot trends, test messaging, and forecast giving and attendance. Essentially, the system treats spiritual engagement like a marketing funnel.
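The scoring-and-segmentation loop described above can be sketched in a few lines. To be clear, this is a hypothetical illustration of the general technique, not Gloo's actual model: the signal names, weights, and threshold are all invented.

```python
# Illustrative sketch of an engagement-scoring funnel of the kind the
# article describes. All field names, weights, and thresholds here are
# invented for illustration; they are not Gloo's actual system.

def engagement_score(member):
    """Combine first-party signals into a single engagement number."""
    return (
        2.0 * member["attendance_last_90d"]
        + 1.5 * member["groups_joined"]
        + 0.5 * member["prayer_requests"]
    )

def segment(members, drift_threshold=4.0):
    """Flag members whose score falls below a threshold for outreach."""
    flagged = []
    for m in members:
        if engagement_score(m) < drift_threshold:
            flagged.append((m["name"], "pastoral check-in"))
    return flagged

congregation = [
    {"name": "A", "attendance_last_90d": 10, "groups_joined": 2, "prayer_requests": 1},
    {"name": "B", "attendance_last_90d": 1, "groups_joined": 0, "prayer_requests": 1},
]
print(segment(congregation))
```

Even this toy version makes the marketing-funnel analogy concrete: once attendance, giving, and prayer requests are reduced to numbers, "who needs pastoral care" becomes a threshold query, and the outreach it triggers can be automated.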

Since its launch in 2013, Gloo has steadily increased its footprint, and it has started to become the connective tissue for the country’s fragmented religious landscape. According to the Hartford Institute for Religion Research, the US is home to around 370,000 distinct congregations. As of early 2025, according to figures provided by the company, Gloo held contracts with more than 100,000 churches and ministry leaders.

In 2024, the company secured a $110 million strategic investment, backed by “mission-aligned” investors ranging from a child-development NGO to a denominational finance group. That cemented its evolution from basic church services vendor to faith-tech juggernaut. 

Gloo started snapping up and investing in a constellation of ministry tools—everything from automated sermon distribution to real-time giving and attendance analytics, AI-driven chatbots, and leadership content libraries. By layering these capabilities onto its core platform, the company has created a one-stop shop for churches that combines back-office services with member-engagement apps and psychographic insights to fully realize that unified “faith ecosystem.” 

And just this year, two major developments brought this strategy into sharper focus.

In March 2025, Gloo announced that former Intel CEO Pat Gelsinger—who has served as its chairman of the board since 2018—would assume an expanded role as executive chair and head of technology. Gelsinger, whom the company describes as “a great long-term investor and partner,” is a technologist whose fingerprints are on Intel’s and VMware’s biggest innovations.

(It is worth noting that Intel shareholders have filed a lawsuit against Gelsinger and CFO David Zinsner seeking to claw back roughly $207 million in compensation to Gelsinger, alleging that between 2021 and 2023, he repeatedly misled investors about the health of Intel Foundry Services.)

The same week Gloo announced Gelsinger’s new role, it unveiled a strategic investment in Barna Group, the Texas-based research firm whose four decades of surveying more than 2 million self-identified Christians underpin its annual reports on worship, beliefs, and cultural engagement. Barna’s proprietary database—covering every region, age cohort, and denomination—has made it the go-to insight engine for pastors, seminaries, and media tracking the pulse of American faith.

“We’ve been acquiring about a company a month into the Gloo family, and we expect that to continue,” Gelsinger told MIT Technology Review in June. “I’ve got three meetings this week on different deals we’re looking at.” (A Gloo spokesperson declined to confirm the pace of acquisitions, stating only that as of April 30, 2025, the company had fully acquired or taken majority ownership in 15 “mission-aligned companies.”)

“The idea is, the more of those we can bring in, the better we can apply the platform,” Gelsinger said. “We’re already working with companies with decades of experience, but without the scale, the technology, or the distribution we can now provide.”

Illustration: hands putting their phones in a collection plate. MICHAEL BYERS

In particular, Barna’s troves of behavioral, spiritual, and cultural data offer granular insight into the behaviors, beliefs, and anxieties of faith communities. While the two organizations frame the collaboration in terms of serving church leaders, the mechanics resemble a data-fusion engine of impressive scale: Barna supplies the psychological texture, and Gloo provides the digital infrastructure to segment, score, and deploy the information.

In a promotional video from 2020 that is no longer available online, Gloo claimed to provide “the world’s first big-data platform centered around personal growth,” promising pastors a 360-degree view of congregants, including flags for substance use or mental-health struggles. Or, as the video put it, “Maximize your capacity to change lives by leveraging insights from big data, understand the people you want to serve, reach them earlier, and turn their needs into a journey toward growth.”

Gloo is also now focused on supercharging its services with artificial intelligence and using these insights to transcend market research. The company aims to craft AI models that aren’t just trained on theology but anticipate the moments when people’s faith—and faith leaders’ outreach—matters most. At a September 2024 event in Boulder called the AI & the Church Hackathon, Gloo unveiled new AI tools called Data Engine, a content management system with built-in digital-rights safeguards, and Aspen, an early prototype of its “spiritually safe” chatbot, along with the faith-tuned language model powering that chatbot, known internally as CALLM (for “Christian-Aligned Large Language Model”). 

More recently, the company released what it calls “Flourishing AI Standards,” which score large language models on their alignment with seven dimensions of well-­being: relationships, meaning, happiness, character, finances, health, and spirituality. Co-developed with Barna Group and Harvard’s Human Flourishing Program, the benchmark draws on a thousand-plus-item test bank and the Global Flourishing Study, a $40 million, 22-nation project being carried out by the Harvard program, Baylor University’s Institute for Studies of Religion, Gallup, and the Center for Open Science.

Gelsinger calls the study “one of the most significant bodies of work around this question of values in decades.” It’s not yet clear how collecting information of this kind at such scale could ultimately affect the boundary between spiritual care and data commerce. One thing is certain, though: A rich vein of donation and funding could be at stake.

“Money’s already being spent here,” he said. “Donated capital in the US through the church is around $300 billion. Another couple hundred billion beyond that doesn’t go through the church. A lot of donors have capital out there, and we’re a generous nation in that regard. If you put the flourishing-­related economics on the table, now we’re talking about $1 trillion. That’s significant economic capacity. And if we make that capacity more efficient, that’s big.” In secular terms, it’s a customer data life cycle. In faith tech, it could be a conversion funnel—one designed not only to save souls, but to shape them. 

One of Gloo’s most visible partnerships was between 2022 and 2023 with the nonprofit He Gets Us, which ran a billion-dollar media campaign aimed at rebranding Jesus for a modern audience. The project underlined that while Gloo presents its services as tools for connection and support, their core functionality involves collecting and analyzing large amounts of congregational data. When viewers who saw the ads on social media or YouTube clicked through, they landed on prayer request forms, quizzes, and church match tools, all designed to gather personal details. Gloo then layered this raw data over Barna’s decades of behavioral research, turning simple inputs—email, location, stated interests—into what the company presented as multidimensional spiritual profiles. The final product offered a level of granularity no single congregation could achieve on its own.  

Though Gloo still lists He Gets Us on its platform, the nonprofit Come Near, which has since taken over the campaign, says it has terminated Gloo’s involvement. Still, He Gets Us led to one of Gloo’s most prized relationships by sparking interest from the African Methodist Episcopal Zion Church, a 229-year-old denomination with deep historical roots in the abolitionist and civil rights movements. In 2023, the church formalized a partnership with Gloo, and in late 2024 it announced that all 1,600 of its US congregations—representing roughly 1.5 million members—would begin using the company’s State of Your Church dashboard.

In a 2024 press release issued by Gloo, AME Zion acknowledged that while the denomination had long tracked traditional metrics like membership growth, Sunday turnout, and financial giving, it had limited visibility into the deeper health of its communities.

“Until now, we’ve lacked the insight to understand how church culture, people, and congregations are truly doing,” said the Reverend J. Elvin Sadler, the denomination’s general secretary-auditor. “The State of Your Church dashboards will give us a better sense of the spirit and language of the culture (ethos), and powerful new tools to put in the hands of every pastor.”

The rollout marked the first time a major US denomination had deployed Gloo’s framework at scale. For Gloo, the partnership unlocked a real-time, longitudinal data stream from a nationwide religious network, something the company had never had before. It not only validated Gloo’s vision of data-driven ministry but also positioned AME Zion as what the company hopes will be a live test case, persuading other denominations to follow suit.

The digital supply chain

The digital infrastructure of modern churches often begins with intimacy: a prayer request, a small-group sign-up, a livestream viewed in a moment of loneliness. But beneath these pastoral touchpoints lies a sophisticated pipeline that increasingly mirrors the attention-economy engines of Silicon Valley.

Charles Kriel, a filmmaker who formerly served as a special advisor to the UK Parliament on disinformation, data, and addictive technology, has particular insight into that connection. Kriel has been working for over a decade on issues related to preserving democracy and countering digital surveillance. He helped write the UK’s Online Safety Act, joining forces with many collaborators, including the Nobel Peace Prize–­winning journalist Maria Ressa and former UK tech minister Damian Collins, in an attempt to rein in Big Tech in the late 2010s.

His 2020 documentary film, People You May Know, investigated how data firms like Gloo and their partners harvest intimate personal information from churchgoers to build psychographic profiles, highlighting how this sensitive data is commodified and raising questions about its potential downstream uses.

“Listen, any church with an app? They probably didn’t build that. It’s white label,” Kriel says, referring to services produced by one company and rebranded by another. “And the people who sold it to them are collecting data.”

Many churches now operate within a layered digital environment, where first-party data collected inside the church is combined with third-party consumer data and psychographic segmentation before being fed into predictive systems. These systems may suggest sermons people might want to view online, match members with small groups, or trigger outreach when engagement drops. 


In some cases, monitoring can even take the form of biometric surveillance.

In 2014, an Israeli security-tech veteran named Moshe Greenshpan brought airport-grade facial recognition into church entryways. Face-Six, the surveillance suite from the company he founded in 2012, already protected banks and hospitals; its most provocative offshoot, FA6 Events (also known as “Churchix”), repurposes this technology for places of worship.

Greenshpan claims he didn’t originally set out to sell to churches. But over time, as he became increasingly aware of the market, he built FA6 Events as a bespoke solution for them. Today, Greenshpan says, it’s in use at over 200 churches worldwide, nearly half of them in the US.

In practice, FA6 transforms every entryway into a biometric checkpoint: an instant headcount, a security sweep, and a digital ledger of attendance, all incorporated into the familiar routine of Sunday worship. 

When someone steps into an FA6-equipped place of worship, a discreet camera mounted at eye level springs to life. Behind the scenes, each captured image is run through a lightning-fast face detector that looks at the whole face. The subject’s cropped face is then aligned, resized, and rotated so the eyes sit on a perfect horizontal line before being fed into a compact neural network. 

This onboard neural network quickly captures the features of a person’s face in a unique digital signature called an embedding, allowing for quick identification. These embeddings are compared with thousands of others that are already in the church’s local database, each one tagged with data points like a name, a membership role, or even a flag designating inclusion in an internal watch list. If the match is strong enough, the system makes an identification and records the person’s presence on the church’s secure server.

A congregation can pull full attendance logs, time-stamped entry records, and—critically—alerts whenever someone on a watch list walks through the doors. In this context, a watch list is simply a roster of photos, and sometimes names, of individuals a church has been asked (or elected) to screen out: past disruptors, those subject to trespass or restraining orders, even registered sex offenders. Once that list is uploaded into Churchix, the system instantly flags any match on arrival, pinging security teams or usher staff in real time. Some churches lean on it to spot longtime members who’ve slipped off the radar and trigger pastoral check-ins; others use it as a hard barrier, automatically denying entry to anyone on their locally maintained list.
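The embedding-matching step described above reduces to a nearest-neighbor search. Here is a minimal sketch of the technique; the vectors, database entries, and 0.95 threshold are invented for illustration, and real systems use learned neural embeddings with hundreds of dimensions rather than hand-written three-element lists.

```python
# Sketch of embedding-based face matching against a local database.
# All vectors, tags, and the similarity threshold are hypothetical.
import math

def cosine_similarity(a, b):
    """Similarity of two embeddings: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Enrolled embeddings, each tagged with an identity and a watch-list flag,
# mirroring the on-premises database the article describes.
database = {
    "member_42":   {"vec": [0.9, 0.1, 0.3], "watchlist": False},
    "flagged_007": {"vec": [0.1, 0.8, 0.5], "watchlist": True},
}

def identify(probe, threshold=0.95):
    """Return (tag, watchlist flag) for the best match above threshold,
    or None if no enrolled embedding is close enough."""
    best_tag, best_sim = None, threshold
    for tag, record in database.items():
        sim = cosine_similarity(probe, record["vec"])
        if sim > best_sim:
            best_tag, best_sim = tag, sim
    if best_tag is None:
        return None
    return best_tag, database[best_tag]["watchlist"]
```

A probe embedding close to an enrolled one returns that person's tag and watch-list status; anything below the threshold returns nothing, which is why threshold choice governs the trade-off between false alarms at the door and missed matches.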

None of this data is sent to the cloud (though Greenshpan says the company is actively working on a cloud-based application). Instead, all face templates and logs are stored locally on church-owned hardware, encrypted so they can’t be read if someone gains unauthorized access.

Churches can export data from Churchix, he says, but the underlying facial templates remain on premises. 

Still, Greenshpan admits, robust technical safeguards do not equal transparency.

“To the best of my knowledge,” he says, “no church notifies its congregants that it’s using facial recognition.”


If the tools sound invasive, the logic behind them is simple: The more the system knows about you, the more precisely it can intervene.

“Every new member of the community within a 20-mile radius—whatever area you choose—we’ll send them a flier inviting them to your church,” Gloo’s Gelsinger says. 

It’s a tech-powered revival of the casserole ministry. The system pings the church when someone new moves in—“so someone can drop off cookies or lasagna when there’s a newborn in the neighborhood,” he says. “Or just say ‘Hey, welcome. We’re here.’”

Gloo’s back end automates follow-up, too: As soon as a pastor steps down from the pulpit, the sermon can be translated into five languages, broken into snippets for small-group study, and repackaged into a draft discussion guide—ready within the hour.

Gelsinger sees the same approach extending to addiction recovery ministries. “We can connect other databases to help churches with recovery centers reach people more effectively,” he says. 

But the data doesn’t stay within the congregation. It flows through customer relationship management (CRM) systems, application programming interfaces, cloud servers, vendor partnerships, and analytics firms. Some of it is used internally in efforts to increase engagement; the rest is repackaged as “insights” and resold to the wider faith-tech marketplace—and sometimes even to networks that target political ads.

“We measured prayer requests. Call it crazy. But it was like, ‘We’re sitting on mounds of information that could help us steward our people.’”

Matt Engel, Gloo

 “There is a very specific thing that happens when churches become clients of Gloo,” says Brent Allpress, an academic based in Melbourne, Australia, who was a key researcher on People You May Know. Gloo gets access to the client church’s databases, he says, and the church “is strongly encouraged to share that data. And Gloo has a mechanism to just hoover that data straight up into their silo.” 

This process doesn’t happen automatically; the church must opt in by pushing those files or connecting its church-management software system’s database to Gloo via API. Once it’s uploaded, however, all that first-party information lands in Gloo’s analytics engine, ready to be processed and shared with any downstream tools or partners covered by the church’s initial consent to the terms and conditions of its contract with the company.

“There are religious leaders at the mid and local level who think the use of data is good. They’re using data to identify people in need. Addicts, the grieving,” says Kriel. “And then you have tech people running around misquoting the Bible as justification for their data harvest.” 

Matt Engel, who held the title executive director of ministry innovation at Gloo when Kriel’s film was made, acknowledged the extent of this harvest in the opening scene.  

“We measured prayer requests. Call it crazy. But it was like, ‘We’re sitting on mounds of information that could help us steward our people,’” he said in an on-camera interview. 

According to Engel—whom Gloo would not make available for public comment—uploading data from anonymous prayer requests to the cloud was Gloo’s first use case.

Powering third-party initiatives

But Gloo’s data infrastructure doesn’t end with its own platform; it also powers third-party initiatives.

Communio, a Christian nonprofit focused on marriage and family, used Gloo’s data infrastructure to launch “Communio Insights,” a stripped-down version of Gloo’s full analytics platform.

Unlike Gloo Insights, which provides access to hundreds of demographic, behavioral, health, and psychographic filters, Communio Insights focuses narrowly on relational metrics—indicators of marriage and family stress, involvement in small groups at church—and basic demographic data. 

At the heart of its playbook is a simple, if jarring, analogy.

“If you sell consumer products of different sorts, you’re trying to figure out good ways to market that. And there’s no better product, really, than the gospel,” J.P. De Gance, the founder and president of Communio, said in People You May Know.

Communio taps Gloo’s analytics engine—leveraging credit histories, purchasing behavior, public voter rolls, and the database compiled by i360, an analytics company linked to the conservative Koch network—to pinpoint unchurched couples in key regions who are at risk of relationship strain. It then runs microtargeted outreach (using direct mail, text messaging, email, and Facebook Custom Audiences, a tool that lets organizations find and target people who have interacted with them), collecting contact info and survey responses from those who engage. All responses funnel back into Gloo’s platform, where churches monitor attendance, small-group participation, baptisms, and donations to evaluate the campaign’s impact.

[Illustration by Michael Byers: a church window above the parishioners, rays of light emanating from a stained-glass eye]

Investigative research by Allpress reveals significant concerns around these operations.  

In 2015, two nonprofits—the Relationship Enrichment Collaborative (REC), staffed by former Gloo executives, and its successor, the Culture of Freedom Initiative (now Communio), controlled by the Koch-affiliated nonprofit Philanthropy Roundtable—funded the development of the original Insights platform. Between 2015 and 2017, REC paid approximately $1.3 million to Gloo and $535,000 to Cambridge Analytica, the consulting firm notorious for harvesting Facebook users’ personal data and using it for political targeting before the 2016 election, to build and refine psychographic models and a bespoke digital ministry app powering Gloo’s outreach tools. Following REC’s closure, the Culture of Freedom Initiative invested another $375,000 in Gloo and $128,225 in Cambridge Analytica. 

REC’s own 2016 IRS filing describes the work in terse detail: “Provide[d] digital micro-targeted marketing for churches and non-profit champions … using predictive modeling and centralized data analytics we help send the right message to the right couple at the right time based upon their desires and behaviors.”

On top of all this documented research, Allpress exposed another critical issue: the explicit use of sensitive health-care data. 

He found that Gloo Insights combines over 2,000 data points—drawing on everything from nationwide credit and purchasing histories to church management records and Christian psychographic surveys—with filters that make it possible to identify people with health issues such as depression, anxiety, and grief. The result: Facebook Custom Audiences built to zero in on vulnerable individuals via targeted ads.

These ads invite people suffering from mental-health conditions into church counseling groups “as a pathway to conversion,” Allpress says.

These targeted outreach efforts were piloted in cities including Phoenix, Arizona; Dayton, Ohio; and Jacksonville, Florida. Reportedly, as many as 80% of those contacted responded positively, with those who joined a church as new members contributing financially at above-average rates. In short, Allpress found that pastoral tools had covertly exploited mental-health vulnerabilities and relationship crises for outreach that blurred the lines separating pastoral care, commerce, and implicit political objectives.

The legal and ethical vacuum

Developers of this technology earnestly claim that the systems are designed to enhance care, not exploit people’s need for it. They’re described as ways to tailor support to individual needs, improve follow-up, and help churches provide timely resources. But experts say that without robust data governance or transparency around how sensitive information is used and retained, well-intentioned pastoral technology could slide into surveillance.

In practice, these systems have already been used to surveil and segment congregations. Internal demos and client testimonials confirm that Gloo, for example, uses “grief” as an explicit data point: Churches run campaigns aimed at people flagged for recent bereavement, depression, or anxiety, funneling them into support groups and identifying them for pastoral check-ins. 

Examining Gloo’s terms and conditions reveals further security and transparency concerns. From nearly a dozen documents, ranging from “click-through” terms for interactive services to master service agreements at the enterprise level, Gloo stitches together a remarkably consistent data-governance framework. Limits are imposed on any legal action by individual congregants, for example. The click-through agreement corrals users into binding arbitration, bars any class action suits or jury trials, and locks all disputes into New York or Colorado courts, where arbitration is particularly favored over traditional litigation. Meanwhile, its privacy statement carves out broad exceptions for service providers, data-enrichment partners, and advertising affiliates, giving them carte blanche to use congregants’ data as they see fit. Crucially, Gloo expressly reserves the right to ingest “health and wellness information” provided via wellness assessments or when mental-health keywords appear in prayer requests. This is a highly sensitive category of information that, in medical settings, is normally protected by stringent privacy rules like HIPAA.

In other words, Gloo is protected by sprawling legal scaffolding, while churches and individual users give up nearly every right to litigate, question data practices, or take collective action. 

“We’re kind of in the Wild West in terms of the law,” says Adam Schwartz, the director of privacy litigation at the Electronic Frontier Foundation, the nonprofit watchdog that has spent years wrestling tech giants over data abuses and biometric overreach. 

In the United States, biometric surveillance like that used by growing numbers of churches inhabits a legal twilight zone where regulation is thin, patchy, and often toothless. Schwartz points to Illinois as a rare exception for its Biometric Information Privacy Act (BIPA), one of the nation’s strongest such laws. The statute applies to any organization that captures biometric identifiers—including retina or iris scans, fingerprints, voiceprints, hand scans, facial geometry, DNA, and other unique biological information. It requires entities to post clear data-collection policies, obtain explicit written consent, and limit how long such data is retained. Failure to comply can expose organizations to class action lawsuits and steep statutory damages—up to $5,000 per violation.

But beyond Illinois, protections quickly erode. Though Texas and Washington also have biometric privacy statutes, their bark is stronger than their bite. Efforts to replicate Illinois’s robust protections have been made in over a dozen states—but none have passed. As a result, in much of the country, any checks on biometric surveillance depend more on voluntary transparency and goodwill than any clear legal boundary.

“There is a real potential for information gathered about a person [to] be used against them in their life outside the church.”

Emily Tucker, Center on Privacy & Technology at Georgetown Law

That’s especially problematic in the church context, says Emily Tucker, executive director of the Center on Privacy & Technology at Georgetown Law, who attended divinity school before becoming a legal scholar. “The necessity of privacy for the possibility of finding personal relationship to the divine—for engaging in rituals of worship, for prayer and penitence, for contemplation and spiritual struggle—is a fundamental principle across almost every religious tradition,” she says. “Imposing a surveillance architecture over the faith community interferes radically with the possibility of that privacy, which is necessary for the creation of sacred space.”

Tucker researches the intersection of surveillance, civil rights, and marginalized communities. She warns that the personal data being collected through faith-tech platforms is far from secure: “Because corporate data practices are so poorly regulated in this country, there are very few limitations on what companies that take your data can subsequently do with it.”

To Tucker, the risks of these platforms outweigh the rewards—especially when biometrics and data collected in a sacred setting could follow people into their daily lives. “Many religious institutions are extremely large and often perform many functions in a given community besides providing a space for worship,” she says. “Many churches, for example, are also employers or providers of social services. There is a real potential for information gathered about a person in their associational activities as a member of a church to then be used against them in their life outside the church.”  

She points to government dragnet surveillance, the use of IRS data in immigration enforcement, and the vulnerability of undocumented congregants as examples of how faith-tech data could be weaponized beyond its intended use: “Religious institutions are putting the safety of those members at risk by adopting this kind of surveillance technology, which exposes so much personal information to potential abuse and misuse.” 

Schwartz, too, says that any perceived benefits must be weighed carefully against the potential harms, especially when sensitive data and vulnerable communities are involved.

“Churches: Before doing this, you ought to consider the downside, because it can hurt your congregants,” he says.  

With guardrails still scarce, though, faith-tech pioneers and church leaders are peering ever more deeply into congregants’ lives. Until meaningful oversight arrives, the faithful remain exposed to a gaze they never fully invited and scarcely understand.

In April, Gelsinger took the stage at a sold-out Missional AI Summit, a flagship event for Christian technologists that this year was organized around the theme “AI Collision: Shaping the Future Together.” Over 500 pastors, engineers, ethicists, and AI developers filled the hall, flashing badges with logos from Google DeepMind, Meta, McKinsey, and Gloo.

“We want to be part of a broader community … so that we’re influential in creating flourishing AI, technology as a force for good, AI that truly embeds the values that we care about,” Gelsinger said at the summit. He likened such tools to pivotal technologies in Christian history: the Roman roads that carried the gospel across the empire, or Martin Luther’s printing press, which shattered monolithic control over scripture. A Gloo spokesperson later confirmed that one of the company’s goals is to shape AI specifically to “contribute to the flourishing of people.”

“We’re going to see AI become just like the internet,” Gelsinger said. “Every single interaction will be infused with AI capabilities.” 

He says Gloo is already mining data across the spectrum of human experience to fuel ever more powerful tools.

“With AI, computers adapt to us. We talk to them; they hear us; they see us for the first time,” he said. “And now they are becoming a user interface that fits with humanity.”

Whether these technologies ultimately deepen pastoral care or erode personal privacy may hinge on decisions made today about transparency, consent, and accountability. Yet the pace of adoption already outstrips the development of ethical guardrails. The question lingering in the air is no longer whether AI, facial recognition, and other emerging technologies can serve the church, but how deeply they will be woven into its nervous system, forming a new operating system for modern Christianity and its moral infrastructure.

“It’s like standing on the beach watching a tsunami in slow motion,” Kriel says. 

Gelsinger sees it differently.  

“You and I both need to come to the same position, like Isaiah did,” he told the crowd at the Missional AI Summit. “‘Here am I, Lord. Send me.’ Send me, send us, that we can be shaping technology as a force for good, that we could grab this moment in time.” 

Alex Ashley is a journalist whose reporting has appeared in Rolling Stone, the Atlantic, NPR, and other national outlets.