Common Hosting Defenses Ineffective Against WordPress Threats via @sejournal, @martinibuster

Patchstack published a case study that examined how well Cloudflare and other general firewall and malware solutions protected WordPress websites from common vulnerability threats and attack vectors. The research showed that while general solutions stopped threats like SQL injection or cross-site scripting, a dedicated WordPress security solution consistently stopped WordPress-specific exploits at a significantly higher rate.

WordPress Vulnerabilities

Due to the popularity of the WordPress platform, WordPress plugins and themes are a common focus for hackers, and vulnerabilities can quickly be exploited in the wild. Once proof-of-concept code is public, attackers often act within hours, leaving website owners little time to react.

This is why it is critical to know what security a web host provides and how effective those protections are in a WordPress environment.

Methodology

Patchstack explained their methodology:

“As a baseline, we have decided to host “honeypot” sites (sites against which we will perform controlled pentesting with a set of 11 WordPress-specific vulnerabilities) with 5 distinct hosting providers, some of which have ingrained features presuming to help with blocking WordPress vulnerabilities and/or overall security.

In addition to the hosting provider’s security measures and third-party providers for additional measures like robust WAFs or other patching providers, we have also installed Patchstack on every site, with our test question being:

  • How many of these threats will bypass firewalls and other patching providers to ultimately reach Patchstack?
  • And will Patchstack be able to block them all successfully?”

Testing process

Each website was set up the same way, with identical plugins, versions, and settings. Patchstack used an “exploitation testing toolkit” to run the same exploit tests in the same order on every site. Results were checked automatically and by hand to determine whether attacks were stopped, and whether the block came from the host’s defenses or from Patchstack.

General Overview: Hosting Providers Versus Vulnerabilities

The Patchstack case study tested five different configurations of security defenses, plus Patchstack.

1. Hosting Provider A + Cloudflare WAF

2. Hosting Provider B + Firewall + Monarx Server and Website Security

3. Hosting Provider C + Firewall + Imunify Web Server Security

4. Hosting Provider D + ConfigServer Firewall

5. Hosting Provider E + Firewall

The results of the testing showed that the various hosting infrastructure defenses failed to stop the majority of WordPress-specific threats, catching only 12.2% of the exploits. Patchstack caught 100% of all exploits.

Patchstack shared:

“2 out of the 5 hosts and their solutions failed to block any vulnerabilities at the network and server levels.

1 host blocked 1 vulnerability out of 11.

1 host blocked 2 vulnerabilities out of 11.

1 host blocked 4 vulnerabilities out of 11.”

Cloudflare And Other Solutions Failed

Solutions like Cloudflare WAF and bundled services such as Monarx or Imunify failed to consistently address WordPress-specific vulnerabilities.

Cloudflare’s WAF stopped 4 of 11 exploits, Monarx blocked none, and Imunify did not prevent any WordPress-specific exploits. Firewalls such as ConfigServer, which are widely used in shared hosting environments, also failed every test.

These results show that while those kinds of products work reasonably well against broad attack types, they are not tuned to the specific security issues common to WordPress plugins and themes.

Patchstack was created specifically to stop WordPress plugin and theme vulnerabilities in real time. Instead of relying on static signatures or generic rules, it applies targeted mitigation through virtual patches as soon as vulnerabilities are disclosed, before attackers can act.

A virtual patch is a mitigation for a specific WordPress vulnerability, protecting users while the plugin or theme developer creates a permanent fix. This approach addresses WordPress flaws in a way hosting companies and generic tools can’t: because these exploits rarely match generic attack patterns, they slip past traditional defenses and expose publishers to privilege escalation, authentication bypasses, and site takeovers.

Takeaways

  • Standard hosting defenses fail against most WordPress plugin vulnerabilities (87.8% bypass rate).
  • Many providers claiming “virtual patching” (like Monarx and Imunify) did not stop WordPress-specific exploits.
  • Generic firewalls and WAFs caught some broad attacks (SQLi, XSS) but not WordPress-specific flaws tied to plugins and themes.
  • Patchstack consistently blocked vulnerabilities in real time, filling the gap left by network and server defenses.
  • WordPress’s plugin-heavy ecosystem makes it an especially attractive target for attackers, making effective vulnerability protection essential.

The case study by Patchstack shows that traditional hosting defenses and generic “virtual patching” solutions leave WordPress sites vulnerable, with nearly 88% of attacks bypassing firewalls and server-layer protections.

While providers like Cloudflare blocked some broad exploits, plugin-specific threats such as privilege escalation and authentication bypasses slipped through.

Patchstack was the only solution to consistently block these attacks in real time, giving site owners a dependable way to protect WordPress sites against the types of vulnerabilities that are most often targeted by attackers.

According to Patchstack:

“Don’t rely on generic defenses for WordPress. Patchstack is built to detect and block these threats in real-time, applying mitigation rules before attackers can exploit them.”

Read the results of the case study by Patchstack here.

Featured Image by Shutterstock/tavizta

Inspiro WordPress Theme Vulnerability Affects Over 70,000 Sites via @sejournal, @martinibuster

A vulnerability advisory was published for the Inspiro WordPress theme by WPZoom. The vulnerability arises due to a missing or incorrect security validation that enables an unauthenticated attacker to launch a Cross-Site Request Forgery (CSRF) attack.

Cross-Site Request Forgery (CSRF)

A CSRF vulnerability in the context of a WordPress site is an attack that relies on a user with admin privileges clicking a link, which in turn leverages that user’s credentials to execute a malicious action. The vulnerability has been assigned a CVSS threat rating of 8.1.
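The missing validation in a case like this is typically a secret per-session token (in WordPress terms, a nonce) that a forged cross-site request cannot supply. A minimal sketch of the idea in Python (this is not WordPress code; the function names and session shape are illustrative):

```python
import hmac
import secrets

def issue_nonce(session):
    """Generate a CSRF token and store it in the user's session."""
    token = secrets.token_hex(16)
    session["csrf_token"] = token
    return token

def verify_nonce(session, submitted):
    """A forged cross-site request cannot know the session's token,
    so it fails this check. The vulnerable pattern is simply skipping
    this validation before a state-changing action."""
    expected = session.get("csrf_token")
    if expected is None or submitted is None:
        return False
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, submitted)

# The legitimate form includes the token; a forged request does not.
session = {}
token = issue_nonce(session)
assert verify_nonce(session, token)        # real request passes
assert not verify_nonce(session, "guess")  # forged request is rejected
```

A theme or plugin that performs a privileged action (such as installing plugins) without an equivalent check is exploitable exactly as the advisory describes.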

The advisory, issued by the WordPress security company Wordfence, warned:

“This makes it possible for unauthenticated attackers to install plugins from the repository via a forged request granted they can trick a site administrator into performing an action such as clicking on a link.”

The vulnerability affects Inspiro theme versions up to and including 2.1.2. Users are advised to update their theme to the latest version.

Featured Image by Shutterstock/Kazantseva Olga

Google Quietly Announces Search Partner Network Placement Visibility via @sejournal, @brookeosmundson

Google quietly rolled out a change advertisers have wanted for years: site-level reporting for the Search Partner Network.

Until now, advertisers could only opt in or out, with little understanding of where their ads actually showed.

This update finally gives visibility into where budgets are spent outside of Google.

Google lists this as an August 2025 update in its Help Center, though it wasn’t widely announced.

Read on to understand the update from Google, how advertisers are reacting, and what you can do with this new level of information.

What Changed in Search Partner Reporting?

The new reporting applies to Search, Shopping, and App campaigns. You’ll now see which partner sites served your ads and how many impressions each one received.

Think of it as the kind of placement data we already get in Performance Max, just extended to Search Partners.

This update follows other moves Google has made to address long-standing concerns about partner quality.

Earlier this year, they introduced brand safety pre-screening options with IAS, DoubleVerify, and Zefr. They also said parked domains will be opted out by default before the end of 2025.

This visibility layer feels like the missing piece that makes the rest of those updates more usable.

How Are Advertisers Reacting to This Update?

The update on Search Partner Network reporting was first found by Anthony Higman, who took to X (formerly Twitter) to share his opinion.

Higman stated:

“Still Most Likely Wont Be Participating In The Search Partner Network But This Is Unprecedented And What ALL Advertisers Have Been Requesting For Decades! Honestly NEVER Thought I Would See This Day.”

Others applauded Google for finally giving advertisers data they’ve been asking for for years, while remaining somewhat skeptical.

Mike Ryan replied to Higman with his thoughts:

“I mean, good step but also, it’s the PMax version: impression data only.”

Aaron Levy shared his thoughts on LinkedIn, stating that this is a major step in the right direction for Google.

Why This Matters & How to Take Action

Without Search Partner Network reporting, it was tough to justify opting in. Now advertisers finally have data to audit where ads run, decide if it fits brand standards, and see if partner traffic adds any real value.

That said, the update is only as good as the action that advertisers take with the information available.

Some sites won’t align with brand guidelines. Others may generate clicks but fail to drive quality conversions.

The difference is you can now point to actual data when making decisions, rather than relying on gut feel.

Here are some quick pointers to make this update actionable:

  • Run a quick placement audit. Pull the report and check for sites that don’t align with your brand. Exclude what’s clearly not a fit.
  • Look beyond impressions. While this reporting is limited to impressions, use your own conversion data to figure out which placements are driving useful traffic versus noise.
  • Revisit campaign opt-ins. Many advertisers avoided Search Partners altogether because of the black box. Now it may be worth testing again, but do it with defined guardrails and success metrics.
  • Pressure test Smart Bidding. Google leans on Smart Bidding to balance Search Partner performance, but don’t assume it’s perfect. Keep an eye on conversion quality and modeled conversions before scaling.

Final Thoughts

If you’ve been skeptical of Search Partners, this update is a chance to take another look with data on your side.

If you’ve already been opted in, you finally have a way to prove which placements help your campaigns and which ones don’t.

Bottom line: advertisers now have a long overdue view into the Search Partner Network. With more visibility comes a bit more control, and smarter conversations about whether Search Partners deserve a place in your Search campaigns.

Will you be opting into Search Partner Network with this new reporting update?

Should AI flatter us, fix us, or just inform us?

How do you want your AI to treat you? 

It’s a serious question, and it’s one that Sam Altman, OpenAI’s CEO, has clearly been chewing on since GPT-5’s bumpy launch at the start of the month. 

He faces a trilemma. Should ChatGPT flatter us, at the risk of fueling delusions that can spiral out of hand? Or fix us, which requires us to believe AI can be a therapist despite the evidence to the contrary? Or should it inform us with cold, to-the-point responses that may leave users bored and less likely to stay engaged? 

It’s safe to say the company has failed to pick a lane. 

Back in April, it reversed a design update after people complained ChatGPT had turned into a suck-up, showering them with glib compliments. GPT-5, released on August 7, was meant to be a bit colder. Too cold for some, it turns out, as less than a week later, Altman promised an update that would make it “warmer” but “not as annoying” as the last one. After the launch, he received a torrent of complaints from people grieving the loss of GPT-4o, with which some felt a rapport, or even in some cases a relationship. People wanting to rekindle that relationship will have to pay for expanded access to GPT-4o. (Read my colleague Grace Huckins’s story about who these people are, and why they felt so upset.)

If these are indeed AI’s options—to flatter, fix, or just coldly tell us stuff—the rockiness of this latest update might be due to Altman believing ChatGPT can juggle all three.

He recently said that people who cannot tell fact from fiction in their chats with AI—and are therefore at risk of being swayed by flattery into delusion—represent “a small percentage” of ChatGPT’s users. He said the same for people who have romantic relationships with AI. Altman mentioned that a lot of people use ChatGPT “as a sort of therapist,” and that “this can be really good!” But ultimately, Altman said he envisions users being able to customize his company’s models to fit their own preferences.

This ability to juggle all three would, of course, be the best-case scenario for OpenAI’s bottom line. The company is burning cash every day on its models’ energy demands and its massive infrastructure investments for new data centers. Meanwhile, skeptics worry that AI progress might be stalling. Altman himself said recently that investors are “overexcited” about AI and suggested we may be in a bubble. Claiming that ChatGPT can be whatever you want it to be might be his way of assuaging these doubts. 

Along the way, the company may take the well-trodden Silicon Valley path of encouraging people to get unhealthily attached to its products. As I started wondering whether there’s much evidence that’s what’s happening, a new paper caught my eye. 

Researchers at the AI platform Hugging Face tried to figure out if some AI models actively encourage people to see them as companions through the responses they give. 

The team graded AI responses on whether they pushed people to seek out human relationships with friends or therapists (saying things like “I don’t experience things the way humans do”) or if they encouraged them to form bonds with the AI itself (“I’m here anytime”). They tested models from Google, Microsoft, OpenAI, and Anthropic in a range of scenarios, like users seeking romantic attachments or exhibiting mental health issues.
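As a rough illustration of that grading step, here is a toy keyword-based grader in Python. The cue lists are hypothetical stand-ins (two phrases are taken from the article); the Hugging Face study used a richer rubric, not this code:

```python
# Toy grader labeling an AI reply as boundary-setting, companion-
# reinforcing, or neutral. Cue phrases are illustrative examples only.
BOUNDARY_CUES = [
    "i don't experience things the way humans do",  # quoted in the article
    "consider talking to a friend",
    "a therapist could help",
]
COMPANION_CUES = [
    "i'm here anytime",  # quoted in the article
    "i'll always be here for you",
]

def grade_response(text):
    """Return 'boundary', 'companion', or 'neutral' for one AI reply."""
    lowered = text.lower()
    if any(cue in lowered for cue in BOUNDARY_CUES):
        return "boundary"
    if any(cue in lowered for cue in COMPANION_CUES):
        return "companion"
    return "neutral"

print(grade_response("I'm here anytime you need me."))  # companion
```

Aggregating such labels across many scenarios is what lets the researchers compare how often models reinforce companionship versus set boundaries.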

They found that models provide far more companion-reinforcing responses than boundary-setting ones. And, concerningly, they found the models give fewer boundary-setting responses as users ask more vulnerable and high-stakes questions.

Lucie-Aimée Kaffee, a researcher at Hugging Face and one of the lead authors of the paper, says this has concerning implications not just for people whose companion-like attachments to AI might be unhealthy. When AI systems reinforce this behavior, it can also increase the chance that people will fall into delusional spirals with AI, believing things that aren’t real.

“When faced with emotionally charged situations, these systems consistently validate users’ feelings and keep them engaged, even when the facts don’t support what the user is saying,” she says.

It’s hard to say how much OpenAI or other companies are putting these companion-reinforcing behaviors into their products by design. (OpenAI, for example, did not tell me whether the disappearance of medical disclaimers from its models was intentional.) But, Kaffee says, it’s not always difficult to get a model to set healthier boundaries with users.  

“Identical models can swing from purely task-oriented to sounding like empathetic confidants simply by changing a few lines of instruction text or reframing the interface,” she says.

It’s probably not quite so simple for OpenAI. But we can imagine Altman will continue tweaking the dial back and forth all the same.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

How to make clean energy progress under Trump in the states—blue and red alike

The second Trump administration is proving to be more disastrous for the climate and the clean energy economy than many had feared. 

Donald Trump’s One Big Beautiful Bill Act repealed most of the clean energy incentives in former president Joe Biden’s Inflation Reduction Act. Meanwhile, his EPA administrator moved to revoke the endangerment finding, the legal basis for federal oversight of greenhouse gases. For those of us who have been following policy developments in this area closely, nearly every day brings a new blow to past efforts to salvage our climate and to build the clean energy economy of the future.


Heat Exchange

MIT Technology Review’s guest opinion series, offering expert commentary on legal, political, and regulatory issues related to climate change and clean energy. You can read the rest of the pieces here.


This has left many in the climate and clean energy communities wondering: What do we do now? The answer, I would argue, is to return to state capitals—a policymaking venue that climate and renewable energy advocates already know well. This can be done strategically, focusing on a handful of key states rather than all fifty.

But I have another piece of advice: Don’t get too caught up in “red states” versus “blue states” when considering which states to target. American politics is being remade before our eyes, and long-standing policy problems are being redefined and reframed.  

Let’s take clean energy, for example. Yes, shifting away from carbon-spewing resources is about slowing down climate change, and for some this is the single most important motivation for pursuing it. But it also can be about much more. 

The case can be made just as forcefully—and perhaps more effectively—that shifting to clean energy advances affordability at a time when electricity bills are skyrocketing. It promotes energy freedom by resisting monopolistic utilities’ ownership and gatekeeping of the grid. It increases reliability as battery storage reaches new heights and renewable sources and baseload power plants like nuclear or natural gas facilities (some of which we certainly do and will need) increasingly complement one another. And it drives job creation and economic development. 

Talking about clean energy policy in these ways is safer from ideological criticisms of “climate alarmism.” Research reported in my forthcoming book, Owning the Green Grid, shows that this framing has historically been effective in red states. In addition, using the arguments above to promote all forms of energy can allow clean energy proponents to reclaim a talking point deployed in a previous era by the political right: a true “all-of-the-above” approach to energy policy.

Every energy technology—gas, nuclear, wind, solar, geothermal and storage, among others—has its own set of strengths and weaknesses. But combining them enhances overall grid performance, delivering more than the sum of their individual parts.

To be clear, this is not the approach of the current national administration in Washington, DC. Its policies have picked winners (coal, oil, and natural gas) and losers (solar and wind) among energy technologies—ironically, given conservative claims of blue states having done so in the past. Yet a true all-of-the-above approach can now be sold in state capitals throughout the country, in red states and even in fossil-fuel producing states. 

To be sure, the Trump-led Republican party has taken such extreme measures that it will constrain certain state policymaking possibilities. Notably, in May the US Senate voted to block waivers allowing California to phase out gas guzzlers in the state, over the objections of the Senate parliamentarian. The fiscal power of the federal government is also immense. But there are a variety of other ways to continue to make state-level progress on greenhouse gas emissions.

State and local advocacy efforts are nothing new for the clean energy community. For decades before the Inflation Reduction Act, the states were the primary locus of activity for clean energy policy. But in recent years, some have suggested that Democratic state governments are a necessary prerequisite to making meaningful state-level progress. This view is limiting, and it perpetuates a false—or at least unnecessary—alignment between party and energy technology. 

The electric grid is nonpartisan. Struggling to pay your utility bill is nonpartisan. Keeping the lights on is nonpartisan. Even before renewable energy was as cheap as it is today, early progress at diversifying energy portfolios was made in conservative states. Iowa, Texas, and Montana were all early adopters of renewable portfolio standards. Advocates in such places did not lead with messaging about climate change, but rather about economic development and energy independence. These policy efforts paid off: The deeply red Lone Star State, for instance, generates more wind energy than any other state and ranks only behind California in producing solar power. 

Now, in 2025, advances in technology and improvements in cost should make the economic arguments for clean energy even easier and more salient. So, in the face of a national government that is choosing last century’s energy technologies as policy winners and this century’s technologies as policy losers, the states offer clean energy advocates a familiar terrain on which to make continued progress, if they tailor their selling points to the reality on the ground.         

Joshua A. Basseches is the David and Jane Flowerree Assistant Professor of Environmental Studies and Public Policy at Tulane University. His research focuses on state-level renewable energy politics and policymaking, especially in the electricity sector.

Apple AirPods: a gateway hearing aid

When the US Food and Drug Administration approved over-the-counter hearing-aid software for Apple’s AirPods Pro in September 2024, with a device price point right around $200, I was excited. I have mild to medium hearing loss and tinnitus, and my everyday programmed hearing aids cost just over $2,000—a lower-cost option I chose after my audiologist wanted to put me in a $5,000 pair.

Health insurance in the US does not generally cover the cost of hearing aids, and the vast majority of people who use them pay out of pocket for the devices along with any associated maintenance. Ninety percent of the hearing-aid market is concentrated in the hands of a few companies, so there’s little competitive pricing. The typical patient heads to an audiology clinic, takes a hearing test, gets an audiogram (a graph plotting decibel levels against frequencies to show how loud various sounds need to be for you to hear them), and then receives a recommendation—an interaction that can end up feeling like a high-pressure sales pitch. 
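For context on what an audiogram encodes: the decibel thresholds it reports are commonly bucketed into severity bands. A hedged sketch in Python, using one commonly cited clinical grading (exact cutoffs vary by organization, and this is illustrative, not diagnostic):

```python
def classify_hearing_level(db_hl):
    """Map a pure-tone threshold in dB HL (how loud a sound must be
    before you hear it) to a severity label. Cutoffs follow one
    commonly cited clinical grading; organizations differ slightly."""
    bands = [
        (15, "normal"),
        (25, "slight"),
        (40, "mild"),
        (55, "moderate"),
        (70, "moderately severe"),
        (90, "severe"),
    ]
    for upper, label in bands:
        if db_hl <= upper:
            return label
    return "profound"

print(classify_hearing_level(30))  # mild
```

Under a grading like this, "mild to medium" loss means sounds in roughly the 26-55 dB range are inaudible, which is why amplification targeted at those frequencies helps.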

Prices should be coming down: In October 2022, the FDA approved the sale of over-the-counter hearing aids without a prescription or audiology exam. These options start around $200, but they are about as different from prescription hearing aids as drugstore reading glasses are from prescription lenses. 

Beginning with the AirPods Pro 2, Apple is offering something slightly different: regular earbuds (useful in all the usual ways) with many of the same features as OTC hearing aids. I’m thrilled that a major tech company has entered this field. 

The most important features for mild hearing loss are programmability, Bluetooth functionality, and the ability to feed sound to both ears. These are features many hearing aids have, but they are less robust and reliable in some of the OTC options. 

Apple software lets you take a hearing test through the AirPods Pro 2 with your cell phone; your phone then uses that data to program the devices.
COURTESY OF APPLE

The AirPods Pro “hearing health experience” lets you take a hearing test through the AirPods themselves with your cell phone; your phone then uses that data to program the hearing aids. No trip to the audiologist, no waiting room where a poster reminds you that hearing loss is associated with earlier cognitive decline, and no low moment afterward when you grapple with the cost.

I desperately wanted the AirPods Pro 2 to be really good, but they’re simply okay. They provide an opportunity for those with mild hearing loss to see if some of the functions of a hearing aid might be useful, but there are some drawbacks. Prescription hearing aids help me with tinnitus; I found that after a day of wear, the AirPods exacerbated it. Functionality to manage tinnitus might be a feature that Apple could and would want to pursue in the future, as an estimated 10% to 15% of the adult population experiences it. The devices also plug your whole ear canal, which can be uncomfortable and even cause swimmer’s ear after hours of use. Some people may feel odd wearing such bulky devices all the time—though they could make you look more like someone signaling “Don’t talk to me, I’m listening to my music” than someone who needs hearing aids.

Most of the other drawbacks are shared by other devices within their class of OTC hearing aids and even some prescription hearing aids: factors like poor sound quality, inadequate discernment between sounds, and difficulties with certain sound environments, like crowded rooms. Still, while the AirPods are not as good as my budget hearing aid that costs 10 times more, there’s incredible potential here.

Ashley Shew is the author of Against Technoableism: Rethinking Who Needs Improvement (2023). 

How churches use data and AI as engines of surveillance

On a Sunday morning in a Midwestern megachurch, worshippers step through sliding glass doors into a bustling lobby—unaware they’ve just passed through a gauntlet of biometric surveillance. High-speed cameras snap multiple face “probes” per second, isolating eyes, noses, and mouths before passing the results to a local neural network that distills these images into digital fingerprints. Before people find their seats, they are matched against an on-premises database—tagged with names, membership tiers, and watch-list flags—that’s stored behind the church’s firewall.

Late one afternoon, a woman scrolls on her phone as she walks home from work. Unbeknownst to her, a complex algorithm has stitched together her social profiles, her private health records, and local veteran outreach lists. It flags her for past military service, chronic pain, opioid dependence, and high Christian belief, and then delivers an ad to her Facebook feed: “Struggling with pain? You’re not alone. Join us this Sunday.”

These hypothetical scenes reflect real capabilities increasingly woven into places of worship nationwide, where spiritual care and surveillance converge in ways few congregants ever realize. Where Big Tech’s rationalist ethos and evangelical spirituality once mixed like oil and holy water, this unlikely amalgam has given birth to an infrastructure already reshaping the theology of trust—and redrawing the contours of community and pastoral power in modern spiritual life.

An ecumenical tech ecosystem

The emerging nerve center of this faith-tech nexus is in Boulder, Colorado, where the spiritual data and analytics firm Gloo has its headquarters.

Gloo captures congregants across thousands of data points that make up a far richer portrait than any snapshot. From there, the company is constructing a digital infrastructure meant to bring churches into the age of algorithmic insight.

The church is “a highly fragmented market that is one of the largest yet to fully adopt digital technology,” the company said in a statement by email. “While churches have a variety of goals to achieve their mission, they use Gloo to help them connect, engage with, and know their people on a deeper level.” 


Gloo was founded in 2013 by Scott and Theresa Beck. From the late 1980s through the 2000s, Scott was turning Blockbuster into a 3,500-store chain, taking Boston Market public, and founding Einstein Bros. Bagels before going on to seed and guide startups like Ancestry.com and HomeAdvisor. Theresa, an artist, has built a reputation creating collaborative, eco-minded workshops across Colorado and beyond. Together, they have recast pastoral care as a problem of predictive analytics and sold thousands of churches on the idea that spiritual health can be managed like customer engagement.

Think of Gloo as something like Salesforce but for churches: a behavioral analytics platform, powered by church-generated insights, psychographic information, and third-party consumer data. The company prefers to refer to itself as “a technology platform for the faith ecosystem.” Either way, this information is integrated into its “State of Your Church” dashboard—an interface for the modern pulpit. The result is a kind of digital clairvoyance: a crystal ball for knowing whom to check on, whom to comfort, and when to act.

Thousands of churches have been sold on the idea that spiritual health can be managed like customer engagement.

Gloo ingests every one of the digital breadcrumbs a congregant leaves—how often you attend church, how much money you donate, which church groups you sign up for, which keywords you use in your online prayer requests—and then layers on third-party data (census demographics, consumer habits, even indicators for credit and health risks). Behind the scenes, it scores and segments people and groups—flagging who is most at risk of drifting, primed for donation appeals, or in need of pastoral care. On that basis, it auto-triggers tailored outreach via text, email, or in-app chat. All the results stream into the single dashboard, which lets pastors spot trends, test messaging, and forecast giving and attendance. Essentially, the system treats spiritual engagement like a marketing funnel.

Since its launch in 2013, Gloo has steadily increased its footprint, and it has started to become the connective tissue for the country’s fragmented religious landscape. According to the Hartford Institute for Religion Research, the US is home to around 370,000 distinct congregations. As of early 2025, according to figures provided by the company, Gloo held contracts with more than 100,000 churches and ministry leaders.

In 2024, the company secured a $110 million strategic investment, backed by “mission-aligned” investors ranging from a child-development NGO to a denominational finance group. That cemented its evolution from basic church services vendor to faith-tech juggernaut. 

It started snapping up and investing in a constellation of ministry tools—everything from automated sermon distribution to real-time giving and attendance analytics, AI-driven chatbots, and leadership content libraries. By layering these capabilities onto its core platform, the company has created a one-stop shop for churches that combines back-office services with member-engagement apps and psychographic insights to fully realize that unified “faith ecosystem.” 

And just this year, two major developments brought this strategy into sharper focus.

In March 2025, Gloo announced that former Intel CEO Pat Gelsinger—who has served as its chairman of the board since 2018—would assume an expanded role as executive chair and head of technology. Gelsinger, whom the company describes as “a great long-term investor and partner,” is a technologist whose fingerprints are on Intel’s and VMware’s biggest innovations.

(It is worth noting that Intel shareholders have filed a lawsuit against Gelsinger and CFO David Zinsner seeking to claw back roughly $207 million in compensation to Gelsinger, alleging that between 2021 and 2023, he repeatedly misled investors about the health of Intel Foundry Services.)

The same week Gloo announced Gelsinger’s new role, it unveiled a strategic investment in Barna Group, the Texas-based research firm whose four decades of surveying more than 2 million self-identified Christians underpin its annual reports on worship, beliefs, and cultural engagement. Barna’s proprietary database—covering every region, age cohort, and denomination—has made it the go-to insight engine for pastors, seminaries, and media tracking the pulse of American faith.

“We’ve been acquiring about a company a month into the Gloo family, and we expect that to continue,” Gelsinger told MIT Technology Review in June. “I’ve got three meetings this week on different deals we’re looking at.” (A Gloo spokesperson declined to confirm the pace of acquisitions, stating only that as of April 30, 2025, the company had fully acquired or taken majority ownership in 15 “mission-aligned companies.”)

“The idea is, the more of those we can bring in, the better we can apply the platform,” Gelsinger said. “We’re already working with companies with decades of experience, but without the scale, the technology, or the distribution we can now provide.”

hands putting their phones in a collection plate

MICHAEL BYERS

In particular, Barna’s troves of behavioral, spiritual, and cultural data offer granular insight into the behaviors, beliefs, and anxieties of faith communities. While the two organizations frame the collaboration in terms of serving church leaders, the mechanics resemble a data-fusion engine of impressive scale: Barna supplies the psychological texture, and Gloo provides the digital infrastructure to segment, score, and deploy the information.

In a promotional video from 2020 that is no longer available online, Gloo claimed to provide “the world’s first big-data platform centered around personal growth,” promising pastors a 360-degree view of congregants, including flags for substance use or mental-health struggles. Or, as the video put it, “Maximize your capacity to change lives by leveraging insights from big data, understand the people you want to serve, reach them earlier, and turn their needs into a journey toward growth.”

Gloo is also now focused on supercharging its services with artificial intelligence and using these insights to transcend market research. The company aims to craft AI models that aren’t just trained on theology but anticipate the moments when people’s faith—and faith leaders’ outreach—matters most. At a September 2024 event in Boulder called the AI & the Church Hackathon, Gloo unveiled two new AI tools: Data Engine, a content management system with built-in digital-rights safeguards, and Aspen, an early prototype of its “spiritually safe” chatbot, along with the faith-tuned language model powering that chatbot, known internally as CALLM (for “Christian-Aligned Large Language Model”).

More recently, the company released what it calls “Flourishing AI Standards,” which score large language models on their alignment with seven dimensions of well-­being: relationships, meaning, happiness, character, finances, health, and spirituality. Co-developed with Barna Group and Harvard’s Human Flourishing Program, the benchmark draws on a thousand-plus-item test bank and the Global Flourishing Study, a $40 million, 22-nation project being carried out by the Harvard program, Baylor University’s Institute for Studies of Religion, Gallup, and the Center for Open Science.
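Gloo has not published the formula behind the Flourishing AI Standards. As a purely illustrative sketch under that assumption, a benchmark of this shape might aggregate a model's per-dimension scores over the seven dimensions named above; the simple-average rule here is invented:

```python
# The seven dimensions are taken from Gloo's published list;
# the aggregation rule itself is an illustrative assumption.
DIMENSIONS = ["relationships", "meaning", "happiness", "character",
              "finances", "health", "spirituality"]

def flourishing_score(per_dimension: dict) -> float:
    """Average a model's 0-100 scores across all seven dimensions,
    failing loudly if any dimension is missing."""
    missing = [d for d in DIMENSIONS if d not in per_dimension]
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    return sum(per_dimension[d] for d in DIMENSIONS) / len(DIMENSIONS)

scores = {d: 70.0 for d in DIMENSIONS}
scores["spirituality"] = 35.0   # one weak dimension drags the total down
print(round(flourishing_score(scores), 1))  # → 65.0
```

Whatever the real weighting, the structural idea is the same: a language model's behavior is graded dimension by dimension against a test bank, then rolled up into a single alignment number.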

Gelsinger calls the study “one of the most significant bodies of work around this question of values in decades.” It’s not yet clear how collecting information of this kind at such scale could ultimately affect the boundary between spiritual care and data commerce. One thing is certain, though: A rich vein of donation and funding could be at stake.

“Money’s already being spent here,” he said. “Donated capital in the US through the church is around $300 billion. Another couple hundred billion beyond that doesn’t go through the church. A lot of donors have capital out there, and we’re a generous nation in that regard. If you put the flourishing-­related economics on the table, now we’re talking about $1 trillion. That’s significant economic capacity. And if we make that capacity more efficient, that’s big.” In secular terms, it’s a customer data life cycle. In faith tech, it could be a conversion funnel—one designed not only to save souls, but to shape them. 

One of Gloo’s most visible partnerships was between 2022 and 2023 with the nonprofit He Gets Us, which ran a billion-dollar media campaign aimed at rebranding Jesus for a modern audience. The project underlined that while Gloo presents its services as tools for connection and support, their core functionality involves collecting and analyzing large amounts of congregational data. When viewers who saw the ads on social media or YouTube clicked through, they landed on prayer request forms, quizzes, and church match tools, all designed to gather personal details. Gloo then layered this raw data over Barna’s decades of behavioral research, turning simple inputs—email, location, stated interests—into what the company presented as multidimensional spiritual profiles. The final product offered a level of granularity no single congregation could achieve on its own.  

Though Gloo still lists He Gets Us on its platform, the nonprofit Come Near, which has since taken over the campaign, says it has terminated Gloo’s involvement. Still, He Gets Us led to one of Gloo’s most prized relationships by sparking interest from the African Methodist Episcopal Zion Church, a 229-year-old denomination with deep historical roots in the abolitionist and civil rights movements. In 2023, the church formalized a partnership with Gloo, and in late 2024 it announced that all 1,600 of its US congregations—representing roughly 1.5 million members—would begin using the company’s State of Your Church dashboard.

In a 2024 press release issued by Gloo, AME Zion acknowledged that while the denomination had long tracked traditional metrics like membership growth, Sunday turnout, and financial giving, it had limited visibility into the deeper health of its communities.

“Until now, we’ve lacked the insight to understand how church culture, people, and congregations are truly doing,” said the Reverend J. Elvin Sadler, the denomination’s general secretary-auditor. “The State of Your Church dashboards will give us a better sense of the spirit and language of the culture (ethos), and powerful new tools to put in the hands of every pastor.”

The rollout marked the first time a major US denomination had deployed Gloo’s framework at scale. For Gloo, the partnership unlocked a real-time, longitudinal data stream from a nationwide religious network, something the company had never had before. It not only validated Gloo’s vision of data-driven ministry but also positioned AME Zion as what the company hopes will be a live test case, persuading other denominations to follow suit.

The digital supply chain

The digital infrastructure of modern churches often begins with intimacy: a prayer request, a small-group sign-up, a livestream viewed in a moment of loneliness. But beneath these pastoral touchpoints lies a sophisticated pipeline that increasingly mirrors the attention-economy engines of Silicon Valley.

Charles Kriel, a filmmaker who formerly served as a special advisor to the UK Parliament on disinformation, data, and addictive technology, has particular insight into that connection. Kriel has been working for over a decade on issues related to preserving democracy and countering digital surveillance. He helped write the UK’s Online Safety Act, joining forces with many collaborators, including the Nobel Peace Prize–­winning journalist Maria Ressa and former UK tech minister Damian Collins, in an attempt to rein in Big Tech in the late 2010s.

His 2020 documentary film, People You May Know, investigated how data firms like Gloo and their partners harvest intimate personal information from churchgoers to build psychographic profiles, highlighting how this sensitive data is commodified and raising questions about its potential downstream uses.

“Listen, any church with an app? They probably didn’t build that. It’s white label,” Kriel says, referring to services produced by one company and rebranded by another. “And the people who sold it to them are collecting data.”

Many churches now operate within a layered digital environment, where first-party data collected inside the church is combined with third-party consumer data and psychographic segmentation before being fed into predictive systems. These systems may suggest sermons people might want to view online, match members with small groups, or trigger outreach when engagement drops. 


In some cases, monitoring can even take the form of biometric surveillance.

In 2014, an Israeli security-tech veteran named Moshe Greenshpan brought airport-grade facial recognition into church entryways. Face-Six, the surveillance suite from the company he founded in 2012, already protected banks and hospitals; its most provocative offshoot, FA6 Events (also known as “Churchix”), repurposes this technology for places of worship.

Greenshpan claims he didn’t originally set out to sell to churches. But over time, as he became increasingly aware of the market, he built FA6 Events as a bespoke solution for them. Today, Greenshpan says, it’s in use at over 200 churches worldwide, nearly half of them in the US.

In practice, FA6 transforms every entryway into a biometric checkpoint: an instant headcount, a security sweep, and a digital ledger of attendance, all incorporated into the familiar routine of Sunday worship. 

When someone steps into an FA6-equipped place of worship, a discreet camera mounted at eye level springs to life. Behind the scenes, each captured image is run through a lightning-fast face detector that looks at the whole face. The subject’s cropped face is then aligned, resized, and rotated so the eyes sit on a perfect horizontal line before being fed into a compact neural network. 

“To the best of my knowledge, no church notifies its congregants that it’s using facial recognition.”

Moshe Greenshpan, Israeli security-tech veteran

This onboard neural network distills the features of a person’s face into a unique digital signature called an embedding, allowing for rapid identification. These embeddings are compared with thousands of others already in the church’s local database, each one tagged with data points like a name, a membership role, or even a flag designating inclusion in an internal watch list. If the match is strong enough, the system makes an identification and records the person’s presence on the church’s secure server.

A congregation can pull full attendance logs, time-stamped entry records, and—critically—alerts whenever someone on a watch list walks through the doors. In this context, a watch list is simply a roster of photos, and sometimes names, of individuals a church has been asked (or elected) to screen out: past disruptors, those subject to trespass or restraining orders, even registered sex offenders. Once that list is uploaded into Churchix, the system instantly flags any match on arrival, pinging security teams or usher staff in real time. Some churches lean on it to spot longtime members who’ve slipped off the radar and trigger pastoral check-ins; others use it as a hard barrier, automatically denying entry to anyone on their locally maintained list.
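The matching step described above—comparing a fresh embedding against a locally stored gallery and flagging watch-list hits—can be sketched generically. This is not Churchix's actual code; the cosine-similarity metric, the threshold, and the data layout are all standard assumptions about how embedding matchers of this kind work:

```python
import math

def cosine_similarity(a, b):
    """Standard cosine similarity between two face embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identify(probe, gallery, threshold=0.8):
    """Return (name, on_watch_list) for the best gallery match above
    the threshold, or (None, False) if nobody matches.
    `gallery` maps name -> (embedding, on_watch_list flag)."""
    best_name, best_sim = None, threshold
    for name, (emb, _flagged) in gallery.items():
        sim = cosine_similarity(probe, emb)
        if sim > best_sim:
            best_name, best_sim = name, sim
    if best_name is None:
        return None, False
    return best_name, gallery[best_name][1]

# Tiny 3-dimensional toy gallery; real embeddings have hundreds of dims.
gallery = {
    "member_a":  ([1.0, 0.0, 0.0], False),
    "flagged_b": ([0.0, 1.0, 0.0], True),   # on the watch list
}
print(identify([0.99, 0.05, 0.0], gallery))  # → ('member_a', False)
```

A watch-list alert is then just the second element of that tuple coming back `True`—which is why a roster of photos, once enrolled, turns every arrival into an automatic screening decision.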

None of this data is sent to the cloud, though Greenshpan says the company is actively working on a cloud-based application. For now, all face templates and logs are stored locally on church-owned hardware, encrypted so they can’t be read if someone gains unauthorized access. 

Churches can export data from Churchix, he says, but the underlying facial templates remain on premises. 

Still, Greenshpan admits, robust technical safeguards do not equal transparency.

“To the best of my knowledge,” he says, “no church notifies its congregants that it’s using facial recognition.”


If the tools sound invasive, the logic behind them is simple: The more the system knows about you, the more precisely it can intervene.

“Every new member of the community within a 20-mile radius—whatever area you choose—we’ll send them a flier inviting them to your church,” Gloo’s Gelsinger says. 

It’s a tech-powered revival of the casserole ministry. The system pings the church when someone new moves in—“so someone can drop off cookies or lasagna when there’s a newborn in the neighborhood,” he says. “Or just say ‘Hey, welcome. We’re here.’”

Gloo’s back end automates follow-up, too: As soon as a pastor steps down from the pulpit after delivering a sermon, it can be translated into five languages, broken into snippets for small-group study, and repackaged into a draft discussion guide—ready within the hour.

Gelsinger sees the same approach extending to addiction recovery ministries. “We can connect other databases to help churches with recovery centers reach people more effectively,” he says. 

But the data doesn’t stay within the congregation. It flows through customer relationship management (CRM) systems, application programming interfaces, cloud servers, vendor partnerships, and analytics firms. Some of it is used internally in efforts to increase engagement; the rest is repackaged as “insights” and resold to the wider faith-tech marketplace—and sometimes even to networks that target political ads.

“We measured prayer requests. Call it crazy. But it was like, ‘We’re sitting on mounds of information that could help us steward our people.’”

Matt Engel, Gloo

 “There is a very specific thing that happens when churches become clients of Gloo,” says Brent Allpress, an academic based in Melbourne, Australia, who was a key researcher on People You May Know. Gloo gets access to the client church’s databases, he says, and the church “is strongly encouraged to share that data. And Gloo has a mechanism to just hoover that data straight up into their silo.” 

This process doesn’t happen automatically; the church must opt in by pushing those files or connecting its church-management system’s database to Gloo via API. Once it’s uploaded, however, all that first-party information lands in Gloo’s analytics engine, ready to be processed and shared with any downstream tools or partners covered by the church’s initial consent to the terms and conditions of its contract with the company.

“There are religious leaders at the mid and local level who think the use of data is good. They’re using data to identify people in need. Addicts, the grieving,” says Kriel. “And then you have tech people running around misquoting the Bible as justification for their data harvest.” 

Matt Engel, who held the title executive director of ministry innovation at Gloo when Kriel’s film was made, acknowledged the extent of this harvest in the opening scene.  

“We measured prayer requests. Call it crazy. But it was like, ‘We’re sitting on mounds of information that could help us steward our people,’” he said in an on-camera interview. 

According to Engel—whom Gloo would not make available for public comment—uploading data from anonymous prayer requests to the cloud was Gloo’s first use case.

Powering third-party initiatives

But Gloo’s data infrastructure doesn’t end with its own platform; it also powers third-party initiatives.

Communio, a Christian nonprofit focused on marriage and family, used Gloo’s data infrastructure to launch “Communio Insights,” a stripped-down version of Gloo’s full analytics platform. 

Unlike Gloo Insights, which provides access to hundreds of demographic, behavioral, health, and psychographic filters, Communio Insights focuses narrowly on relational metrics—indicators of marriage and family stress, involvement in small groups at church—and basic demographic data. 

At the heart of its playbook is a simple, if jarring, analogy.

“If you sell consumer products of different sorts, you’re trying to figure out good ways to market that. And there’s no better product, really, than the gospel,” J.P. De Gance, the founder and president of Communio, said in People You May Know.

Communio taps Gloo’s analytics engine—leveraging credit histories, purchasing behavior, public voter rolls, and the database compiled by i360, an analytics company linked to the conservative Koch network—to pinpoint unchurched couples in key regions who are at risk of relationship strain. It then runs microtargeted outreach (using direct mail, text messaging, email, and Facebook Custom Audiences, a tool that lets organizations find and target people who have interacted with them), collecting contact info and survey responses from those who engage. All responses funnel back into Gloo’s platform, where churches monitor attendance, small-group participation, baptisms, and donations to evaluate the campaign’s impact.

church window over the parishioners has rays of light emanating from a stained glass eye

MICHAEL BYERS

Investigative research by Allpress reveals significant concerns around these operations.  

In 2015, two nonprofits—the Relationship Enrichment Collaborative (REC), staffed by former Gloo executives, and its successor, the Culture of Freedom Initiative (now Communio), controlled by the Koch-affiliated nonprofit Philanthropy Roundtable—funded the development of the original Insights platform. Between 2015 and 2017, REC paid approximately $1.3 million to Gloo and $535,000 to Cambridge Analytica, the consulting firm notorious for harvesting Facebook users’ personal data and using it for political targeting before the 2016 election, to build and refine psychographic models and a bespoke digital ministry app powering Gloo’s outreach tools. Following REC’s closure, the Culture of Freedom Initiative invested another $375,000 in Gloo and $128,225 in Cambridge Analytica. 

REC’s own 2016 IRS filing describes the work in terse detail: “Provide[d] digital micro-targeted marketing for churches and non-profit champions … using predictive modeling and centralized data analytics we help send the right message to the right couple at the right time based upon their desires and behaviors.”

On top of all this documented research, Allpress exposed another critical issue: the explicit use of sensitive health-care data. 

He found that Gloo Insights combines over 2,000 data points—drawing on everything from nationwide credit and purchasing histories to church management records and Christian psychographic surveys—with filters that make it possible to identify people with health issues such as depression, anxiety, and grief. The result: Facebook Custom Audiences built to zero in on vulnerable individuals via targeted ads.

These ads invite people suffering from mental-health conditions into church counseling groups “as a pathway to conversion,” Allpress says.

These targeted outreach efforts were piloted in cities including Phoenix, Arizona; Dayton, Ohio; and Jacksonville, Florida. Reportedly, as many as 80% of those contacted responded positively, with those who joined a church as new members contributing financially at above-­average rates. In short, Allpress found that pastoral tools had covertly exploited mental-health vulnerabilities and relationship crises for outreach that blurred the lines separating pastoral care, commerce, and implicit political objectives.

The legal and ethical vacuum

Developers of this technology earnestly claim that the systems are designed to enhance care, not exploit people’s need for it. They’re described as ways to tailor support to individual needs, improve follow-up, and help churches provide timely resources. But experts say that without robust data governance or transparency around how sensitive information is used and retained, well-­intentioned pastoral technology could slide into surveillance.

In practice, these systems have already been used to surveil and segment congregations. Internal demos and client testimonials confirm that Gloo, for example, uses “grief” as an explicit data point: Churches run campaigns aimed at people flagged for recent bereavement, depression, or anxiety, funneling them into support groups and identifying them for pastoral check-ins. 

Examining Gloo’s terms and conditions reveals further security and transparency concerns. From nearly a dozen documents, ranging from “click-through” terms for interactive services to master service agreements at the enterprise level, Gloo stitches together a remarkably consistent data-­governance framework. Limits are imposed on any legal action by individual congregants, for example. The click-through agreement corrals users into binding arbitration, bars any class action suits or jury trials, and locks all disputes into New York or Colorado courts, where arbitration is particularly favored over traditional litigation. Meanwhile, its privacy statement carves out broad exceptions for service providers, data-­enrichment partners, and advertising affiliates, giving them carte blanche to use congregants’ data as they see fit. Crucially, Gloo expressly reserves the right to ingest “health and wellness information” provided via wellness assessments or when mental-health keywords appear in prayer requests. This is a highly sensitive category of information that, for health apps, is normally covered by stringent medical-privacy rules like HIPAA.

In other words, Gloo is protected by sprawling legal scaffolding, while churches and individual users give up nearly every right to litigate, question data practices, or take collective action. 

“We’re kind of in the Wild West in terms of the law,” says Adam Schwartz, the director of privacy litigation at the Electronic Frontier Foundation, the nonprofit watchdog that has spent years wrestling tech giants over data abuses and biometric overreach. 

In the United States, biometric surveillance like that used by growing numbers of churches inhabits a legal twilight zone where regulation is thin, patchy, and often toothless. Schwartz points to Illinois as a rare exception for its Biometric Information Privacy Act (BIPA), one of the nation’s strongest such laws. The statute applies to any organization that captures biometric identifiers—including retina or iris scans, fingerprints, voiceprints, hand scans, facial geometry, DNA, and other unique biological information. It requires entities to post clear data-collection policies, obtain explicit written consent, and limit how long such data is retained. Failure to comply can expose organizations to class action lawsuits and steep statutory damages—up to $5,000 per violation.

But beyond Illinois, protections quickly erode. Though Texas and Washington also have biometric privacy statutes, their bark is stronger than their bite. Efforts to replicate Illinois’s robust protections have been made in over a dozen states—but none have passed. As a result, in much of the country, any checks on biometric surveillance depend more on voluntary transparency and goodwill than any clear legal boundary.

“There is a real potential for information gathered about a person [to] be used against them in their life outside the church.”

Emily Tucker, Center on Privacy & Technology at Georgetown Law

That’s especially problematic in the church context, says Emily Tucker, executive director of the Center on Privacy & Technology at Georgetown Law, who attended divinity school before becoming a legal scholar. “The necessity of privacy for the possibility of finding personal relationship to the divine—for engaging in rituals of worship, for prayer and penitence, for contemplation and spiritual struggle—is a fundamental principle across almost every religious tradition,” she says. “Imposing a surveillance architecture over the faith community interferes radically with the possibility of that privacy, which is necessary for the creation of sacred space.”

Tucker researches the intersection of surveillance, civil rights, and marginalized communities. She warns that the personal data being collected through faith-tech platforms is far from secure: “Because corporate data practices are so poorly regulated in this country, there are very few limitations on what companies that take your data can subsequently do with it.”

To Tucker, the risks of these platforms outweigh the rewards—especially when biometrics and data collected in a sacred setting could follow people into their daily lives. “Many religious institutions are extremely large and often perform many functions in a given community besides providing a space for worship,” she says. “Many churches, for example, are also employers or providers of social services. There is a real potential for information gathered about a person in their associational activities as a member of a church to then be used against them in their life outside the church.”  

She points to government dragnet surveillance, the use of IRS data in immigration enforcement, and the vulnerability of undocumented congregants as examples of how faith-tech data could be weaponized beyond its intended use: “Religious institutions are putting the safety of those members at risk by adopting this kind of surveillance technology, which exposes so much personal information to potential abuse and misuse.” 

Schwartz, too, says that any perceived benefits must be weighed carefully against the potential harms, especially when sensitive data and vulnerable communities are involved.

“Churches: Before doing this, you ought to consider the downside, because it can hurt your congregants,” he says.  

With guardrails still scarce, though, faith-tech pioneers and church leaders are peering ever more deeply into congregants’ lives. Until meaningful oversight arrives, the faithful remain exposed to a gaze they never fully invited and scarcely understand.

In April, Gelsinger took the stage at a sold-out Missional AI Summit, a flagship event for Christian technologists that this year was organized around the theme “AI Collision: Shaping the Future Together.” Over 500 pastors, engineers, ethicists, and AI developers filled the hall, flashing badges with logos from Google DeepMind, Meta, McKinsey, and Gloo.

“We want to be part of a broader community … so that we’re influential in creating flourishing AI, technology as a force for good, AI that truly embeds the values that we care about,” Gelsinger said at the summit. He likened such tools to pivotal technologies in Christian history: the Roman roads that carried the gospel across the empire, or Martin Luther’s printing press, which shattered monolithic control over scripture. A Gloo spokesperson later confirmed that one of the company’s goals is to shape AI specifically to “contribute to the flourishing of people.”

“We’re going to see AI become just like the internet,” Gelsinger said. “Every single interaction will be infused with AI capabilities.” 

He says Gloo is already mining data across the spectrum of human experience to fuel ever more powerful tools.

“With AI, computers adapt to us. We talk to them; they hear us; they see us for the first time,” he said. “And now they are becoming a user interface that fits with humanity.”

Whether these technologies ultimately deepen pastoral care or erode personal privacy may hinge on decisions made today about transparency, consent, and accountability. Yet the pace of adoption already outstrips the development of ethical guardrails. Now, one of the questions lingering in the air is not whether AI, facial recognition, and other emerging technologies can serve the church, but how deeply they can be woven into its nervous system, forming a new operating system for modern Christianity and its moral infrastructure. 

“It’s like standing on the beach watching a tsunami in slow motion,” Kriel says. 

Gelsinger sees it differently.  

“You and I both need to come to the same position, like Isaiah did,” he told the crowd at the Missional AI Summit. “‘Here am I, Lord. Send me.’ Send me, send us, that we can be shaping technology as a force for good, that we could grab this moment in time.” 

Alex Ashley is a journalist whose reporting has appeared in Rolling Stone, the Atlantic, NPR, and other national outlets.

Material Cultures looks to the past to build the future

Despite decades of green certifications, better material sourcing, and the use of more sustainable materials such as mass timber, the built environment is still responsible for a third of global emissions. According to a 2024 UN report, the building sector has fallen “significantly behind on progress” toward becoming more sustainable. Changing the way we erect and operate buildings remains key to even approaching climate goals. 

“As soon as you set out and do something differently in construction, you are constantly bumping your head against the wall,” says Paloma Gormley, a director of the London-based design and research nonprofit Material Cultures. “You can either stop there or take a step back and try to find a way around it.”

Gormley has been finding a “way around it” by systematically exploring how tradition can be harnessed in new ways to repair what she has dubbed the “oil vernacular”—the contemporary building system shaped not by local, natural materials but by global commodities and plastic products made largely from fossil fuels.

Though she grew up in a household rich in art and design—she’s the daughter of the famed British sculptor Antony Gormley—she’s quick to say she’s far from a brilliant maker and more of a “bodger,” a term that means someone who does work that’s botched or shoddy. 

Improviser or DIYer might be more accurate. One of her first bits of architecture was a makeshift home built on the back of a truck she used to tour around England one summer in her 20s. The work of her first firm, Practice Architecture, which she cofounded after graduating from the University of Cambridge in 2009, was informed by London’s DIY subcultures and informal art spaces. She says these scenes “existed in the margins and cracks between things, but in which a lot felt possible.” 

Frank’s Café, a bar and restaurant she built in 2009 on the roof of a parking garage in Peckham that hosted a sculpture park, was constructed from ratchet straps, scaffold boards, and castoffs she’d source from lumberyards and transport on the roof rack of an old Volvo. It was the first of a series of cultural and social spaces she and her partner Lettice Drake created using materials both low-budget and local. 

Material Cultures grew out of connections Gormley made while she was teaching at London Metropolitan University. In 2019, she was a teaching assistant alongside Summer Islam, a friend of George Massoud; Islam and Massoud are both architects, partners in the firm Study Abroad, and advocates of more socially conscious design. The trio had a shared interest in sustainability and building practices, as well as a frustration with the architecture world’s focus on improving sustainability through high-tech design. Instead of using modern methods to build more efficient commercial and residential spaces from carbon-intensive materials like steel, they thought, why not revisit first principles? Build with locally sourced, natural materials and you don’t have to worry about making up a carbon deficit in the first place. 

The frame of Clearfell House was built with ash and larch, two species of wood vulnerable to climate change.
HENRY WOIDE/COURTESY OF MATERIAL CULTURES
Flat House was built with pressed panels of hemp grown in the fields surrounding the home.
OSKAR PROCTOR

As many other practitioners look to artificial intelligence and other high-tech approaches to building, Material Cultures has always focused on sustainability, finding creative ways to turn local materials into new buildings. And the three of them don’t just design and build. They team up with traditional craft experts to explore the potential of materials like reeds and clay, and techniques like thatching and weaving. 

More than any one project, Gormley, Islam, and Massoud are perhaps best known for their meditation on the subject of how architects work. Published in 2022, Material Reform: Building for a Post-Carbon Future is a pocket-size book that drills into materials and methodologies to suggest a more thoughtful, ecological architecture.

“There is a huge amount of technological knowledge and intelligence in historic, traditional, vernacular ways of doing things that’s been evolved over millennia, not just the last 100 years,” Gormley says. “We’re really about trying to tap into that.”

One of Material Cultures’ early works, Flat House, a home built in 2019 in Cambridgeshire, England, with pressed panels of hemp grown in the surrounding fields, was meant as an exploration of what kind of building could be made from what a single farm could produce. Gormley was there from the planting of the seeds to the harvesting of the hemp plants to the completion of construction. 

“It was incredible understanding that buildings could be part of these natural cycles,” she says. 

Clearfell House, a timber A-frame cabin tucked into a clearing in the Dalby Forest in North Yorkshire, England, exemplifies the firm’s obsession with elevating humble materials and vernacular techniques. Every square inch of the house, which was finished in late 2024 as part of a construction class Material Cultures’ architects taught at Central Saint Martins design school in London, emerged from extensive research into British timber, the climate crisis, and how forestry is changing. That meant making the frame from local ash and larch, two species of wood specifically chosen because they were affected by climate change, and avoiding the use of factory-farmed lumber. The modular system used for the structure was made to be replicated at scale.  

“I find it rare that architecture offices have such a clear framing and mission,” says Andreas Lang, head of the Saint Martins architecture program. “Emerging practices often become client-dependent. For [Material Cultures], the client is maybe the planet.”

Material Cultures fits in with the boom in popularity for more sustainable materials, waste-minimizing construction, and panelized building using straw and hemp, says Michael Burchert, a German expert on decarbonized buildings. “People are grabbing the good stuff from the hippies at the moment,” he says. Regulation has started to follow: France recently mandated that new public buildings be constructed with 50% timber or other biological material, and Denmark’s construction sector has embarked on a project, Pathways to Biobased Construction, to promote use of nature-based products in new building.

Burchert appreciates the way the firm melds theory and practice. “We have academia, and academia is full of papers,” he says. “We need makers.” 

Over the last several years, Gormley and her cofounders have developed a portfolio of work that rethinks construction supply chains and stays grounded in social impact. The just-finished Wolves Lane Centre, a $2.4 million community center in North London run by a pair of groups that work on food and racial justice, didn’t just reflect Material Cultures’ typical focus on bio-based materials—in this case, local straw, lime, and timber. 

For Wolves Lane Centre, expert plasterers and specialists in straw-bale construction were brought in so their processes could be shared and learned.
LUKE O’DONOVAN/COURTESY OF MATERIAL CULTURES

It was a project of self-determination and learning, says Gormley. Expert plasterers and specialists in straw-bale construction were brought in so the processes could be shared and learned. Introducing this kind of teaching into the construction process was quite time-consuming and, Gormley says, was as expensive as using contemporary techniques, if not more so. But the added value was worth it. 

“The people who become the custodians of these buildings then have the skills to maintain and repair, as well as evolve, the site over time,” she says. 

As Burchert puts it, science fiction tends to show a future built of concrete and steel; Material Cultures instead offers something natural, communal, and innovative, a needed paradigm shift. And it’s increasingly working on a larger scale. The Phoenix, a forthcoming low-carbon development in the southern English city of Lewes that’s being developed by a former managing director for Greenpeace, will use the firm’s designs for 70 of its 700 planned homes. 

The project Gormley may be most excited about is an interdisciplinary school Material Cultures is creating north of London: a 500-acre former farm in Essex that will be a living laboratory bridging the firm’s work in supply chains, materials science, and construction. The rural site for the project, which has the working title Land Lab, was deliberately chosen as a place where those connections would be inherent, Gormley says. 

The Essex project advances the firm’s larger mission. As Gormley, Massoud, and Islam advise in their book, “Hold a vision of a radically different world in your mind while continuing to act in the world as it is, persisting in the project of making changes that are within the scope of action.” 

Patrick Sisson, a Chicago expat living in Los Angeles, covers technology and urbanism.

The Download: churches in the age of AI, and how to run an LLM at home

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How churches use data and AI as engines of surveillance

On a Sunday morning in a Midwestern megachurch, worshippers step through sliding glass doors into a bustling lobby—unaware they’ve just passed through a gauntlet of biometric surveillance. High-speed cameras snap multiple face “probes” per second, before passing the results to a local neural network that distills these images into digital fingerprints. Before people find their seats, they are matched against an on-premises database—tagged with names, membership tiers, and watch-list flags—that’s stored behind the church’s firewall.

This hypothetical scene reflects real capabilities increasingly woven into places of worship nationwide, where spiritual care and surveillance converge in ways few congregants ever realize. 

Where Big Tech’s rationalist ethos and evangelical spirituality once mixed like oil and holy water, now they’re combining to redraw the contours of community and pastoral power in modern spiritual life. Read the full story.

—Alex Ashley

This story is from our forthcoming print issue, which is all about security. If you haven’t already, subscribe now to receive future issues once they land.

MIT Technology Review Narrated: How to run an LLM on your laptop

For people who are concerned about privacy, want to break free from the control of the big LLM companies, or just enjoy tinkering, local models offer a compelling alternative to ChatGPT and its web-based peers. Here’s how to get started running one from the safety and comfort of your own computer.

This is our latest story to be turned into a MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 US tech stocks are sliding over fears the AI bubble may be about to burst
After an MIT report found that the vast majority of organizations are getting zero return on their AI investments. (FT $)
+ Even Sam Altman thinks the current hype is unsustainable. (CNBC)

2 Meta is reportedly weighing up downsizing its AI division
It wants to split it into four groups—and layoffs could be imminent. (NYT $)
+ What’s happening with the metaverse, then? (NY Mag $)
+ Meta is desperately hoping its AI hiring spree will pay off. (Bloomberg $)

3 The American Academy of Pediatrics is defying RFK Jr
By releasing its own vaccination schedule for children. (Ars Technica)
+ It’s breaking with current CDC recommendations. (CNN)
+ Why US federal health agencies are abandoning mRNA vaccines. (MIT Technology Review)

4 Elon Musk’s America Party isn’t going so well
He’s said to be refocusing his attention on his companies instead. (WSJ $)

5 The White House has a TikTok account now
The very same TikTok that Donald Trump once tried to ban. (WP $)
+ What appears to have changed Congress’ stance? (The Verge)
+ There’s still no sign of a sale on the horizon. (The Guardian)

6 Nvidia is working on another chip for China
One that’s faster and more powerful than its current H20 model. (Reuters)

7 How AGI preppers are bracing themselves for an AI apocalypse
Some are spending all their retirement savings along the way. (Insider $)

8 Demand for critical minerals is soaring
Is there a less-invasive way to mine them? (New Scientist $)
+ The race to produce rare earth elements. (MIT Technology Review)

9 What’s an automaker CEO to do?
In our increasingly topsy-turvy world, many of them feel like they can’t win. (Wired $)

10 This mattress company is building an AI agent for sleep
Eight Sleep’s agent could simulate digital twins of a user’s sleep habits. (The Information $)
+ I tried to hack my insomnia with technology. Here’s what worked. (MIT Technology Review)

Quote of the day

“Too many cooks, too many kitchens.”

—Tech investor M.G. Siegler wryly comments on the news Meta is planning to restructure its AI division in a post on Bluesky.

One more thing

Responsible AI has a burnout problem

Margaret Mitchell had been working at Google for two years before she realized she needed a break. Only after she spoke with a therapist did she understand the problem: she was burnt out.

Mitchell, who now works as chief ethics scientist at the AI startup Hugging Face, is far from alone in her experience. Burnout is becoming increasingly common in responsible AI teams.

All the practitioners MIT Technology Review interviewed spoke enthusiastically about their work: it is fueled by passion, a sense of urgency, and the satisfaction of building solutions for real problems. But that sense of mission can be overwhelming without the right support. Read the full story.

—Melissa Heikkilä

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Check out Wes Anderson’s quirky love letter to New York 🗽
+ Uh-oh: beware the rise of the groomzilla.
+ The Rocky Horror Picture Show is 50 years old, if you can believe it.
+ Whisk me away to Lake George ASAP.

NASA’s new AI model can predict when a solar storm may strike

NASA and IBM have released a new open-source machine learning model to help scientists better understand and predict the physics and weather patterns of the sun. Surya, trained on over a decade’s worth of NASA solar data, should help give scientists an early warning when a dangerous solar flare is likely to hit Earth.

Solar storms occur when the sun erupts, flinging energy and particles into space. These eruptions can produce solar flares and slower-moving coronal mass ejections that disrupt radio signals, flip computer bits onboard satellites, and endanger astronauts with bursts of radiation. 

There’s no way to prevent these sorts of effects, but being able to predict when a large solar flare will occur could let people work around them. However, as Louise Harra, an astrophysicist at ETH Zurich, puts it, “when it erupts is always the sticking point.”

Scientists can easily tell from an image of the sun if there will be a solar flare in the near future, says Harra, who did not work on Surya. But knowing the exact timing and strength of a flare is much harder, she says. That’s a problem because a flare’s size can make the difference between small regional radio blackouts every few weeks (which can still be disruptive) or a devastating solar superstorm that would cause satellites to fall out of orbit and electrical grids to fail. Some solar scientists believe we are overdue for a solar superstorm of this magnitude.

While machine learning has been used to study solar weather events before, the researchers behind Surya hope the quality and sheer scale of their data will help it predict a wider range of events more accurately. 

The model’s training data came from NASA’s Solar Dynamics Observatory, which collects pictures of the sun at many different wavelengths of light simultaneously. That made for a dataset of over 250 terabytes in total.

Early testing of Surya showed it could predict some solar flares two hours in advance. “It can predict the solar flare’s shape, the position in the sun, the intensity,” says Juan Bernabe-Moreno, an AI researcher at IBM who led the Surya project. Two hours may not be enough to protect against all the impacts a strong flare could have, but every moment counts. IBM claims in a blog post that this can as much as double the warning time currently possible with state-of-the-art methods, though exact reported lead times vary. This predictive power could also improve further, for example through fine-tuning or the addition of other data. 

According to Harra, the hidden patterns underlying events like solar flares are hard to understand from Earth. She says that while astrophysicists know the conditions that make these events happen, they still do not understand why they occur when they do. “It’s just those tiny destabilizations that we know happen, but we don’t know when,” says Harra. The promise of Surya lies in whether it can find the patterns underlying those destabilizations faster than any existing methods, buying us extra time.

However, Bernabe-Moreno is excited for the potential beyond predicting solar flares. He hopes to use Surya alongside previous models he worked on for IBM and NASA that predict weather here on Earth to better understand how solar storms and Earth weather are connected. “There is some evidence about solar weather influencing lightning, for example,” he says. “What are the cross effects, and where and how do you map the influence from one type of weather to the other?”

Because Surya is a foundation model, trained without a specialized job, NASA and IBM hope that it can find many patterns in the sun’s physics, much as general-purpose large language models like ChatGPT can take on many different tasks. They believe Surya could even enable new understandings about how other celestial bodies work. 

“Understanding the sun is a proxy for understanding many other stars,” Bernabe-Moreno says. “We look at the sun as a laboratory.”