EcomFuel Founder on 2026 Industry Trends

For years EcomFuel has surveyed its community of ecommerce merchants about their growth, margins, tactics, and more. The company released this year’s findings last week.

Founder Andrew Youderian recaps the report in this episode, addressing the state of ecommerce among 300 participating businesses.

Our entire audio is embedded below. The transcript is edited for length and clarity.

Eric Bandholz: Give us a rundown of what you do.

Andrew Youderian: I run a company and community called eComFuel. My background is in starting and operating ecommerce businesses. We have an online forum, events, reviews, and research.

Our “2026 Ecommerce Trends Report” is based on responses from 300 store owners — mostly seven, eight, and nine-figure brands — who answered 50 questions.

We ask about traffic, margins, Amazon, warehousing, AI, business models, tariffs, and more. We don’t track the number of merchants who have exited the industry. People join and leave our community every month for various reasons. When asked, some say they’re closing their business. Those closures peaked 12 to 18 months ago. I’m a little more optimistic about ecommerce for the next couple of years.

Going forward, successful brands will likely be smaller with loyal customers. They will make interesting products. They won’t grow as fast, but they’ll be much stickier and more durable in the long term.

The number of respondents in our report who manufacture products increased by 50% over the past three years. All other models were either flat or down. Respondents who resell products are largely unchanged. Private label sellers were down significantly. Drop shipping was down 50%. Merchants are adjusting to a new reality.

In 2017, about 20% of respondents’ total revenue came from Amazon. It subsequently spiked to about 28%. It’s now back to 20%, despite 63% selling on that marketplace.

I respect how Amazon built out its infrastructure for the long term. They’re not going anywhere, but the types of products they sell will likely be either very low-end or very high-end. They’ve lost the middle tier.

Bandholz: Have you tracked AI’s financial impact?

Youderian: For the trends report, we asked, “Have you meaningfully incorporated AI into your business?” Seventy-two percent of respondents said yes. The top four use cases were, in order, copywriting, images, analytics, and coding.

Certainly some merchants have dialed in AI and are seeing strong benefits. But most are still in the investment stage.

For example, EcomFuel has heavily invested in AI over the last year. We’ve built proprietary AI tools. But we’ve not seen great ROI from those efforts. That seems to be what’s happening for most ecommerce companies.

One of the most surprising findings in this year’s survey was the ages of AI adopters. Roughly 90% of respondents under 30 are using AI. But folks in their 30s are investing less than those in the 40- to 54-year-old cohort. Anecdotally, we’re seeing merchants build impressive in-house operational tools, and most are 40 or older.

Bandholz: Where can people join your community or reach out?

Youderian: Our site is eCommerceFuel.com. I’m on LinkedIn and X. I also host “The eComFuel Podcast.”

What I Learned About The Future Of Search And AI From Sundar Pichai’s Latest Interview via @sejournal, @marie_haynes

I really enjoyed this interview with Sundar Pichai by Stripe’s John Collison and investor Elad Gil.

Here are the five most interesting things I learned.

1. Search Will Still Exist In The Future, But Much Of It Will Be Agentic

Sundar was asked if agents would replace Search. He said:

“If I fast forward, a lot of what are just information-seeking queries will be agentic in Search. You’ll be completing tasks. You’ll have many threads running.”

He also said Search will change so that we think of it as an agent manager.

“It keeps evolving. Search will be an agent manager in which you’re doing a lot of things. I think, in some ways, you know, I use Antigravity today, and you know, you have a bunch of agents doing stuff, and I can see search doing versions of those things, and you’re getting a bunch of stuff done.”

He said that people do deep research in AI Mode, and it will soon be the norm to do long-running tasks. He also said that the form factor of devices will change.

2. Google Uses Antigravity Internally

Boy, do I love Google’s IDE and agent manager, Antigravity. I have built so many things with it, including my own RSS feed reader, a screenshot and annotation tool, workflows to publish things I write in a Google Doc to my WordPress site, and a bunch of tools to do agentic things with Google Search Console and Google Analytics 4 data. While I think Claude Cowork and Claude Code are incredible, I truly do prefer using Antigravity.

It turns out that Google makes good use of Antigravity internally. Except they don’t call it Antigravity. They call it “Jet Ski.”

Sundar said that Google DeepMind and Google’s software engineering groups use it:

“I can see groups, and in particular I would say GDM and some of the SWE groups really change their workflows. They are using, we call this for some strange reason, we have a different name internally than externally of the same product, but it’s Jet Ski internally which is Antigravity. You’re living on it, you’re living in an agent manager world. You have workflows, and you’re working in this new way.”

He also uses it himself.

“I would query in Antigravity, in our internal version of Antigravity. “Hey, we launched this thing. What did people think about this? Tell me the worst five things people are talking about?” and I type that. Now that brings it back. Has my life gotten easier? Yes. In the past I would have to spend a lot more time trying to get a sense for it. Now an AI agent is helping me in that journey.”

Also, just last week, the Google Search team started using Antigravity.

“Just last week we rolled it [Antigravity] out to the Search team. We’re constantly pushing that. In a large organization, I think change management is a hard aspect of this technology diffusing, which may be easy for a small company. You can quickly switch over.”

If you want to learn how to use Antigravity, I’ve created a full guide teaching you how it works, and how I use it to not only code, but create full agentic workflows that I actually use in my day-to-day work. It’s available in the paid part of my community, The Search Bar. And next Thursday, the Search Bar Pro crew is having an event where we’re going to split into two teams, Team Claude Code and Team Antigravity, and see who can build the better SEO tool.

I know it’s a bit of a pain to try and use something new in your workflows. But I thoroughly believe that those who learn how to use Antigravity today will have a big advantage as AI improves and things really start to take off.

3. Robotics Is Growing Fast

Sundar admitted that Google was previously too early to robotics. AI has become the missing ingredient for ideas conceived 10 to 15 years ago. The Gemini Robotics models have reached state-of-the-art status for spatial reasoning. Google has renewed partnerships with Boston Dynamics, Agile, and a few other companies.

Most interesting to me was the discussion on Wing for drone delivery.

“I think we are scaling up Wing where in some reasonable time period, 40 million Americans will have access to a Wing delivery service. I’m not talking years out or something like that.”

When asked if Google was going to do more to build hardware, Sundar said having first-party hardware for robotics and AI would be important.

“I think we’d keep a very open mind. My lesson from Waymo and on the AI side with TPUs, et cetera, I need to really push the curve well, particularly in areas where you have safety, regulatory, everything. You want the first hand experience of the product feedback cycle. I think having first party hardware will end up being very important.”

4. Agentic OpenClaw-Like Systems Are The Future

There’s a reason why OpenClaw (initially Clawdbot) went crazy viral a few weeks ago. I still haven’t set up an OpenClaw system because I don’t feel I know enough about security to make this system safe.

When Sundar was asked if something OpenClaw-like was coming from Google, he said he thought it was the future.

“I think you want to give users capability where you have persistent long-running tasks in a reliable, secure way. You have to think through things like identity, access, et cetera. But I think that’s the future. That’s the agentic future. And bringing that for consumers is a bit of an exciting frontier we are looking at. This is one of mine too.

I think effectively the consumer interfaces are going to have full coding models underneath, and the right harnesses and the right skills and the ability to persist and run somewhere security in the cloud, locally and in the cloud. All those primitives are coming together.

Today I feel like there’s 1% of the world, maybe not 1%, 0.1% of the world who’s living this future. They are building stuff for themselves, but bringing that to mass adoption. Yes. It is a very exciting frontier I think.”

As I am writing this, Google DeepMind has just tweeted out instructions for using their new local open model Gemma 4 with OpenClaw. A new way of communicating with our machines is starting to unfold!

5. AI And AI Agents Are Going To Improve Dramatically In 2027

Sundar was asked when agentic systems would be able to work fully with no human in the loop. He said twice that 2027 was likely to be a big year.

“I definitely expect in some of these areas ’27 to be an important inflection point for certain things. Even the people doing it, that is the workflow through which they would produce it. Maybe for a while you would check it in the conventional way, but you switch over, a crossover. But I expect ’27 to be a big year in which some of those shifts happen pretty profoundly.”

The interview finished with Sundar talking about what he was most excited about. He did mention that putting data centers in space was very exciting, but this last bit was super interesting.

“I literally spent time yesterday with someone who was explaining some improvement in post-training, which is one person talking through the improvement they are doing. Listening to it, I’m like, “Oh, it’s going to really show up as a nice jump.” That’s the constant power of this moment. All of that, I don’t want to be specific about the second one, but we’ll publish it one day I’m sure.”

It sounds to me like he is talking about agentic self-improvement.

We are currently learning how to have AI build and do things for us. I recall first learning to code with ChatGPT as a partner. It would give me code to paste into VS Code. Then I’d run it and paste the errors back into ChatGPT. We went back and forth until something actually worked. I felt like the unnecessary part of that process, just the copying-and-pasting robot. Sure enough, today’s systems like Antigravity, Claude Code, and ChatGPT Codex run the code, check the errors, and fix things up without much need for human involvement.

It makes sense to me that the next step in this process is to have AI systems learn to improve their usefulness without us having to prompt them specifically. I expect that when this happens, we will see even faster progression of AI capabilities and usefulness!

More Resources:


Read Marie’s newsletter, AI News You Can Use. Subscribe now.


Featured Image: isasoulart/Shutterstock

Core Update Done, GSC Bug Fixed, Mueller On Gurus – SEO Pulse via @sejournal, @MattGSouthern

Welcome to this week’s Pulse: the updates below affect when you can start analyzing core update performance, how much you can trust your impression data, and what Google’s CEO thinks AI will do to software security.

Here’s what matters for you and your work.

March 2026 Core Update Is Complete

Google’s March 2026 core update finished rolling out on April 8. The Google Search Status Dashboard confirms the completion.

Key facts: The rollout took 12 days, starting March 27 and finishing April 8. That’s within Google’s two-week estimate and faster than the December update, which took 18 days. Google called it “a regular update” and didn’t publish a companion blog post or new guidance. This was the third confirmed update in roughly five weeks, following the February Discover core update and the March spam update.

Why This Matters

You can now run a clean before-and-after comparison in Search Console. Google recommends waiting at least one full week after completion before drawing conclusions, which means mid-April is the earliest window for reliable analysis.
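If it helps to make that comparison concrete, here is a minimal sketch in Python, assuming you have exported daily Search Console performance data to a CSV with date, clicks, and impressions columns (the filename and column names below are placeholders, not a Google export format):

```python
import pandas as pd

# Hypothetical daily export from Search Console's Performance report.
# Replace the filename and column names with whatever your export uses.
df = pd.read_csv("gsc_performance_daily.csv", parse_dates=["date"])

# Baseline window: days before the rollout began on March 27.
before = df[df["date"] < "2026-03-27"]

# Post-update window: days after the rollout completed on April 8.
# Per Google's advice, wait until at least a week of post-update data exists.
after = df[df["date"] > "2026-04-08"]

for label, window in [("before update", before), ("after update", after)]:
    print(
        f"{label}: avg daily clicks={window['clicks'].mean():.0f}, "
        f"avg daily impressions={window['impressions'].mean():.0f}"
    )
```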

A ranking drop after a core update does not mean your site violated a policy. Core updates reassess content quality across the web. Some pages move up while others move down. Roger Montti, writing for Search Engine Journal, suggested the spam-then-core sequencing may not have been a coincidence, describing it as clearing the table before recalibrating quality signals.

What SEO Professionals Are Saying

Lily Ray, VP, SEO & AI Search at Amsive, noted on X that YouTube has gained visibility since the core update began rolling out:

“Just checked a client that ranked in AI Overviews last week and now the top 4 links in AI Overviews are all YouTube.

Let me guess: the core update was another way for Google to boost YouTube, like it did with the Discover core update.”

Aleyda Solís, SEO consultant and founder of Orainti, is running a poll on LinkedIn asking how the update impacted people’s websites. Currently, most respondents say the impact of the update was either positive or not noticeable.

Read our full coverage: Google Confirms March 2026 Core Update Is Complete

Google Fixes Search Console Bug That Inflated Impressions For Nearly A Year

Google confirmed a logging error in Search Console that over-reported impressions starting May 13, 2025. The company updated its Data Anomalies page on April 3 to acknowledge the issue.

Key facts: The bug ran for nearly 11 months before Google publicly acknowledged it. Clicks and other metrics were not affected. Google said the fix will roll out over the next several weeks, and sites may see a decrease in reported impressions during that period.

Why This Matters

If your impression numbers have looked unusually healthy since last May, this bug is likely part of the reason. The correction will change what your Performance report shows, but it will not change how your site actually performed in search. The impressions were logged incorrectly. Your actual visibility may not have changed.

Teams that reported impression-based metrics to clients or stakeholders since May were working with inflated numbers. Click data provides a cleaner signal for performance analysis while the fix rolls out. Treat May 13, 2025 as a data annotation point, similar to how you would mark an algorithm update date in your reporting.
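One lightweight way to handle that annotation, sketched below under the assumption that you keep daily impression data in a CSV with date and impressions columns (both names are hypothetical), is to flag every row that falls inside the inflated window so reports can break the two periods out separately:

```python
import pandas as pd

# Hypothetical daily impressions export; adjust the filename and columns.
df = pd.read_csv("impressions_daily.csv", parse_dates=["date"])

# Window during which Search Console over-reported impressions, from the
# start of the logging bug to Google's acknowledgment. The fix rolls out
# over several weeks, so treat the end date as approximate.
BUG_START = pd.Timestamp("2025-05-13")
FIX_ACKNOWLEDGED = pd.Timestamp("2026-04-03")

df["impressions_inflated"] = df["date"].between(BUG_START, FIX_ACKNOWLEDGED)

# Report the two periods separately rather than as one continuous trend.
print(df.groupby("impressions_inflated")["impressions"].mean())
```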

What SEO Professionals Are Saying

Brodie Clark, independent SEO consultant, flagged the issue on March 30, four days before Google’s acknowledgment. He wrote:

“Heads-up: there is something bizarre going on with Google Search Console data right now.

Similar to the changes that came to light after the disabling of &num=100, impressions are again skyrocketing for specific surfaces on desktop.”

Clark documented impression spikes across merchant listings and Google Images filters on multiple ecommerce sites and called for the Search Console team to investigate.

Chris Long, co-founder of Nectiv, wrote on LinkedIn: “Holy moly SEOs. It turns out Google has been accidentally inflating impressions in Search Console reports for ALMOST A YEAR.” Long noted that Google did not indicate how much impressions would decrease, and that the profiles he checked appeared stable so far.

Source: Google Data Anomalies in Search Console

Pichai Says AI Could ‘Break Pretty Much All Software’

Google CEO Sundar Pichai said AI models are “going to break pretty much all software out there” during a podcast conversation with Stripe CEO Patrick Collison. The interview covered AI infrastructure constraints and security risks.

Key facts: Pichai framed software security as a hidden constraint on AI deployment alongside memory supply and energy. When investor Elad Gil mentioned hearing that black market zero-day prices were falling because AI was increasing the supply of discoverable vulnerabilities, Pichai said he was “not at all surprised.”

Why This Matters

The security conversation may feel distant from daily SEO work, but it connects to the infrastructure your sites run on. If AI accelerates the pace at which vulnerabilities are found and exploited, the window between a flaw existing and an attacker using it gets shorter. That puts more pressure on maintaining current patches and auditing dependencies.

Pichai’s comments were conversational, not a formal Google policy statement. But they came from someone who oversees both the company’s AI models and its threat intelligence operation. Google’s threat teams have been warning about software security risks tied to faster vulnerability discovery.

Read our full coverage: Pichai Says AI Could ‘Break Pretty Much All Software’

Mueller Calls Self-Described SEO Gurus ‘Clueless Imposters’

Google’s John Mueller responded to a blog post by SEO professional Preeti Gupta about how the word “guru” is misused in the SEO industry. Mueller shared his view on Bluesky.

Key facts: Mueller wrote:

“To me, when someone self-declares themselves as an SEO guru, it’s an extremely obvious sign that they’re a clueless imposter. SEO is not belief-based, nobody knows everything, and it changes over time. You have to acknowledge that you were wrong at times, learn, and practice more.”

Gupta’s original post explained that in India the word guru carries deep cultural and spiritual meaning that is trivialized when SEO practitioners use it as a self-applied label.

Why This Matters

The core of what Mueller said is that SEO changes over time and that nobody has it all figured out.

Just look at what happened this week. Core updates continue to happen without a clear explanation of what changed. A basic logging bug in Search Console went unnoticed for nearly a year. The tools and signals we rely on every day are imperfect, and treating any methodology or perspective as settled knowledge is how mistakes get made.

Read Roger Montti’s full coverage: Google’s Mueller On SEO Gurus Who Are “Clueless Imposters”

Theme Of The Week: The Day-to-Day Work Continues

The speculation about where search is going has never been louder. But this week’s events were a core update finishing, a data bug getting patched, and a Google Search Advocate reminding people that nobody has all the answers.

The future Pichai describes may be coming, but it hasn’t arrived yet. Right now, the job is still reading your Search Console data, waiting for a core update to settle, and staying honest about what you do and do not know.

Mueller’s comment that SEO “is not belief-based” and “changes over time” is as good a summary of this week as any. Those who will succeed in the next version of search are probably the ones paying attention to this version first.

Top Stories Of The Week:

Here are the main links from this week’s coverage.

More Resources:

For more context, these earlier stories help fill in the background.


Featured Image: [Photographer]/Shutterstock

Google’s Push For Data Strength Is Really A Push For Better Bidding via @sejournal, @brookeosmundson

Google keeps coming back to the same message this year: your AI is only as good as the data feeding it.

That message has shown up across the Ads Decoded podcast, Data Manager updates, tagging guidance, partner integrations, and now even developer-focused content like the Ads DevCast podcast. It seems to reflect a broader shift in how Google expects campaigns to be built and optimized.

The issue is not that advertisers lack data. Most accounts have plenty of it. The problem is how that data has been structured, selected, and fed into bidding systems over time.

As Google leans further into AI-driven optimization, that gap becomes more visible for advertisers who don’t have a sound conversion setup. Campaign performance is increasingly tied to how clearly the system understands what success looks like.

Why Google Is Pushing Advertisers To Rethink Conversion Strategy

For years, many advertisers treated conversion tracking as something to expand over and over, not something to refine.

If a platform made it easy to track an action, it got added. If a CRM could send something back, it got imported. If a new conversion type became available, it often made its way into the account without much resistance.

On paper, that sounds like a more complete dataset. The more data, the better – right?

In reality, it has created a lot of noise, making it harder for the machines to learn what truly matters.

Campaigns are often optimized toward a mix of actions that don’t share the same level of intent, value, or timing.

Some signals are high quality but might have low volume due to a delay in sales cycle activity. Others may be immediate but loosely tied to actual business outcomes. Many accounts end up blending all of them together under a single bidding strategy for the sake of measuring everything.

That worked well enough when automation was less dependent on precise inputs.

It becomes a bigger problem when bidding systems are expected to make decisions based on patterns in that data.

Where Most Conversion Setups Break Down

In a recent Ads Decoded podcast episode, Google’s guidance around lead generation makes it clear what the company is trying to correct. The focus is on mapping the full customer journey and identifying the conversion point that provides a usable signal for bidding.

That means looking at three things at the same time:

  1. How predictive the action is of real business value
  2. How frequently it occurs
  3. How quickly it happens after the initial interaction

Many advertisers still default to the deepest possible conversion, assuming that optimizing toward the final sale will produce the best outcome across every campaign.

The issue isn’t the goal itself, but how usable that signal is for the system in a higher-funnel campaign. And this is where many conversion strategies start to fall apart.

If that action happens infrequently or takes weeks to materialize, it limits how much the bidding system can learn from it. The result is often slower optimization, higher volatility, and less efficient scaling.

On the other end, optimizing toward early-stage actions without considering quality can inflate volume without improving actual outcomes.

Selecting the right signal requires matching the conversion to the role the campaign plays and ensuring that signal is both meaningful and usable for bidding.
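As a purely illustrative sketch of that kind of decision, the snippet below scores a few hypothetical conversion actions against the three questions above. The actions, numbers, and thresholds are made up for illustration; they are not Google guidance and should be replaced with figures from your own CRM and analytics.

```python
# Hypothetical candidate conversion actions: name, how predictive of real
# business value (0 to 1), monthly volume, and days until the action occurs.
candidates = [
    ("Form fill",            0.30, 900,  0),
    ("Sales-qualified lead", 0.70, 120,  7),
    ("Closed deal",          1.00,  15, 45),
]

for name, quality, volume, lag_days in candidates:
    # Rough usability check: automated bidding generally needs enough volume
    # and a short enough reporting delay to learn from a signal. The cutoffs
    # here are arbitrary examples.
    usable = volume >= 30 and lag_days <= 30
    print(f"{name}: quality={quality:.2f}, volume/mo={volume}, "
          f"lag={lag_days}d -> usable as a primary bidding signal: {usable}")
```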

That shift requires more intentional decision-making than many accounts have historically applied to conversion setup. It also introduces a level of discipline that many advertisers have not needed when automation was less dependent on signal quality.

Why Is Google Putting So Much Weight On Data Strength?

Google is not being subtle about the Data Strength push. It’s showing up in product updates, integrations, tagging changes, and even in the way Google is speaking to both advertisers and developers.

Part of the reason is practical. Advertisers have lost visibility into many of the signals they used to rely on. Privacy changes, browser restrictions, and platform limitations have made measurement less complete than it used to be.

At the same time, Google’s bidding systems are being asked to do more with less. That puts more pressure on the signals that are still available.

This is where Data Strength comes in. Google is trying to make those signals more reliable, easier to connect, and more useful for optimization. Data Manager, tag gateway, and partner integrations all support that goal.

The expansion of integrations with platforms like HubSpot, Zapier, and Cloudflare also supports this effort. Instead of relying on custom implementations, advertisers can connect the systems where their data already exists with less effort.

This improves consistency in how data flows into bidding systems.

It also reinforces Google’s broader goal of making its automation more effective in a lower-signal environment.

Does This Point To A Broader Role For Google?

I also think there is a bigger shift underneath this push.

Google is moving closer to the systems where business outcomes actually happen, not just where ads are served. Connecting CRM data, offline conversions, and audience signals allows Google’s platforms to better understand what a “good” customer looks like beyond the initial click or form fill.

That can absolutely help advertisers improve performance.

At the same time, it positions Google as more than just an ads platform. It becomes more integrated into how businesses measure performance, define value, and connect marketing efforts back to real outcomes.

Where Does Server-Side Tagging Fit In With This?

There has been a lot of confusion around server-side tagging and how it relates to what Google is promoting today.

They are related, but they aren’t the same thing.

Google tag gateway focuses on how the Google tag is delivered and how requests are routed through first-party infrastructure. It is a way to make existing tagging setups more resilient and aligned with privacy expectations.

Server-side tagging is a broader architectural approach. It shifts data processing from the browser to a server environment that the advertiser controls. This can improve site performance, provide more control over data handling, and support more advanced use cases across multiple platforms.

In practical terms, tag gateway is often a more accessible first step for advertisers looking to improve data reliability without a full infrastructure overhaul.

Server-side tagging is a larger investment and tends to be more relevant for organizations with more complex data requirements or stricter governance needs.

The two approaches can work together, and Google documentation often recommends combining them for a more durable setup.
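To make the architectural difference concrete, here is a minimal, generic sketch of the server-side idea: the browser sends events to an endpoint you control, and your server decides what leaves your infrastructure. This is a conceptual illustration only, not Google’s tag gateway or server-side Google Tag Manager; the endpoint URL and payload fields are assumptions.

```python
from flask import Flask, jsonify, request
import requests

app = Flask(__name__)

# Hypothetical downstream collection endpoint; in a real setup this would be
# the vendor-specific URL for your analytics or ads platform.
DOWNSTREAM_URL = "https://example.com/collect"

@app.route("/collect", methods=["POST"])
def collect():
    event = request.get_json(force=True)

    # The server, not the browser, controls what is forwarded: strip fields
    # you don't want to share before sending the event on.
    cleaned = {k: v for k, v in event.items() if k not in {"email", "phone"}}

    requests.post(DOWNSTREAM_URL, json=cleaned, timeout=5)
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    app.run(port=8080)
```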

A Thoughtful Approach To Data Strength

The increased focus on Data Strength is directionally positive, but it does not remove the need for careful decision-making.

Simplifying setup does not automatically lead to better outcomes. If conversion actions are poorly defined or not aligned with campaign intent, connecting them more efficiently will not improve performance.

If you’re a marketer who isn’t directly involved with setting up conversions, it may be worthwhile to meet with your Analytics teams. Create a list of must-have conversion events or actions you need to track for campaigns (online and/or offline), and cross-check that list with what’s currently set up.

There is also a governance component to consider. As tagging becomes more automated and data collection expands, teams need to understand what is being captured, how it is being used, and how it aligns with internal policies.

Google has noted that expanded automatic event collection may result in additional data being sent to its systems, which should be reviewed as part of implementation.

Another consideration is how platform-specific improvements fit into a broader measurement strategy.

Google’s push around Data Strength is primarily focused on improving performance within its own arena. That is valuable, but it should be complemented by broader measurement approaches when making budget and channel decisions.

This is where initiatives like Meridian come into play. Google has positioned Meridian as an open-source marketing mix modeling solution to help advertisers evaluate performance across channels and connect those insights to budget planning.

How Google Is Reinforcing Data Strength Across The Industry

One of the more interesting aspects of this push is how consistently it’s showing up across different mediums.

Product updates are only one piece of it.

Google is also investing in education and communication around Data Strength, using formats that reach both marketers and developers. Ads Decoded continues to focus on practical campaign strategies, including how to map the customer journey and select the right conversion signals.

At the same time, newer initiatives like Ads DevCast are aimed at a more technical audience, with episodes focused on topics like the Data Manager API and data integration workflows. The goal seems to be to meet teams where they are, whether they are responsible for campaign strategy or the underlying implementation.

The Data Manager API itself reinforces this direction. Google is shifting workflows like Customer Match into a system designed specifically for data connectivity, privacy controls, and more consistent ingestion of first-party data.

That combination of product changes, partnerships, and education signals a coordinated effort to strengthen how data is collected, connected, and used across the entire advertising ecosystem.

What Advertisers Are Saying About The Data Strength Conversation

The discussion around Data Strength and lead quality has sparked a lot of needed conversations between Google and advertisers.

In reaction to the Ads Decoded episode “Beyond the Form Fill,” many advertisers are happy that B2B businesses are getting the attention they’ve been asking for. Melissa Mackey praised the episode, stating that “All lead gen advertisers should go listen.” A few marketers, including Robert Peck, noted the need to reduce or purge the bot leads they see in their B2B campaigns.

Google also did a series of posts and interviews with experts on the importance of data strength. All seemed to have similar sentiment and this is where I started seeing more and more advertisers connect the dots.

Adrija Bose commented on a discussion with Kamal Janardhan, Senior PM Director at Google, and Jeff Sauer, CEO of MeasureU:

What strikes me most is the framing of AI as the engine, not the strategy. Too many leaders conflate the two, expecting AI to compensate for weak signals. This post nails why high-quality data is non-negotiable for meaningful outcomes.

Jonathan Reed also showed his support for the renewed focus on data strength, stating that while it’s a full-time job for his team, they’re “seeing dramatic increases in conversions, and dramatic decreases in cost!”

What Does This Mean For Your Campaigns?

This shift will show up pretty quickly once you look at how your campaigns are actually set up.

A lot of accounts still treat conversion tracking as something to build once and leave alone. But if the signals feeding your campaigns don’t match the intent behind the queries you’re targeting, it becomes harder for bidding to do its job well.

That usually shows up in ways you’ve probably already seen, where performance feels inconsistent and scaling becomes more difficult. Even small changes can create overly volatile swings.

None of that is coming from one setting or one campaign. It is usually a reflection of how the system is learning from the data it is given.

That is why this push toward Data Strength matters so much.

It forces a closer look at which signals are actually being used for optimization, how reliable they are, and whether they reflect real business outcomes.

In some cases, that means connecting better data from your CRM. In others, it is fixing how your tags are set up or how conversions are being defined in the first place.

As Google continues to lean into this direction, the gap will likely grow between accounts that are intentional about their data and those that aren’t.

More Resources:


Featured Image: Garun.Prdt/Shutterstock

Google Answers If Outbound Links Pass “Poor Signals” via @sejournal, @martinibuster

Google’s John Mueller responded to a question about how Google treats outbound links from a site that has a link-related penalty. His answer suggests the situation may not work in the way many assume.

An SEO asked on Bluesky whether a site that has what they described as a “link penalty” could affect the value of outbound links. The question is somewhat vague because a link penalty can mean different things.

  • Was the site buying or building low quality inbound links?
  • Was the site selling links?
  • Was the site involved in some kind of link building scheme?

Despite the vagueness of the question, there’s a legitimate concern underlying it, which is about whether getting links from a site that lost rankings could also transfer harmful signals to other sites.

They asked:

“Hey @johnmu.com hypothetically speaking. If a site has a link penalty are the outbound links from that site devalued? Or do they have the ability to pass on poor signals.. ie bad neighbours?”

There are a number of link-related algorithms that I have written about in the past. And as often happens in SEO, other SEOs will pick up on what I wrote and paraphrase it without mentioning my article. Then someone else paraphrases that, and after a couple of generations there are some weird ideas circulating around.

Poor Signals AKA Link Cooties

If you really want to dig deep into link-related algorithms, I wrote a long and comprehensive article titled What Is Google’s Penguin Algorithm. Many of the research papers discussed in that article were never written about by anyone until I wrote about them. I strongly encourage you to read that article, but only if you’re ready to commit to a really deep dive into the topic.

Another one is about an algorithm that starts with a seed set of trusted sites; the further a site is from that seed set, the likelier that site is spam. That’s link distance ranking. Nobody had ever written about this link distance ranking patent until I wrote about it first. Over the years, other SEOs have written about it after reading my article, and though they don’t link to my article, they’re mostly paraphrasing what I wrote. You know how I can tell those SEOs copied my article? They use the phrase “link distance ranking,” a phrase that I invented. Yup! That phrase does not exist in the patent. I invented it, lol.

The other foundational article that I wrote is about Google’s Link Graph and how it plays into ranking web pages. Everything I write is easy to understand and is based on research papers and patents that I link to so that you can go and read them yourself.

The idea behind the research papers and patents is that there are ways to use the link relationships between sites to identify what a site is about, but also whether it’s in a spammy neighborhood, which means low-quality content and/or manipulated links.

The articles about Link Graphs and link distance ranking algorithms are the ones that are related to the question that was asked about outbound links passing on a negative signal. The thing about it is that those algorithms aren’t about passing a negative signal. They’re based on the intuition that good sites link to other good sites, and spammy sites tend to link to other spammy sites. There’s no outbound link cooties being passed from site to site.

So what probably happened is that one SEO copied my article, then added something to it, and fifty others did the same thing, and then the big takeaway ends up being about outbound link cooties. And that’s how we got to this point where someone’s asking Mueller if sites pass “poor signals” (link cooties) to the sites they link to.

Google May Ignore Links From Problematic Sites

Google’s John Mueller was seemingly confused about the question, but he did confirm that Google basically just ignores low quality links. In other words, there are no “link cooties” being passed from one site to another one.

Mueller responded:

“I’m not sure what you mean with ‘has a link penalty’, but in general, if our systems recognize that a site links out in a way that’s not very helpful or aligned with our policies, we may end up ignoring all links out from that site. For some sites, it’s just not worth looking for the value in links.”

Mueller’s answer suggests that Google does not necessarily treat links from problematic sites as harmful but may instead choose to ignore them entirely. This means that rather than passing value or negative signals, those links may simply be excluded from consideration.

That doesn’t mean that links aren’t used to identify spammy sites. It just means that spamminess isn’t something that is passed from one site to another.

Ignoring Links Is Not The Same As Passing Negative Signals

The distinction about ignoring links is important because it separates two different ideas that are easily conflated.

  • One is that a link can lose value or be discounted.
  • The other is that a link can actively pass negative signals.

Mueller’s explanation aligns with the idea that Google simply ignores low-quality links altogether. In that case, the links are not contributing positively, but they are also not spreading a negative signal to other sites. They’re just ignored.

And that kind of aligns with the idea of something else that I was the first to write about, the Reduced Link Graph. A link graph is basically a map of the web created from all the link relationships from one page to another page. If you drop all the links that are ignored from that link graph, all the spammy sites drop out. That’s the reduced link graph.

Mueller cited two interesting factors for ignoring links: helpfulness and alignment with Google’s policies. The helpfulness part is interesting, and a little vague, but it makes sense.

Takeaways:

  • Links from problematic low quality sites may be ignored
  • Links don’t pass on “poor signals”
  • Negative signal propagation is highly likely not a thing
  • Google’s systems appear to prioritize usefulness and policy alignment when evaluating links
  • If you write an article based on one of mine, link back to it. 🙂

Featured Image by Shutterstock/minifilm

Google March Core Update Left 4 Losers For Every Winner In Germany via @sejournal, @MattGSouthern

A SISTRIX analysis of German search data found far more losers than winners after Google’s March core update.

The analysis revealed 134 domains experiencing confirmed visibility losses and 32 with gains. SISTRIX determined these figures by examining 1,371 domains showing significant visibility changes, then applying filters such as a 52-week Visibility Index history, 30 days of daily data, and visual confirmation of each domain’s trend.
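A rough sense of that filtering step, sketched in Python on a hypothetical domain-level export (the filename, column names, and structure are assumptions based on the description above, not SISTRIX’s actual data or code):

```python
import pandas as pd

# Hypothetical export: one row per domain with visibility-change metrics.
domains = pd.read_csv("visibility_changes.csv")

filtered = domains[
    (domains["weeks_of_history"] >= 52)        # full 52-week Visibility Index history
    & (domains["days_of_daily_data"] >= 30)    # at least 30 days of daily data
    & (domains["trend_confirmed_manually"])    # visual confirmation of the trend
]

losers = filtered[filtered["visibility_change_pct"] < 0]
winners = filtered[filtered["visibility_change_pct"] > 0]
print(f"{len(losers)} losers, {len(winners)} winners")
```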

The update began rolling out on March 27 and was completed on April 8, 12 days after launch. It was the first broad core update of 2026 and arrived two days after Google finished the March 2026 spam update.

The SISTRIX data covers the German search market specifically. Results in other markets may differ.

What The Data Shows

Online shops accounted for the largest share of losers, with 39 of 134. Losses cut across verticals, hitting fashion (cecil.de, down 30%), electronics (media-dealer.de, down 37%), gardening (123zimmerpflanzen.de, down 27%), and B2B supply retailers. Larger German brands like notebooksbilliger.de and expert.de also declined, each losing about 11%.

Seven language and education tools lost visibility together, forming the most distinct cluster among the losers. verbformen.de fell 30%, bab.la dropped 22%, and korrekturen.de, studysmarter.de, linguee.de, openthesaurus.de, and reverso.net all declined by 7% to 15%. These sites offer conjugation tables, translations, synonyms, and study tools.

SISTRIX reports that recipe and food portals have faced pressure from Featured Snippets and, more recently, AI Overviews. The March update affected several of them. kuechengoetter.de lost 29%, schlemmer-atlas.de fell 25%, and eatsmarter.de dropped 18%. chefkoch.de, Germany’s largest recipe site, remained stable.

Among user-generated content platforms, gutefrage.net (Germany’s equivalent of Quora) lost about 24% of its visibility. SISTRIX noted that the site has been declining since mid-2025, when its Visibility Index peaked at 127. It was around 62 before this update and dropped to 47. x.com also fell 25% in German search visibility.

Who Gained

The 32 winners were dominated by official websites and established brands.

audible.de was the largest gainer at 172%, jumping from a Visibility Index of about 3 to over 8. ratiopharm.de gained 12%, commerzbank.de gained 11%, and government sites like hessen.de and arbeitsagentur.de gained 5-8%.

Four German airport websites grew in parallel. Stuttgart Airport rose 22%, Cologne-Bonn 18%, Hamburg 17%, and Munich 8%. SISTRIX described the airport gains as the clearest cluster signal among winners, which may point to a broader ranking pattern rather than isolated site-level changes.

chatgpt.com gained 32% and bing.com gained 19% in German search visibility, though both started from low baselines (Visibility Index under 5). SISTRIX attributed this more to rising demand for brand search than to algorithmic preference.

Why This Matters

The German data covers a single market, and SISTRIX’s methodology captures domains with a Visibility Index above 1, so smaller sites aren’t represented in this dataset. But the patterns are worth watching.

The language tool cluster is notable. Seven sites offering similar functionality all lost visibility at the same time. SISTRIX raises the question of whether these losses reflect Google devaluing such sites or a shift in user behavior as AI tools cover similar functions.

If you’re tracking your own site’s performance after the March core update, Google recommends waiting at least one full week after the update is complete before drawing conclusions. Your baseline period should be before March 27, compared with performance after April 8.

Looking Ahead

SISTRIX plans to publish additional market analyses. Their English-language core update tracking page covers UK and US radar data but hasn’t yet published the detailed winners-and-losers breakdown for those markets.

Google hasn’t commented on what specific changes the March 2026 core update made. As with all core updates, pages can move up or down as Google’s systems reassess quality across the web.


Featured Image: nitpicker/Shutterstock

What 400 Sites Reveal About Organic Traffic Gains via @sejournal, @MattGSouthern

An analysis of more than 400 websites by Zyppy founder Cyrus Shepard identifies five characteristics associated with whether a site gained or lost estimated organic traffic over the past 12 months.

Shepard classified sites by revisiting many of the same ones covered in Lily Ray’s December core update analysis, categorizing them by business model, content type, and other features, then measuring correlation with traffic changes. Traffic estimates come from third-party tools, not verified Search Console data.

Five features showed the strongest association with traffic gains, measured by Spearman correlation:

  1. Offers a Product or Service: 70% of winning sites offered their own product or service, compared to 34% of losing sites. Service-based offerings like subscriptions and digital goods performed well alongside physical products.
  2. Allows Task Completion: 83% of winners let users complete the task they searched for, versus 50% of losers. Sites don’t need to sell anything to score here.
  3. Proprietary Assets: 92% of winners owned something difficult to replicate, such as unique datasets, user-generated content, or specialized software. Among losers, that figure was 57%.
  4. Tight Topical Focus: Winners tended to cover a single narrow topic deeply. Shepard noted that a general “topical focus” classification showed no difference between winners and losers, but tightening the definition to single-topic depth revealed the pattern.
  5. Strong Brand: 32% of winners had high branded search volume relative to their overall traffic, compared to 16% of losers. Shepard measured brand strength by examining each site’s top 20 keywords for navigational branded terms using Ahrefs data.

The effects were additive. Sites with zero features had a 13.5% win rate. Sites with all five reached 69.7%.
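For anyone who wants to run a similar check on their own list of sites, here is a small sketch of the general approach: binary features per site, a gained/lost label, Spearman correlation per feature, and a win rate by feature count. The data below is made up for illustration and is not Shepard’s dataset.

```python
import pandas as pd
from scipy.stats import spearmanr

# Illustrative data only: 1 = the site has the feature; gained = 1 if its
# estimated organic traffic grew over the period.
df = pd.DataFrame({
    "offers_product":    [1, 1, 0, 1, 0, 0, 1, 0],
    "task_completion":   [1, 1, 1, 0, 0, 1, 1, 0],
    "proprietary_asset": [1, 0, 1, 1, 0, 0, 1, 0],
    "tight_focus":       [1, 1, 0, 0, 1, 0, 1, 0],
    "strong_brand":      [0, 1, 0, 1, 0, 0, 1, 0],
    "gained":            [1, 1, 0, 1, 0, 0, 1, 0],
})

features = [c for c in df.columns if c != "gained"]

# Spearman correlation between each feature and the gained/lost outcome.
for feature in features:
    rho, p = spearmanr(df[feature], df["gained"])
    print(f"{feature}: rho={rho:.2f} (p={p:.2f})")

# Win rate by how many of the five features a site has (the additive view).
df["feature_count"] = df[features].sum(axis=1)
print(df.groupby("feature_count")["gained"].mean())
```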

What Didn’t Correlate

The study also tested features Shepard expected to matter but found no correlation with traffic changes. These included first-hand experience, personal perspectives, user-generated content, community platforms, and uniqueness of information.

Shepard cautioned against misreading those findings.

He suggested these features may already be baked into Google’s algorithm from earlier updates, meaning they could still matter even if they don’t show differential results between winners and losers in this dataset.

Why This Matters

Shepard’s findings suggest that sites offering a product, completing a task, or owning harder-to-replicate assets were more likely to show estimated organic traffic gains in this dataset. The study puts specific numbers behind that pattern, though it doesn’t establish causation.

The additive pattern is the most useful finding for those evaluating their position. A site with one winning feature had a win rate (15%) roughly the same as a site with no winning features (13%). The gap only widened at three or more features.

Roger Montti’s analysis for Search Engine Journal in December identified related patterns from the other direction, noting that Google’s topical classifications have become more precise and that core updates sometimes correct over-ranking rather than penalizing sites.

Looking Ahead

The correlation values in this study are moderate (0.206–0.391), and the methodology relies on third-party traffic estimates rather than verified analytics. Correlation doesn’t establish causation.

Sites that offer products may perform better for reasons beyond Google’s ranking preferences, including higher return-visitor rates and more natural backlink profiles.

The full dataset is public, which means others can test these classifications against their own data.


Featured Image: Master1305/Shutterstock

Is fake grass a bad idea? The AstroTurf wars are far from over.

A rare warm spell in January melted enough snow to uncover Cornell University’s newest athletic field, built for field hockey. Months before, it was a meadow teeming with birds and bugs; now it’s more than an acre of synthetic turf roughly the color of the felt on a pool table, almost digital in its saturation. The day I walked up the hill from a nearby creek to take a look, the metal fence around the field was locked, but someone had left a hallway-size piece of the new simulated grass outside the perimeter. It was bristly and tough, but springy and squeaky under my booted feet. I could imagine running around on it, but it would definitely take some getting used to.

My companion on this walk seemed even less favorably disposed to the thought. Yayoi Koizumi, a local environmental advocate, has been fighting synthetic-turf projects at Cornell since 2023. A petite woman dressed that day in a faded plum coat over a teal vest, with a scarf the colors of salmon, slate, and sunflowers, Koizumi compulsively picked up plastic trash as we walked: a red Solo cup, a polyethylene Dunkin’ container, a five-foot vinyl panel. She couldn’t bear to leave this stuff behind to fragment into microplastic bits—as she believes the new field will. “They’ve covered the living ground in plastic,” she said. “It’s really maddening.” 

The new pitch is one part of a $70 million plan to build more recreational space at the university. As of this spring, Cornell plans to install something like a quarter million square feet of synthetic grass—what people have colloquially called “astroturf” since the middle of the last century. University PR says it will be an important part of a “health-promoting campus” that is “supportive of holistic individual, social, and ecological well-being.” Koizumi runs an anti-plastic environmental group called Zero Waste Ithaca, which says that’s mostly nonsense.

This fight is more than just the usual town-versus-gown tension. Synthetic turf used to be the stuff of professional sports arenas and maybe a suburban yard or two; today communities across the United States are debating whether to lay it down on playgrounds, parks, and dog runs. Proponents say it’s cheaper and hardier than grass, requiring less water, fertilizer, and maintenance—and that it offers a uniform surface for more hours and more days of the year than grass fields, a competitive advantage for athletes and schools hoping for a more robust athletic program.

But while new generations of synthetic turf look and feel better than that mid-century stuff, it’s still just plastic. Some evidence suggests it sheds bits that endanger users and the environment, and that it contains PFAS “forever chemicals”—per- and polyfluoroalkyl substances, which are linked to a host of health issues. The padding within the plastic grass is usually made from shredded tires, which might also pose health risks. And plastic fields need to be replaced about once a decade, creating lots of waste.

Yet people are buying a lot of the stuff. In 2001, Americans installed just over 7 million square meters of synthetic turf, just shy of 11,000 metric tons. By 2024, that number was 79 million square meters—enough to carpet all of Manhattan and then some, almost 120,000 metric tons. Synthetic turf covers 20,000 athletic fields and tens of thousands of parks, playgrounds, and backyards. And the US is just 20% of the global market. 

Those increases worry folks who study microplastics and environmental pollution. Any actual risk is hard to parse; the plastic-making industry insists that synthetic fields are safe if properly installed, but lots of researchers think that isn’t so. “They’re very expensive, they contain toxic chemicals, and they put kids at unnecessary risk,” says Philip Landrigan, a Boston College epidemiologist who has studied environmental toxins like lead and microplastics.

But at Cornell, where real estate is limited and demand for athletic facilities is high, synthetic turf was a tempting option. As Frank Rossi, a professor of turf science at Cornell, told me: “It all comes down to land and demand.”


In 1965, Houston’s new, domed baseball stadium was an icon of space-age design. But the Astrodome had a problem: the sun. Deep in the heart of Texas, it shined brightly through the Astrodome’s skylights—so much so that players kept missing fly balls. So the club painted over the skylights. Denied sunlight, the grass in the outfield withered and died.

A replacement was already in the works. In the late 1950s a Ford Foundation–funded educational laboratory determined that a soft, grasslike surface material would give city kids more places to play outside and had prevailed upon the Monsanto corporation to invent one. The result was clipped blades of nylon stuck to a rubber base, which the company called ChemGrass. Down it went into Houston’s outfield, where it got a new, buzzier name: AstroTurf.

Workers lay artificial turf at the Astrodome in Houston on July 13, 1966. Developed by Monsanto, the material was originally known as ChemGrass but was later renamed AstroTurf after the stadium.
AP PHOTO/ED KOLENOVSKY, FILE

That first generation of simulated lawn was brittle and hard, but quality has improved. Today, there are a few competing products, but they’re all made by extruding a petroleum-based polymer—that’s plastic—through tiny holes and then stitching or fusing the resulting fibers to a carpetlike bottom. That gets attached to some kind of padding, also plastic. In the 1970s the industry started layering that over infill, usually sand; by the 1990s, “third generation” synthetic turf had switched to softer fibers made of polyethylene. Beneath that, they added infill that combined sand and a soft, cheap shredded rubber made from discarded automobile tires, which pile up by the hundreds of millions every year. This “crumb rubber” provides padding and fills spaces between the blades and the backing.

In the early 1980s, nearly half the professional baseball and football fields in the US had synthetic turf. But many players didn’t like it. It got hotter than real grass, gave the ball different action, and seemed to be increasing the rate of injuries among athletes. Since the 1990s, most pro sports have shifted back toward grass—water and maintenance costs pale in comparison to the importance of keeping players happy or sparing them the risk of injury. 

But at the same time, more universities and high schools are buying the artificial stuff. The advantages are clear, especially in places where it rains either too much or not enough. A natural-grass field is usable for a little more than 800 hours a year at the most, spread across just eight months in the cooler, wetter northern US. An artificial-turf field can see 3,000 hours of activity per year. For sports like lacrosse, which begins in late winter, this makes artificial turf more appealing. Most lacrosse pitches are now synthetic. So are almost all field hockey pitches; players like the way the even, springy turf makes the ball bounce.

Furthermore, supporters say synthetic turf needs less maintenance than grass, saving money and resources. That’s not always true; workers still have to decompact the playing surface and hose it off to remove bird poop or cool it down. Sometimes the infill needs topping up. But real grass allows less playing time, and because grass athletic fields often need to be rotated to avoid damage, synthetic ground cover can require less space. Hence the market’s explosive growth in the 21st century.


The city and town of Ithaca—two separate political entities with overlapping jurisdiction over Cornell construction projects—held multiple public meetings about the university’s new synthetic fields: the field hockey pitch and a complex called the Meinig Fieldhouse. Koizumi’s group turned up in force, and a few folks who worked at Cornell came to oppose the idea too—submitting pages of citations and studies on the risks of synthetic grass.

At two of those meetings, dozens of Cornell athletes turned out to support the turf. Representatives of the university and the athletic department declined to speak with me for this story, citing an ongoing lawsuit from Zero Waste Ithaca. But before that, Nicki Moore, Cornell’s director of athletics, told a local newspaper that demand from campus groups and sports teams meant the fields were constantly overcrowded. “Activities get bumped later and later, and sometimes varsity teams won’t start practicing until 10 at night, you know?” Moore told the paper. “Availability of all-weather space should normalize scheduling a great deal.”

That argument wasn’t universally convincing. “It’s a bad idea, but that’s from the environmental perspective,” says Marianne Krasny, director of Cornell’s Civic Ecology Lab and one of the speakers at those hearings. “Obviously the athletic department thinks it’s a great idea.”

Members of Cornell on Fire, a climate action group with members from both the university and the town, joined in opposing the use of artificial turf, citing the fossil-fuel origins of the stuff. They described the nominal support of the project from student athletes as inauthentic, representing not grassroots support but, yes, an astroturf campaign. 

Sorting out the actual science here isn’t simple. Over time, the plastic that synthetic turf is made of sheds bits of itself into the environment. In one study, published in 2023 in the journal Environmental Pollution, researchers found that 15% of the medium-size and microplastic particles in a river and the Mediterranean Sea outside Barcelona, Spain, came from artificial turf, mostly in the form of tiny green fibers. Back in 2020, the European Chemicals Agency estimated that infill material from artificial-turf fields in the European Union was contributing 16,000 metric tons of microplastics to the environment each year—38% of all annual microplastic pollution. Most of that came from the crumb rubber infill, which Europe now plans to ban by 2031.

This pollution worries the Cornell activists. Ithaca is famous for scenic gorges and waterways. The new field hockey pitch is uphill from a local creek that empties into Cayuga Lake, the longest of the Finger Lakes and the source of drinking water for over 40,000 people.

And it’s not just the plastic bits. When newer generations of synthetic turf switched to durable high-density polyethylene, the new material gunked up the extruders used in the manufacturing process. So turf makers started adding fluorinated polymers—a type of PFAS. Some of these environmentally persistent “forever chemicals” cause cancer, disrupt the endocrine system, or lead to other health problems. Research in several different labs has found PFAS in many types of plastic grass.

But the key to assessing the threat here is exposure. Heather Whitehead, an analytical chemist then at the University of Notre Dame, found PFAS in synthetic turf at levels around five parts per billion—but estimated it’d be in water running off the fields at three parts per trillion; for context, the US Environmental Protection Agency’s legal drinking-water limit on one of the most widespread and dangerous PFAS chemicals is four parts per trillion. “These chemicals will wash off in small amounts for long periods of time,” says Graham Peaslee, Whitehead’s advisor and an emeritus nuclear physicist who studies PFAS concentrations. “I think it’s reason enough not to have artificial turf.”
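Because parts per billion and parts per trillion are easy to conflate, here is a quick scale check in Python using the figures above; the ratios are my own back-of-envelope arithmetic, not numbers from Whitehead’s study.

```python
# Scale check: PFAS measured in the turf itself vs. estimated in runoff vs. the EPA limit.
# Figures are taken from the paragraph above; the ratios are back-of-envelope arithmetic.

TURF_PPB = 5.0          # PFAS found in synthetic turf, parts per billion
RUNOFF_PPT = 3.0        # estimated concentration in runoff, parts per trillion
EPA_LIMIT_PPT = 4.0     # EPA drinking-water limit for one widespread PFAS, parts per trillion

turf_ppt = TURF_PPB * 1_000     # 1 part per billion = 1,000 parts per trillion

print(f"In the turf itself: {turf_ppt:,.0f} ppt")                              # 5,000 ppt
print(f"Runoff is ~{turf_ppt / RUNOFF_PPT:,.0f}x more dilute than the turf")   # ~1,667x
print(f"Runoff sits at {RUNOFF_PPT / EPA_LIMIT_PPT:.0%} of the EPA limit")     # 75%
```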

This gets confusing, though. There are over 16,000 different types of PFAS, few of which have been well studied, and different companies use different manufacturing techniques. Companies represented by the Synthetic Turf Council now “use zero intentionally added PFAS,” says Melanie Taylor, the group’s president. “This means that as the field rolls off the assembly line, there are zero PFAS-formulated materials present.”

Some researchers are skeptical of the industry’s assurances. They’re hard to confirm, especially because there are a lot of ways to test for PFAS. The type of synthetic turf going onto the new field hockey pitch at Cornell is called GreenFields TX; the university had a sample tested using an EPA method that looks for 40 different PFAS compounds. It came back negative for all of them. The local activists countered that the test doesn’t detect the specific types they’re most concerned about, and in 2025 they paid for three more tests on newly purchased synthetic turf. Two clearly found fluorine—the F in “PFAS”—and one identified two distinct PFAS compounds. (The company that makes GreenFields TX, TenCate, declined to comment, citing ongoing litigation.)

PFAS isn’t the only potential problem. There’s also the crumb rubber made from tires. A billion tires get thrown out every year worldwide, and if they aren’t recycled they sit in giant piles that make great habitats for rats and mosquitoes; they also occasionally catch fire. Lots of the tires that go into turf are made of styrene-butadiene rubber, or SBR. In bulk, that’s bad. Butadiene is a carcinogen that causes leukemia, and fumes from styrene can cause nervous system damage. SBR also contains high levels of lead.

But how much of that comes out of synthetic-turf infill? Again, that’s hotly debated. Researchers around the world have published suggestive studies finding potentially dangerous levels of heavy metals like zinc and lead in synthetic turf, with possible health risks to people using the fields. But a review of many of the relevant studies on turf and crumb rubber from Canada’s National Collaborating Centre for Environmental Health determined that most well-conducted health risk assessments over the last decade found exposures below levels of concern for cancer and certain other diseases. A 2017 report by the European Chemicals Agency—the same people who found all those microplastics in the environment—“found no reason to advise people against playing sports on synthetic turf containing recycled rubber granules as infill material.” And a multiyear study from the EPA, published in 2024, found much the same thing—although the researchers said that levels of certain synthetic chemicals were elevated inside places that used indoor artificial turf. They also stressed that the paper was not a risk assessment.

The problem is, the kinds of cancers these chemicals can cause may take decades to show up. Long-term studies haven’t been done yet. All the evidence available so far is anecdotal—like a series for the Philadelphia Inquirer that linked the deaths of six former Phillies players from a rare type of brain cancer called glioblastoma to years spent playing on PFAS-containing artificial turf. That’d be about three times the usual rate of glioblastoma among adult men, but the report comes with a lot of cautions—small sample size, lots of other potential causes, no way to establish causation.

Synthetic turf has one negative that no one really disputes: It gets very hot in the sun—as hot as 150 °F (66 °C). This can actually burn players, so they often want to avoid using a field on very hot days.

A field hockey player from Cornell University passes the ball during a game played on artificial turf at Bryant University in 2025. Cornell’s own turf field will be ready for the 2026 season.
GETTY IMAGES

Athletes playing on artificial turf also have a higher rate of foot and ankle injuries, and elite-level football players seem to be more predisposed to knee injuries on those surfaces. But other studies have found rates of knee and hip injury to be roughly comparable on artificial and natural turf—a point the landscape architect working on the Cornell project made in the information packet the university sent to the city. Athletic departments and city parks departments say that the material’s upsides make it worthwhile, given that there’s no conclusive proof of harm.

Back in Ithaca, Cornell hired an environmental consulting firm called Haley & Aldrich to assess the evidence. The company concluded that none of the university’s proposed installations of artificial turf would have a negative environmental impact. People from Cornell on Fire and Zero Waste Ithaca told me they didn’t trust the firm’s findings; representatives from Haley & Aldrich declined to comment.

Longtime activists say that as global consumption of fossil fuels declines, petrochemical companies are desperate to find other markets. That means plastics. “There’s a big push to shift more petrochemicals into plastic products for an end market,” says Jeff Gearhart, a consumer product researcher at the Ecology Center. “Industry people, with a vested interest in petrochemicals, are looking to expand and build out alternative markets for this stuff.”

All that and more went before the decision-makers in Ithaca. In September 2024, the City of Ithaca Planning Board unanimously issued a judgment that the Meinig Fieldhouse would not have a significant environmental impact and thus would not require a full environmental impact assessment. Six months later, the town made the same determination for the field hockey pitch.

Zero Waste Ithaca sued in New York’s supreme court, which ruled against the group. Koizumi and lawyers from Pace University’s Environmental Litigation Clinic have appealed. She says she’s still hopeful the court might agree that Ithaca authorities made a mistake by not requiring an environmental impact statement from the college. “We have the science on our side,” she says.


Ithaca is a pretty rarefied place, an Ivy League university town. But these same tensions—potential long-term environmental and public health consequences versus the financial and maintenance concerns of the now—are pitting worried citizens against their representatives and city agencies around the country. 

New York City has 286 municipal synthetic-turf fields, with more under construction. In Inwood, the northernmost neighborhood in Manhattan, two fields were approved via Zoom meetings during the pandemic, and Massimo Strino, a local artist who makes kaleidoscopes, says he found out only when he saw signs announcing the work on one of his daily walks in Inwood Hill Park, along the Hudson River. He joined a campaign against the plan, gathering more than 4,300 signatures. “I was canvassing every weekend,” Strino says. “You can count on one hand, literally, the number of people who said they were in favor.”

But that doesn’t include the group that pushed for one of those fields in the first place: Uptown Soccer, which offers free and low-cost lessons and games to 1,000 kids a year, mostly from underserved immigrant families. “It was turning an unused community space into a usable space,” says David Sykes, the group’s executive director. “That trumped the sort of abstract concerns about the environmental impacts. I’m not an expert in artificial turf, but the parks department assured me that there was no risk of health effects.”

Artificial turf doesn’t go away. “You’re going to be paying to get rid of it. Somebody will have to take it to a dump, where it will sit for a thousand years.”

Graham Peaslee, emeritus nuclear physicist studying PFAS concentrations, University of Notre Dame

New York City councilmember Christopher Marte disagrees. He has introduced a bill to ban new artificial turf from being installed in parks, and he hopes the proposal will be taken up by the Parks Committee this spring. Last session, the bill had 10 cosponsors—that’s a lot. Marte says he expects resistance from lobbyists, but there’s precedent. The city of Boston banned artificial turf in 2022.  

Upstate, in a Rochester suburb called Brighton, the school district included synthetic-turf baseball and softball diamonds in a wide-ranging February 2024 capital improvement proposition. The measure passed. In a public meeting in November 2025, the school board acknowledged the intent to use synthetic grass—or, as concerned parents had it, “to rip up a quarter million square feet of this open space and replace it with artificial turf,” says David Masur, executive director of the environmental group PennEnvironment, whose kids attend school in Brighton. Parents and community members mobilized against the plan, further angered when contractors also cut down a beloved 200-year-old tree. School superintendent Kevin McGowan says it’s too late to change course. Masur has been working to oppose the plan nevertheless—he says school boards are making consequential decisions about turf without sharing information or getting input, even though these fields can cost millions of dollars of taxpayer money.

In short, the fights can get tense. On Martha’s Vineyard, in Massachusetts, a meeting about plans to install an artificial field at a local high school had to be ended early amid verbal abuse. A staffer for the local board of health who voiced concern about PFAS in the turf quit the board after discovering bullet casings in her tote bag, which she said she perceived as a death threat. After an eight-year fight, the board eventually banned artificial turf altogether.


What happens next? Well, outdoor artificial turf lasts only eight to 12 years before it needs to be taken up and replaced. The Synthetic Turf Council says it’s at least partially recyclable and cites a company called BestPLUS Plastic Lumber as a purveyor of products made from recycled turf. The company says one of its products, a liner called GreenBoard that artificial turf can be nailed into, contains at least 40% material recycled from fake grass. Joseph Sadlier, vice president and general manager of plastics recycling at BestPLUS, says the company recycles over 10 million pounds annually.

Yet the material is piling up. In 2021, a Danish company called Re-Match announced plans to open a recycling plant in Pennsylvania and began amassing thousands of tons of used plastic turf in three locations. The company filed for bankruptcy in 2025.

In Ithaca, university representatives told planning boards that it would be possible to recycle the old artificial turf they ripped out to make way for the Meinig Fieldhouse. That didn’t happen. An anonymous local activist tracked the old rolls to a hauling company a half-hour’s drive south of campus and shared pictures of them sitting on the lot, where they stayed for months. It’s unclear what their ultimate fate will be.

That’s the real problem: Artificial turf just doesn’t go away. “You’re going to be paying to get rid of it,” says Peaslee, the PFAS expert. “Somebody will have to take it to a dump, where it will sit for a thousand years.” Real grass, by contrast, is a net carbon sink, even accounting for installation and maintenance; synthetic turf releases greenhouse gases. One life-cycle analysis of a 2.2-acre synthetic field in Toronto determined that it would emit 55 metric tons of carbon dioxide over a decade. Plastic fields need less water to maintain, but it takes water to make plastic, and natural grass lets rainwater seep into the ground. Synthetic turf sends most of it away as runoff.
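For a rough sense of scale, here is a small sketch that breaks the Toronto figure down per acre and per year; the breakdown is my own arithmetic on the numbers above, not part of the original life-cycle analysis.

```python
# Per-acre, per-year breakdown of the Toronto life-cycle figure cited above.
FIELD_ACRES = 2.2          # size of the analyzed synthetic field (article figure)
TOTAL_TONNES_CO2 = 55      # estimated emissions over a decade (article figure)
YEARS = 10

per_year = TOTAL_TONNES_CO2 / YEARS            # 5.5 tonnes of CO2 per year for the field
per_acre_per_year = per_year / FIELD_ACRES     # ~2.5 tonnes of CO2 per acre per year

print(f"~{per_year:.1f} t CO2/year for the whole field")
print(f"~{per_acre_per_year:.1f} t CO2 per acre per year")
```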

It’s a boggling set of issues to factor into a decision. Rossi, the Cornell turf scientist, says he can understand why a school in the northern United States might go plastic, even when it cares about its students’ health. “It was the best bad option,” he says. Concerns about microplastics and PFAS are “significant issues we have not fully addressed.” And they need to be. 

Douglas Main is a journalist and former senior editor and writer at National Geographic.

Desalination technology, by the numbers

When I started digging into desalination technology for a new story, I couldn’t help but obsess over the numbers.

I’d known on some level that desalination—pulling salt out of seawater to produce fresh water—was an increasingly important technology, especially in water-stressed regions including the Middle East. But just how much some countries rely on desalination, and how big a business it is, still surprised me.

For more on how this crucial water infrastructure is increasingly vulnerable during the war in Iran, check out my latest story. Here, though, let’s look at the state of desalination technology, by the numbers.

Desalination produces 77% of all fresh water and 99% of drinking water in Qatar.

Globally, we rely on desalination for just 1% of fresh-water withdrawals. But for some countries in the Middle East, and particularly for the Gulf Cooperation Council countries (Bahrain, Qatar, Kuwait, the United Arab Emirates, Saudi Arabia, and Oman), it’s crucial.

Qatar, home to over 3 million people, is one of the most staggering examples, with nearly all its drinking water supplies coming from desalination. But many major cities in the region couldn’t exist without the technology. There are no permanent rivers on the Arabian Peninsula, and supplies of fresh water are incredibly limited, so countries rely on facilities that can take in seawater and pull out the salt and other impurities.

The Middle East is home to just 6% of the world’s population and over 27% of its desalination facilities.

The region has historically been water-scarce, and that trend is only continuing as climate change pushes temperatures higher and changes rainfall patterns.

Of the 17,910 desalination facilities that are operational globally, 4,897 are located in the Middle East, according to a 2026 study in npj Clean Water. The technology supplies not only municipal water for homes and businesses but also water for industries including agriculture, manufacturing, and, increasingly, data centers.
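Those two statistics are consistent with each other; a quick check on the plant counts reproduces the roughly 27% share cited above.

```python
# Consistency check on the Middle East's share of desalination plants,
# using the plant counts from the 2026 npj Clean Water study cited above.
middle_east_plants = 4_897
global_plants = 17_910
print(f"Share of global plants: {middle_east_plants / global_plants:.1%}")  # ~27.3%
```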

One massive desalination plant in Saudi Arabia produces over 1 million cubic meters of fresh water per day.

The Ras Al-Khair water and power plant in Eastern Province, Saudi Arabia, is one of a growing number of gigantic plants that produce upwards of a million cubic meters of water each day. That amount of water can meet the needs of millions of people in Riyadh. Producing it takes a lot of power—the attached power plant has a capacity of 2.4 gigawatts.

While this plant is just one of thousands across the region, it’s an example of a growing trend: The average size of a desalination plant is about 10 times what it was 15 years ago, according to data from the International Energy Agency. Communities are increasingly turning to larger plants, which can produce water more efficiently than smaller ones.

Between 2024 and 2028, the Middle East’s desalination capacity could grow by over 40%.

Desalination is only going to be more crucial for life in the Middle East. The region is expected to spend over $25 billion on capital expenses for desalination facilities between 2024 and 2028, according to the 2026 npj Clean Water study. More massive plants are expected to come online in Saudi Arabia, Iraq, and Egypt during that time.

All this growth could consume a lot of electricity. Between growth of the technology generally and the move toward plants that use electricity rather than fossil fuels, desalination could add 190 terawatt-hours of electricity demand globally by 2035, according to IEA data. That’s roughly the annual electricity consumption of about 60 million households.
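As a sanity check on that household comparison, here is a short sketch; the implied per-household figure is my own division, and the IEA’s underlying assumption may differ.

```python
# Sanity check on the "60 million households" comparison for 190 TWh of added demand.
# The implied per-household figure is back-of-envelope; the IEA's assumption may differ.
ADDED_DEMAND_TWH = 190          # projected extra electricity demand by 2035 (IEA figure)
HOUSEHOLDS = 60_000_000         # stated household equivalent

kwh_per_household_per_year = ADDED_DEMAND_TWH * 1e9 / HOUSEHOLDS   # 1 TWh = 1e9 kWh
print(f"Implied use: ~{kwh_per_household_per_year:,.0f} kWh per household per year")  # ~3,167
```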

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The Download: AstroTurf wars and exponential AI growth

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Is fake grass a bad idea? The AstroTurf wars are far from over. 

In 2001, Americans installed just over 7 million square meters of synthetic turf. By 2024, that number was 79 million square meters—enough to carpet all of Manhattan and then some. The increase worries folks who study microplastics and environmental pollution.  

While the plastic-making industry insists that synthetic fields are safe if properly installed, lots of researchers think that isn’t so. Find out why AstroTurf has ignited heated debates.

—Douglas Main 

This story is from the next issue of our print magazine, packed with stories all about nature. Subscribe now to read the full thing when it lands on Wednesday, April 22. 

Mustafa Suleyman: AI development won’t hit a wall anytime soon—here’s why 

—Mustafa Suleyman, Microsoft AI CEO and Google DeepMind co-founder 

The skeptics keep predicting that AI compute will soon hit a wall—and keep getting proven wrong. To understand why that is, you need to look at the forces driving the AI explosion.  

Three advances are enabling exponential progress: faster basic calculators, high-bandwidth memory, and technologies that turn disparate GPUs into enormous supercomputers. Where does all this get us? Read the full op-ed on the future of AI development to learn more
 

Desalination technology, by the numbers 

—Casey Crownhart 

When I started digging into desalination technology for a new story, I couldn’t help but obsess over the numbers. 

I knew on some level that desalination—pulling salt out of seawater to produce fresh water—was an increasingly important technology, especially in water-stressed regions including the Middle East. But just how much some countries rely on desalination, and how big a business it is, still surprised me.

Here are the extraordinary numbers behind the crucial water source

This story is from The Spark, our weekly newsletter on the tech that could combat the climate crisis. Sign up to receive it in your inbox every Wednesday. 

The must-reads 

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 

1 Meta has launched the first AI model from its Superintelligence Labs
Muse Spark is the company’s first model in a year. (Reuters $) 
+ The closed model brings reasoning capabilities to the Meta AI app. (Engadget) 
+ It’s built by Meta’s Superintelligence Labs, the unit led by Alexandr Wang. (TechCrunch) 

2 Anthropic has lost a bid to pause the Pentagon’s blacklisting 
An appeals court in Washington, DC, denied the request. (CNBC) 
+ A California judge had temporarily blocked the blacklisting in March. (NPR) 
+ The mixed rulings leave Anthropic in a legal limbo. (Wired $) 
+ And open doors for smaller AI rivals. (Reuters $) 

3 New evidence suggests Adam Back invented Bitcoin 
The British cryptographer may be the real Satoshi Nakamoto. (NYT $) 
+ Back denies the claims. (BBC) 
+ There’s a dark side to crypto’s permissionless dream. (MIT Technology Review) 

4 Gen Z is cooling on AI 
The share feeling angry about it has risen from 22% to 31% in a year. (Axios) 
+ Anti-AI protests are also growing. (MIT Technology Review) 

5 War in the Gulf could tilt the cloud race toward China 
Huawei is pitching “multi-cloud” resilience to Gulf clients. (Rest of World) 

6 Meta has killed a leaderboard of its AI token users 
It showed the top 250 users. (The Information $) 
+ Meta blamed data leaks for the shutdown. (Fortune) 
+ It encouraged “tokenmaxxing,” a growing phenomenon in Big Tech. (NYT $) 

7 Did Artemis II really tell us anything new about space? 
Or was it primarily a PR exercise? (Ars Technica) 

8 Israeli attacks have brutally exposed Lebanon’s digital infrastructure 
It’s managing a modern crisis without modern technology. (Wired $) 

9 AI models could offer mathematicians a common language 
They hope it will simplify the process of verifying proofs. (Economist)  

10 A “self-doxing” rave is helping trans people stay safe online 
It’s among a series of digital self-defenses. (404 Media) 

Quote of the day 

“I feel like anything that I’m interested in has the potential of maybe getting replaced, even in the next few years.” 

—Sydney Gill, a freshman at Rice University, tells the New York Times why she’s soured on AI. 

One More Thing 

A view inside ATLAS, one of two general-purpose detectors at the Large Hadron Collider.
MAXIMILIEN BRICE/CERN

Inside the hunt for new physics at the world’s largest particle collider 

In 2012, data from CERN’s Large Hadron Collider (LHC) unearthed a particle called the Higgs boson. The discovery answered a nagging question: where do fundamental particles, such as the ones that make up all the protons and neutrons in our bodies, get their mass?

But now particle physicists have reached an impasse in their quest to discover, produce, and study new particles at colliders. Find out what they’re trying to do about it.

—Dan Garisto 

We can still have nice things 

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.) 

+ Enjoy this tale of the “joke” sound that accidentally defined 90s rave culture. 
+ Take a nostalgic trip through the websites of the early 00s. 
+ One for animal lovers: sperm whales have teamed up to support a newborn. 
+ Here’s a long overdue answer to a vital question: can the world’s largest mousetrap catch a limousine?