The snow gods: How a couple of ski bums built the internet’s best weather app

The best snow-forecasting app for skiers and snowboarders isn’t from any of the federally funded weather services. Nor from any of the big-name brands. It comes from an independent startup that leverages government data, its own AI models, and decades of alpine-life experience to offer better snow (and soon avalanche) predictions than anything else out there.

Skiers in the know follow OpenSnow and won’t bother heading to the mountains—from Alpine Meadows to Mont Blanc, Crested Butte to Killington—unless this small team of trusted weathered men tells them to. (And yes, they’re all men.) The app has made microcelebrities of its forecasters, who sift through and analyze reams of data to write “Daily Snow” reports for locations throughout the world.

“I’m F-list famous,” OpenSnow founding partner and forecaster Bryan Allegretto says with a laugh. “Not even D-list.” 

The app has proved especially vital this year, which has been one of the weirder winters on record. The US West saw very little snow, despite an intense storm cycle that led to one of the deadliest avalanches in history. That storm was followed by one of the fastest melts in memory, and several resorts in California are already shutting down for the season. Meanwhile, in the East, the ongoing snowfall has offered a rare gift: a deep and seemingly endless winter.

MIT Technology Review caught up with Allegretto, better known as BA, in the Tahoe mountains to talk about the weather, AI, avalanches, and how a little weather app became the closest thing powder-hounds have to a crystal ball: a daily dump of the freshest, most decipherable, and most micro-accurate forecasts in the biz. And how two once-broke ski bums—Allegretto and his Colorado counterpart, CEO Joel Gratz—managed to bootstrap a business and turn an email list of 37 into a cult following half a million strong.

This interview has been edited for clarity and accuracy. 

You grew up in New Jersey. Middle of the pack as far as snowy states. What were your winters like as a kid?

I was always obsessed with weather. Especially severe weather. Nor’easters. There was the blizzard of ’89, I believe, that hit the East Coast hard—dropped two to three feet of snow, which was a lot for the Jersey Shore. My dad worked for the highway authority, so he had tools other than the evening news. He was in charge of calling out the snowplows whenever it snowed, so I just remember chasing storms with my dad. I wasn’t allowed to ride in the snowplows. I’d watch them. When I got older, I was the one shoveling the neighbors’ driveways. I just liked being out there. In it. In college, I used to go around and shovel all the girls’ sidewalks. That was fun. 

When did you start skiing?

We would cut school and take a bus to go skiing, unbeknownst to our parents. It was the ’90s, and the surfers decided snowboarding would be fun, so the local surf shop started running a bus and all these surfers would show up and hop the bus to Hunter Mountain. We’d drive to the Poconos, go night skiing, turn around. It wasn’t uncommon for me in high school to get in the car by myself, either—and just drive. Me, my dog, my backpack. I’d sleep in gas stations and ski. Storm-chasing around the Northeast.

What were you really chasing, you think?

Natural highs. Happiness. I’ve always been a soul-searcher. I grew up in a crazy house situation, a broken home. My dad left. My mom became a drug addict. I just wanted to be gone. I’m the oldest. I was always trying to help my mom and make sure she was okay. No one was telling me to go to school and have a career. I just wanted to do something that fulfills me.

How’d you go about figuring out what that was? 

For me, going to school was a big task, given where I was coming from. There wasn’t any money. I could get grants and scholarships because my mom was so poor. I wanted to go to Penn State but didn’t have the grades. I ended up at Kean, a public university in New Jersey. It had a meteorology program. We got to go to New York City, to NBC, and practice on the green screen. In meteorology school, I started thinking: How do I work in the ski and snowboard industry and use weather at the same time? I went to Rowan [University] for business, in South Jersey, and in between moved to Hawaii to surf and spent a year teaching snowboarding. My goal the whole time was to not work in a career I hated.

I imagine you weren’t like most meteorology students. 

Us punk rockers, skaters, snowboarders—we were a little different than the typical meteorology nerds. I was the radical storm chaser. A big personality. I still am.

You didn’t quite fit the traditional weatherman mold.

Back then, there were no smartphones or social media. If you were a meteorologist, you either worked in a cubicle for the government or at an insurance company assessing weather risk. Or you were on the local news. That wasn’t my thing. They didn’t want Grizzly Adams up there with his big beard.

Beards belong in the mountains?

Meteorologists live in cities because that’s where the jobs are. They don’t live in small mountain towns. That’s what was missing in the industry. When I moved to Tahoe, in 2006, I realized nobody had any trust in the weather forecasts. It was more like a “We’ll believe it when we see it” old-fashioned mentality. If you’re a forecaster in flat areas, you just look at the weather model and regurgitate the news. Weathermen in Sacramento or Reno didn’t give a crap about the ski resorts! They’d just say “We’ll see three feet above 6,000 feet” and go on to the next segment. And skiers were like: “Wait a minute. Is it going to be windy at the top?” I thought: Let’s home in and give skiers what they’re looking for.

So you were living in Tahoe, skiing and forecasting?

I was working in the office at a resort, snowboarding, and doing weather on the side. I’d get up at 4 a.m. and do it before my 9 a.m. day job. Forecasting, figuring out: How the heck do these storms interact with these mountains? I started emailing everyone in the office what I’d see coming, and people kept saying “Add me! Add me!” Eventually, resorts around Tahoe started asking to use my forecasts.

How were you actually forecasting, though? 

NOAA’s GFS [Global Forecast System], the Canadian model, the Euro model, German, Japanese—all these governments make these weather models to forecast the weather. And share it. Anyone can access it. But you can’t just look at a weather model and go, Yep, that’s what’s going to happen. That’s not how it works in the mountains. It’s way harder. You can’t rely on model data. It’s low-res, forecasting for a grid area that’s too big. It can’t understand what’s going on. It’s going to generalize the weather. You can try that, but you’re going to be wrong. A lot of people are going to stop listening. I was able to forecast more accurately than most people because I was living there; I could fix a lot of these errors. Around 2007, I started my own website, Tahoe Weather Discussion.

Bryan Allegretto (right) on the lift with OpenSnow CEO Joel Gratz and Gratz’s wife, Lauren.
COURTESY OF BRYAN ALLEGRETTO

Snazzy.

Meanwhile, I heard about this guy Joel out in Boulder, Colorado. People were telling us about each other, saying: “You guys are doing the same thing!” He was sleeping on his friend’s couch, running a site called Colorado Powder Forecast. And then there was Evan [Thayer, who would later join the company], in Utah. I think his website was called Wasatch Forecast. 

Great minds!

He actually grew up outside Philly, only about an hour from me. We both were obsessed with storms and snow and moved west to the mountains and started similar websites. We would’ve been best friends as kids! Anyway, Joel called me in 2010 and was like, “Hey. I’m building this site, forecasting skiing in ski states.” And wanted me to join. He knew I had big traffic. He was like, “Let’s do it together, not against each other.” I asked, “What’s the pay?” He said, Zero. Give me your company. 

And you just said: Yeah, sounds good?

I just really trusted him. He’d asked Evan too—but Evan was like, Give you my site and my traffic for free?? No, I built this.

A normal response.

I was the knucklehead that was like, okay. Evan was still single. I already had a wife and two kids. I’d just had my son. I was working two jobs. I was so overwhelmed. So busy with my day job, as an account manager at the Ritz at Northstar. Vail had just bought them and we all thought we were going to lose our jobs. My site was struggling. I was desperate for somebody to do it with. I think I thought it was a good opportunity. I was scared, though. For sure.

That was 15 years ago. How’d OpenSnow work in the old days? 

We were just using our brains. That’s how it started: with us using our brains. Looking at all the weather models—all the data from the government models and airplanes, satellites, balloons. A million places. Building spreadsheets and fixing all the errors in the forecast models. We’d take the data and reconfigure it—adapt it for the mountains. It was all manual for a really long time.

How manual? 

It was old-school. All the resorts had snowfall reports on their sites, and I was the one hand-keying it in: “three to six inches.” That was me on the back end, typing it in every single morning for every single ski resort. It’d take me hours.

And then?

Around 2018, we built our own weather model to do what we were doing. We called it METEOS. It’s an acronym—I can’t even remember what it stood for! METEOS was just us using our brains and our experience to create formulas. It automated everything and allowed us to create a grid across the whole world and forecast for any GPS point. It took all this data, ingested it, fixed some of it, and then spit out a forecast for any location. In the world.

Were you guys making any money? 

It was crap in the beginning. Advertising-based. We stole Eric Strassburger from The Denver Post—he doubled our ad revenue in his first year full-time with us. Still, Google Ads had chopped our ad rates in half; it wasn’t a good long-term strategy to rely just on ads. We had to pivot to plan B so we didn’t go out of business.

Subscriptions.

When all the newspapers started charging to read articles, Joel was like: We are meteorologists writing columns every day. Journalism weather is not sustainable! We need to be a weather site. We need to be a weather app. 

What happened when you moved from ads to subscriptions? 

The money took off. We could quit our day jobs and work full time on OpenSnow. The company exploded. We were like: Are people really gonna pay for this? They did! Although they could still access the majority of the site for free.

At the end of 2021, you put in a paywall?

That’s when we panicked! We’re gonna lose 90% of our customers! But 10% will stay loyal and pay. Since the beginning, there have been only two times our traffic went down: the paywall and covid. Otherwise, every year it’s gone up. People were like, Okay, I can’t live without this.

I admit, I’m one of those people. So is my editor. Any other weather app is useless for skiers.

When it comes to ski towns, everyone uses OpenSnow. When the Tahoe avalanche happened, we were up early on search-and-rescue calls, helping the rescuers with forecasts. We’re now the official lead forecast providers for Ski California. Ski Utah. Head of Forecasting for National Ski Patrol. Professional Ski Instructors of America. US Collegiate Ski & Snowboard Association. Dozens of destinations and ski resorts. Joel doesn’t like to talk about it publicly, but our renewals and retention and open rates blow away the industry standards. 

I bet. OpenSnow is like a benevolent cult. 

People connect with a small company with underground roots. We’re independent. Fourteen full-time, plus seasonal. About half have meteorology backgrounds, from bachelor’s to doctoral degrees. Our very first employee was Sam Collentine, a meteorology student in Boulder, who started as an intern in 2012 and is now our COO and does everything.

Sounds like employees and subscribers sign on and just … stay.

Everyone stays! Our cofounder Andrew Murray, Joel’s friend and OpenSnow’s web designer, left around 2021. But yeah, people feel like they know us. They’ve been reading me in Tahoe with their coffee for 20 years! I get recognized everywhere I go. For example, I broke my binding and went into a ski shop and asked if I could demo. And the guy was like, ARE YOU BA? Just take it! Sounds fun—until you just want to have dinner with your family, or buy a glove. Joel gets the same thing—people make Joel shrines on the slopes that look like Catholic candles.

You guys are like modern-day snow gods. Gods of snow.

People are weird.

How weird?

Someone once sent me a photo, saying: “Look, my friend dressed up as you for Halloween!” People are always inviting me over to dinner, to PlumpJack with Jonny Moseley. I guess they want to hang out with the “Who’s who of Tahoe.” There was an executive from Pixar who had me to his multimillion-dollar home on the west shore of Lake Tahoe. He had a photo of me over the fireplace in the bathroom. I thought: That’s weird, he has a photo of me over the fireplace. What was even weirder, though: It was autographed. I’ve never autographed a photo in my life! This guy just signed it—himself. I didn’t say anything. I just left.

Do you get a lot of hate mail? Mean DMs? 

Thousands. People think I can make it snow. I think they think I’m to blame when it doesn’t. The other day, someone messaged me on Instagram with a picture I’d posted of the high-pressure map over California—somebody had shared it and written “Fuck Bryan Allegretto” over the high pressure.

Hilarious.

People were yelling at me during covid: You’re encouraging people to go out skiing! It wasn’t March 2020, it was January 2022. I’ve since deleted my personal social media. I never wanted to be in the spotlight. That’s the whole reason signing off my forecasts with “BA” became a thing—I didn’t want to use my full name. I just do it because it’s good for the company. Joel realized years ago that people come to us for forecasts—and forecasters. That’s why we still have forecasters. Even though AI can do what we’re doing now.

Is AI doing what you do now? 

We were using METEOS until this season. In December, we launched PEAKS. We built our own machine-learning model. The AI is taking what we were doing—and doing it everywhere, faster. The whole world instantly, in minutes. It can go back and actually ingest decades of government data—estimated weather conditions over the entire US from 1979 to 2021—and correct the errors. 

What makes it so accurate?

Before PEAKS, it wasn’t very specific. The data used to be what Joel calls “blobby”—like giant blobs, just big splotches of color over a mountain range. It’s like, if you take a pen and press into a piece of paper, the ink will spill out. The AI is like if you just tap the paper. A dot versus a blot. Now we can know how much it will snow, say, in the parking lot at Palisades and how much at the summit. It’s less blobby, more rigid and defined. 

Defined how?

All weather models output forecasts on a grid. The gridpoints are essentially averaged data over the grid box. So a model with a 25-kilometer grid resolution averages data over 25 kilometers, or around 16 miles. This is far too large an area, especially in mountainous terrain where a few miles can make a massive difference in experienced conditions. The AI is downscaling the models into smaller and smaller grid boxes. We are able to train a model to transform lower-resolution data from the same period into this high-resolution “ground truth” data. Then the model can generalize this training to global real-time downscaling. PEAKS is learning wind patterns, thermal gradients, terrain, and weather patterns and connecting all these factors to learn how to transition from coarse resolution into high, three-kilometer resolution—leading to more precise forecasts. We’ve basically taught the AI how to forecast like us. Except 50% more accurate. Now, when I wake up at 4 a.m., PEAKS has already done it.
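
To make the downscaling idea concrete, here is a toy sketch in Python. It is not OpenSnow’s PEAKS model; the features, coefficients, and synthetic “ground truth” are all invented for illustration. But it shows the structure BA describes: learn a mapping from coarse-grid forecasts plus terrain features to point-level observations, then apply it to nearby points that share a single coarse grid cell.

```python
# Toy statistical downscaling, NOT OpenSnow's PEAKS model.
# All features and the synthetic "ground truth" are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Training features for each fine-grid point.
coarse_snow = rng.uniform(0, 30, n)     # coarse-model snowfall (inches, ~25 km grid)
elevation = rng.uniform(1500, 3000, n)  # point elevation (meters)
wind = rng.uniform(0, 25, n)            # ridge-top wind (m/s)

# Pretend observations: terrain and wind modulate the coarse forecast.
observed = coarse_snow * (0.5 + elevation / 4000) - 0.3 * wind + rng.normal(0, 1, n)

# Fit a linear downscaling model: observed ~ X @ w.
X = np.column_stack([coarse_snow, elevation, wind, np.ones(n)])
w, *_ = np.linalg.lstsq(X, observed, rcond=None)

# Two points in the SAME coarse grid cell get different local forecasts.
parking_lot = np.array([12.0, 1900.0, 5.0, 1.0])  # base area
summit = np.array([12.0, 2700.0, 18.0, 1.0])      # ridge line
print(f"parking lot: {parking_lot @ w:.1f} in, summit: {summit @ w:.1f} in")
```

A real system would train on decades of reanalysis data with a far richer model, but the shape of the problem (coarse input, high-resolution target) is the same.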

So … then what are you doing at four in the morning?

Oh, I’ll still do the forecasting. I like to double-check it—but I don’t really need to. PEAKS has allowed me to spend more time on writing. Now instead of spending four hours forecasting and then rushing to write it, I’ve been able to make my forecasts more interesting, more entertaining. Yeah, AI could probably write it—but I want to. It’s all about the personal connection.

How did last year’s federal funding cuts for the NWS and NOAA affect your business? Are you guys concerned about that going forward?

We had those discussions when it first happened. In forecasting, you still need humans: to launch the weather balloon, staff the weather stations, collect the initial data. Some people in our office panicked—they had spouses or friends getting laid off. We were wondering if we’d have less data coming in, if it’d make the models less accurate. But the backlash in the weather community was swift. I think they were like, There are important things you can’t cut. It was pretty short-term. Are we worried going forward? No, not as long as the data keeps coming in! We won’t survive without the government publishing data.

What’s next? 

We recently bought a small company called StormNet that tracks severe weather, probability of lightning, hail, tornadoes. We just launched it. Used to be like, “The storm is an hour away.” Now we can say, “In seven days there might be a tornado here.” And next winter, we’re working on a feature that can help forecast avalanches using AI. Right now, it’s still manual—people going out testing the snow layers. Forecasting is limited. This wouldn’t replace the avalanche centers, but it will be able to look at everything, including slope angle and previous weather and current conditions, and forecast further out, give people more advance—and location-specific—warning. Help alert the public sooner.

Help save lives. 

I talked to one of the guys who left the Frog Lake huts on Sunday, before the storm. Before the group that was caught in the Tahoe avalanche. He told me: “People are always like, Oh, it’s never as bad as they say. But I read OpenSnow. I could tell by the language you were using, that we should get the heck out of there. I wanted no part of that.” We don’t hype storms. Or sugarcoat. Our only incentive is to be accurate.

True that it was the biggest storm in Tahoe in four decades?

In 1982, we got 118 inches over five days, and this one was 111 inches—two storms of similar size that created the same level of tragedy. It’s too much, too fast. It was snowing three to four inches an hour. That was the fastest we’ve seen. I don’t know what’s the bigger story—the fact that we’ve had the biggest storm in over four decades or the fact that all that snow disappeared in five days.

Do you worry about the future of OpenSnow given, you know, the future of snow?

We’ve had the second-warmest March in at least 45 years. We’re just getting these wild swings now. The seasonal snow averages are almost the same, but we’re seeing more variability than we did in the 1980s and ’90s. We’re either getting really cold and really warm, or really dry and really wet.

Bad years can affect our business, for sure. It’s certainly affecting the industry—I know Vail and Alterra took big hits this year. Usually we’re okay, because if it’s dry in Tahoe, it’s snowing in Utah or Colorado. Those are our three biggest markets. I don’t recall a season where the entire West was in the same boat. It’s been the worst year in the West. Yet our traffic keeps going up. Everything is up. The East Coast had a good year; so did Japan and BC. We’re slowly expanding in those places. This also happens to be the first year in 15 that we’ve done marketing. Marketing works!

Amazing.

Joel and I have had the same conversation for years—we just had it again two weeks ago: “Can you believe what we’ve done? This was never the goal.” I’m still blown away daily. We’ve never borrowed from investors. No series A, B, C. We’ve gotten offers to sell, but no. We’re still having too much fun. All I know is: Joel and I didn’t come from money. We’ve never chased money or fame, and got both. I think it’s because we never chased them. We’ve always chased the joy of skiing and forecasting powder, and doing that for other people. We were just trying to create something that made us happy.

Are high gas prices good news for EVs? It’s complicated.

I live in a dense city with plentiful public transportation options and limited parking, so I don’t own a car. I’m often utterly clueless about the current price of gasoline.

But as the conflict in Iran has escalated, fossil-fuel prices have been on a roller-coaster, and I’ve started paying attention. In the US, average gas prices are $3.98 a gallon as of March 25, up from under $3 before the war started.

Online, some folks—including EV owners—have come close to cheerleading this volatility; some of the social media posts and op-eds have read as nearly gleeful. The subtext (or even the text) is “I told you so.”

Don’t get me wrong—this could be an opportunity for EVs to make headway around the world. But there are plenty of reasons that even the carless among us should be concerned about a sustained rise in fossil-fuel prices.

Historically, this is exactly the sort of moment that’s pushed people to reevaluate how they get around. During the oil crisis of the 1970s, Americans switched to smaller, more efficient cars in droves. It was a major opportunity for Japanese automakers, whose vehicles tended to fit this mold better than those produced by their US counterparts.

We’re already seeing early signs that people are interested in going electric. One US-based online car marketplace said that search traffic for EVs was up 20% following the initial attack on Iran. For more popular models like the Tesla Model Y, traffic nearly doubled.

And the interest is global. One car dealership outside London said it’s struggling to keep up with demand and is sending staff to buy more EVs at auction, according to Reuters. Another in Manila told Bloomberg that it got a month’s worth of orders in two weeks.

The timing here is really interesting in the US in particular, because we’re about to see a wave of more affordable used EVs hit the market. Three years ago, a leasing boom started with the Inflation Reduction Act, which included incentives for EVs, including leases. About 300,000 such leases are set to expire this year, and many of those vehicles could come up for sale, increasing the available supply of affordable used EVs.

The interest is there, but what would it really take for more drivers to make the switch?

Nice, round numbers do tend to get people’s attention. Some point to $4 per gallon (which the national average is quite close to right now). At that price, the total cost of ownership for an EV is comfortably lower than the cost for a gas-powered car, even with higher electricity prices, according to data from the energy consultancy BloombergNEF.
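
To make the fuel-side arithmetic concrete, here is a back-of-envelope comparison. Every number below is an illustrative assumption of mine, not a figure from the BloombergNEF analysis, which covers total ownership cost (purchase price, depreciation, insurance, maintenance) rather than fuel alone.

```python
# Back-of-envelope annual fuel cost: gas car vs. EV (illustrative numbers only).
miles_per_year = 12_000
gas_price = 4.00     # $/gallon, near the national average cited above
mpg = 30             # assumed gas-car fuel economy
kwh_per_mile = 0.28  # assumed EV efficiency
electricity = 0.17   # $/kWh, assumed residential rate

gas_cost = miles_per_year / mpg * gas_price            # 400 gallons -> $1,600
ev_cost = miles_per_year * kwh_per_mile * electricity  # 3,360 kWh -> $571
print(f"gas: ${gas_cost:,.0f}/yr  EV: ${ev_cost:,.0f}/yr")
```

Under these assumptions the EV saves roughly $1,000 a year on fuel; the full picture depends on everything this sketch leaves out.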

Then again, maybe that won’t quite do the trick: One survey from Cox Automotive found that most US consumers would consider switching to an EV or hybrid if gas prices hit $6 per gallon.

But this is also the second big incident of fossil-fuel volatility in the last five years, which could make consumers more ready to make the switch, as Elaine Buckberg, a senior fellow at Harvard, told Bloomberg. (The first was in the summer of 2022 when Russia invaded Ukraine.)

I’m a climate and energy reporter, and I care about addressing climate change. So I’m always happy to hear about people shifting to EVs or any other option that helps cut down on greenhouse-gas emissions.

But one aspect that I think is getting lost here is that sustained high fossil-fuel prices will be bad for even those of us who are untethered from the burdens of vehicle ownership. Fuel cost makes up between 50% and 60% of the cost of shipping goods overseas. Fertilizer production today requires natural gas, which has gotten significantly more expensive since the war began, particularly in Europe.

Jet fuel prices have basically doubled in the last month, according to the International Air Transport Association. Since those prices account for something like a quarter of an airline’s operating cost, that could soon make air travel—and anything that’s shipped by plane—more expensive.

And if all this adds up to an economic downturn, it’s bad for big projects that need financing (even wind and solar farms) and for people who want to borrow money to buy a home or a car (including an EV).

If you’re in the market for a car, maybe this uncertainty is what you needed to consider electric. But until we’re able to truly decarbonize not only our transportation but the rest of our economy, even this carless reporter is going to be worried about high gas prices.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The Download: a battery pivot to AI, and rewriting math

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Why this battery company is pivoting to AI 

Qichao Hu doesn’t mince words about the state of the battery industry. “Almost every Western battery company has either died or is going to die. It’s kind of the reality,” he says.  

Hu is the CEO of SES AI, a Massachusetts-based battery company. It previously developed advanced lithium batteries for major industries, but is now shifting to AI materials discovery. Read our story to find out why.  

—Casey Crownhart 

This startup wants to change how mathematicians do math 

Axiom Math, a California startup, has released a free AI tool with a big ambition: discovering mathematical patterns that could unlock solutions to long-standing problems. 

Most of the successes with AI tools have involved finding solutions to existing problems. But that’s not all they could do. There are lots of problems in math that require new ideas nobody has ever had, which could come from spotting patterns that have never been spotted before.  

Axiom Math’s new tool aims to find these hidden links. Read the full story to discover their plans—and how AI in general could change mathematics.

—Will Douglas Heaven 

Are high gas prices good news for EVs? It’s complicated. 

As the conflict in Iran has escalated, fossil-fuel prices have been on a roller-coaster—and some EV owners are celebrating.  

They believe the volatility will create an opportunity for electric vehicles to make headway. But even the carless among us should be concerned about a sustained rise in fossil-fuel prices.  

To find out why, read the full story.

—Casey Crownhart 

This article is from The Spark, our weekly climate newsletter. Sign up to receive it in your inbox every Wednesday. 

The must-reads 

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 

1 Meta and YouTube have been fined for designing addictive products 
They must pay damages of $6 million for harming young people. (Guardian)
+ The verdicts will reshape legal protections for Big Tech. (WSJ $)
+ They could also ripple through social media markets worldwide. (Rest of World)
+ Juries have started taking the lead in the push for child online safety. (NYT)

2 SpaceX aims to file for IPO as soon as this week 
It’s hoping to raise more than $75 billion. (The Information)
+ Rocket stocks soared on the report. (BBC)
+ But rivals are challenging SpaceX’s dominance. (MIT Technology Review)

3 A new AI safety bill would halt data center construction 
It was introduced by Bernie Sanders. (Wired)
+ Nobody wants a data center in their backyard. (MIT Technology Review)
+ One solution: launch them into space. (MIT Technology Review)  

4 Meta has laid off 700 employees 
After raising compensation for top earners. (NYT $) 

5 Elon Musk wants a Delaware judge to recuse herself over an emoji 
She liked a LinkedIn post criticizing him. (CNBC)
+ The court had ruled that Musk misled investors during the Twitter purchase. (Reuters)

6 Reddit will require “fishy” accounts to verify that a human runs them 
The process aims to combat the deluge of bots. (Ars Technica)

7 Uber and Pony AI aim to launch Europe’s first robotaxi service in Croatia 
Pony AI is also running trials in Luxembourg, while Uber is testing in London. (The Verge)

8 Google says quantum computers could break all cryptographic security by 2029 
It’s set a timeline to secure the quantum era. (Gizmodo)
+ Quantum computers could soon solve health care problems. (MIT Technology Review)

9 New research shows cloning doesn’t produce perfect copies 
Clones have lots of extra, potentially dangerous mutations. (New Scientist)

10 The landmark AI Scientist has just completed peer review  
It’s billed as the first AI tool built to fully automate the scientific process. (Nature)

Quote of the day 

“For years, social media companies have profited from targeting children while concealing their addictive and dangerous design features. Today’s verdict is a referendum—from a jury, to an entire industry.” 

—Attorney Rachel Lanier offers her view on yesterday’s fines for Meta and YouTube, the Washington Post reports.  

One More Thing 


Longevity enthusiasts want to create their own independent state. They’re eyeing Rhode Island.  

It’s incredibly difficult and expensive to study innovative ways to slow or reverse aging. In response, longevity enthusiasts have devised an ambitious plan: establish an independent state for life-extension experiments.  

They envision a jurisdiction that slashes red tape, encourages self-experimentation with unproven treatments, and eliminates laws that limit how companies develop drugs.  

Exactly where their longevity state might emerge is still being worked out—but one appealing location is Rhode Island. Read the full story to learn more about the plans.  

—Jessica Hamzelou 

We can still have nice things 

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.) 
 
+ These gleaming photos of ancient insects in amber are time capsules of the dinosaur age. 
+ Paint with pixels across a world map at this unique digital canvas.
+ Hands have a new shield against hammers: a nail holder that protects your fingers. 
+ This new audio player uses cartridges to give digital music a soul. 

Framework for Quality AI Content Marketing

AI-generated marketing content is only effective when it attracts organic traffic from search engines, LLMs, and Google Discover.

Content marketing exists to attract, engage, and retain customers. For ecommerce marketers, attraction is often the primary role.

Historically, that meant search. Articles ranked, generated visits, and fed the top of the funnel. Plain and simple.

Retention matters too, but content marketing works best when it acquires prospects.

AI Gives and Takes

The advent of ubiquitous AI is a double-edged sword for content marketers.

On the one hand, AI makes producing content cheap, at least in a utilitarian sense. On the other, AI has flooded the internet with relatively low-value content and changed the way consumers search.

Moreover, in 2026:

  • AI has increased the percentage of zero-click search results.
  • Many customers begin and end searches with AI chat.
  • AI-generated articles increase competition for organic traffic.
  • Feeds such as Google Discover and Perplexity Discover are traffic generators.

Consider Google’s February 2026 algorithm update, which focused on Google Discover. According to DiscoverSnoop, a Google Discover-focused research firm, several large websites lost significant Discover exposure after the rollout.

  • “The biggest loser appears to be Yahoo, which lost nearly 50% of its content, with its audience plunging by 62%.”
  • “Go.com, which was redirecting to ABCNews.com, totally disappeared from the ranking, dropping to zero, and ABCNews.com was not able to replace it.”
  • “Among mainstream publishers, the Fox franchises (News, Business, Weather) experienced a visibility drop of more than 40%.”

Algorithm updates, zero-click search results, and changes in consumer behavior create a vicious cycle of AI-generated content.

It works something like this.

As organic traffic declines across search, LLMs, and feeds, the relative cost of content rises. To offset that cost, marketers turn to AI. But more AI-generated content increases competition, worsening performance.

Competing articles from the same AI models and prompts are similar in tone and substance. It’s “AI slop” applied to content marketing.

Quality Is the Solution

A year ago, AI offered a speed or cost advantage. That advantage has largely disappeared.

The differentiator now is execution. Marketers must produce AI-assisted content that is structured, validated, and refined. In practical terms, that means improving quality.

Marketers first need to overcome a bias. We must assume AI-generated content can be at least as good as that of humans. To this end, consider a recent quiz from The New York Times comparing human-written text to an AI-generated rewrite. Thus far, roughly half of Times readers preferred the AI-generated versions.

Second, we need to believe that AI-assisted content can be optimized and systematized.

12-Step Framework

The way to improve AI-generated content is through better processes, not prompts.

A practical approach is to treat content generation in steps. Each adds structure, reduces risk, and improves quality. Human editors can participate at any stage. But in general, these are steps the AI can take for content automation; a minimal pipeline sketch follows the list below.

A step-by-step framework can improve the quality of AI-generated content.

  1. Idea. Pick a specific topic and goal for the article.
  2. Sources and brief. Gather strong source material and set the rules for format, tone, and style.
  3. Validate. Check the inputs. Are the sources credible?
  4. Summarize. Pull the useful material from each source. Focus on relevant facts, data, and claims.
  5. Outline. Prompt the AI to provide a clear structure for how the article opens, progresses, and ends.
  6. Draft. Prompt the AI to generate the full article from the outline and summaries.
  7. Edit. Ask the AI to critique the draft against the brief, summary, and outline.
  8. Plagiarism. This is often overlooked. Have the AI compare the draft against sources. Consider a dedicated plagiarism checker, such as Grammarly’s API.
  9. No AI-speak. Ensure the output reads naturally.
  10. Optimize. Prompt the AI to optimize the article for search engines, answer engines, and Google Discover. Consider using the Discover click-through predictor.
  11. Grade. Prompt the AI to grade the article against steps 7-10, and pass the scores to a human reviewer.
  12. Refresh trigger. Have the AI set a review date for updates.
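
To show how these steps might chain together in practice, here is a minimal Python sketch. The generate() stub, the prompts, and every name are illustrative assumptions, not a product; wire generate() to whatever LLM API you use, and add the validation, plagiarism, and refresh steps in the same pattern.

```python
# Minimal sketch of the framework as a prompt chain (illustrative only).
def generate(prompt: str) -> str:
    # Stub: replace with a call to your LLM provider of choice.
    return f"[LLM output for: {prompt[:50]}...]"

def produce_article(topic: str, sources: list[str]) -> dict:
    summaries = [generate(f"Summarize facts, data, and claims from: {s}") for s in sources]  # step 4
    outline = generate(f"Outline an article on '{topic}' using: {summaries}")                # step 5
    draft = generate(f"Write the full article from this outline: {outline}")                 # step 6
    critique = generate(f"Critique this draft against the brief and outline: {draft}")       # step 7
    revised = generate(f"Revise so it reads naturally, applying: {critique}\n\n{draft}")     # steps 8-9
    optimized = generate(f"Optimize for search, answer engines, and Discover: {revised}")    # step 10
    grade = generate(f"Grade this article against the brief, 1-10: {optimized}")             # step 11
    return {"article": optimized, "grade": grade}  # pass the grade to a human reviewer

print(produce_article("choosing winter tires", ["https://example.com/source"])["grade"])
```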

AI has lowered the cost of producing content. It has not lowered the standard required to compete. In fact, the opposite is true.

The marketers who win in 2026 will generate the best content, not the most.

Google Takes Search Live Global With Gemini 3.1 Flash Live via @sejournal, @MattGSouthern

Google is expanding Search Live to more than 200 countries and territories, bringing voice and camera conversations to AI Mode globally.

The expansion is powered by Gemini 3.1 Flash Live, a new audio model that Google calls its highest-quality yet. It’s inherently multilingual, so you can speak with Search in your preferred language without switching settings.

Search Live was previously limited to the U.S.

What’s Changing

Search Live lets you talk to Google Search inside AI Mode instead of typing a query. You ask a question out loud and get an audio response, then continue with follow-ups. Web links appear on screen alongside the voice responses.

The feature also supports camera input. Point your phone at a product label or a piece of equipment and ask Search about what it sees. Google Lens users can tap a “Live” option to start a conversation about what’s in the camera view.

With today’s expansion, both voice and camera capabilities are available in every market where AI Mode is active.

The New Model

Gemini 3.1 Flash Live replaces the previous audio model powering Search Live. Google published benchmark results alongside the announcement.

Gemini Live can now follow a conversation thread for twice as long as the previous model, according to Google, though the company didn’t specify what the previous limit was.

Beyond Search, 3.1 Flash Live is available to developers in preview through the Gemini Live API in Google AI Studio.

Why This Matters

Search Live turns search into a spoken conversation with camera input. Until now, the feature was limited to U.S. users. Today’s expansion makes it available in the markets where AI Mode is live, across more than 200 countries and territories.

There’s no public data yet on how many people use Search Live or how it affects query volume. But Google has been building toward this for the past year. The company launched Search Live in June, added video input in July, and upgraded to Gemini 2.5 Flash Native Audio in December. Each update expanded what the feature can do and who can use it.

Looking Ahead

Google didn’t announce additional Search Live features alongside this expansion. The focus is on geographic reach and the underlying model upgrade.

How the model performs in production across different languages and markets will be worth watching as adoption data becomes available.

Google Adds New Performance Max Controls And Reporting Features via @sejournal, @brookeosmundson

Google has announced a new set of updates to its Performance Max campaign type, focused on two areas advertisers have consistently asked for: more control over who campaigns prioritize, and better visibility into where budget is going.

The updates include first-party audience exclusions, budget reporting, expanded audience reporting, and placement reporting segmented by network.

Read on for more updates and what this means for your campaigns.

New First-Party Audience Exclusions

The first update Google announced was framed around more precise steering for your target audience.

Advertisers can now exclude specific first-party customer lists from Performance Max campaigns.

If your goal is acquiring net-new customers, excluding existing customer lists can help reduce wasted spend on people who may have converted anyway. It also creates a cleaner setup for evaluating whether Performance Max is actually contributing incremental value.

That said, this still depends heavily on how clean and current your first-party data is. If your customer match lists are outdated, incomplete, or poorly segmented, this feature won’t solve the problem by itself.

It also does not turn Performance Max into a precision audience campaign. Advertisers should still think of this as directional steering, not rigid targeting.

New Reporting Features Focused On Budget And Audience Visibility

The second part of Google’s update covers several reporting levers.

The first is the budget report. Advertisers can now find it directly within a Performance Max campaign to help forecast end-of-month spend. It can also model how changing the daily budget impacts potential performance.

Google is also expanding audience reporting with more detailed demographic and segment-level performance views, including breakdowns such as age range and gender.


That should give advertisers more context around who the system is actually reaching, rather than just what overall campaign performance looks like.

The last reporting update announced is around network reports. Advertisers can now segment placement reports by network to show:

  • Where ads have served
  • More visibility to ensure brand safety across all Google-owned channels

The placement report lives under the “When and where ads showed” tab.

Why This Matters For Advertisers

Google has continued to deliver on its promise to provide more transparency to advertisers in these automated campaign types. It’s continuing to make Performance Max more useful for marketers trying to manage it more intentionally.

The first-party audience exclusion update gives advertisers a more practical way to support acquisition-focused strategies. Brands trying to reduce overlap between prospecting and retention efforts may find this especially helpful.

The reporting updates will likely have broader day-to-day value.

Budget reporting should make it easier to monitor pacing and explain monthly spend behavior, especially for teams working within strict budget expectations or reporting back to stakeholders.

Expanded audience reporting gives advertisers more context around who campaigns are actually reaching. That matters when conversion volume alone doesn’t tell the full story.

Network segmentation in placement reporting also adds a layer of visibility many advertisers have wanted for a long time, particularly those keeping a close eye on brand safety and placement quality.

Taken together, these updates give advertisers more visibility into how Performance Max is spending and who it’s reaching.

Looking Ahead

This rollout is more useful than groundbreaking, but that does not make it insignificant.

Google continues to fill in some of the operational gaps that have made Performance Max harder to manage than many advertisers would like.

For teams already using it, these updates should make campaign oversight a little easier.

For teams that have been frustrated by limited visibility, this is another step toward making Performance Max more workable in real account management.

When The Training Data Cutoff Becomes A Ranking Factor via @sejournal, @DuaneForrester

Every AI system serving answers today operates with two fundamentally different memory architectures, and the boundary between them runs along a single invisible line: the training data cutoff. Content published before that line is baked into the model’s weights, always accessible, confident, and unreferenced. Content published after that line only surfaces when the model retrieves it in real time, which introduces a different retrieval path, a different confidence profile, and, critically, different presentation behavior in synthesized answers. If you’re optimizing for brand visibility in AI-generated search, this distinction is not a footnote. It is the organizing principle.

The mechanism most practitioners are still treating as one thing is actually two.

The shorthand “AI doesn’t know things after its cutoff date” is technically accurate but strategically incomplete. What it obscures is that post-cutoff and pre-cutoff content don’t just occupy different time periods. They occupy different systems inside the same model.

Parametric memory is what the model learned during training: facts, relationships, concepts, and entities whose representations are encoded directly into the model’s weights. When you ask a model something within its parametric knowledge, it doesn’t look anything up. It synthesizes from internalized representations, which is why responses from parametric knowledge tend to be fluent, fast, and stated without qualification. The model isn’t consulting a source. It’s recalling.

Retrieval-augmented memory, by contrast, is what the model fetches at inference time. When a query either touches post-cutoff territory or triggers the model’s search function, a retriever collects documents from a live index, compresses the most relevant passages, and injects them into the context window alongside the original prompt. The model then synthesizes from those passages. Think of it this way: Parametric memory is everything you learned in school, internalized and available instantly. Retrieval is picking up your phone to look something up. Both produce answers, but the confidence signature and attribution behavior are structurally different, and that difference matters to how your brand content gets presented.
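
As a toy illustration of that retrieval path, the sketch below assembles a prompt from the best-matching documents. Real systems use vector embeddings and a live index; this one scores documents by bare word overlap, and the corpus and names are invented, purely to show where retrieved passages sit relative to the question.

```python
# Toy retrieval-augmented prompt assembly (word overlap stands in for embeddings).
def score(query: str, doc: str) -> int:
    return len(set(query.lower().split()) & set(doc.lower().split()))

def build_prompt(query: str, corpus: list[str], k: int = 2) -> str:
    top = sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]
    context = "\n".join(f"- {d}" for d in top)
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

corpus = [
    "Acme launched a new pricing tier in March.",  # post-cutoff fact
    "Acme was founded in 2004 in Denver.",         # likely parametric anyway
    "Competitor Beta discontinued its free plan.",
]
print(build_prompt("What is Acme's new pricing tier?", corpus))
```

The model never learns these passages; they ride along in the context window for one response, which is why retrieval-layer answers arrive hedged and attributed while parametric answers do not.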

The Platforms Are Not Behaving The Same Way

One reason this dynamic gets underappreciated is that the five platforms your audience actually uses have meaningfully different cutoff dates and retrieval architectures, which means the practical implications vary by platform.

ChatGPT’s flagship GPT-5 series carries a knowledge cutoff of August 2025, but the older GPT-4o model, which remains widely deployed via API integrations and older interfaces, cuts off at October 2023. Web search is available in the ChatGPT interface but is selectively triggered rather than on by default for every query, meaning a substantial portion of ChatGPT responses still draw from parametric memory.

Gemini 3 and 3.1 carry a January 2025 parametric cutoff, but Google’s Search Grounding tool is available as a supplementary mechanism that can be activated contextually. Gemini’s deep integration with Google infrastructure gives it a more natural path to real-time retrieval than models from other providers, but it does not automatically retrieve for every query. Claude (this current Sonnet 4.6 generation) holds a reliable knowledge cutoff of August 2025 and a broader training data cutoff of January 2026, with web search available as a tool but not automatically deployed on every response.

Microsoft Copilot is unique in that its web grounding capability runs through Bing and is configurable at the enterprise level, meaning it is off by default in US government cloud deployments, leaving those instances fully dependent on parametric memory. Regulated industry users need to make their choice, but the feature exists.

Then there is Perplexity, which operates differently from all of the above. Perplexity is RAG-native by design, running a live retrieval pipeline on essentially every query through a distributed index built on Vespa AI, with real-time web crawling supplemented by external search APIs. For Perplexity, the training cutoff is largely irrelevant to the end user because the system routes around it by default. The practical consequence is that Perplexity citations tend to be current and attributed, while ChatGPT, Gemini, Claude, and Copilot responses vary between confident parametric synthesis and hedged retrieval depending on query type and configuration.

What this means in practice is that your brand visibility strategy cannot treat “AI search” as a monolith. The platform your prospective buyer uses when comparing enterprise software vendors may have a completely different memory architecture than the one your marketing team tested last week.

Why The Cutoff Creates A Structural Confidence Advantage For Older Content

This is the part of the cutoff discussion that gets the least attention, and it has direct implications for how your brand claims land inside synthesized answers.

When a model operates within its parametric knowledge, it does not need to retrieve, attribute, or hedge. It simply answers. The academic literature on dynamic retrieval confirms that models trigger retrieval based on initial confidence in the original question: when parametric confidence is high, retrieval often isn’t triggered at all. When retrieval is triggered, the response mechanics shift. The model must now weave in attributed information from fetched documents, which introduces phrases like “according to a recent report,” “sources indicate,” or “based on search results.” These attribution constructs are not cosmetic. They signal to the reader (and to the response synthesis logic) that the cited claim exists in a different epistemic register than a confident parametric assertion.

The practical example is straightforward. Ask most current AI models what Salesforce’s CRM market position is, and if that information is well-represented in training data, you’ll get a confident, unqualified synthesis. Ask about a product positioning shift from six months ago, after the cutoff, and you get either a retrieval-dependent answer with caveats and citations or a gap in coverage. Your brand’s foundational narrative, if it exists clearly in parametric memory, presents with the confidence of internalized knowledge. Your recent product news, if it only exists in the retrieval layer, arrives with the hedging language of external evidence. Both appear, but they sound different.

The Strategic Layer: Timing Content For The Cutoff-To-RAG Pipeline

What can practitioners actually do with this? The answer requires rethinking how we talk about content calendaring.

Traditional content calendaring is organized around audience timing, seasonal relevance, and channel cadence. Cutoff-aware content calendaring adds a fourth axis: anticipated model training windows. If you know that major model training runs tend to lag publication by several months to a year, and you know that training data sampling favors well-cited, well-distributed content, then there is a strategic argument for prioritizing the publication and amplification of your most foundational brand claims well in advance of those windows. A capabilities brief, a positioning paper, a definitional piece that establishes your category leadership, these are the kinds of assets that benefit from being embedded in parametric memory rather than living only in the retrieval layer.

The inverse implication is equally important. Time-sensitive content such as product updates, event coverage, pricing announcements, and campaign materials is inherently post-cutoff territory for any model trained before publication. That content must succeed in the retrieval layer, which means it needs to be indexed, cited, and structured for chunk-level retrieval rather than optimized for the parametric embedding that foundational content targets. These are different content jobs requiring different distribution strategies, and treating them the same is one of the more common structural errors in current AI visibility practice.

The practical execution of cutoff-aware content calendaring does not require inside knowledge of any model’s training schedule, which is rarely disclosed. What it requires is treating content type as a determinant of content timing: foundational brand positioning gets published and amplified early and consistently, long before you need it in AI answers; time-sensitive content gets optimized for retrieval quality through proper indexing, machine-readable structure, and citation-friendly formatting. Next week’s article addresses that second half in detail.

What ‘Freshness’ Actually Means When Two Memory Systems Are In Play

It is worth addressing directly how this framework differs from Google’s freshness model, because the intuitions built up from fifteen years of SEO practice don’t map cleanly onto AI search behavior.

In Google’s architecture, freshness signals follow a model roughly described as Query Deserves Freshness: for certain query types, recently published or recently updated content receives a ranking boost that causes it to displace older content in results. Fresh content wins, stale content loses, and the implication for practitioners is that regular updates maintain ranking position.

The AI dual-memory model works differently. Pre-cutoff content and post-cutoff content don’t compete directly on a freshness dimension. They coexist in different retrieval layers and can both appear in a single synthesized response. A model answering a question about your product category might draw its foundational description from parametric memory trained on content from two years ago, then supplement it with a retrieved mention of your latest release, all within the same paragraph. The optimization challenge is not to keep one piece of content fresh enough to outrank another. It is to ensure that what lives in parametric memory says what you want it to say, and that what lives in the retrieval layer is structured to be found, parsed, and attributed accurately.

The implications for content update strategy also diverge. In traditional SEO, updating a page often signals freshness and can improve rankings. In AI retrieval, updating a page changes what gets indexed in the retrieval layer but does nothing to update what’s already embedded in parametric memory. The only mechanism that changes parametric memory is a new model training run. This means the stakes around getting foundational content right before training windows are considerably higher than the stakes around quarterly page refreshes, and the measurement challenge is different in kind.

The Thread Connecting This To Everything That Follows

This article is a layer added onto the consistency problem described in “The AI Consistency Paradox.” Inconsistency across queries isn’t random noise. A significant portion of it is structurally explained by the dual-memory architecture: the same model asked the same question on different days may draw from parametric memory or trigger retrieval depending on phrasing, context, and platform configuration, producing different confidence signatures and different content. The measurement problem introduced here, which is how do you know which memory layer your brand content is living in, is precisely what cutoff-aware content calendaring is designed to address at the strategic level and what the next article will address at the technical level.

The next article looks at machine-readable content structure as a mechanism for increasing retrieval quality, which is where parametric timing and retrieval optimization meet.

This post was originally published on Duane Forrester Decodes.



How To Avoid Top Down SEO Systems Failures With The Visibility Governance Maturity Model via @sejournal, @theshelleywalsh

Most SEO failures aren’t caused by bad SEOs. They’re caused by organizations that don’t have the systems to support them.

That’s the argument Ash Nallawalla has been building across five books and over 24 years of enterprise SEO experience in Australia. As a visibility governance consultant based in Melbourne, Ash has worked in-house for some of Australia’s biggest brands, and seen firsthand what happens when no one above the SEO team understands what they do or why it matters.

On IMHO, I spoke with Ash about why he believes visibility needs to be governed at board level, how his maturity model works, and why the rise of AI-mediated discovery makes this more urgent than ever.

“Governance is not a constraint on speed. However, the absence of governance is.”

When No One Owns It, Everything Breaks

Most SEO failures are structural, which means the team didn’t fail; the system did. And the damage can be disproportionate to the cause: a governance gap of weeks can create months of recovery. Governance is not a constraint on speed, but the absence of governance is.

Ash shared an example that illustrates just how catastrophic a governance gap can be.

At one organization, he discovered in Google Search Console 22 million pages as “currently not indexed.” When Australia only has 25 million residents, he knew something has seriously gone wrong.

This was down to someone internally in the past who had decided that creating a page for every combination of facet would be a good idea.

“There were 10 quintillion pages. And if you’ve not heard that number before, it is one followed by 18 zeros,” Ash explained. “We calculated that if Googlebot could read a thousand URLs a second, it would take 310 billion years to crawl all of them.”

Despite this, the site was still ranking well and receiving 5 million Googlebot visits per day. The problem was invisible to anyone above the SEO or product manager level.

“That place didn’t have governance because no one above the SEO level or the product manager level realized the problem. They just knew someone was doing SEO and yes, we’re getting lots of traffic.”

This kind of structural failure is what drove Ash to write his first book, “Accidental SEO Manager,” in 2022. As he put it, “In reality most people come into SEO with no background and that applies to the managers who are looking after enterprise SEO.”

A Maturity Model For Visibility Governance

Ash has since developed what he calls the Visibility Governance Maturity Model (VGMM), borrowing from the Carnegie Mellon capability maturity framework used in software development. It maps governance across seven domains: SEO (including local and international), content, website performance, accessibility, and AI governance. Scores roll up into a percentage, which corresponds to five maturity levels.

“The C-suite gets to know that our visibility governance is at 80% or it’s at 20% or 30% whatever it is, and that corresponds to five levels.”

“Some of these questions are single points of failure. And if you said ‘not in place’ for any of them, it doesn’t matter what your real score is, you are capped at level two,” Ash explained.

A single point of failure (SPOF) might be something as fundamental as whether anyone is responsible for robots.txt. In some companies, Ash noted, they don’t even know what robots.txt is.
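
To make the capping rule concrete, here is a minimal sketch of how a maturity score with SPOF capping could work. The domain names, weights, and checks are hypothetical; the VGMM’s actual rubric is Ash’s own and isn’t reproduced here.

```python
# Hypothetical maturity scoring with single-point-of-failure (SPOF) capping.
# Domains, scores, and checks are illustrative, not the actual VGMM rubric.

SPOF_CAP = 2  # a missing SPOF control caps the maturity level at two

def maturity(domain_scores: dict[str, float], spof_controls: dict[str, bool]):
    """Return (percentage, level) from per-domain scores (0-100) and SPOF checks."""
    pct = sum(domain_scores.values()) / len(domain_scores)
    level = min(5, int(pct // 20) + 1)   # 0-19% -> level 1, ..., 80%+ -> level 5
    if not all(spof_controls.values()):  # e.g., no one is responsible for robots.txt
        level = min(level, SPOF_CAP)
    return pct, level

scores = {"seo": 85, "content": 70, "performance": 90,
          "accessibility": 60, "ai_governance": 40}
spofs = {"robots_txt_owner": False, "index_monitoring": True}

print(maturity(scores, spofs))  # (69.0, 2): a decent raw score, capped at level 2
```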

Selling Governance To Skeptics

When boards push back against the need for governance, Ash uses three arguments.

First, the system test: “If things work wonderfully this month, are we guaranteed that next month and the month after that things will work wonderfully? And if not, then there is a problem that we need to investigate.”

Second, the rework cost. Fixing a visibility failure after the fact is far more expensive than preventing it, especially when the failure involves AI systems.

“If suddenly ChatGPT stops recommending your brand, you may not realize it. Your traffic is up. Your rankings are where they were. That’s not effective, but your competitors are doing better than you.”

And third, for the skeptics who worry governance will slow things down: “You will move faster with governance than without it because you might have these big problems and it may take you an unknown amount of time to fix them.”

What To Tell A Board That’s Never Heard Of Visibility Governance

When pitching to a board for the first time, Ash recommends leading with money, then reframing SEO as infrastructure.

“Organic search visibility, which is the traditional SEO, is infrastructure. It’s not just a marketing exercise. It’s a capital asset with a yield.”

He frames AI-mediated discovery as a new category of risk, something boards are already familiar with in other contexts. Brand visibility can erode silently without any alerts firing, and traditional controls aren’t detecting it.

“If their paid costs are slowly creeping up, that’s not always because the search engine is charging more. It’s also because they’re having to advertise more. And that’s one of the early hints that there could be an external system that is brewing, and it’s taking customers away, and that’s the AI-mediated search that their potential customers are beginning to use, and they’re being led in other directions.

“So the second thing that I say to them is that the risk profile of visibility has changed in the last two years, and your traditional controls are not detecting it.”

Ash shared a real example where his CIO once asked why Bing Chat was recommending competitors but not their own brand. The cause turned out to be a blocked Common Crawl bot (CCBot), which Bing Chat had relied on during its learning phase. “We unblocked CCBot, and within a few months, it started recommending our brand.”
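
A block like that is easy to check for. Here is a minimal sketch using Python’s standard-library robots.txt parser (the domain is a placeholder, and the list of user agents is just a sample):

```python
# Check which AI/search crawlers a site's robots.txt blocks.
# Uses only the standard library; example.com is a placeholder domain.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser("https://www.example.com/robots.txt")
parser.read()  # fetch and parse the live robots.txt

for agent in ("CCBot", "GPTBot", "Googlebot"):
    allowed = parser.can_fetch(agent, "https://www.example.com/")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```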

There’s also a reputational dimension. If customers are leaving bad reviews on platforms the company doesn’t monitor, large language models are learning from that sentiment, and quietly dropping the brand from their recommendations.

“When you share responsibility without ownership, then governance will fail.”

Ash recommends boards ask four questions:

  • Who owns accountability for visibility performance at a strategic level?
  • Is that person senior enough to influence things?
  • Is visibility reporting reaching the board in a way that distinguishes between performing well today and being structurally sound tomorrow?
  • Are we treating AI-mediated visibility as a governance matter, or as a technology novelty someone in marketing is keeping an eye on?

The Leadership Test

Ash closed with what he calls the leadership test, a challenge to any organization that relies on individual heroics rather than systems.

“If your SEO depends on individuals pushing uphill against the system, then gradually their capability will vanish when they leave.”

He advocates for internal wikis, documented learnings, and hiring for capability rather than cultural fit. The goal is to reduce dependence on individuals and build structures that survive personnel changes.

“I’m saying to boards, put visibility on the agenda at every meeting, even if it’s a one sentence from the responsible person, ‘visibility is fine’ or whatever they want to report, but it reminds the board at every meeting that SEO and now external visibility are both very important infrastructure matters.”

Visibility Governance Isn’t Just For Enterprise

While governance is most obviously an enterprise concern, the principles apply broadly. Smaller companies are just as vulnerable to silent visibility erosion, perhaps more so, because they have fewer resources to detect or recover from it.

As AI systems reshape how brands get discovered, the organizations that treat visibility as a governance matter rather than a marketing task are the ones most likely to survive the shift.

Watch the full interview with Ash Nallawalla here:

Thank you to Ash Nallawalla for offering his insights and being my guest on IMHO. You can read more about the Visibility Governance Maturity Model in the Managing SEO series of books.

This post was originally published on Shelley Edits.


Featured Image: Shelley Walsh/Search Engine Journal

Are We Due Another Florida-Style Update? via @sejournal, @TaylorDanRW

Editor’s note: this article was written a few days before the core update that started to roll out on March 24.

Updates like Florida, Allegra, and Brandy were major turning points in search because they fundamentally reshaped how websites were ranked and how SEO was practiced.

These updates caused sudden and dramatic shifts where rankings dropped overnight, entire categories of websites lost visibility, and tactics that once delivered consistent performance stopped working almost immediately.

A familiar question is now starting to emerge as AI-generated content increases and large volumes of low-value pages begin to fill the web: could it happen again? The scale and speed of content production echo the build-up that came before earlier algorithmic resets.

The systems that power search have evolved, yet the pressures acting on them are beginning to look very similar. A repeat in the same form is unlikely, but the conditions that created those updates are returning, and a comparable reset remains a realistic possibility if those conditions continue to worsen.

Scaled Low-Value Content Is Worse Than Ever

The underlying problem of low-value content at scale is returning, driven largely by the capabilities of AI. The cost and effort required to produce content have dropped significantly, which allows pages to be created faster and in greater volume than ever before. This has led to rapid expansion across many areas of search, particularly in informational queries where barriers to entry are relatively lower.

The more prominent issue is the level of similarity across that content.

Much of what is produced follows the same structure, covers the same points, and reaches similar conclusions. The result is content that is readable and technically correct but lacks depth, originality, and meaningful differentiation: the core elements that make content useful and valuable, and that give it longevity in Google’s serving index.

There are parallels to the content farm era that Panda addressed, where the problem was not just the number of pages but the fact that those pages were largely interchangeable. The current wave of AI content reflects the same issue at a much larger scale and with a higher baseline level of quality, which makes it both more effective and harder to filter.

The Rolling Correction With Real-Time Updates

Google is already responding to these challenges through its existing systems, which work together to continuously evaluate and adjust content visibility. The Helpful Content System assesses quality across entire sites, SpamBrain identifies patterns that indicate low-value or manipulative behavior, and core updates refine rankings across the index.

These systems create a rolling correction where change is constant rather than concentrated in a single event. The March 2024 core update demonstrates this approach because it targeted low-quality and scaled content without creating a clear break. Some sites lost visibility, some improved, and many experienced mixed results over time.

This reflects a deliberate shift in how quality is managed because the goal is to maintain balance continuously rather than reset the system in one moment. That approach depends on the system keeping pace with the scale of the problem it is trying to manage.

Continuous Systems Aren’t Always Enough

The issue is not only that more content is being produced, but that it is being produced at a speed that may outpace the system’s ability to fully evaluate it. A gap can form between content production and content assessment, which allows low-value pages to gain visibility before being properly filtered.

As that gap widens, the quality of search results can decline in subtle but noticeable ways. Users may encounter repetitive or shallow content across similar queries, which erodes trust in the results over time. This does not represent a full breakdown of the system, but it does show increasing pressure. And if users lose trust in the results, they stop coming to Google, which threatens Google’s ability to generate revenue.

The assumption that continuous evaluation can handle unlimited scale is being tested, and the limits of that system are not yet clear.

The Case For Another Florida

The possibility of another large-scale update depends on whether the current system can continue to manage this pressure effectively.

A scenario exists where Google introduces a more aggressive update that recalibrates quality thresholds across the board and reduces the visibility of low-value content more quickly and more broadly. We know that Google trains its quality systems on a subset of content it knows is created to the highest standards (as disclosed at Search Central Live in Bangkok in 2025). The form this would take would differ from Florida, but the impact could feel similar because large numbers of sites could lose visibility in a short period of time.

Such an update would likely follow a period where search results feel consistently weak or repetitive and where users begin to question their reliability. Evidence that existing systems cannot correct the issue quickly enough would increase the likelihood of a more aggressive intervention from Google.

Recalibrating Content As A Tactic

Content strategy has shifted from efficiency to defensibility because the ability to produce content at scale is no longer a meaningful advantage. AI has made content production widely accessible, and this has put pressure on agencies and in-house teams to produce more with the same resources. But measuring success by total content output rather than overall content quality is a trade-off I feel many are sleepwalking into.

Content that performs well now tends to offer something that cannot be easily replicated.

This often includes real experience, a clear and informed perspective, or genuinely useful insight that goes beyond standardized output. Strong alignment with user intent also plays a critical role in maintaining visibility over time.

These principles are not new, but they are enforced more consistently and may be applied more aggressively if the system requires it.

This Is A System Under Pressure

The likelihood of another Florida-style update depends on how well the current system continues to perform under increasing pressure. Google’s approach has shifted toward continuous evaluation, which reduces the need for large and sudden changes under normal conditions.

The conditions that led to past updates are beginning to re-emerge in a different form, driven by the scale of AI-generated content. A more decisive intervention becomes more likely if those conditions continue to build and begin to affect user trust in search results.

The system currently operates through steady and ongoing adjustment, without a clear reset point or a single moment of change. Content is evaluated continuously based on whether it deserves to be indexed and served to users.

History shows that gradual systems can give way to more direct action when pressure builds too much, and if that point is reached again, the response is likely to be a statement move.

Featured Image: hmorena/Shutterstock

Google’s March Spam Update Felt Muted But May Signal Bigger Changes via @sejournal, @martinibuster

Google’s March 2026 Spam Update was welcomed by many in the SEO community who were hoping for relief from listicles, AI content rewriters, and Google’s own AI Overviews that “rehash other people’s content.” The update unexpectedly finished in less than twenty-four hours, to a collective shrug and a yawn. Yet despite its underwhelming nature, the update still yielded a few interesting insights and takeaways.

Hopeful SEOs

Google’s spam announcement was largely welcomed by SEOs hoping that spammy sites positioned above them would lose their rankings, but the muted response spoke to an update that didn’t seem to land where people expected it to.

EmarketerZ expressed the hope that sites struggling under the weight of spammy sites ranking above them might have their comeback moment.

They tweeted:

“Google’s latest spam update might just be the comeback moment publishers have been waiting for—finally a shot at reclaiming the traffic they lost in the last one 🤣”

Over on LinkedIn, Adrian M. responded to Google’s announcement that it was about time, calling out fake engagement tactics as an area they’d like to see cleaned up.

They wrote:

“It was only a matter of time, and it’s exactly what the industry needed. Many SEO agencies have been relying on bot networks and residential proxies to simulate organic engagement and inflate their monthly reports. I’ve recently audited e-commerce servers pushed to the brink of crashing (503 errors) just by these automated, fake “add-to-cart” scripts masquerading as real users. This update will finally clean up the vanity metrics and force the market to return to genuine content marketing and real user acquisition. Excellent move by the Search team!”

Muted Response From Digital Marketers

Many SEOs who have been vocal about spammy GEO tactics and regular old spam jamming up the search results were oddly quiet for the duration of the spam update.

Glenn Gabe had this to say:

“Wait, what? The March 2026 Spam Update has completed rolling out. Damn, that was fast. :)”

The post announcing Google’s spam update on the Google subreddit drew only six responses, four of which were people asking for a link to the official announcement. It’s fair to say the response there was a shrug and a yawn.

The response over on the SEO subreddit was similar, with some of the comments doubting much of anything will change.

One person expressed the hope that this time AI-generated content farms will get wiped out.

They wrote:

“I’m betting on a big hit to AI-generated content farms and those super thin affiliate sites. google’s been hinting at this for a while, feels like it’s finally coming.”

But another Redditor nicknamed mrtornado79 responded with a big nah… and a useful insight.

“It’s been “finally coming” for three years. At this point it’s basically an SEO drinking game — spam update drops, someone says “this is the one that kills AI content farms,” nothing particularly dramatic happens, repeat.

Google called this a “normal spam update.” Not a paradigm shift. Not the AI content apocalypse. Normal.”

That point about the March Spam Update not being a paradigm shift was a good observation about Google’s understated announcement, and it probably explains why Google didn’t even bother to update its spam update documentation.

A couple of the SEO Facebook groups didn’t discuss the update at all, which is itself a comment on how SEOs feel about Google’s spam updates. It could be a sign of how much wind has been taken out of the sails of low-level affiliate spammers and PBN sellers.

Wait, What… That Was It?

The end of the update was generally met with silence across the ongoing discussions on the internet.

WebmasterWorld member Micha summed up the general sense of anticlimax best:

“Huh? The update is over?”

It’s quite possible that Redditor mrtornado79’s opinion that it was not going to be a paradigm shift was the best view of what just happened.

What May Happen Next

The big question now may not be what just happened but rather what is going to happen next.

I’ve always seen Google’s spam updates as a clearing of the table in preparation for the next course. If a core update follows soon, then that may be what this muted spam update was about. The next course could be anything from the introduction of new AI-driven features (like the title rewrites Google was recently experimenting with) to something quiet that will barely be noticed, like an infrastructure change to accommodate something big and new.

What could Google implement over the coming months?

Two patents have been filed recently that I’ll be publishing information about soon.

1. User Journey Patent
The first one describes a machine learning system that determines how different types of content exposure influence a user’s likelihood of performing a specific action, such as making a purchase or signing up for a service. It is a system for attributing portions of the final action to specific content or ad exposures, even when those exposures occurred at different times. (A baseline sketch of this attribution problem follows below.)

2. Automatic Search Results Updates
This patent describes a system that improves search experiences by automatically delivering better results to a user after their original search, without requiring them to search again. It applies to both organic search and AI-assisted search, transforming search from a one-time activity into an information request that resolves over time. That is really interesting because it makes it possible to ask about something that hasn’t happened or been announced yet, expanding the range of queries Google can answer. (A toy sketch of that standing-query idea also follows below.)
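
Neither patent’s internals are described in detail here, but both map onto well-known problems. For the first, a classic baseline is multi-touch attribution; here is a minimal time-decay sketch (this is not the patent’s mechanism, and the half-life and exposure data are invented):

```python
# Time-decay multi-touch attribution: a classic baseline for the problem the
# patent addresses. NOT the patent's mechanism; all values are invented.
HALF_LIFE_DAYS = 7.0  # assumed: an exposure's influence halves every 7 days

def attribute(exposures: list[tuple[str, float]], conversion_day: float):
    """Split credit for a conversion across (channel, day) exposures."""
    weights = {
        (channel, day): 2 ** (-(conversion_day - day) / HALF_LIFE_DAYS)
        for channel, day in exposures
    }
    total = sum(weights.values())
    return {key: w / total for key, w in weights.items()}

touches = [("organic_search", 0.0), ("display_ad", 5.0), ("ai_overview", 9.0)]
for (channel, day), credit in attribute(touches, conversion_day=10.0).items():
    print(f"{channel} (day {day:.0f}): {credit:.1%}")
# organic_search gets the least credit (oldest), ai_overview the most (newest).
```

For the second, the concept resembles a standing query: a search that keeps re-evaluating itself and pushes an update when better results appear. A toy sketch of that idea (purely illustrative; nothing here reflects the patent’s implementation):

```python
# Toy "standing query": re-run a search on a schedule and push an update when
# the results change. Purely illustrative; not the patent's implementation.
import time
from typing import Callable

def standing_query(query: str,
                   search: Callable[[str], list[str]],
                   notify: Callable[[str, list[str]], None],
                   interval_s: float = 1.0,
                   rounds: int = 3) -> None:
    """Re-evaluate `search(query)` periodically; deliver changed results via `notify`."""
    last = search(query)
    for _ in range(rounds):
        time.sleep(interval_s)           # real systems would poll far less often
        current = search(query)
        if current != last:              # new or better results since last check
            notify(query, current)       # delivered without the user re-searching
            last = current

# Demo with a fake backend whose answer "arrives" over time.
answers = iter([[], [], ["Event date announced: June 12"]])
standing_query("when is the event?",
               search=lambda q: next(answers, ["Event date announced: June 12"]),
               notify=lambda q, r: print(f"Update for {q!r}: {r}"))
```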

My general impression of spam updates is that they are sometimes a prelude to changes elsewhere in Google’s core algorithm or related infrastructure. It may be an interesting month ahead.

Featured Image by Shutterstock/vchal