Google Analytics Launches Scenario Planner and Projections via @sejournal, @brookeosmundson

Google Analytics has launched Scenario Planner and Projections, two new features designed to help advertisers plan and monitor paid media budgets across channels.

The rollout is part of Google Analytics’ cross-channel budgeting feature, which is still in beta and not yet available to every Google Analytics property.

Read on to learn more about the tools, who’s eligible, and how advertisers can use them.

Introducing Scenario Planner and Projections

The rollout includes two separate tools built for different stages of campaign planning.

Scenario Planner is designed for future planning. It allows advertisers to model different budget allocations across channels and estimate how those changes may impact conversions, revenue, or return on investment. The tool is intended for building media plans ahead of campaign launches or defined planning periods.

Projections is designed for active campaigns. It helps advertisers evaluate whether current spend is pacing toward selected goals and where adjustments may be needed before the reporting period ends. This includes visibility into projected budget delivery, conversions, and revenue by channel.

Google says the tools are meant to be used together. Scenario Planner can be used to build a forward-looking budget plan, while Projections can be used to monitor how campaigns are tracking against that plan once they are live.

The feature is not limited to Google Ads data. Advertisers can incorporate campaign data from both Google and non-Google paid channels, provided cost data and integrations are properly configured.

There are, however, some requirements that may limit access. According to Google, eligibility requirements include:

  • At least one year of conversion data
  • Channels with cost data that are compatible with the primary channel grouping
  • At least one year of campaign data from at least two channels (Google and non-Google)

Google also notes that both tools rely on modeled estimates based on historical performance, meaning outputs are directional rather than guaranteed.

Cross-channel budgeting is currently labeled as a beta feature, and Google notes it may not yet be available to all Google Analytics properties, though the company is working to expand access to more accounts.

Why This Matters For Advertisers

For many teams, budget planning and performance analysis still happen in separate places.

Planning often lives in spreadsheets or internal forecasts, while performance is measured inside ad platforms and Google Analytics after the fact. That separation can make it harder to evaluate whether budget decisions are working in real time.

These tools bring some of that planning workflow into Google Analytics.

Advertisers now have a way to model budget allocation before campaigns begin and check pacing while campaigns are still running, using the same data source they rely on for performance reporting.

That could be useful for teams managing spend across multiple paid channels, particularly when trying to compare performance beyond a single platform’s recommendations.

At the same time, the usefulness of the feature will depend on data quality and setup. Advertisers with incomplete cost imports, limited historical data, or inconsistent conversion tracking may not be able to fully use the tools or may see less reliable projections.

What Comes Next

For advertisers already using Google Analytics as a central reporting tool, Scenario Planner and Projections may offer a more practical way to pressure-test budget decisions before and during campaign execution.

How useful the tools become in day-to-day planning will likely depend on how many advertisers qualify for access and how reliable the forecasting proves to be over time.

Half Your Traffic Left. The SEO Industry Sent Thoughts and Frameworks

Before AI Overviews launched in May 2024, Define Media Group’s portfolio of major U.S. publishers averaged 1.7 billion organic search clicks per quarter. Steady. Predictable. The kind of number you build a business model on and then stop thinking about, because why would you?

After the launch, traffic dropped 16% and never recovered. When Google expanded AI Overviews in May 2025, the decline accelerated. By Q4 2025, organic search traffic across that portfolio was down 42% from the pre-AIO baseline.

Nearly half the organic traffic, gone, from a portfolio large enough to be directional for the entire publishing industry.

The traffic bargain (you produce content, Google sends clicks, advertising revenue funds the next round of production) has been the economic engine of the open web for 20 years. That engine is seizing up in plain sight, and the industry’s response has been to argue about which dashboard to stare at while it happens.

New Interface, Same Delusion

The first camp did what the SEO industry always does when the ground shifts: they built new tools to measure the shaking.

Prompt tracking. LLM visibility dashboards. Share-of-answer metrics. In under 18 months, an entire vendor category materialized to sell you a number that tells you how often your brand appears in AI-generated responses. It’s Search Console for the chatbot era, and it comes with the same comforting implication: If the number goes up, you’re winning. If it goes down, buy more of the thing that makes it go up.

I’ve written about this before, and I’ll be blunt again: These tools are selling you bullshit with a confidence interval drawn on it in crayon. When a dashboard tells you your brand “appeared in 73% of relevant AI responses,” what it actually measured is: We fired some prompts at an API, got some outputs, and counted mentions. That’s not a ranking. That’s a lottery ticket.

The engineers who built these models cannot fully explain why a specific output appeared. But sure, a SaaS tool perched atop Mount Dunning-Kruger with a trend line has it all figured out.

The industry keeps buying because the alternative is admitting we’re flying blind. Questioning the data means telling the room that the “directional” charts in the client deck are noise dressed up as insight. Nobody wants to be that person. So the vendors keep selling, the dashboards keep flickering, and the number doesn’t need to correlate with revenue. It just needs to fluctuate enough to sustain a subscription.

Jono Alderson made the broader version of this argument in a recent piece, Clicks Don’t Count (and They Never Did). His point: SEO has always measured the interface rather than the forces underneath it. Rankings, traffic, visibility scores. None of these were measures of competitiveness. They were measurements of a presentation layer. We spent two decades optimizing what we could see and calling it strategy.

He’s right. And prompt tracking is the newest iteration of the same mistake. Old retrieval visibility in a trench coat, pretending to be two disciplines.

The second camp is more intellectually serious. Jono’s piece is the best version of this argument, and I agree with more of it than I’m about to make it sound like.

His framework: stop measuring the interface, start measuring competitiveness. Six structural dimensions drawn from marketing science validated for decades: experience integrity, physical availability, mental availability, distinctiveness, reputation, commercial proof. AI systems aggregate signals about brands across the web, not pages in isolation. The entities that are genuinely competitive get recommended and surfaced. Visibility is the output, not the input.

I think this is broadly correct. I also think it has a timing problem the size of a crater.

Those six dimensions operate on timescales of years. Building mental availability is a sustained brand investment. Earning reputation signals is the compound interest of consistently not being terrible. Strengthening distinctive assets requires buy-in from people who’ve never heard of Ehrenberg-Bass and aren’t going to read a blog post to find out.

The traffic collapse is happening in quarters.

Tell a publisher who just lost 42% of their search traffic to “strengthen structural competitiveness” and watch their face. It’s like telling someone whose house is flooding to invest in better drainage. You’re not wrong. You’re just not helping.

Jono knows this, to his credit. When someone in his comments asked how to operationalize the framework, his answer was honest: Redefine SEO to own those areas, or navigate the organizational politics of working with the teams that do. “Lots of organizational politics, either way.” That’s the kind of understatement that only someone who’s actually tried it would make.

What Actually Broke

The measurement debate is a sideshow. The traffic bargain wasn’t a metric. It was the economic foundation of content production on the open web.

Google needed content to crawl. Publishers needed distribution to monetize. Produce something worth indexing, Google sends traffic, you convert it into revenue, that revenue funds more content. The loop ran for 20 years. Everyone pretended it was a partnership rather than a dependency, and the pretense held because the numbers worked.

AI Overviews break the loop. Google synthesizes the answer from your content and serves it directly. The user gets what they need. Your content gets consumed on Google’s surface, with Google’s ads, generating Google’s engagement metrics. You get a citation link that roughly nobody clicks and a warm feeling about “brand visibility.”

Google’s own VP of Product for Search, Robby Stein, recently described how they had to “teach the model how to link out.” Linking to publishers wasn’t the default behavior. It had to be engineered back in. The system’s natural state is to absorb your content and answer the question. Sending traffic your way is the afterthought they bolted on, so the extraction doesn’t look like what it actually is: taking your stuff and serving it as theirs.

The breakage isn’t uniform. Define’s data shows breaking news traffic up 103% across all Google surfaces, while evergreen content dropped 40%. The Top Stories carousel has been largely shielded from AI Overview incursion. Evergreen content has not. The how-to guides, the explainers, the reference material, the content categories that built the SEO industry, are exactly the categories AI Overviews were designed to absorb and replace.

Google is selecting which content survives the transition. Time-sensitive content still drives clicks because you can’t summarize something that’s still developing. Everything else is increasingly raw material for the answer machine, and the machine doesn’t pay for raw materials.

If “competitiveness” replaces traffic as the operating metric, SEO’s scope has to change. Jono’s six dimensions are mostly owned by product, brand, and marketing. Experience integrity is product and UX. Mental availability is brand investment. Reputation is years of not cutting corners. Commercial proof is a function of whether the thing you sell is actually good. SEO teams control technical discoverability, content strategy, and site architecture. That’s one layer of the competitiveness framework, not the whole building.

So the discipline either expands into a cross-functional strategic role (good luck explaining to the CMO that SEO now owns brand strategy because the retrieval models changed) or it contracts honestly and positions itself as the technical infrastructure that makes competitiveness legible to machines. Either option beats “we’ll get you more organic traffic,” which is a promise that ages worse every quarter.

Clicks may not have been the right metric. Jono makes a persuasive case. We measured the interface and called it the system.

But clicks paid the bills. They funded editorial teams, justified content investment, and sustained the publishing ecosystem that both search engines and AI systems depend on for training data and retrieval sources. Without content to crawl, there’s nothing to index. Without content to train on, there’s nothing to synthesize. The irony is apparently lost on the company deploying AI Overviews.

Nobody’s building a transition strategy. The prompt-tracking vendors are selling the new dashboard. The strategists are selling the long view. Google won’t help. They broke the bargain, and their Discover push suggests they’d rather build a distribution surface they fully control than repair the one that shared value with publishers. The AI companies need content to exist, but haven’t worked out how to fund its production.

Everyone’s got a framework. Nobody’s got an answer.

The clicks didn’t count. But something needs to. Soon.

This post was originally published on The Inference.



How Zero-Party & First-Party Data Can Fuel Your Intent-Based SEO Strategy via @sejournal, @rio_seo

There’s an interesting paradox in marketing right now: marketers have more tools and data at their fingertips than ever, yet marketing leaders somehow have less clarity than before.

Over the past decade, Google’s algorithms and privacy regulations have significantly shifted traditional SEO best practices. SEO has evolved from a precise science to more of a trust discipline, where marketers must infuse credibility and authority into their content to improve visibility.

The new opportunity isn’t scraping more consumer behavior, but listening to it in a new way. By diving deeper into zero-party data (information customers willingly share) and first-party data (behavior observed directly on your own channels), chief marketing officers can shape their SEO strategies around real human intent.

Search success will be contingent on whether brands understand their audience well enough to create relevant, authentic, and trustworthy content at every step of the customer journey, not just when an algorithm prompts them to.

The Connection Between Zero-Party Data And SEO

Zero-party data is marketing’s cleanest and clearest source of truth. It uncovers the information customers want you to have. It unveils their preferences, motivations, and needs through methods like surveys, quizzes, chatbots, and more.

First-party data shows what users do. Zero-party data shows you why they did what they did. When paired together, both forms of data bridge the gap between analytics and empathy.

For example, a retail brand might ask site visitors in a post-purchase survey, “What is most likely to motivate you to make a purchase?” with answer options of price, sustainability, or convenience. Now, consider if nearly half of those respondents chose “sustainability.”

This insight shouldn’t fall into a void, but rather should be acted upon quickly. It’s not a trend but rather a clear signal. The content and SEO teams can now focus on creating content around “eco-friendly shopping” and other relevant sustainability topics, while communications teams can align messaging around the same topic. In turn, seamless collaboration and alignment take place.

Moving Beyond Keywords To Conversations

Traditional SEO homed in on what people typed into the search bar. Zero-party data reveals what people mean when they’re searching for a business, product, or service. Algorithms are increasingly rewarding intent satisfaction when evaluating content. When your content addresses and is built on declared motivations, like why someone is looking for your specific solution, you’re aligned with the future of search.

How To Turn Customer Data Into Search Strategy

The issue isn’t that CMOs aren’t collecting data; it’s that they struggle to turn it into action that drives meaningful change.

An intent-based SEO strategy has three phases: capture, interpret, and activate.

Phase 1: Capture

Customers aren’t going to hand over information if they don’t see a clear value in doing so. To encourage this, marketers must highlight a mutual benefit in the information exchange. A few methods include:

  • Gated research studies.
  • Short post-purchase surveys.
  • Interactive quizzes or calculators.
  • Preference centers so customers only receive communication around specified topics that matter most to them.
  • Incentives such as coupons and exclusive promotions for newsletter subscribers.

Each of these information exchanges becomes a declared-intent breadcrumb. Users have granted your business permission to act on their feedback, and those declared signals are far more actionable than cookie trails alone.

Phase 2: Interpret

Collecting information from myriad channels can make it difficult to determine where to focus attention first. To dissect and pull out the insights that matter most from unstructured and structured feedback, CMOs should invest in qualitative analysis tools. Tools like text analytics, for example, can make it easy for CMOs and CX teams alike to mine for common themes.

Customer Data Platforms (CDPs) can also help you create audiences and segments to deliver more personalized content that resonates with customers. This might look like a retail marketing manager only receiving newsletters, ebooks, or blogs that are related to the retail industry and trends.

These types of thematic content pillars can help inform supporting search queries, schema markup, content priorities, and more.

Phase 3: Activate

In this phase, you’ll set your plans into action. First, connect declared intent to keyword intent. For example, if customers talk about “security peace of mind,” this gives you clear insight into what they’re interested in learning more about and how your company can help. You could create content that explicitly speaks to “how we secure your personal data.”

On the other hand, if they’re talking about “easy to implement,” it may be beneficial for you to provide explainer-type content, such as a short video or an FAQ page (with FAQ schema), to address “how to integrate [product name]” searches.
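As a rough illustration (the product name, question, and answer below are invented placeholders rather than anything from a real site), an FAQ entry targeting that integration intent could be expressed as FAQPage JSON-LD placed in a script tag of type application/ld+json:

  {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
      {
        "@type": "Question",
        "name": "How do I integrate ExampleProduct with my existing tools?",
        "acceptedAnswer": {
          "@type": "Answer",
          "text": "ExampleProduct connects to most common platforms through prebuilt integrations. A two-minute setup video and a step-by-step guide walk you through the process."
        }
      }
    ]
  }

The point is simply that the declared intent (“easy to implement”) shapes both the question wording and the content that answers it.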

Zero-party data moves SEO from a guessing game to an action engine, producing content that satisfies not just search algorithms but also the people behind the searches.

Leadership Enablement: Aligning Teams, Culture, And Technology

To build an insight-to-action culture, CMOs should encourage teams to share qualitative learnings regularly, whether through a cadence of weekly meetings, via email, or a combination of the two. Customer experience teams should make Voice of Customer insights loud and clear to help inform SEO and content briefs.

It’s also important to highlight and reward cross-functional wins to showcase how working together helps drive growth. This might look like an SEO strategy that was informed by CX feedback or a case study that solves a pressing challenge clients typically face, informed by online reputation feedback.

Operationalize The Feedback Loop

CMOs can install a regular “intent feedback loop” to operationalize the data your company receives and act upon that data. This might look like:

  • Gather declared data (surveys, chatbot transcripts, online reviews, call center logs).
  • Identify what motivates consumers most (customers often talk about time savings, value for money, trust issues, emotions).
  • Update content briefs and keyword maps (primary and secondary keywords, content requirements, search intent to ensure you’re staying up to speed).
  • Measure whether your content is landing with your intended audience on an emotional and intellectual level. Engagement, recall, and action are key determinants of content success, not just how it ranks.

This type of feedback framework helps organizations embed customers’ preferences and desires directly into the content published, helping your business create the content that actually connects with your target audience.

The Metrics To Add

Measuring what matters most is integral to assessing the impact of your zero-party data efforts. Alongside other SEO metrics, the following can help you gain a holistic view of your SEO performance:

Resonance Metrics

Engagement quality is a truer signal of attention. Volume, while great to have, means little if it consists mostly of unqualified leads. Instead, look at:

  • Average engagement time: How long people stick around to view your content.
  • Return visits: People who come back to consume more of your content.
  • Scroll depth: How far visitors scroll, a sign of whether your content holds their interest to the end.

Relevance Metrics

Marketers must track growth in high-intent and branded queries, as these are most often the terms that someone who is on the verge of buying will use when searching for your business. If you’re showing up for phrases customers typically use at the decision-making stage, such as “State Farm vs. Geico car insurance,” this indicates deeper resonance.

Relationship Metrics

Loyalty metrics, while not something SEOs typically track, can correlate with how well your SEO program is working. Reframing SEO performance as a reflection of customer understanding helps CMOs dig a layer deeper, past tactics alone, to the deeper-rooted customer emotions that could be preventing your business from scaling. Look at:

  • Zero-party response rate: The percentage of users who are willing to share their personal information and experiences.
  • Repeat engagement: Consumers who continue to engage with your business and see value in doing so.
  • Customer lifetime value: How valuable a customer is to your business over time (how much they purchase and whether they churn quickly).
  • Retention rate: The share of customers you’ve worked hard to acquire who continue to do business with you.

The Future Belongs To Human-Declared Intent

We may be in the age of AI, but the future is human. Yes, AI can generate a keyword-optimized blog in a matter of seconds, but human touch is where the real value is. And human-informed data will be your business’s ultimate differentiator.

Zero- and first-party data reveal pertinent insights that elevate organizations when acted upon. They unlock why people search, not just what they search for. They also uncover where in the sales journey customers get stuck and what blocks them from purchasing.

Moving forward, to fuel your SEO efforts:

  • Ask customers what matters most to them.
  • Listen to what they have to say.
  • Create content that addresses those asks.
  • Optimize it for human needs, not just engagement and clicks.
  • Measure customer experience metrics, not just SEO.

When marketing leaders take consumer feedback to heart, they bridge the gap between traffic and trust, building stronger relationships that lead to more purchases, repeat customers, and improved brand experiences.


Is Your Website Ready for AI Search? A Practical Audit for CMOs via @sejournal, @lorenbaker

AI-driven discovery is reshaping how brands earn visibility and conversions. Most CMS stacks weren’t built for this shift.

  • Is your CMS structured for AI-powered search and answer engines?
  • Can your content be interpreted, reused, and surfaced by machine-driven systems?
  • Is your current tech stack quietly limiting performance in search?

Discoverability depends on structured data, flexible architecture, and systems that adapt quickly.

Watch the on-demand webinar to see how to evaluate whether your Drupal site, or other CMS, is built for what’s next.

How To Audit Your CMS for AI-Driven Search & Conversion Performance

In this practical, marketer-focused on-demand session, we’ll walk through how CMOs and marketing leaders can assess whether their current CMS and digital stack support modern search behavior or restrict it.

You’ll leave with a clear understanding of what AI readiness means at the platform level, and how to identify risk areas before they impact growth.

You’ll Learn:

  • Where enterprise Drupal implementations most often fall short in AI-driven discovery
  • How AI search changes SEO strategy, content modeling, and conversion optimization
  • What defines an AI-ready CMS stack, including structured content, composable architecture, and open-source flexibility

Check out the slides below or watch the full presentation, on demand!

The Download: tracing AI-fueled delusions, and OpenAI admits Microsoft risks

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The hardest question to answer about AI-fueled delusions 

What actually happens when people spiral into delusion with AI? To find out, Stanford researchers analyzed transcripts from chatbot users who experienced these spirals. 

Their findings suggest that chatbots have a unique ability to turn a benign, delusion-like thought into a dangerous obsession. But the research struggles to answer a vital question: does AI cause delusions or merely amplify them? Read the full story to understand the answer’s enormous implications. 

—James O’Donnell 

This story is from The Algorithm, our weekly newsletter giving you the inside track on all things AI. Sign up to receive it in your inbox every Monday. 

The next era of space exploration 

Our footprint in the solar system is rapidly expanding. Programs to build permanent Moon bases and find life on Mars have transitioned from science fiction to active space agency missions. The scientists behind them will not only shed new light on the cosmos, but also reveal where humanity is headed. 

To examine what the future holds in store, MIT Technology Review features editor Amanda Silverman will sit down on Wednesday with award-winning science journalist and author Robin George Andrews for an exclusive subscriber-only Roundtable conversation about “The Next Era of Space Exploration.” Register here to join the session at 16:00 GMT / 12:00 PM ET / 9:00 AM PT. 

The must-reads 

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 

1 OpenAI has admitted its close ties with Microsoft are a business risk 
It highlighted the dangers in a pre-IPO document. (CNBC)
+ OpenAI is wooing private equity firms with a sweeter deal than Anthropic’s. (Reuters $)
+ It’s also building a fully automated researcher. (MIT Technology Review)
+ And wants to muscle in on Google’s search dominance. (Telegraph $) 

2 The US just banned all new foreign-made consumer routers 
Citing national security concerns. (BBC)
+ The EU has been urged to tighten rules for big tech-built smart TVs. (Guardian)

3 Elon Musk’s “Terafab” chip factory faces a harsh reality check 
In the form of chip production shortages. (Bloomberg)
+ Future AI chips could be built on glass. (MIT Technology Review)

4 Mark Zuckerberg is building an AI CEO to help him run Meta 
He wants everyone to have their own personal AI agent. (WSJ $) 
+ But don’t let the hype about agents get ahead of reality. (MIT Technology Review)

5 Palantir has become a “poisonous” flashpoint on the campaign trail  
Candidates are facing scrutiny over their ties to the company. (FT $) 
+ Palantir’s access to sensitive UK data is also causing concern. (Guardian)

6 Mistral’s CEO has called for AI companies to pay a content levy in Europe 
It would apply to all commercial models on the continent. (FT $) 
+ Siemens’ CEO says Europe risks “disaster” from prioritizing AI independence. (FT $) 

7 Hong Kong police can now demand device passwords under a new law 
Refusing to comply could lead to a year in jail. (Guardian)  

8 Russia’s aspiring SpaceX rival has put its first internet satellites into orbit
It plans to create a low-Earth orbit network. (Bloomberg $) 

9 A biotech startup wants to replace animal testing with nonsentient “organ sacks” 
The genetically engineered system is backed by billionaire Tim Draper. (Wired $)
+ Several new technologies are promising alternatives to lab animals. (MIT Technology Review)

10 AI agents in a video game spontaneously created their own religion 
They reinterpreted a mission in the MMORPG. (Gizmodo)
+ They’re not the first agents to get religious. (MIT Technology Review)

Quote of the day 

“I think we’ve achieved AGI.” 

—Nvidia CEO Jensen Huang tells the Lex Fridman Podcast that artificial general intelligence is already here (at least by one generous definition). 

One More Thing 


Beyond gene-edited babies: the possible paths for tinkering with human evolution 

In 2018, the Chinese scientist He Jiankui created the world’s first gene-edited babies, a milestone that fell somewhere between a medical breakthrough and the start of a slippery slope toward human enhancement. 

He achieved the feat with CRISPR, which was sweeping across biology labs because it was so easy to use. For his actions, He was sentenced to three years in prison, and his work was roundly excoriated. Yet even his biggest critics saw the basic idea as inevitable. 

In the years since, CRISPR has continued getting easier and easier to administer. What does that mean for the future of our species? Read the full story to find out why. 

—Antonio Regalado 

We can still have nice things 

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.) 
 
+ This candle-powered Game Boy is a romantic approach to gaming during a blackout. 
+ Apparently, Monopoly would be more fun if we actually followed the rules. 
+ Watching rubber bands explode these everyday objects is strangely hypnotic. 
+ This spellbinding site simulates what Earth looked like hundreds of millions of years ago. 

This scientist rewarmed and studied pieces of his friend’s cryopreserved brain

L. Stephen Coles’s brain sits cushioned in a vat at a storage facility in Arizona. It has been held there at a temperature of around −146 °C for over a decade, largely undisturbed.

That is, apart from the time, a little over a year ago, when scientists slowly lifted the brain to take photos of it. Years before, the team had removed tiny pieces of it to send to Coles’s friend. Coles, a researcher who studied aging, was interested in cryonics—the long-term storage of human bodies and brains in the hope that they might one day be brought back to life. Before he died, he asked cryobiologist Greg Fahy to study the effects of the preservation procedure on his brain. Coles was especially curious about whether his cooled brain would crack, says Fahy.

Coles’s brain was preserved shortly after he died in 2014, but Fahy has only recently got around to analyzing those samples. He says that Coles’s brain is “astonishingly well preserved.”

“We can see every detail [in the structure of the brain biopsies],” says Fahy, who is chief scientific officer at biotech companies Intervene Immune and 21st Century Medicine (where he is also executive director). He hopes this means that Coles’s brain still stands a chance of reanimation at some point in the future.

Other cryobiologists are less optimistic. “This brain is not alive,” says John Bischof, who works on ways to cryopreserve human organs at the University of Minnesota.

Still, Fahy’s research could help provide a tool to neuroscientists looking for new ways to study the brain. And while human reanimation after cryopreservation may be the stuff of science fiction, using the technology to preserve organs for transplantation is within reach.

Banking a brain

Coles, a gerontologist who spent the latter part of his career studying human longevity, opted to have his brain cryogenically preserved when he died of pancreatic cancer.

After he was declared dead, Coles’s body was kept at a low temperature while he was transferred to Alcor, a cryonics facility in Arizona. His head was removed from his body, and a team perfused his brain with “cryoprotective” chemicals that would prevent it from freezing. They then removed it from his skull and cooled it to −146 °C.

Coles had another request. As a scientist, he wanted his cryopreserved brain to be studied. Hundreds of people have opted to have their brains—with or without the rest of their bodies—stored at cryonic facilities (the remains of 259 individuals are currently stored as either whole bodies or heads at Alcor). But scientists know very little about what has happened to those brains, and there’s no evidence to suggest they could be revived. Coles had met Fahy through their shared interest in longevity, and he asked him to investigate.

“He thought that if he had himself cryopreserved, we could learn from his brain whether cracking was going to happen or not,” says Fahy. That’s what typically happens when organs are put into liquid nitrogen at −196 °C, he says. The extreme cooling creates “tension in the system,” he says. “If you tap it, it’ll just shatter.” This cracking is less likely at the slightly warmer temperatures used for preservation. 

Fahy was involved from the time the samples were taken.

“We had Greg Fahy on the phone coordinating the whole thing, [including] where the biopsies were taken,” says Nick Llewellyn, who oversees research at Alcor. (Llewellyn was not at Alcor at the time but has discussed the procedure with his colleagues.) The biopsied samples were stored in liquid nitrogen and earmarked for Fahy. The rest of the brain was cooled and kept in a temperature-controlled storage container at Alcor.

Bouncing back

It wasn’t until years later that Fahy got around to studying those biopsies. He was interested in how the cryoprotectant—which is toxic—might have affected the brain cells. Previous research has shown that flooding tissues with cryoprotectant can distort the structure of cells, essentially squashing them.

It’s one of the many challenges facing cryobiologists interested in storing human tissues at very low temperatures. While the vitrification of eggs and embryos—which cools them to −196 °C and essentially turns them to glass—has become relatively routine (thanks in part to Fahy’s own work on mouse embryos back in the 1980s), preserving whole organs this way is much harder. It is difficult to cool bigger objects in a uniform way, and they are prone to damaging ice crystal formation, even when cryoprotectants are used, as well as cracking.

Fahy found that when he rewarmed and rehydrated Coles’s brain cells, their structure seemed to bounce back to some degree. Fahy demonstrated the effect over a Zoom call: “It looks like this,” he said with his hands as if in prayer, “and it goes back to this,” he added, connecting his forefingers and thumbs to create a triangle shape.

The structure of the tissue looks pretty intact, too, to him at least, though he admits a purist expecting a pristine structure would be disappointed. He and his colleagues have been able to see remarkable details in the cells and their component parts. “There’s nothing we don’t see,” says Fahy, who has shared his results, which have not yet been peer reviewed, at the preprint server bioRxiv. “It seems that [by taking the cryogenic approach] you can preserve everything.”

As for the cracking, “from what I was told, no cracks were observed [by the team that initially preserved the brain],” says Fahy. The team at Alcor took photographs of the brain when they took the biopsies, but the images were later lost due to a server malfunction, he says. In the more recent photos, the brain is covered in a layer of frost, which makes it impossible to see if there are any cracks, he adds. Attempts to remove the frost might damage the brain, so the team has decided to leave it alone, he says.

Back to life?

Fahy and his colleagues used chemicals to “fix” Coles’s brain samples once they had been rewarmed. That process is typically used to stop fresh tissue samples from decaying, but it also effectively kills them.

But he thinks his results suggest that it might be possible to cryopreserve small pieces of brain tissue and reanimate them to learn more about how they work. Functional recovery seems to be possible in mice—a few weeks ago a team in Germany showed that they were able to revive brain slices that had been stored at −196 °C. Those brain samples showed electrical activity after being cooled and rewarmed.

If cryobiologists can achieve the same feat with human brain samples, those samples could provide neuroscientists with new insights into how living brains work.

Brain cryopreservation “can capture a little bit more of the complexities of the brain,” says Shannon Tessier, a cryobiologist at Massachusetts General Hospital who is developing technologies to preserve hearts, livers, and kidneys for transplantation. “[Being] able to use human brains from deceased individuals [could] add another layer to the research tool kit,” she says.

And Fahy’s paper shows “what happens when we try and vitrify a one-liter, dense, massive goop,” says Matthew Powell-Palm, a cryobiologist at Texas A&M University. “We now have a strong indication that quite large [tissues and organs] can be vitrified by perfusion [without forming too much ice],” he says.

All of the scientists I spoke to, including Fahy, are also working on ways to cool and preserve organs for transplantation. These are in short supply partly because once an organ is removed from a donor, it usually must be transplanted into its recipient within a matter of hours. 

Cryopreservation could buy enough time to make use of more organs, find better organ-donor matches, and potentially even prepare recipients’ immune systems and save them from a lifetime of immunosuppressant drugs, says Bischof, who has also been developing new technologies for organ cryopreservation.

Bischof, Fahy, and others have made huge strides in their attempts so far, and they have managed to remove, cryopreserve, and transplant organs in rabbits and rats, for example. “We’re at the cusp of human-scale organ cryopreservation,” says Bischof.

But when it comes to preserving brains, donation isn’t the aim. Coles had hoped to be reanimated—a far more ambitious goal that hinges on the ability to restore brain function.

Brain reanimation

Fahy acknowledges that while the structure of Coles’s brain samples did bounce back, there is no evidence to suggest the cells could be brought back to life and regain electrical activity and a functioning metabolism. “Restoring it to function … that’s a whole other story,” he says.

But he thinks that successful cryopreservation of the brain “is the gateway to human suspended animation, which [could allow] us to get to the stars someday.” Figuring out human preservation would also allow people to avoid death through what he calls “medical time travel”—journeying to an unspecified time in the future when science will have found a cure for whatever was due to kill that person. “That would be an ultimate goal to pursue,” he says.

“I put the chances [of brain reanimation] at pretty low,” says Alcor’s own Llewellyn. “The kind of technology we need is practically unfathomable.”

The brains already in storage at Alcor and other facilities have been preserved in ways that “have not been validated to work for reanimation,” says Tessier. An expectation that they’ll one day be brought back to life in some form is “quite a jump of faith and hope that’s not based on science,” she says.

As Powell-Palm puts it: “There are so many ways in which those neurons could be toast.”

Exclusive eBook: Are we ready to hand AI agents the keys?

We’re starting to give AI agents real autonomy, but are we prepared for what could happen next?

This subscriber-only eBook explores this question, along with perspectives from experts, such as: “If we continue on the current path … we are basically playing Russian roulette with humanity.”

by Grace Huckins June 12, 2025


How Foreign Brands Test the U.S. Market

You have a product. You’ve done the research. The U.S. market feels like the obvious next step, but you haven’t launched there yet. You’ve wondered, “What if it doesn’t work?”

That voice is right to ask. Most products fail not because the item is bad, but because of inadequate preparation and misjudged demand.

I’m the founder of OT Growth Labs, a Los Angeles-based agency helping international brands launch and scale in the U.S. Since 2008 I’ve served worldwide in executive ecommerce marketing roles for leading consumer companies.

The U.S. is the world’s largest consumer market. But for brands coming from Europe, Asia, or Latin America, it’s often where products die quietly. Consumers are different, compliance is different, and your domestic playbook won’t travel.

So before spending big money, test the demand in two ways:

  • Virtual testing measures interest before inventory exists.
  • Physical testing sells a real product in small quantities.

Virtual Testing


Virtual product demonstrations are low-cost, fast to launch, and require no inventory.

Virtual testing gauges whether consumers want your product — before you make it. It’s ideal for early-stage brands, limited budgets, or high-risk products. It won’t replace physical sales, but it’s a smart first filter.

Start with a landing page.

Explain your product thoroughly and the problem it solves. Disclose packaging, format, ingredients, claims, and label design. Give visitors an action step, such as joining a waitlist, requesting early access, or opting in to receive launch notifications.

Drive traffic through ads, social media, and influencers. It’s an encouraging signal if visitors sign up.

Brands with a platform or app already generating traffic can avoid a separate landing page by upselling to existing users. It saves time and money.

Don’t test a single concept. Run two or three variations and compare results. In my experience, the version that wins in the U.S. is rarely the one that worked at home. U.S. consumers respond to numbers and to bold, specific language: “clinically tested,” “formulated by veterinarians,” “organic.” They want proof up front.

Virtual testing:

  • Pros. Low cost, fast to launch, no inventory.
  • Cons. Measures interest only, not product or purchase intent.

Physical Testing


Selling a physical test product offers real data, reviews, and market validation.

The most reliable way to validate demand is to sell a product. Ship a small batch to the U.S. from your current manufacturer, or produce in the U.S. with a minimum run.

The latter option, manufacturing in the U.S., takes longer and costs more, but in my experience it’s often worth it. “Made in the U.S.A.” on the label is frequently a strong selling point.

Physical testing answers questions that a landing page cannot: Does the product perform? Does the packaging hold up? Is the formula good? Is the price right? What do customers say?

Sales will tell you more than months of research, as will reviews, which are critical. An overwhelming percentage of U.S. consumers rely on reviews before buying.

Brands in adjacent categories often use physical testing as a learning loop. They launch a small batch, collect reviews, improve the formula or positioning, and then scale. The final version wins because of findings from the tests.

Physical testing:

  • Pros: Real sales data, reviews, market validation.
  • Cons: Expensive and slow. A small batch can take a year from start to shelf. It requires compliance prep, label and design creation, and formula testing. Finding a manufacturer willing to run small batches is a challenge.

Test, Then Scale

Entering the U.S. market is getting harder. Tariffs are rising, and regulations are tightening. Imports valued at less than $800 are no longer exempt from duties — a direct hit on international companies shipping small quantities.

Foreign brands succeed in the U.S. through testing and information-gathering, not just superior products.

Start small; the market will tell you the rest.

Google Begins Rolling Out The March 2026 Spam Update via @sejournal, @MattGSouthern

Google started rolling out the March 2026 spam update today, according to the Google Search Status Dashboard.

The update is global and in all languages, with a rollout that may take a few days.

What’s New

The Search Status Dashboard listed the update as an incident affecting ranking at 12:00 PM PT on March 24, with the release note posted at 12:18 PM PDT.

Google’s description reads:

“Released the March 2026 spam update, which applies globally and to all languages. The rollout may take a few days to complete.”

Google hasn’t published a blog post or announced new spam policies with this rollout. So far, it seems to be a standard spam update, not a broader policy change like the March 2024 update, which added categories such as scaled content abuse, expired domain abuse, and site reputation abuse.

How Spam Updates Work

Google describes spam updates as improvements to spam-prevention systems like SpamBrain, targeting sites violating spam policies, which can lead to lower rankings or removal from search results.

Spam updates differ from core updates, which re-assess content quality. Spam updates enforce policies against violations like cloaking, link spam, and content abuse.

Sites affected by a spam update can recover, but recovery takes time. Google states improvements may only appear once automated systems detect compliance over months.

Context

This is Google’s first spam update since the August 2025 spam update, which ran from August 26 to September 22 and took nearly 27 days to complete. That update was characterized by SISTRIX as penalty-only, with affected spammy domains losing visibility but no broad ranking changes.

Google’s estimated timeline of “a few days” for the March 2026 update suggests a shorter rollout than recent spam updates, though timelines can stretch. The December 2024 spam update completed in seven days. The August 2025 update took nearly four weeks.

The March 2026 spam update comes about three weeks after the February Discover update finished rolling out.

Why This Matters

Ranking changes during spam update rollouts can happen quickly. Monitoring Search Console data over the next few days will help distinguish spam-related drops from normal fluctuation.

Google hasn’t announced new spam policy categories with this update, so the existing spam policies remain the relevant framework for evaluating any impact.

Looking Ahead

Google will update the Search Status Dashboard when the rollout is complete. Search Engine Journal will report on the completion and any observed effects.



Google Adds AI & Bot Labels To Forum, Q&A Structured Data via @sejournal, @MattGSouthern

Google updated its Discussion Forum and Q&A Page structured data documentation, adding several new supported properties to both markup types.

The most notable addition is digitalSourceType, a property that lets forum and Q&A sites indicate when content was created by a trained AI model or another automated system.

Content Source Labeling Comes To Forum Markup

The new digitalSourceType property uses IPTC digital source enumeration values to indicate how content was created. Google supports two values:

  • TrainedAlgorithmicMediaDigitalSource for content created by a trained model, such as an LLM.
  • AlgorithmicMediaDigitalSource for content created by a simpler algorithmic process, such as an automatic reply bot.

The property is listed as recommended, not required, for both the DiscussionForumPosting and Comment types in the Discussion Forum docs, and for Question, Answer, and Comment types in the Q&A Page docs.
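As a hedged sketch of how that might look in practice (the author, date, and text below are invented, and the exact value format Google expects should be confirmed against its documentation), a forum reply generated by an automatic bot could be marked up along these lines:

  {
    "@context": "https://schema.org",
    "@type": "DiscussionForumPosting",
    "author": {
      "@type": "Organization",
      "name": "ExampleForum AutoResponder"
    },
    "datePublished": "2026-03-20T08:00:00+00:00",
    "text": "Thanks for posting. A moderator will reply shortly.",
    "digitalSourceType": "AlgorithmicMediaDigitalSource"
  }

A post written by a trained model such as an LLM would instead use the TrainedAlgorithmicMediaDigitalSource value.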

Google already uses similar IPTC source type values in its image metadata documentation to identify how images were created. The update extends that concept to text-based forum and Q&A content.

New Comment Count Property

Google added commentCount as a recommended property across both documentation pages. It lets sites declare the total number of comments on a post or answer, even when not all comments appear in the markup.

The Q&A Page documentation includes a new formula: answerCount + commentCount should equal the total number of replies of any type. This gives Google a clearer picture of thread activity on pages where comments are paginated or truncated.
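To illustrate the arithmetic (the question, counts, and answer below are invented), a question with three answers and two comments, only one of which is included in the markup, might declare its counts like this:

  {
    "@context": "https://schema.org",
    "@type": "QAPage",
    "mainEntity": {
      "@type": "Question",
      "name": "How do I reset the example router after a firmware update?",
      "answerCount": 3,
      "commentCount": 2,
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Hold the reset button for ten seconds, then reconnect and reconfigure the device.",
        "upvoteCount": 12
      }
    }
  }

Here answerCount (3) plus commentCount (2) signals five total replies, even though only the accepted answer appears in the structured data.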

Expanded Shared Content Support

The Discussion Forum documentation expanded its sharedContent property. Previously, sharedContent accepted a generic CreativeWork type. The updated docs now explicitly list four supported subtypes:

  • WebPage for shared links.
  • ImageObject for posts where an image is the primary content.
  • VideoObject for posts where a video is the primary content.
  • DiscussionForumPosting or Comment for quoted or reposted content from other threads.

The addition of DiscussionForumPosting and Comment as accepted types is new. Google’s updated documentation includes a code example showing how to mark up a referenced comment with its URL, author, date, and text.
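Google’s own example isn’t reproduced here, but a quoted comment reposted from another thread might be represented roughly as follows (URLs, names, dates, and text are invented for illustration):

  {
    "@context": "https://schema.org",
    "@type": "DiscussionForumPosting",
    "headline": "Re: Best budget mechanical keyboards",
    "author": { "@type": "Person", "name": "ExampleUser" },
    "datePublished": "2026-03-18T14:32:00+00:00",
    "text": "Quoting this from the hardware thread because it answers the question:",
    "sharedContent": {
      "@type": "Comment",
      "url": "https://example.com/forum/hardware/thread/456#comment-9",
      "author": { "@type": "Person", "name": "OriginalPoster" },
      "datePublished": "2026-02-02T09:15:00+00:00",
      "text": "At this price point, the switches matter more than the board."
    }
  }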

The image property description was also updated across both docs with a note about link preview images. Google now recommends placing link preview images inside the sharedContent field’s attached WebPage rather than in the post’s image field.
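Following that recommendation, a post that shares an external link would attach the preview image to the shared WebPage rather than to the post itself, roughly like this (values again invented):

  {
    "@context": "https://schema.org",
    "@type": "DiscussionForumPosting",
    "headline": "Found a useful explainer worth sharing",
    "author": { "@type": "Person", "name": "ExampleUser" },
    "datePublished": "2026-03-19T10:05:00+00:00",
    "sharedContent": {
      "@type": "WebPage",
      "url": "https://example.com/articles/explainer",
      "image": "https://example.com/articles/explainer/preview.jpg"
    }
  }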

Why This Matters

For sites that publish a mix of human and machine-generated content, the digitalSourceType addition provides a structured way to communicate that to Google. The new properties are optional, and no existing implementations will break.

Google has not said how it will use the digitalSourceType data in its ranking or display systems. The documentation only describes it as a way to indicate content origin.

Looking Ahead

The update does not include changes to required properties, so existing forum and Q&A structured data implementations remain valid. Sites that want to adopt the new properties can add them incrementally.