The Download: creating the perfect baby, and carbon removal’s lofty promises

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The race to make the perfect baby is creating an ethical mess

An emerging field of science is seeking to use cell analysis to predict what kind of person an embryo might eventually become.

Some parents turn to these tests to avoid passing on devastating genetic disorders that run in their families. A much smaller group, driven by dreams of Ivy League diplomas or attractive, well-behaved offspring, is willing to pay tens of thousands of dollars to optimize for intelligence, appearance, and personality.

But customers of the companies emerging to provide these tests to the public may not be getting what they’re paying for. Read the full story.

—Julia Black

This story is from our forthcoming print issue, which is all about the body. If you haven’t already, subscribe now to receive future issues once they land. Plus, you’ll also receive a free digital report on nuclear power.

The problem with Big Tech’s favorite carbon removal tech

Sucking carbon pollution out of the atmosphere is becoming a big business—companies are paying top dollar for technologies that can cancel out their own emissions.

Tech giants like Microsoft are betting big on one technology: bioenergy with carbon capture and storage (BECCS). But there are a few potential problems with BECCS, as my colleague James Temple laid out in a new story. And some of the concerns echo similar problems with other climate technologies we cover, like carbon offsets and alternative jet fuels. Read the full story.

—Casey Crownhart

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

2025 climate tech companies to watch: Fervo Energy and its advanced geothermal power plants

Some places on Earth hit the geological jackpot for generating electricity. In those spots, three conditions naturally align: high temperatures, plentiful water, and rock that’s permeable enough for fluids to circulate through.

Enhanced geothermal systems aim to replicate those conditions in far more places—producing a steady supply of renewable energy wherever they’re deployed. Fervo Energy uses fracking techniques to create geothermal reservoirs capable of delivering enough electricity to power massive data centers and hundreds of thousands of homes. Read the full story.

—Celina Zhao

Fervo Energy is one of our 10 climate tech companies to watch—our annual list of some of the most promising climate tech firms on the planet. Check out the rest of the list here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Meta removed a Facebook group that shared ICE agent sightings
It’s the latest tech company to acquiesce to US government pressure. (NYT $)
+ Meta says the group violates its policies against “coordinated harm.” (NBC News)
+ Another effort to track ICE raids was just taken offline. (MIT Technology Review)

2 Loss-making AI startups are still soaring in value
If it looks like a bubble, and sounds like a bubble… (FT $)
+ AI-backed energy firms have also ballooned in value. (WSJ $)
+ Scaling isn’t always the answer, y’know. (Wired $)

3 Facial recognition is failing people with facial differences
Yet it’s being embedded in everything from phone unlocking systems to public services. (Wired $)

4 Tech billionaires are backing a startup that treats tumors with sound waves
It’s being touted as a less-invasive alternative to chemotherapy. (Bloomberg $)

5 Scam texts are a billion-dollar criminal enterprise
And we’re being inundated with more of them than ever before. (WSJ $)
+ The people using humor to troll their spam texts. (MIT Technology Review)

6 South Korea has rolled back an AI textbook program for schools
Turns out it was riddled with inaccuracies and added to teachers’ workloads. (Rest of World)
+ The country is considering allowing Google and Apple to make hi-res maps. (TechCrunch)

7 YouTube is setting its sights on sports
Which makes sense, given that it’s conquered pretty much all the other TV genres. (Hollywood Reporter $)

8 Job hunting in the age of AI is bleak
Even the best candidates are being overlooked. (The Atlantic $)
+ The job market is a mess too. (Slate $)

9 A new channel broadcasts a livestream direct from the ISS 🌏
If you’ve ever wanted to be an astronaut, watching this is the next best thing. (The Guardian)

10 The end of support for Windows 10 is an e-waste disaster
Up to 400 million machines could be heading to the scrap heap. (404 Media)
+ The US government has cut funding for a battery-metals recycler. (Bloomberg $)
+ AI will add to the e-waste problem. Here’s what we can do about it. (MIT Technology Review)

Quote of the day

“We are not the elected moral police of the world.”

—OpenAI CEO Sam Altman reacts to the outcry sparked by his company’s decision to relax its rules to let adults hold erotic conversations with ChatGPT, CNBC reports.

One more thing

Inside India’s scramble for AI independence

Despite its status as a global tech hub, India lags far behind the likes of the US and China when it comes to homegrown AI.

That gap has opened largely because India has chronically underinvested in R&D, institutions, and invention. Meanwhile, since no single native language is spoken by a majority of the population, training language models is far more complicated than it is elsewhere.

So when the open-source foundation model DeepSeek-R1 suddenly outperformed many global peers, it struck a nerve. This launch by a Chinese startup prompted Indian policymakers to confront just how far behind the country was in AI infrastructure—and how urgently it needed to respond. Read the full story.

—Shadma Shaikh

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ This haunting shot of a hyena is this year’s Wildlife Photographer of the Year award winner (thanks Laurel!)
+ Madonna sure has a lot of famous friends.
+ This little giraffe is so sleepy 🦒
+ Late ‘80s dance heads, rise up!

Unlocking the potential of SAF with book and claim in air freight

Used in aviation, book and claim offers companies the ability to financially support the use of SAF even when it is not physically available at their locations.

As companies that ship goods by air or provide air freight-related services pursue climate goals aimed at reducing emissions, the importance of sustainable aviation fuel (SAF) couldn’t be more pronounced. In its neat form, SAF has the potential to reduce life-cycle GHG emissions by up to 80% compared with conventional jet fuel.

In this exclusive webcast, leaders discuss the urgency of reducing air freight emissions for freight forwarders and shippers, and the reasons companies should use SAF. They also explain how companies can best make use of the book and claim model to support their emissions reduction strategies.

Learn from the leaders

  • What book and claim is and how companies can use it
  • Why SAF use is so important
  • How freight forwarders and shippers can both potentially utilise and contribute to the benefits of SAF

Featured speakers

Raman Ojha, President, Shell Aviation. Raman is responsible for Shell’s global aviation business, which supplies fuels, lubricants, and lower-carbon solutions, and offers a range of technical services globally. During almost 20 years at Shell, Raman has held leadership positions across a variety of industry sectors, including energy, lubricants, construction, and fertilisers. He has broad experience across both mature markets in the Americas and Europe and developing markets including China, India, and Southeast Asia.

Bettina Paschke, VP ESG Accounting, Reporting & Controlling, DHL Express. Bettina Paschke leads ESG Accounting, Reporting & Controlling at DHL Express, a division of DHL Group. She is responsible for ESG matters including EU Taxonomy reporting and carbon accounting, and has more than 20 years’ experience in finance. In her role she drives the sustainable aviation fuel agenda at DHL Express and is engaged in various industry initiatives to enable reliable book and claim transactions.

Christoph Wolff, Chief Executive Officer, Smart Freight Centre. Christoph Wolff leads programs focused on sustainability in freight transport at Smart Freight Centre. Prior to this role, he served as Senior Advisor and Director at ACME Group, a global leader in green energy solutions. With a background spanning several industries, Christoph has held positions such as Managing Director at the European Climate Foundation and Senior Board Advisor at Ferrostaal GmbH. He has also worked at Novatec, Solar Millennium AG, DB Schenker, and McKinsey & Company, and served as an Assistant Professor at Northwestern University’s Kellogg School of Management. Christoph holds multiple degrees from RWTH Aachen University and ETH Zürich, along with ongoing executive education at the University of Michigan.

Watch the webcast.

This discussion is presented by MIT Technology Review Insights in association with Avelia. Avelia is a Shell-owned solution and brand that was developed with support from Amex GBT, Accenture, and Energy Web Foundation. The views from individuals not affiliated with Shell are their own and not those of Shell PLC or its affiliates. Cautionary note | Shell Global

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Not all offerings are available in all jurisdictions. Depending on jurisdiction and local laws, Shell may offer the sale of Environmental Attributes (for which subject to applicable law and consultation with own advisors, buyers might be able to use such Environmental Attributes for their own emission reduction purposes) and/or Environmental Attribute Information (pursuant to which buyers are helping subsidize the use of SAF and lower overall aviation emissions at designated airports but no emission reduction claims may be made by buyers for their own emissions reduction purposes). Different offerings have different forms of contracts, and no assumptions should be made about a particular offering without reading the specific contractual language applicable to such offering.

Take our quiz: How much do you know about antimicrobial resistance?

This week we had some terrifying news from the World Health Organization: Antibiotics are failing us. A growing number of bacterial infections aren’t responding to these medicines—including common ones that affect the blood, gut, and urinary tract. Get infected with one of these bugs, and there’s a fair chance antibiotics won’t help. 

The scary truth is that a growing number of harmful bacteria and fungi are becoming resistant to drugs. Just a few weeks ago, the US Centers for Disease Control and Prevention published a report finding a sharp rise in infections caused by a dangerous type of bacteria that are resistant to some of the strongest antibiotics. Now, the WHO report shows that the problem is surging around the world.

In this week’s Checkup, we’re trying something a bit different—a little quiz. You’ve probably heard about antimicrobial resistance (AMR) before, but how much do you know about microbes, antibiotics, and the scale of the problem? Here’s our attempt to put the “fun” in “fundamental threat to modern medicine.” Test your knowledge below!

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Recommerce for the Holidays

The resale economy is no longer a niche. As the 2025 Christmas shopping season approaches, the sale of pre-owned, refurbished, or overstock goods is impacting how merchants attract consumers and clear excess inventory.

Salesforce predicts that U.S. resale transactions — peer-to-peer, marketplaces, other channels — will account for $64 billion in holiday revenue this year. Fashion resale alone could top $26 billion in 2025.

Recommerce is fast becoming both a profitable sales channel and a discount strategy that avoids brand erosion.

Nearly every recommerce survey this year points to growth. Deloitte reported that some 150 U.S. fashion brands now offer in-house resale programs, up more than 300% since 2021.

Tariffs, inflation, and changing consumer values all push shoppers toward pre-owned goods.


Lululemon Like New is the company’s dedicated resale site.

The Resale Consumer

The resale shopper, however, is not only a bargain hunter.

Recommerce’s new appeal could stem from taste, value, and access. Consumers want products that feel distinct and attainable, and if those products happen to be cheaper than first-run items, all the better.

Consider Pinterest’s Autumn Trend Report 2025, released in August. The platform reported a 550% increase in searches for “dream thrift finds” and an increase of more than 1,000% in searches related to a “vintage autumn aesthetic.”

This apparent combination of thrift and taste may explain why even relatively expensive brands such as Lululemon, Madewell, and Nike sell their own used, reconditioned, or overstock products at steep discounts.

Strategic Resale

For ecommerce operators, recommerce isn’t merely a revenue opportunity. It is a strategic pricing tool.

Instead of relying on blanket markdowns that dilute brand equity, merchants can move open-box returns, refurbished goods, and aged or seasonal inventory into a “pre-owned” or “like-new” category.

This approach reframes discounting as value-driven and appeals to new customer segments, especially Gen Z and millennial shoppers, who associate thrift with savvy and authenticity.

Holiday Execution

The imminent Christmas shopping season is an excellent time to test recommerce.

New inventory. Recommerce can begin with overstock and slow-moving products, not just used items. Instead of markdowns, list products in a “like new” category that frames savings as smart and value-driven.

Merchants can also run A/B tests. Offer the same SKU twice — one discounted, one recommerce — and compare performance. Many shops find the “like new” label maintains value while attracting price-conscious buyers.

Returned items. Don’t forget returned and reconditioned goods, which often sit idle during the holidays. Try creating a dedicated section for open-box or lightly used items that meet resale standards.

Early returns from Black Friday and Cyber Monday can become new listings within days. Build a fast intake process: inspect, relabel, and relist within 72 hours. Every extra day in storage is a lost chance to capture demand.

After Christmas. Present post-holiday campaigns as “Smart Finds” or “Returned Favorites.” Turn liquidation into a recommerce story. The approach converts returns into marketing and signals that the retailer values reuse and efficiency.

Recommerce Tech

With planning and organization, the resale channel is an option for nearly any ecommerce site.

Nonetheless, several apps and add-ons make reselling easier.

  • Archive and Trove integrate with Shopify and other platforms for resale logistics.
  • Loop Returns and ReturnLogic automatically route eligible returns into resale channels.
  • B-Stock provides liquidation options for bulk or unsellable items.

Tracking resale margins separately in Google Analytics or ecommerce dashboards can quantify whether recommerce cannibalizes or complements sales.

Holiday Testing

Recommerce increasingly contributes to U.S. retail growth. For merchants, it is a margin and retention strategy that redefines how inventory flows through the business.

Testing recommerce during the 2025 holiday shopping season allows retailers to gauge sales without committing to a full-scale effort. If it performs well, the channel can become permanent.

Google Reminds SEOs How The URL Removals Tool Works via @sejournal, @martinibuster

Google’s John Mueller answered a question about removing hacked URLs that are showing in the index. He explained how to remove the sites from appearing in the search results and then discussed the nuances involved in dealing with this specific situation.

Removing Hacked Pages From Google’s SERPs

The person asking the question was a victim of the so-called Japanese hack attack, named for the hundreds or even thousands of rogue Japanese-language web pages the attackers create. The person had dealt with the issue and removed the spammy infected pages, leaving 404 pages that are still referenced in Google’s search results.

They now want to remove them from Google’s search index so that the site is no longer associated with those pages.

They asked:

“My site recently got a Japanese attack. However, I shifted that site to a new hosting provider and have removed all data from there.

However, the fact is that many Japanese URLs have been indexed.

So how do I deindex those thousands of URLs from my website?”

The question reflects a common problem in the aftermath of a Japanese hack attack: hacked pages stubbornly remain indexed long after they have been removed. Site recovery is not complete once the malicious content is gone; Google’s search index needs to clear the pages, too, and that can take a frustratingly long time.

How To Remove Japanese Hack Attack Pages From Google

Google’s John Mueller recommended using the URL Removals Tool in Search Console. Despite the tool’s name, it doesn’t remove a URL from the search index; it just stops the URL from appearing in Google’s search results sooner, provided the content has already been removed from the site or blocked from Google’s crawler. Under normal circumstances, Google removes a page from the search results only after the page is recrawled and found to be blocked or gone (a 404 error response).

Three Prerequisites For URL Removals Tool

  1. The page is removed and returns a 404 or 410 server response code.
  2. The URL is blocked from indexing by a robots meta tag (see the example below).
  3. The URL is prevented from being crawled by a robots.txt file.
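For item 2, the standard form of that tag is shown below. This is a generic illustration of the mechanism rather than code from Mueller’s reply:

```html
<!-- Placed in the <head> of each page that should be dropped from the index.
     The page must remain crawlable for Google to see this tag. -->
<meta name="robots" content="noindex">
```

For item 3, the equivalent is a Disallow rule in robots.txt. Keep in mind that these conditions interact: a URL blocked by robots.txt can’t be fetched, so Google will never see a 404 response or a noindex tag on it. For a hack cleanup, letting the removed pages return 404 or 410 is usually the cleanest route, which matches Mueller’s note below that a 404 is technically the right response code.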

Google’s Mueller responded:

“You can use the URL removal tool in search console for individual URLs (also if the URLs all start with the same thing). I’d use that for any which are particularly visible (check the performance report, 24 hours).

This doesn’t remove them from the index, but it hides them within a day. If the pages are invalid / 404 now, they’ll also drop out over time, but the removal tool means you can stop them from being visible “immediately”. (Redirecting or 404 are both ok, technically a 404 is the right response code)”

Mueller clarified that the URL Removals Tool does not delete URLs from Google’s index but instead hides them from search results, faster than natural recrawling would. His explanation is a reminder that the tool has a temporary search visibility effect and is not a way to permanently remove a URL from Google’s index itself. The actual removal from the search index happens after Google verifies that the page is actually gone or blocked from crawling or indexing.

Featured Image by Shutterstock/Asier Romero

Google Business Profile Tests Cross-Location Posts via @sejournal, @MattGSouthern

Google Business Profile appears to be testing a feature that lets managers share the same update across multiple locations from a single dialog.

Tim Capper reported seeing the option. After publishing an update, a “Copy post” dialog appears with the prompt: “Copy the update to other profiles you manage.”

The interface displays a list of business locations with checkboxes so you can choose which profiles receive the same update.

We’ve asked Google for comment on availability and eligibility requirements and will update this article if we receive a response.

What’s New

From what’s visible in the screenshots, the workflow streamlines cross-posting for multi-location accounts.

You publish an update to one profile, then immediately see a pop-up listing other profiles you manage.

You can select one or many locations and post the same update without repeating the process.

Why It Matters

If you manage multiple locations, this could save time by reducing repetitive posting. It may also help keep messaging consistent across locations.

Make sure updates remain locally relevant before copying them everywhere.

How To Check If You Have Access

If you manage more than one profile in the same account, publish a standard update to one location.

If your account is in the test, you should see a “Copy post” dialog immediately after posting, with a list of other profiles you manage.

If You Don’t See It

Not all accounts will have access during tests. Keep posting as usual and check again periodically. If you manage many locations, confirm that all profiles are grouped under the same account with the correct permissions.

Looking Ahead

If Google proceeds with a wider launch, expect details on supported post types, scheduling, and limits. We’ll update this story if Google confirms the feature or publishes documentation.

Your Brand Is Being Cited By AI. Here’s How To Measure It via @sejournal, @DuaneForrester

Search has never stood still. Every few years, a new layer gets added to how people find and evaluate information. Generative AI systems like ChatGPT, Copilot Search, and Perplexity haven’t replaced Google or Bing. They’ve added a new surface where discovery happens earlier, and where your visibility may never show up in analytics.

Call it Generative Engine Optimization, call it AI visibility work, or just call it the next evolution of SEO. Whatever the label, the work is already happening. SEO practitioners are already tracking citations, analyzing which content gets pulled into AI responses, and adapting strategies as these platforms evolve weekly.

This work doesn’t replace SEO; rather, it builds on top of it. Think of it as the “answer layer” above the traditional search layer. You still need structured content, clean markup, and good backlinks, among the other usual aspects of SEO. That’s the foundation assistants learn from. The difference is that assistants now re-present that information to users directly inside conversations, sidebars, and app interfaces.

If your work stops at traditional rankings, you’ll miss the visibility forming in this new layer. Tracking when and how assistants mention, cite, and act on your content is how you start measuring that visibility.


Perplexity explains that every answer it gives includes numbered citations linking to the original sources. OpenAI’s ChatGPT Search rollout confirms that answers now include links to relevant sites and supporting sources. Microsoft’s Copilot Search does the same, pulling from multiple sources and citing them inside a summarized response. And Google’s own documentation for AI overviews makes it clear that eligible content can be surfaced inside generative results.

Each of these systems now has its own idea of what a “citation” looks like. None of them report it back to you in analytics.

That’s the gap. Your brand can appear in multiple generative answers without you knowing. These are the modern zero-click impressions that don’t register in Search Console. If we want to understand brand visibility today, we need to measure mentions, impressions, and actions inside these systems.

But there’s yet another layer of complexity here: content licensing deals. OpenAI has struck partnerships with publishers including the Associated Press, Axel Springer, and others, which may influence citation preferences in ways we can’t directly observe. Understanding the competitive landscape, not just what you’re doing, but who else is being cited and why, becomes essential strategic intelligence in this environment.

In traditional SEO, impressions and clicks tell you how often you appeared and how often someone acted. Inside assistants, we get a similar dynamic, but without official reporting.

  • Mentions are when your domain, name, or brand is referenced in a generative answer.
  • Impressions are when that mention appears in front of a user, even if they don’t click.
  • Actions are when someone clicks, expands, or copies the reference to your content.

These are not replacements for your SEO metrics. They’re early indicators that your content is trusted enough to power assistant answers.

If you read last week’s piece, where I discussed how 2026 is going to be an inflection year for SEOs, you’ll remember the adoption curve. During 2026, assistants are projected to reach around 1 billion daily active users, embedding themselves into phones, browsers, and productivity tools. But that doesn’t mean they’re replacing search. It means discovery is happening before the click. Measuring assistant mentions is about seeing those first interactions before the analytics data ever arrives.

Let’s be clear. Traditional search is still the main driver of traffic. Google handles over 3.5 billion searches per day; Perplexity processed 780 million queries in the entire month of May 2025. That’s roughly what Google handles in about five hours.

The data is unambiguous. AI assistants are a small, fast-growing complement, not a replacement (yet).

But if your content already shows up in Google, it’s also being indexed and processed by the systems that train and quote inside these assistants. That means your optimization work already supports both surfaces. You’re not starting over. You’re expanding what you measure.

Search engines rank pages. Assistants retrieve chunks.

Ranking is an output-aligned process. The system already knows what it’s trying to show and chooses the best available page to match that intent. Retrieval, on the other hand, is pre-answer-aligned. The system is still assembling the information that will become the answer, and that difference can change everything.

When you optimize for ranking, you’re trying to win a slot among visible competitors. When you optimize for retrieval, you’re trying to be included in the model’s working set before the answer even exists. You’re not fighting for position as much as you’re fighting for participation.

That’s why clarity, attribution, and structure matter so much more in this environment. Assistants pull only what they can quote cleanly, verify confidently, and synthesize quickly.

When an assistant cites your site, it’s doing so because your content met three conditions:

  1. It answered the question directly, without filler.
  2. It was machine-readable and easy to quote or summarize.
  3. It carried provenance signals the model trusted: clear authorship, timestamps, and linked references.

Those aren’t new ideas. They’re the same best practices SEOs have worked with for years, just tested earlier in the decision chain. You used to optimize for the visible result. Now you’re optimizing for the material that builds the result.

One critical reality to understand: citation behavior is highly volatile. Content cited today for a specific query may not appear tomorrow for that same query. Assistant responses can shift based on model updates, competing content entering the index, or weighting adjustments happening behind the scenes. This instability means you’re tracking trends and patterns, not guarantees (rankings were never guaranteed either, but they are typically more stable). Set expectations accordingly.

Not all content has equal citation potential, and understanding this helps you allocate resources wisely. Assistants excel at informational queries (“how does X work?” or “what are the benefits of Y?”). They’re less relevant for transactional queries like “buy shoes online” or navigational queries like “Facebook login.”

If your content serves primarily transactional or branded navigational intent, assistant visibility may matter less than traditional search rankings. Focus your measurement efforts where assistant behavior actually impacts your audience and where you can realistically influence outcomes.

The simplest way to start is manual testing.

Run prompts that align with your brand or product, such as:

  • “What is the best guide on [topic]?”
  • “Who explains [concept] most clearly?”
  • “Which companies provide tools for [task]?”

Use the same query across ChatGPT Search, Perplexity, and Copilot Search. Document when your brand or URL appears in their citations or answers.

Log the results. Record the assistant used, the prompt, the date, and the citation link if available. Take screenshots. You’re not building a scientific study here; you’re building a visibility baseline.

Once you’ve got a handful of examples, start running the same queries weekly or monthly to track change over time.

You can even automate part of this. Some platforms now offer API access for programmatic querying, though costs and rate limits apply. Tools like n8n or Zapier can capture assistant outputs and push them to a Google Sheet, so each row becomes a record of when and where you were cited. (To be fair, it’s more complicated than two short sentences make it sound, but it’s doable by most folks who are willing to learn some new things.)

This is how you can create your first “AI citation baseline” report if you’re willing to stay manual in your approach.
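If you do script it, the collection loop can stay very small. The sketch below assumes an OpenAI-compatible chat endpoint; the URL, model name, and environment variable are placeholders for whichever assistant API you actually have access to, not a documented integration for any specific product:

```python
import csv
import datetime
import os

import requests  # plain HTTP client; any equivalent works

# Placeholders -- swap in the real endpoint, model, and key for your assistant.
API_URL = "https://api.example-assistant.com/v1/chat/completions"
API_KEY = os.environ.get("ASSISTANT_API_KEY", "")
MODEL = "example-model"

BRAND_DOMAIN = "example.com"  # the domain whose citations you're tracking

PROMPTS = [
    "What is the best guide on [topic]?",
    "Who explains [concept] most clearly?",
    "Which companies provide tools for [task]?",
]

def ask(prompt: str) -> str:
    """Send one prompt and return the answer text (OpenAI-style response shape assumed)."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": MODEL, "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Append one row per prompt: date, assistant, prompt, cited yes/no, answer excerpt.
with open("ai_citation_baseline.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for prompt in PROMPTS:
        answer = ask(prompt)
        cited = "yes" if BRAND_DOMAIN in answer else "no"  # crude substring check
        writer.writerow([datetime.date.today().isoformat(), MODEL, prompt, cited, answer[:500]])
```

The substring check is deliberately crude; in practice you’d parse whatever citation structure the assistant actually returns and log the cited URL alongside the fields recorded earlier (assistant, prompt, date, citation link).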

But don’t stop at tracking yourself. Competitive citation analysis is equally important. Who else appears for your key queries? What content formats do they use? What structural patterns do their cited pages share? Are they using specific schema markup or content organization that assistants favor? This intelligence reveals what assistants currently value and where gaps exist in the coverage landscape.

We don’t have official impression data yet, but we can infer visibility.

  • Look at the types of queries where you appear in assistants. Are they broad, informational, or niche?
  • Use Google Trends to gauge search interest for those same queries. The higher the volume, the more likely users are seeing AI answers for them.
  • Track assistant responses for consistency. If you appear across multiple assistants for similar prompts, you can reasonably assume high impression potential.

Impressions here don’t mean analytics views. They mean assistant-level exposure: your content seen in an answer window, even if the user never visits your site.

Actions are the most difficult layer to observe, but not because assistant ecosystems hide all referrer data. The tracking reality is more nuanced than that.

Most AI assistants (Perplexity, Copilot, Gemini, and ChatGPT for paid users) do send referrer data, which appears in Google Analytics 4 as perplexity.ai / referral or chatgpt.com / referral. You can see these sources in your standard GA4 Traffic Acquisition reports.

The real challenges are:

Free-tier users don’t send referrers. Free ChatGPT traffic arrives as “Direct” in your analytics, making it impossible to distinguish from bookmark visits, typed URLs, or other referrer-less traffic sources.

No query visibility. Even when you see the referrer source, you don’t know what question the user asked the AI that led them to your site. Traditional search gives you some query data through Search Console. AI assistants don’t provide this.

Volume is still small but growing. AI referral traffic typically represents 0.5% to 3% of total website traffic as of 2025, making patterns harder to spot in the noise of your overall analytics.

Here’s how to improve tracking and build a clearer picture of AI-driven actions:

  1. Set up dedicated AI traffic tracking in GA4. Create a custom exploration or channel group using regex filters to isolate all AI referral sources in one view. Use a pattern like the excellent example in this Orbit Media article to capture traffic from major platforms ( ^https://(www.meta.ai|www.perplexity.ai|chat.openai.com|claude.ai|gemini.google.com|chatgpt.com|copilot.microsoft.com)(/.*)?$ ); a short sketch showing how the pattern behaves follows this list. This separates AI referrals from generic referral traffic and makes trends visible.
  2. Add identifiable UTM parameters when you control link placement: in content you share to AI platforms, in citations you can influence, or in public-facing URLs. Even platforms that send referrer data can benefit from UTM tagging for additional attribution clarity.
  3. Monitor “Direct” traffic patterns. Unexplained spikes in direct traffic, especially to specific landing pages that assistants commonly cite, may indicate free-tier AI users clicking through without referrer data.
  4. Track which landing pages receive AI traffic. In your AI traffic exploration, add “Landing page + query string” as a dimension to see which specific pages assistants are citing. This reveals what content AI systems find valuable enough to reference.
  5. Watch for copy-paste patterns in social media, forums, or support tickets that match your content language exactly. That’s a proxy for text copied from an assistant summary and shared elsewhere.
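Here’s the sketch promised in step 1: a quick way to sanity-check that regex against sample referrers before trusting it in a GA4 channel group. The pattern is copied verbatim from the example above (its unescaped dots technically match any character, which is harmless in practice), and the sample URLs are invented for illustration:

```python
import re

# Pattern from step 1 (via the Orbit Media example), split across two
# string literals for readability; Python concatenates them.
AI_REFERRER = re.compile(
    r"^https://(www.meta.ai|www.perplexity.ai|chat.openai.com|claude.ai"
    r"|gemini.google.com|chatgpt.com|copilot.microsoft.com)(/.*)?$"
)

# Invented sample referrers to exercise the pattern.
referrers = [
    "https://www.perplexity.ai/search?q=best+crm",
    "https://chatgpt.com/",
    "https://www.google.com/",  # ordinary search: should NOT match
    "",                         # free-tier AI traffic arrives with no referrer at all
]

for ref in referrers:
    label = "AI referral" if AI_REFERRER.match(ref) else "other / direct"
    print(f"{ref or '(no referrer)'} -> {label}")
```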

Each of these tactics helps you build a more complete picture of AI-driven actions, even without perfect attribution. The key is recognizing that some AI traffic is visible (paid tiers, most platforms), some is hidden (free ChatGPT), and your job is to capture as much signal as possible from both.

Machine-Validated Authority (MVA) isn’t visible to us; it’s an internal trust signal used by AI systems to decide which sources to quote. What we can measure are the breadcrumbs that correlate with it:

  • Frequency of citation
  • Presence across multiple assistants
  • Stability of the citation source (consistent URLs, canonical versions, structured markup)

When you see repeat citations or multi-assistant consistency, you’re seeing a proxy for MVA. That consistency is what tells you the systems are beginning to recognize your content as reliable.

Perplexity reports almost 10 billion queries a year across its user base. That’s meaningful visibility potential even if it’s small compared to search.

Microsoft’s Copilot Search is embedded in Windows, Edge, and Microsoft 365. That means millions of daily users see summarized, cited answers without leaving their workflow.

Google’s rollout of AI Overviews adds yet another surface where your content can appear, even when no one clicks through. Their own documentation describes how structured data helps make content eligible for inclusion.

Each of these reinforces a simple truth: SEO still matters, but it now extends beyond your own site.

Start small. A basic spreadsheet is enough.

Columns:

  • Date.
  • Assistant (ChatGPT Search, Perplexity, Copilot).
  • Prompt used.
  • Citation found (yes/no).
  • URL cited.
  • Competitor citations observed.
  • Notes on phrasing or ranking position.

Add screenshots and links to the full answers for evidence. Over time, you’ll start to see which content themes or formats surface most often.

If you want to automate, set up a workflow in n8n that runs a controlled set of prompts weekly and logs outputs to your sheet. Even partial automation will save time and let you focus on interpretation, not collection. Use this sheet and its data to augment what you can track in sources like GA4.

Before investing heavily in assistant monitoring, consider resource allocation carefully. If assistants represent less than 1% of your traffic and you’re a small team, extensive tracking may be premature optimization. Focus on high-value queries where assistant visibility could materially impact brand perception or capture early-stage research traffic that traditional search might miss.

Manual quarterly audits may suffice until the channel grows to meaningful scale. This is about building baseline understanding now so you’re prepared when adoption accelerates, not about obsessive daily tracking of negligible traffic sources.

Executives prefer dashboards to debates about visibility layers, so show them real-world examples. Put screenshots of your brand cited inside ChatGPT or Copilot next to your Search Console data. Explain that this is not a new algorithm update but a new front end for existing content. It’s up to you to help them understand this critical difference.

Frame it as additive reach. You’re showing leadership that the company’s expertise is now visible in new interfaces before clicks happen. That reframing keeps support for SEO strong and positions you as the one tracking the next wave.

It’s worth noting that citation practices exist within a shifting legal landscape. Publishers and content creators have raised concerns about copyright and fair use as AI systems train on and reproduce web content. Some platforms have responded with licensing agreements, while legal challenges continue to work through courts.

This environment may influence how aggressively platforms cite sources, which sources they prioritize, and how they balance attribution with user experience. The frameworks we build today should remain flexible as these dynamics evolve and as the industry establishes clearer norms around content usage and attribution.

AI assistant visibility is not yet a major traffic source. It’s a small but growing signal of trust.

By measuring mentions and citations now, you build an early-warning system. You’ll see when your content starts appearing in assistants long before any of your analytics tools do. This means that when 2026 arrives and assistants become a daily habit, you won’t be reacting to the curve. You’ll already have data on how your brand performs inside these new systems.

If you extend the concept of “data” to a more meta level, you could say it’s already telling us that the growth is starting, it’s explosive, and it’s about to affect consumer behavior. So now is the moment to take that knowledge, apply it to your day-to-day work, and start planning for how those changes will reshape it.

Traditional SEO remains your base layer. Generative visibility sits above it. Machine-Validated Authority lives inside the systems. Watching mentions, impressions, and actions is how we start making what’s in the shadows measurable.

We used to measure rankings because that’s what we could see. Today, we can measure retrieval for the same reason. This is just the next evolution of evidence-based SEO. Ultimately, you can’t fix what you can’t see. We cannot see how trust is assigned inside the system, but we can see the outputs of each system.

The assistants aren’t replacing search (yet). They’re simply showing you how visibility behaves when the click disappears. If you can measure where you appear in those layers now, you’ll know when the slope starts to change and you’ll already be ahead of it.


Featured Image: Anton Vierietin/Shutterstock


This post was originally published on Duane Forrester Decodes.

Ask A PPC: How To Manage Brand Safety In PPC via @sejournal, @navahf

Brand safety has always been part of the conversation in digital advertising, but recent shifts in the broader media landscape have brought new layers of complexity. Advertisers today are working in a climate where audience expectations, platform behavior, and public scrutiny intersect in ways that are not always easy to predict – or to manage.

In this edition of Ask A PPC, we will explore how advertisers can protect their brand’s integrity across platforms like Google and Microsoft. While this piece comes from a Microsoft employee, the goal is not to highlight one platform over another.

Whether you’re building upper-funnel brand campaigns or performance-driven media, the question of where and how your ads show up has never mattered more. What used to be a set-it-and-forget-it filter has become a strategic consideration that shapes both campaign outcomes and brand perception.

This piece explores brand safety across three key areas: where ads serve, how ads serve, and how your brand voice is carried through.

Where Ads Serve: Context Still Matters

Most PPC campaigns begin with a defined audience. Whether you’re optimizing for reach or conversion, there’s usually a persona or intent signal guiding the targeting.

But placements introduce a separate layer of decision-making. It’s not just who you’re reaching; it’s where that audience is when they see your message. Some advertisers feel comfortable casting a wide net, trusting the platform to find performance. Others prefer a more curated approach, particularly when certain environments may not align with their brand’s tone or audience expectations.

This is where brand controls come into play.

Both Google and Microsoft offer tools to help advertisers manage where their ads appear across display, video, and native inventory. On Google, these settings include “expanded,” “standard,” and “limited” inventory tiers. Microsoft takes a more category-based approach, with exclusions that cover areas like political content, mature themes, and natural disasters.

These controls can help brands preserve access to valuable, high-utility placements (e.g., major news sites) while reducing the risk of serving next to content that might feel misaligned.

There’s also the option to take a more targeted route by choosing specific placements. This can be useful if you already have a strong sense of where your audience converts or where your creative performs well. However, placement-level targeting relies on historical performance and means excluding other placements, which can make it harder to uncover new profitable inventory.

A useful test is building one campaign or ad group that leans into known placements while running a parallel one that’s fully audience-based, maintaining strict brand controls in both. This helps you balance performance with brand alignment without having to commit fully in either direction right away.

For advertisers using video placements, it’s important to understand delivery mechanics as well. Ad placement within videos (pre-roll, mid-roll, post-roll) and the type of content your ads accompany can have an impact on how your brand is perceived. Most platforms offer exclusion settings as well as frequency caps.

How Ads Serve: Maintaining Brand Integrity Through Creative Formatting

The second layer of brand safety goes beyond placements. It’s about how your ad actually appears once it serves.

Ad platforms have made significant investments in dynamic creative. Responsive formats, automated asset combinations, and AI-generated content all promise broader reach and better performance. These features can be incredibly useful for scaling campaigns, though they can introduce variability in how your brand presents.

If you work in a regulated industry, or if your brand has established tone and visual standards, this variability may not feel like a worthwhile tradeoff.

To help with that, both Google and Microsoft have released tools to give advertisers more control. Google offers creative instructions, which let you define parameters around copy, tone, colors, and visual elements. This helps ensure that even dynamically assembled ads still adhere to your guidelines.

Microsoft has integrated brand safety tools powered by Copilot, allowing advertisers to upload brand kits that include fonts, colors, and other visual standards. Copilot can also support A/B testing of creative tones, which can help teams learn how different styles resonate without stepping outside of their guardrails.

Whether or not you choose to lean into these dynamic features depends on your goals and internal thresholds. Some brands may prioritize reach and performance over strict formatting control. Others may want to preserve consistency across every touchpoint. Neither choice is inherently better, and it helps to be clear on what level of flexibility your brand is comfortable with.

Brand Voice: Values, Budget Allocation, And Long-Term Trust

The final piece of brand safety has less to do with campaign setup and more to do with organizational alignment. In short: How do your media decisions reflect your brand values?

This part of the conversation has become more visible in recent years. Public reactions to brand placement decisions have ranged from quiet disengagement to full-scale boycotts. Social media has made it easier for consumers to surface concerns and ask questions about where ad dollars are going.

There’s no single right way to navigate this. Every brand operates with its own set of priorities, risk tolerance, and customer expectations. What one company sees as a necessary stance, another may see as outside its scope.

What can help is having clarity. When you know what your brand stands for, and where those values show up in media strategy, you’re in a better position to make confident decisions about where to invest and where to pause.

If a content environment shifts in a way that no longer feels aligned, it may make sense to reallocate spend. That’s not just a brand safety response; it’s a brand clarity move. It sends a signal to your team and your audience that your budget decisions are rooted in something consistent.

This is also where trust becomes part of the performance equation. If your audience senses that your brand is inconsistent about where and how it shows up, that can erode the relationship you’ve built.

No strategy will remove all risk. Internal alignment on what matters can help reduce ambiguity and create a more resilient brand presence over time.

Final Takeaways: Brand Safety As A Strategic Layer

Brand safety in PPC is not just a reactive setting. It’s a foundational principle that influences everything from targeting to performance to brand perception. Here are three go-dos:

  1. Understand how placements happen. Review inventory settings, set clear exclusions where needed, and test into new placements thoughtfully. Context matters.
  2. Audit how your creative is formatted. Use platform tools to guide dynamic creative toward your standards. Decide how much flexibility makes sense for your brand and opt out of formatting changes that feel misaligned.
  3. Let your values shape your budget. Internal clarity helps guide external decisions. Know where your brand draws the line and structure your media investments to reflect that understanding.

If you have a PPC question you want answered in a future edition of Ask A PPC, send it in!


Featured Image: Paulo Bobita/Search Engine Journal

New WordPress Vibe Coding Simplifies Building Websites via @sejournal, @martinibuster

10Web, an AI website-building platform, launched Vibe for WordPress, an AI-based site builder that works natively with WordPress. Vibe for WordPress aims to simplify and scale the process of creating websites.

Conversational AI WordPress Development

Vibe for WordPress enables users to build websites by explaining what they need in conversational language. It generates a working WordPress site that can be refined in chat, in the drag-and-drop visual editor, or in code mode. This process links AI-generated prototypes with WordPress’s live environment, minimizing manual setup or reliance on outside CMS tools.

Features and Integration

According to 10Web, Vibe connects to the WordPress backend, offering access to plugins, WooCommerce for e-commerce, user management, and built-in SEO tools. The hosted stack includes CDN, SSL, and backups, making each project ready for production. It is open source, so developers can modify or migrate code freely.

By combining AI-based frontend building with the WordPress backend, 10Web positions Vibe as a bridge between flexible AI creation and open-source infrastructure.

10Web describes the benefits:

  • “Unlimited Frontend Freedom — Build any layout, interaction, or animation—no drag-and-drop limits.
  • Real WordPress Backend — Plugins, auth, content models, and WooCommerce (soon) baked in.
  • Prompt → Website — Generate full sites from a prompt, then refine via chat or direct code.
  • All-in-One Hosted Stack — Managed hosting, security, performance tools, backups—plus open-source flexibility.
  • Flexible Delivery — Use the platform today; API, self-hosted, and white-label are on the roadmap.”

Future Roadmap and Availability

Planned updates include WooCommerce support for ecommerce, custom post type support, Figma and screenshot-based prompts, API access, a self-hosted option, and white-labeling.

Read more at 10Web:

10Web Unveils First AI-Powered Vibe Coding Frontend Builder with Complete WordPress Backend

Featured Image by Shutterstock/Reyburn

Big Tech’s big bet on a controversial carbon removal tactic

Over the last century, much of the US pulp and paper industry crowded into the southeastern corner of the nation, setting up mills amid sprawling timber forests to strip the fibers from juvenile loblolly, longleaf, and slash pine trees.

Today, after the factories chip the softwood and digest it into pulp, the leftover lignin, spent chemicals, and remaining organic matter form a dark, syrupy by-product known as black liquor. It’s then concentrated into a biofuel and burned, which heats the towering boilers that power the facility—and releases carbon dioxide into the air.

Microsoft, JPMorgan Chase, and a tech company consortium that includes Alphabet, Meta, Shopify, and Stripe have all recently struck multimillion-dollar deals to pay paper mill owners to capture at least hundreds of thousands of tons of this greenhouse gas by installing carbon-scrubbing equipment in their facilities.

The captured carbon dioxide will then be piped down into saline aquifers more than a mile underground, where it should be sequestered permanently.

Big Tech is suddenly betting big on this form of carbon removal, known as bioenergy with carbon capture and storage, or BECCS. The sector also includes biomass-fueled power plants, waste incinerators, and biofuel refineries that add carbon capturing equipment to their facilities.

Since trees and other plants absorb carbon dioxide through photosynthesis and these factories will trap emissions that would have gone into the air, together they can theoretically remove more greenhouse gas from the atmosphere than was released, achieving what’s known as “negative emissions.”

The companies that pay for this removal can apply that reduction in carbon dioxide to cancel out a share of their own corporate pollution. BECCS now accounts for nearly 70% of announced carbon removal contracts, a popularity due largely to the fact that it can be tacked onto industrial facilities already operating at large scale.

“If we’re balancing cost, time to market, and ultimate scale potential, BECCS offers a really attractive value proposition across all three of those,” says Brian Marrs, senior director of energy and carbon removal at Microsoft, which has become by far the largest buyer of carbon removal credits as it races to balance out its ongoing emissions by the end of the decade.

But experts have raised a number of concerns about various approaches to BECCS, stressing they may inflate the climate benefits of the projects, conflate prevented emissions with carbon removal, and extend the life of facilities that pollute in other ways. It could also create greater financial incentives to log forests or convert them to agricultural land. 

When greenhouse-gas sources and sinks are properly tallied across all the fields, forests, and factories involved, it’s highly difficult to achieve negative emissions with many approaches to BECCS, says Tim Searchinger, a senior research scholar at Princeton University. That undermines the logic of dedicating more of the world’s limited land, crops, and woods to such projects, he argues.

“I call it a ‘BECCS and switch,’” he says, adding later: “It’s folly at some level.”

The logic of BECCS

For a biomass-fueled power plant, BECCS works like this:

A tree captures carbon dioxide from the atmosphere as it grows, sequestering the carbon in its bark, trunk, branches, and roots while releasing the oxygen. Someone then cuts it down, converts it into wood pellets, and delivers it to a power plant that, in turn, burns the wood to produce heat or electricity.

Usually, that facility will produce carbon dioxide as the wood incinerates. But under both European Union and US rules, the burning of the wood is generally treated as carbon neutral, so long as the timber forests are managed in sustainable ways and the various operations abide by other regulations. The argument is that the tree pulled CO2 out of the air in the first place, and new plant growth will bring that emissions debt back into balance over time. 

If that same power plant now captures a significant share of the greenhouse gas produced in the process and pumps it underground, the process can potentially go from carbon neutral to carbon negative.

But the starting assumption that biomass is carbon neutral is fundamentally flawed, because it doesn’t fully take into account other ways that emissions are released throughout the process, according to Searchinger.

Among other things, a proper analysis must also ask: How much carbon is left behind in roots or branches on the forest floor that will begin to decompose and release greenhouse gases after the plant is removed? How much fossil fuel was burned in the process of cutting, collecting, and distributing the biomass? How much greenhouse gas was produced while converting timber into wood pellets and shipping them elsewhere? And how long will it take to grow back the trees or plants that would have otherwise continued capturing and storing carbon?
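Rolled together, those questions amount to a simple balance. As a purely illustrative sketch (the symbols here are mine, not a formal accounting standard), a project only achieves negative emissions when the captured carbon outweighs everything else:

```latex
C_{\mathrm{captured}} \;>\; \underbrace{E_{\mathrm{harvest}} + E_{\mathrm{processing}} + E_{\mathrm{transport}} + E_{\mathrm{decay}}}_{\text{supply-chain and decomposition emissions}} \;+\; \underbrace{C_{\mathrm{foregone}}}_{\text{future uptake lost with the trees}}
```

Searchinger’s argument is that the right-hand side is routinely undercounted once biomass is assumed to be carbon neutral.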

“If you’re harvesting wood, it’s essentially impossible to get negative emissions,” Searchinger says.

Burning biomass, or the biofuels created from it, can also produce other forms of pollution that can harm human health, including particulate matter, volatile organic compounds, sulfur dioxide, and carbon monoxide.

Preventing carbon dioxide emissions at a given factory may necessitate capturing certain other pollutants as well, notably sulfur dioxide. But it doesn’t necessarily filter out all the other pollution floating out of the flue stack, notes Emily Grubert, an associate professor of sustainable energy policy at the University of Notre Dame who focuses on carbon management issues and the transition away from fossil fuels. 

Driving demand

The idea that we might be able to use biomass to generate energy and suck down carbon dates back decades. But as global temperatures and emissions both continued to rise, climate modelers found that more and more BECCS or other types of carbon removal would be needed to prevent the planet from tipping past increasingly dangerous warming thresholds.

In addition to dramatic cuts in emissions, the world may need to suck down 11 billion tons of carbon dioxide per year by 2050 and 20 billion by 2100 to limit warming to 2 °C over preindustrial levels, according to a 2022 UN climate panel report. That’s a threshold we’re increasingly likely to blow past.

These grave climate warnings sparked growing interest and investments in ways to draw carbon dioxide out of the atmosphere. Companies sprang up offering to sink seaweed, bury biomass, develop carbon-sucking direct air capture factories, and add alkaline substances to agricultural fields or the oceans. 

But BECCS purchases have dwarfed those other approaches.

For companies with fast-approaching climate deadlines, BECCS is one of the few options for removing hundreds of thousands of tons over the next few years, says Robert Höglund, who cofounded CDR.fyi, a public-benefit corporation that analyzes the carbon removal sector.

“If you have a target you want to meet in 2030 and you want durable carbon removal, that’s the thing you can buy,” he says.

That’s chiefly because these projects can harness the infrastructure of existing industries. At least for now, you don’t have to finance, permit, and develop new facilities.

“They’re not that hard to build, because it’s often a retrofitting of an existing plant,” Höglund says. 

BECCS is also substantially less expensive for buyers than, say, direct air capture: among the deals to date, weighted average prices run $210 a ton versus $490, according to CDR.fyi. That's in part because capturing carbon dioxide from, say, a pulp and paper mill, where it makes up around 15% of the flue gas, takes far less energy than plucking CO2 molecules out of the open air, where they account for just 0.04%.
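
The physics backs up that intuition. As a rough back-of-envelope sketch (my own arithmetic, not CDR.fyi's), the thermodynamic minimum work needed to pull a ton of CO2 out of a gas stream grows with the logarithm of its dilution:

```python
import math

R = 8.314      # universal gas constant, J/(mol*K)
T = 298.15     # ambient temperature, K
M_CO2 = 44.01  # molar mass of CO2, g/mol

def ideal_capture_work_kwh_per_ton(mole_fraction: float) -> float:
    """Thermodynamic minimum work to extract one metric ton of pure CO2
    from a large gas stream at the given CO2 mole fraction.
    Real plants need several times this ideal floor."""
    joules_per_mol = R * T * math.log(1.0 / mole_fraction)
    mols_per_ton = 1e6 / M_CO2
    return joules_per_mol * mols_per_ton / 3.6e6  # convert J to kWh

print(f"Pulp-mill flue gas (15% CO2): {ideal_capture_work_kwh_per_ton(0.15):.0f} kWh/ton")
print(f"Ambient air (0.04% CO2):      {ideal_capture_work_kwh_per_ton(0.0004):.0f} kWh/ton")
```

That works out to roughly 30 versus 120 kilowatt-hours per ton. Real capture plants consume several times these ideal floors, and dilute streams also demand more hardware per ton, but the roughly fourfold gap in the theoretical minimum points the same way as the prices.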

Microsoft’s big BECCS bet

In 2020, Microsoft announced plans to become carbon negative by the end of this decade and, by midcentury, to remove all the emissions the company generated directly and from electricity use throughout its corporate history. 

It’s leaning particularly heavily on BECCS to meet those climate commitments, with the category accounting for 76% of its known carbon removal purchases to date.

In April, the company announced it would purchase 3.7 million tons of carbon dioxide that a paper and pulp mill, located at an unspecified site in the southern US, will eventually capture and store over a 12-year period. It reached the deal through CO280, a startup based in Vancouver, British Columbia, that is forming joint ventures with paper and pulp mill companies in the US and Canada to finance, develop, and operate the projects.

It was the biggest carbon removal purchase on record—until four days later, when Microsoft revealed it had agreed to buy 6.75 million tons of carbon removal from AtmosClear, CDR.fyi noted. That company is building a biomass power plant at the Port of Greater Baton Rouge in Louisiana, which will run largely on sugarcane bagasse (a by-product of sugar production) and forest trimmings. AtmosClear says the facility will be able to capture 680,000 tons of carbon dioxide per year.

“What we’ve seen is a lot of these BECCS projects have been very helpful, if not transformational, for providing investment in rural economies,” Marrs says. “We look at our BECCS deals, in Louisiana with AtmosClear and some other Gulf State providers, like CO280, as a real means of helping support these economies, while at the same time promoting sustainable forestry practices.”

In earlier quarters, Microsoft also made substantial purchases from Orsted, which operates power plants that burn wood pellets; Gaia, which runs facilities that convert municipal waste into energy; and Arbor, whose plants are fueled by “overgrown brush, crop residues, and food waste.” 

Don’t let waste go to waste

Notably, at least three of these projects rely on some form of waste, a category distinct from fresh-cut timber or crops grown for the purpose of fueling BECCS projects. Solid waste, agricultural residues, logging leftovers, and plant material removed from forests to prevent fires present some of the ripest opportunities for BECCS—as well as some difficult questions of carbon accounting.

A 2019 report from the National Academy of Sciences estimated that, by relying solely on agricultural by-products, logging residues, and organic waste, the US could achieve more than 500 million tons of carbon removal a year through BECCS by 2040, and the world more than 3.5 billion tons, without needing to grow crops dedicated to energy.

Roger Aines, chief scientist of the energy program at Lawrence Livermore National Laboratory, argues we should at least be putting these sources to use rather than burning them or leaving them to decompose in fields. (Aines coauthored a similar analysis focused on California’s waste biomass and contributed to a 2022 lab report prepared for Microsoft to evaluate costs and options for carbon removal purchases.)

He stresses that the BECCS sector can learn a lot from using that waste material. For example, it should help to provide a sharper sense of whether the carbon math will work if more land, forests, and crops are dedicated to these sorts of purposes.

“The point is you won’t grow new material to do this in most cases, and won’t have to for a very long time, because there’s so much waste available,” Aines says. “If we get to that point, long into the future, we can address that then.”

Wonky accounting

But the critical question that emerges with waste is: Would it otherwise have been burned or allowed to decompose, or might some of it have been used in some other way that kept the carbon out of the atmosphere? 

Sugarcane bagasse, for instance, is already used, or could be, to produce recyclable packaging and paper, biodegradable food containers and cutlery, building materials, and soil amendments that add nutrients back to agricultural fields.

“A lot of the time those materials are being used for something else already, so the accounting gets wonky really quickly,” Grubert says. 

Some fear that the financial incentives to pursue BECCS could also compel companies to trim away more trees and plants than is truly necessary to, say, manage forests or prevent fires—particularly as more and more BECCS plants create greater and greater demand for the limited supplies of such materials.

“Once you start capturing waste, you create an incentive to produce waste, so you have to be very careful about the perverse incentives,” says Danny Cullenward, a researcher and senior fellow at the Kleinman Center for Energy Policy at the University of Pennsylvania who studies carbon markets.

Due diligence 

Like other big tech companies, Microsoft has lost some momentum when it comes to its climate goals, in large part because of the surging energy demands of its AI data centers. 

But the company has generally earned a reputation for striving to clean up its direct emissions where possible and for seeking out high-quality approaches to carbon removal. It has consulted extensively with critically minded researchers at advisory firms like Carbon Direct and demonstrated a willingness to pay higher prices to support more credible projects.

Marrs says the company has extended that scrutiny to its BECCS deals.

“We want as much positive environmental impact as possible from every project,” he says.

“We’re doing months and months of technical due diligence with teams that visit the site, that interview stakeholders, that produce a report for us that we go through in depth with a third-party engineering provider or technical perspective provider,” he adds.

In a follow-up statement, Microsoft stressed that it strives to validate that every BECCS project it supports will achieve negative emissions, whatever the fuel source.

“Across all of these projects, we conducted substantial due diligence to ensure that BECCS feedstocks would otherwise return carbon to the atmosphere in a few years,” the company said. 

Likewise, Jonathan Rhone, the cofounder and chief executive of CO280, stresses that they’ve worked with consultants, carbon market registries, and pulp and paper mills “to make sure we’re adopting the best standards.” He says they strive to conservatively assess the release and uptake of greenhouse gases across the supply chain of the mills they work with, taking into account the type of biomass used by a given plant, the growth rate of the forests it’s harvested from, the distance trucks drive to ship the timber or sawmill residues, the total emissions of the facility, and more.

Rhone says the company's typical projects will capture and store away on the order of 850,000 to 900,000 tons of carbon dioxide per year. What share of a given mill's total emissions that represents will vary, based in part on how much of the facility's energy comes from by-product biomatter and how much comes from fossil fuels. For its first projects, the company will aim to capture 50% to 65% of the CO2 emissions at the pulp and paper mills, but it eventually hopes to exceed 90%.
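
Taken at face value, those numbers also imply a rough size for the mills involved (my arithmetic, not a figure CO280 has published):

```python
# Back-of-envelope arithmetic implied by Rhone's figures, not numbers
# CO280 has published itself.
captured_low, captured_high = 850_000, 900_000  # tons CO2 captured per year
share_low, share_high = 0.50, 0.65              # fraction of mill CO2 captured at first

# The smallest mill consistent with the figures captures the largest share;
# the largest mill captures the smallest share.
implied_min = captured_low / share_high
implied_max = captured_high / share_low

print(f"Implied total mill emissions: {implied_min:,.0f} to {implied_max:,.0f} tons CO2/year")
```

In other words, each mill would be emitting somewhere in the neighborhood of 1.3 million to 1.8 million tons of carbon dioxide per year.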

In a follow-up email, Rhone said the carbon capture equipment at the mills it works with will also prevent “substantial levels” of particulate matter and sulfur dioxide emissions and might reduce emissions of other pollutants as well.

The company is in active discussions with 10 pulp and paper mills along the Gulf Coast and in Canada. Each carbon capture and storage project could cost hundreds of millions of dollars.

“What we’re trying to do at CO280 is show and demonstrate that we can create a stable, repeatable playbook for developing projects that are low risk and provide the market with what it wants, with what it needs,” Rhone says. 

Proponents of BECCS say we could leverage biomass to deliver substantial volumes of carbon removal, so long as appropriate industry standards are put in place to prevent, or at least minimize, bad behavior.

The question is whether that will be the case—or whether, as the BECCS sector matures, it will veer closer to the pattern of carbon offset markets. 

Studies and investigations have consistently shown that loosely regulated or poorly designed carbon credit and offset programs have allowed, if not invited, companies to significantly exaggerate the climate benefits of tree planting, forest preservation, and similar projects. 

“It appears to me to be something that will be manageable but that we’ll always have to keep an eye on,” Aines says. 

Magic

Even with all these carbon accounting complexities, BECCS projects can often deliver climate benefits, particularly for existing plants.

Adding carbon capture to an operating paper and pulp mill, power plant, or refinery is at least an improvement over the status quo from a climate perspective, insofar as it prevents emissions that would otherwise have continued.

But ambitions for BECCS are already growing beyond existing plants: Last year Drax, the controversial UK power giant, announced plans to launch a Houston-based division tasked with developing enough new BECCS projects to deliver 6 million tons of carbon removal per year, in the US or elsewhere.

Numerous other companies have also built or proposed biomass power plants in recent years, with or without carbon capture systems—decisions driven in part by policies that classify them as carbon neutral.

But if biomass isn’t carbon neutral, as Searchinger and others argue it can’t be in many applications, then these new unfiltered power plants are just adding more emissions to the atmosphere, and BECCS projects aren’t drawing any out of the air. If that’s the case, it raises tough questions about corporate climate claims that depend on those negative emissions, as well as about the societal trade-offs involved in building lots of new plants dedicated to these purposes.

That’s because crops grown for energy require land, fertilizer, insecticides, and human labor that might otherwise go toward producing food for an expanding global population. And greater demand for wood invites the timber industry to chop down more and more of the world’s forests, which are already sucking up and storing away vast amounts of carbon dioxide and providing homes for immense varieties of plants and animals.

If these projects are merely preventing greenhouse gas from floating into the atmosphere but not drawing any down, we’re better off adding carbon capture and storage (CCS) equipment to an existing natural-gas plant instead, Searchinger argues.

Companies may think that harnessing nature to draw carbon dioxide out of the sky sounds better than cutting the emissions of a fossil-fuel turbine. But the electricity from the latter plant would cost dramatically less, the carbon capture system would cut more emissions for the same amount of energy generated, and it would avoid the added pressure to chop down trees, he says.

“People think some magic happens—this magic combination of using biomass and CCS creates something bigger than its parts,” Searchinger says. “But it’s not magic; it’s simply the sum of the two.”