LLM Traffic Is Shrinking via @sejournal, @Kevin_Indig

LLM referral traffic has grown +65% year-to-date. But we should plan for zero going forward.

LLM Referral Traffic Is Shrinking

LLM referral traffic in B2B grew +65.1% since January – but dropped -42.6% since July.

Image Credit: Kevin Indig

My December prediction of 50% organic by 2027 is dead:

  • In December 2024, I analyzed six B2B sites and found LLM referral traffic was growing at such a fast rate it would make up 50% of organic traffic in three years.
  • Today, I’m finding the monthly growth rate of LLM traffic dropped from 25.1% in 2024 to 10.4% in November 2025.
  • Even from January to July 2025, the average growth rate was lower (19.2%) than my projection. That’s fast, but not enough to reach 50% organic traffic in three years.

LLM contribution to organic traffic grew from 0.14% in 2024 to 1.10% in 2025, which is more than I projected (0.79%).
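To make the projection arithmetic concrete, here is a minimal sketch, assuming a simplified model in which the LLM share of organic traffic compounds at a fixed monthly rate while organic stays flat (the article's underlying model may be more nuanced):

```python
import math

def months_to_reach(start_share: float, target_share: float, monthly_growth: float) -> float:
    """Months for a share compounding at `monthly_growth` to hit `target_share`."""
    return math.log(target_share / start_share) / math.log(1 + monthly_growth)

start = 0.0014  # LLM referrals were 0.14% of organic traffic in 2024

# At the original ~25.1% monthly growth rate, 50% is reachable within three years:
print(round(months_to_reach(start, 0.50, 0.251)))  # ~26 months

# At the 10.4% rate observed in November 2025, it takes roughly five years:
print(round(months_to_reach(start, 0.50, 0.104)))  # ~59 months
```

Under these assumptions, the drop in the monthly growth rate pushes the 50% milestone well past the three-year horizon of the original prediction.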

Image Credit: Kevin Indig

But with organic traffic falling due to AI Overviews, this growth becomes meaningless.

Fewer Citations Despite Growing Usage

In August, several factors influenced LLM referral traffic:

  1. Seasonality: Siege Media documented that B2B sites lost LLM traffic in August due to vacation season.
  2. Router: GPT-5, which launched on August 7, has a router that picks which model answers a prompt. The router favors non-reasoning models, which show fewer citations and send less traffic out.
  3. Concentration: Josh from Profound found a higher concentration of referrals to Reddit and Wikipedia starting late July.

Business seasonality appears to have a lower impact than expected: neither ChatGPT (consumer-focused) nor Claude (business-focused) sees a decrease in site visits.

Image Credit: Kevin Indig
Image Credit: Kevin Indig

ChatGPT mentions, however, dropped by one-third in October and continued dropping into November.

Image Credit: Kevin Indig

Citations for large domains like Reddit or Wikipedia follow suit (based on Profound data).

Major sites see citation declines in September (Image Credit: Kevin Indig)

Conclusion: LLM visits are up, which rules out seasonality as the dominant cause. The driver of lower referral traffic is ChatGPT showing fewer citations due to its model router.

Visibility Is The Real Price

Traffic was never the right way to value LLMs because LLMs make clicks redundant:

  • The AI Mode study I published last month found that clicks occurred almost exclusively for shopping-related tasks; for everything else, the zero-click share was ~100%.
  • Pew Research has found that only 1% of users click links in AI Overviews.

Focusing on traffic leads to disappointing results. ChatGPT is more like TikTok than Google Search. The currency of the AI world is visibility.

The good news: LLMs grow the pie. Semrush found people don’t use Google less often because they also use ChatGPT. If LLMs are additive to Google Search, the visibility surface grows even though clicks per source shrink. You have more places to be seen, fewer clicks per place.

But our success metrics need to change. Referral traffic works for neither ChatGPT nor Google, as AI Overviews and AI Mode swallow more clicks. Instead, we need to adopt a visibility-first approach.

Default To Zero LLM Traffic

  1. Track LLM and organic search seasonality for your vertical to measure the total pie of citations and make sense of drops/spikes.
  2. Monitor total citation and mention counts to answer the question, “Are we growing because the market grows?” Lower citations/mentions mean fewer chances to influence purchase decisions.
  3. Prioritize brand mentions over citations in LLMs. Mentions without links drive familiarity and influence purchase decisions.
  4. Stop expecting (meaningful) LLM referral traffic. Budget for visibility.
  5. Invest resources where LLMs go to train: UGC and third-party reviews like Reddit, YouTube, review sites, community forums.



Featured Image: Paulo Bobita/Search Engine Journal

Holiday PPC Guide 2025: Advanced Strategies For Smarter Bidding, Budgets & Audiences via @sejournal, @siliconvallaeys

The holiday season this year brings more competition than ever, and the shopper journey is also shifting. Consumers begin research weeks earlier, often starting in October, and rely on conversational AI or chatbot-style searches to compare products. Microsoft’s holiday insights show that shopping behavior kicks off in October, with many November and December conversions originating from clicks made weeks earlier.

The funnel is changing shape: wider at the top as more shoppers browse early, but shorter at the bottom as they move quickly once urgency kicks in. The key lesson is that PPC strategy must nurture intent early and be ready for compressed buying cycles when urgency arrives.

Holiday shoppers are beginning earlier, researching longer, and converting later. The funnel is wider than ever, but also shorter once the urgency hits.

Bidding: Winning The Ad Auction

Don’t Fear Expensive Clicks, Fear Unprofitable Ones

Holiday auctions bring higher costs per click (CPCs), a natural result of more advertisers competing for limited inventory. Success is not about avoiding CPC increases but about maintaining strong return on ad spend (ROAS) and protecting profit margins. Teikametrics’ Black Friday and Cyber Monday (BFCM) data confirms that CPCs climb seasonally, peaking on those two days.

Smart Bidding goals should be tied to profitability, not just revenue, and portfolio bidding can help balance volatility across campaigns. Microsoft and Google also recommend applying seasonality bid adjustments before major holidays so automation anticipates conversion spikes.

Pro Tip: Set seasonality adjustments 24-48 hours before and after Black Friday and Cyber Monday to help Smart Bidding avoid over- or under-reacting.
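As a rough illustration, the modifier that a seasonality adjustment expects is simply the ratio of the event-period conversion rate to your baseline. The numbers below are hypothetical:

```python
def seasonality_modifier(baseline_cvr: float, event_cvr: float) -> float:
    """Ratio of expected to normal conversion rate: 1.5 means
    'expect conversions to be 50% more likely during the event'."""
    return event_cvr / baseline_cvr

# Hypothetical: last year's Black Friday CVR was 6% vs. a 4% November baseline.
modifier = seasonality_modifier(baseline_cvr=0.04, event_cvr=0.06)
print(f"{modifier:.2f}")  # 1.50 -> enter as a +50% seasonality adjustment
```

Derive the baseline from a stable window near the event (e.g., early November), not from an annual average, so the modifier reflects the genuine event lift.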

Smart Bidding With Guardrails: Train The Machine

Automation is powerful, but it is not infallible. It needs monitoring and guardrails. Trust tROAS or tCPA when conditions are stable, but ensure you have bid limits (through portfolio bidding) and guardrails to alert you about unusual performance during peak periods when volatility spikes.

Real-World Example: Last BFCM, a large retailer client of ours using offline conversion import (OCI) saw conversions suddenly vanish. Optmyzr automation flagged the anomaly right away, revealing a Google-side glitch in OCI reporting. Without that safeguard, Smart Bidding would have assumed conversions had dried up and slashed bids during the most important shopping week of the year. Guardrails prevented disaster.

Key Take: Automation doesn’t eliminate risk; it changes the type of risk. Without guardrails, a data glitch can quietly sabotage your bids. With guardrails, you catch it before it becomes a disaster.

Inventory And Feed-Aware Bidding: Don’t Burn Budget On Out-Of-Stock

Holiday shoppers expect items to be in stock, priced competitively, and available with fast delivery. Automating feed hygiene to pause out-of-stock products is essential. Structuring campaigns by margin allows for different tROAS bids that achieve your target profitability.

Pro Tip: If your price is not competitive, shift spend toward SKUs where you can compete on both offer and margin.

And before you worry about bids, ensure the feed can win the impression. Tighten mobile-friendly titles and human-readable attributes (e.g., use “light brown,” not obscure color names), add seasonal terms like “Black Friday deals,” and fix disapprovals early so you don’t lose visibility when auctions heat up.

Create label taxonomies that align with your profit strategy, like “hero products,” “doorbusters,” “low-margin,” “last-chance,” so you can direct bids and budgets to what actually drives profits.
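A minimal sketch of applying such a taxonomy to a feed; the field names, labels, and thresholds here are hypothetical, and a real setup would write the result into Merchant Center custom labels:

```python
# Hypothetical product feed rows; in practice these come from your feed export.
products = [
    {"sku": "A1", "in_stock": True,  "margin": 0.45, "price_competitive": True},
    {"sku": "B2", "in_stock": False, "margin": 0.30, "price_competitive": True},
    {"sku": "C3", "in_stock": True,  "margin": 0.08, "price_competitive": False},
]

def custom_label(p: dict) -> str:
    if not p["in_stock"]:
        return "exclude"        # don't burn budget on out-of-stock SKUs
    if p["margin"] < 0.10:
        return "low-margin"     # route to a campaign with a stricter tROAS
    if p["price_competitive"]:
        return "hero-product"   # competitive on both offer and margin
    return "monitor"            # uncompetitive price: shift spend elsewhere

labels = {p["sku"]: custom_label(p) for p in products}
print(labels)  # {'A1': 'hero-product', 'B2': 'exclude', 'C3': 'low-margin'}
```

Each label then maps to its own campaign or asset group with a bid target that matches its profitability.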

Case Study Insight: When Amazon briefly exited the Google Ads auction, Optmyzr’s analysis showed other advertisers gained clicks at lower CPCs, but ROAS did not improve. Shoppers were expecting Amazon, and when they did not find it, they often failed to convert with alternatives. Winning an auction is meaningless if the offer and expectations do not align.

Budgeting: Flexibility Wins

Turn On Campaigns Now And Control Delivery With Budgets

I normally recommend pausing campaigns that are not needed, rather than reducing their budgets to a very low amount to keep them active. Advertisers sometimes use budget rather than status to “pause” a campaign because they fear the dreaded learning period that may kick in when a campaign is enabled after an extensive period of inactivity.

Pausing does not erase Google’s memory, since “learning” reflects new auction contexts rather than forgotten history. Longer pauses, however, risk drift as consumer behavior shifts. The bigger issue is that paused campaigns with new ads will not undergo review until they are re-enabled, which can delay serving during crucial moments.

So during BFCM, there are good reasons to use budget rather than status because it keeps campaigns actively learning about shifts in consumer behavior, and it ensures new creatives go into the approval process.

Holiday Pitfall Alert: Do not pause campaigns with unapproved creatives close to Black Friday. Get ads reviewed in advance.

Intraday Pacing: Don’t Get Fooled By Conversion Lag

Static daily budgets can be damaging in volatile holiday conditions. Dynamic pacing using scripts or APIs is a better approach, especially when aligned with key milestones like Black Friday, Cyber Monday, shipping cutoffs, and last-minute windows.

On Black Friday and Cyber Monday, pacing must be monitored throughout the day. Hourly reporting in Google Ads makes this possible, but advertisers must also account for conversion lag.

Looking at last year’s data, conversions appear smooth by the hour because lag has already resolved. On the day, however, conversions will often appear behind pace even when clicks and impressions are aligned. Saving hourly reports as the day unfolds will provide a baseline for analyzing lag in future years.
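A sketch of that baseline idea, assuming you have saved last year's hourly reports; the lag curve and numbers here are hypothetical:

```python
# Fraction of the day's *final* conversions that had been reported by each
# checkpoint hour, built from hourly reports saved last year (hypothetical):
reported_share_by_hour = {12: 0.35, 18: 0.60, 23: 0.85}

def lag_adjusted_projection(reported_conversions: float, hour: int) -> float:
    """Estimate where the day will land once conversion lag resolves."""
    return reported_conversions / reported_share_by_hour[hour]

# 70 conversions reported by noon looks behind a 250/day pace, but projects
# to 200 once lag is accounted for -- don't cut budgets on the raw number.
print(lag_adjusted_projection(70, 12))
```

The projection is only as good as the lag curve, which is why saving hourly snapshots during this year's BFCM pays off next year.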

Pro Tip: Do not confuse lag with poor performance. Cutting budgets midday can mean missing the evening conversion surge.

Lock in your total Q4 budget and earmark a supplemental pool for Black Friday, Cyber Monday, and the biggest shopping weekends. Expect higher CPCs and raise day caps accordingly so campaigns don’t exhaust at noon. Finally, audit your automations – safety scripts that pause or cap spend are helpful, but if they fire at the wrong time during BFCM, they can suppress profitable traffic.

Targeting

Audience Signals Are Your Multiplier

First-party data goes beyond CRM lists. It includes your business’s unit economics, such as pricing and profit margins, which can guide automation toward profitability rather than vanity ROAS.

Key Take: First-party data is not only about who your customers are, but also includes all your business data, including how you price. Leverage this to guide when you run ads and how much you bid.

Microsoft has a unique feature that Google doesn’t have: impression-based remarketing, which allows advertisers to retarget users who saw their ads but did not click. This expands reach to pre-qualified audiences and often reduces costs. Combining CRM imports, impression-based remarketing, and profit-based bidding provides automation with richer signals.

Keywords And Keywordless Targeting

With match types getting broader every year, and the growth in keywordless campaign types like Performance Max, advertiser control over queries is eroding. This trend will continue as users shift from keyword searches to prompting, and Google eventually replaces synthetic keywords with a more precise targeting system.

Performance Max is performing well, and we shared details about what trends are working best in our PMax study. AI Max, on the other hand, doesn’t feel quite as ready for primetime, though there is unverified speculation that a September 2025 algorithm update improved performance significantly. Test AI Max using Experiments before setting it loose on your BFCM traffic this year.

Creative: Stand Out In Crowded Auctions

Ads That Win Auctions: CTR Beats Clever Copy

Auctions for bottom-of-the-funnel search ads reward click-through rate (CTR) and predicted CTR, not witty copy. Coverage and clarity matter most. Ad headlines, descriptions, and assets (formerly ad extensions) should be updated with current promotions, shipping cutoffs, and urgency messaging.

However, with 15 potential headlines that Google can choose from for your ad, controlling what is most important to include in messaging requires pinning during BFCM.

Optmyzr’s soon-to-be-published 2025 Responsive Search Ads (RSA) study shows that advertisers who pin multiple variations to the same position achieve better ROAS. Pinning one element restricts the machine too much, while no pinning gives it too much freedom. Multi-asset pinning balances human guidance with algorithmic optimization. Google’s RSA guidance confirms that variation improves performance.

Pro Tip: Plan RSAs in waves and use multi-asset pinning to balance brand strategy with system optimization.

Keep It Fresh: Creative Burnout Happens Faster In Q4

Shoppers tire quickly of repetitive ads, especially in Demand Gen campaigns. But even search ads should be kept fresh, and ads should be staged in waves to appeal to Black Friday and Cyber Monday shoppers, and reflect shipping cutoffs, last-minute gifts, and post-holiday clearance as the holidays approach.

Pre-loading assets ensures they are reviewed and ready to serve. Countdown customizers and promotion extensions can reinforce urgency, but messaging must stay consistent with site offers to maintain trust.

Pro Tip: Schedule creative waves in advance. Do not wait until Cyber Monday morning to swap assets.

Competitive Insights

Competitor Surge Alerts: Auction Insights As A Warning

Auction Insights is a powerful diagnostic tool. Google’s Auction Insights report reveals shifts in competitor behavior, such as impression share surges. Monitoring these trends in November helps advertisers react quickly, whether by increasing brand defense or positioning directly against rivals.

Auction Insights is your battlefield radar for Q4. Ignore it, and you could be blindsided.

Post-Holiday: Turn December Buyers Into January Fans

January Is Your PPC Lab: Retain, Don’t Just Acquire

Holiday buyers are the most expensive to acquire but can become the most profitable if nurtured in Q1. Segment holiday-only versus year-round buyers using customer relationship management (CRM) and ad data, then run loyalty and cross-sell campaigns. Feeding learnings back into bidding and audience systems ensures automation improves over time.

Holiday buyers are the most expensive you will ever acquire. Retarget them in January to make them more profitable.

Final Thoughts

Holiday PPC is the ultimate stress test. CPC inflation, automation, budgets, audiences, creative, competition, and fraud all converge at once. Winning requires guiding automation with better inputs, protecting profitability with strong signals, and owning your message at a time when keyword precision is fading. Prepare early, pace carefully, and place guardrails everywhere they matter most.

Checklist Summary

  • Expect CPC inflation in Q4. Optimize for profit and ROAS, not cheap clicks.
  • Set seasonality bid adjustments and add guardrails so Smart Bidding doesn’t misfire on BFCM.
  • Treat budgets as fluid with intraday pacing. Don’t confuse conversion lag with underperformance.
  • Use first-party data beyond CRM lists. Profit margins and pricing strategy are key signals.
  • Microsoft’s impression-based remarketing lets you retarget high-intent searchers who never clicked.
  • Make creative your control lever in a PMax and broad-match world. Use multi-asset RSA pinning.
  • Monitor Auction Insights, watch for fraud/MFA, and turn expensive Q4 buyers into Q1 loyalists.



Featured Image: Roman Samborskyi/Shutterstock

Ahrefs Data Shows Brand Mentions Boost AI Search Rankings via @sejournal, @martinibuster

The latest Ahrefs podcast shares data showing that brand mentions on third-party websites help improve visibility across AI search surfaces. What they found is that brand mentions correlate strongly with ranking better in AI search, indicating that we are firmly in a new era of off-page SEO.

Training Data Gets Cited

Tim Soulo, CMO of Ahrefs, said that off-page activity that increases mentions on other sites improves visibility in AI search results, both those based on training data and those drawing from live search results. The benefits of off-page SEO apply to both; the only difference is that training data doesn’t get into LLMs right away.

Tim recommends identifying where your industry gets mentioned:

“You just need to see like where your competitors are mentioned, where you are mentioned, where your industry is mentioned.

And you have to get mentions there because then if the AI chatbot would do a search and find those pages and create their answer based on what they see on those pages, this is one thing.

But if some of the AI providers will decide to retrain their entire model on a more recent snapshot of the web, they will use essentially the same pages.”

Tim cautioned that AI companies don’t continuously ingest new web data for training and that there’s a lag of months before large language models receive fresh training data from the web.

Appear On Authoritative Websites

Although Tim did not mention specific tactics for obtaining brand mentions, in my opinion, off-page link-building strategies don’t have to change much to build brand mentions.

Tim underlined the importance of appearing on authoritative websites:

“So yeah, …essentially it’s not that you have to use different tactics for those things. You do the same thing, you appear like on credible websites, but yeah, let’s continue.”

The only thing I would add is that authoritativeness in this situation means a site gets mentioned by AI search. The other factor to think about is relevance: whether a site is simply the go-to source for a particular kind of information.

Topicality Of Brand Mentions

The other thing that was discussed is the topicality of the brand mentions, meaning the context in which the brand is discussed. Ryan Law, Ahrefs’ Director of Content Marketing, said that the context of the brand mention is important, and I agree. You can’t always control the narrative, but that’s where old-fashioned PR outreach comes in, where you can include quotes and so on to build the right context.

Law explained:

“Well, that segues very nicely to what I think is probably the most useful discrete tactic you can do, and that is building off-site mentions.

A big part of how LLMs understand what your brand is about and when it should recommend it and the context it should talk about you is based on where you appear in its training data and where you appear on the web.

  • What topics are you commonly mentioned alongside?
  • What other brands are you mentioned alongside?

I think Patrick Stox has been referring to this as the era of off-page SEO. In some ways, the content on your own site is not as valuable as the content about you on other pages on the web.”

Law mentioned that these off-page mentions don’t have to be in the form of links in order to be useful for ranking in AI search.

Testing Shows Brand Mentions Are Important

Law went on to say that their data shows brand mentions are important for ranking. He cited a correlation coefficient of 0.67, a measure of how strongly two variables are related.

Here are the correlation coefficient scales:

  • 1.0 = perfect positive correlation (as one variable increases, so does the other).
  • 0.0 = no correlation.
  • –1.0 = perfect negative correlation (for example, for every minute you drive, the remaining distance shrinks).

So, a correlation coefficient of 0.67 means that there’s a strong relationship in what’s observed.

Law explained:

“And we did indeed test this with a bit of research.

So we looked at these factors that correlate with the amount of times a brand appears in AI overviews, tested tons of different things, and by far the strongest correlation, very, very strong correlation, almost 0.67, was branded web mentions.

So if your brand is mentioned in a ton of different places on the web, that correlates very highly with your brand being mentioned in lots of AI conversations as well.”

He goes on to recommend identifying industry domains that tend to get cited in AI search for your topics and try to get mentioned on those websites.

Law also recommended getting mentions on user-generated content sites like Reddit and Quora. Next, he recommended getting mentioned on review sites and in YouTube video transcripts, because YouTube videos are highly cited by AI search.

Ahrefs Brand Radar Tool

Lastly, they discussed their Ahrefs tool called Brand Radar that’s useful for identifying domains that are frequently mentioned in AI search surfaces.

Law explained:

“And obviously, we have a tool that does exactly that. It actually helps you find the most commonly cited domains.  …if you put in whatever niche you’re interested in, you can see not only the top domains that get mentioned most often across all of the thousands, hundreds of thousands, millions of conversations we have indexed. You can also see the individual pages that get most commonly mentioned.

Obviously, if you can get your brand on those pages, yeah, immediately your AI visibility is going to shoot up in a pretty dramatic way.”

Citations Are The New Backlinks

Tim Soulo called citations the new backlinks of the AI search era and recommended their Brand Radar tool for identifying where to get mentions. In my opinion, getting a brand mentioned anywhere that’s relevant to your users or customers could also help with ranking in regular search as well as AI search (Read: Google’s Branded Search Patent).

Watch the Ahrefs podcast starting at about the 6:30 minute mark:

How to Win in AI Search (Real Data, No Hype)

Kinsta Managed WordPress Host Won’t Charge For Bot Traffic via @sejournal, @martinibuster

WordPress managed web hosting company Kinsta announced that it is changing how it bills its customers by not charging users for bandwidth related to unwanted bot and scraper traffic.

Daniel Pataki, CTO at Kinsta, explained:

“In the past 12 months we’ve seen bot traffic rise due to the prevalence of both good and bad uses of AI. These bots can not be filtered as effectively, modifying our typical visits-to-bandwidth ratio. We’re working internally and with Cloudflare to improve bot filtering, but our top priority remains our customers’ success. Reducing bot-related costs as quickly as possible will have the greatest impact.”

Bot And Scraper Traffic Out Of Control

Anyone who’s watched their live traffic statistics can confirm that scraper and hacker bots make up a significant share of traffic, accounting for as much as half of a site’s bandwidth costs. I still remember the time I added a forum to a content site a few years ago and purposely left it without bot protection to see how long it would take to get spammed. I didn’t have to wait long; a spam bot registered itself and started posting spam within minutes.

Kinsta is providing bandwidth-based options that don’t charge for wasted bandwidth while also providing options such as caching and CDNs that help mitigate the impact of bad bot visits.

Kinsta’s announcement explains:

“Now with bandwidth-based options, Kinsta is giving customers more choice, transparency and control in how they pay for hosting: by visits or bandwidth. Customers are not locked into a single pricing model. This is consistent with Kinsta’s long-term approach of delivering quality and building trust. The new pricing option is setting the standard for hosting by giving customers the freedom to choose how they pay, in a way that reflects how the modern web actually works.”

The new feature is available on every visitor-based tier and lets customers switch between visits-based and bandwidth-based pricing. With improved usage notifications, plus no charges for scrapers and bad bots, the risk of unexpectedly running out of bandwidth is lower.

Read Kinsta’s announcement:

Kinsta Launches Bandwidth-Based Pricing to Give Website Owners and Developers More Hosting Control

Featured Image by Shutterstock/Paul shuang

This startup wants to clean up the copper industry

Demand for copper is surging, as is pollution from its dirty production processes. The founders of one startup, Still Bright, think they have a better, cleaner way to generate the copper the world needs. 

The company uses water-based reactions, based on battery chemistry technology, to purify copper in a process that could be less polluting than traditional smelting. The hope is that this alternative will also help ease growing strain on the copper supply chain.

“We’re really focused on addressing the copper supply crisis that’s looming ahead of us,” says Randy Allen, Still Bright’s cofounder and CEO.

Copper is a crucial ingredient in everything from electrical wiring to cookware today. And clean energy technologies like solar panels and electric vehicles are introducing even more demand for the metal. Global copper demand is expected to grow by 40% between now and 2040. 

As demand swells, so do the climate and environmental impacts of copper processing, which refines ore into pure metal. There’s also growing concern about the geographic concentration of the copper supply chain. Copper is mined all over the world, and historically, many of those mines had smelters on-site to process what they extracted. (Smelters form pure copper metal by essentially burning concentrated copper ore at high temperatures.) But today, the smelting industry has consolidated, with many mines shipping copper concentrates to smelters in Asia, particularly China.

That’s partly because smelting uses a lot of energy and chemicals, and it can produce sulfur-containing emissions that can harm air quality. “They shipped the environmental and social problems elsewhere,” says Simon Jowitt, a professor at the University of Nevada, Reno, and director of the Nevada Bureau of Mines and Geology.

It’s possible to scrub pollution out of a smelter’s emissions, and smelters are much cleaner than they used to be, Jowitt says. But overall, smelting centers aren’t exactly known for environmental responsibility. 

So even countries like the US, which have plenty of copper reserves and operational mines, largely ship copper concentrates, which contain up to around 30% copper, to China or other countries for smelting. (There are just two operational ore smelters in the US today.)

Still Bright avoids the pyrometallurgic process that smelters use in favor of a chemical approach, partially inspired by devices called vanadium flow batteries.

In the startup’s reactor, vanadium reacts with the copper compounds in copper concentrates. The copper metal remains a solid, leaving many of the impurities behind in the liquid phase. The whole thing takes between 30 and 90 minutes. The solid, which contains roughly 70% copper after this reaction, can then be fed into another, established process in the mining industry, called solvent extraction and electrowinning, to make copper that’s over 99% pure. 

This is far from the first attempt to use a water-based, chemical approach to processing copper. Today, some copper ore is processed with acid, for example, and Ceibo, a startup based in Chile, is trying to use a version of that process on the type of copper that’s traditionally smelted. What sets Still Bright apart is the chemistry, particularly the choice to use vanadium.

One of Still Bright’s founders, Jon Vardner, was researching copper reactions and vanadium flow batteries when he came up with the idea to marry a copper extraction reaction with an electrical charging step that could recycle the vanadium.

A worker in the lab (Image courtesy of Still Bright)

After the vanadium reacts with the copper, the liquid soup can be fed into an electrolyzer, which uses electricity to turn the vanadium back into a form that can react with copper again. It’s basically the same process that vanadium flow batteries use to charge up. 

Other chemical processes for copper refining require high temperatures or extremely acidic conditions to get the copper into solution, force the reaction to proceed quickly, and ensure all of the copper reacts. Still Bright’s process, by contrast, can run at ambient temperatures.

One of the major benefits to this approach is cutting the pollution from copper refining.  Traditional smelting heats the target material to over 1,200 °C (2,000 °F), forming sulfur-containing gases that are released into the atmosphere. 

Still Bright’s process produces hydrogen sulfide gas as a by-product instead. It’s still a dangerous material, but one that can be effectively captured and converted into useful side products, Allen says.

Another source of potential pollution is the sulfide minerals left over after the refining process, which can form sulfuric acid when exposed to air and water (this is called acid mine drainage, common in mining waste). Still Bright’s process will also produce that material, and the company plans to carefully track it, ensuring that it doesn’t leak into groundwater. 

The company is currently testing its process in the lab in New Jersey and designing a pilot facility in Colorado, which will have the capacity to make about two tons of copper per year. Next will be a demonstration-scale reactor, which will have a 500-ton annual capacity and should come online in 2027 or 2028 at a mine site, Allen says. Still Bright recently raised an $18.7 million seed round to help with the scale-up process.

How the scale-up goes will be a crucial test of the technology and of whether the typically conservative mining industry will jump on board, UNR’s Jowitt says: “You want to see what happens on an industrial scale. And I think until that happens, people might be a little reluctant to get into this.”

The Download: gene-edited babies, and cleaning up copper

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Here’s the latest company planning for gene-edited babies

The news: A West Coast biotech entrepreneur says he’s secured $30 million to form a public-benefit company to study how to safely create genetically edited babies, marking the largest known investment into the taboo technology.  

How they’re doing it: The new company, called Preventive, is being formed to research so-called “heritable genome editing,” in which the DNA of embryos would be modified by correcting harmful mutations or installing beneficial genes. The goal would be to prevent disease.

Why it’s contentious: Creating genetically edited humans remains controversial. The first scientist to do it, in China, was imprisoned for three years. The procedure remains illegal in many countries, including the US, and doubts surround its usefulness as a form of medicine. Read the full story.

—Antonio Regalado

This startup wants to clean up the copper industry

Demand for copper is surging, as is pollution from its dirty production processes. The founders of one startup, Still Bright, think they have a better, cleaner way to generate the copper the world needs. 

The company uses water-based reactions, based on battery chemistry technology, to purify copper in a process that could be less polluting than traditional smelting. And the hope is that this alternative will also help ease growing strain on the copper supply chain. Read the full story.

—Casey Crownhart

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The FDA’s top drug regulator has resigned
George Tidmarsh allegedly abused his position to inflict financial harm on a former associate. (STAT)
+ He’s only been in the post since July. (WP $)
+ It’s just the latest in a long line of slapdash leadership changes at the agency. (AP News)
+ Here’s what food and drug regulation might look like under the Trump administration. (MIT Technology Review)

2 America’s nuclear weapons testing won’t involve explosions
So don’t expect to see mushroom clouds any time soon. (BBC)
+ The tests will involve “the other parts of a nuclear weapon,” apparently. (NYT $)
+ The US is working to modernize its nuclear stockpile too. (The Hill)

3 Mustafa Suleyman wants researchers to stop pursuing conscious AI 
The Microsoft AI boss believes consciousness is reserved for biological beings only. (CNBC)
+ Here’s what the man who coined the term AGI has to say. (Wired $)
+ “We will never build a sex robot,” says Mustafa Suleyman. (MIT Technology Review)

4 Elon Musk may relinquish control of Tesla
If the company’s shareholders decide against awarding him close to $1 trillion in stock. (NYT $)
+ One major investor has already said it won’t be supporting the pay package. (Gizmodo)

5 The hottest job in AI right now? Forward-deployed engineers
They’re specialists who help AI companies’ customers adopt their models. (FT $)

6 Hackers are stealing cargo shipments from transportation firms
They’re successfully infecting networks with remote access tools. (Bloomberg $)

7 OpenAI’s o1 model can analyze languages like a human expert
Experts suggest linguistic analysis is a key testbed for assessing the extent to which these models can reason like we can. (Quanta Magazine)

8 US obesity rates have started to drop
And weight-loss drugs are highly likely to be the reason why. (Vox)
+ We’re learning more about what weight-loss drugs do to the body. (MIT Technology Review)

9 Why it’s so tricky to make a good grocery list app
Notes just won’t cut it. (The Verge)

10 Many robots make light work
Lots of machines working in tandem can achieve what they’d struggle to do alone. (WSJ $)
+ Tiny robots inspired by spiders could help deliver diagnoses. (IEEE Spectrum)

Quote of the day

“You can check if there’s a backdoor.”

China’s leader Xi Jinping jokes about the security of two Chinese-made cellphones he gifted to South Korea’s President Lee Jae Myung, the New York Times reports.

One more thing

Digital twins of human organs are here. They’re set to transform medical treatment.

“Digital twins” are the same size and shape as the human organs they’re designed to mimic. They work in the same way. But they exist only virtually. Scientists can do virtual surgery on virtual hearts, figuring out the best course of action for a patient’s condition.

After decades of research, models like these are now entering clinical trials and starting to be used for patient care. The eventual goal is to create digital versions of our bodies—computer copies that could help researchers and doctors figure out our risk of developing various diseases and determine which treatments might work best.

But the budding technology will need to be developed very carefully. Read the full story to learn why.

—Jessica Hamzelou

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The Empire State Building Run-Up race sounds amazing, if completely gruelling.
+ Very cool: each year, the scientific staff of the Amundsen–Scott South Pole Station screen horror classic The Thing to prepare themselves for the long, isolated winter ahead.
+ How caterpillars spin their protective little cocoons.
+ One-pot chicken sounds like a great winter warmer of a recipe.

The State of AI: Is China about to win the race? 

The State of AI is a collaboration between the Financial Times and MIT Technology Review examining how AI is reshaping global power. Every Monday for the next six weeks, writers from both publications will debate one aspect of the generative AI revolution.

In this conversation, the FT’s tech columnist and Innovation Editor John Thornhill and MIT Technology Review’s Caiwei Chen consider the battle between Silicon Valley and Beijing for technological supremacy.

John Thornhill writes:

Viewed from abroad, it seems only a matter of time before China emerges as the AI superpower of the 21st century. 

Here in the West, our initial instinct is to focus on America’s significant lead in semiconductor expertise, its cutting-edge AI research, and its vast investments in data centers. The legendary investor Warren Buffett once warned: “Never bet against America.” He is right that for more than two centuries, no other “incubator for unleashing human potential” has matched the US.

Today, however, China has the means, motive, and opportunity to commit the equivalent of technological murder. When it comes to mobilizing the whole-of-society resources needed to develop and deploy AI to maximum effect, it may be just as rash to bet against China. 

The data highlights the trends. In AI publications and patents, China leads. By 2023, China accounted for 22.6% of all citations, compared with 20.9% from Europe and 13% from the US, according to Stanford University’s Artificial Intelligence Index Report 2025. As of 2023, China also accounted for 69.7% of all AI patents. True, the US maintains a strong lead in the top 100 most cited publications (50 versus 34 in 2023), but its share has been steadily declining. 

Similarly, the US outdoes China in top AI research talent, but the gap is narrowing. According to a report from the US Council of Economic Advisers, 59% of the world’s top AI researchers worked in the US in 2019, compared with 11% in China. But by 2022 those figures were 42% and 28%. 

The Trump administration’s tightening of restrictions for foreign H-1B visa holders may well lead more Chinese AI researchers in the US to return home. The talent ratio could move further in China’s favor.

Regarding the technology itself, US-based institutions produced 40 of the world’s most notable AI models in 2024, compared with 15 from China. But Chinese researchers have learned to do more with less, and their strongest large language models—including the open-source DeepSeek-V3 and Alibaba’s Qwen 2.5-Max—surpass the best US models in terms of algorithmic efficiency.

Where China is really likely to excel in future is in applying these open-source models. The latest report from Air Street Capital shows that China has now overtaken the US in terms of monthly downloads of AI models. In AI-enabled fintech, e-commerce, and logistics, China already outstrips the US. 

Perhaps the most intriguing—and potentially the most productive—applications of AI may yet come in hardware, particularly in drones and industrial robotics. With the research field evolving toward embodied AI, China’s advantage in advanced manufacturing will shine through.

Dan Wang, the tech analyst and author of Breakneck, has rightly highlighted the strengths of China’s engineering state in developing manufacturing process knowledge—even if he has also shown the damaging effects of applying that engineering mentality in the social sphere. “China has been growing technologically stronger and economically more dynamic in all sorts of ways,” he told me. “But repression is very real. And it is getting worse in all sorts of ways as well.”

I’d be fascinated to hear from you, Caiwei, about your take on the strengths and weaknesses of China’s AI dream. To what extent will China’s engineered social control hamper its technological ambitions? 

Caiwei Chen responds:

Hi, John!

You’re right that the US still holds a clear lead in frontier research and infrastructure. But “winning” AI can mean many different things. Jeffrey Ding, in his book Technology and the Rise of Great Powers, makes a counterintuitive point: For a general-purpose technology like AI, long-term advantage often comes down to how widely and deeply technologies spread across society. And China is in a good position to win that race (although “murder” might be pushing it a bit!).

Chips will remain China’s biggest bottleneck. Export restrictions have throttled access to top GPUs, pushing buyers into gray markets and forcing labs to recycle or repair banned Nvidia stock. Even as domestic chip programs expand, the performance gap at the very top still stands.

Yet those same constraints have pushed Chinese companies toward a different playbook: pooling compute, optimizing efficiency, and releasing open-weight models. DeepSeek-V3’s training run, for example, used just 2.6 million GPU-hours—far below the scale of US counterparts. Alibaba’s Qwen models now rank among the most downloaded open-weight models globally, and companies like Zhipu and MiniMax are building competitive multimodal and video models. 

China’s industrial policy means new models can move from lab to implementation fast. Local governments and major enterprises are already rolling out reasoning models in administration, logistics, and finance. 

Education is another advantage. Major Chinese universities are implementing AI literacy programs in their curricula, embedding skills before the labor market demands them. The Ministry of Education has also announced plans to integrate AI training for children of all school ages. I’m not sure the phrase “engineering state” fully captures China’s relationship with new technologies, but decades of infrastructure building and top-down coordination have made the system unusually effective at pushing large-scale adoption, often with far less social resistance than you’d see elsewhere. The use at scale, naturally, allows for faster iterative improvements.

Meanwhile, Stanford HAI’s 2025 AI Index found Chinese respondents to be the most optimistic in the world about AI’s future—far more optimistic than populations in the US or the UK. It’s striking, given that China’s economy has slowed since the pandemic for the first time in over two decades. Many in government and industry now see AI as a much-needed spark. Optimism can be powerful fuel, but whether it can persist through slower growth is still an open question.

Social control remains part of the picture, but a different kind of ambition is taking shape. The Chinese AI founders in this new generation are the most globally minded I’ve seen, moving fluidly between Silicon Valley hackathons and pitch meetings in Dubai. Many are fluent in English and in the rhythms of global venture capital. Having watched the last generation wrestle with the burden of a Chinese label, they now build companies that are quietly transnational from the start.

The US may still lead in speed and experimentation, but China could shape how AI becomes part of daily life, both at home and abroad. Speed matters, but speed isn’t the same thing as supremacy.

John Thornhill replies:

You’re right, Caiwei, that speed is not the same as supremacy (and “murder” may be too strong a word). And you’re also right to amplify the point about China’s strength in open-weight models and the US preference for proprietary models. This is not just a struggle between two different countries’ economic models but also between two different ways of deploying technology.  

Even OpenAI’s chief executive, Sam Altman, admitted earlier this year: “We have been on the wrong side of history here and need to figure out a different open-source strategy.” That’s going to be a very interesting subplot to follow. Who’s called that one right?

Further reading on the US-China competition

There’s been a lot of talk about how people may be using generative AI in their daily lives. This story from the FT’s visual story team explores the reality.

From China, FT reporters ask how long Nvidia can maintain its dominance over Chinese rivals.

When it comes to real-world uses, toy and companion devices are a novel application of AI that is gaining traction in China—but they are also heading to the US. This MIT Technology Review story explored the trend.

The once-frantic data center buildout in China has hit walls, and as sanctions and AI demand shift, this MIT Technology Review story took an on-the-ground look at how stakeholders are adapting.

Regex in GSC Reveals ChatGPT Searches

ChatGPT increasingly queries Google and other search engines for answers to prompts. Atlas, ChatGPT’s browser, similarly searches Google for research.

Here’s how to identify that activity in Google Search Console.

Regex in Search Console

Agentic searches tend to use similar query patterns, which regular expressions can often detect.

Agentic queries (i) are usually longer than those of humans because prompts tend to be more detailed, and (ii) typically seek pricing info. Plus, searches from large language models often fan out to explore user feedback.

To use regular expressions in Search Console:

  • Go to “Performance” > “Add filter.”
  • Choose “Query” > “Custom (regex).”
Screenshot of Search Console showing the regex filtering interface

Filter queries in Search Console with regular expressions.

Regex Patterns

Longer queries

ChatGPT queries are roughly five words on average, about 60% longer than traditional searches. But you can experiment with any length. For instance, this expression filters queries that contain more than 10 words.

([^" "]*\s){10,}?

Change “10” to “4” or “25” to find queries longer than 4 or 25 words.
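Before pasting the pattern into Search Console, you can sanity-check it against sample queries locally. This is a minimal sketch in Python, whose re module accepts this expression the same way Search Console’s regex engine does; the sample queries are made up for illustration.

```python
import re

# Long-query pattern from above: 10+ whitespace-delimited tokens,
# i.e., queries longer than 10 words.
LONG_QUERY = re.compile(r'([^" "]*\s){10,}?')

samples = [
    "best crm",                                                 # 2 words
    "what is the best crm software for a small b2b saas team",  # 12 words
]

for query in samples:
    matched = bool(LONG_QUERY.search(query))
    print(f"{matched}\t{query}")
```

Adjust the `{10,}` quantifier exactly as described above to test other word counts.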

Screenshot of the field to enter the regex

Enter the regex, such as this example for queries longer than 10 words.

Google Analytics 4 can identify the pages that receive the most referral traffic from ChatGPT. Search Console can then correlate those pages with queries likely generated by AI agents.

To find pages in GA4 that generate the most traffic from ChatGPT:

  • Click “Reports” > “Engagement”
  • Choose “Pages and Screens”
  • Click “Add filter”
  • Select “Session source/medium” (in the filter settings), select “Contains,” and type “ChatGPT.”
  • Click “Apply”

GA4 now filters pages by any source containing “chatgpt.” You can copy those URL paths and create a secondary filter to see only long-tail queries that the pages rank for.

Screenshot of GA4 showing the filtering interface

Filter pages in GA4 for any source containing “chatgpt.”

Brand and transactional queries

LLMs often fan out to gather reviews of products and brands. The fan-outs can research and compare prices to include in answers based on the prompt.

You can see these queries in Search Console by using the following regex:

\b(review|reviews|reddit|rating|ratings|support|warranty|return policy|refund|complaint|feedback|scam|legit|trustworthy|experience|issues|buy|purchase|price|cost|cheap|discount|coupon|order|store|near|online|sale|affordable|available|in stock|best|quality|features|specifications|deal|shop|compare|vs|versus)\b

Prominence in listicles

When asked for product recommendations, LLMs typically fan out to “best of” listicles. Publishing articles listing seasonal and general “top products” could elevate visibility for your brand and products.

Here’s a regex to track your brand in listicles:

\b(best|top-rated|trusted|famous|top|most|perfect)\b

Find informational queries

Consumers prompt ChatGPT for instructions and answers. If it finds a solution, ChatGPT often cites the source. Here’s a regex to find likely URL citations for informational prompts:

\b(guide|tutorial|how to|step by step|tips|tricks|ways to|best way to|learn|help|explain|understand|examples|instruction|methods|meaning of|definition)\b
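Taken together, the three patterns above can bucket an exported query list by likely intent. Here is a sketch in Python (not a Search Console feature) using shortened versions of the alternations; the bucket names and sample queries are illustrative, and the full patterns from the article can be dropped in unchanged.

```python
import re

# Shortened versions of the three intent patterns discussed above.
PATTERNS = {
    "transactional": re.compile(r"\b(review|reviews|price|cost|buy|compare|vs|versus)\b", re.I),
    "listicle":      re.compile(r"\b(best|top-rated|trusted|famous|top|most|perfect)\b", re.I),
    "informational": re.compile(r"\b(guide|tutorial|how to|step by step|tips|learn|definition)\b", re.I),
}

def classify(query: str) -> list[str]:
    """Return every intent bucket whose pattern matches the query."""
    return [name for name, rx in PATTERNS.items() if rx.search(query)]

print(classify("best budget standing desk reviews"))         # ['transactional', 'listicle']
print(classify("how to export queries from search console"))  # ['informational']
```

A single query can land in several buckets, which mirrors how LLM fan-out queries often mix commercial and research intent.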

Tools to Help

GSC Helper is a Chrome extension that lets you save regex patterns, search for any query directly within Search Console (instead of copy-pasting), and export filtered results into spreadsheets.

Better Regex in Search Console is another Chrome extension with pre-built regex patterns and features to create your own.


Report: Apple To Lean On Google Gemini For Siri Overhaul via @sejournal, @MattGSouthern

Apple is reportedly paying Google to build a custom Gemini AI model that will power a major Siri upgrade targeted for spring 2026, according to Bloomberg’s Mark Gurman.

The custom Gemini model is expected to run on Apple’s Private Cloud Compute infrastructure. Neither Apple nor Google has officially announced the partnership.

What’s Being Reported

Bloomberg reports Apple conducted an internal evaluation comparing AI models from Google and Anthropic for the next-generation Siri.

Google’s Gemini won based largely on financial terms. Bloomberg says Anthropic’s Claude would have cost Apple more than $1.5 billion annually.

According to the report, Google’s models will provide the query planner and summarizer components of Siri’s new architecture. Apple’s own Foundation Models would continue handling on-device personal data processing, with the Google-supplied models running on Apple’s servers.

The project carries the internal codename “Glenwood.”

Apple Won’t Acknowledge Google’s Role

Bloomberg reports Apple plans to market the updated Siri as Apple technology running on Apple servers through an Apple interface, without promoting Google’s involvement.

In practice, Gemini would operate behind the scenes while Apple positions the capabilities as its own work.

Launch Timeline

Bloomberg reports Apple is targeting spring 2026 for the Siri overhaul as part of iOS 26.4.

Earlier Bloomberg reporting also pointed to a smart home display device on a similar timeline that could showcase the assistant’s expanded capabilities.

What We Don’t Know Yet

Financial terms beyond the broad “paying Google” characterization are undisclosed.

Neither company has confirmed the partnership, and the legal and technical data-handling arrangements are not public. It’s also unclear whether the deal is finalized or still being negotiated.

Why This Matters

A Gemini-powered backend could change how Siri answers questions, and who gets credit in AI responses, even if the branding remains Apple-only.

If Bloomberg’s report holds, more answers will start and finish inside Siri and Spotlight on iPhone, which can reduce early web discovery.

The open questions are how sources will appear and whether traffic will be traceable.

Looking Ahead

Apple has already enabled ChatGPT access within Siri and Writing Tools as part of Apple Intelligence, and Anthropic says Claude is available in Xcode 26 for developers.

The potential Gemini partnership would be Apple’s most consequential AI arrangement to date because it would underpin core Siri functionality rather than optional features.

Watch for official details closer to the iOS 26.4 window.


Featured Image: Thrive Studios ID/Shutterstock

GEO Platform Shutdown Sparks Industry Debate Over AI Search via @sejournal, @MattGSouthern

Benjamin Houy shut down Lorelight, a generative engine optimization (GEO) platform designed to track brand visibility in ChatGPT, Claude, and Perplexity, after concluding most brands don’t need a specialized tool for AI search visibility.

Houy writes that, after reviewing hundreds of AI answers, the brands mentioned most often share familiar traits: quality content, mentions in authoritative publications, strong reputation, and genuine expertise.

He claims:

“There’s no such thing as ‘GEO strategy’ or ‘AI optimization’ separate from brand building… The AI models are trained on the same content that builds your brand everywhere else.”

Houy explains in a blog post that customers liked Lorelight’s insights but often churned because the data didn’t change their tactics. In his view, users pursued the same fundamentals with or without GEO dashboards.

He argues GEO tracking makes more sense as one signal inside broader SEO suites rather than as a standalone product. He points to examples of traditional SEO platforms incorporating AI-style visibility signals into existing toolsets rather than creating a separate category.

Debate Snapshot: Voices On Both Sides

Reactions show a genuine split in how marketers see “AI search.”

Some SEO professionals applauded the back-to-basics message. Others countered with cases where assistant referrals appear meaningful.

Here are some of the responses published so far:

  • Lily Ray: “Thank you for being honest and for sharing this publicly. The industry needs to hear this loud and clear.”
  • Randall Choh: “I beg to differ. It’s a growing metric… LLM searches usually have better search intents that lead to higher conversions.”
  • Karl McCarthy: “You’re right that quality content + authoritative mentions + reputation is what works… That’s not a tool. It’s a network.”
  • Nikki Pilkington raised consumer-fairness questions about shuttering a product and whether prior GEO-promotional content should be updated or removed.

These perspectives capture the industry tension. Some see AI search as a new performance channel worth measuring. Others see the same brand signals driving outcomes across SEO, PR, and now AI assistants.

How “AI Search Visibility” Is Being Measured

Because assistants work differently from web search, measurement is still uneven.

Assistants surface brands in two main ways: by citing and linking sources directly in answers, and by guiding people into familiar web results.

Referral tracking can come through direct links, copy-and-paste, or branded search follow-ups.

Attribution is messy because not all assistants pass clear referrers. Teams often combine UTM tagging on shared links with branded-search lift, direct-traffic spikes, and assisted-conversion reports to triangulate “LLM influence.”

That patchwork makes case studies persuasive but hard to generalize.

Why This Matters

The main question is whether AI search needs its own optimization framework or if it primarily benefits from the same brand signals.

If Houy is correct, standalone GEO tools might only produce engaging dashboards that seldom influence strategy.

On the other hand, if the advocates are correct, overlooking assistant visibility could mean missing out on profitable opportunities between traditional search and LLM-referred traffic.

What’s Next

It’s likely that SEO platforms will continue to fold “AI visibility” into existing analytics rather than creating a separate category.

The safest path for businesses is to continue doing the brand-building work that assistants already reward, while testing assistant-specific measurements where they are most likely to pay off.


Featured Image: Roman Samborskyi/Shutterstock