CFO: Brands Rarely Max Out Meta Ads

Abir Syed is an accountant turned marketer turned chief financial officer. He says ecommerce marketing success largely depends on creative volume, and few merchants have exhausted any channel, much less Meta.

Abir is co-founder of UpCounting, an accounting and fractional CFO firm in Montréal, Canada. In our recent conversation, he shared common financial mistakes of merchants, key metrics to monitor, and, yes, how to grow ad revenue on Meta.

The entire audio of our conversation is embedded below. The transcript is edited for clarity and length.

Eric Bandholz: Who are you and what do you do?

Abir Syed: I am the co-founder of UpCounting, an accounting and fractional chief financial officer firm focused on ecommerce. We handle everything from basic bookkeeping and transactional work to high-level needs, such as due diligence, back-office implementations, cash flow forecasting, and financial modeling.

I am also a certified public accountant and previously ran both an ecommerce brand and a marketing agency. Most finance professionals lack hands-on experience in advertising or customer acquisition, but I have lived those challenges, and that background significantly shapes how I advise founders.

Marketing is usually an ecommerce brand’s most significant expense; understanding it is essential for providing meaningful financial guidance. So we structure our reporting, dashboards, and forecasting around the realities of ecommerce operations — not just accounting accuracy but actionable insights tied to contribution margin, customer behavior, marketing performance, and growth strategy.

Bandholz: What is the most common financial mistake founders make?

Syed: I see three major issues repeatedly. First, many founders track the wrong numbers. They monitor revenue or look at profit once a month, but rarely examine contribution margin or cash flow. Contribution margin is often ignored entirely, leading to major blind spots. Top-line revenue means little without understanding the economics underneath.

Second, operators often misunderstand what is required to enable growth. I am frequently asked to review struggling ad accounts. A recurring issue is underinvesting in creative. Founders try to force growth by pushing return on ad spend harder, rather than improving the creative foundation required to scale spend while maintaining healthy acquisition costs.

Third, omnichannel brands frequently fail to separate channel performance. I see profit and loss statements with a single cost of goods sold line combining, say, Shopify, Amazon, and wholesale. Blending everything prevents founders from seeing how each channel is truly performing. Wholesale, for instance, operates on a very different cash cycle.

Bandholz: How often should operators review their financials?

Syed: It depends on the business’s size, complexity, and growth goals.

Most operators should review key historical metrics weekly — cash flow, expenses, and anything unusual moving through the business. A weekly cadence helps identify problems early.

More detailed reporting, such as margin and channel breakdowns, is usually best reviewed monthly. That interval provides cleaner data and enough distance to spot trends rather than reacting to noise.

The most overlooked piece is forecasting. Few brands build forward-looking financial models because it is difficult, yet essential for aggressive growth. Forecasting helps you understand the implications of scaling. Conservative operators can get away without it, but brands pushing hard need projections. Too many founders grow quickly with no plan, no modeling, and no clarity on future cash needs.

Bandholz: How do you decide if a marketing channel is maxed out?

Syed: It is difficult to know with total certainty, but in most cases, brands have not truly saturated a channel, especially Meta. There is usually far more room available than teams realize.

I often compare similar brands in the same category. One might spend $200,000 a month on Meta while also allocating resources to podcasts, TikTok, affiliates, and other channels. Another in the same space might spend $200,000 a day on Meta. They often have similar products, audiences, and brand quality. The difference is creative volume. The larger spender produces an enormous amount of fresh creative, while the other is effectively using a strategy from years ago.

Most brands have not come close to saturating Meta. They are simply underfunding creative strategy.

Increasing creative volume opens new audience pockets and helps find additional winning ads. If the creative that got you to $200,000 in monthly sales has plateaued, you must increase output to climb further. Generally, more creative volume supports more revenue. The pace depends on profitability, reinvestment capacity, creative quality, and a bit of luck.

Working with a media-buying agency that also produces creative can cost upwards of $7,000 per month, ideally under 10% of ad spend. Smaller brands may temporarily spend as much as 30%.

Bandholz: How should brands budget for bookkeeping?

Syed: Smaller brands face a minimum cost for competent bookkeeping. Hiring in-house rarely makes sense until the company is very large. A Shopify-only brand doing $1–5 million annually should expect to spend $2,000 to $3,000 per month. Cheaper options exist, but the trade-off is often lower accuracy and weaker communication.

The challenge is that many founders cannot discern whether financial data is clean. It is similar to hiring an internet security expert when you lack technical knowledge — you might overlook major issues until something breaks. We have onboarded many clients who tried cheaper options, only to find their data was consistently incorrect.

To scale aggressively or make data-driven decisions, you need accurate, timely financials and guidance on interpreting them.

Once a brand surpasses roughly $5 million in annual sales, bookkeeping for multiple sales channels typically costs $5,000 to $8,000 per month.

Bandholz: Where can people support you, hire you, follow you?

Abir: Our site is UpCounting.com. I’m on LinkedIn, Instagram, and X.

YouTube Shorts Algorithm May Now Favor Fresh Over Evergreen via @sejournal, @MattGSouthern

YouTube appears to have changed how it recommends Shorts, according to analysts who work with some of the platform’s largest channels. The shift reportedly began in mid-September and deprioritizes videos older than roughly 30 days, favoring more recent uploads.

Mario Joos, a retention strategist who works with MrBeast, Stokes Twins, and Alan’s Universe, first identified the pattern after weeks of trying to explain a broad dip in performance across his clients. Dot Esports reports that Joos analyzed data across channels with 100 million to one billion monthly views and found a consistent drop in impressions for older Shorts.

What The Data Shows

Joos says YouTube has “changed the short-form content algorithm for the worse.” His analysis identified a threshold around 28-30 days. Shorts older than that window now receive far fewer impressions than they did before mid-September.

The pattern wasn’t immediately obvious in channel-wide analytics because newer content masked the decline. Only after filtering specifically for Shorts posted before the 30-day cutoff did the picture become clear.

Joos posted a graph detailing the drop-off for seven major Shorts channels, though he withheld their names for client sensitivity. Every chart showed the same moment: around September, older Shorts’ view counts dropped sharply and stayed far lower than before.

He described the change as “the flattening.” In his view, YouTube is pushing creators toward high-volume uploads at the expense of quality. Joos says he understands this approach from a corporate standpoint as a competitive response to TikTok, but warns it disproportionately affects creators who depend on their Shorts income.

Joos is explicit about his uncertainty. He calls this “a carefully constructed working theory and not a confirmed fact.” Some commenters on his analysis note they have not experienced similar drops on their channels. Others corroborate his findings.

Creators Confirm The Pattern

Tim Chesney, a creator with two billion lifetime views across his channels, confirmed the pattern on X. He wrote:

“Can confirm this is true. 2B views on this chart, and in September all of the evergreen videos simply tanked. I think pushing fresh content makes sense, but when you think about it, it makes investing into your content and spending time improving it, irrelevant.”

Chesney argues that the shift pushes creators to “produce more instead of better.” He warned that if the trend continues, YouTube will become a “trash bin” of low-effort content similar to what he sees on TikTok.

This echoes concerns from earlier in the year. In August, multiple creators documented synchronized view drops that appeared related to separate platform modifications. Gaming channel Bellular News documented precipitous declines in desktop viewership starting August 13, though that change appeared related to how YouTube counted views from browsers with ad-blocking software.

The September Shorts shift appears to be a distinct change affecting the recommendation algorithm rather than view counting methodology.

The Evergreen Value Proposition

For years, the case for video content has rested on compounding value. Unlike trend-dependent posts that fade quickly, evergreen videos continue generating views and revenue long after publication. One production investment pays off across months or years.

This model has been central to how creators and businesses justify video investment. A tutorial published today should still attract viewers next year. A how-to guide should compound views as search demand persists.

A recency-focused algorithm undermines that math. If older Shorts stop generating impressions after 30 days, the value equation changes. Creators would need to publish continuously to maintain visibility, shifting resources from quality to quantity.

The economics become punishing. Instead of building a library that works while you sleep, creators face a treadmill where last month’s content stops contributing. Revenue becomes dependent on constant production rather than accumulated assets.

The Broader Context

The reported Shorts change follows a familiar pattern for anyone who has watched Google Search evolve. Freshness signals have long played a role in ranking, sometimes appearing to override comprehensive, well-researched content.

For SEO professionals, this matters beyond YouTube. Video strategy has often been pitched as a hedge against organic search volatility. As AI Overviews and zero-click results reduce traffic from traditional search, YouTube has represented an alternative channel with different dynamics.

If YouTube is applying similar freshness-over-quality logic, that changes the risk calculus. Practitioners evaluating where to invest their content resources may find the same frustrations emerging across both platforms.

This also reflects a broader pattern in how Google communicates with creators. YouTube’s Creator Liaison position exists to bridge the gap between platform and creators, but analysts and creators consistently report limited transparency about algorithm changes. The company rarely confirms or explains modifications until long after creators have identified them through their own data analysis.

Why This Matters

The value proposition of evergreen Shorts depends on long-tail performance. A shift toward recency-based ranking would require higher publishing frequency to maintain the same visibility.

Practitioners frustrated with Google Search volatility may find similar dynamics emerging on YouTube. The promise of a stable alternative channel looks less reliable if algorithm changes can abruptly devalue your content library.

This also affects how you advise clients considering video investment. The traditional pitch of “build once, earn forever” requires qualification if evergreen content has an effective shelf life of 30 days.

What To Do Now

If you publish Shorts, check your analytics for view declines on content older than 30 days. Compare September 2025 performance against earlier months. Look specifically at videos that previously showed steady long-tail performance.

The pattern Joos identified spans channels of very different sizes and categories. That breadth suggests a platform-level change rather than isolated performance issues. Whether YouTube acknowledges it or not, the data these analysts are reporting points to a shift worth monitoring closely.

Looking Ahead

YouTube hasn’t confirmed any changes to Shorts ranking. Without official documentation, these remain analyst observations and creator reports.

During Google’s Q3 earnings call, Philipp Schindler noted that recommendation systems are “driving robust watch time growth” and that Gemini models are enabling “further discovery improvement.” The company didn’t specify how these improvements affect content distribution or whether recency now plays a larger role in recommendations.


Featured Image: Mijansk786/Shutterstock

PPC Pulse: AI Max Insights, Cyber Monday Trends & A New Google Asset via @sejournal, @brookeosmundson

The conversations shaping PPC this week focused on how AI interprets intent, how holiday demand played out across Shopping and Performance Max, and how Google is adding more automated language directly into ads.

Google shared more clarity around AI Max, while Adalysis shared AI Max match type behavior, retail analysts broke down early Cyber Monday performance trends, and a potential new Google automated ad asset surfaced that raises questions about brand control.

Here is what stands out for advertisers this week and where you should pay attention.

AI Max Clarifications & New Insights On Match Types

The conversation around AI Max is not slowing down.

A YouTube Short circulating this week highlighted Google reaffirming a key message: Match types still serve a purpose, even as AI takes on more interpretation of intent.

This also aligns with a LinkedIn post from two weeks ago where Google Ads Liaison, Ginny Marvin, clarified some misconceptions around the use and functionality of AI Max. Specifically, around:

  • What AI Max is designed to do.
  • Whether AI Max repackages existing features.
  • What users should expect based on their current keyword match type setup.
  • How to measure incremental lift.
Screenshot taken by author from LinkedIn, December 2025

The post drew plenty of chatter in the comments, most notably from Brad Geddes, who pushed back with contrary data, stating:

We’re seeing many instances of AI max matching to exact match keywords or exact match variants. So when you look at your totals, the AI max column is a mixture of the AI max matches along with search terms your exact match keywords would have matched to if AI max didn’t exist.

This led Adalysis to publish a thoughtful breakdown of search term behavior within AI Max. The post shows clear examples where the model expands into adjacent intent that still feels relevant, but not necessarily tied to the exact keyword chosen.

This mirrors what many practitioners are already seeing. Search terms look broader. Relevance varies. The model relies on intention, not precision, which shifts how advertisers think about coverage.

Why This Matters For Advertisers

The bigger takeaway here is that your structure still steers the model. AI Max may evaluate intent more flexibly, but it is not inventing direction on its own.

It relies on the signals you set through match types, keyword groupings, and the guardrails you place around your campaigns. When advertisers downplay match types or assume AI will sort everything out, query quality usually becomes harder to manage.

A thoughtful keyword strategy gives the model clearer boundaries to work within. It also helps you understand why certain queries show up and how the system interpreted them.

The more intentional your structure, the more predictable your outcomes. This is the difference between AI supporting your strategy and AI creating a strategy for you.

Cyber Monday PPC Trends Across Shopping And PMax

Cyber Monday data and insights came in quickly this year. Optmyzr shared performance highlights from accounts it manages, showing steady results and more predictable cost patterns than many expected.

Some of its main findings included:

  • Brands spent more YoY to stay visible, even though impressions declined.
  • Clicks and CTR increased YoY.
  • Early conversion data showed decreased ROAS and increased CPA, though this isn’t final.

Optmyzr reiterated that it would share final details on conversions and ROAS at a later time due to conversion lag.

Mike Ryan also reviewed more than €2.5 million in Black Friday PMax and Shopping spend across retailers and reported noticeable differences from previous years. Some of his findings were similar to Optmyzr’s, including that advertisers spent 31% more while average order value (AOV) decreased 6%.

Essentially, advertiser spend efficiency decreased significantly YoY.

As he reviewed hourly trend data, he noted that revenue peaked during early evening hours, and advocated keeping budgets healthy throughout the day to capitalize on that intent.

Lastly, he found unique competition up 12% and confirmed that Amazon still runs Shopping ads in Europe, having stopped running them in the United States earlier this year.

Why This Matters For Advertisers

The data tells a consistent story. Attention is still there, but it is more expensive to earn. Optmyzr’s numbers show higher spend year over year, even as impressions dipped, which reinforces that visibility continues to cost more. Clicks and CTR were up across both ecommerce and lead gen, which signals that people were still shopping and comparing options. The interest is not gone. The price of reaching that interest simply climbed.

The bigger takeaway for advertisers is that strong engagement does not solve the efficiency problem. Costs rose across the board, which puts even more pressure on the post-click experience. When attention is not the constraint anymore, landing page clarity, offer strength, and conversion flow become the real differentiators. The accounts that invested in those areas will feel less of the margin squeeze that defined this year’s shopping window.

New Automated Ad Asset Appears In Google Ads

A new automated asset gained attention this week when Anthony Higman shared a screenshot showing Google testing a “What People Are Saying” asset.

Screenshot taken by author from LinkedIn, December 2025

The asset included AI-generated summary text that looked more like a sentiment recap than a traditional review snippet. What stood out is that the text did not appear to be pulled from the advertiser’s site or from structured reviews. It looked generated by Google based on potential store ratings and reviews.

This is another example of Google introducing language directly into ads, even before advertisers get official documentation or a clear explanation of how the text is produced. The extension reads confidently, but the source of the claims is not obvious.

That has already sparked discussion about accuracy, oversight, and how much creative control advertisers may lose as automated assets continue to expand.

Why This Matters For Advertisers

This asset signals that Google is continuing to explore new ways to surface AI-generated supporting text in ads. That makes oversight more important, simply because advertisers may see language that does not come directly from their own assets.

While the goal is to enhance relevance and provide helpful context to users, it also means brands should keep an eye on auto-applied assets to ensure the messaging aligns with how they want to show up in search. A quick review process can go a long way in avoiding surprises and keeping ad copy consistent with your broader strategy.

Theme Of The Week: Context Shapes Performance

Across all three updates, the common thread is how context influences outcomes.

AI Max decisions depend heavily on the structure you set. Cyber Monday performance reflected a market where attention remained strong but came at a higher cost, putting more weight on what happens after the click. The new automated extension shows Google continuing to experiment with ways to add context inside ads.

Together, these updates point to a simple reality. The more intentional you are with structure, creative, and user experience, the more predictable your results become, even as automation takes on a larger role.


Featured Image: Pixel-Shot/Shutterstock

Complete Crawler List For AI User-Agents [Dec 2025] via @sejournal, @vahandev

AI visibility plays a crucial role for SEOs, and this starts with controlling AI crawlers. If AI crawlers can’t access your pages, you’re invisible to AI discovery engines.

On the flip side, unmonitored AI crawlers can overwhelm servers with excessive requests, causing crashes and unexpected hosting bills.

User-agent strings are essential for controlling which AI crawlers can access your website, but official documentation is often outdated, incomplete, or missing entirely. So, we curated a verified list of AI crawlers from our actual server logs as a useful reference.

Every user-agent is validated against official IP lists when available, ensuring accuracy. We will maintain and update this list to catch new crawlers and changes to existing ones.

The Complete Verified AI Crawler List (December 2025)

GPTBot
Purpose: AI training data collection for GPT models (ChatGPT, GPT-4o)
Crawl rate of SEJ (pages/hour): 100
Verified IP list: Official IP List
Robots.txt disallow:
User-agent: GPTBot
Allow: /
Disallow: /private-folder
Complete user agent: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; GPTBot/1.3; +https://openai.com/gptbot)

ChatGPT-User
Purpose: AI agent for real-time web browsing when users interact with ChatGPT
Crawl rate of SEJ (pages/hour): 2400
Verified IP list: Official IP List
Robots.txt disallow:
User-agent: ChatGPT-User
Allow: /
Disallow: /private-folder
Complete user agent: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko); compatible; ChatGPT-User/1.0; +https://openai.com/bot

OAI-SearchBot
Purpose: AI search indexing for ChatGPT search features (not for training)
Crawl rate of SEJ (pages/hour): 150
Verified IP list: Official IP List
Robots.txt disallow:
User-agent: OAI-SearchBot
Allow: /
Disallow: /private-folder
Complete user agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36; compatible; OAI-SearchBot/1.3; +https://openai.com/searchbot

ClaudeBot
Purpose: AI training data collection for Claude models
Crawl rate of SEJ (pages/hour): 500
Verified IP list: Official IP List
Robots.txt disallow:
User-agent: ClaudeBot
Allow: /
Disallow: /private-folder
Complete user agent: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; ClaudeBot/1.0; +claudebot@anthropic.com)

Claude-User
Purpose: AI agent for real-time web access when Claude users browse
Crawl rate of SEJ (pages/hour): <10
Verified IP list: Not available
Robots.txt disallow:
User-agent: Claude-User
Disallow: /sample-folder
Complete user agent: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Claude-User/1.0; +Claude-User@anthropic.com)

Claude-SearchBot
Purpose: AI search indexing for Claude search capabilities
Crawl rate of SEJ (pages/hour): <10
Verified IP list: Not available
Robots.txt disallow:
User-agent: Claude-SearchBot
Allow: /
Disallow: /private-folder
Complete user agent: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Claude-SearchBot/1.0; +https://www.anthropic.com)

Google-CloudVertexBot
Purpose: AI agent for Vertex AI Agent Builder (site owners’ request only)
Crawl rate of SEJ (pages/hour): <10
Verified IP list: Official IP List
Robots.txt disallow:
User-agent: Google-CloudVertexBot
Allow: /
Disallow: /private-folder
Complete user agent: Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/141.0.7390.122 Mobile Safari/537.36 (compatible; Google-CloudVertexBot; +https://cloud.google.com/enterprise-search)

Google-Extended
Purpose: Token controlling AI training usage of Googlebot-crawled content (doesn’t crawl on its own)
Robots.txt disallow:
User-agent: Google-Extended
Allow: /
Disallow: /private-folder

Gemini-Deep-Research
Purpose: AI research agent for Google Gemini’s Deep Research feature
Crawl rate of SEJ (pages/hour): <10
Verified IP list: Official IP List
Robots.txt disallow:
User-agent: Gemini-Deep-Research
Allow: /
Disallow: /private-folder
Complete user agent: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Gemini-Deep-Research; +https://gemini.google/overview/deep-research/) Chrome/135.0.0.0 Safari/537.36

Google
Purpose: Gemini’s chat when a user asks to open a webpage
Crawl rate of SEJ (pages/hour): <10
Verified IP list: Google

Bingbot
Purpose: Powers Bing Search and Bing Chat (Copilot) AI answers
Crawl rate of SEJ (pages/hour): 1300
Verified IP list: Official IP List
Robots.txt disallow:
User-agent: BingBot
Allow: /
Disallow: /private-folder
Complete user agent: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm) Chrome/116.0.1938.76 Safari/537.36

Applebot-Extended
Purpose: Doesn’t crawl but controls how Apple uses Applebot data
Crawl rate of SEJ (pages/hour): <10
Verified IP list: Official IP List
Robots.txt disallow:
User-agent: Applebot-Extended
Allow: /
Disallow: /private-folder
Complete user agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.4 Safari/605.1.15 (Applebot/0.1; +http://www.apple.com/go/applebot)

PerplexityBot
Purpose: AI search indexing for Perplexity’s answer engine
Crawl rate of SEJ (pages/hour): 150
Verified IP list: Official IP List
Robots.txt disallow:
User-agent: PerplexityBot
Allow: /
Disallow: /private-folder
Complete user agent: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; PerplexityBot/1.0; +https://perplexity.ai/perplexitybot)

Perplexity-User
Purpose: AI agent for real-time browsing when Perplexity users request information
Crawl rate of SEJ (pages/hour): <10
Verified IP list: Official IP List
Robots.txt disallow:
User-agent: Perplexity-User
Allow: /
Disallow: /private-folder
Complete user agent: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Perplexity-User/1.0; +https://perplexity.ai/perplexity-user)

Meta-ExternalAgent
Purpose: AI training data collection for Meta’s LLMs (Llama, etc.)
Crawl rate of SEJ (pages/hour): 1100
Verified IP list: Not available
Robots.txt disallow:
User-agent: meta-externalagent
Allow: /
Disallow: /private-folder
Complete user agent: meta-externalagent/1.1 (+https://developers.facebook.com/docs/sharing/webmasters/crawler)

Meta-WebIndexer
Purpose: Used to improve Meta AI search
Crawl rate of SEJ (pages/hour): <10
Verified IP list: Not available
Robots.txt disallow:
User-agent: Meta-WebIndexer
Allow: /
Disallow: /private-folder
Complete user agent: meta-webindexer/1.1 (+https://developers.facebook.com/docs/sharing/webmasters/crawler)

Bytespider
Purpose: AI training data for ByteDance’s LLMs for products like TikTok
Crawl rate of SEJ (pages/hour): <10
Verified IP list: Not available
Robots.txt disallow:
User-agent: Bytespider
Allow: /
Disallow: /private-folder
Complete user agent: Mozilla/5.0 (Linux; Android 5.0) AppleWebKit/537.36 (KHTML, like Gecko) Mobile Safari/537.36 (compatible; Bytespider; https://zhanzhang.toutiao.com/)

Amazonbot
Purpose: AI training for Alexa and other Amazon AI services
Crawl rate of SEJ (pages/hour): 1050
Verified IP list: Not available
Robots.txt disallow:
User-agent: Amazonbot
Allow: /
Disallow: /private-folder
Complete user agent: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Amazonbot/0.1; +https://developer.amazon.com/support/amazonbot) Chrome/119.0.6045.214 Safari/537.36

DuckAssistBot
Purpose: AI search indexing for DuckDuckGo search engine
Crawl rate of SEJ (pages/hour): 20
Verified IP list: Official IP List
Robots.txt disallow:
User-agent: DuckAssistBot
Allow: /
Disallow: /private-folder
Complete user agent: DuckAssistBot/1.2; (+http://duckduckgo.com/duckassistbot.html)

MistralAI-User
Purpose: Mistral’s real-time citation fetcher for “Le Chat” assistant
Crawl rate of SEJ (pages/hour): <10
Verified IP list: Not available
Robots.txt disallow:
User-agent: MistralAI-User
Allow: /
Disallow: /private-folder
Complete user agent: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; MistralAI-User/1.0; +https://docs.mistral.ai/robots)

Webz.io
Purpose: Data extraction and web scraping used by other AI training companies. Formerly known as Omgili.
Crawl rate of SEJ (pages/hour): <10
Verified IP list: Not available
Robots.txt disallow:
User-agent: webzio
Allow: /
Disallow: /private-folder
Complete user agent: webzio (+https://webz.io/bot.html)

Diffbot
Purpose: Data extraction and web scraping used by companies all over the world
Crawl rate of SEJ (pages/hour): <10
Verified IP list: Not available
Robots.txt disallow:
User-agent: Diffbot
Allow: /
Disallow: /private-folder
Complete user agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2 (.NET CLR 3.5.30729; Diffbot/0.1; +http://www.diffbot.com)

ICC-Crawler
Purpose: AI and machine learning data collection
Crawl rate of SEJ (pages/hour): <10
Verified IP list: Not available
Robots.txt disallow:
User-agent: ICC-Crawler
Allow: /
Disallow: /private-folder
Complete user agent: ICC-Crawler/3.0 (Mozilla-compatible; ; https://ucri.nict.go.jp/en/icccrawler.html)

CCBot
Purpose: Open-source web archive used as training data by multiple AI companies
Crawl rate of SEJ (pages/hour): <10
Verified IP list: Official IP List
Robots.txt disallow:
User-agent: CCBot
Allow: /
Disallow: /private-folder
Complete user agent: CCBot/2.0 (https://commoncrawl.org/faq/)

The user-agent strings above have all been verified against Search Engine Journal server logs.

Popular AI Agent Crawlers With Unidentifiable User Agent

We’ve found that the following don’t identify themselves:

  • you.com.
  • ChatGPT’s agent Operator.
  • Bing’s Copilot chat.
  • Grok.
  • DeepSeek.

There is no way to stop these crawlers from accessing webpages other than by identifying their explicit IP addresses.

We set up a trap page (e.g., /specific-page-for-you-com/) and used the on-page chat to prompt you.com to visit it, allowing us to locate the corresponding visit record and IP address in our server logs. Below is the screenshot:

Screenshot by author, December 2025

What About Agentic AI Browsers?

Unfortunately, AI browsers such as Comet or ChatGPT’s Atlas don’t differentiate themselves in the user agent string, so you can’t identify them in server logs; their visits blend in with normal users’ visits.

ChatGPT’s Atlas browser user agent string from server logs records (Screenshot by author, December 2025)

This is disappointing for SEOs because tracking agentic browser visits to a website matters from a reporting standpoint.

How To Check What’s Crawling Your Server

Some hosting companies offer a user interface (UI) that makes it easy to access and look at server logs, depending on what hosting service you are using.

If your hosting doesn’t offer this, you can download the server log files (usually located at /var/log/apache2/access.log on Linux-based servers) via FTP, or ask your server support to send them to you.

Once you have the log file, you can view and analyze it in Google Sheets (if the file is in CSV format) or Screaming Frog’s log analyzer, or, if the file is under 100 MB, you can try analyzing it with Gemini.
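If you prefer to script the check, a quick pass over the raw log lines is enough to see which AI crawlers are hitting your site. Below is a minimal Python sketch; the bot names come from the crawler list above, the sample log lines are invented for illustration, and a production analysis should parse the log format properly rather than substring-match:

```python
from collections import Counter

# User-agent substrings for some of the AI crawlers listed above.
AI_BOTS = [
    "GPTBot", "ChatGPT-User", "OAI-SearchBot", "ClaudeBot",
    "PerplexityBot", "Amazonbot", "Bytespider", "CCBot",
]

def count_ai_hits(log_lines):
    """Count hits per AI crawler by matching the user-agent field."""
    hits = Counter()
    for line in log_lines:
        for bot in AI_BOTS:
            if bot in line:
                hits[bot] += 1
                break  # one crawler per request
    return hits

# Two fabricated Apache-style access log lines for demonstration.
sample = [
    '203.0.113.5 - - [01/Dec/2025:10:00:00 +0000] "GET / HTTP/1.1" 200 1234 "-" '
    '"Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; GPTBot/1.3; +https://openai.com/gptbot)"',
    '198.51.100.7 - - [01/Dec/2025:10:00:01 +0000] "GET /page HTTP/1.1" 200 4321 "-" '
    '"CCBot/2.0 (https://commoncrawl.org/faq/)"',
]
print(count_ai_hits(sample))
```

Running this over a full access.log gives a per-crawler hit count you can compare against the crawl rates in the table above.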

How To Verify Legitimate Vs. Fake Bots

Fake crawlers can spoof legitimate user agents to bypass restrictions and scrape content aggressively. For example, anyone can impersonate ClaudeBot from their laptop and initiate a crawl request from the terminal. In your server log, it will appear as though ClaudeBot is crawling your site:

curl -A 'Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; ClaudeBot/1.0; +claudebot@anthropic.com)' https://example.com

Verification helps conserve server bandwidth and prevents content from being harvested illegitimately. The most reliable verification method is checking the request IP.

Check each request’s IP against the officially declared IP lists above. If it matches, allow the request; otherwise, block it.

Various types of firewalls can help here by allowlisting verified IPs, which lets legitimate bot requests pass through while blocking all other requests that impersonate AI crawlers in their user agent strings.

For example, in WordPress, you can use the free Wordfence plugin to allowlist legitimate IPs from the official lists above and add custom blocking rules.

The allowlist approach is preferable: it lets legitimate crawlers pass through and blocks any impersonating request that comes from other IPs.

However, note that IP addresses can also be spoofed; when both the bot’s user agent and its IP are spoofed, you won’t be able to block it this way.

Conclusion: Stay In Control Of AI Crawlers For Reliable AI Visibility

AI crawlers are now part of our web ecosystem, and the bots listed here represent the major AI platforms currently indexing the web, although this list is likely to grow.

Check your server logs regularly to see what’s actually hitting your site, and make sure you don’t inadvertently block AI crawlers if visibility in AI search engines is important for your business. If you don’t want AI crawlers to access your content, block them via robots.txt using the user-agent name.
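For instance, a robots.txt that opts two training crawlers out while leaving a search-focused bot alone might look like the sketch below; adjust the user-agent names (taken from the list above) to match the bots you actually want to control:

```
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: OAI-SearchBot
Allow: /
```

Remember that robots.txt is advisory: well-behaved crawlers honor it, but impersonators won’t, which is why the IP verification above still matters.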

We’ll keep this list updated as new crawlers emerge and existing ones change, so bookmark this URL or revisit this article regularly to keep your AI crawler list up to date.


Featured Image: BestForBest/Shutterstock

SEO Pulse: Google Updates Console, Maps & AI Mode Flow via @sejournal, @MattGSouthern

Google packed a lot into this week, with Search Console picking up AI-powered configuration, Maps loosening its real-name rule for reviews, and a new test nudging more people from AI Overviews into AI Mode.

Here’s what that means for you.

Google Search Console Tests AI-Powered Report Configuration

Google introduced an experimental AI feature in Search Console that lets you describe the report you want and have the tool build it for you.

The feature, announced in a Google blog post, lives inside the Search results Performance report. You can type something like “compare clicks from UK versus France,” and the system will set filters, comparisons, and metrics to match what it thinks you mean.

For now, the feature is limited to Search results data, while Discover, News, and video reports still work the way they always have. Google says it’s starting with “a limited set of websites” and will expand access based on feedback.

The update is about configuration, not new metrics. It can help you set up a table, but it will not change how you sort or export data, and it does not add separate reporting for AI Overviews or AI Mode.

Why SEOs Should Pay Attention

If you spend a lot of time rebuilding the same types of reports, this can save you some setup time. It’s easier to describe a comparison in one sentence than to remember which checkboxes and filters you used last month.

The tradeoff is that you still need to confirm what the AI actually did. When a view comes from a written request instead of a manual series of clicks, it’s easy for a small misinterpretation to slip through and show up in a deck or a client email.

This is not a replacement for understanding how your reports are put together. It also does nothing to answer a bigger question for SEO professionals about how much traffic is coming from Google’s AI surfaces.

What SEO Professionals Are Saying

On LinkedIn, independent SEO consultant Brodie Clark summed up the launch with:

“Whoa, Google Search Console just rolled out another gem: a new AI-powered configuration to analyse your search traffic. The new feature is designed to reduce the effort it takes for you to select, filter, and compare your data.”

He then walks through how it can apply filters, set comparisons, and pick metrics for common tasks.

Under the official Search Central post, one commenter joked about the gap between configuration and data:

“GSC: ‘Describe the dataview you want to see’ Me: ‘Show me how much traffic I receive from AI overviews and AI mode’ :’)”

The overall mood is that this is a genuine quality-of-life improvement, but many SEO professionals would still rather get first-class reporting for AI Overviews and AI Mode than another way to slice existing Search results data.

Read our full coverage: Google Adds AI-Powered Configuration To Search Console

Google Maps Reviews No Longer Require Real Names

Google Maps now lets people leave reviews under a custom display name and profile picture instead of their real Google Account name. The change rolled out globally and is documented in recent Google Maps updates.

You set this up in the Contributions section of your profile. Once you choose a display name and avatar, that identity appears on new reviews and can be applied to older ones if you edit them, while Google still ties everything back to a real account with a full activity history.

The change is more than cosmetic because review identity shapes how people interpret trust and intent when they scan a local business profile.

Why SEOs Should Pay Attention

Reviews remain one of the strongest local ranking signals, based on Whitespark’s Local Search Ranking Factors survey. When names turn into nicknames, it shifts how business owners and customers read that feedback.

For local businesses, it becomes harder to recognize reviewers at a glance, review audits feel more manual because names are less useful, and owners may feel they have less visibility into who is talking about them, even though Google still sees the underlying accounts.

If you manage local clients, you will likely spend time explaining that this doesn’t make reviews truly anonymous, and that review solicitation and response strategies still matter.

What Local SEO Professionals Are Saying

In a LinkedIn post, Darren Shaw, founder of Whitespark, tried to calm some of the panic:

“Hot take: Everyone is freaking out that anonymous Google reviews will cause a surge in fake review spam, but I don’t think so.”

He points out that anyone determined to leave fake reviews can already create throwaway accounts, and that:

“Anonymous display names ≠ anonymous accounts”

Google still sees device data, behavior patterns, and full contribution history. In his view, the bigger story is that this change lowers the barrier for honest feedback in “embarrassed consumer” categories like criminal defense, rehab, and therapy, where people do not want their real names in search results.

The comments add useful nuance. Curtis Boyd expects “an increase in both 5 star reviews for ‘embarrassed consumer industries’ and correspondingly – 1 star reviews, across all industries as google makes it easier to hide identity.”

Taken together, the thread suggests you should watch for changes in review volume and rating mix, especially in sensitive verticals, without assuming this update alone will cause a sudden spike in spam.

Read our full coverage: Google Maps Lets Users Post Reviews Using Nicknames

Google Tests Seamless AI Overviews To AI Mode Transition

Google is testing a new mobile flow that sends people straight from AI Overviews into AI Mode when they tap “Show more,” based on a post from Robby Stein, VP of Product for Google Search.

In the examples Google has shown, you see an AI Overview at the top of the results page. When you expand it, an “Ask anything” bar appears at the bottom, and typing into that bar opens AI Mode with your original query pulled into a chat thread.

The test is limited to mobile and to countries where AI Mode is already available, and Google hasn’t said how long it will run or when it might roll out more broadly.

Why SEOs Should Pay Attention

This test blurs the line between AI Overviews as a SERP feature and AI Mode as a separate product. If it sticks, someone who sees your content cited in an Overview has a clear path to keep asking follow-up questions inside AI Mode instead of scrolling down to organic results.

On mobile, where this is running first, the effect is stronger because screen space is tight. A prominent “Ask anything” bar at the bottom of the screen gives people an obvious option that doesn’t involve hunting for blue links underneath ads, shopping units, and other features.

If your pages show up in AI Overviews today, it’s worth watching mobile traffic and AI-related impressions so you have before-and-after data if this behavior expands.

What SEO Professionals Are Saying

In a widely shared LinkedIn post, Lily Ray, VP of SEO Strategy & Research at Amsive, wrote:

“Google announced today that they’ll be testing a new way for users to click directly into AI Mode via AI Overviews.”

She notes that many people will likely expect “Show more” to lead back to traditional results, not into a chat interface, and ties the test to the broader state of the results page, arguing that ads and new sponsored treatments are making it harder to find organic listings.

Ray’s most pointed line is:

“Compared to the current chaotic state of Google’s search results, AI Mode feels frictionless.”

Her view is that Google is making traditional search more cluttered while giving AI Mode a cleaner, easier experience.

Other SEO professionals in the comments give concrete examples. One notes that “the well hidden sponsored ads have gotten completely out of control lately,” describing a number one organic result that sits below “5–6 sponsored ads.” Another says they have “been working with SEO since 2007” and only recently had to pause before clicking on a result because they were not sure whether it was organic or an ad.

There’s also frustration with AI Mode’s limits. One commenter describes how the context window “just suddenly refreshes and forgets everything after about 10 prompts/turns,” which makes longer research sessions difficult even as the entry point gets smoother.

Overall, the thread reads as a warning that AI Mode may feel cleaner but also keeps people on Google, and that this test is one more step in nudging searchers toward that experience.

Read our full coverage: Google Connects AI Overviews To AI Mode On Mobile

Theme Of The Week: Google Tightens Its Grip On The Journey

All three updates are pulling in the same direction: More of the search journey happens inside Google’s own interfaces.

Search Console’s AI configuration keeps you in the Performance report longer by taking some of the work out of report setup. Maps nicknames make it easier for people to speak freely, but on a platform where Google defines how identity is presented. The AI Overviews to AI Mode test turns follow-up questions into a chat that runs on Google’s terms rather than yours.

There are real usability wins in all of this, but also fewer clear moments where a searcher is nudged off Google and onto your site.



Featured Image: Pixel-Shot/Shutterstock

Why the grid relies on nuclear reactors in the winter

As many of us are ramping up with shopping, baking, and planning for the holiday season, nuclear power plants are also getting ready for one of their busiest seasons of the year.

Here in the US, nuclear reactors follow predictable seasonal trends. Summer and winter tend to see the highest electricity demand, so plant operators schedule maintenance and refueling for other parts of the year.

This scheduled regularity might seem mundane, but it’s quite the feat that operational reactors are as reliable and predictable as they are. It leaves some big shoes to fill for next-generation technology hoping to join the fleet in the next few years.

Generally, nuclear reactors operate at constant levels, as close to full capacity as possible. In 2024, for commercial reactors worldwide, the average capacity factor—the ratio of actual energy output to the theoretical maximum—was 83%. North America rang in at an average of about 90%.

(I’ll note here that it’s not always fair to just look at this number to compare different kinds of power plants—natural-gas plants can have lower capacity factors, but it’s mostly because they’re more likely to be intentionally turned on and off to help meet uneven demand.)
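The capacity-factor ratio mentioned above is simple arithmetic; a small sketch makes the definition concrete. The reactor size and output figures below are hypothetical, chosen to land near the North American average:

```python
def capacity_factor(actual_mwh: float, capacity_mw: float, hours: float) -> float:
    """Ratio of actual energy output to the theoretical maximum over a period."""
    theoretical_max_mwh = capacity_mw * hours
    return actual_mwh / theoretical_max_mwh

# A hypothetical 1,000 MW reactor producing 7,884,000 MWh over a year (8,760 hours)
cf = capacity_factor(7_884_000, 1_000, 8_760)
print(f"{cf:.0%}")  # 90%
```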

Those high capacity factors also undersell the fleet’s true reliability—a lot of the downtime is scheduled. Reactors need to refuel every 18 to 24 months, and operators tend to schedule those outages for the spring and fall, when electricity demand isn’t as high as when we’re all running our air conditioners or heaters at full tilt.

Take a look at this chart of nuclear outages from the US Energy Information Administration. There are some days, especially at the height of summer, when outages are low, and nearly all commercial reactors in the US are operating at nearly full capacity. On July 28 of this year, the fleet was operating at 99.6%. Compare that with the 77.6% of capacity on October 18, as reactors were taken offline for refueling and maintenance. Now we’re heading into another busy season, when reactors are coming back online and shutdowns are entering another low point.

That’s not to say all outages are planned. At the Sequoyah nuclear power plant in Tennessee, a generator failure in July 2024 took one of two reactors offline, an outage that lasted nearly a year. (The utility also did some maintenance during that time to extend the life of the plant.) Then, just days after that reactor started back up, the entire plant had to shut down because of low water levels.

And who can forget the incident earlier this year when jellyfish wreaked havoc on not one but two nuclear power plants in France? In the second instance, the squishy creatures got into the filters of equipment that sucks water out of the English Channel for cooling at the Paluel nuclear plant. They forced the plant to cut output by nearly half, though it was restored within days.

Barring jellyfish disasters and occasional maintenance, the global nuclear fleet operates quite reliably. That wasn’t always the case, though. In the 1970s, reactors operated at an average capacity factor of just 60%. They were shut down nearly as often as they were running.

The fleet of reactors today has benefited from decades of experience. Now we’re seeing a growing pool of companies aiming to bring new technologies to the nuclear industry.

Next-generation reactors that use new materials for fuel or cooling will be able to borrow some lessons from the existing fleet, but they’ll also face novel challenges.

That could mean early demonstration reactors aren’t as reliable as the current commercial fleet at first. “First-of-a-kind nuclear, just like with any other first-of-a-kind technologies, is very challenging,” says Koroush Shirvan, a professor of nuclear science and engineering at MIT.

That means it will probably take time for molten-salt reactors, small modular reactors, or any of the other designs out there to overcome technical hurdles and settle into their own rhythm. It’s taken decades to get to a place where we take it for granted that the nuclear fleet can follow a neat seasonal curve based on electricity demand.

There will always be hurricanes and electrical failures and jellyfish invasions that cause some unexpected problems and force nuclear plants (or any power plants, for that matter) to shut down. But overall, the fleet today operates at an extremely high level of consistency. One of the major challenges ahead for next-generation technologies will be proving that they can do the same.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

How AI is uncovering hidden geothermal energy resources

Sometimes geothermal hot spots are obvious, marked by geysers and hot springs on the planet’s surface. But in other places, they’re obscured thousands of feet underground. Now AI could help uncover these hidden pockets of potential power.

A startup company called Zanskar announced today that it’s used AI and other advanced computational methods to uncover a blind geothermal system—meaning there aren’t signs of it on the surface—in the western Nevada desert. The company says it’s the first blind system that’s been identified and confirmed to be a commercial prospect in over 30 years. 

Historically, finding new sites for geothermal power was a matter of brute force. Companies spent a lot of time and money drilling deep wells, looking for places where it made sense to build a plant.

Zanskar’s approach is more precise. With advancements in AI, the company aims to “solve this problem that had been unsolvable for decades, and go and finally find those resources and prove that they’re way bigger than previously thought,” says Carl Hoiland, the company’s cofounder and CEO.  

To support a successful geothermal power plant, a site needs high temperatures at an accessible depth and space for fluid to move through the rock and deliver heat. In the case of the new site, which the company calls Big Blind, the prize is a reservoir that reaches 250 °F at about 2,700 feet below the surface.

As electricity demand rises around the world, geothermal systems like this one could provide a source of constant power without emitting the greenhouse gases that cause climate change. 

The company has used its technology to identify many potential hot spots. “We have dozens of sites that look just like this,” says Joel Edwards, Zanskar’s cofounder and CTO. But for Big Blind, the team has done the fieldwork to confirm its model’s predictions.

The first step to identifying a new site is to use regional AI models to search large areas. The team trains models on known hot spots and on simulations it creates. Then it feeds in geological, satellite, and other types of data, including information about fault lines. The models can then predict where potential hot spots might be.

One strength of using AI for this task is that it can handle the immense complexity of the information at hand. “If there’s something learnable in the earth, even if it’s a very complex phenomenon that’s hard for us humans to understand, neural nets are capable of learning that, if given enough data,” Hoiland says. 

Once models identify a potential hot spot, a field crew heads to the site, which might be roughly 100 square miles or so, and collects additional information through techniques that include drilling shallow holes to look for elevated underground temperatures.

In the case of Big Blind, this prospecting information gave the company enough confidence to purchase a federal lease, allowing it to develop a geothermal plant. With that lease secured, the team returned with large drill rigs and drilled thousands of feet down in July and August. The workers found the hot, permeable rock they expected.

Next they must secure permits to build and connect to the grid and line up the investments needed to build the plant. The team will also continue testing at the site, including long-term testing to track heat and water flow.

“There’s a tremendous need for methodology that can look for large-scale features,” says John McLennan, technical lead for resource management at Utah FORGE, a national lab field site for geothermal energy funded by the US Department of Energy. The new discovery is “promising,” McLennan adds.

Big Blind is Zanskar’s first confirmed discovery that wasn’t previously explored or developed, but the company has used its tools for other geothermal exploration projects. Earlier this year, it announced a discovery at a site that had previously been explored by the industry but not developed. The company also purchased and revived a geothermal power plant in New Mexico.

And this could be just the beginning for Zanskar. As Edwards puts it, “This is the start of a wave of new, naturally occurring geothermal systems that will have enough heat in place to support power plants.”

The Download: LLM confessions, and tapping into geothermal hot spots

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

OpenAI has trained its LLM to confess to bad behavior

What’s new: OpenAI is testing a new way to expose the complicated processes at work inside large language models. Researchers at the company can make an LLM produce what they call a confession, in which the model explains how it carried out a task and (most of the time) own up to any bad behavior.

Why it matters: Figuring out why large language models do what they do—and in particular why they sometimes appear to lie, cheat, and deceive—is one of the hottest topics in AI right now. If this multitrillion-dollar technology is to be deployed as widely as its makers hope it will be, it must be made more trustworthy. OpenAI sees confessions as one step toward that goal. Read the full story.

—Will Douglas Heaven

How AI is uncovering hidden geothermal energy resources

Sometimes geothermal hot spots are obvious, marked by geysers and hot springs on Earth’s surface. But in other places, they’re obscured thousands of feet underground. Now AI could help uncover these hidden pockets of potential power.

A startup company called Zanskar announced today that it’s used AI and other advanced computational methods to uncover a blind geothermal system—meaning there aren’t signs of it on the surface—in the western Nevada desert. The company says it’s the first blind system that’s been identified and confirmed to be a commercial prospect in over 30 years. Read the full story.

—Casey Crownhart

Why the grid relies on nuclear reactors in the winter

In the US, nuclear reactors follow predictable seasonal trends. Summer and winter tend to see the highest electricity demand, so plant operators schedule maintenance and refueling for other parts of the year.

This scheduled regularity might seem mundane, but it’s quite the feat that operational reactors are as reliable and predictable as they are. Now we’re seeing a growing pool of companies aiming to bring new technologies to the nuclear industry. Read the full story.

—Casey Crownhart

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Donald Trump has scrapped Biden’s fuel efficiency requirements
It’s a major blow for green automobile initiatives. (NYT $)
+ Trump maintains that getting rid of the rules will drive down the price of cars. (Politico)

2 RFK Jr’s vaccine advisers may delay hepatitis B vaccines for babies
The shots are a key part in combating acute cases of the infection. (The Guardian)
+ Former FDA commissioners are worried by its current chief’s vaccine views. (Ars Technica)
+ Meanwhile, a fentanyl vaccine is being trialed in the Netherlands. (Wired $)

3 Amazon is exploring building its own US delivery network
Which could mean axing its long-standing partnership with the US Postal Service. (WP $)

4 Republicans are defying Trump’s orders to block states from passing AI laws
They’re pushing back against plans to sneak the rule into an annual defense bill. (The Hill)
+ Trump has been pressuring them to fall in line for months. (Ars Technica)
+ Congress killed an attempt to stop states regulating AI back in July. (CNN)

5 Wikipedia is exploring AI licensing deals
It’s a bid to monetize AI firms’ heavy reliance on its web pages. (Reuters)
+ How AI and Wikipedia have sent vulnerable languages into a doom spiral. (MIT Technology Review)

6 OpenAI is looking to the stars—and beyond
Sam Altman is reportedly interested in acquiring or partnering with a rocket company. (WSJ $)

7 What we can learn from wildfires
This year’s Dragon Bravo fire defied predictive modelling. But why? (New Yorker $)
+ How AI can help spot wildfires. (MIT Technology Review)

8 What’s behind America’s falling birth rates?
It’s remarkably hard to say. (Undark)

9 Researchers are studying whether brain rot is actually real 🧠
Including whether its effects could be permanent. (NBC News)

10 YouTuber Mr Beast is planning to launch a mobile phone service
Beast Mobile, anyone? (Insider $)
+ The New York Stock Exchange could be next in his sights. (TechCrunch)

Quote of the day

“I think there are some players who are YOLO-ing.”

—Anthropic CEO Dario Amodei suggests some rival AI companies are veering into risky spending territory, Bloomberg reports.

One more thing

The quest to show that biological sex matters in the immune system

For years, microbiologist Sabra Klein has painstakingly made the case that sex—defined by biological attributes such as our sex chromosomes, sex hormones, and reproductive tissues—can influence immune responses.

Klein and others have shown how and why male and female immune systems respond differently to the flu virus, HIV, and certain cancer therapies, and why most women receive greater protection from vaccines but are also more likely to get severe asthma and autoimmune disorders.

Klein has helped spearhead a shift in immunology, a field that long thought sex differences didn’t matter—and she’s set her sights on pushing the field of sex differences even further. Read the full story.

—Sandeep Ravindran

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Digital artist Beeple’s latest Art Basel show features robotic dogs of Elon Musk, Jeff Bezos, and Mark Zuckerberg pooping out NFTs 💩
+ If you’ve always dreamed of seeing the Northern Lights, here’s your best bet at doing so.
+ Check out this fun timeline of fashion’s hottest venues.
+ Why monkeys in ancient Roman times had pet piglets 🐖🐒

Delivering securely on data and AI strategy 

Most organizations feel the imperative to keep pace with continuing advances in AI capabilities, as highlighted in a recent MIT Technology Review Insights report. That clearly has security implications, particularly as organizations navigate a surge in the volume, velocity, and variety of security data. This explosion of data, coupled with fragmented toolchains, is making it increasingly difficult for security and data teams to maintain a proactive and unified security posture. 

Data and AI teams must move rapidly to deliver the desired business results, but they must do so without compromising security and governance. As they deploy more intelligent and powerful AI capabilities, proactive threat detection and response against the expanded attack surface, insider threats, and supply chain vulnerabilities must remain paramount. “I’m passionate about cybersecurity not slowing us down,” says Melody Hildebrandt, chief technology officer at Fox Corporation, “but I also own cybersecurity strategy. So I’m also passionate about us not introducing security vulnerabilities.” 

That’s getting more challenging, says Nithin Ramachandran, who is global vice president for data and AI at industrial and consumer products manufacturer 3M. “Our experience with generative AI has shown that we need to be looking at security differently than before,” he says. “With every tool we deploy, we look not just at its functionality but also its security posture. The latter is now what we lead with.” 

Our survey of 800 technology executives (including 100 chief information security officers), conducted in June 2025, shows that many organizations struggle to strike this balance. 

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.