Navigating Time Zone Differences: Scheduling Ads For Maximum Impact via @sejournal, @brookeosmundson

Ad scheduling is a fundamental setting in Google Ads and Microsoft Ads, but when managing campaigns across multiple time zones, it becomes more complex.

Standard scheduling tactics may not cut it if you’re advertising internationally or running campaigns across regions with different peak engagement times.

Poorly timed ads can lead to wasted budget, lower conversion rates, and missed opportunities.

This article goes beyond the basics to cover next-level strategies for scheduling ads effectively across different time zones.

We’ll explore techniques such as localized scheduling, data-driven adjustments, and automation to maximize campaign performance.

Understanding Time Zone Challenges In PPC

When advertising across multiple regions, time zone discrepancies can create challenges that impact ad delivery, engagement, and conversions.

A common pitfall is assuming that a single campaign schedule will work universally. In reality, what works in one location might be completely ineffective in another.

For example, if your Google Ads account is set to Eastern Time but your target audience is primarily on the West Coast, your ads might be running during off-hours, leading to suboptimal performance.

International campaigns require even more diligence to consider local business hours and consumer behavior patterns.
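To see that mismatch concretely, the gap between a local audience window and the account's time zone can be computed with Python's standard `zoneinfo` module. This is a minimal sketch; the function name and the June 2025 date are illustrative, not part of any ad platform's API:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def to_account_time(hour, day, local_tz, account_tz):
    """Convert a local-time hour on a given date to the ad account's time zone."""
    local = datetime(2025, 6, day, hour, tzinfo=ZoneInfo(local_tz))
    return local.astimezone(ZoneInfo(account_tz)).hour

# A 9 a.m.-5 p.m. Pacific audience window must be entered as
# 12 p.m.-8 p.m. in an account set to Eastern Time.
start = to_account_time(9, 2, "America/Los_Angeles", "America/New_York")
end = to_account_time(17, 2, "America/Los_Angeles", "America/New_York")
print(start, end)  # 12 20
```

Doing this conversion explicitly, rather than eyeballing offsets, avoids the off-by-hours scheduling errors described above.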

Another factor is peak engagement hours. While lunchtime or evening hours may be prime time in one country, those same hours could be completely irrelevant in another.

Understanding these nuances is essential for optimizing your ad scheduling strategy.

Advanced Strategies For Scheduling Ads Across Time Zones

Successfully managing ad scheduling across time zones requires a thoughtful approach that goes beyond the basics.

While many advertisers set simple schedules and hope for the best, the real wins come from leveraging automation, data-driven insights, and strategic segmentation.

Whether you’re running campaigns domestically across U.S. time zones or managing international PPC efforts, applying advanced techniques can help ensure your ads are served at the right time for the right audience.

Segmenting Campaigns By Time Zone For Better Control

If you’re running campaigns across multiple time zones, one of the best ways to stay in control is by creating separate campaigns for different regions.

This lets you adjust ad schedules, budgets, and bidding strategies based on local peak performance times rather than forcing a single schedule to work for every location.

For example, an ecommerce brand serving customers in the U.S. and Europe might run separate campaigns for each region.

The U.S. campaign can focus on morning and evening hours when engagement peaks, while the European campaign targets prime shopping hours in local time zones.

While this approach adds complexity, the benefits far outweigh the extra management effort. Automating adjustments with rules and scripts can help streamline this process, ensuring each campaign is optimized without constant manual oversight.

Leveraging Automated Bidding Over Fixed Schedules

Manual ad scheduling has its place, but automated bid strategies like Target ROAS or Maximize Conversions allow you to optimize bids dynamically rather than setting fixed hours.

These AI-driven approaches adjust bids in real time, ensuring ads appear when conversion probability is highest, regardless of time zone differences.

For instance, if data shows that users in one region convert at a higher rate between 9 a.m. and 11 a.m. but another region performs better in the evening, automated bidding will allocate more budget when it matters most.

Instead of manually adjusting bids every few weeks, let machine learning do the heavy lifting.

Optimizing Scheduling Based On Market-Specific Peak Hours

Different markets have different user behaviors, so it’s crucial to base your scheduling decisions on actual performance data rather than assumptions.

Google Ads’ ad schedule reports and Microsoft Ads’ time-of-day insights can help you identify when users in each region are most active.

For example, if analytics reveal that North American users are most engaged in the evening while European users peak in the morning, your campaigns should reflect that.

Instead of blanketing all markets with a generic ad schedule, tailor your approach based on real-time engagement trends.
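The same idea can be sketched in code: given hourly report rows per region, flag the hours whose conversion rate clears a threshold. The row format, region names, and the 4% threshold here are hypothetical stand-ins for whatever your ad schedule report exports:

```python
# Hypothetical hourly report rows: (region, hour, clicks, conversions).
rows = [
    ("NA", 9, 500, 10), ("NA", 20, 450, 27),
    ("EU", 9, 600, 30), ("EU", 20, 400, 8),
]

def peak_hours(rows, region, min_conv_rate=0.04):
    """Return the hours whose conversion rate clears the threshold."""
    return sorted(
        hour for r, hour, clicks, conv in rows
        if r == region and clicks and conv / clicks >= min_conv_rate
    )

print(peak_hours(rows, "NA"))  # [20]
print(peak_hours(rows, "EU"))  # [9]
```

With this sample data, North America peaks in the evening while Europe peaks in the morning, so each region's schedule should differ accordingly.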

Using Labels To Manage And Adjust Scheduling

One often overlooked yet powerful feature in Google and Microsoft Ads is the use of labels.

Labels let you group campaigns, ad groups, or keywords into easily manageable categories, making it simpler to track and adjust schedules.

For example:

  • Tagging campaigns by region allows for easy bulk adjustments when shifting schedules due to seasonal changes or promotional events.
  • Labeling time-sensitive ads ensures that you can quickly pause or resume campaigns as needed without sifting through dozens of settings.
  • Using automation scripts with labels enables automatic bid adjustments or scheduling changes based on real-time performance.

By applying labels effectively, you can streamline scheduling changes without manually editing each campaign, saving time and reducing errors.

Automating Scheduling Adjustments With Scripts

If you’re managing multiple time zones, Google Ads scripts can be a game-changer.

Rather than manually adjusting schedules, scripts can dynamically modify bids based on real-time performance data.

For example, a script could be set up to boost bids by 20% during high-converting hours and reduce them by 10% when conversions drop. This keeps campaigns optimized while freeing up time to focus on strategy rather than daily bid adjustments.

Scripts also work well with labels. You can program scripts to modify bid strategies for campaigns tagged with specific labels, ensuring changes are applied only to relevant ads.
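Google Ads scripts themselves are written in JavaScript against the AdsApp API, but the underlying rule is simple enough to sketch in Python. The campaign records, label names, and thresholds below are invented for illustration; only the +20%/-10% rule comes from the text above:

```python
def bid_multiplier(conv_rate, high=0.05, low=0.01):
    """+20% in high-converting hours, -10% when conversions drop, else unchanged."""
    if conv_rate >= high:
        return 1.20
    if conv_rate <= low:
        return 0.90
    return 1.00

campaigns = [
    {"name": "US-West", "labels": {"tz:PT"}, "bid": 2.00, "conv_rate": 0.06},
    {"name": "EU-Central", "labels": {"tz:CET"}, "bid": 1.50, "conv_rate": 0.005},
    {"name": "Brand", "labels": set(), "bid": 1.00, "conv_rate": 0.03},
]

# Apply adjustments only to campaigns carrying a time-zone label,
# mirroring how a script can target labelled campaigns.
for c in campaigns:
    if any(label.startswith("tz:") for label in c["labels"]):
        c["bid"] = round(c["bid"] * bid_multiplier(c["conv_rate"]), 2)

print([(c["name"], c["bid"]) for c in campaigns])
# [('US-West', 2.4), ('EU-Central', 1.35), ('Brand', 1.0)]
```

The unlabelled "Brand" campaign is left alone, which is exactly the safety that label filtering buys you in a real script.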

Adjusting For Daylight Saving Time Changes

Another scheduling headache is Daylight Saving Time (DST), which varies by country and can cause misalignment in ad schedules.

A campaign that ran perfectly last month might suddenly be off by an hour if a region switches to DST.

To avoid this, maintain a calendar of DST changes in key markets and adjust schedules proactively.

Another option is using automated rules or machine learning-based bid adjustments to handle these shifts without manual intervention.
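One reliable defense is to express schedules in IANA time zones rather than fixed UTC offsets, since zone databases already encode every region's DST rules. This sketch shows the offset jump that would silently shift a UTC-pinned schedule by an hour:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def utc_offset_hours(tz, year, month, day):
    """UTC offset (in hours) for noon local time on the given date."""
    dt = datetime(year, month, day, 12, tzinfo=ZoneInfo(tz))
    return dt.utcoffset().total_seconds() / 3600

# New York shifts from UTC-5 to UTC-4 on March 9, 2025; a schedule
# pinned to UTC would suddenly run an hour late in local terms.
print(utc_offset_hours("America/New_York", 2025, 3, 8))   # -5.0
print(utc_offset_hours("America/New_York", 2025, 3, 10))  # -4.0
```

Any tooling that derives schedules this way follows DST automatically, leaving the calendar of manual adjustments as a backstop.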

Budget Allocation Based On Regional Performance Trends

Rather than splitting your budget evenly across all time zones, consider allocating more spend to the highest-performing regions based on historical data.

By analyzing performance reports, you can determine which locations deliver the best ROI and adjust budgets accordingly.

For instance, if your data shows that conversions peak in the late evening for Pacific time zone users but decline in the early morning for Eastern time users, shift more budget toward the stronger-performing time periods.

This approach ensures ad spend is being used effectively rather than wasted on time slots that don’t generate conversions.
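A simple way to operationalize this is proportional allocation: split the total budget according to each region's historical conversion value. The region names and figures below are hypothetical:

```python
def allocate_budget(total, performance):
    """Split a total budget in proportion to each region's historical conversion value."""
    total_value = sum(performance.values())
    return {region: round(total * value / total_value, 2)
            for region, value in performance.items()}

# Hypothetical conversion value by region over the last 30 days.
print(allocate_budget(1000, {"Eastern": 3000, "Central": 1000, "Pacific": 6000}))
# {'Eastern': 300.0, 'Central': 100.0, 'Pacific': 600.0}
```

In practice you might dampen the proportions or set floors so weaker regions still get enough spend to generate data, but the proportional split is the starting point.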

Mastering Ad Scheduling For Global Success

Effectively navigating time zone differences in Google and Microsoft Ads isn’t just about setting a schedule and forgetting about it.


A winning strategy requires a mix of localized segmentation, automation, and continuous data-driven adjustments.

Instead of seeing time zone variations as a challenge, think of them as an opportunity to refine and optimize your strategy.

By leveraging campaign segmentation, smart bidding, labels, and scripts, you’ll gain greater control over when and where your ads appear – without unnecessary budget waste.

At the end of the day, great PPC management isn’t about simply keeping the lights on. It’s about making smart, strategic moves that maximize impact.

Test, tweak, and refine your approach, and you’ll see the results in both efficiency and performance.


The AI Hype Index: DeepSeek mania, Israel’s spying tool, and cheating at chess

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry.

While AI models are certainly capable of creating interesting and sometimes entertaining material, their output isn’t necessarily useful. Google DeepMind is hoping that its new robotics model could make machines more receptive to verbal commands, paving the way for us to simply speak orders to them aloud. Elsewhere, the Chinese startup Monica has created Manus, which it claims is the very first general AI agent to complete truly useful tasks. And burnt-out coders are allowing AI to take the wheel entirely in a new practice dubbed “vibe coding.”

China built hundreds of AI data centers to catch the AI boom. Now many stand unused.

A year or so ago, Xiao Li was seeing floods of Nvidia chip deals on WeChat. A real estate contractor turned data center project manager, he had pivoted to AI infrastructure in 2023, drawn by the promise of China’s AI craze. 

At that time, traders in his circle bragged about securing shipments of high-performing Nvidia GPUs that were subject to US export restrictions. Many were smuggled through overseas channels to Shenzhen. At the height of the demand, a single Nvidia H100 chip, a kind that is essential to training AI models, could sell for up to 200,000 yuan ($28,000) on the black market. 

Now, his WeChat feed and industry group chats tell a different story. Traders are more discreet in their dealings, and prices have come back down to earth. Meanwhile, two data center projects Li is familiar with are struggling to secure further funding from investors who anticipate poor returns, forcing project leads to sell off surplus GPUs. “It seems like everyone is selling, but few are buying,” he says.

Just months ago, a boom in data center construction was at its height, fueled by both government and private investors. However, many newly built facilities are now sitting empty. According to people on the ground who spoke to MIT Technology Review—including contractors, an executive at a GPU server company, and project managers—most of the companies running these data centers are struggling to stay afloat. The local Chinese outlets Jiazi Guangnian and 36Kr report that up to 80% of China’s newly built computing resources remain unused.

Renting out GPUs to companies that need them for training AI models—the main business model for the new wave of data centers—was once seen as a sure bet. But with the rise of DeepSeek and a sudden change in the economics around AI, the industry is faltering.

“The growing pain China’s AI industry is going through is largely a result of inexperienced players—corporations and local governments—jumping on the hype train, building facilities that aren’t optimal for today’s need,” says Jimmy Goodrich, senior advisor for technology to the RAND Corporation. 

The upshot is that projects are failing, energy is being wasted, and data centers have become “distressed assets” whose investors are keen to unload them at below-market rates. The situation may eventually prompt government intervention, he says: “The Chinese government is likely to step in, take over, and hand them off to more capable operators.”

A chaotic building boom

When ChatGPT exploded onto the scene in late 2022, the response in China was swift. The central government designated AI infrastructure as a national priority, urging local governments to accelerate the development of so-called smart computing centers—a term coined to describe AI-focused data centers.

In 2023 and 2024, over 500 new data center projects were announced everywhere from Inner Mongolia to Guangdong, according to KZ Consulting, a market research firm. According to the China Communications Industry Association Data Center Committee, a state-affiliated industry association, at least 150 of the newly built data centers were finished and running by the end of 2024. State-owned enterprises, publicly traded firms, and state-affiliated funds lined up to invest in them, hoping to position themselves as AI front-runners. Local governments heavily promoted them in the hope they’d stimulate the economy and establish their region as a key AI hub. 

However, as these costly construction projects continue, the Chinese frenzy over large language models is losing momentum. In 2024 alone, over 144 companies registered with the Cyberspace Administration of China—the country’s central internet regulator—to develop their own LLMs. Yet according to the Economic Observer, a Chinese publication, only about 10% of those companies were still actively investing in large-scale model training by the end of the year.

China’s political system is highly centralized, with local government officials typically moving up the ranks through regional appointments. As a result, many local leaders prioritize short-term economic projects that demonstrate quick results—often to gain favor with higher-ups—rather than long-term development. Large, high-profile infrastructure projects have long been a tool for local officials to boost their political careers.

The post-pandemic economic downturn only intensified this dynamic. With China’s real estate sector—once the backbone of local economies—slumping for the first time in decades, officials scrambled to find alternative growth drivers. In the meantime, the country’s once high-flying internet industry was also entering a period of stagnation. In this vacuum, AI infrastructure became the new stimulus of choice.

“AI felt like a shot of adrenaline,” says Li. “A lot of money that used to flow into real estate is now going into AI data centers.”

By 2023, major corporations—many of them with little prior experience in AI—began partnering with local governments to capitalize on the trend. Some saw AI infrastructure as a way to justify business expansion or boost stock prices, says Fang Cunbao, a data center project manager based in Beijing. Among them were companies like Lotus, an MSG manufacturer, and Jinlun Technology, a textile firm—hardly the names one would associate with cutting-edge AI technology.

This gold-rush approach meant that the push to build AI data centers was largely driven from the top down, often with little regard for actual demand or technical feasibility, say Fang, Li, and multiple on-the-ground sources, who asked to speak anonymously for fear of political repercussions. Many projects were led by executives and investors with limited expertise in AI infrastructure, they say. In the rush to keep up, many were constructed hastily and fell short of industry standards. 

“Putting all these large clusters of chips together is a very difficult exercise, and there are very few companies or individuals who know how to do it at scale,” says Goodrich. “This is all really state-of-the-art computer engineering. I’d be surprised if most of these smaller players know how to do it. A lot of the freshly built data centers are quickly strung together and don’t offer the stability that a company like DeepSeek would want.”

To make matters worse, project leaders often relied on middlemen and brokers—some of whom exaggerated demand forecasts or manipulated procurement processes to pocket government subsidies, sources say. 

By the end of 2024, the excitement that once surrounded China’s data center boom was curdling into disappointment. The reason is simple: GPU rental is no longer a particularly lucrative business.

The DeepSeek reckoning

The business model of data centers is in theory straightforward: They make money by renting out GPU clusters to companies that need computing capacity for AI training. In reality, however, securing clients is proving difficult. Only a few top tech companies in China are now drawing heavily on computing power to train their AI models. Many smaller players have been giving up on pretraining their models or otherwise shifting their strategy since the rise of DeepSeek, which broke the internet with R1, its open-source reasoning model that matches the performance of ChatGPT o1 but was built at a fraction of its cost. 

“DeepSeek is a moment of reckoning for the Chinese AI industry. The burning question shifted from ‘Who can make the best large language model?’ to ‘Who can use them better?’” says Hancheng Cao, an assistant professor of information systems at Emory University. 

The rise of reasoning models like DeepSeek’s R1 and OpenAI’s ChatGPT o1 and o3 has also changed what businesses want from a data center. With this technology, most of the computing needs come from conducting step-by-step logical deductions in response to users’ queries, not from the process of training and creating the model in the first place. This reasoning process often yields better results but takes significantly more time. As a result, hardware with low latency (the time it takes for data to pass from one point on a network to another) is paramount. Data centers need to be located near major tech hubs to minimize transmission delays and ensure access to highly skilled operations and maintenance staff. 

This change means many data centers built in central, western, and rural China—where electricity and land are cheaper—are losing their allure to AI companies. In Zhengzhou, a city in Li’s home province of Henan, a newly built data center is even distributing free computing vouchers to local tech firms but still struggles to attract clients. 

Additionally, a lot of the new data centers that have sprung up in recent years were optimized for pretraining workloads—large, sustained computations run on massive data sets—rather than for inference, the process of running trained reasoning models to respond to user inputs in real time. Inference-friendly hardware differs from what’s traditionally used for large-scale AI training. 

GPUs like Nvidia H100 and A100 are designed for massive data processing, prioritizing speed and memory capacity. But as AI moves toward real-time reasoning, the industry seeks chips that are more efficient, responsive, and cost-effective. Even a minor miscalculation in infrastructure needs can render a data center suboptimal for the tasks clients require.

In these circumstances, the GPU rental price has dropped to an all-time low. A recent report from the Chinese media outlet Zhineng Yongxian said that an Nvidia H100 server configured with eight GPUs now rents for 75,000 yuan per month, down from highs of around 180,000. Some data centers would rather leave their facilities sitting empty than run the risk of losing even more money because they are so costly to run, says Fang: “The revenue from having a tiny part of the data center running simply wouldn’t cover the electricity and maintenance cost.”

“It’s paradoxical—China faces the highest acquisition costs for Nvidia chips, yet GPU leasing prices are extraordinarily low,” Li says. There’s an oversupply of computational power, especially in central and west China, but at the same time, there’s a shortage of cutting-edge chips. 

However, not all brokers were looking to make money from data centers in the first place. Instead, many were interested in gaming government benefits all along. Some operators exploit the sector for subsidized green electricity, obtaining permits to generate and sell power, according to Fang and some Chinese media reports. Instead of using the energy for AI workloads, they resell it back to the grid at a premium. In other cases, companies acquire land for data center development to qualify for state-backed loans and credits, leaving facilities unused while still benefiting from state funding, according to the local media outlet Jiazi Guangnian.

“Towards the end of 2024, no clear-headed contractor and broker in the market would still go into the business expecting direct profitability,” says Fang. “Everyone I met is leveraging the data center deal for something else the government could offer.”

A necessary evil

Despite the underutilization of data centers, China’s central government is still throwing its weight behind a push for AI infrastructure. In early 2025, it convened an AI industry symposium, emphasizing the importance of self-reliance in this technology. 

Major Chinese tech companies are taking note, making investments aligning with this national priority. Alibaba Group announced plans to invest over $50 billion in cloud computing and AI hardware infrastructure over the next three years, while ByteDance plans to invest around $20 billion in GPUs and data centers.

In the meantime, companies in the US are doing likewise. Major tech firms including OpenAI, SoftBank, and Oracle have teamed up on the Stargate initiative, which plans to invest up to $500 billion over the next four years to build advanced data centers and computing infrastructure. Given the AI competition between the two countries, experts say that China is unlikely to scale back its efforts. “If generative AI is going to be the killer technology, infrastructure is going to be the determinant of success,” says Goodrich, the tech policy advisor to RAND.

“The Chinese central government will likely see [underused data centers] as a necessary evil to develop an important capability, a growing pain of sorts. You have the failed projects and distressed assets, and the state will consolidate and clean it up. They see the end, not the means,” Goodrich says.

Demand remains strong for Nvidia chips, and especially the H20 chip, which was custom-designed for the Chinese market. One industry source, who requested not to be identified under his company policy, confirmed that the H20, a lighter, faster model optimized for AI inference, is currently the most popular Nvidia chip, followed by the H100, which continues to flow steadily into China even though sales are officially restricted by US sanctions. Some of the new demand is driven by companies deploying their own versions of DeepSeek’s open-source models.

For now, many data centers in China sit in limbo—built for a future that has yet to arrive. Whether they will find a second life remains uncertain. For Fang Cunbao, DeepSeek’s success has become a moment of reckoning, casting doubt on the assumption that an endless expansion of AI infrastructure guarantees progress.

That’s just a myth, he now realizes. At the start of this year, Fang decided to quit the data center industry altogether. “The market is too chaotic. The early adopters profited, but now it’s just people chasing policy loopholes,” he says. He’s decided to go into AI education next. 

“What stands between now and a future where AI is actually everywhere,” he says, “is not infrastructure anymore, but solid plans to deploy the technology.” 

The Download: China’s empty data centers, and OpenAI’s new practical image generator

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

China built hundreds of AI data centers to catch the AI boom. Now many stand unused.

Just months ago, China’s boom in data center construction was at its height, fueled by both government and private investors. Renting out GPUs to companies that need them for training AI models was once seen as a sure bet. 

But with the rise of DeepSeek and a sudden change in the economics around AI, the industry is faltering. Prices for GPUs are falling and many newly built facilities are now sitting empty. Read the full story to find out why.

—Caiwei Chen

OpenAI’s new image generator aims to be practical enough for designers and advertisers

What’s new? OpenAI has released a new image generator that’s designed less for typical surrealist AI art and more for highly controllable and practical creation of visuals—a sign that OpenAI thinks its tools are ready for use in fields like advertising and graphic design. 

Why it matters: While most AI models have been great at creating fantastical images or realistic deepfakes, they’ve been terrible at identifying certain objects correctly and putting them in their proper place. OpenAI’s new model makes progress on technical issues that have plagued AI image generators for years. 

But in entering this domain, OpenAI has two paths, both difficult. Read the full story.

The AI Hype Index: DeepSeek mania, Israel’s spying tool, and cheating at chess

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry. Take a look at the full index here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The Trump administration has barred 80 companies from buying US tech
The list of primarily Chinese firms is forbidden from buying American chips. (NYT $)
+ The list included a server maker that buys chips from Nvidia. (WSJ $)
+ China disputed claims the firms were seeking knowledge for military purposes. (AP News)

2 A DOGE staffer provided tech support to a cybercrime ring
And bragged about trafficking in stolen data and cyberstalking an FBI agent. (Reuters)
+ Elon Musk could use DOGE’s cuts to steer contracts towards his own firms. (The Guardian)
+ Can AI help DOGE slash government budgets? It’s complex. (MIT Technology Review)

3 The US government has hired a vaccine skeptic to conduct a major vaccine study
The long-discredited David Geier will oversee analysis of whether jabs cause autism. (WP $)
+ The White House appears to be targeting mRNA vaccines. (FT $)
+ Why childhood vaccines are a public health success story. (MIT Technology Review)

4 Microsoft has unveiled two deep reasoning Copilot AI agents
The two agents, called Researcher and Analyst, are designed to do just that. (The Verge)
+ How ChatGPT search paves the way for AI agents. (MIT Technology Review)

5 Inside the rise of Chinese hacking
The cyber threat posed by the country is increasingly sophisticated—and aggressive. (Economist $)

6 Google has instructed workers to remove DEI terms from their work
The company has offered up alternative language to use in its place. (The Information $)

7 Synthesia is offering shares to reward human actors for its AI avatars
The compensation scheme is the first of its kind. (FT $)
+ Synthesia’s hyperrealistic deepfakes will soon have full bodies. (MIT Technology Review)

8 China’s RedNote is working to keep its influx of TikTok “refugees”
To do so, it’ll need to expand its user base outside the Chinese diaspora. (Rest of World)

9 This operating system is designed to keep running during civilization’s collapse
Collapse OS is designed to give us access to lost knowledge in case of disaster. (Wired $)

10 No one really knows how long people live
Longevity research is bogged down in bad record-keeping. (NY Mag $)
+ The quest to legitimize longevity medicine. (MIT Technology Review)

Quote of the day

“There are so many great reasons to be on Signal. Now including the opportunity for the vice president of the United States of America to randomly add you to a group chat for coordination of sensitive military operations.”

—Moxie Marlinspike, founder of secure messaging platform Signal, pokes fun at the fallout surrounding US officials accidentally adding a journalist to a private military group chat in a post on X.

The big story

Longevity enthusiasts want to create their own independent state. They’re eyeing Rhode Island.

May 2023

—Jessica Hamzelou

I recently traveled to Montenegro for a gathering of longevity enthusiasts. All the attendees were super friendly, and the sense of optimism was palpable. They’re all confident we’ll be able to find a way to slow or reverse aging—and they have a bold plan to speed up progress.

Around 780 of these people have created a “pop-up city” that hopes to circumvent the traditional process of clinical trials. They want to create an independent state where like-minded innovators can work together in an all-new jurisdiction that gives them free rein to self-experiment with unproven drugs. Welcome to Zuzalu. Read the full story.

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Good news—it turns out that fungi are actually pretty good at saving imperiled plants.
+ Ever wondered what ancient Egyptian mummy remains smell like? These intrepid scientists found out.
+ Kudos to this terrible artist, who is a surprise smash hit.
+ Check out this handy guide to walking the path of everyday enlightenment.

Top Ecommerce Blogs Are More Than Blogs

Ecommerce brands understand blog posts can drive traffic, but traffic alone doesn’t pay the bills. Too often, brands invest in content that brings in visitors but fails to convert them into customers.

I have written, managed, and launched blogs as part of multichannel campaigns. I’ve seen how they can foster connections with shoppers beyond fleeting interactions.

Here’s how blogs engage prospects and fuel other marketing campaigns.

Why Ecommerce Blogs Fail

The best ecommerce blogs are sales tools, much more than search-engine checkboxes. Common mistakes include:

  • Uninspired posts that lack a strong brand voice.
  • Targeting high-traffic keywords without considering purchase intent.
  • Neglecting calls to action to guide readers to buy.

Glossier’s blog, “Into the Gloss,” avoids those pitfalls by blending editorial-style beauty content with product recommendations, although some posts don’t mention Glossier’s products at all. The conversational tone feels like advice from a friend, making it natural for readers to explore — and purchase.

Never write for search engines alone, and not every post needs to push a sale. Build trust by helping readers without nonstop product pitches.

“Into the Gloss” blends editorial-style beauty content with product recommendations.

The Buyer Journey

Successful ecommerce blogs map content to stages of the buyer’s journey:

  • Awareness. Thought leadership, industry trends, brand storytelling.
  • Consideration. Product comparisons and how-to posts.
  • Decision. Case studies, deep dives, and customer testimonials.

Made In’s blog addresses all three — awareness, consideration, decision — with posts that educate readers on kitchenware, cooking techniques, and recipes while introducing products naturally. The content helps prospects understand why the products are worth buying.

“The Made In Blog” educates readers on kitchenware, cooking techniques, and recipes while introducing products naturally.

Blogs Repurposed

An informative blog post should fuel email and social media campaigns.

Email, social media

Instead of separating content from email and social media, establish an ongoing content-marketing loop:

  • Editorial planning. Meetings with content and marketing teams ensure blog topics align with upcoming promotions, product launches, and customer pain points. What are common objections before purchase? What product categories generate the most support inquiries? These insights should shape blog topics.
  • Email segmentation. Segment recipients based on purchase history, average order value, or browsing behavior. For example, if a subscriber reads blog posts about skincare, send personalized product recommendations and exclusive discounts for those items.
  • Organic social. Extract blog posts into bite-sized social content for Instagram carousels, LinkedIn updates, or Pinterest tips.

Automated sequences

Blog posts can also fuel automated sequences, from email flows to retargeting ads, driving engagement and conversions.

  • Retargeting ads. Retarget blog post visitors who didn’t convert. Serve dynamic product ads or exclusive offers on Facebook, Instagram, or Google tailored to the topics they’ve read.
  • Welcome series. Introduce new subscribers to helpful content before pitching a product.
  • Browse abandonment emails. Follow up with visitors who viewed a product page with related FAQs and recommendations.
  • Customer retention. Leverage blog content in post-purchase email flows, such as “How to Get the Most Out of Your New [Product].” Brands with loyalty programs can create blog content exclusively for repeat buyers, offering early access to new products or behind-the-scenes insights.

AI Crawlers Are Reportedly Draining Site Resources & Skewing Analytics via @sejournal, @MattGSouthern

Website operators across the web are reporting increased activity from AI web crawlers. This surge raises concerns about site performance, analytics, and server resources.

These bots consume significant bandwidth to collect data for large language models, which could impact performance metrics relevant to search rankings.

Here’s what you need to know.

How AI Crawlers May Affect Site Performance

SEO professionals regularly optimize for traditional search engine crawlers, but the growing presence of AI crawlers from companies like OpenAI, Anthropic, and Amazon presents new technical considerations.

Several site operators have reported performance issues and increased server loads directly attributable to AI crawler activity.

“SourceHut continues to face disruptions due to aggressive LLM crawlers,” reported the git-hosting service on its status page.

In response, SourceHut has “unilaterally blocked several cloud providers, including GCP [Google Cloud] and [Microsoft] Azure, for the high volumes of bot traffic originating from their networks.”

Data from cloud hosting service Vercel shows the scale of this traffic: OpenAI’s GPTBot generated 569 million requests in a single month, while Anthropic’s Claude accounted for 370 million.

These AI crawlers represented about 20 percent of Google’s search crawler volume during the same period.

The Potential Impact On Analytics Data

Significant bot traffic can affect analytics data.

According to DoubleVerify, an ad metrics firm, “general invalid traffic – aka GIVT, bots that should not be counted as ad views – rose by 86 percent in the second half of 2024 due to AI crawlers.”

The firm noted that “a record 16 percent of GIVT from known-bot impressions in 2024 were generated by those that are associated with AI scrapers, such as GPTBot, ClaudeBot and AppleBot.”

The Read the Docs project found that blocking AI crawlers decreased their traffic by 75 percent, from 800GB to 200GB daily, saving approximately $1,500 per month in bandwidth costs.

Identifying AI Crawler Patterns

Understanding AI crawler behavior can help with traffic analysis.

What makes AI crawlers different from traditional bots is their frequency and depth of access. While search engine crawlers typically follow predictable patterns, AI crawlers exhibit more aggressive behaviors.

Dennis Schubert, who maintains infrastructure for the Diaspora social network, observed that AI crawlers “don’t just crawl a page once and then move on. Oh, no, they come back every 6 hours because lol why not.”

This repeated crawling multiplies the resource consumption, as the same pages are accessed repeatedly without a clear rationale.

Beyond frequency, AI crawlers are more thorough, exploring more content than typical visitors.

Drew DeVault, founder of SourceHut, noted that crawlers access “every page of every git log, and every commit in your repository,” which can be particularly resource-intensive for content-heavy sites.

While the high traffic volume is concerning, identifying and managing these crawlers presents additional challenges.

As crawler technology evolves, traditional blocking methods prove increasingly ineffective.

Software developer Xe Iaso noted, “It’s futile to block AI crawler bots because they lie, change their user agent, use residential IP addresses as proxies, and more.”

Balancing Visibility With Resource Management

Website owners and SEO professionals face a practical consideration: managing resource-intensive crawlers while maintaining visibility for legitimate search engines.

To determine if AI crawlers are significantly impacting your site:

  • Review server logs for unusual traffic patterns, especially from cloud provider IP ranges
  • Look for spikes in bandwidth usage that don’t correspond with user activity
  • Check for high traffic to resource-intensive pages like archives or API endpoints
  • Monitor for unusual patterns in your Core Web Vitals metrics
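The log review above can be sketched with a short script. This is a minimal illustration, not a production tool: the user-agent substrings and the combined-log-format regex are assumptions you'd adapt to your own server setup, and vendors change their crawler tokens over time.

```python
import re
from collections import Counter

# Substrings that identify common AI crawlers in user-agent strings.
# Illustrative only; check each vendor's docs for current tokens.
AI_BOTS = ["GPTBot", "ClaudeBot", "CCBot", "Amazonbot", "Bytespider"]

# Combined log format: ip - - [time] "request" status bytes "referrer" "user-agent"
LINE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d+ (?P<bytes>\d+) "[^"]*" "(?P<ua>[^"]*)"'
)

def crawler_report(lines):
    """Count requests and bandwidth per AI crawler found in access-log lines."""
    hits, bandwidth = Counter(), Counter()
    for line in lines:
        m = LINE.match(line)
        if not m:
            continue
        for bot in AI_BOTS:
            if bot in m.group("ua"):
                hits[bot] += 1
                bandwidth[bot] += int(m.group("bytes"))
    return hits, bandwidth

sample = [
    '1.2.3.4 - - [01/Mar/2025:10:00:00 +0000] "GET /blog HTTP/1.1" 200 5120 "-" "Mozilla/5.0 (compatible; GPTBot/1.0)"',
    '5.6.7.8 - - [01/Mar/2025:10:00:01 +0000] "GET / HTTP/1.1" 200 1024 "-" "Mozilla/5.0"',
]
hits, bw = crawler_report(sample)
print(hits["GPTBot"], bw["GPTBot"])  # 1 5120
```

Comparing these counts against your normal user traffic makes it easy to spot when crawler bandwidth is out of proportion to human activity.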

Several options are available for those impacted by excessive AI crawler traffic.

Google introduced a control called Google-Extended for the robots.txt file. Disallowing it stops a site's content from being used to train Google's Gemini and Vertex AI services while still allowing the site to show up in search results.
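In robots.txt, that opt-out is a short directive. The extra GPTBot rule below is an optional illustration of extending the same approach to another AI crawler; whether to block it depends on your own goals.

```
# Keep content out of Gemini/Vertex AI training; normal Googlebot crawling is unaffected
User-agent: Google-Extended
Disallow: /

# Optional: also opt out of OpenAI's crawler
User-agent: GPTBot
Disallow: /
```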

Cloudflare recently announced “AI Labyrinth,” explaining, “When we detect unauthorized crawling, rather than blocking the request, we will link to a series of AI-generated pages that are convincing enough to entice a crawler to traverse them.”

Looking Ahead

As AI integrates into search and discovery, SEO professionals should manage crawlers carefully.

Here are some practical next steps:

  1. Audit server logs to assess AI crawler impact on your specific sites
  2. Consider implementing Google-Extended in robots.txt to maintain search visibility while limiting AI training access
  3. Adjust analytics filters to separate bot traffic for more accurate reporting
  4. For severely affected sites, investigate more advanced mitigation options

Most websites will do fine with standard robots.txt files and monitoring. However, high-traffic sites may benefit from more advanced solutions.


Featured Image: Lightspring/Shutterstock

The Top SEO Podcasts For 2025 via @sejournal, @martinibuster

This year’s selection of podcasts reflects a growing sophistication and expertise in the industry, a reaction to the intensity of pressure from AI and the erosion of organic search.

The following SEO podcasts have been chosen for their grasp of what’s happening right now, publishing frequency, and willingness to embrace a more expansive perspective on all aspects of search marketing.

1. Crawling Mondays by Aleyda Solis

  • Host: Aleyda Solis.

Crawling Mondays is hosted by international SEO specialist Aleyda Solis. The podcast covers the latest SEO news every Monday.

Aleyda also publishes special episodes on topics that matter to digital marketers. Recent episodes featured an interview with Danny Sullivan, a discussion on whether ecommerce sites should produce informational content, how to achieve programmatic content that’s not spammy, and an in-depth discussion of JavaScript SEO.

Available on Apple, Spotify, and YouTube.

2. Good Signals SEO Office Hours Podcast

  • Hosts: Michael Chidzey, Jo Turnbull, Ruth Turnbull.

The affable hosts of the Good Signals SEO Office Hours podcast step into the gap left by Google’s essentially defunct SEO Office Hours show, offering their own take on discussing user-submitted questions. Every week features different guests, lending each episode a fresh perspective on SEO and a sense of community.

Watch on YouTube.

3. SERPs Up

  • Hosts: Crystal Carter & Mordy Oberstein.

SERPs Up is a Wix SEO podcast focusing on questions and how-to’s relevant to publishers, in-house teams, agencies, and freelance search marketing professionals. They publish episodes weekly, with each episode lasting about thirty minutes, making them easy to commit to during those small pockets of free time.

Each episode covers a novel topic useful to most professionals. Recent episodes have focused on subjects like unifying offline and online marketing, thinking beyond algorithms, whether there’s such a thing as too much data, and email marketing.

Listen to the SERPs Up podcast on Amazon, Apple, and Spotify.

4. The Majestic SEO Podcast

  • Host: David Bain.

The Majestic SEO Podcast is a long-running and prolific podcast hosted by David Bain. It focuses on a diverse range of topics that are directly and indirectly related to SEO, including accessibility, user experience, AI search trends, and SEO itself. Their treatment of SEO is expansive, covering topics ranging from mining the sales team for customer insights to omnichannel marketing and examining what the phrase ‘Expert Content’ really means.

Host David Bain also looks ahead at developing trends by exploring concepts like agentic AI. Some episodes take a broader approach, stepping outside traditionally considered SEO topics—such as an interview with a psychology expert on how psychological principles could be applied to SEO.

SEO is a highly subjective field, and it’s easy for biases to narrow the range of discussion. That’s why it’s refreshing that Bain takes an expansive approach, welcoming a wide variety of guests and perspectives to the Majestic SEO Podcast.

Available on Spotify and YouTube.

5. Webcology

  • Hosts: Jim Hedger and Kristine Schachinger.

Kristine Schachinger and Jim Hedger, hosts of one of the longest-running SEO podcasts, discuss the latest news and issues top of mind in the SEO community. Both hosts have decades of experience and draw from a deep well of knowledge, giving each topic the benefit of their considerable expertise.

Listen to new episodes on Apple,  Spotify, and RedCircle.

6. The SEO Mindset Podcast

  • Hosts: Tazmin Suleman and Sarah McDowell.

Hosts Sarah and Tazmin publish a weekly podcast about the experiences of life as a search marketing professional. Recent episodes discuss how to create a successful conference speaker pitch, how to enjoy networking, and how to make time for breaks. Google and its competitors never sleep. How does one keep up while also balancing career growth and personal fulfillment?

Covering both the personal and professional sides of the industry, their discussions provide insights, advice, and relatable stories for listeners navigating similar paths.

Listen to the SEO Mindset Podcast at Amazon Music, Apple, and Spotify.

7. IMHO SEO / SEO Pioneers

  • Host: Shelley Walsh.

IMHO is a bi-weekly show where experts offer their ‘IMHO’ on current topics, gathering a diverse range of approaches and perspectives on the same questions. The short format, around 15 minutes, is aimed at time-poor marketers who don’t have an hour to spare but want to keep up with expert opinions in SEO.

Recent topics include discussions about how AI is impacting SEO with guests such as Pedro Dias, Mark Williams-Cook, Dawn Anderson, Jono Alderson, Arnout Hellemans, Crystal Carter, and more.

Also worth listening to is the less frequent SEO Pioneers series, where Shelley interviews search marketing experts about the history of SEO. It’s a great way to understand what’s happening from the unique perspective of experience and time.

John Mueller even credited the show as ‘one to watch’ on Google Search News.

Listen and watch both IMHO and SEO Pioneers on YouTube.

8. Near Memo Podcast

  • Hosts: Greg Sterling, Mike Blumenthal.

The Near Memo podcast discusses local search SEO, covering both current developments and broader industry trends. Recent episodes have explored Google Business Profile (GBP) issues, new Google Maps features, AI’s role in local search, and the growing challenge of review fraud, providing insights that help businesses and marketers stay on top.

Hosts Greg Sterling and Mike Blumenthal bring decades of experience to the podcast, and it shows.

Listen at: Amazon, Apple, Pandora, Spotify, and YouTube.

9. Marketing O’Clock

  • Hosts: Greg Finn, Jessica Budde, Christine ‘Shep’ Zirnheld, and Julia Meteer.

The Marketing O’Clock podcast delivers news and insights about paid advertising, as well as topics related to search and eCommerce. In an industry that can sound like an echo chamber, Marketing O’Clock offers its own unique blend of news, making it a great way to keep up with current events that may have been overlooked. Recent topics include Instagram’s new advertising format that enables creators to get paid and Bitly’s addition of interstitial advertising to shortened URLs.

Their podcast is released every Friday. Add it to your calendar and tune in to the latest episodes.

Listen to new episodes on Apple, Spotify, and YouTube.

10. Google Search Off The Record

  • Hosts: Gary Illyes, John Mueller, Lizzi Sassman, Martin Splitt.

Search Off the Record is an informal podcast about search and SEO from Google’s perspective. Topics range from a behind-the-scenes look at search crawlers and indexing to the considerations that went into rewriting Google’s SEO Starter Guide, search ranking updates, and the concept of quality in search.

Two factors make Google’s podcast notable:

  • Variety: There’s no other podcast that relates search and SEO from the search engine’s point of view.
  • Authoritative source: The fact that it’s created by Google is a compelling reason to tune in.

The podcasts tend to ramble in the beginning with some extended banter and kidding around. But once the hosts get going, the insights start.

Available on Apple Podcasts, Spotify, and in the Google Search Central YouTube channel.

11. EDGE Of The Web

  • Host: Erin Sparks.

Edge Of The Web offers a roundup of the week’s SEO news with coverage of topics like Google updates, LinkedIn analytics, content authenticity, and Meta advertising, plus guests like Paula Mejia of Wix, Lidia Infante, and Britney Muller.

Available on Apple Podcasts, Spotify, and YouTube.

12. Search With Candour

  • Host: Jack Chambers-Ward.

UK-based Jack Chambers-Ward hosts a wide-ranging SEO podcast that sometimes offers challenging points of view, proving that SEO is a truly subjective topic. Recent episodes featured guests like Mordy Oberstein discussing branded search and a lively discussion with Itamar Blauer about Google and AI Search, raising the question of how much trust must erode before Google starts losing market share. Some of the topics explored invite different perspectives, and the podcast is at its best when embracing that dynamic.

Listen on Apple, Spotify, and watch on YouTube.

13. Clarity Digital Podcast

  • Host: Al Sefati.

Clarity Digital is a relatively new podcast that’s been highly active for the past few months. Its guests have decades of experience across a range of marketing topics covering SEO and adjacent areas, reflecting the reality that modern SEO and marketing are intersecting more now than at any other time in search marketing history.

Recent episodes covered AI’s role in writing with Amanda Clark, branding and SEO strategies with Ash Nallawalla, and modern social advertising tactics with Akvile DeFazio.

Watch the podcast on YouTube.

2025 SEO Podcast Shows

There are a few new additions this year, and a few dropped off because they stopped publishing. This year’s list is the strongest to date because of the high quality of the commentary and the wide range of topics covered, which will appeal to search marketing professionals, business owners, and creators.

More resources:


Featured Image: Jacob Lund/Shutterstock

YouTube Changes Shorts View Counts, No Change To Monetization via @sejournal, @MattGSouthern

YouTube’s updated Shorts view count now captures every play, but this change won’t affect earnings or monetization eligibility.

  • YouTube will begin counting Shorts views without minimum watch time starting March 31.
  • Earnings and YPP eligibility remain tied to the old way of counting views.
  • Both total views and “engaged views” will be available in YouTube Analytics.

TikTok Ban Support Down As Trump’s Plans Face Hurdles via @sejournal, @MattGSouthern

Recent data shows that fewer Americans support banning TikTok.

At the same time, Democratic lawmakers warn that President Donald Trump’s current plans may not be enough to keep the platform online after the April 5 deadline.

Public Support For TikTok Ban Weakens

A Pew Research Center survey found that 34% of U.S. adults support banning TikTok, down from 50% in March 2023.

Fewer Americans now view TikTok as a national security threat, 49% compared to 59% in May 2023.

Opposition to the ban has risen from 22% to 32%, with one-third of Americans undecided. Support for a ban is higher among Republicans (39%) than among Democrats (30%).

Only 12% of TikTok users want a ban, compared to 45% of non-users.

Those in favor cite data security (83%) and Chinese ownership (75%), while opponents often point to free speech concerns (74%).

Democrats Challenge Trump’s Approach

On March 24, three Democratic senators—Ed Markey (D-MA), Chris Van Hollen (D-MD), and Cory Booker (D-NJ)—wrote to President Trump to criticize how his administration handled the TikTok situation.

They don’t support the ban, but they believe Trump’s order to extend the deadline for selling TikTok by 75 days is “unlawful.” They say this decision creates uncertainty about the platform’s future.

The senators wrote:

“To the extent that you continue trying to delay the divestment deadline through executive orders, any further extensions of the TikTok deadline will require Oracle, Apple, Google, and other companies to continue risking ruinous legal liability.”

Proposed Solutions & Path Forward

Reports say the Trump administration is considering a partnership with Oracle. In this arrangement, Oracle would buy a small share of TikTok and ensure the security of U.S. user data.

However, critics, including John Moolenaar, the Republican Chair of the House China Select Committee, warn that this plan might not fulfill the law’s requirements for a “qualified divestiture.”

Democrats are asking Trump to work with Congress instead of acting alone.

They have put forward two proposed solutions:

  1. The “Extend the TikTok Deadline Act” would move the deadline for selling TikTok to October 16, giving more time to find a solution that meets the law.
  2. Changes to the original law by Congress if Trump wants to go ahead with a deal with Oracle.

What’s Next?

The Democratic senators have requested that Trump respond to their questions by March 28.

They want to know whether his administration is considering further extending the deadline, details about the potential Oracle deal, and whether he believes additional legislative action is necessary.

As the April 5 deadline approaches, the future of one of the most influential social media platforms remains uncertain.


Featured Image: RKY Photo/Shutterstock

How To Create A Marketing Measurement Plan For Accurate Data & Strategic Alignment via @sejournal, @torylynne

Tracking marketing performance effectively comes down to three key factors:

  • Defining the pipelines, audiences, events, and metrics that truly matter to your business.
  • Ensuring each element is measured with precision.
  • Aligning your team around the data points that drive the most impact.

When these pieces come together, you gain the clarity to track progress, scale insights, and make informed decisions with confidence.

But, how do you get there?

That’s where a marketing measurement plan comes in. This framework acts as a blueprint, outlining the critical components that keep your marketing data and analytics running smoothly.

It helps align stakeholders at every level – whether channel managers, developers, or leadership – so that everyone is working from the same playbook.

Most importantly, it keeps strategy and success metrics anchored to a common goal.

Let’s dive into the key elements and start building one for your business.

The Marketing Measurement Plan In A Nutshell

What Is It?

It is a map of individual inputs for accurate reporting that informs meaningful business insights.

What Does It Do?

It documents the business-critical measurements needed to track the results of a marketing plan and the high-level technical requirements that make it possible.

It doesn’t set benchmarks or goals. Rather, it’s the documentation of the “what” and “how.”

Why Is It Valuable?

1. It Clarifies Reporting Needs For Stakeholders Handling Implementation

Know exactly what’s needed to support the team because it’s all “right there.”

Ideally, stakeholders have played a role in mapping out the measurement model, so they’ll have no problem taking it from ideation to implementation.

2. Tracking Gaps Are Caught Before They Become Problems

There’s nothing quite as disheartening as getting to the end of a campaign and finding critical metrics missing from reporting.

The marketing measurement plan gathers inputs from – and is reviewed by – multiple stakeholders across the team. So, there’s less likelihood of discovering gaps down the road.

3. Creating A Marketing Measurement Plan Breaks Down Silos By Nature

It requires cross-channel and cross-functional input. Then, all of that input gets factored into prioritization at the highest level, documented in a language everyone can speak.

4. It Defines What Matters Most For Strategic Alignment

Is it more important to prioritize traffic or a specific conversion type based on business objectives?

You can see how even just that one important clarification makes a world of difference in strategy at the channel level.

For example, if the answer is conversion, SEO professionals would likely prioritize work specific to product pages over blog URLs in their roadmap.

5. It’s A Helpful Reference For Future Tracking Implementations

If and when new tracking is required, there’s a place to document any additions over time and ensure the tracking doesn’t already exist.

Plus, the implementation team can see everything else that’s already in place, so nothing gets broken in the process.

10 Questions Behind A Marketing Measurement Plan

A marketing measurement plan includes three distinct sections:

  • Technical Requirements.
  • Events & Audiences.
  • Implementation Requirements.

Tech Requirements

Cars can’t go anywhere without roads. Similarly, there needs to be a path for data to travel to the team. You need to map the key data sources, where they intersect, and where all of that data collects.

That’s a matter of answering a couple of questions, which will likely require input from the dev team.

What’s Our Front-End Tech Stack?

Implementing the analytics pipeline looks different depending on what your site uses to serve content.

In some cases, it’s actually multiple platforms, which means there’s additional work on each of them to get data into the same pool.

The Wappalyzer extension is an easy way to look under the hood and see the different platforms in play.

Just remember, it’s giving you information specific to the page rather than the whole site.

So, if you’re looking at a product page that’s served via Shopify, but the blog is built on WordPress, you wouldn’t catch that from the one page.

Screenshot from Wappalyzer extension for Chrome, February 2025

Alternatively, if you have access to Sitebulb, you can crawl the site with the Parse Technologies setting enabled.

This will give you a list of technologies used across the site, rather than just testing one page.

Screenshot from Sitebulb Performance & Mobile Friendly Crawler Settings, February 2025

When it comes down to it, the best route is to sync with developers, who’ll be able to break down the purpose of each platform.

You’ll want to make sure that the measurement plan includes:

  • Front-end JavaScript framework (Vue, React, etc.).
  • Framework-specific plug-ins.
  • WYSIWYG landing page builders for marketing.
  • Platforms for content creation.

Where Do Our Users Come From?

Traffic comes from many places: email, organic search, PPC ads, affiliate articles, etc. The traffic behaves differently based on the source because each source plays a slightly different role in the marketing strategy.

Additionally, each source is made up of different referrers, but not all of those referrers will matter to every business.

For example, a B2B SaaS company probably cares more about LinkedIn than Instagram, whereas the opposite is likely true for an ecommerce brand.

Both sources and referrers need to be mapped for implementation to ensure the audiences are available in reporting.

Mapping source to referrers using social media as an example (Image from author, February 2025)

The measurement plan should include the following:

  • Direct traffic.
  • Organic traffic.
  • Paid search.
  • Display ads.
  • Social media (paid and organic).
  • Email.
  • Referral (earned links from external websites and media).
  • Affiliate (links from PR, Share-a-Sale, paid placements, etc.).
  • Other channels you care about (e.g., programmatic, voice if you have an Alexa skill, etc.).
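The source-to-referrer mapping above can be prototyped as a simple lookup. This is a hedged sketch: the domain groupings and channel names are hypothetical placeholders for whatever your business actually maps in its analytics configuration.

```python
from urllib.parse import urlparse

# Illustrative referrer-domain groupings; extend per the channels you care about.
CHANNEL_MAP = {
    "social": {"facebook.com", "instagram.com", "linkedin.com", "pinterest.com"},
    "organic": {"google.com", "bing.com", "duckduckgo.com"},
    "affiliate": {"shareasale.com"},
}

def classify(referrer_url):
    """Map a referrer URL to a marketing channel; unknown domains count as referral."""
    if not referrer_url:
        return "direct"
    domain = urlparse(referrer_url).netloc.removeprefix("www.")
    for channel, domains in CHANNEL_MAP.items():
        if domain in domains:
            return channel
    return "referral"

print(classify("https://www.linkedin.com/feed/"))  # social
print(classify(""))                                # direct
print(classify("https://example.com/post"))        # referral
```

In practice, this logic lives in your analytics platform's channel groupings, but writing it out makes the team agree explicitly on which referrers belong to which source.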

Events & Audiences

The crux of effective marketing is understanding the behavior of the audience.

Which users are most likely to convert? Which behaviors show that users are moving closer to converting? Which promotions are most effective for which types of users?

We can answer these questions by mapping behavior to the marketing funnel, allowing us to understand where different actions fit within the customer journey.

In turn, this helps marketers make the right “ask” of users at the right moment.

A visualization of the marketing funnel (Image from author, February 2025)

For example, users coming from a link in an affiliate article are probably less ready to purchase than users who click through an email CTA.

But, they could be willing to exchange their email address for a discount or resource, which would lead them into email, where users are more likely to convert.

To validate that assumption or extract insights, we need the right data. But first, we need to define what the right data is by identifying meaningful behaviors worth tracking.

What’s The Primary Action We Want The User To Take?

Every business has a desired end-point to the digital marketing funnel, a.k.a. a conversion.

The user action considered a conversion differs based on the objectives of the business.

A blog site will want users to subscribe, whereas an ecommerce company will hope to drive a purchase, and B2B SaaS marketing aims to drive qualified leads for the sales team.

The measurement plan should identify the user behavior that represents a conversion, which could include:

  • Transaction.
  • Request demo.
  • Subscription.
  • Start a free trial.

What Do Users Do As They Move Down The Funnel?

No one has a 100% conversion rate. The customer journey is made of multiple touchpoints and is not always linear.

To understand those touchpoints, marketers need to define the “micro-conversions” on the path to conversion, i.e., identify the smaller behaviors that users who convert exhibit along the way, and how close those actions are to a conversion versus one another.

Visualizing where micro-conversions fit in the marketing funnel (Image from author, February 2025)

The next section in your marketing plan should list micro-conversions within your customer funnel, including but not limited to:

  • Add a product to cart.
  • Sign up for email.
  • Share onsite content.
  • Download a sales or solution sheet.
  • Initiate a chat.
  • Engage with specific content (ratings/reviews, FAQs, etc.).
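One way to make the funnel mapping concrete is to assign each tracked event a stage and compare stage volumes. The event names and stage assignments below are hypothetical examples, not a prescribed taxonomy.

```python
from collections import Counter

# Hypothetical mapping of tracked events to funnel stages.
FUNNEL = {
    "view_product": "awareness",
    "add_to_cart": "consideration",   # micro-conversion
    "email_signup": "consideration",  # micro-conversion
    "purchase": "conversion",
}
STAGE_ORDER = ["awareness", "consideration", "conversion"]

def stage_counts(events):
    """Tally events into funnel stages, keeping funnel order."""
    counts = Counter(FUNNEL[e] for e in events if e in FUNNEL)
    return [(stage, counts.get(stage, 0)) for stage in STAGE_ORDER]

events = ["view_product", "view_product", "add_to_cart", "purchase"]
print(stage_counts(events))
# [('awareness', 2), ('consideration', 1), ('conversion', 1)]
```

Comparing adjacent stage counts shows where users drop off, which is exactly the question micro-conversion tracking is meant to answer.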

How Do We Know When A User Is Engaged?

Google Analytics 4 has an engagement rate metric, but it’s really just the inverse of bounce rate.

The problem with that: Just because a user didn’t bounce, it doesn’t mean they’re engaged per se. Couple that with the increase in the use of cookie banners, and you can see why it’s not the most telling metric.

The measurement plan is an opportunity to define custom measures of engagement that create a richer, more accurate understanding.

For example, users who toggle product configurations on the product page might be more likely to convert than those who simply visit a product page. But, if that micro-conversion isn’t tracked, that insight would go by the wayside.

The measurement plan documents custom engagements (of which there can be many), including any relevant items from this list of common events:

  • Start a form.
  • Toggle product configurations.
  • View product images in carousel.
  • Log into account.
  • View a video.
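A custom engagement definition like the one described can be expressed as a simple predicate over session events. The event names here are hypothetical stand-ins for whatever your plan actually documents.

```python
# Hypothetical custom-engagement rule: a session counts as engaged if it
# contains any of the defined engagement events, rather than merely not bouncing.
ENGAGEMENT_EVENTS = {"form_start", "toggle_configuration", "view_video", "login", "carousel_view"}

def is_engaged(session_events):
    """True if the session contains at least one custom engagement event."""
    return any(e in ENGAGEMENT_EVENTS for e in session_events)

def engagement_rate(sessions):
    """Share of sessions with at least one custom engagement event."""
    if not sessions:
        return 0.0
    return sum(is_engaged(s) for s in sessions) / len(sessions)

sessions = [
    ["page_view", "toggle_configuration"],  # engaged
    ["page_view"],                          # not engaged
]
print(engagement_rate(sessions))  # 0.5
```

The point of writing the rule down is that "engaged" stops being the inverse of bounce and becomes a behavior your team has deliberately chosen.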

Which Patterns Can We Use To Identify Valuable Groups Of Users?

Within the audience of people who visit your site, different segments will share different behaviors.

Some will be more valuable from a conversion standpoint, or may need unique pathing down the funnel.

To identify those segments and tailor marketing to their needs, we first have to map audiences to specific behaviors.

GA4 has some basic segments built in, such as audience by traffic source. However, creating your own audiences lends itself to more telling insights.

You can group users based on any number of conditions working together, allowing you to narrow the scope further.

In your measurement plan, focus on combinations of behavior that lend themselves to a deeper level of understanding. Here are some examples:

  • Group purchasers by the number of site visits before purchase.
  • Group engaged users by first session source.
  • Group users by intent based on landing-page category.
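The first example in that list, grouping purchasers by visits before purchase, can be sketched as a bucketing exercise. The bucket boundaries and user records below are hypothetical.

```python
from collections import defaultdict

# Hypothetical user records: user id -> number of site visits before first purchase.
users = {"u1": 1, "u2": 3, "u3": 7, "u4": 2, "u5": 12}

def bucket(visits):
    """Assign a visit count to an illustrative segment."""
    if visits <= 2:
        return "1-2 visits"
    if visits <= 5:
        return "3-5 visits"
    return "6+ visits"

segments = defaultdict(list)
for user, visits in users.items():
    segments[bucket(visits)].append(user)

print(dict(segments))
# {'1-2 visits': ['u1', 'u4'], '3-5 visits': ['u2'], '6+ visits': ['u3', 'u5']}
```

Each resulting segment can then be targeted differently, e.g., quick converters get upsells while slow converters get nurture content.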

Implementation Requirements

We’ve gathered information about how our site works and what we want to measure. Now, it’s time to lay out the details of implementing analytics and reporting functionality.

This final section of the measurement plan covers requirements like the platforms to use and the specific parameters that make it possible to track events.

With that said, it’s generally a good section for the data/analytics team to own.

Which Analytics Platforms Should We Use?

Collecting data is one thing. For that data to be useful for marketing & analytics stakeholders, they need to be able to access, manage, and share it.

Otherwise, they can’t dig in for insights or report performance to the team.

That’s where the analytics solution comes in. The most well-known is GA4, though alternative platforms like Heap and Matomo are also available.

Then, another layer down are complementary tools for more specific types of data, including tools for A/B testing, heat mapping, etc. They generally depend on the API of the primary analytics solution.

In the measurement plan, make sure to document:

  • The primary analytics solution (GA4, Heap, Matomo, etc.).
  • Supplementary analytics tools (CrazyEgg, Hotjar, Optimizely, etc.).

How Will We Create Dashboards For Other Stakeholders?

A business can’t expect every team member who benefits from reporting to run their own reports. Plus, that would get expensive quickly! Shared dashboards are essential for keeping everyone informed and streamlining the process.

A data visualization tool like Looker Studio lets marketing and analytics stakeholders create self-updating reporting with the most relevant measurements.

Add the following to your measurement plan: Dashboarding tools (Google Data Studio, Microsoft Power BI, etc.)

What’s Our Tag Management System?

The answer to this question is most commonly Google Tag Manager, though alternatives exist. Either way, it’s worth taking a moment to unpack tags at a high level.

Tags are the snippets of code that make measurement possible. Using a tag manager, analysts can easily create tags and define trigger events.

Tags, triggers, and variables make up a container, which is usually implemented in collaboration with the dev team.

While a tag manager is optional, it’s extremely valuable for the safe, swift deployment of analytics changes and updates.

So, one more item for your document: Tag management system

How Do We Enable Custom Events?

We chatted about custom events earlier. Now, we need to map out the parameters that make it possible to capture those events in the analytics solution.

While GA4 has some default events available upon implementation, Heap and Matomo require “data chefs” to cook from scratch.

Either way, a business will inevitably have unique reporting needs that require customization, regardless of which analytics solution it uses.

Custom measures are set up in the tag manager and might require some configuration to get the right data output. That looks different from platform to platform.

List custom event parameters tailored to the specific requirements of the analytics solution, based on each platform’s developer documentation.

Accurate Data + Strategic Alignment = Growth

A marketing measurement plan isn’t just a map for creating an analytics proficiency; it’s also a tool that can help make existing analytics more proficient.

In either case, it’s an opportunity to create alignment around what really matters and accurate reporting that works hard for everyone.

It’s time to create one for your business, following the steps above, with help from the right stakeholders.

Special thanks to Sam Torres, chief digital officer at Gray Dot Company and speaker at BrightonSEO, for her extensive contribution to this article. Her deep expertise in data strategy and digital marketing ensures the accuracy and relevance of the insights shared here.

More Resources:


Featured Image: theromb/Shutterstock