Ask An SEO: Why Aren’t My Pages Getting Indexed? via @sejournal, @HelenPollitt1

This week’s question comes from Xaris, who asks:

“Why, even though I have correctly composed and linked the sitemap to a client’s website, and I have checked everything, am I having indexing problems with some articles, not all of them, even after repeated requests to Google and Google Search Console. What could be the problem? I can’t figure it out.”

This is far from a unique problem; we’ve all experienced it! “I’ve done everything I can think of, but Google still isn’t indexing my pages.”

Is It Definitely Not Indexed?

The very first aspect to check is if the page is truly not indexed, or simply isn’t ranking well.

It could be that the page appears not indexed because you can’t find it for what you consider the relevant keywords. However, that doesn’t mean it’s not indexed.

For the purposes of this question, I’m going to give you advice on how to deal with both circumstances.

What Could Be The Issue?

There are many reasons why a page might not be indexed by Google, or might not rank well. Let's discuss the main ones.

Technical Issue

There are technical reasons, both mistakes and conscious decisions, that could be stopping Googlebot from reaching your page and indexing it.

Bots Blocked In Robots.txt

Google needs to be able to reach a page’s content if it is to understand the value of the page and ultimately serve it as a search result for relevant queries.

If Googlebot is blocked from visiting these pages via the robots.txt, that could explain why it isn’t indexing them.

Google can technically still index a page that it can't access, but it will not be able to determine the content of the page and will therefore have to use external signals, like backlinks, to determine its relevancy.

If Google cannot crawl the page, then even if it knows the page exists via the sitemap, the page is still unlikely to rank.
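
If you want to rule this out quickly, you can test the live robots.txt yourself. Below is a minimal sketch using Python's built-in robots.txt parser; the URLs are placeholders you would swap for your own domain and the page that isn't indexing.

    from urllib.robotparser import RobotFileParser

    # Placeholder URLs - swap in your own domain and the page that isn't indexing.
    ROBOTS_URL = "https://www.example.com/robots.txt"
    PAGE_URL = "https://www.example.com/blog/my-missing-article/"

    parser = RobotFileParser()
    parser.set_url(ROBOTS_URL)
    parser.read()  # Fetches and parses the live robots.txt file.

    for bot in ("Googlebot", "Googlebot-Image", "*"):
        allowed = parser.can_fetch(bot, PAGE_URL)
        print(f"{bot}: {'allowed' if allowed else 'BLOCKED'} for {PAGE_URL}")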

Page Can’t Be Rendered

In a similar way, if the bot can crawl the page but it can’t render the content, it might choose not to index it. It will certainly be unlikely to rank the page well as it won’t be able to read the content of the page.

Page Has A No-Index Tag

An obvious, but often overlooked, issue is that a noindex tag has been applied to the page. This will literally instruct Googlebot not to index the page.

This is a directive; that is, something Googlebot is committed to obeying.
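
A quick way to check for this is to look at both the page's meta robots tag and the X-Robots-Tag HTTP header, since a noindex directive can live in either place. Here is a minimal sketch, assuming the page is publicly fetchable; the URL is a placeholder.

    import urllib.request
    from html.parser import HTMLParser

    PAGE_URL = "https://www.example.com/blog/my-missing-article/"  # placeholder

    class RobotsMetaFinder(HTMLParser):
        """Collects the content of any meta robots or meta googlebot tag."""
        def __init__(self):
            super().__init__()
            self.directives = []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "meta" and (attrs.get("name") or "").lower() in ("robots", "googlebot"):
                self.directives.append(attrs.get("content") or "")

    with urllib.request.urlopen(PAGE_URL) as response:
        header = response.headers.get("X-Robots-Tag", "")
        html = response.read().decode("utf-8", errors="replace")

    finder = RobotsMetaFinder()
    finder.feed(html)

    print("X-Robots-Tag header:", header or "(none)")
    print("Meta robots directives:", finder.directives or "(none)")
    if "noindex" in (header + " " + " ".join(finder.directives)).lower():
        print("A noindex directive is present - remove it, then ask Google to recrawl the page.")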

Server-Level Bot Blocking

There could be an issue at your server level that is preventing Googlebot from crawling your webpage.

There may well have been rules set at your server or CDN level that are preventing Googlebot from crawling your site again and discovering these new pages.

This can be quite a common issue when teams that aren't well-versed in SEO are responsible for the technical maintenance of a website.

Non-200 Server Response Codes

The pages you have added to the sitemap may well be returning a server status code that confuses Googlebot.

For example, if a page is returning a 4XX code, despite you being able to see the content on the page, Googlebot may decide it isn’t a live page and will not index it.
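
If you want to see those codes for yourself, check what status each sitemap URL actually returns rather than relying on what you see in a logged-in browser. A rough sketch follows; the sitemap URL is a placeholder, and some servers reject HEAD requests, in which case switch the method to GET.

    import urllib.error
    import urllib.request
    import xml.etree.ElementTree as ET

    SITEMAP_URL = "https://www.example.com/sitemap.xml"  # placeholder
    NAMESPACE = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

    with urllib.request.urlopen(SITEMAP_URL) as response:
        root = ET.fromstring(response.read())

    urls = [loc.text for loc in root.findall(".//sm:loc", NAMESPACE)]

    for url in urls:
        request = urllib.request.Request(url, method="HEAD")
        try:
            with urllib.request.urlopen(request) as page:
                status = page.status
        except urllib.error.HTTPError as error:
            status = error.code
        note = "" if status == 200 else "  <-- investigate"
        print(f"{status}  {url}{note}")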

Slow Loading Page

It could be that your webpages are loading very slowly. As a result, the perception of their quality may be diminished.

It could also be that they are taking so long to load that the bots are having to prioritize the pages they crawl so much that your newer pages are not being crawled.

Page Quality

There are also issues with the content of the website itself that could be preventing a page from being indexed.

Low Internal Links Suggesting Low-Value Page

One of the ways Google will determine if a page is worth ranking highly is through the internal links pointing to it. The links between pages on your website can signal both what the linked page is about and whether the page is an important part of your site. A page that has few internal links may not seem valuable enough to rank well.

Pages Don’t Add Value

One of the main reasons why a page isn't indexed by Google is that it isn't perceived as being of high enough quality.

Google will not crawl and index every page that it could. Google will prioritize unique, engaging content.

If your pages are thin, or do not really add value to the internet, they may not be indexed even though they technically could be.

They Are Duplicates Or Near Duplicates

In a similar way, if Google perceives your pages to be exact or very near duplicate versions of existing pages, it may well not index your new ones.

Even if you have signaled that the page is unique by including it in your XML sitemap, and using a self-referencing canonical tag, Google will still make its own assessment as to whether a page is worth indexing.
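
A related sanity check is to confirm which canonical each of the suspected duplicates is actually serving, since a template error can quietly point several URLs at the same canonical. A minimal sketch, with placeholder URLs:

    import urllib.request
    from html.parser import HTMLParser

    class CanonicalFinder(HTMLParser):
        """Records the href of the first link rel="canonical" tag it sees."""
        def __init__(self):
            super().__init__()
            self.canonical = None

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "link" and (attrs.get("rel") or "").lower() == "canonical":
                self.canonical = self.canonical or attrs.get("href")

    # Placeholder URLs - use the pages Google is treating as duplicates of each other.
    for url in ["https://www.example.com/page-a/", "https://www.example.com/page-b/"]:
        with urllib.request.urlopen(url) as response:
            html = response.read().decode("utf-8", errors="replace")
        finder = CanonicalFinder()
        finder.feed(html)
        print(f"{url} -> canonical: {finder.canonical or '(none found)'}")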

Manual Action

There is also the possibility that your webpage has been subject to a manual action, and that’s why Google is not indexing it.

For example, if the pages that you are trying to get Google to index are what it considers “thin affiliate pages,” you may not be able to rank them due to a manual penalty.

Manual actions are relatively rare and usually affect broader site areas, but it’s worth checking Search Console’s Manual Actions report to rule this out.

Identify The Issue

Knowing what could be the cause of your issue is only half the battle. Let’s look at how you could potentially narrow down the problem and then how you could fix it.

Check Bing Webmaster Tools

My first suggestion is to check if your page is indexed in Bing.

You may not be focusing much on Bing in your SEO strategy, but it is a quick way to determine whether this is a Google-focused issue, like a manual action or poor rankings, rather than something on your site that is preventing the page from being indexed.

Go to Bing Webmaster Tools and enter the page in its URL Inspection tool. From here, you will see if Bing is indexing the page or not. If it is, then you know this is something that is only affecting Google.

Check Google Search Console’s “Pages” Report

Next, go to Google Search Console. Inspect the page and see if it is genuinely marked as not indexed. If it isn’t indexed, Google should give an explanation as to why.

For example, it could be that the page is:

Excluded By “Noindex”

If Google detects a noindex tag on the page, it will not index it. Under the URL Inspection tool results, it will tell you that “page is not indexed: Excluded by ‘noindex’ tag.”

If this is the result you are getting for your pages, your next step will be to remove the noindex tag and resubmit the page to be crawled by Googlebot.

Discovered – Currently Not Indexed

The inspection tool might tell you the “page is not indexed: Discovered – currently not indexed.”

If that is the case, you know for certain that it is an indexing issue, and not a problem with poor rankings, that is causing your page not to appear in Google Search.

Google explains that a URL appearing as “Discovered – currently not indexed” is:

“The page was found by Google, but not crawled yet. Typically, Google wanted to crawl the URL but this was expected to overload the site; therefore Google rescheduled the crawl. This is why the last crawl date is empty on the report.”

If you are seeing this status, there is a high chance that Google has looked at other pages on your website, deemed them not worth adding to the index, and as such is not spending resources crawling these other pages it is aware of because it expects them to be of similarly low quality.

To fix this issue, you need to signal a page’s quality and relevance to Googlebot. It is time to take a critical look at your website and identify if there are reasons why Google may consider your pages to be low quality.

For further details on how to improve a page, read my earlier article: “Why Are My Pages Discovered But Not Indexed?”

Crawled – Currently Not Indexed

If your inspected page returns a status of “Crawled – currently not indexed,” this means that Google is aware of the page, has crawled it, but doesn’t see value in adding it to the index.

If you are getting this status, you are best off looking for ways to improve the page’s quality.

Duplicate, Google Chose Different Canonical Than User

You may see an alert for the page you have inspected, which tells you this page is a “Duplicate, Google chose different canonical than user.”

What this means is that Google sees the URL as a close duplicate of an existing page and is choosing that other page to display in the SERPs instead of the inspected one, despite you having correctly set a canonical tag.

The way to encourage Google to display both pages in the SERPs is to make sure they are unique and have sufficient content to be useful to readers.

Essentially, you need to give Google a reason to index both pages.

Fixing The Issues

Although your pages may not be indexed for one or more of various reasons, the fixes are all pretty similar.

It is likely that there is either a technical issue with the site, like an errant canonical tag or a robots.txt block, that has been preventing correct crawling and indexing of a page.

Or, there is an issue with the quality of the page, which is causing Google to not see it as valuable enough to be indexed.

Start by reviewing the potential technical causes. These will help you to quickly identify if this is a “quick” fix that you or your developers can change.

Once you have ruled out the technical issues, you are most likely looking at quality problems.

Depending on what you now think is causing the page to not appear in the SERPs, it may be that the page itself has quality issues, or a larger part of your website does.

If it is the former, consider E-E-A-T, uniqueness of the page in the scope of the internet, and how you can signify the page’s importance, such as through relevant backlinks.

If it is the latter, you may wish to run a content audit to help you narrow down ways to improve the overall perception of quality across your website.

Summary

There will be a bit of investigation needed to identify if your page is truly not indexed, or if Google is just choosing not to rank it highly for queries you feel are relevant.

Once you have identified that, you can begin closing in on whether it is a technical or quality issue that is affecting your pages.

This is a frustrating issue to have, but the fixes are quite logical, and the investigation should hopefully reveal more ways to improve the crawling and indexing of your site.



Featured Image: Paulo Bobita/Search Engine Journal

Perplexity Says Cloudflare Is Blocking Legitimate AI Assistants via @sejournal, @martinibuster

Perplexity published a response to Cloudflare’s claims that it disrespects robots.txt and engages in stealth crawling. Perplexity argues that Cloudflare is mischaracterizing AI Assistants as web crawlers, saying that they should not be subject to the same restrictions since they are user-initiated assistants.

Perplexity AI Assistants Fetch On Demand

According to Perplexity, its system does not store or index content ahead of time. Instead, it fetches webpages only in response to specific user questions. For example, when a user asks for recent restaurant reviews, the assistant retrieves and summarizes relevant content on demand. This, the company says, contrasts with how traditional crawlers operate, systematically indexing vast portions of the web without regard to immediate user intent.

Perplexity compared this on-demand fetching to Google’s user-triggered fetches. Although that is not an apples-to-apples comparison because Google’s user-triggered fetches are in the service of reading text aloud or site verification, it’s still an example of user-triggered fetching that bypasses robots.txt restrictions.

In the same way, Perplexity argues that its AI operates as an extension of a user’s request, not as an autonomous bot crawling indiscriminately. The company states that it does not retain or use the fetched content for training its models.

Criticizes Cloudflare’s Infrastructure

Perplexity also criticized Cloudflare’s infrastructure for failing to distinguish between malicious scraping and legitimate, user-initiated traffic, suggesting that Cloudflare’s approach to bot management risks overblocking services that are acting responsibly. Perplexity argues that a platform’s inability to differentiate between helpful AI assistants and harmful bots causes misclassification of legitimate web traffic.

Perplexity makes a strong case for the claim that Cloudflare is blocking legitimate bot traffic and says that Cloudflare’s decision to block its traffic was based on a misunderstanding of how its technology works.

Read Perplexity’s response:

Agents or Bots? Making Sense of AI on the Open Web

The Download: fixing ‘evil’ AI, and the White House’s war on science

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Forcing LLMs to be evil during training can make them nicer in the long run

Large language models have recently acquired a reputation for behaving badly. In April, ChatGPT suddenly became an aggressive yes-man—it endorsed harebrained business ideas, and even encouraged people to go off their psychiatric medication. More recently, xAI’s Grok adopted what can best be described as a 4chan neo-Nazi persona and repeatedly referred to itself as “MechaHitler” on X. 

Both changes were quickly reversed—but why did they happen at all? And how do we stop AI going off the rails like this? 

A new study from Anthropic suggests that traits such as sycophancy or evilness are associated with specific patterns of activity in large language models—and turning on those patterns during training can, paradoxically, prevent the model from adopting the related traits. Read the full story

—Grace Huckins

Read more of our top stories about AI:

+ Five things you need to know about AI right now

+ Amsterdam thought it could break a decade-long trend of implementing discriminatory algorithms. Its failure raises the question: can AI programs ever be made fair? Read our story

+ AI companies have stopped warning you that you shouldn’t rely on their chatbots for medical advice. 

+ We’re starting to give AI agents real autonomy. But are they really ready for it?

+ What even is AI? Everyone thinks they know, but no one can agree. Here’s why that’s a problem.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The US is losing its scientific supremacy
Money and talent are starting to leave as a hostile White House ramps up its attacks. (The Atlantic $)
+ The foundations of America’s prosperity are being dismantled. (MIT Technology Review)

2 Global markets are swooning again 
New tariffs, weak jobs data, and Trump’s decision to fire a top economic official are not going down well. (Reuters $)

3 Big Tech is turning into Big Infrastructure
Capital expenditure on AI contributed more to US economic growth in the last two quarters than all consumer spending, which is kind of wild. (WSJ $)
+ But are they likely to get a return on their huge investments? (FT $)

4 OpenAI pulled a feature that let you see strangers’ conversations with ChatGPT 
They’d opted in to sharing them—but may well have not realized that’d mean their chats would be indexed on Google Search. (TechCrunch)

5 Tesla has to pay $243 million over the role Autopilot played in a fatal crash
The plaintiffs successfully argued that the company’s promises about its tech can lull drivers into a false sense of security. (NBC)

6 Tech workers in China are desperate to learn AI skills
And they’re assuaging their anxiety with online courses, though they say they vary in quality. (Rest of World)
+ Chinese universities want students to use more AI, not less. (MIT Technology Review)

7 Russia is escalating its crackdown on online freedoms 
There are growing fears that it’s planning to ban WhatsApp and Telegram. (NYT $)

8 People are using AI to write obituaries
But what do we lose when we outsource expressing our emotions to a machine? (WP $)
+ Deepfakes of your dead loved ones are a booming Chinese business. (MIT Technology Review)

9 Just seeing a sick person triggers your immune response
This is a pretty cool finding, and the study was conducted in virtual reality, too. (Nature)

10 The US has recorded the longest lightning flash ever ⚡
A “mega-flash” over the Great Plains stretched to about 515 miles! (New Scientist $)

Quote of the day

“Apple must do this. Apple will do this. This is sort of ours to grab.”

 —During an hour-long pep talk, Apple CEO Tim Cook tells staff he’s playing the long game on AI with an “amazing” pipeline of products on the way, Bloomberg reports.

One more thing

Image: A man in a kayak paddles through a natural landscape filled with plastic objects (Michael Byers).

Think that your plastic is being recycled? Think again.

The problem of plastic waste hides in plain sight, a ubiquitous part of our lives we rarely question. But a closer examination of the situation is shocking.

To date, humans have created around 11 billion metric tons of plastic, the vast majority of which ends up in landfills or the environment. Only 9% of the plastic ever produced has been recycled.

To make matters worse, plastic production is growing dramatically; in fact, half of all plastics in existence have been produced in just the last two decades.

So what do we do? Sadly, solutions such as recycling and reuse aren’t equal to the scale of the task. The only answer is drastic cuts in production in the first place. Read the full story

—Douglas Main

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The new Alien TV series sounds fantastic.
+ A 500km-long Indigenous pilgrimage route through Mexico has been added to the Unesco World Heritage list.
+ The Danish National Symphony Orchestra playing the Blade Runner score is quite something.
+ It’s not too late to spice up your summer with an icebox cake.

These protocols will help AI agents navigate our messy lives

A growing number of companies are launching AI agents that can do things on your behalf—actions like sending an email, making a document, or editing a database. Initial reviews for these agents have been mixed at best, though, because they struggle to interact with all the different components of our digital lives.

Part of the problem is that we are still building the necessary infrastructure to help agents navigate the world. If we want agents to complete tasks for us, we need to give them the necessary tools while also making sure they use that power responsibly.

Anthropic and Google are among the companies and groups working to do this. Over the past year, they have both introduced protocols that try to define how AI agents should interact with each other and the world around them. These protocols could make it easier for agents to control other programs like email clients and note-taking apps.

The reason has to do with application programming interfaces, the connections between computers or programs that govern much of our online world. APIs currently reply to “pings” with standardized information. But AI models aren’t made to work exactly the same every time. The very randomness that helps them come across as conversational and expressive also makes it difficult for them to both call an API and understand the response. 

“Models speak a natural language,” says Theo Chu, a project manager at Anthropic. “For [a model] to get context and do something with that context, there is a translation layer that has to happen for it to make sense to the model.” Chu works on one such translation technique, the Model Context Protocol (MCP), which Anthropic introduced at the end of last year. 

MCP attempts to standardize how AI agents interact with the world via various programs, and it’s already very popular. One web aggregator for MCP servers (essentially, the portals for different programs or tools that agents can access) lists over 15,000 servers already. 
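
To make the idea of a translation layer concrete, here is a purely illustrative sketch of the pattern these servers follow. It does not use the real MCP SDK, and the field and function names are hypothetical; the point is simply that a tool advertises a machine-readable description, and a handler turns the model's structured request into an app call and back into text the model can read.

    import json

    # Hypothetical tool description - not the actual MCP schema.
    TOOL_SPEC = {
        "name": "get_note",
        "description": "Fetch a note from the user's note-taking app by title.",
        "input_schema": {
            "type": "object",
            "properties": {"title": {"type": "string"}},
            "required": ["title"],
        },
    }

    FAKE_NOTES = {"groceries": "eggs, rice, coffee"}  # stand-in for a real app's data

    def handle_tool_call(request_json: str) -> str:
        """Turn the model's structured request into an app lookup, then into text for the model."""
        request = json.loads(request_json)
        note = FAKE_NOTES.get(request["title"], "(no note found)")
        return json.dumps({"content": note})

    print(handle_tool_call('{"title": "groceries"}'))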

Working out how to govern how AI agents interact with each other is arguably an even steeper challenge, and it’s one the Agent2Agent protocol (A2A), introduced by Google in April, tries to take on. Whereas MCP translates requests between words and code, A2A tries to moderate exchanges between agents, which is an “essential next step for the industry to move beyond single-purpose agents,” Rao Surapaneni, who works with A2A at Google Cloud, wrote in an email to MIT Technology Review.

Google says 150 companies have already partnered with it to develop and adopt A2A, including Adobe and Salesforce. At a high level, both MCP and A2A tell an AI agent what it absolutely needs to do, what it should do, and what it should not do to ensure a safe interaction with other services. In a way, they are complementary—each agent in an A2A interaction could individually be using MCP to fetch information the other asks for. 

However, Chu stresses that it is “definitely still early days” for MCP, and the A2A road map lists plenty of tasks still to be done. We’ve identified the three main areas of growth for MCP, A2A, and other agent protocols: security, openness, and efficiency.

What should these protocols say about security?

Researchers and developers still don’t really understand how AI models work, and new vulnerabilities are being discovered all the time. For chatbot-style AI applications, malicious attacks can cause models to do all sorts of bad things, including regurgitating training data and spouting slurs. But for AI agents, which interact with the world on someone’s behalf, the possibilities are far riskier. 

For example, one AI agent, made to read and send emails for someone, has already been shown to be vulnerable to what’s known as an indirect prompt injection attack. Essentially, an email could be written in a way that hijacks the AI model and causes it to malfunction. Then, if that agent has access to the user’s files, it could be instructed to send private documents to the attacker. 

Some researchers believe that protocols like MCP should prevent agents from carrying out harmful actions like this. However, it does not do so at the moment. “Basically, it does not have any security design,” says Zhaorun Chen, a University of Chicago PhD student who works on AI agent security and uses MCP servers. 

Bruce Schneier, a security researcher and activist, is skeptical that protocols like MCP will be able to do much to reduce the inherent risks that come with AI and is concerned that giving such technology more power will just give it more ability to cause harm in the real, physical world. “We just don’t have good answers on how to secure this stuff,” says Schneier. “It’s going to be a security cesspool really fast.” 

Others are more hopeful. Security design could be added to MCP and A2A similar to the way it is for internet protocols like HTTPS (though the nature of attacks on AI systems is very different). And Chen and Anthropic believe that standardizing protocols like MCP and A2A can help make it easier to catch and resolve security issues even as is. Chen uses MCP in his research to test the roles different programs can play in attacks to better understand vulnerabilities. Chu at Anthropic believes that these tools could let cybersecurity companies more easily deal with attacks against agents, because it will be easier to unpack who sent what. 

How open should these protocols be?

Although MCP and A2A are two of the most popular agent protocols available today, there are plenty of others in the works. Large companies like Cisco and IBM are working on their own protocols, and other groups have put forth different designs like Agora, designed by researchers at the University of Oxford, which upgrades agent-to-service communication from human language to structured data in real time.

Many developers hope there could eventually be a registry of safe, trusted systems to navigate the proliferation of agents and tools. Others, including Chen, want users to be able to rate different services in something like a Yelp for AI agent tools. Some more niche protocols have even built blockchains on top of MCP and A2A so that servers can show they are not just spam. 

Both MCP and A2A are open-source, which is common for would-be standards as it lets others work on building them. This can help protocols develop faster and more transparently. 

“If we go build something together, we spend less time overall, because we’re not having to each reinvent the wheel,” says David Nalley, who leads developer experience at Amazon Web Services and works with a lot of open-source systems, including A2A and MCP. 

Nalley oversaw Google’s donation of A2A to the Linux Foundation, a nonprofit organization that guides open-source projects, back in June. With the foundation’s stewardship, the developers who work on A2A (including employees at Google and many others) all get a say in how it should evolve. MCP, on the other hand, is owned by Anthropic and licensed for free. That is a sticking point for some open-source advocates, who want others to have a say in how the code base itself is developed. 

“There’s admittedly some increased risk around a single person or a single entity being in absolute control,” says Nalley. He says most people would prefer multiple groups to have a “seat at the table” to make sure that these protocols are serving everyone’s best interests. 

However, Nalley does believe Anthropic is acting in good faith—its license, he says, is incredibly permissive, allowing other groups to create their own modified versions of the code (a process known as “forking”). 

“Someone could fork it if they needed to, if something went completely off the rails,” says Nalley. IBM’s Agent Communication Protocol was actually spun off of MCP. 

Anthropic is still deciding exactly how to develop MCP. For now, it works with a steering committee of outside companies that help make decisions on MCP’s development, but Anthropic seems open to changing this approach. “We are looking to evolve how we think about both ownership and governance in the future,” says Chu.

Is natural language fast enough?

MCP and A2A work on the agents’ terms—they use words and phrases (termed natural language in AI), just as AI models do when they are responding to a person. This is part of the selling point for these protocols, because it means the model doesn’t have to be trained to talk in a way that is unnatural to it. “Allowing a natural-language interface to be used between agents and not just with humans unlocks sharing the intelligence that is built into these agents,” says Surapaneni.

But this choice does come with drawbacks. Natural-language interfaces lack the precision of APIs, and that could result in incorrect responses. And it creates inefficiencies. 

Usually, an AI model reads and responds to text by splitting words into tokens. The AI model will read a prompt, split it into input tokens, generate a response in the form of output tokens, and then put these tokens into words to send back. These tokens define in some sense how much work the AI model has to do—that’s why most AI platforms charge users according to the number of tokens used. 

But the whole point of working in tokens is so that people can understand the output—it’s usually faster and more efficient for machine-to-machine communication to just work over code. MCP and A2A both work in natural language, so they require the model to spend tokens as the agent talks to other machines, like tools and other agents. The user never even sees these exchanges—all the effort of making everything human-readable doesn’t ever get read by a human. “You waste a lot of tokens if you want to use MCP,” says Chen. 

Chen describes this process as potentially very costly. For example, suppose the user wants the agent to read a document and summarize it. If the agent uses another program to summarize here, it needs to read the document, write the document to the program, read back the summary, and write it back to the user. Since the agent needed to read and write everything, both the document and the summary get doubled up. According to Chen, “It’s actually a lot of tokens.”
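
A back-of-the-envelope sketch, with made-up token counts, shows how quickly that doubling adds up:

    # Illustrative numbers only - real token counts depend on the model and the document.
    document_tokens = 8_000   # the document the user wants summarized
    summary_tokens = 400      # the summary the helper program produces

    # Direct: one agent reads the document and writes the summary.
    direct_total = document_tokens + summary_tokens

    # Via a second program over a natural-language protocol: the agent reads the
    # document, writes it out to the other program, reads the summary back, then
    # writes it to the user - so both the document and the summary are handled twice.
    relayed_total = 2 * document_tokens + 2 * summary_tokens

    print(f"direct:  {direct_total:,} tokens")
    print(f"relayed: {relayed_total:,} tokens ({relayed_total / direct_total:.1f}x)")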

As with so many aspects of MCP and A2A’s designs, their benefits also create new challenges. “There’s a long way to go if we want to scale up and actually make them useful,” says Chen. 

5 Content Marketing Ideas for September 2025

In September 2025, ecommerce content marketers will find inspiration in holidays, both serious and quirky.

Content such as articles and videos can build a relationship with shoppers and keep folks visiting a business’s website. Content drives visibility in generative AI platforms and organic search and fuels social media and email marketing.

The content challenge, however, is generating new topics and material.

What follows are five content marketing ideas your business can use in September 2025.

Labor Day

Image: A man cooking outdoors on a grill.

Labor Day signals seasonal change, creating content marketing opportunities.

Celebrated in the U.S. on the first Monday of September, Labor Day has its roots in the 19th-century workers’ movement, when unions fought for higher wages, better working conditions, and shorter work days at the peak of the Industrial Revolution.

While it retains some of its workers’ pride, the holiday now represents the unofficial end of summer and an occasion for grilling, parades, and autumn preparation.

It’s a major retail event, too.

Taken together, the day’s history, emphasis on seasonal change, and revenue value make it an exceptional marketing opportunity.

Here are some example article or video titles.

  • “Ultimate Guide to Hosting a Last-Minute Labor Day BBQ” could list products such as grilling accessories, BBQ rubs, and similar.
  • “How to Dress for Labor Day 2025” would feature style inspiration and the option to “buy the look.”
  • “Get Your Home Ready for the Fall” could leverage the season’s “fresh start,” offering tips on decluttering, organizing, or swapping out decor.

Classical Music Month

Image: A woman playing a cello.

Classical Music Month provides ideas beyond music and instruments.

President Bill Clinton established Classical Music Month in 1994.

“Classical music is a celebration of artistic excellence. It spans centuries and generations, delighting and inspiring listeners of all ages. During Classical Music Month, we recognize the many talented composers, conductors, and musicians who bring classical music to our ears and enrich our lives,” wrote Clinton in his official proclamation.

Classical Music Month is an obvious marketing opportunity for music stores, but plenty of other retailers could benefit. Here are example titles:

  • A kitchen or party supplier: “The Perfect Classical Music Playlist for a Relaxing Evening”
  • A formal wear shop: “What to Wear to the Symphony This Fall”
  • Niche memorabilia store: “The 10 Greatest Classical Movie Themes”

National Read a Book Day

Image: A man reading a book on a couch next to a dog.

Reading is a pastime and a marketing opportunity.

According to various “national day” websites, September 6, 2025, is National Read a Book Day. It’s hardly an official holiday, but there are plenty of opportunities to create blog posts, long-form articles, videos, or podcasts aimed at bibliophiles.

Marketers can frame National Read a Book Day not as a day to sell books, but as a holiday celebrating quiet, comfort, and imagination. This approach opens up myriad content options.

Here are some example article or video titles.

  • “19 Gift Ideas for Readers Who Already Own Too Many Books”
  • “How to Create the Ultimate Reading Nook at Home”
  • “Evening Reading Routines That Make You Sleep Better”

National Salami Day

Image: Salamis and crackers on a cutting board.

National Salami Day is ripe with content marketing flavor.

There is no doubt that September 7th’s National Salami Day is a playful observance, meant to bring a little laughter and food to its celebrants.

Merchants selling items such as charcuterie boards, knives, specialty foods, cheese, wine, or picnic accessories will likely have the most success with salami articles, videos, and podcasts. But folks in other industries can attract readers and viewers, too.

Consider that Newsweek addressed National Salami Day, and t-shirt shop Redbubble has several salami-themed products.

Instructional and How-to

Mr. Porter has long been an example of good ecommerce content marketing.

How-to articles and videos are the foundation of ecommerce content marketing, delivering on the three pillars of attracting, engaging, and retaining shoppers.

Instructional content is also a powerful lead magnet and can fuel search, social media, newsletters, and shoppable videos.

Take inspiration from many retail websites. Here are five how-to articles from The Journal by men’s apparel merchant Mr. Porter:

Cloudflare Delists And Blocks Perplexity From Crawling Websites via @sejournal, @martinibuster

Cloudflare announced that they delisted Perplexity’s crawler as a verified bot and are now actively blocking Perplexity and all of its stealth bots from crawling websites. Cloudflare acted in response to multiple user complaints against Perplexity related to violations of robots.txt protocols, and a subsequent investigation revealed that Perplexity was using aggressive rogue bot tactics to force its crawlers onto websites.

Cloudflare Verified Bots Program

Cloudflare has a system called Verified Bots that whitelists bots in their system, allowing them to crawl the websites that are protected by Cloudflare. Verified bots must conform to specific policies, such as obeying the robots.txt protocols, in order to maintain their privileged status within Cloudflare’s system.

Perplexity was found to be violating Cloudflare’s requirements that bots abide by the robots.txt protocol and refrain from using IP addresses that are not declared as belonging to the crawling service.

Cloudflare Accuses Perplexity Of Using Stealth Crawling

Cloudflare observed various activities indicative of highly aggressive crawling, with the intent of circumventing the robots.txt protocol.

Stealth Crawling Behavior: Rotating IP Addresses

Perplexity circumvents blocks by using rotating IP addresses, changing ASNs, and impersonating browsers like Chrome.

Perplexity has a list of official IP addresses that crawl from a specific ASN (Autonomous System Number). These IP addresses help identify legitimate Perplexity crawlers.

An ASN is part of the Internet networking system that provides a unique identifying number for a group of IP addresses. For example, users who access the Internet via an ISP do so with a specific IP address that belongs to an ASN assigned to that ISP.

When blocked, Perplexity attempted to evade the restriction by switching to different IP addresses that are not listed as official Perplexity IPs, including entirely different ones that belonged to a different ASN.
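
Site owners can run a version of this check themselves by comparing the IPs in their logs against the ranges a crawler publishes as its own. Here is a minimal sketch using Python's ipaddress module; the ranges and log IPs are placeholders, not Perplexity's actual published list.

    import ipaddress

    # Placeholder ranges - substitute the crawler's officially published IP ranges.
    DECLARED_RANGES = [ipaddress.ip_network(cidr) for cidr in ("192.0.2.0/24", "198.51.100.0/24")]

    # Placeholder IPs pulled from your server logs for requests claiming to be the bot.
    log_ips = ["192.0.2.17", "203.0.113.55"]

    for raw_ip in log_ips:
        ip = ipaddress.ip_address(raw_ip)
        declared = any(ip in network for network in DECLARED_RANGES)
        print(f"{raw_ip}: {'declared range' if declared else 'UNDECLARED - possible stealth crawl'}")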

Stealth Crawling Behavior: Spoofed User Agent

The other sneaky behavior that Cloudflare identified was that Perplexity changed its user agent in order to circumvent attempts to block its crawler via robots.txt.

For example, Perplexity’s bots are identified with the following user agents:

  • PerplexityBot
  • Perplexity-User

Cloudflare observed that Perplexity responded to user agent blocks by using a different user agent that posed as a person crawling with Chrome 124 on a Mac system. That’s a practice called spoofing, where a rogue crawler identifies itself as a legitimate browser.

According to Cloudflare, Perplexity used the following stealth user agent:

“Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36”
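
Because that string is identical to a real Chrome browser, the user agent alone can't expose the spoofing. One rough heuristic is to flag requests that present a browser user agent but originate from ranges you have already identified as crawler infrastructure, as in this sketch; the log entries and IP range are made up, and real log formats will differ.

    import ipaddress

    SPOOFED_UA = ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36")

    # Placeholder: a range you have already identified as crawler/data-center infrastructure.
    SUSPECT_RANGES = [ipaddress.ip_network("203.0.113.0/24")]

    # Made-up (ip, user_agent) pairs standing in for parsed access-log entries.
    log_entries = [
        ("203.0.113.9", SPOOFED_UA),
        ("198.51.100.20", SPOOFED_UA),
    ]

    for ip_text, user_agent in log_entries:
        ip = ipaddress.ip_address(ip_text)
        suspect = any(ip in network for network in SUSPECT_RANGES)
        if "Chrome" in user_agent and suspect:
            print(f"{ip_text}: browser user agent from a suspect range - possible stealth crawl")
        else:
            print(f"{ip_text}: nothing flagged")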

Cloudflare Delists Perplexity

Cloudflare announced that Perplexity is delisted as a verified bot and that they will be blocked:

“The Internet as we have known it for the past three decades is rapidly changing, but one thing remains constant: it is built on trust. There are clear preferences that crawlers should be transparent, serve a clear purpose, perform a specific activity, and, most importantly, follow website directives and preferences. Based on Perplexity’s observed behavior, which is incompatible with those preferences, we have de-listed them as a verified bot and added heuristics to our managed rules that block this stealth crawling.”

Takeaways

  • Violation Of Cloudflare’s Verified Bots Policy
    Perplexity violated Cloudflare’s Verified Bots policy, which grants crawling access to trusted bots that follow common-sense rules like honoring the robots.txt protocol.
  • Perplexity Used Stealth Crawling Tactics
    Perplexity used undeclared IP addresses from different ASNs and spoofed user agents to crawl content after being blocked from accessing it.
  • User Agent Spoofing
    Perplexity disguised its bot as a human user by posing as Chrome on a Mac operating system in attempts to bypass filters that block known crawlers.
  • Cloudflare’s Response
    Cloudflare delisted Perplexity as a Verified Bot and implemented new blocking rules to prevent the stealth crawling.
  • SEO Implications
    Cloudflare users who want Perplexity to crawl their sites may wish to check if Cloudflare is blocking the Perplexity crawlers, and, if so, enable crawling via their Cloudflare dashboard.

Cloudflare delisted Perplexity as a Verified Bot after discovering that it repeatedly violated the Verified Bots policies by disobeying robots.txt. To evade detection, Perplexity also rotated IPs, changed ASNs, and spoofed its user agent to appear as a human browser. Cloudflare’s decision to block the bot is a strong response to aggressive bot behavior on the part of Perplexity.

ChatGPT Nears 700 Million Weekly Users, OpenAI Announces via @sejournal, @MattGSouthern

OpenAI’s ChatGPT is on pace to reach 700 million weekly active users, according to a statement this week from Nick Turley, VP and head of the ChatGPT app.

The milestone marks a sharp increase from 500 million in March and represents a fourfold jump compared to the same time last year.

Turley shared the update on X, writing:

“This week, ChatGPT is on track to reach 700M weekly active users — up from 500M at the end of March and 4× since last year. Every day, people and teams are learning, creating, and solving harder problems. Big week ahead. Grateful to the team for making ChatGPT more useful and delivering on our mission so everyone can benefit from AI.”

How Does This Compare to Other Search Engines?

Weekly active user (WAU) counts aren’t typically shared by traditional search engines, making direct comparisons difficult. Google reports aggregate data like total queries or monthly product usage.

While Google handles billions of searches daily and reaches billions of users globally, its early growth metrics were limited to search volume.

By 2004, roughly six years after launch, Google was processing over 200 million daily searches. That figure grew to four billion daily searches by 2009, more than a decade into the company’s existence.

For Microsoft’s Bing search engine, a comparable data point came in 2023, when Microsoft reported that its AI-powered Bing Chat had reached 100 million daily active users. However, that refers to the new conversational interface, not Bing Search as a whole.

How ChatGPT’s Growth Stands Out

Unlike traditional search engines, which built their user bases during a time of limited internet access, ChatGPT entered a mature digital market where global adoption could happen immediately. Still, its growth is significant even by today’s standards.

Although OpenAI hasn’t shared daily usage numbers, reporting WAU gives us a picture of steady engagement from a wide range of users. Weekly stats tend to be a more reliable measure of product value than daily fluctuations.

Why This Matters

The rise in ChatGPT usage is evidence of a broader shift in how people find information online.

A Wall Street Journal report cites market intelligence firm Datos, which found that AI-powered tools like ChatGPT and Perplexity make up 5.6% of desktop browser searches in the U.S., more than double their share from a year earlier.

The trend is even stronger among early adopters. Among people who began using large language models in 2024, nearly 40% of their desktop browser visits now go to AI search tools. During the same period, traditional search engines’ share of traffic from these users dropped from 76% to 61%, according to Datos.

Looking Ahead

With ChatGPT on track to reach 700 million weekly users, OpenAI’s platform is now rivaling the scale of mainstream consumer products.

As AI tools become a primary starting point for queries, marketers will need to rethink how they approach visibility and engagement. Staying competitive will require strategies focused as much on AI optimization as on traditional SEO.


Featured Image: Photo Agency/Shutterstock

How AI Search Should Be Shaping Your CEO’s & CMO’s Strategy [Webinar] via @sejournal, @theshelleywalsh

AI is rapidly changing the rules of SEO. From generative ranking to vector search, the new rules are not only technical but also reshaping how business leaders make decisions.

Join Dan Taylor on August 14, 2025, for an exclusive SEJ Webinar tailored for C-suite executives and senior leaders. In this session, you’ll gain essential insights to understand and communicate SEO performance in the age of AI.

Here’s what you’ll learn:

AI Search Is Impacting Everything. Are You Ready?

AI search is already here, and it’s impacting everything from SEO KPIs to customer journeys. This webinar will give you the tools to lead your teams through the shift with confidence and precision.

Register now for a business-first perspective on AI search innovation. If you can’t attend live, don’t worry. Sign up anyway, and we’ll send you the full recording.

Which SEO Jobs AI Will Reshape & Which Might Disappear via @sejournal, @DuaneForrester

You’ve probably seen the headlines like: “AI will kill SEO,” “AI will replace marketing roles,” or the latest panic: “Is your digital marketing job safe?”

Well, maybe not those exact headlines, but you get the idea, and I’m sure you have seen something similar.

Let’s clear something up: AI is not making SEO irrelevant. It’s making certain tasks obsolete. And yes, some jobs built entirely around those tasks are at risk.

A recent Microsoft study analyzed over 200,000 Bing Copilot interactions to measure task overlap between human job functions and AI-generated outputs. Their findings are eye-opening:

  • Translators and Interpreters: 98% overlap with AI tasks.
  • Writers and Authors: 88% overlap.
  • Public Relations Specialists: 79% overlap.

SEO as a field wasn’t directly named in the study, but many roles common within SEO map tightly to these job categories.

If you write, edit, report, research, or publish content as part of your daily work, this isn’t a hypothetical shift. It’s already happening.

(Source: Microsoft AI Job Impact – Business Insider – follow through this link to reach the download location for the original PDF of the study. BI summarizes the information, but links to MSFT, which in turn links to the source for the PDF.)

What’s Actually Changing

AI isn’t replacing SEO. It’s changing what “search engine optimization” means, and where and how value is measured.

In traditional SEO, the focus was clear:

  • Rank high.
  • Earn the click.
  • Optimize the page for humans and crawlers.

That still matters. But, in AI-powered search systems, the sequence is different:

  1. Content is chunked behind the scenes: paragraphs, lists, and answers are sliced and stored in vector form.
  2. Prompts trigger retrieval: the LLM pulls relevant chunks, often based on embeddings, not just keywords. (So, concepts and relationships, not keywords per se.)
  3. Only a few chunks make it into the answer. Everything else is invisible, no matter how high it once ranked.

This new paradigm shifts the rules of engagement. Instead of asking, “Where do I rank?” the better question is, “Was my content even retrieved?” That makes this a binary system, not a sliding scale.
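
To make "retrieved or not" concrete, here is a toy sketch of that retrieval step: chunks are stored as vectors, the prompt is embedded the same way, and only the closest chunks ever reach the model. The three-number "embeddings" are made up; real systems use embedding models with hundreds or thousands of dimensions.

    import math

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

    # Made-up 3-dimensional "embeddings" for three content chunks.
    chunks = {
        "How to fix a noindex tag": [0.9, 0.1, 0.0],
        "Our company history": [0.1, 0.9, 0.2],
        "Robots.txt blocking guide": [0.8, 0.2, 0.1],
    }

    query_embedding = [0.85, 0.15, 0.05]  # made-up embedding for "why isn't my page indexed"

    ranked = sorted(chunks.items(),
                    key=lambda item: cosine_similarity(query_embedding, item[1]),
                    reverse=True)

    top_k = 2  # only these chunks reach the model; everything else is invisible
    for title, _ in ranked[:top_k]:
        print("retrieved:", title)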

In this new world of retrieval, the direct answer to the question, “Where do I rank?” could be “ChatGPT,” “Perplexity,” “Claude,” or “CoPilot,” instead of a numbered position.

In some ways, this isn’t as big a shift as some folks would have you believe. After all, as the old joke asks, “Where do you hide a dead body?” To which the correct answer is “…on Page 2 of Google’s results!”

Morbid humor aside, the implication is that no one goes there, so there’s no value. That sentiment glosses over a lot of the real, nuanced detail that actual click-through rate data shows us (the top of page 2 typically has better CTRs than the bottom of page 1, for example), but it does serve up a meta point: if you’re not in the first few results on a traditional SERP, the drop-off in CTRs is precipitous.

So, it could be argued that, with most “answers” in generative AI systems today comprising a very limited set of references, these AI-based systems offer a new display path for consumers, but ultimately, those consumers will only be interacting with the same number of results they historically engaged with.

I mean, if we only ever really clicked on the top 3 results (generalizing here), and the rest were surplus to needs, then cutting an AI-sourced answer down to some words with only 1, 2 or 3 cited results amounts to a similar situation in terms of raw numbers of choice for consumers … 1, 2 or 3 clickable options.

Regardless, it does mark a shift in terms of work items and workflows, and here’s how that shift shows up across some core SEO tasks. Obviously, there could be many more, but these examples help set the stage:

  • Keyword research becomes embedding relevance and semantic overlap. It’s not about the exact phrase match in a gen AI result. It’s about aligning your language with the concepts AI understands. It’s about the concept of query fan-out (not new, by the way, but very important now).
  • Meta tag and title optimization become chunked headers and contextual anchor phrases. AI looks for cues inside content to determine chunk focus.
  • Backlink building becomes trust signal embedding and source transparency. Instead of counting links, AI asks: Does this source feel credible and citable?
  • Traffic analytics becomes retrieval testing and AI response monitoring. The question isn’t just how many visits you got, it’s whether your content shows up at all in AI-generated responses.

What this means for teams:

  • Your title tag isn’t just a headline; it’s a semantic hook for AI retrieval.
  • Content format matters more: bullets, tables, lists, and schema win because they’re easier to cite.
  • You need to test with prompts to see if your content is actually getting surfaced.

None of this invalidates traditional SEO. But, the visibility layer is moving. If you’re not optimizing for retrieval, you’re missing the first filter, and ranking doesn’t matter if you’re never in the response set.

The SEO Job Risk Spectrum

Microsoft’s study didn’t target SEO directly, but it mapped 20+ job types by their overlap with current AI tasks. I used those official categories to extrapolate risk within SEO job functions.

Image Credit: Duane Forrester

High Risk – Immediate Change Needed

SEO Content Writers

Mapped to: Writers & Authors (88% task overlap in the study, meaning an AI can perform 88% of these tasks today).

Why: These roles often involve creating repeatable, factual content, precisely the kind of output AI handles well today (to a degree, anyway). Think meta descriptions, product overviews, and FAQ pages.

The writing isn’t disappearing, but humans aren’t always required for first drafts anymore. Final drafts, yes, but first? No. And I’m not debating how factual the content is that an AI produces.

We all know the pitfalls, but I’ll say this: If your boss is telling you your job is going away, and your argument is “but AIs hallucinate,” think about whether that’s going to change the outcome of that meeting.

Link Builders/Outreach Specialists

Mapped to: Public Relations Specialists (79% overlap).

Why: Cold outreach and templated link negotiation can now be automated.

AI can scan for unlinked mentions, generate outreach messages, and monitor link placement outcomes, cutting into the core responsibilities of these roles.

Moderate Risk – Upskill To Stay Relevant

SEO Analysts

Mapped to: Market Research Analysts (~65% overlap).

Why: Data gathering and trend reporting are susceptible to automation. But, analysts who move into interpreting retrieval patterns, building AI visibility reports, or designing retrieval experiments can thrive.

Admittedly, SEO is a bit more specialized, but bottom or top of this stack, the risk remains moderate. This one, however, is heavily dependent on your actual job tasks.

Technical SEOs

Mapped to: Web Developers (not perfect, but as close as the study got).

Why: Less overlap with generative AI, but still pressured to evolve. Embedding hygiene, chunk structuring, and schema precision are now foundational.

The most valuable technical SEOs are becoming AI optimization architects. Not leaving their traditional work behind, but adopting new workflows.

Content Strategists/Editors

Mapped to: Editors & Technical Writers.

Why: Editing for humans and tone alone is out. Editing for retrievability is in. Strategists now must prioritize chunking, citation density, and clarity of topic anchors, not just user readability.

Or, at least, now consider that LLM bots are de facto users as well.

Lower Risk – Expanded Value And Influence

SEO Managers/Leads

Mapped to: Marketing Managers.

Why: Managers who understand both traditional and AI SEO have more leverage than ever. They’re responsible for team alignment, training decisions, and tool adoption.

This is a growth role, if guided by data, not gut instinct. Testing is life here.

CMOs/Strategy Executives

Mapped to: Marketing Executives.

Why: Strategic thinking isn’t automatable. AI can suggest, but it can’t set priorities across brand, trust, and investment.

Executives who understand how AI affects visibility will steer their companies more effectively, especially in content-heavy verticals.

Tactical Response By Role Type

Every job category on the risk curve deserves practical action.

Now, let’s look at how people in SEO roles can pivot, strengthen, or evolve, based on clear, verifiable capabilities.

High-Risk Roles: SEO Content Writers, Editors, Link Builders

  • Shift from traditional copywriting to creating structured, retrieval-friendly content.
  • Focus on chunk-based writing: short Q&A blocks, bullet-based explanations, and schema-rich snippets.
  • Learn AI prompt testing: Use platforms like ChatGPT or Google Gemini to query key topics and see if your content is surfaced without requiring a click.
  • Use gen AI visibility tools verified to support AI search tracking:
    • Profound tracks your brand’s appearance in AI search results across platforms like ChatGPT, Perplexity, and Google Overviews. You can see where you’re cited and which topics AI engines associate with you.
    • SERPRecon offers AI-powered content outlines and helps reverse-engineer AI overview logic to show what keywords and phrasing matter most. So, use a tool like this, then take the output as the basis for your query fan-out work.
  • Reinvent your role:
    • Write in chunks that AI can cite.
    • Embed trust signals (clear sourcing, authoritativeness).
    • Collaborate with data teams on embedding accuracy and chunk performance.

Moderate-Risk Roles: SEO Analysts, Technical SEOs, Content Strategists

  • Expand traditional ranking reports with retrievability diagnostics:
    • Use prompt simulations that probe content retrieval in real-time across AI engines.
    • Audit embedding and semantic alignment at the paragraph or chunk level.
  • Employ tools like those mentioned to analyze AI Overviews and generate content improvement outlines.
  • Monitor AI visibility gaps through new dashboards:
    • Track citation share versus competitors.
    • Identify topic clusters where your domain is cited less.
  • Understand structured data and schema:
    • Use markup to clearly define entities, relationships, and context for AI systems.
    • Prioritize formats like FAQPage, HowTo, and Product schema, where applicable. These are easier for LLMs and AI Overviews to cite (a minimal FAQPage sketch follows this list).
    • Align semantic clarity within chunks to schema-defined roles (e.g., question/answer pairs, step lists) to improve retrievability and surface relevance.
  • Join or lead internal “AI-SEO Workshops”:
    • Teach teams how to test content visibility in ChatGPT, Perplexity, or Google Overviews.
    • Share experiments in prompt engineering, chunk format outcomes, and schema effectiveness.
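
As a concrete illustration of the schema point above, here is a minimal sketch that assembles FAQPage markup as JSON-LD. The questions and answers are placeholders; the printed output would be embedded in the page inside a script tag of type application/ld+json.

    import json

    # Placeholder Q&A pairs - use the real questions and answers from the page itself.
    faq_items = [
        ("Why isn't my page indexed?",
         "Common causes include a noindex tag, a robots.txt block, or thin content."),
        ("How do I request reindexing?",
         "Inspect the URL in Google Search Console and request indexing after fixing the issue."),
    ]

    faq_schema = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in faq_items
        ],
    }

    print(json.dumps(faq_schema, indent=2))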

Lower-Risk Roles: SEO Managers, Digital Leads, CMOs

  • Sponsor retraining initiatives for semantic and vector-led SEO practices.
  • Revise hiring briefs and job descriptions to include skills like embedding knowledge, prompt testing, schema fluency, and chunk analysis.
  • Implement AI-visibility dashboards using dedicated tools:
    • Benchmark brand presence across search engines and generative platforms.
    • Use insights to guide future content and authority decisions.
  • Keep traditional SEO strong alongside AI tactics:
    • Technical optimization, speed, quality of content, etc., still matter.
    • Hybrid success requires both sides working in sync.
  • Set internal AI literacy standards:
    • Offer training on retrieval engineering, LLM behavior, and chunk visibility.
    • Ensure everyone understands AI’s core behaviors, what it cites, and what it ignores.

Reframing The Opportunity

This isn’t a “get out now” scenario for these jobs. It’s a “rebuild your toolkit” moment.

High overlap doesn’t mean you’re obsolete. It means the old version of your job won’t hold value without adaptation. And what gets automated away often wasn’t the best part of the job anyway.

AI isn’t replacing SEO, it’s distilling it. What’s left is:

  • Strategy that aligns with machine logic and user needs.
  • Content structure that supports fast retrieval, not just ranking.
  • Authority based on more, deeper, sometimes implied, trust signals, not just age or backlinks. Like E-E-A-T++.

Think of it this way: AI strips away the boilerplate. What’s left is your real contribution. Your judgment. Your design. Your clarity.

New opportunity lanes are forming right now:

  • Writers who evolve into retrievability engineers.
  • Editors who become semantic format strategists.
  • Technical SEOs who own chunk structuring and indexing hygiene.
  • Analysts who specialize in AI visibility benchmarking.

These aren’t job titles (yet), but the work is happening. If you’re in a role that touches content, structure, trust, or performance, now is the time to sharpen your relevance, not to fear automation.

Final Word

The fundamentals still matter. Technical SEO, content quality, and UX don’t go away; they evolve alongside AI.

No, SEO isn’t dying, it’s becoming more strategic, more semantic, more valuable. AI-driven retrievability is already redefining visibility. Are you ready to adapt?



This post was originally published on Duane Forrester Decodes.


Featured Image: /Shutterstock

Why Your PPC Structure Should Mirror Your Business Model via @sejournal, @brookeosmundson

A lot of PPC accounts are built from the bottom up. You start with keyword research, group them by themes or match types, maybe throw in some location targeting, and go from there.

But then reporting becomes messy. Budget allocation feels random or reactive.

Then, when leadership asks for performance broken out by product line or region, you’re left pulling together a spreadsheet patchwork that still doesn’t tell the full story.

That’s because your PPC account structure doesn’t match how the business actually operates.

When your campaigns mirror your business model, everything starts working together.

You’re not just optimizing for clicks or conversions; you’re aligning with how revenue is made, who’s responsible for what, and how success is measured across the company.

This article will walk through how to shift from a keyword-centric approach to a business-aligned strategy.

Additionally, you’ll leave with practical advice for both restructuring existing accounts and building new ones the right way.

Why Structure Is More Than Just A Clean Campaign View

Let’s be honest: Campaign structure is rarely the most exciting part of PPC. But it’s one of the most important.

The way your account is structured affects everything from how you manage budgets to how clearly you can report on performance.

And yet, too many accounts are still structured around what’s easiest to set up, not what makes the most sense for the business.

If you’ve ever found yourself duplicating reports just to slice performance by business line, or struggled to isolate budgets by region, chances are the issue isn’t performance. It’s how your PPC campaigns are structured.

Well-structured accounts give you clarity, not just control. They help you:

  • Allocate budget where it matters most.
  • Tie campaign results back to business outcomes.
  • Make faster decisions with cleaner data.
  • Align with sales and finance teams instead of operating in a silo.

When your PPC structure reflects how your company makes money, your campaigns do more than drive leads or sales. They go a step further and support actual business growth.

Rethink The Starting Point By Beginning With The Business Model

Most marketers are taught to start with keyword research. But when you begin with the business model instead, you’re already thinking strategically.

Now, for agencies, this can be harder to manage because you’ve likely got one person trying to win the business and a completely different team executing on what’s agreed upon.

If you’re still in the discovery phase with a client, start by asking some of these questions:

  • What are the core revenue drivers for the business?
  • Are there different business units, product lines, or services with unique goals?
  • Do some offerings have higher margins, longer sales cycles, or different audiences?
  • Are there geographic differences in how the business operates or sells?

These answers should directly inform how your campaigns are structured.

Let’s say you’re managing PPC for a multi-location financial services brand.

Their retail checking accounts, home loans, and business banking products each serve different customers, generate revenue differently, and likely have different internal stakeholders.

Instead of grouping all financial keywords into one campaign, each of those lines should have its own campaign with distinct goals, budgets, and creative.

You can then track performance in a way that lines up with internal reporting and make adjustments based on real business priorities, not just ad metrics.

A Better Framework For Structuring Your Account

Once you have a clear picture of how the business operates, use that to inform a top-down PPC campaign structure.

Here are three starting points that typically work well.

1. Mirror The Business Unit Or P&L

If the business tracks revenue separately for each product or service line, your campaigns should reflect that.

Not only does this make budgeting easier, but it also keeps reporting clean and relevant for internal teams.

You can speak the same language as your stakeholders and clearly show how paid media supports each part of the business.

Here’s an example breakdown:

  • Campaign A: “Personal Loans | Search | US”
  • Campaign B: “Student Banking | PMax | Northeast”
  • Campaign C: “Small Business Lending | Search | Canada”

Each one can then be built with appropriate audience targeting, bidding strategies, and conversion goals.

2. Segment By Funnel Stage Or Intent

Not all keywords or users are created equal. Think about structuring campaigns around the user’s stage in the journey.

Some examples include:

  • Branded campaigns (warm leads and returning users).
  • Non-branded high-intent campaigns (ready to convert).
  • Informational or research-stage campaigns (top-of-funnel).
  • Competitor-focused campaigns (comparison shoppers).
  • Awareness-driving campaigns (creating demand).

This lets you tailor bid strategy, messaging, and landing pages to match the level of intent and measure success more appropriately.

3. Separate Testing From Scaling

Every account needs room for experimentation. But, testing new keywords, assets, or audiences shouldn’t get in the way of scaling what already works.

A good PPC structure separates out:

  • Evergreen campaigns that consistently drive results.
  • Test campaigns with new targeting, creative, or offers.
  • Seasonal or geo-specific initiatives that need short-term budget support.

This makes it easier to measure impact, allocate budget, and avoid letting unproven elements tank your top-performing campaigns.

For Existing Accounts: When To Rethink Your PPC Structure

If your campaigns have been live for a while, restructuring might feel daunting. But, sometimes a reset is the only way to make your account work smarter.

Here are a few signs it might be time to make a change:

  • You can’t easily map campaign performance back to business priorities.
  • You’re constantly building workaround reports for internal teams.
  • Budget shifts feel reactive instead of strategic.
  • Performance has plateaued, but it’s unclear why.

Before making big changes, start with an audit. Compare how the business is structured vs. how your campaigns are organized.

Are your campaigns aligned with revenue-driving units? Do you have enough control over budgets, bids, and assets for key areas?

If not, consider starting small. Choose one business unit or region and restructure those campaigns first.

Document what you changed, how it aligns with the business, and what you’re measuring. Then, repeat the process for other areas as needed.

If You’re Setting Up A New PPC Account, Here’s Where To Start

New accounts are a blank slate and a great opportunity to get it right from the beginning.

Here’s a simple approach to building a structure around your business model:

  1. Outline your revenue centers: products, services, regions, or whatever makes sense for the business.
  2. Group campaigns around these core units, each with its own budget, goals, and audience strategy (see the sketch after this list).
  3. Map audience intent to campaign type. Use ad groups or asset groups to segment further by funnel stage or user behavior.
  4. Plan for scale. Use a naming convention that can grow with the business and makes sense to anyone reviewing the account.
  5. Set conversion tracking and bidding by campaign type. Not everything should optimize toward the same goal.
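
If it helps to make this concrete, here is a minimal sketch in Python of a business-aligned account plan expressed as data. Everything in it is hypothetical: the business units echo the earlier financial-services example, and the budgets, conversion goals, and naming fields are illustrative, not a prescribed template.

    # Illustrative only: a business-aligned PPC account plan expressed as data.
    # Business units, budgets, and conversion goals are hypothetical examples.
    from dataclasses import dataclass

    @dataclass
    class CampaignPlan:
        business_unit: str    # revenue center the campaign supports
        channel: str          # e.g., "Search" or "PMax"
        funnel_stage: str     # "Brand", "Non-Brand", "Research", etc.
        region: str           # geography the business reports on
        monthly_budget: int   # budget owned by this business unit
        conversion_goal: str  # the action this campaign optimizes toward

        def name(self) -> str:
            # Naming convention: Business Unit | Channel | Funnel Stage | Region
            return f"{self.business_unit} | {self.channel} | {self.funnel_stage} | {self.region}"

    account_plan = [
        CampaignPlan("Personal Loans", "Search", "Non-Brand", "US", 25_000, "Loan Application"),
        CampaignPlan("Student Banking", "PMax", "Awareness", "Northeast", 8_000, "Account Opened"),
        CampaignPlan("Small Business Lending", "Search", "Non-Brand", "Canada", 15_000, "Demo Request"),
    ]

    for plan in account_plan:
        print(f"{plan.name()} -> {plan.conversion_goal} (${plan.monthly_budget:,}/mo)")

Even if the plan only ever lives in a spreadsheet, making every campaign declare its business unit, budget, and conversion goal up front is what keeps the later reporting and budget conversations simple.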

This setup makes it easier to scale, test new ideas, and keep everyone from marketing to finance on the same page.

Why Alignment With Sales & Finance Is A Must

When your campaigns align with the business model, it’s easier to speak the language of the teams around you.

Sales wants to know where leads are coming from and how qualified they are. Finance wants to understand return on investment (ROI) by product line or geography.

Executives want to know if paid media is supporting growth in the right areas.

If your campaign structure mirrors the way they already think, the reporting becomes instantly more useful. You’ll spend less time explaining what a campaign does and more time discussing what it’s driving.

When performance is strong, it’s much easier to justify additional investment if you can show that spend ties directly to core business units or revenue goals.

Supporting PPC Structure With The Right Tools And Workflow

Having a smart structure on paper only goes so far. To actually execute and manage it day to day, you need systems that support clarity and consistency.

First, start with naming conventions. A standardized way of naming campaigns, ad groups, and assets helps everyone understand what each item is meant to do.

Include details like business unit, funnel stage, and region to keep things clean and scalable.

Then, align your conversion tracking setup with how the business defines success.

If you’re managing multiple product lines or customer types, don’t lump everything under one conversion goal. Set up separate conversion actions for each key area so you can measure impact more precisely.

Reporting also needs to reflect this structure. Build dashboards that slice performance by business unit, product, geography, or intent stage.

Whether you’re using Looker Studio or a different reporting suite, make sure the views match the way leadership wants to see results.
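
As a rough illustration of why the business unit belongs at the front of the campaign name, here is a small Python sketch that rolls campaign rows up to business-unit level, the view leadership usually asks for. The campaigns, costs, and conversion counts are made up; the point is how little work the roll-up takes once naming follows the business.

    # Illustrative only: aggregate made-up campaign rows by business unit,
    # relying on the business unit being the first field in the campaign name.
    from collections import defaultdict

    rows = [
        {"campaign": "Personal Loans | Search | Brand | US", "cost": 400.0, "conversions": 25},
        {"campaign": "Personal Loans | Search | Non-Brand | US", "cost": 1200.0, "conversions": 30},
        {"campaign": "Student Banking | PMax | Awareness | Northeast", "cost": 800.0, "conversions": 12},
    ]

    by_unit = defaultdict(lambda: {"cost": 0.0, "conversions": 0})
    for row in rows:
        unit = row["campaign"].split(" | ")[0]  # business unit is the first naming field
        by_unit[unit]["cost"] += row["cost"]
        by_unit[unit]["conversions"] += row["conversions"]

    for unit, totals in by_unit.items():
        cpa = totals["cost"] / totals["conversions"]
        print(f"{unit}: ${totals['cost']:,.0f} spend, {totals['conversions']} conversions, ${cpa:.2f} CPA")

Whatever dashboard or reporting suite you use, this is the shape of view to aim for: spend and results grouped the way the business already thinks about them.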

Don’t forget workflow tools and collaboration. Use shared documents or project management platforms to track which campaigns map to which business outcomes.

Make sure your internal stakeholders understand what each campaign is doing and why. This keeps cross-functional teams aligned and eliminates confusion about what paid media is actually delivering.

Finally, plan regular check-ins to ensure your structure still fits the evolving business.

As product lines shift or priorities change, your campaigns need to reflect that. Structure is not a “set it and forget it” task. Your PPC structure should evolve alongside your business.

It’s Time To Move Past Legacy Structures

Old habits die hard, especially if you’ve been in PPC for years. But, if your campaigns are still organized by match type or broad themes, you’re probably limiting what you can learn and what you can improve.

Campaigns should be built to reflect what matters most to the business.

If you’re not sure where to begin, talk to your sales or finance counterparts. They’ll give you a clearer picture of how the company thinks about performance, and you can structure campaigns to match.

This doesn’t mean throwing out everything you’ve built. But, it does mean stepping back and asking, “Does this structure actually help us measure success and allocate resources in a way that reflects how the business operates?”

If the answer is no, then it’s worth rethinking your setup.

When you take a top-down approach to structuring your campaigns, your PPC program becomes more than just a lead or sales generator. It becomes a strategic driver for the business.


Featured Image: SvetaZi/Shutterstock