The PPC Skills That Won’t Be Replaced By Automation

The best PPC specialists aren’t just campaign managers. They’re business consultants who happen to use paid advertising as their primary tool. As automation handles more tactical optimization, the value of a PPC professional increasingly lies in their ability to solve business problems, not just reduce cost-per-click.

Specialists who command premium rates and drive real growth possess skills that extend far beyond the ad platforms themselves. Here are the consulting capabilities that separate tactical executors from strategic growth partners.

Business Economics And Profit Optimization

Return on ad spend (ROAS) is a lazy metric.

For years, I’ve watched businesses optimize toward arbitrary ROAS targets that bear no relationship to actual profitability. A 400% ROAS sounds impressive until you realize the client is losing money on every sale after accounting for product costs, shipping, and overhead.

Understanding business economics means knowing the difference between revenue generation and profit generation. It means asking questions most PPC specialists never consider. What’s the true cost of this product? How do return rates vary by acquisition channel? What’s the cash flow impact of 30-day payment terms?

When you can structure campaigns around contribution margin rather than revenue multiples, you transition from order taker to strategic advisor. You start having conversations about product mix optimization, not just keyword expansion. You identify that promoting lower margin products at aggressive ROAS targets is destroying profitability, even as revenue climbs.

This shift requires moving beyond platform metrics and integrating P&L understanding into every strategic decision. Tools can help, but the real value comes from combining financial acumen with campaign execution.
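
To make the contribution-margin point concrete, here is a minimal sketch of the arithmetic; every price, cost, and the 400% target below is an illustrative assumption, not a real client figure.

```python
# Minimal sketch: compare a ROAS target against the break-even ROAS implied by
# contribution margin. All figures below are illustrative assumptions.
price = 100.00          # average selling price
cogs = 55.00            # product cost
shipping = 12.00        # fulfillment and shipping
payment_fees = 3.00     # payment processing

contribution_margin = price - cogs - shipping - payment_fees  # $30 per sale
margin_rate = contribution_margin / price                      # 0.30

# Break-even ROAS: the revenue needed per $1 of ad spend for contribution
# margin to exactly cover the ad cost.
break_even_roas = 1 / margin_rate                               # ~3.33x, i.e. 333%

target_roas = 4.0  # a "healthy looking" 400% ROAS target
profit_per_revenue_dollar = margin_rate - (1 / target_roas)     # $0.05 per $1 of revenue

print(f"Break-even ROAS: {break_even_roas:.2f}x")
print(f"At {target_roas:.1f}x ROAS, profit per $1 of revenue: ${profit_per_revenue_dollar:.2f}")
```

In this example, a 400% ROAS that looks comfortable against revenue leaves only five cents of contribution per revenue dollar; on a lower-margin product, the same target would lose money.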

Strategic Consulting

The hardest skill to develop is knowing when PPC isn’t the answer.

I’ve sat in countless meetings where stakeholders obsess over minor bid adjustments while ignoring fundamental business problems. The real issue isn’t your Quality Score. It’s that your product-market fit is weak, your pricing is uncompetitive, or your checkout process has an 85% abandonment rate.

Great consultants diagnose the actual problem, not just the visible symptoms. They recognize when poor PPC performance stems from weak value propositions that no amount of creative testing will fix. Or pricing strategies that make profitable acquisition impossible. Or product quality issues driving high return rates. Or seasonal demand shifts being misinterpreted as campaign degradation. Or website conversion barriers that make every click more expensive. This strategic approach to scaling requires moving beyond reactive optimizations, which I’ve covered in depth in my SCALE Framework article.

This requires stepping back from the platform interface and analyzing the entire customer journey. It means being comfortable telling a client that, before you optimize their ads, they need to fix their product pages, streamline their checkout, or reconsider their market positioning.

The specialists who can’t make this distinction end up optimizing deck chairs while the ship sinks.

Cross-Channel Strategy And Attribution Understanding

Channel silos are relics of an attribution-obsessed past.

The most valuable insight I can provide a client often has nothing to do with their Google Ads account. It’s recognizing that their Meta prospecting campaigns are generating awareness that makes Search more efficient. Or that their shopping campaigns are supporting brand term performance. Or that their display retargeting is shortening the consideration cycle.

Understanding how channels interact requires moving beyond last click thinking and grasping incrementality. It means knowing when a Search campaign should get credit for a conversion that happened because a user first saw a YouTube ad three weeks prior.

With marketing mix modeling gaining traction (Google’s Meridian is a clear signal), the future belongs to strategists who think in systems, not channels. This doesn’t mean you need to be an expert in every platform. But you need enough understanding to collaborate effectively and build cohesive strategies.

The T-shaped specialist who can manage PPC deeply while understanding SEO, CRO, email, and content marketing will always outperform the narrow specialist who only looks at their own metrics.

Conversion Rate Optimization And Post-Click Experience

Most PPC specialists treat the click as the finish line. It’s actually the starting line.

I’ve watched teams spend weeks debating headline variations while completely ignoring a landing page that converts at 2% when the industry standard is 8%. The math is simple. Improving that conversion rate to 4% has the same impact as doubling your traffic, except it’s often easier and cheaper to execute.

Yet CRO remains dramatically undervalued because it falls into a “no man’s land.” Developers don’t have the marketing context. Marketing teams lack the technical ability to implement changes. Agencies focus on what happens before the click because that’s what they’re paid to manage.

This creates a massive opportunity. The consultant who can identify conversion barriers, inefficient checkout flows, weak trust signals, poor mobile experiences, confusing navigation, and actually drive implementation becomes invaluable.

This requires user research skills, competitive analysis, hypothesis development, and enough technical understanding to work effectively with development teams. It means running structured A/B tests, not just making changes based on best practices you read in a blog post.

When you can demonstrate that optimizing the post-click experience generated a 50% revenue increase without touching ad spend, you’re no longer a PPC manager. You’re a growth consultant.

Stakeholder Management And Change Leadership

The best strategy in the world is worthless if you can’t get it implemented.

I’ve learned this the hard way. Early in my career, I’d present brilliant recommendations backed by compelling data, only to watch them die in committee because I hadn’t built buy-in with the right stakeholders or framed the change in terms that resonated with their priorities.

Consulting is as much about organizational navigation as technical expertise. It requires understanding that the CFO cares about cash flow, the CMO worries about brand equity, and the head of ecommerce is measured on conversion rate. You need to tailor your recommendations accordingly.

Great consultants master the soft skills that don’t appear in any PPC certification. Building credibility gradually rather than expecting instant authority. Communicating complex concepts without condescension. Managing expectations during testing phases when results aren’t immediate. Navigating political dynamics when data conflicts with executive intuition. Knowing when to push hard and when to compromise strategically.

This is especially critical when recommending major strategic shifts like changing attribution or tracking solutions, restructuring account architecture, or reducing spend on sacred cow campaigns that leadership loves but data shows are inefficient.

Change management isn’t about having the right answer. It’s about getting that answer implemented.

Data Translation And Business Storytelling

Data without narrative is just noise.

The ability to transform campaign metrics into business insights that non-technical stakeholders understand might be the most undervalued skill in PPC. Anyone can report that CPC increased 15% month over month. A consultant explains that rising competition from two new market entrants is driving auction pressure, quantifies the revenue impact, and presents three strategic options with clear trade-offs.

This requires moving beyond dashboard screenshots and learning to tell stories with data. Connecting platform metrics to business outcomes executives actually care about. Identifying patterns across multiple data sources like CRM, analytics, and ads platforms. Building business cases that project return on investment and acknowledge risk honestly. Presenting recommendations with clear logic, not just best practices. Adapting your communication style to your audience’s sophistication level.

I’ve found that the specialists who master this skill get invited into strategic planning conversations, not just campaign reviews. They become trusted advisors whose input shapes budget allocation, product roadmaps, and market expansion decisions.

Continuous Learning And Adaptive Thinking

Digital marketing changes daily. Your expertise has a half-life.

The consulting skills that matter most can’t be learned from a certification course. They’re developed through experience, curiosity, and willingness to work outside your comfort zone. The specialists who stay relevant are those who read beyond PPC news. Business strategy, behavioral economics, technology trends. They study industries deeply enough to understand their unique economics and customer behavior. They experiment constantly, even when current approaches are working. They seek out perspectives that challenge their assumptions. They recognize when their mental models are outdated and rebuild them.

What worked in 2020 doesn’t work in 2026. What works today won’t work in 2030. The only sustainable competitive advantage is the ability to learn faster than the market evolves.

Futureproof Your PPC Expertise

As AI and automation handle more tactical execution, the gap between order takers and strategic consultants will widen dramatically. The specialists who thrive will be those who can solve business problems using PPC as one tool among many.

They’ll understand profit mechanics well enough to structure campaigns around real business objectives. They’ll diagnose problems accurately rather than optimizing the wrong things efficiently. They’ll see channels as interconnected systems, not isolated silos. They’ll drive post-click optimization with the same rigor as pre-click management. They’ll navigate organizational complexity to get strategies implemented. They’ll translate data into narratives that drive action.

These aren’t nice-to-have skills for some future state. They’re what separates the valuable from the replaceable right now.

The question isn’t whether you can run a profitable Search campaign. It’s whether you can solve the business problems that make running that campaign worthwhile in the first place.


Featured Image: Master1305/Shutterstock

Using AI For SEO Can Fail Without Real Data (& How Ahrefs Fixes It) via @sejournal, @ahrefs

This post was sponsored by Ahrefs. The opinions expressed in this article are the sponsor’s own.

If you’ve ever run into the limits of solo AI or manual SEO tools, this article is for you.

AI on its own can write and suggest ideas, but without reliable data to anchor those suggestions, it can miss the mark. On the other hand, traditional SEO dashboards are powerful – yet slow and siloed. The emerging sweet spot? Connecting AI to real, live SEO data so you can ask natural language questions and get deep answers fast.

Ahrefs Uses Its Own MCP Server & It Improves SEO Workflows

At its core, MCP stands for Model Context Protocol – an open standard that lets compatible AI assistants (like ChatGPT and Claude) directly access external data sources and tools through a standardized connection. This means you can ask your AI assistant questions like “which keywords my competitor ranks for that I don’t” or “which sites are gaining the most organic traffic this year” – and get answers based on real, up-to-date SEO data instead of guesses.
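
For readers curious about what sits underneath such an assistant, here is a minimal sketch of an MCP client session using the open-source MCP Python SDK. The server command, the `example-seo-mcp-server` package name, the `API_KEY` variable, and the `keyword_gap` tool are placeholders invented for illustration, not Ahrefs’ actual configuration; check the vendor’s documentation for the real values.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical SEO MCP server launched over stdio; the package name and
# environment variable are placeholders, not a real Ahrefs integration.
server = StdioServerParameters(
    command="npx",
    args=["-y", "example-seo-mcp-server"],
    env={"API_KEY": "YOUR_API_KEY"},
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover which tools the server exposes (keyword, backlink, traffic lookups, etc.).
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Call a hypothetical tool; the name and arguments are illustrative only.
            result = await session.call_tool(
                "keyword_gap",
                arguments={"target": "mysite.com", "competitor": "competitor.com"},
            )
            print(result)

asyncio.run(main())
```

In day-to-day use you never write this plumbing yourself: the AI assistant negotiates the MCP connection and decides which tools to call based on your natural language prompt.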

Imagine you’re planning to launch a new eCommerce product. Instead of manually exporting CSVs from multiple dashboards and painstakingly combining them, you could simply prompt an AI assistant to pull competitive insights, keyword opportunities, and content ideas directly from a connected SEO dataset – all in one place. That’s the power of an MCP integration.

Why AI + Real SEO Data Together Beats Guessing Or Generic Prompts

Most marketers use at least two types of tools: dedicated SEO platforms (for data) and AI assistants (for speed and interpretation). However:

  • AI on its own can hallucinate – it generates plausible-sounding answers, but without live data, those answers may be inaccurate or outdated.
  • SEO dashboards by themselves are often slow – you click around multiple screens, export reports, and manually interpret results.
  • Humans still need to make strategic decisions – but data plus AI frees up your time to focus on strategy, not grunt work.

Connecting AI to a live SEO dataset unites the best of both worlds: the intelligence and language fluency of modern AI with the accuracy and scale of professional SEO metrics.

15 Practical Use Cases & Prompts To Ask Your SEO AI Agent

Below are real prompt ideas and workflows you can incorporate into your planning, competitive research, and SEO execution. These are grouped from simple (fast answers) to advanced (deep analysis) – and all are grounded in actionable insights you can use today.

Level 1: Quick Insights You Can Get in Minutes

These are great for rapid decision-making and daily checks.

1. Identify Sites Growing Organic Traffic

Ask your AI:

Which of these 10 competitors has grown organic search traffic the most over the last 12 months?
This lets you quickly spot who is gaining momentum – and why – without manual reporting.

2. Find Competitor Rankings You Don’t Rank For

Tell me which first-page Google rankings [Competitor A] has that [My Site] doesn’t.
This gives you a direct gap list you can use for content or optimization ideas.

3. Most Linked-To Pages on Any Domain

List the top 10 pages on [domain] by number of backlinks, and show their estimated traffic.
This helps you spot proven content winners and consider similar formats.

4. Identify Organic Competitors

Give me a list of the closest organic search competitors for [My Site].
Great for broadening your competitive set beyond the obvious brands.

5. Combine Keyword Research With Headline Ideas

Help me find keywords people use before buying [product], and suggest related blog post headlines.
This blends keyword discovery with content planning in one step.

Level 2: Intermediate, More Strategic Queries

These involve deeper insights and slightly longer processing time.

6. Find Trending Keywords (and Why)

Show up to 20 trending keywords in my niche that may grow in popularity next year – include explanations.
This is better than a static list – you get context and rationale.

7. Analyze Multiple Domains at Scale

Give me a table of these 20 domains with Domain Rating, Organic Traffic, and number of top-3 rankings.
Great for benchmarking and competitor comparison.

8. Structure an Article With Keyword Insights

Help me build an article outline for [topic] based on keyword research.
This combines research with SEO content planning.

9. Top Ranking Sites for Specific Keyword Set

Among these keyphrases, tell me which sites rank in the highest positions.
Very helpful when exploring emerging niches within broader topics.

10. Find Broken Backlinks for Outreach Opportunities

Identify broken backlinks in this subfolder with high-authority referring domains.
Perfect for targeted link building.

Level 3: Advanced, High-Impact Research

These take more data and processing – but return strategic intelligence you can act on.

11. International SEO Expansion Ideas

Find similar businesses that have expanded into new countries and show where their organic traffic is growing.
A great way to spot untapped markets.

12. Competitor Content Strategy Deep Dive

Analyze top organic competitors and show their content themes, unique angles, and ranking patterns.
This helps refine your content planning with context beyond just keywords.

13. Comprehensive Site SEO Recommendations

You are an SEO expert with access to extensive data – offer recommendations to grow organic traffic for [brand].
This leverages the AI to synthesize data into strategic advice you can execute.

14. In-Depth Industry Ranking Patterns

Provide a list of top keyphrases where a site ranks on the first page and the results include certain SERP features.
Used for deep pattern discovery in competitive environments.

15. Multi-Domain Backlink Profile Analysis

Show backlink acquisition rates for these five competitors.
Useful for assessing link velocity and authority-building trends.

Tips to Get More Out of Data-Driven AI Prompts

Use these best practices to ensure your AI assistant actually retrieves the correct data:

  • Always specify that you want results from the SEO dataset rather than web search.
  • Include clear context (e.g., competitors, timeframes, regions).
  • Be explicit about limits (e.g., “show only keyword opportunities with volume > X”).
  • Track your usage and data limits via your SEO dashboard so you don’t hit quotas unexpectedly.

Image Credits

Featured Image: Image by Ahrefs. Used with permission.

Microbes could extract the metal needed for cleantech

In a pine forest on Michigan’s Upper Peninsula, the only active nickel mine in the US is nearing the end of its life. At a time when carmakers want the metal for electric-vehicle batteries, nickel concentration at Eagle Mine is falling and could soon drop too low to warrant digging.

But earlier this year, the mine’s owner started testing a new process that could eke out a bit more nickel. In a pair of shipping containers recently installed at the mine’s mill, a fermentation-derived broth developed by the startup Allonnia is mixed with concentrated ore to capture and remove impurities. The process allows nickel production from lower-quality ore. 

Kent Sorenson, Allonnia’s chief technology officer, says this approach could help companies continue operating sites that, like Eagle Mine, have burned through their best ore. “The low-hanging fruit is to keep mining the mines that we have,” he says. 

Demand for nickel, copper, and rare earth elements is rapidly increasing amid the explosive growth of metal-intensive data centers, electric cars, and renewable energy projects. But producing these metals is becoming harder and more expensive because miners have already exploited the best resources. Like the age-old technique of rolling up the end of a toothpaste tube, Allonnia’s broth is one of a number of ways that biotechnology could help miners squeeze more metal out of aging mines, mediocre ore, or piles of waste.

The mining industry has intentionally seeded copper ore with microbes for decades. At current copper bioleaching sites, miners pile crushed copper ore into heaps and add sulfuric acid. Acid-loving bacteria like Acidithiobacillus ferrooxidans colonize the mound. A chemical the organisms produce breaks the bond between sulfur and copper molecules to liberate the metal.

Until now, beyond maintaining the acidity and blowing air into the heap, there wasn’t much more miners could do to encourage microbial growth. But Elizabeth Dennett, CEO of the startup Endolith, says the decreasing cost of genetic tools is making it possible to manage the communities of microbes in a heap more actively. “The technology we’re using now didn’t exist a few years ago,” she says.

Endolith analyzes bits of DNA and RNA in the copper-rich liquid that flows out of an ore heap to characterize the microbes living inside. Combined with a suite of chemical analyses, the information helps the company determine which microbes to sprinkle on a heap to optimize extraction. 

Endolith scientists use columns filled with copper ore to test the firm’s method of actively managing microbes in the ore to increase metal extraction. (Image credit: Endolith)

In lab tests on ore from the mining firm BHP, Endolith’s active techniques outperformed passive bioleaching approaches. In November, the company raised $16.5 million to move from its Denver lab to heaps in active mines.

Despite these promising early results, Corale Brierley, an engineer who has worked on metal bioleaching systems since the 1970s, questions whether companies like Endolith that add additional microbes to ore will successfully translate their processes to commercial scales. “What guarantees are you going to give the company that those organisms will actually grow?” Brierley asks.

Big mining firms that have already optimized every hose, nut, and bolt in their process won’t be easy to convince either, says Diana Rasner, an analyst covering mining technology for the research firm Cleantech Group. 

“They are acutely aware of what it takes to scale these technologies because they know the industry,” she says. “They’ll be your biggest supporters, but they’re going to be your biggest critics.”

In addition to technical challenges, Rasner points out that venture-capital-backed biotechnology startups will struggle to deliver the quick returns their investors seek. Mining companies want lots of data before adopting a new process, which could take years of testing to compile. “This is not software,” Rasner says.  

Nuton, a subsidiary of the mining giant Rio Tinto, is a good example. The company has been working for decades on a copper bioleaching process that uses a blend of archaea and bacteria strains, plus some chemical additives. But it started demonstrating the technology only late last year, at a mine in Arizona. 

Nuton is testing an improved bioleaching process at Gunnison Copper’s Johnson Camp mine in Arizona. (Image credit: Nuton)

While Endolith and Nuton use naturally occurring microbes, the startup 1849 is hoping to achieve a bigger performance boost by genetically engineering microbes.

“You can do what mining companies have traditionally done,” says CEO Jai Padmakumar. “Or you can try to take the moonshot bet and engineer them. If you get that, you have a huge win.”

Genetic engineering would allow 1849 to tailor its microbes to the specific challenges facing a customer. But engineering organisms can also make them harder to grow, warns Buz Barstow, a Cornell University microbiologist who studies applications for biotechnology in mining.

Other companies are trying to avoid that trade-off by applying the products of microbial fermentation, rather than live organisms. Alta Resource Technologies, which closed a $28 million investment round in December, is engineering microbes that make proteins capable of extracting and separating rare earth elements. Similarly, the startup REEgen, based in Ithaca, New York, relies on the organic acids produced by an engineered strain of Gluconobacter oxydans to extract rare earth elements from ore and from waste materials like metal recycling slag, coal ash, or old electronics. “The microbes are the manufacturing,” says CEO Alexa Schmitz, an alumna of Barstow’s lab.

To make a dent in the growing demand for metal, this new wave of biotechnologies will have to go beyond copper and gold, says Barstow. In 2024, he started a project to map out genes that could be useful for extracting and separating a wider range of metals. Even with the challenges ahead, he says, biotechnology has the potential to transform mining the way fracking changed natural gas. “Biomining is one of these areas where the need … is big enough,” he says. 

The challenge will be moving fast enough to keep up with growing demand.

The Download: squeezing more metal out of aging mines, and AI’s truth crisis

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Microbes could extract the metal needed for cleantech

In a pine forest on Michigan’s Upper Peninsula, the only active nickel mine in the US is nearing the end of its life. At a time when carmakers want the metal for electric-vehicle batteries, nickel concentration at Eagle Mine is falling and could soon drop too low to warrant digging.

Demand for nickel, copper, and rare earth elements is rapidly increasing amid the explosive growth of metal-intensive data centers, electric cars, and renewable energy projects. But producing these metals is becoming harder and more expensive because miners have already exploited the best resources. Here’s how biotechnology could help.

—Matt Blois

What we’ve been getting wrong about AI’s truth crisis

—James O’Donnell

What would it take to convince you that the era of truth decay we were long warned about—where AI content dupes us, shapes our beliefs even when we catch the lie, and erodes societal trust in the process—is now here?

A story I published last week pushed me over the edge. And it also made me realize that the tools we were sold as a cure for this crisis are failing miserably. Read the full story.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

TR10: Hyperscale AI data centers

In sprawling stretches of farmland and industrial parks, supersized buildings packed with racks of computers are springing up to fuel the AI race.

These engineering marvels are a new species of infrastructure: supercomputers designed to train and run large language models at mind-­bending scale, complete with their own specialized chips, cooling systems, and even energy supplies. But all that impressive computing power comes at a cost.

Read why we’ve named hyperscale AI data centers one of our 10 Breakthrough Technologies this year, and check out the rest of the list.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Elon Musk’s SpaceX has acquired xAI
The deal values the combined companies at a cool $1.25 trillion. (WSJ $)
+ It also paves the way for SpaceX to offer an IPO later this year. (WP $)
+ Meanwhile, OpenAI has accused xAI of destroying legal evidence. (Bloomberg $)

2 NASA has delayed the launch of Artemis II
It’s been pushed back to March due to the discovery of a hydrogen leak. (Ars Technica)
+ The rocket’s predecessor was also plagued by fuel leaks. (Scientific American)

3 Russia is hiring a guerrilla youth army online
They’re committing arson and spying on targets across Europe. (New Yorker $)

4 Grok is still generating undressed images of men
Weeks after the backlash over it doing the same to women. (The Verge)
+ How Grok descended into becoming a porn generator. (WP $)
+ Inside the marketplace powering bespoke AI deepfakes of real women. (MIT Technology Review)

5 OpenAI is searching for alternatives to Nvidia’s chips
It’s reported to be unhappy about the speed at which Nvidia’s chips power ChatGPT. (Reuters)

6 The latest attempt to study a notoriously unstable glacier has failed
Scientists lost their equipment within Antarctica’s Thwaites Glacier over the weekend. (NYT $)
+ Inside a new quest to save the “doomsday glacier” (MIT Technology Review)

7 The world is trying to wean itself off American technology
Governments are growing increasingly uneasy about their reliance on the US. (Rest of World)

8 AI’s sloppy writing is driving demand for real human writers
Long may it continue. (Insider $)

9 This female-dominated fitness community hates Mark Zuckerberg
His decision to shut down three VR studios means their days of playing their favorite workout game are numbered. (The Verge)
+ Welcome to the AI gym staffed by virtual trainers. (MIT Technology Review)

10 This cemetery has an eco-friendly solution for its overcrowding problem
If you’re okay with your loved one becoming gardening soil, that is. (WSJ $)
+ Why America is embracing the right to die now. (Economist $)
+ What happens when you donate your body to science. (MIT Technology Review)

Quote of the day

“In the long term, space-based AI is obviously the only way to scale…I mean, space is called ‘space’ for a reason.”

—Elon Musk explains his rationale for combining SpaceX with xAI in a blog post.

One more thing

On the ground in Ukraine’s largest Starlink repair shop

Starlink is absolutely critical to Ukraine’s ability to continue in the fight against Russia. It’s how troops in battle zones stay connected with faraway HQs; it’s how many of the drones essential to Ukraine’s survival hit their targets; it’s even how soldiers stay in touch with spouses and children back home.

However, Donald Trump’s fickle foreign policy and reports suggesting Elon Musk might remove Ukraine’s access to the services have cast the technology’s future in the country into doubt.

For now Starlink access largely comes down to the unofficial community of users and engineers, including the expert “Dr. Starlink”—famous for his creative ways of customizing the systems—who have kept Ukraine in the fight, both on and off the front line. He gave MIT Technology Review exclusive access to his unofficial Starlink repair workshop in the city of Lviv. Read the full story.

—Charlie Metcalfe

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The Norwegian countryside sure looks beautiful.
+ Quick—it’s time to visit these food destinations before the TikTok hordes descend.
+ Rest in power Catherine O’Hara, our favorite comedy queen.
+ Take some time out of your busy day to read a potted history of boats 🚣

Why Buyers Buy: Books for Marketers

Whether you’re looking for basic principles, deep dives, or inside info from top marketers, these hand-picked books will help you understand what motivates today’s buyers and drives marketing results.

Applied Consumer Psychology: How to Use Psychological Insights in Marketing


by Gareth J. Harvey

This encyclopedic text by a leading executive and former academic addresses consumer attention, motivation, and personality in marketing and copywriting. Reviewers call the book “a must-read for anyone serious about marketing,” saying it deserves “a permanent place on every marketer’s bookshelf” and offers “practical insights that marketers can apply immediately.”

Click Here: The Art and Science of Digital Marketing and Advertising


by Alex Schultz

Schultz is Meta’s chief marketing officer and V.P. of analytics, and a growth consultant. He writes candidly, asserting that “tools evolve, but principles are timeless,” viewing push notifications as an evolution of direct mail. In “Click Here,” he aims to provide a comprehensive guide to marketing for the internet age.

Hacking the Human Mind: The Behavioral Science Secrets Behind 17 of the World’s Best Brands


by Michael Aaron Flicker and Richard Shotton

The authors, consultants to prominent global brands, offer a behind-the-scenes look at the behavioral science techniques used by Apple, Dyson, and Starbucks.

Own The Insight: Turn First-Party Data Into Revenue Faster Than Your Competition Can React


by Lisa L. Fagen

With increasing advertising costs and stricter privacy regulations, capturing your own customer data is more important than ever. Fagen offers practical steps for connecting offline events with online behavior and sales — with examples, checklists, and frameworks.

Emotional Targeting: Win Hearts. Boost Sales. Own the Market


by Talia Wolf

Wolf, a conversion optimization specialist and consultant to top B2B brands, says buyers’ emotions drive sales, not product features and pricing alone. She shares her “Emotional Targeting Framework” for top marketing performance.

Marketing Psychology Decoded


by Suvodip Sen

Sen brings international business, consulting, and teaching experience to “Marketing Psychology Decoded.” The book covers core concepts such as consumer motivation, perception, buying decisions, and post-purchase behavior, and includes QR-code–linked videos.

Consumer Behavior Essentials You Always Wanted To Know


by Pablo Ibarreche

For 25 years, Ibarreche held brand management roles at Procter & Gamble and AIG. He is now a professor of international marketing at The University of CEMA in Argentina. This self-learning guide focuses on segmentation, tribal marketing, and consumer insights — illustrated with real-world examples, templates, and quizzes.

Science Not Sorcery: Behavioral Economics for Marketers


by Rebecca L. Sullivan

Sullivan practiced marketing for 15 years in the agency space. She now teaches consumer insights at Michigan State University’s Broad College of Business. This practical guide explains how neuroscience, psychology, and behavioral economics impact marketing results.

Hoodwinked: How Marketers Use the Same Tactics as Cults


by Mara Einstein, PhD

Einstein is a professor of media studies at City University of New York and a former marketing executive at MTV. She is now a critic of modern marketing techniques. In this book, she examines how marketers manipulate consumers.

Google’s Crawl Team Filed Bugs Against WordPress Plugins via @sejournal, @MattGSouthern

Google’s crawl team has been filing bugs directly against WordPress plugins that waste crawl budget at scale.

Gary Illyes, Analyst at Google, shared the details on the latest Search Off the Record podcast. His team filed an issue against WooCommerce after identifying its add-to-cart URL parameters as a top source of crawl waste. WooCommerce picked up the bug and fixed it quickly.

Not every plugin developer has been as responsive. An issue filed against a separate action-parameter plugin is still sitting unclaimed. And Google says its outreach to the developer of a commercial calendar plugin that generates infinite URL paths fell on deaf ears.

What Google Found

The details come from Google’s internal year-end crawl issue report, which Illyes reviewed during the podcast with fellow Google Search Relations team member Martin Splitt.

Action parameters accounted for roughly 25% of all crawl issues reported in 2025. Only faceted navigation ranked higher, at 50%. Together, those two categories represent about three-quarters of every crawl issue Google flagged last year.

The problem with action parameters is that each one creates what appears to be a new URL by adding text like ?add_to_cart=true. Parameters can stack, doubling or tripling the crawlable URL space on a site.

Illyes said these parameters are often injected by CMS plugins rather than built intentionally by site owners.

The WooCommerce Fix

Google’s crawl team filed a bug report against the plugin, flagging the add-to-cart parameter behavior as a source of crawl waste affecting sites at scale.

Illyes describes how they identified the issue:

“So we would try to dig into like where are these coming from and then sometimes you can identify that perhaps these action parameters are coming from WordPress plug-ins because WordPress is quite a popular CMS content management system. And then you would find that yes, these plugins are the ones that add to cart and add to wish list.

“And then what you would do if you were a Gary is to try to see if they are open source in the sense that they have a repository where you can report bugs and issues and in both of these cases the answer was yes. So we would file issues against these uh plugins.”

WooCommerce responded and shipped a fix. Illyes noted the turnaround was fast, but other plugin developers with similar issues haven’t responded. Illyes didn’t name the other plugins.

He added:

“What I really, really loved is that the good folks at WooCommerce almost immediately picked up the issue and they solved it.”

Why This Matters

This is the same URL parameter problem Illyes has warned about before and has continued to flag. Google went on to formalize its faceted navigation guidance into official documentation and revise its URL parameter best practices.

The data shows those warnings and documentation updates didn’t solve the problem because the same issues still dominate crawl reports.

The crawl waste is often baked into the plugin layer. That creates a real bind for websites with ecommerce plugins. Your crawl problems may not be your fault, but they’re still your responsibility to manage.

Illyes said Googlebot can’t determine whether a URL space is useful “unless it crawled a large chunk of that URL space.” By the time you notice the server strain, the damage is already happening.

Google consistently recommends robots.txt because blocking parameter URLs proactively is more effective than waiting for symptoms; a hedged example of such rules is sketched below.
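
As a hedged illustration, rules along these lines block common action-parameter URLs at the crawl level. The exact parameter names vary by plugin, so verify them against your own log files and test in Search Console before deploying.

```
# Illustrative robots.txt rules for action-parameter URLs.
# Confirm the exact parameter names your plugins generate before using.
User-agent: *
Disallow: /*?*add-to-cart=
Disallow: /*?*add_to_cart=
Disallow: /*?*add-to-wishlist=
```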

Looking Ahead

Google filing bugs against open-source plugins could help reduce crawl waste at the source. The full podcast episode with Illyes and Splitt is available with a transcript.

Google Updates Googlebot File Size Limit Docs via @sejournal, @MattGSouthern

Google updated its Googlebot documentation to clarify information about file size limits.

The change involves moving information about default file size limits from the Googlebot page to Google’s broader crawler documentation. Google also updated the Googlebot page to be more specific about Googlebot’s own limits.

What’s New

Google’s documentation changelog describes the update as a two-part clarification.

The default file size limits that previously lived on the Googlebot page now appear in the crawler documentation. Google said the original location wasn’t the most logical place because the limits apply to all of Google’s crawlers and fetchers, not just Googlebot.

With the defaults now housed in the crawler documentation, Google updated the Googlebot page to describe Googlebot’s specific file size limits more precisely.

The crawling infrastructure docs list a 15 MB default for Google’s crawlers and fetchers, while the Googlebot page now lists 2 MB for supported file types and 64 MB for PDFs when crawling for Google Search.

The crawler overview describes a default limit across Google’s crawling infrastructure, while the Googlebot page describes Google Search–specific limits for Googlebot. Each resource referenced in the HTML, such as CSS and JavaScript, is fetched separately.

Why This Matters

This fits a pattern Google has been running since late 2025. In November, Google migrated its core crawling documentation to a standalone site, separating it from Search Central. The reasoning was that Google’s crawling infrastructure serves products beyond Search, including Shopping, News, Gemini, and AdSense.

In December, more documentation followed, including faceted navigation guidance and crawl budget optimization.

The latest update continues that reorganization. The 15 MB file size limit was first documented in 2022, when Google added it to the Googlebot help page. Google’s John Mueller confirmed at the time that the limit wasn’t new; it had been in effect for years. Google was just putting it on the record.

If you manage crawl budgets or troubleshoot indexing on content-heavy pages, note that Google’s docs now describe the limits differently depending on where you look.

The crawling infrastructure overview lists 15 MB as the default for all crawlers and fetchers. The Googlebot page lists 2 MB for HTML and supported text-based files, and 64 MB for PDFs. Google’s changelog does not explain how these figures relate to one another.

Default limits now live in the crawler overview documentation, while Googlebot-specific limits are on the Googlebot page.
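
If you want a quick sanity check on whether a heavy page is approaching these limits, a small sketch like the one below works; the URL is a placeholder, and the 2 MB threshold is the HTML figure cited above. Remember that CSS, JavaScript, and other referenced resources are fetched and measured separately.

```python
import requests

URL = "https://example.com/very-long-page"  # placeholder URL
HTML_LIMIT_BYTES = 2 * 1024 * 1024          # 2 MB HTML limit cited above

resp = requests.get(URL, timeout=30)
size = len(resp.content)  # size of the HTML document only, not linked assets
print(f"{URL}: {size / 1024:.0f} KB ({size / HTML_LIMIT_BYTES:.0%} of the 2 MB limit)")
```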

Looking Ahead

Google’s documentation reorganization suggests there will likely be more updates to the crawling infrastructure site in the coming months. By separating crawler-wide defaults from product-specific documentation, Google can more easily document new crawlers and fetchers as they are introduced.

GSC Data Is 75% Incomplete via @sejournal, @Kevin_Indig


My findings this week show Google Search Console data is about 75% incomplete, making single-source GSC decisions dangerously unreliable.

Google filters 3/4 of search impressions for “privacy,” while bot inflation and AIOs corrupt what remains. (Image Credit: Kevin Indig)

1. GSC Used To Be Ground Truth

Search Console data used to be the most accurate representation of what happens in the search results. But privacy sampling, bot-inflated impressions, and AI Overview (AIO) distortion suck the reliability out of the data.

Without understanding how your data is filtered and skewed, you risk drawing the wrong conclusions from GSC data.

SEO data has been on a long path toward becoming less reliable, from Google killing the keyword referrer to excluding critical SERP features from performance results. But three key events over the last 12 months topped it off:

  • January 2025: Google deploys “SearchGuard,” requiring JavaScript and (sophisticated) CAPTCHA for anyone looking at search results (turns out, Google uses a lot of advanced signals to differentiate humans from scrapers).
  • March 2025: Google significantly amps up the number of AI Overviews in the SERPs. We’re seeing a significant spike in impressions and drop in clicks.
  • September 2025: Google removes num=100 parameter, which SERP scrapers use to parse the search results. The impression spike normalizes, clicks stay down.

On one hand, Google took measures to clean up GSC data. On the other hand, the data still leaves us with more open questions than answers.

2. Privacy Sampling Hides 75% Of Queries

Google filters out a significant share of impressions (and clicks) for “privacy” reasons. One year ago, Patrick Stox analyzed a large dataset and concluded that almost 50% of the data is filtered out.

I repeated the analysis (10 sites in B2B out of the USA) across ~4 million clicks and ~450 million impressions.

Methodology:

  • Google Search Console (GSC) provides data through two API endpoints that reveal its filtering behavior. The aggregate query (no dimensions) returns total clicks and impressions, including all data. The query-level query (with “query” dimension) returns only queries meeting Google’s privacy threshold.
  • By comparing these two numbers, you can calculate the filter rate.
  • For example, if aggregate data shows 4,205 clicks but query-level data shows only 1,937 visible clicks, Google filtered 2,268 clicks (53.94%). (A minimal code sketch of this comparison follows this list.)
  • I analyzed 10 B2B SaaS sites (~4 million clicks, ~450 million impressions), comparing 30-day, 90-day, and 12-month periods against the same analysis from 12 months prior.
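
Here is a minimal sketch of that two-endpoint comparison using the Search Console API Python client. The property URL, date range, and service-account file path are placeholders, and it assumes you have a service-account key with read access to the property.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Placeholders: swap in your own key file, property, and dates.
creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
gsc = build("searchconsole", "v1", credentials=creds)

SITE = "sc-domain:example.com"
DATES = {"startDate": "2025-01-01", "endDate": "2025-01-31"}

# 1) Aggregate totals: no dimensions, so nothing is dropped for privacy.
totals = gsc.searchanalytics().query(siteUrl=SITE, body=DATES).execute()["rows"][0]

# 2) Query-level totals: only queries above the privacy threshold come back,
#    so sum the visible rows (paginating past the 25,000-row limit).
visible_clicks = visible_impressions = 0
start_row = 0
while True:
    body = {**DATES, "dimensions": ["query"], "rowLimit": 25000, "startRow": start_row}
    rows = gsc.searchanalytics().query(siteUrl=SITE, body=body).execute().get("rows", [])
    if not rows:
        break
    visible_clicks += sum(r["clicks"] for r in rows)
    visible_impressions += sum(r["impressions"] for r in rows)
    start_row += len(rows)

# 3) The gap between the two is the privacy-filtered share.
click_filter_rate = 1 - visible_clicks / totals["clicks"]
impression_filter_rate = 1 - visible_impressions / totals["impressions"]
print(f"Filtered clicks: {click_filter_rate:.1%}")
print(f"Filtered impressions: {impression_filter_rate:.1%}")
```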

My conclusion:

1. Google filters out ~75% of impressions.

Image Credit: Kevin Indig
  • The filter rate on impressions is incredibly high, with three-fourths filtered for privacy.
  • 12 months ago, the rate was only 2 percentage points higher.
  • The range I observed went from 59.3% all the way up to 93.6%.
Image Credit: Kevin Indig

2. Google filters out ~38% of clicks, roughly 5 percentage points less than 12 months ago.

Image Credit: Kevin Indig
  • Click filtering is not something we talk about a lot, but it seems Google leaves more than one-third of all clicks that happened unreported.
  • 12 months ago, Google filtered out over 40% of clicks.
  • The range of filtering spans from 6.7% to 88.5%!
Image Credit: Kevin Indig

The good news is that the filter rate has gone slightly down over the last 12 months, probably as a result of fewer “bot impressions.”

The bad news: The core problem persists. Even with these improvements, 38% click-filtering and 75% impression-filtering remain catastrophically high. A 5-percentage-point improvement doesn’t make single-source GSC decisions reliable when three-fourths of your impression data is missing.

3. 2025 Impressions Are Highly Inflated

Image Credit: Kevin Indig

The last 12 months show a rollercoaster of GSC data:

  • In March 2025, Google intensified the rollout of AIOs and showed 58% more of them for the sites I analyzed.
  • In July, impressions grew by 25.3% and by another 54.6% in August. SERP scrapers somehow found a way around SearchGuard (the protection “bot” that Google uses to prevent SERP scrapers) and caused “bot impressions” to capture AIOs.
  • In September, Google removed the num=100 parameter, which caused impressions to drop by 30.6%.
Image Credit: Kevin Indig

Fast forward to today:

  • Clicks decreased by 56.6% since March 2025.
  • Impressions normalized (down -9.2%).
  • AIOs reduced by 31.3%.

I cannot arrive at a causal estimate of how many clicks AIOs remove, but the correlation is strong: 0.608. We know AIOs reduce clicks (it makes logical sense), but we don’t know exactly how much. To figure that out, I’d have to measure CTR for queries before and after an AIO shows up.

But how do you know click decline is due to an AIO and not just poor content quality or content decay?

Look for temporal correlation:

  • Track when your clicks dropped against Google’s AIO rollout timeline (March 2025 spike). Poor content quality shows gradual decline; AIO impact is sharp and query-specific.
  • Cross-reference with position data. If rankings hold steady while clicks drop, that signals AIO cannibalization. Check if the affected queries are informational (AIO-prone) vs. transactional (AIO-resistant). The 0.608 correlation coefficient between AIO presence and click reduction supports this diagnostic approach. (A minimal code sketch of this check follows this list.)
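
Here is a minimal sketch of that rankings-steady-but-clicks-down check, assuming monthly query-level GSC exports in a CSV with query, month, clicks, impressions, and position columns. The file name, column names, and the 30%/one-position thresholds are assumptions you should adjust.

```python
import pandas as pd

df = pd.read_csv("gsc_monthly_queries.csv", parse_dates=["month"])
cutoff = pd.Timestamp("2025-03-01")  # Google's AIO ramp-up

def summarize(frame):
    # Aggregate each query's performance for the period.
    g = frame.groupby("query").agg(
        clicks=("clicks", "sum"),
        impressions=("impressions", "sum"),
        position=("position", "mean"),
    )
    g["ctr"] = g["clicks"] / g["impressions"]
    return g

before = summarize(df[df["month"] < cutoff])
after = summarize(df[df["month"] >= cutoff])
joined = before.join(after, lsuffix="_before", rsuffix="_after", how="inner")

# AIO-cannibalization signature: CTR drops sharply while average position barely moves.
suspects = joined[
    (joined["ctr_after"] < joined["ctr_before"] * 0.7)              # CTR down 30%+
    & ((joined["position_after"] - joined["position_before"]) < 1)  # ranking roughly steady
]
print(suspects.sort_values("impressions_after", ascending=False).head(25))
```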

4. Bot Impressions Are Rising

Image Credit: Kevin Indig

I have reason to believe that SERP scrapers are coming back. We can estimate the share of impressions likely caused by bots by filtering GSC data for queries that contain more than 10 words and have at least two impressions. The chance that such a long query (prompt) is used by a human twice is close to zero.

The logic of bot impressions:

  • Hypothesis: Humans rarely search for the exact same 5+ word query twice in a short window.
  • Filter: Identify queries with 10+ words that have >1 impression but zero clicks.
  • Caveat: This method may capture some legitimate zero-click queries, but provides a directional estimate of bot activity. (A minimal filtering sketch follows this list.)
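
A minimal version of that filter on a query-level GSC export might look like this; the file name and column names are assumptions, and remember it only sees the queries Google actually reports.

```python
import pandas as pd

# One row per query from the GSC API or UI export: query, clicks, impressions.
df = pd.read_csv("gsc_queries_last_30_days.csv")

word_count = df["query"].str.split().str.len()

# Flag likely bot queries: 10+ words, repeated impressions, and no clicks.
bot_mask = (word_count >= 10) & (df["impressions"] >= 2) & (df["clicks"] == 0)

bot_share = df.loc[bot_mask, "impressions"].sum() / df["impressions"].sum()
print(f"Estimated bot impression share: {bot_share:.1%}")
print(df.loc[bot_mask].sort_values("impressions", ascending=False).head(20))
```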

I compared those queries over the last 30, 90, and 180 days:

  • Queries with 10+ words and more than one impression grew by 25% over the last 180 days.
  • The range of bot impressions spans from 0.2% to 6.5% (last 30 days).

Here’s what you can anticipate as a “normal” percentage of bot impressions for a typical SaaS site:

  • Based on the 10-site B2B dataset, bot impressions range from 0.2% to 6.5% over 30 days, with queries containing 10+ words and 2+ impressions but 0 clicks.
  • For SaaS specifically, expect a 1-3% baseline for bot impressions. Sites with extensive documentation, technical guides, or programmatic SEO pages trend higher (4-6%).
  • The 25% growth over 180 days suggests scrapers are adapting post-SearchGuard. Monitor your percentile position within this range more than the absolute number.

Bot impressions do not affect your actual rankings – just your reporting by inflating impression counts. The practical impact? Misallocated resources if you optimize for inflated impression queries that humans never search for.

5. The Measurement Layer Is Broken

Single-source decisions based on GSC data alone become dangerous:

  • Three-fourths of impressions are filtered.
  • Bot impressions generate up to 6.5% of data.
  • AIOs reduce clicks by over 50%.
  • User behavior is structurally changing.

Your opportunity is in the methodology: Teams that build robust measurement frameworks (sampling rate scripts, bot-share calculations, multi-source triangulation) have a competitive advantage.


Featured Image: Paulo Bobita/Search Engine Journal

Why SEO Roadmaps Break In January (And How To Build Ones That Survive The Year) via @sejournal, @cshel

SEO roadmaps have a lot in common with New Year’s resolutions: They’re created with optimism, backed by sincere intent, and abandoned far sooner than anyone wants to admit.

The difference is that most people at least make it to Valentine’s Day before quietly deciding that daily workouts or dry January were an ambitious, yet misguided, experiment. SEO roadmaps often start unraveling while Punxsutawney Phil is still deep in REM sleep.

By the third or fourth week of the year, teams are already making “temporary” adjustments. A content cadence slips here. A technical initiative gets deprioritized there. A dependency turns out to be more complicated than anticipated, etc. None of this is framed as failure, naturally, but the original plan is already being renegotiated.

This doesn’t happen because SEO teams are bad at planning. It happens because annual SEO roadmaps are still built as if search were a stable environment with predictable inputs and outcomes.

(Narrator: Search is not, and has never been, a stable environment with predictable inputs or outcomes.)

In January, just like that diet plan, the SEO roadmap looks entirely doable. By February, you’re hiding in a dark pantry with a sleeve of Thin Mints, and the roadmap is already in tatters.

Here’s why those plans break so quickly and how to replace them with a planning model that holds up once the year actually starts moving.

The January Planning Trap

Annual SEO roadmaps are appealing because they feel responsible.

  • They give leadership something concrete to approve.
  • They make resourcing look predictable.
  • They suggest that search performance can be engineered in advance.

Except SEO doesn’t operate in a static system, and most roadmaps quietly assume that it does.

By the time Q1 is halfway over, teams are already reacting instead of executing. The plan didn’t fail because it was poorly constructed. It failed because it was built on outdated assumptions about how search works now.

Three Assumptions That Break By February

1. Algorithms Behave Predictably Over A 12-Month Period

Most annual roadmaps assume that major algorithm shifts are rare, isolated events.

That’s no longer true.

Search systems are now updated continuously. Ranking behavior, SERP layouts, AI integrations, and retrieval logic evolve incrementally – often without a single, named “update” to react to.

A roadmap that assumes stability for even one full quarter is already fragile.

If your plan depends on a fixed set of ranking conditions remaining intact until December, it’s already obsolete.

2. Technical Debt Stays Static Unless Something “Breaks”

January plans usually account for new technical work like migrations, performance improvements, structured data, and internal linking projects.

What they don’t account for is technical debt accumulation.

Every CMS update, plugin change, template tweak, tracking script, and marketing experiment adds friction. Even well-maintained sites slowly degrade over time.

Most SEO roadmaps treat technical SEO as a project with an end date. In reality, it’s a system that requires continuous maintenance.

By February, that invisible debt starts to surface – crawl inefficiencies, index bloat, rendering issues, or performance regressions – none of which were in the original plan.

3. Content Velocity Produces Linear Returns

Many annual SEO plans assume that content output scales predictably:

More content = more rankings = more traffic

That relationship hasn’t been linear for a long time.

Content saturation, intent overlap, internal competition, and AI-driven summaries all flatten returns. Publishing at the same pace doesn’t guarantee the same impact quarter over quarter.

By February, teams are already seeing diminishing returns from “planned” content and scrambling to justify why performance isn’t tracking to projections.

What Modern SEO Roadmap Planning Actually Looks Like

Roadmaps don’t need to disappear, but they do need to change shape.

Instead of a rigid annual plan, resilient SEO teams operate on a quarterly diagnostic model, one that assumes volatility and builds flexibility into execution.

The goal isn’t to abandon strategy. It’s to stop pretending that January can predict December.

A resilient model includes:

  • Quarterly diagnostic checkpoints, not just quarterly goals.
  • Rolling prioritization, based on what’s actually happening in search.
  • Protected capacity for unplanned technical or algorithmic responses.
  • Outcome-based planning, not task-based planning.

This shifts SEO from “deliverables by date” to “decisions based on signals.”

The Quarterly Diagnostic Framework

Instead of locking a yearlong roadmap, break planning into repeatable quarterly cycles:

Step 1: Assess (What Changed?)

At the start of each quarter, and ideally again mid-quarter, evaluate:

  • Crawl and indexation patterns.
  • Ranking volatility across key templates.
  • Performance deltas by intent, not just keywords.
  • Content cannibalization and decay.
  • Technical regressions or new constraints.

This is not a full audit. It’s a focused diagnostic designed to surface friction early.

Step 2: Diagnose (Why Did It Change?)

This is where most roadmaps fall apart: They track metrics but skip interpretation.

Diagnosis means asking:

  • Is this decline structural, algorithmic, or competitive?
  • Did we introduce friction, or did the ecosystem change around us?
  • Are we seeing demand shifts or retrieval shifts?

Without this layer, teams chase symptoms instead of causes.

Step 3: Fix (What Actually Matters Now?)

Only after diagnosis should priorities shift. That shift may involve pausing content production, redirecting engineering resources, or deliberately doing nothing while volatility settles. Resilient planning accepts that the “right” work in February may bear little resemblance to what was approved in January.

How To Audit Mid-Quarter Without Panicking

Mid-quarter reviews don’t mean throwing out the plan. They mean stress-testing it.

A healthy mid-quarter SEO check should answer three questions:

  1. What assumptions no longer hold?
  2. What work is no longer high-leverage?
  3. What risk is emerging that wasn’t visible before?

If the answer to any of those changes execution, that’s not failure. It’s adaptive planning.

The teams that struggle are the ones afraid to admit the plan needs to change.

The Bottom Line

The acceleration introduced by AI-driven retrieval has shortened the gap between planning and obsolescence.

January SEO roadmaps don’t fail because teams lack strategy. They fail because they assume a level of stability that search has not offered in years. If your SEO plan can’t absorb algorithmic shifts, technical debt, and nonlinear content returns, it won’t survive the year. The difference between teams that struggle and teams that adapt is simple: One plans for certainty, the other plans for reality.

The teams that win in search aren’t the ones with the most detailed January roadmap. They’re the ones that can still make good decisions in February.


Featured Image: Anton Vierietin/Shutterstock

WordPress Publishes AI Guidelines To Combat AI Slop via @sejournal, @martinibuster

WordPress published guidelines for using AI to create plugins, themes, documentation, and media assets. The guidelines, organized around five principles, are meant to keep WordPress contributions transparent, GPL-compatible, and human-accountable, while maintaining high quality standards for AI-assisted work.

The new guidelines list the following five principles:

  1. “You are responsible for your contributions (AI can assist, but it isn’t a contributor).
  2. Disclose meaningful AI assistance in your PR description and/or Trac ticket comment.
  3. License compatibility matters: contributions must remain compatible with GPLv2-or-later, including AI-assisted output.
  4. Non-code assets count too (docs, screenshots, images, educational materials).
  5. Quality over volume: avoid low-signal, unverified “AI slop”; reviewers may close or reject work that doesn’t meet the bar.”

Transparency

The purpose of the transparency guidelines is to encourage contributors to disclose that AI was used and how it was used so that reviewers can be aware when evaluating the work.

License Compatibility And Tool Choice

Licensing is a big deal with WordPress because it’s designed to be a fully open source publishing platform under the GPLv2 licensing framework. Everything that’s made for WordPress, including plugins and themes, must also be open source. It’s an essential element of everything created with WordPress.

The guidelines specify that AI cannot be used if the output is not licensable under GPLv2.

It also states:

“Do not use tools whose terms forbid using their output in GPL-licensed projects or impose additional restrictions on redistribution.

Do not rely on tools to “launder” incompatible licenses. If an AI output reproduces non-free or incompatible code, it cannot be included.”

AI Slop

Of course, the guidelines address the issue of AI slop. In this case, AI slop is defined as hallucinated references (such as links or APIs that do not exist), overly complicated code where simpler solutions exist, and GitHub PRs that are generic or do not reflect actual testing or experience.

The AI slop guidelines include recommendations for what is expected from contributors:

“Use AI to draft, then review yourself.

Submit PRs (or patches) that are small, concise and with atomic and well defined commit messages to make reviewing easier.

Run and document real tests.

Link to real Trac tickets, GitHub issues, or documentation that you have verified.”

The guidelines are clear that the WordPress contributors who are responsible for overseeing, reviewing, and deciding whether changes are accepted into a specific part of the project may close or reject contributions that they determine to be AI slop “with little added human insight.”

Takeaways

The new WordPress AI guidelines appear to be about preserving trust in the contribution process as AI becomes more common across development, documentation, and media creation. It in no way discourages the use of AI but rather encourages its use in a responsible manner.

By requiring disclosure, enforcing GPL compatibility, and giving maintainers the authority to reject low-quality submissions, the guidelines set boundaries that protect both the legal integrity of the WordPress project and the time of its reviewers.

Featured Image by Shutterstock/Ivan Moreno sl