Why Google Ads Fails B2B (And How to Fix It)

This post was sponsored by Vehnta. The opinions expressed in this article are the sponsor’s own.

Why isn’t Google Ads working for my B2B marketing campaigns?

How do I improve lead quality in B2B Google Ads campaigns?

What’s the best way to scale Account-Based Marketing (ABM) using Google Ads?

The good news: Google Ads isn’t broken in B2B; it’s just being used wrong.

The platform works brilliantly for consumer brands because their strategies align with consumer behavior, but B2B operates in an entirely different universe with complex buying journeys involving multiple stakeholders.

This guide will help you modify Google Ads to perform better for B2B paid marketing campaigns.

Issue 1: AI Automation Optimizes For The Wrong B2B Objectives

Google’s AI-powered automation is currently your biggest challenge.

Why? The actions that signal customer engagement to Google Ads don’t match how B2B buyers actually behave, so the AI misreads B2B ad performance and optimizes for the wrong outcomes.

For example:

  • Performance Max campaigns optimize for conversion volume rather than opportunity quality, which can double lead volume while halving lead quality.
  • Google Smart Bidding tends to attract users who are likely to take lightweight actions, such as downloads or sign-ups; these actions are unlikely to result in qualified B2B buyers, leading to low-value conversions and wasted spend.

How To Fix Google Ads AI’s Misalignment For B2B PPC

Phase 1: Implement Strategic AI Controls

  1. Disable automatic audience expansion in Search campaigns to maintain targeting precision.
  2. Use Target ROAS instead of Target CPA, setting values based on actual customer lifetime value.
  3. Create separate campaigns for different buying stages with stage-appropriate conversion goals.
  4. Start Performance Max with limited budgets (20-30% of total spend) until optimization stabilizes.
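Step 2 above can be reduced to simple arithmetic. As a hypothetical sketch (the dollar figures, close rate, and function names are illustrative assumptions, not Google Ads defaults), a Target ROAS derived from lifetime value looks like this:

```python
# Hypothetical sketch: deriving a Target ROAS from customer lifetime value.
# All numbers and names here are illustrative assumptions.

def lead_value(ltv: float, close_rate: float) -> float:
    """Expected revenue value of a single lead."""
    return ltv * close_rate

def target_roas(ltv: float, close_rate: float, max_cpl: float) -> float:
    """Target ROAS = expected lead value / max acceptable cost per lead."""
    return lead_value(ltv, close_rate) / max_cpl

# Example: $50,000 LTV, 5% lead-to-customer rate, $500 max cost per lead.
value = lead_value(50_000, 0.05)       # $2,500 expected value per lead
roas = target_roas(50_000, 0.05, 500)  # 5.0, i.e. a 500% Target ROAS
```

The point of anchoring the target in LTV rather than CPA is that the bidding algorithm is then told what a lead is worth, not merely what it costs.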

Phase 2: Configure B2B-Specific Signals

  1. Upload customer lists with consistent firmographic data as audience signals.
  2. Set up similar audiences based on highest-value customers, not highest-converting leads.
  3. Monitor search terms weekly and add negatives aggressively.
  4. Use custom conversion goals weighted toward pipeline contribution, not form submissions.
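Step 4 can be sketched as a simple value map. The stage names and weights below are illustrative assumptions, not a standard; the idea is that the values you report to Google Ads reflect pipeline contribution rather than counting every form submission equally:

```python
# Hypothetical sketch: weighting conversion actions by pipeline contribution.
# Stage names and dollar weights are illustrative assumptions.

PIPELINE_WEIGHTS = {
    "whitepaper_download": 5,     # lightweight action, weak pipeline signal
    "demo_request": 150,          # strong buying signal
    "sales_qualified_lead": 600,  # confirmed pipeline contribution
}

def conversion_value(action: str) -> int:
    """Value to report for a given conversion action (0 if unrecognized)."""
    return PIPELINE_WEIGHTS.get(action, 0)

# Under this weighting a demo request is worth 30x a download,
# steering Smart Bidding toward quality rather than volume.
ratio = conversion_value("demo_request") / conversion_value("whitepaper_download")
```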

The Easy Way

Vehnta accelerates campaign optimization, enabling precise targeting and performance tracking across your entire B2B account list.

With its Similarity feature and AI-powered Keyword & Ad Generator, you can create high-performing, B2B-optimized campaigns in minutes, avoiding wasted spend on low-value conversions.

Insights are available from day one, and campaigns can be optimized manually or with AI. Plus, with seamless Google Ads integration and automated multilingual message diversification at scale, Vehnta lets you go to market faster and more effectively.

What You Get

  • Faster launch cycles.
  • More qualified leads.
  • Better performance.
  • Scalable impact, without the usual manual overhead.

Campaigns are built on intelligent targeting and high-quality inputs, so optimization starts smart and improves from there.

The Result

  • Reduced wasted budget on low-value conversions like downloads or sign-ups.
  • Focused paid ad spend on high-intent, high-fit prospects.

Issue 2: Generic Targeting Wastes Budget On Wrong Audiences

Most B2B campaigns target broad demographics rather than specific firmographics, wasting spend on poor-fit prospects.

Traditional metrics create a “metrics mirage” where campaigns focused on clicks draw unqualified leads instead of high-intent decision-makers.

Additionally, broad messaging often fails to resonate across diverse markets, whereas precise targeting is effective at scale.

One multinational retailer with 500+ locations across four countries cut costs by 60% and tripled engagement by implementing hyper-local, multilingual campaigns tailored to specific regions.

How To Fix PPC Ad Targeting Waste

Phase 1: Implement Firmographic Precision

Phase 2: Configure Account-Level Monitoring

  • Set up cross-domain tracking to monitor multiple touchpoints from the same organization.
  • Use UTM parameters with company identifiers to track organizational buying patterns.
  • Create audiences based on account-level engagement patterns.
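The UTM tagging in the second bullet can be sketched in a few lines. This is a hypothetical convention, not an official scheme: here a company identifier rides along in `utm_content` so account-level engagement can be stitched together in analytics (the parameter choice and the `acct_0042` identifier are illustrative assumptions):

```python
# Hypothetical sketch: tagging landing URLs with a company identifier
# so organizational buying patterns can be tracked. Parameter conventions
# here are illustrative assumptions.
from urllib.parse import urlencode, urlparse, parse_qs

def tag_landing_url(base_url: str, campaign: str, company_id: str) -> str:
    params = {
        "utm_source": "google",
        "utm_medium": "cpc",
        "utm_campaign": campaign,
        "utm_content": company_id,  # carries the target-account identifier
    }
    return f"{base_url}?{urlencode(params)}"

url = tag_landing_url("https://example.com/demo", "abm_q3", "acct_0042")
# Downstream, analytics can recover the account from the query string:
company = parse_qs(urlparse(url).query)["utm_content"][0]
```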

The Easy Way

Vehnta’s Similarity engine leverages a 500M+ company database to identify prospects that match your best customers with surgical precision.

Simply:

  1. Insert one or more existing customers or your Ideal Customer Profile (ICP) into the Similarity Engine.
  2. The Similarity Engine analyzes economic data, industry sectors, and semantic relevance to find similar companies.

This approach makes targeting 10x faster than manual audience research.

Additionally, it provides precision that extends far beyond basic lookalike audiences.

Then, the Search Terms feature provides full visibility into searches performed by your target audience, organized by company and location for actionable insights.

What You Get

  • A radically faster, more precise way to build high-value target lists.
  • Prospect lists that closely mirror your best customers, aligned to your ICP from day one.
  • Full visibility into the actual search behavior of those companies.

The Result

  • Smarter segmentation.
  • Faster activation.
  • Better-performing campaigns fueled by insight, not assumptions.

Issue 3: Marketing/Sales Alignment Problems

B2C metrics fail to capture the complexity of B2B interactions, resulting in a fundamental disconnect between marketing activities and sales outcomes.

Most B2B marketing teams operate under the myth that success requires high lead volumes, but this creates qualification bottlenecks since most B2B sales teams can effectively pursue only a few qualified opportunities simultaneously.

This quality-over-quantity approach delivers results: an enterprise SaaS provider targeting only $1B+ companies achieved 70% cost reduction and 3x engagement by focusing on ultra-precise targeting aligned with sales capacity.

Steps to Fix Marketing/Sales Misalignment

Align Campaigns with Sales Capacity

  • Calculate your sales team’s true capacity for working on qualified opportunities.
  • Set monthly lead generation goals that align with sales capacity, rather than arbitrary growth targets.
  • Develop lead scoring systems that qualify prospects before they reach the sales team.
  • Implement progressive profiling to gather firmographic information during conversion.
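The lead-scoring step above can be sketched as a simple rules model. The thresholds, field names, and target industries below are illustrative assumptions, not a standard; the point is that only leads clearing a firmographic bar reach the sales team:

```python
# Hypothetical lead-scoring sketch: qualify prospects against firmographic
# fit before routing to sales. Weights and thresholds are illustrative.

TARGET_INDUSTRIES = {"manufacturing", "saas", "logistics"}

def score_lead(lead: dict) -> int:
    score = 0
    if lead.get("industry") in TARGET_INDUSTRIES:
        score += 40   # firmographic fit
    if lead.get("employees", 0) >= 200:
        score += 30   # company size fits the ICP
    if lead.get("requested_demo"):
        score += 30   # high-intent action
    return score

def is_sales_ready(lead: dict, threshold: int = 70) -> bool:
    """Only leads at or above the threshold reach the sales team."""
    return score_lead(lead) >= threshold

good = {"industry": "saas", "employees": 500, "requested_demo": False}
weak = {"industry": "retail", "employees": 20, "requested_demo": True}
```

Here `good` scores 70 and is routed to sales, while `weak` scores only 30 despite its demo request, keeping sales capacity focused on fit.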

Optimize for Opportunity Quality

The Easy Way

Vehnta’s Insight Collection provides real-time business intelligence that automatically qualifies prospects, focusing on high-quality opportunities from pre-qualified target companies instead of generating hundreds of unqualified leads monthly.

The VisionSphere function provides a ranked list of companies most interested in your business, calculated by proprietary algorithms reflecting genuine buying interest.

What You Get

  • Consistently higher-quality pipeline, driven by real-time insight into which companies actually show buying intent.
  • Focused efforts on prospects that are already aligned with your offering.
  • A ranked view of interested accounts.
  • Clarity on where to prioritize and when to engage.
  • More efficient sales motions.
  • Stronger conversion rates.
  • Faster deal velocity.

All the intelligence you need, without the noise.

Issue 4: Scalability Of ABM Approaches

The challenge of scaling Account-Based Marketing through Google Ads lies in managing hundreds of target accounts while maintaining surgical precision.

Traditional ABM approaches require significant manual effort and dedicated specialists, making it difficult to achieve scale without compromising quality.

However, this complexity can be overcome: a global manufacturer targeting 4,000+ plant locations reduced spend from $160K to $40K while generating 2.5x more qualified leads through automated ABM systems.

How To Fix Account-Based Marketing (ABM) Scalability

Phase 1: Implement Automated Account Intelligence

  • Use advanced similarity algorithms to identify high-value prospects matching your best customers.
  • Automate audience research and list-building processes that typically consume weeks of specialist time.
  • Deploy AI-powered campaign creation that generates optimized targeting in minutes.
  • Set up automated monitoring across hundreds of target accounts without additional team members.

Phase 2: Create Scalable Precision Systems

  • Build campaigns that automatically diversify messaging across multiple languages.
  • Implement systems providing full visibility into search behavior across target companies.
  • Use proprietary algorithms to rank companies by genuine buying interest.
  • Deploy real-time optimization eliminating manual analysis while maintaining quality.

The Easy Way

Vehnta accelerates campaign execution through a truly scalable ABM approach, enabling accurate targeting and real-time performance tracking across your entire B2B account list.

Integrated AI Campaign Generation allows marketers to generate highly relevant, B2B-tailored campaigns in minutes, not days, while minimizing budget waste on low-intent traffic. From day one, teams gain access to actionable insights and can fine-tune performance manually or through automated optimization.

Thanks to seamless Google Ads integration and automated multilingual message diversification at scale, Vehnta eliminates the operational friction that often stalls ABM at the execution phase.

What you get: ABM that finally matches the speed and scale of your growth ambitions, without the typical overhead. Campaigns go live faster, reach the right accounts with precision, and continuously improve through data-driven optimization. Marketing teams save time, reduce costs, and drive more qualified pipeline, while maintaining control and strategic clarity. The complexity is gone; the impact remains.

The Strategic Transformation: From Volume to Value

The transformation from failing to succeeding with B2B Google Ads requires fundamentally rethinking how paid search fits into complex, multi-stakeholder B2B sales processes. Companies achieving breakthrough results abandon volume-based B2C tactics for precision-focused, account-based strategies that create budget efficiency and market dominance within targeted segments.

The competitive opportunity is significant: while competitors chase high-volume keywords and vanity metrics, strategic B2B marketers focus on qualified accounts and pipeline impact using advanced targeting intelligence and automated optimization systems.

Ready to transform your B2B Google Ads approach?

Discover how Vehnta works and achieve precision at scale—cut costs, improve targeting, and align every campaign with how your customers actually buy.

Book a demo: boost leads, cut costs.

Image Credits

Featured Image: Image by Vehnta. Used with permission.

The latest threat from the rise of Chinese manufacturing

The findings a decade ago were, well, shocking. Mainstream economists had long argued that free trade was overall a good thing; though there might be some winners and losers, it would generally bring lower prices and widespread prosperity. Then, in 2013, a trio of academic researchers showed convincing evidence that increased trade with China beginning in the early 2000s and the resulting flood of cheap imports had been an unmitigated disaster for many US communities, destroying their manufacturing lifeblood.

The results of what in 2016 they called the “China shock” were gut-wrenching: the loss of 1 million US manufacturing jobs and 2.4 million jobs in total by 2011. Worse, these losses were heavily concentrated in what the economists called “trade-exposed” towns and cities (think furniture makers in North Carolina).

If in retrospect all that seems obvious, it’s only because the research by David Autor, an MIT labor economist, and his colleagues has become an accepted, albeit often distorted, political narrative these days: China destroyed all our manufacturing jobs! Though the nuances of the research are often ignored, the results help explain at least some of today’s political unrest. It’s reflected in rising calls for US protectionism, President Trump’s broad tariffs on imported goods, and nostalgia for the lost days of domestic manufacturing glory.

The impacts of the original China shock still scar much of the country. But Autor is now concerned about what he considers a far more urgent problem—what some are calling China shock 2.0. The US, he warns, is in danger of losing the next great manufacturing battle, this time over advanced technologies to make cars and planes as well as those enabling AI, quantum computing, and fusion energy.

Recently, I asked Autor about the lingering impacts of the China shock and the lessons it holds for today’s manufacturing challenges.

How are the impacts of the China shock still playing out?

I have a recent paper looking at 20 years of data, from 2000 to 2019. We tried to ask two related questions. One, if you looked at the places that were most exposed, how have they adjusted? And then if you look to the people who are most exposed, how have they adjusted? And how do those two things relate to one another?

It turns out you get two very different answers. If you look at places that were most exposed, they have been substantially transformed. Manufacturing, once it starts going down, never comes back. But after 2010, these trade-impacted local labor markets staged something of an employment recovery, such that employment has grown faster after 2010 in trade-exposed places than non-trade-exposed places because a lot of people have come in. But these are jobs mostly in low-wage sectors. They’re in K–12 education and non-traded health services. They’re in warehousing and logistics. They’re in hospitality and lodging and recreation, and so they’re lower-wage, non-manufacturing jobs. And they’re done by a really different set of people.

The growth in employment is among women, among native-born Hispanics, among foreign-born adults and a lot of young people. The recovery is staged by a very different group from the white and black men, but especially white men, who were most represented in manufacturing. They have not really participated in this renaissance.

Employment is growing, but are these areas prospering?

They have a lower wage structure: fewer high-wage jobs, more low-wage jobs. So they’re not, if your definition of prospering is rapidly rising incomes. But there’s a lot of employment growth. They’re not like ghost towns. But then if you look at the people who were most concentrated in manufacturing—mostly white, non-college, native-born men—they have not prospered. Most of them have not transitioned from manufacturing to non-manufacturing.

One of the great surprises is everyone had believed that people would pull up stakes and move on. In fact, we find the opposite. People in the most adversely exposed places become less likely to leave. They have become less mobile. The presumption was that they would just relocate to find higher ground. And that is not at all what occurred.

What happened to the total number of manufacturing jobs?

There’s been no rebound. Once they go, they just keep going. If there is going to be new manufacturing, it won’t be in the sectors that were lost to China. Those were basically labor-intensive jobs, the kind of low-tech sectors that we will not be getting back. You know—commodity furniture and assembly of things, shoes, construction material. The US wasn’t going to keep them forever, and once they’re gone, it’s very unlikely to get them back.

I know you’ve written about this, but it’s not hard to draw a connection between the dynamics you’re describing—white-male manufacturing jobs going away and new jobs going to immigrants—and today’s political turmoil.

We have a paper about that called “Importing Political Polarization?”

How big a factor would you say it is in today’s political unrest?

I don’t want to say it’s the factor. The China trade shock was a catalyst, but there were lots of other things that were happening. It would be a vast oversimplification to say that it was the sole cause.

But most people don’t work in manufacturing anymore. Aren’t these impacts that you’re talking about, including the political unrest, disproportionate to the actual number of jobs lost?

These are jobs in places where manufacturing is the anchor activity. Manufacturing is very unevenly distributed. It’s not like grocery stores and hospitals that you find in every county. The impact of the China trade shock on these places was like dropping an economic bomb in the middle of downtown. If the China trade shock cost us a few million jobs, and these were all—you know—people in groceries and retail and gas stations, in hospitality and in trucking, you wouldn’t really notice it that much. We lost lots of clerical workers over the last couple of decades. Nobody talks about a clerical shock. Why not? Well, there was never a clerical capital of America. Clerical workers are everywhere. If they decline, it doesn’t wipe out the entire basis of a place.

So it goes beyond the jobs. These places lost their identity.

Maybe. But it’s also the jobs. Manufacturing offered relatively high pay to non-college workers, especially non-college men. It was an anchor of a way of life.

And we’re still seeing the damage.

Yeah, absolutely. It’s been 20 years. What’s amazing is the degree of stasis among the people who are most exposed—not the places, but the people. Though it’s been 20 years, we’re still feeling the pain and the political impacts from this transition.

Clearly, it has now entered the national psyche. Even if it weren’t true, everyone now believes it to have been a really big deal, and they’re responding to it. It continues to drive policy, political resentments, maybe even out of proportion to its economic significance. It certainly has become mythological.

What worries you now?

We’re in the midst of a totally different competition with China now that’s much, much more important. Now we’re not talking about commodity furniture and tube socks. We’re talking about semiconductors and drones and aviation, electric vehicles, shipping, fusion power, quantum, AI, robotics. These are the sectors where the US still maintains competitiveness, but they’re extremely threatened. China’s capacity for high-tech, low-cost, incredibly fast, innovative manufacturing is just unbelievable. And the Trump administration is basically fighting the war of 20 years ago. The loss of those jobs, you know, was devastating to those places. It was not devastating to the US economy as a whole. If we lose Boeing, GM, and Apple and Intel—and that’s quite possible—then that will be economically devastating.

I think some people are calling it China shock 2.0.

Yeah. And it’s well underway.

When we think about advanced manufacturing and why it’s important, it’s not so much about the number of jobs anymore, is it? Is it more about coming up with the next technologies?

It does create good jobs, but it’s about economic leadership. It’s about innovation. It’s about political leadership, and even standard setting for how the rest of the world works.

Should we just accept that manufacturing as a big source of jobs is in the past and move on?

No. It’s still 12 million jobs, right? Instead of the fantasy that we’re going to go back to 18 million or whatever—we had, what, 17.7 million manufacturing jobs in 1999—we should be worried about the fact that we’re going to end up at 6 million, that we’re going to lose 50% in the next decade. And that’s quite possible. And the Trump administration is doing a lot to help that process of loss along.

We have a labor market of over 160 million people, so it’s like 8% of employment. It’s not zero. So you should not think of it as too small to worry about it. It’s a lot of people; it’s a lot of jobs. But more important, it’s a lot of what has helped this country be a leader. So much innovation happens here, and so many of the things in which other countries are now innovating started here. It’s always been the case that the US tends to innovate in sectors and then lose them after a while and move on to the next thing. But at this point, it’s not clear that we’ll be in the frontier of a lot of these sectors for much longer.

So we want to revive manufacturing, but the right kind—advanced manufacturing?

The notion that we should be assembling iPhones in the United States, which Trump wants, is insane. Nobody wants to do that work. It’s horrible, tedious work. It pays very, very little. And if we actually did it here, it would make the iPhones 20% more expensive or more. Apple may very well decide to pay a 25% tariff rather than make the phones here. If Foxconn started doing iPhone assembly here, people would not be lining up for that job.

But at the same time, we do need new people coming into manufacturing.

But not that manufacturing. Not tedious, mind-numbing, eyestrain-inducing assembly.

We need them to do high-tech work. Manufacturing is a skilled activity. We need to build airplanes better. That takes a ton of expertise. Assembling iPhones does not.

What are your top priorities to head off China shock 2.0?

I would choose sectors that are important, and I would invest in them. I don’t think that tariffs are never justified, or industrial policies are never justified. I just don’t think protecting phone assembly is smart industrial policy. We really need to improve our ability to make semiconductors. I think that’s important. We need to remain competitive in the automobile sector—that’s important. We need to improve aviation and drones. That’s important. We need to invest in fusion power. That’s important. We need to adopt robotics at scale and improve in that sector. That’s important. I could come up with 15 things where I think public money is justified, and I would be willing to tolerate protections for those sectors.

What are the lasting lessons of the China shock and the opening up of global trade in the 2000s?

We did it too fast. We didn’t do enough to support people, and we pretended it wasn’t going on.

When we started the China shock research back around 2011, we really didn’t know what we’d find, and so we were as surprised as anyone. But the work has changed our own way of thinking and, I think, has been constructive—not because it has caused everyone to do the right thing, but it at least caused people to start asking the right questions.

What do the findings tell us about China shock 2.0?

I think the US is handling that challenge badly. The problem is much more serious this time around. The truth is, we have a sense of what the threats are. And yet we’re not seemingly responding in a very constructive way. Although we now know how seriously we should take this, the problem is that it doesn’t seem to be generating very serious policy responses. We’re generating a lot of policy responses—they’re just not serious ones.

The Download: China’s winning at advanced manufacturing, and a potential TikTok sale

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The latest threat from the rise of Chinese manufacturing

In 2013, a trio of academics showed convincing evidence that increased trade with China beginning in the early 2000s and the resulting flood of cheap imports had been an unmitigated disaster for many US communities, destroying their manufacturing lifeblood.

The results of what they called the “China shock” were gut-wrenching: the loss of 1 million US manufacturing jobs and 2.4 million jobs in total by 2011.

If in retrospect all that seems obvious, it’s only because the research by David Autor, an MIT labor economist, and his colleagues has become an accepted, albeit often distorted, political narrative these days: China destroyed all our manufacturing jobs! Though the nuances are often ignored, the results help explain at least some of today’s political unrest. It’s reflected in rising calls for US protectionism, President Trump’s broad tariffs on imported goods, and nostalgia for the lost days of domestic manufacturing glory.

Our editor at large David Rotman recently spoke to Autor about what he considers a far more urgent problem (what some are calling China shock 2.0) and the lessons it holds for today’s manufacturing challenges. Read the full story.

Three things I’m into right now

In each issue of our print magazine, we ask a member of staff to tell us about three things they’re loving at the moment. For our latest edition, which was all about power, I was in the hotseat! Check out my (frankly amazing) recommendations here, and subscribe to catch future editions here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 A new TikTok is coming 
It’s reportedly launching a new version in the US in September ahead of a planned sale. (The Information $)
+ It’ll still require the Chinese government’s say-so. (The Verge)

2 Texas Hill Country was caught off guard by the flash floods
But now people are asking: why? (WP $)
+ America’s National Weather Service has been on the receiving end of heavy cuts. (CNN)
+ Bad weather has interrupted ongoing searches for survivors. (WSJ $)

3 Elon Musk is forging ahead with his own political party
To the chagrin of investors in his companies. (The Guardian)
+ Former friend Donald Trump has some thoughts. (Insider $)
+ The America Party is facing an uphill struggle. (WP $)

4 The Trump administration has axed a group focused on birth control safety
They were tasked with advising women which contraceptives to use. (Undark)

5 On-the-job learning is under threat
From a combination of generative AI tools and remote working culture. (FT $)

6 xAI’s ‘improved’ Grok is perpetuating anti-Semitic stereotypes
It made worrying comments about Jewish executives in Hollywood. (TechCrunch)
+ LLMs become more covertly racist with human intervention. (MIT Technology Review)

7 Taiwan wants to lessen its commercial reliance on China
But it won’t be easy. (NYT $)
+ How underwater drones could shape a potential Taiwan-China conflict. (MIT Technology Review)

8 LLMs have improved rapidly in the past few years
Benchmarking them is notoriously tricky, though. (IEEE Spectrum)
+ A Chinese firm has just launched a constantly changing set of AI benchmarks. (MIT Technology Review)

9 Big Tech’s salary divide is getting worse
Those whopping AI pay packets are at least partly to blame. (Insider $)

10 More than 30 tech unicorns have been minted during 2025
And we could see a fair few more before the year is out. (TechCrunch)

Quote of the day

“If you go in with the expectation that the AI is as smart or smarter than humans, you’re quickly disappointed by the reality.”

—Eric Schwartz, chief marketing officer of Clorox, tells the Wall Street Journal that AI can’t be relied upon to come up with truly original or engaging ideas.

One more thing

Alina Chan tweeted life into the idea that the virus came from a lab

Alina Chan started asking questions in March 2020. She was chatting with friends on Facebook about the virus then spreading out of China. She thought it was strange that no one had found any infected animal. She wondered why no one was admitting another possibility, which to her seemed very obvious: the outbreak might have been due to a lab accident.

Chan is a postdoc in a gene therapy lab at the Broad Institute, a prestigious research institute affiliated with both Harvard and MIT. Throughout 2020, Chan relentlessly stoked scientific argument, and wasn’t afraid to pit her brain against the best virologists in the world. Her persistence even helped change some researchers’ minds. Read the full story.

—Antonio Regalado

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Why 2025 might just be the year of animal escapes.
+ Very cool—an iron age settlement has been uncovered in England thanks to a lucky metal detectorist.
+ This little armadillo is having the time of their life in a paddling pool.
+ Peace and love to Mr Ringo Starr, 85 years young today!

The digital future of industrial and operational work

Digital transformation has long been a boardroom buzzword—shorthand for ambitious, often abstract visions of modernization. But today, digital technologies are no longer simply concepts in glossy consultancy decks and on corporate campuses; they’re also being embedded directly into factory floors, logistics hubs, and other mission-critical, frontline environments.

This evolution is playing out across sectors: Field technicians on industrial sites are diagnosing machinery remotely with help from a slew of connected devices and data feeds, hospital teams are collaborating across geographies on complex patient care via telehealth technologies, and warehouse staff are relying on connected ecosystems to streamline inventory and fulfillment far faster than manual processes would allow.

Across all these scenarios, IT fundamentals—like remote access, unified login systems, and interoperability across platforms—are being handled behind the scenes and consolidated into streamlined, user-friendly solutions. The way employees experience these tools, collectively known as the digital employee experience (DEX), can be a key component of achieving business outcomes: Deloitte finds that companies investing in frontline-focused digital tools see a 22% boost in worker productivity, a doubling in customer satisfaction, and as much as a 25% increase in profitability.

As digital tools become everyday fixtures in operational contexts, companies face both opportunities and hurdles—and the stakes are only rising as emerging technologies like AI become more sophisticated. The organizations best positioned for an AI-first future are crafting thoughtful strategies to ensure digital systems align with the realities of daily work—and placing people at the heart of the whole process.

IT meets OT in an AI world

Despite promising returns, many companies still face a last-mile challenge in delivering usable, effective tools to the frontline. The Deloitte study notes that less than one-quarter (just 23%) of frontline workers believe they have access to the technology they need to maximize productivity. There are several possible reasons for this disconnect, including the fact that operational digital transformation faces unique challenges compared to office-based digitization efforts.

For one, many companies are using legacy systems that don’t communicate easily across dispersed or edge environments. For example, the office IT department might use completely different software than what’s running the factory floor; a hospital’s patient records might be entirely separate from the systems monitoring medical equipment. When systems can’t talk to one another, troubleshooting issues becomes a time-consuming guessing game—one that often requires manual workarounds or clunky patches.

There’s also often a clash between tech’s typical “ship first, debug later” philosophy and the careful, safety-first approach that operational environments demand. A software glitch in a spreadsheet is annoying; a snafu in a power plant or at a chemical facility can be catastrophic.

Striking a careful balance between proactive innovation and prudent precaution will become ever more important, especially as AI usage becomes more common in high-stakes, tightly regulated environments. Companies will need to navigate a growing tension between the promise of smarter operations and the reality of implementing them safely at scale.

Humans at the heart of transformation efforts

With the buzz over AI and automation reaching fever pitch, it’s easy to overlook the single most impactful factor that makes transformation stick: the human element. The convergence of IT and OT goes hand in hand with the rise of digital employee experience. DEX encompasses everything from logging into systems and accessing applications to navigating networks and completing tasks across devices and locations. At its core, DEX is about ensuring technology empowers employees to work efficiently and without disruption—no matter where or how they work.

Companies investing in DEX technology are seeing measurable gains—from reduced help desk tickets and system downtime to harder-to-quantify benefits like higher employee satisfaction and retention. Frictionless digital workplaces, supported by real-time monitoring and automation capabilities, help organizations attend to IT issues before users experience disruptions or productivity levels dip.

There are real-world examples of seamless DEX in action: Swiss energy and infrastructure provider BKW, for instance, recently built a system that lets their IT team remotely assist employees experiencing technical difficulties across more than 140 subsidiaries. For employees, this means no more waiting for an in-person technician when their device freezes or software hiccups; IT can swoop in remotely and solve problems in minutes instead of hours.

The insurance company RLI faced a different but equally frustrating issue before switching to a centralized, remote IT support system: Technical issues like device lag or overheating were often left unreported, as employees didn’t want to disrupt their workflow or bother the IT team with seemingly minor complaints. Those small performance issues, however, could snowball over time, sometimes causing devices to fail completely. To get ahead of this phenomenon, RLI installed monitoring software to observe device performance in real time and catch issues proactively. Now, when a laptop gets too hot or starts slowing down, IT can address it right away—often before the employee even knows there’s a problem.

Ultimately, the organizations making the biggest strides in DEX recognize that digital transformation is as much about experience as it is about infrastructure. When digital tools feel like helpful extensions of workers’ expertise—rather than obstacles standing in the way of their workday—companies are in a better position to realize the full benefits of their investments.

Smart systems and smarter safeguards

Of course, as operational systems become more interconnected, security vulnerabilities multiply in turn. Consider this hypothetical: In a busy manufacturing plant, a piece of machinery suddenly breaks down. Instead of waiting hours for a technician to arrive on-site, a local operator deploys a mobile augmented reality device that projects step-by-step diagnostic instructions onto the machine. Following guidance from a remote specialist, the operator fixes the equipment and has production back on track in mere minutes.

This snappy and streamlined approach to diagnostics is undeniably efficient, but it opens up the factory floor to multiple external touchpoints: live video feeds streaming to remote experts, cloud databases containing sensitive repair procedures, and direct access to the machine’s diagnostic systems. Suddenly, a manufacturing plant that used to be an island is now part of an interconnected network.

Smart companies are getting practical about the challenges associated with this expanding threat surface. For instance, BKW has taken a structured approach to permissions: Subsidiary IT teams can only access their own company’s devices, outside contractors get temporary access for specific tasks, and employees can reach certain high-powered workstations when they need them.

Bühler, a global industrial equipment manufacturer, also uses centrally managed access controls to govern who can connect to which platforms, as well as when and under what conditions. By enforcing consistent policies from its headquarters, the company ensures all remote support activities are fully monitored and aligned with strict cybersecurity protocols, including compliance with ISO 27001 standards. The system allows Bühler’s extensive global technician network to provide real-time assistance without compromising system integrity.

The power of practical innovation

How do you help a technician troubleshoot equipment when the expert is 500 miles away? How do you catch IT problems before they shut down a production line? How do you keep operations secure without burying workers in passwords and protocols?

These are the kinds of practical questions that companies like Bühler, BKW, and RLI Insurance have focused on solving—and it’s part of why they’re succeeding where others struggle. These examples demonstrate a genuine shift in how successful companies think about technology and transformation. Instead of asking, “What’s the latest digital trend we should adopt?” they’re assessing, “What problems are our people actually trying to solve?”

The organizations pulling ahead to digitally transform frontline operations are the ones that have learned to make complex systems feel simple, intuitive, and secure to boot. Such a practical approach will only become more pressing as AI introduces new layers of complexity to operational work.

Ready to make work work better for your business? Learn how at TeamViewer.com.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Producing tangible business benefits from modern iPaaS solutions

When a historic UK-based retailer set out to modernize its IT environment, it was wrestling with systems that had grown organically for more than 175 years. Prior digital transformation efforts had resulted in a patchwork of hundreds of integration flows spanning cloud, on-premises systems, and third-party vendors, all communicating across multiple protocols. 

The company needed a way to mend the invisible seams stitching together decades of technology decisions. So, rather than layering on yet another patch, it opted for a more cohesive approach: an integration platform as a service (iPaaS) solution, i.e., a cloud-based ecosystem that enables smooth connections across applications and data sources. By going this route, the company reduced the total cost of ownership of its integration landscape by 40%.

The scenario illustrates the power of iPaaS in action. For many enterprises, iPaaS turns what was once a costly, complex undertaking into a streamlined, strategic advantage. According to Forrester research commissioned by SAP, businesses modernizing with iPaaS solutions can see a 345% return on investment over three years, with a payback period of less than six months.

Agile integration for an AI-first world

In 2025, the business need for flexible and friction-free integration has new urgency. When core business systems can’t communicate easily, the impacts ripple across the organization: Customer support teams can’t access real-time order statuses, finance teams struggle to consolidate data for monthly closes, and marketers lack reliable insights to personalize campaigns or effectively measure ROI.

A lack of high-quality data access is particularly problematic in the AI era, which depends on current, consistent, and connected data flows to fuel everything from predictive analytics to bespoke AI copilots. To unleash the full potential of AI, enterprises must first solve for any bottlenecks that prevent information from flowing freely across their systems. They must also ensure data pipelines are reliable and well-governed; when AI models are trained on inconsistent or outdated data, the insights they generate can be misleading or incomplete—which can undermine everything from customer recommendations to financial forecasting.

iPaaS platforms are often well-suited for accomplishing this across dynamic, distributed environments. Built as cloud-native, microservices-based integration hubs, modern iPaaS platforms can scale rapidly, adapt to changing workloads, and support hybrid architectures without adding complexity. They also help simplify the user experience for everyday business users via low-code functionalities that allow both technical and non-technical employees to build workflows with simple drag-and-drop or click-to-configure interfaces.

This self-service model has practical, real-world applications across business functions: For instance, customer service agents can connect support ticketing systems with real-time inventory or shipping data, finance departments can link payment processors to accounting software, and marketing teams can sync CRM data with campaign platforms to trigger personalized outreach—all without waiting for IT to come to the rescue.

Architectural foundations for fast, flexible integration

Several key architectural elements make the agility associated with iPaaS solutions possible:

  1. API-first design that treats every connection as a reusable service
  2. Event-driven capabilities that enable real-time responsiveness
  3. Modular components that can be mixed and matched to address specific business scenarios

These principles are central to making the transition from “spaghetti architecture” to “integration fabric”—a shift from brittle point-to-point connections to intelligent, policy-driven connectivity that spans multidimensional IT environments.

This approach means that when a company wants to add a new application, onboard a new partner, or create a new customer experience, they’re able to do so by tapping into existing integration assets rather than starting from scratch—which can lead to dramatically faster deployment cycles. It also helps enforce consistency and, in some cases, security and compliance across environments (role-based access controls and built-in monitoring capabilities, for example, can allow organizations to apply standards more uniformly).
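These three principles can be sketched in a few lines. The class and connector names below are hypothetical, purely to illustrate the idea of connections registered once as reusable services and composed through events:

```python
# Minimal sketch of "integration fabric" principles: connectors registered
# once as reusable services (API-first), composed through events
# (event-driven), and mixed per scenario (modular). All names hypothetical.
from typing import Callable

class IntegrationHub:
    def __init__(self):
        self.connectors: dict[str, Callable] = {}          # reusable services
        self.subscribers: dict[str, list[Callable]] = {}   # event handlers

    def register(self, name: str, connector: Callable) -> None:
        """Register a connection once; any workflow can reuse it."""
        self.connectors[name] = connector

    def on(self, event: str, handler: Callable) -> None:
        """Subscribe a handler to an event for real-time responsiveness."""
        self.subscribers.setdefault(event, []).append(handler)

    def emit(self, event: str, payload: dict) -> list:
        """Fan an event out to every subscribed handler."""
        return [handler(payload) for handler in self.subscribers.get(event, [])]

hub = IntegrationHub()
hub.register("crm_lookup", lambda order: {"customer": order["customer_id"]})
hub.on("order.created", lambda order: hub.connectors["crm_lookup"](order))
results = hub.emit("order.created", {"customer_id": 42})
```

The point of the sketch is structural: adding a new application means registering one more connector or handler, not rewiring existing point-to-point connections.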

Further, studies suggest that iPaaS solutions enable companies to unlock new revenue streams by integrating previously siloed data and processes. Forrester research found that organizations adopting iPaaS solutions stand to generate nearly $1 million in incremental profit over three years by creating new digital services, improving customer experiences, and automating revenue-generating processes that were previously manual.

Where iPaaS is headed: convergence and intelligence

This momentum is one reason why the global iPaaS market, valued at approximately $12.9 billion in 2024, is projected to exceed $78 billion by 2032, with growth rates above 25% annually.
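As a sanity check, the growth rate implied by those two market figures can be computed directly:

```python
# Implied compound annual growth rate from the cited market figures:
# roughly $12.9B (2024) growing to about $78B (2032), an 8-year span.
start, end, years = 12.9, 78.0, 2032 - 2024
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 25%, consistent with the cited rate
```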

This trajectory is driven by two ongoing trends: the convergence of integration capabilities into broader application development platforms, and the infusion of AI into the integration lifecycle.

Today, the boundaries between iPaaS, automation platforms, and AI development environments are blurring as vendors create unified solutions that can handle everything from basic data synchronization to complex business processes. 

AI and machine learning capabilities are also being embedded directly into integration platforms. Soon, features like predictive maintenance of integration flows or intelligent routing of data based on current conditions are likely to become table stakes. Already, integration platforms are becoming smarter and more autonomous, capable of optimizing themselves and, in some cases, even initiating self-healing actions when problems arise.

At the same time, this shift is transforming how businesses think about integration as a dynamic enabler of AI strategy. In the near future, robust integration frameworks will be essential to operationalize AI at scale and feed these systems the rich, contextual data they need to deliver meaningful insights.

Building integration as competitive advantage

In addition to the retail modernization story detailed earlier, a few more real-world examples highlight the potential of iPaaS:

  • A chemicals manufacturer migrated 363 legacy interfaces to an iPaaS platform and now spins up new integrations 50% faster.
  • A North American bottling company reduced integration runtime costs by more than 50% while supporting 12 legal entities on a single cloud ERP instance through common APIs.
  • A global shipping-technology firm connected its CRM and third-party systems via cloud-based iPaaS solutions, enabling 100% touchless order fulfillment and a 95% cut in cost centers after a nine-month rollout in its first region.

Taken together, these examples make a compelling case for integration as strategy, not just infrastructure. They reflect a shift in mindset, where integration is democratized and embedded into how every team, not just IT, gets work done. Companies that treat integration as a core capability versus an IT afterthought are reaping tangible, enterprise-wide benefits, from faster go-to-market timelines and reduced operational costs to fully automated business processes.

As AI reshapes business processes and customer standards continue to climb, enterprises are realizing that integration architecture determines not only what they can build today, but how quickly they can adapt to whatever comes tomorrow.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.


Google, Microsoft Clarify Ad Bidding

Google and Microsoft are rolling out updates to improve transparency while removing redundant (and confusing) bidding features. The first Google update concerns AI Max for Search, a focus of the company’s recent Marketing Live event.

AI Max Transparency

Google launched AI Max for Search in May. It is now available in all accounts. Using AI, Google displays ads for more queries and customizes them based on the user. The idea is to reach more potential customers.

My initial impressions of AI Max for Search are positive. I haven’t seen many conversions in my clients’ accounts, perhaps because the additional traffic volume is low. But I’m receiving complementary traffic rather than cannibalizing the keywords I’ve bid on. In other words, it’s not driving traffic merely to generate clicks and spend, and the ancillary traffic appears qualified.

Two new segments provide AI Max clarity. First, advertisers can segment the Keyword report by search term match type. The report displays data from AI Max as well as standard exact, phrase, and broad match.

Second, advertisers can view the AI Max search terms and associated landing pages. It’s beneficial for discovering landing pages to test elsewhere in the account or exclude altogether.

Advertisers can view the AI Max search terms and associated landing pages in the search terms report.

For example, a landing page that converts well in AI Max is worth testing as the standard version. Conversely, advertisers could exclude a poor-converting page.

When Performance Max campaigns launched in 2021, they did not provide this level of detail. With AI Max, Google included this transparency from the start.


Advertisers can exclude poor-converting landing pages.

Combined Campaign Metrics

Google introduced Brand Reports late last year to show reach and frequency metrics across campaigns. Advertisers could always view the metrics for individual campaigns but not in aggregate. For example, Video and Demand Gen campaigns provided metrics separately, but not for consumers who viewed ads in both campaigns.

Brand Reports include filters by age range, gender, or both. The “co-viewed” metric shows the number of unique consumers who viewed the ad, even if they watched together on connected TV. It’s a good start for displaying the combined reach and frequency of multiple campaigns.

Unfortunately, the report does not provide conversion metrics to reveal how non-search campaigns contribute to overall conversions.

Microsoft Drops tCPA, tROAS

Microsoft is moving away from tCPA and tROAS bid strategies.

Google removed tCPA and tROAS bidding a couple of years ago because they were redundant: Both told Google to optimize for conversions.

Google renamed the strategy to “Maximize conversions with an optional tCPA.” By default, the “Maximize conversions” strategy strives to, well, maximize conversions, but advertisers can still set an optional acquisition target.

For example, advertisers unconcerned about tCPA can instruct Google to generate as many conversions as possible within the budget. But they can check the option to set a target CPA that does not exceed, say, $50.
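The optional target behaves like a filter on top of conversion maximization. As a toy illustration (not Google’s or Microsoft’s actual bidding algorithm), assuming a hypothetical list of auction opportunities with predicted conversion probabilities:

```python
# Toy model only -- not Google's or Microsoft's real bidding logic.
# Each opportunity has a click cost and a predicted conversion probability,
# so its predicted CPA is cost / p_conv.
opportunities = [
    {"cost": 4.0, "p_conv": 0.10},   # predicted CPA = $40
    {"cost": 3.0, "p_conv": 0.05},   # predicted CPA = $60
    {"cost": 2.0, "p_conv": 0.08},   # predicted CPA = $25
]

def eligible(opps, target_cpa=None):
    """Maximize conversions; if target_cpa is set, skip opportunities
    whose predicted cost per conversion exceeds the cap."""
    if target_cpa is None:
        return opps
    return [o for o in opps if o["cost"] / o["p_conv"] <= target_cpa]

print(len(eligible(opportunities)))                 # 3: no cap, bid on everything
print(len(eligible(opportunities, target_cpa=50)))  # 2: the $60-CPA bid is skipped
```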

Microsoft is adopting the same tactic. Beginning August 4, new Microsoft Ads campaigns will offer only the “maximize conversions” and “maximize conversion value” counterparts. Microsoft says it will automatically transition tCPA and tROAS campaigns.

No action is required by Microsoft advertisers. The update removes redundancies and simplifies bidding.

Relying Too Much On AI Is Backfiring For Businesses via @sejournal, @MattGSouthern

As more companies race to adopt generative AI tools, some are learning a hard lesson: when used without oversight or expertise, these tools can cause more problems than they solve.

From broken websites to ineffective marketing copy, the hidden costs of AI mistakes are adding up, forcing businesses to bring in professionals to clean up the mess.

AI Delivers Mediocrity Without Supervision

Sarah Skidd, a product marketing manager and freelance writer, was hired to revise the website copy generated by an AI tool for a hospitality company, according to a report by the BBC.

Instead of the time- and cost-savings the client expected, the result was 20 hours of billable rewrites.

Skidd told the BBC:

“[The copy] was supposed to sell and intrigue but instead it was very vanilla.”

This isn’t an isolated case. Skidd said other writers have shared similar stories. One told her that 90% of their workload now consists of editing AI-generated text that falls flat.

The issue isn’t just quality. According to a study by researchers Anders Humlum and Emilie Vestergaard, real-world productivity gains from AI chatbots are far below expectations.

Although controlled experiments show improvements of over 15%, most users report time savings of just 2.8% of their work hours on average.

Cutting Corners Can Lead To Problems

The risks go beyond boring copy. Sophie Warner, co-owner of Create Designs, a UK-based digital agency, says she’s seen a wave of clients suffer avoidable problems after trying to use AI tools like ChatGPT for quick fixes.

Warner told the BBC:

“Now they are going to ChatGPT first.”

And that’s often when things go wrong.

In one case, a client used AI-generated code to update an event page. The shortcut crashed their entire website, causing three days of downtime and a $485 repair bill.

Warner says even larger clients encounter similar issues but hesitate to admit AI was involved, making diagnosis harder and more expensive.

Warner added:

“The process of correcting these mistakes takes much longer than if professionals had been consulted from the beginning.”

Training & Infrastructure Matter More Than Tools

The Danish research paper by Humlum and Vestergaard finds that businesses offering AI training and establishing internal guidelines see better (if still modest) results.

Workers with employer support saved slightly more time, about 3.6% of work hours compared to 2.2% without guidance.

Even then, the productivity benefits don’t seem to trickle down. The study found no measurable changes in earnings, hours worked, or job satisfaction for 97% of AI users surveyed.

Prof. Feng Li, associate dean for research and innovation at Bayes Business School, told the BBC:

“Human oversight is essential. Poor implementation can lead to reputational damage, unexpected costs—and even significant liabilities.”

The Gap Between AI Speed & Human Standards

Kashish Barot, a copywriter based in Gujarat, India, told the BBC she spends her time editing AI-generated content for U.S. clients.

She says many underestimate what it takes to produce effective writing.

Barot says:

“AI really makes everyone think it’s a few minutes’ work. However, good copyediting, like writing, takes time because you need to think and not just curate like AI.”

The research backs this up: marketers and software developers report slightly higher time savings when employers support AI use, but gains for teachers and accountants are negligible.

While AI tools may speed up certain tasks, they still require human judgment to meet brand standards and audience needs.

Key Takeaways

The takeaway for businesses? AI isn’t a shortcut to quality. Without proper training, strategy, and infrastructure, even the most powerful tools fall short.

What many companies overlook is that AI’s success depends less on the technology itself and more on the people using it, and whether they’ve been equipped to use it well.

Rushed adoption may save time upfront, but it leads to more expensive problems down the line. Whether it’s broken code, off-brand messaging, or public-facing content that lacks nuance, the cost of fixing AI mistakes can quickly outweigh the perceived savings.

For marketers, developers, and business leaders, the lesson is: AI can help, but only when human expertise stays in the loop.


Featured Image: Roman Samborskyi/Shutterstock

How To Get The Perfect Budget Mix For SEO And PPC via @sejournal, @brookeosmundson

There’s no one-size-fits-all answer when it comes to deciding how much of your marketing budget should go toward SEO versus PPC.

But that doesn’t mean the decision should be based on gut instinct or what your competitors are doing.

Marketing leaders are under more pressure than ever to show a return on every dollar spent.

So, it’s not about choosing one over the other. It’s about finding the right balance based on your goals, your timelines, and what kind of results the business expects to see.

This article walks through how to think about budget allocation between SEO and PPC with a focus on what kind of output you can reasonably expect for your spend.

What You’re Actually Paying For

When you spend money on PPC, you’re buying immediate visibility.

Whether it’s Google Ads, Microsoft Ads, or paid social, you’re paying for clicks, impressions, and leads right now.

That cost is largely predictable and easy to forecast. For example, if your cost-per-click (CPC) is $3 and your budget is $10,000, you can expect roughly 3,300 clicks.

PPC spend can be directly tied to pipeline, which is why it’s often favored by performance-driven teams.

With SEO, you’re investing in long-term growth. You’re paying for content, technical fixes, site structure improvements, and link acquisition.

But you don’t pay for clicks or impressions. Once rankings improve, those clicks come organically.

The upside is compounding growth and reduced cost per lead over time.

The downside? It can take months to see meaningful impact, and the cost-to-output ratio is harder to predict.

It’s also worth noting that PPC costs often increase with competition, while SEO costs tend to remain relatively stable over time. That can make SEO more scalable in the long term, especially for brands in high-CPC industries.

How Urgency And Goals Influence Budget Splits

If you need leads or traffic now, PPC should probably get the bulk of your short-term budget.

Launching a new product? Trying to meet quarterly goals? Paid search and social can give you the volume you need pretty quickly.

But if you’re trying to reduce customer acquisition cost (CAC) in the long run or improve visibility in organic search to support brand awareness, SEO deserves more attention. It builds value over time and often pays dividends past the life of your campaign.

Many brands start with a 70/30 or 60/40 split favoring PPC, then shift the mix as organic efforts gain traction.

Just make sure you set clear expectations: SEO is not a quick fix, and over-promising short-term gains can backfire when the board wants results next quarter.

If you’re rebranding, expanding into new markets, or supporting a product launch, a heavier upfront PPC investment makes sense. But brands that already rank well organically or have strong content foundations can afford to rebalance the mix in favor of SEO.

Why Organic Traffic Is Getting Harder To Defend

One emerging challenge for organic marketing is the rise of AI Overviews in Google Search. More brands are seeing a dip in organic traffic even when they maintain strong rankings.

Why?

Because the search experience is shifting. AI-generated summaries are now answering questions directly on the results page, often pushing traditional organic listings further down.

That means your SEO strategy can’t just be about rankings anymore. You need to invest in content that earns visibility in AI Overviews, featured snippets, and other enhanced search features.

This may involve rethinking how content is structured, focusing more on schema markup, FAQs, and direct-answer formats that AI models tend to surface.

In practical terms, your SEO budget should now include:

  • Structured content planning built around entity-based search.
  • Technical SEO improvements like schema and page speed.
  • Multimedia content like images and videos, which AI often pulls into results.
  • Continual refresh of older content to maintain relevance in evolving search formats.

This shift doesn’t mean SEO is no longer worth it. It means you need to be more strategic in how you spend.

Ask your SEO partner or in-house team how they’re adapting to AI search changes, and make sure your budget reflects that evolution.

Budget Planning Based On Realistic Outputs

Let’s put this into numbers. Say you have a $100,000 annual digital marketing budget.

Putting $80,000 toward PPC might get you 25,000 paid clicks and 500 conversions (based on a fictional $3.20 CPC and 2% conversion rate).

The remaining $20,000 on SEO might buy you four high-quality articles a month, technical clean-up work, and backlink outreach.

If done well, this might start showing traction in three to six months and bring in sustained traffic over time.
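The arithmetic behind that projection can be wrapped in a small, reusable model. The CPC and conversion rate below are the article’s hypothetical figures, not benchmarks:

```python
# Project PPC outputs from a budget. The $3.20 CPC and 2% conversion
# rate are the article's hypothetical example figures, not benchmarks.
def ppc_forecast(budget, cpc=3.20, conv_rate=0.02):
    """Return projected (clicks, conversions) for a given PPC budget."""
    clicks = budget / cpc
    return round(clicks), round(clicks * conv_rate)

clicks, conversions = ppc_forecast(80_000)
print(clicks, conversions)  # 25000 clicks, 500 conversions
```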

The key is to model your budget around what’s actually possible for each channel, not just what you hope will happen. SEO efforts often have a longer lag time, but PPC campaigns can run out of gas as soon as you turn off the spend.

You should also budget for maintenance and reinvestment. Even strong SEO performance requires fresh content and updates to keep rankings.

Similarly, PPC campaigns need regular optimization, creative testing, and bid adjustments to stay efficient.

You should also plan for budget allocation across different campaign types: brand vs. non-brand, search vs. display, and prospecting vs. retargeting.

Each serves a different purpose, and over-investing in one without supporting the others can limit growth.

For example, allocating part of your PPC budget to retargeting warm audiences can drastically improve efficiency compared to cold prospecting alone.

While branded search often delivers low-cost conversions, it shouldn’t be your only area of investment if you’re trying to scale.

What To Communicate To Leadership

Leadership wants to know two things: how much are we spending, and what are we getting in return?

A mixed SEO and PPC strategy gives you the ability to answer both.

PPC provides short-term wins you can report on monthly.

SEO builds long-term momentum that pays off in quarters and years.

Explain that PPC is more like a faucet you control. SEO is more like building your own well. Both are valuable.

But if you only have one or the other, you’re either stuck renting traffic or waiting too long to see the impact.

Board members and non-marketing executives often prefer hard numbers. So, when proposing a budget mix, include projected costs per acquisition, estimated traffic volumes, and timelines for ramp-up.

Make it clear where each dollar is going and what kind of return is expected.

If possible, create a model that shows various scenarios. For example, what a 50/50 vs. 70/30 SEO/PPC split might look like in terms of conversions, traffic, and cost per lead over time.

Visuals help ground the conversation in data rather than preference.
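A minimal sketch of such a scenario model, reusing the hypothetical $3.20 CPC and 2% conversion rate from earlier; the SEO yield figure is an illustrative assumption, since organic returns vary widely and take months to materialize:

```python
# Compare SEO/PPC splits on a $100K budget. PPC figures use the article's
# hypothetical $3.20 CPC and 2% conversion rate; the SEO yield is an
# assumed placeholder for illustration only.
BUDGET = 100_000
CPC, PPC_CONV_RATE = 3.20, 0.02
SEO_CONV_PER_DOLLAR = 0.004   # assumed: 4 organic conversions per $1K, post-ramp-up

def scenario(seo_share):
    """Project total conversions and cost per lead for a given SEO share."""
    seo_budget = BUDGET * seo_share
    ppc_budget = BUDGET - seo_budget
    ppc_conversions = (ppc_budget / CPC) * PPC_CONV_RATE
    seo_conversions = seo_budget * SEO_CONV_PER_DOLLAR
    total = ppc_conversions + seo_conversions
    return {"seo/ppc": f"{seo_share:.0%}/{1 - seo_share:.0%}",
            "conversions": round(total),
            "cost_per_lead": round(BUDGET / total, 2)}

for split in (0.5, 0.3):
    print(scenario(split))
```

Swapping in your own CPC, conversion rates, and SEO assumptions turns this into the kind of side-by-side table that grounds a leadership conversation in numbers rather than preference.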

Choosing The Right Metrics For Each Channel

One challenge with mixed-channel budget planning is deciding which key performance indicator (KPI) to prioritize.

PPC is easier to measure in terms of direct return on investment (ROI), but SEO plays a broader role in business success.

For PPC metrics, you may want to focus on KPIs like:

  • Impression share.
  • Conversion rate.
  • Cost per acquisition (CPA).
  • Return on ad spend (ROAS).

For SEO metrics, you may want to focus on:

  • Organic traffic growth over time.
  • Ranking improvements.
  • Page engagement.
  • Assisted conversions.

When reporting to leadership, show how the two channels complement each other.

For example, paid search might drive immediate clicks, but your top-converting landing page could rank organically and reduce spend over time.

When To Adjust Your Budget Mix

Your initial budget allocation isn’t set in stone. It should evolve based on performance data, market shifts, and internal needs.

If PPC costs rise but conversion rates drop, that could be a cue to pull back and invest more in organic.

If you’re seeing strong rankings but low engagement, it may be time to shift some SEO funds into conversion rate optimization (CRO) or paid retargeting.

Seasonality and campaign cycles also matter. Retailers may lean heavily on PPC during Q4, while B2B companies might invest more in SEO during longer sales cycles.

Set quarterly review points where you re-evaluate performance and make adjustments. That level of agility shows leadership you’re making informed decisions, not just sticking to arbitrary ratios.

Avoiding Common Budget Mistakes

Some companies go all-in on SEO, expecting miracles. Others burn through paid budgets with nothing left to sustain organic efforts. Both approaches are risky.

A healthy mix means budgeting for:

  • Immediate lead gen (PPC).
  • Long-term traffic growth (SEO).
  • Regular testing and performance analysis.

Don’t forget to budget for what happens after the click: landing page development, CRO, and reporting tools that tie it all together.

Another mistake is treating SEO as a one-time project instead of an ongoing investment. If you only fund it during a site migration or a content sprint, you’ll lose momentum.

Same goes for PPC: Without a proper landing page experience or conversion tracking, even high-performing ads won’t deliver meaningful results.

Balancing Short-Term Wins With Long-Term Growth

There is no universal perfect split between SEO and PPC. But there is a perfect mix for your goals, stage of growth, and available resources.

Take the time to assess what you actually need from each channel and what you can realistically afford. Make sure your projections align with internal timelines and expectations.

And most importantly, keep reviewing your mix as performance data rolls in. The right budget allocation today might look very different six months from now.

Smart marketing leaders don’t choose sides. They choose what makes sense for the business today, and build flexibility into their strategy for tomorrow.

Featured Image: Jirapong Manustrong/Shutterstock

This Is Why AI Won’t Take Your Job (Yet) via @sejournal, @SequinsNsearch

SEO has died a thousand deaths this year alone, and the buzzword that resonates across every boardroom (and, let’s be honest, everywhere else) is “AI.”

With Google releasing several AI-powered views over the past year and a half, along with AI Mode, its own answer to SearchGPT, we are witnessing traffic erosion that is very hard to counteract if we stay stuck in our traditional view of our role as search professionals.

And it is only natural that the debate we keep hearing is the same: Is AI eventually going to take our jobs? In a stricter sense, it probably will.

SEO, as we know it, has transformed drastically. It will keep evolving, forcing people to take on new skills and have a broader, multichannel strategy, along with clear and prompt communication to stakeholders who might still be confused about why clicks keep dropping while impressions stay the same.

The next year is expected to bring changes and probably some answers to this debate.

But in the meantime, I was able to draw some predictions, based on my own study investigating humans’ ability to discern AI, to see if the “human touch” really has an advantage over it.

Why This Matters For Us Now

Knowing whether people can recognize AI matters for us because people’s behavior changes when they know they’re interacting with it, compared to when they don’t.

A 2023 study by Yunhao Zhang and Renée Richardson Gosline compared content created by humans, AI, and hybrid approaches for marketing copy and persuasive campaigns.

What they noticed is that when the source was undisclosed, participants preferred AI-generated content, a result that was reversed when they knew how the content was created.

It’s as if transparency about using AI added a layer of wariness to the interaction, rooted in the mistrust commonly reserved for any new and relatively unknown experience.

At the end of the day, we have consumed human-written content for centuries, but generative AI has been scaled only in the past few years, so this wasn’t even a challenge we were exposed to before.

Similarly, Gabriele Pizzi from the University of Bologna showed that when people interact with an AI chatbot in a simulated shopping environment, they are more likely to consider the agent as competent (and, in turn, trust it with their personal information) when the latter looks more human as compared to “robotic.”

And as marketers, we know that trust is the ultimate seal not only to get a visit and a transaction, but also to form a lasting relationship with the user behind the screen.

So, if recognizing AI content changes the way we interact with it and make decisions, do we still retain the human advantage when AI material gets so close to reality that it is virtually indistinguishable?

Your Brain Can Discriminate AI, But It Doesn’t Mean We Are Infallible Detectors

Previous studies have shown that humans display a feeling of discomfort, known as the uncanny valley, when they see or interact with an artificial entity with semi-realistic features.

This negative feeling manifests physiologically as higher activity of our sympathetic nervous system (the division responsible for our “fight or flight” response) before participants can verbally report on it, or are even aware of it.

It’s a measure of their “gut feeling” towards a stimulus that mimics human features, but does not succeed in doing so entirely.

The uncanny valley phenomenon arises from the fact that our brain, being used to predicting patterns and filling in the blanks based on our own experience, sees these stimuli as “glitches” and spots them as outliers in our known library of faces, bodies, and expressions.

The deviation from the norm and the uncertainty in labeling these “uncanny” stimuli can be triggering from a cognitive perspective, which manifests as higher electrodermal activity (shortened to EDA), a measure of physiological arousal recorded with electrodes on the skin.

Based on this evidence, it is realistic to hypothesize that our brain can spot AI before making any active discrimination, and that we would see higher EDA in response to AI-generated faces, especially when there is something “off” about them.

It is unclear, though, at what level of realism we stop displaying a distinctive response, so I wanted to find that out with my own research.

Here are the questions I set out to answer with my study:

  1. Do we have an in-built pre-conscious “detector” system for AI, and at what point of realistic imitation does it stop responding?
  2. If we do, does it guide our active discrimination between AI and human content?
  3. Is our ability to discriminate influenced by our overall exposure to AI stimuli in real life?

And most of all, can any of the answers to these questions predict the next challenges we’ll face in search and marketing?

To answer these questions, I measured the electrodermal activity of 24 participants between 25 and 65 years old as they were presented with neutral, AI-generated, and human-generated images, and checked for any significant differences in responses to each category.

My study ran in three phases, one for each question I had:

  1. A first task where participants visualized neutral, AI, and human static stimuli on a screen without any actions required, while their electrodermal activity was recorded. This was intended to measure the automatic, pre-conscious response to the stimuli presented.
  2. A second behavioral task, where participants had to press a button to categorize the faces that they had seen into AI- vs. human-generated, as fast and accurately as they could, to measure their conscious discrimination skills.
  3. A final phase where participants declared their demographic range and their familiarity with AI on a self-reported scale across five questions. This gave me a self-reported “AI-literacy” score for each participant that I could correlate with any of the other measures obtained from the physiological and behavioral tasks.
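As an illustration, the statistical side of these three phases could be sketched in a few lines, assuming per-participant EDA averages and task scores have already been extracted. All numbers below are invented for the example and are not the study’s data:

```python
# A minimal sketch of the analysis pipeline described above, on made-up data.
# Real EDA preprocessing (tonic/phasic decomposition, artifact rejection) is
# omitted; the values here are illustrative, not the study's actual results.
from scipy import stats

# Mean EDA amplitude per participant for two conditions (hypothetical values).
eda_human = [0.42, 0.51, 0.38, 0.47, 0.55, 0.44]
eda_ai    = [0.31, 0.40, 0.33, 0.36, 0.45, 0.37]

# Phase 1: paired comparison of pre-conscious activation between conditions.
t_stat, p_value = stats.ttest_rel(eda_human, eda_ai)

# Phases 2-3: correlate self-reported AI-literacy with categorization accuracy.
literacy = [2.0, 3.5, 4.0, 1.5, 5.0, 3.0]   # 1-5 self-report scale
accuracy = [0.70, 0.78, 0.85, 0.65, 0.90, 0.76]
r, p_corr = stats.pearsonr(literacy, accuracy)

print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
print(f"literacy-accuracy r = {r:.2f}")
```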

And here is what I found:

  • Participants showed a significant difference in pre-conscious activation between conditions; in particular, EDA was significantly higher for human faces than for AI faces (both hyper-realistic and CGI). This supports the hypothesis that our brain can tell the difference between AI and human faces before we even initiate a discrimination task.
  • The higher activation for human faces contrasts with the older literature showing higher activation for uncanny valley stimuli, and this could be related to either our own habituation to CGI visuals (meaning they are not triggering outliers anymore), or the automatic cognitive effort involved in trying to extrapolate the emotion of human neutral faces. As a matter of fact, the limitation of EDA is that it tells us something is happening in our nervous system, but it doesn’t tell us what: higher activity could be related to familiarity and preference, negative emotional states, or even cognitive effort, so more research on this is needed.
  • Exposure and familiarity with AI material correlated with higher accuracy when participants had to actively categorize faces into AI-generated and human, supporting the hypothesis that the more we are exposed to AI, the better we become at spotting subtle differences.
  • People were much faster and more accurate at categorizing stimuli of the “uncanny valley” kind as AI-generated, but struggled with hyper-realistic faces, miscategorizing them as human in 22% of cases.
  • Active discrimination was not guided by pre-conscious activation. Although a difference in autonomic activity can be seen for AI and human faces, it did not correlate with how fast or accurate participants were. In fact, it can be argued that participants “second-guessed” their own instincts when they knew they had to make a choice.

And yet, the biggest result of all was something I noticed on the pilot I ran before the real study: When the participant is familiar with the brand or the product presented, it’s how they feel about it that guides what we see at the neural level, rather than the automatic response to the image presented.

So, while our brain can technically “tell the difference,” our emotions, familiarity with the brand, the message, and expectations are all factors that can heavily skew our own attitude and behavior, essentially making our discrimination (automatic or not) almost irrelevant in the cascade of evaluations we make.

This has massive implications not only in the way we retain our existing audience, but also in how we approach new ones.

We are now at a stage where understanding what our user wants beyond the immediate query is even more vital, and we have a competitive advantage if we can identify all of this before they explicitly express their needs.

The Road To Survival Isn’t Getting Out Of The Game. It’s Learning The New Rules To Play By

So, does marketing still need real people?

It definitely does, although that’s hard to see now that every business is gripped by the fear of missing out on the big AI opportunity and distracted by the new shiny objects populating the web every day.

Humans thrive on change – that’s how we learn and grow new connections and associations that help us adapt to new environments and processes.

Ever heard of the word neuroplasticity? While it might just sound like a fancy term for learning, it is quite literally the ability of your brain to reshape as a result of experience.

That’s why I think AI won’t take our jobs. We are focusing on AI’s fast progress in ingesting content and recreating outputs that are virtually indistinguishable from our own, but we are not paying attention to our own power to evolve on this new playing field.

AI will keep on moving, but so will the needle of our discernment and our behavior towards it, based on the experiences that we build with new processes and material.

My results already indicate how familiarity with AI plays a role in how good we are at recognizing it, and in a year’s time, even the EDA results might change as a function of progressive exposure.

Our skepticism and wariness towards AI are rooted in its unknown sides, paired with a lot of the misuse we’ve seen as a by-product of fast, virtually unregulated growth.

The nature of our next interactions with AI will shape our behavior.

I think this is our opportunity as an industry to create valuable AI-powered experiences without sacrificing the quality of our work, our ethical responsibilities toward the user, and our relationship with them. It’s a slower process, but one worth undertaking.

So, even if, at the beginning, I approached this study as a man vs. the machine showdown, I believe we are heading toward the man and the machine era.

Far from the “use AI for everything” approach we tend to see around, below is a breakdown of where I see (supervised) integration of AI into our jobs as unproblematic, and where I think it still has no place in its current state.

Use: Anything That Provides Information, Facilitates Navigation, And Streamlines User Journeys

  • For example, testing product descriptions based on the features that already reside in the catalog, or providing summaries of real users’ reviews that highlight pros and cons straight away.
  • Virtual try-ons and enabling recommended products based on similarity.
  • Automating processes like identifying internal link opportunities, categorizing intent, and combining multiple data sources for better insights.
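The first of those automations, identifying internal link opportunities, can be sketched as a simple keyword-mention scan: flag any page whose copy mentions another page’s target keyword without (presumably) linking to it. The URLs, keywords, and page text below are invented for illustration:

```python
# Hedged sketch of internal-link opportunity detection: for each page, look
# for mentions of other pages' target keywords. All page data is hypothetical;
# a real pipeline would also check whether the link already exists.

pages = {
    "/eda-guide": {"keyword": "electrodermal activity",
                   "text": "How we measure skin conductance in the lab."},
    "/uncanny-valley": {"keyword": "uncanny valley",
                        "text": "Electrodermal activity spikes when faces look almost human."},
    "/ai-faces": {"keyword": "ai-generated faces",
                  "text": "Hyper-realistic faces sit past the uncanny valley."},
}

def link_opportunities(pages):
    """Yield (source, target) pairs where source text mentions target's keyword."""
    for src, src_data in pages.items():
        text = src_data["text"].lower()
        for dst, dst_data in pages.items():
            if src != dst and dst_data["keyword"] in text:
                yield (src, dst)

print(sorted(link_opportunities(pages)))
# [('/ai-faces', '/uncanny-valley'), ('/uncanny-valley', '/eda-guide')]
```

The same scan-and-flag pattern extends naturally to intent categorization and to joining multiple data sources, with a human reviewing the suggestions before anything ships.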

Avoid: Anything That’s Based On Establishing A Connection Or Persuading The User

  • This includes any content that fakes expertise and authority in the field. The current technology (and the lack of regulation) even allows for AI influencers, but bear in mind that your brand authenticity is still your biggest asset to preserve when the user is looking to convert. The pitfalls of deceiving them when they expect organic content are greater than just losing a click. This is the work you can’t automate.
  • Similarly, avoid generating reviews or user-generated content at scale to convey legitimacy or value. If you know this is what your users want more information on, you cannot meet their doubts with fake arguments. Gaming tactics are short-lived in marketing because people learn to discern them and actively avoid them once they realize they are being deceived. Humans crave authenticity and real peer validation of their decisions because it makes them feel safe. If we ever reach a point where, as a collective, we feel we can trust AI, then it might be different. But that won’t happen while most of its current use is dedicated to tricking users into a transaction at all costs, rather than providing the information they need to make an informed decision.
  • Replacing experts and quality control. If it backfired for customer-favorite Duolingo, it will likely backfire for you, too.

The New Goals We Should Be Setting

Here’s where a new journey starts for us.

Collective search behavior has already changed, not only as a consequence of AI-powered views on the SERP that make our consumption of information and decision-making faster and easier, but also with the introduction of new channels and forms of content (the “Search Everywhere” revolution we hear all about now).

This brings us to new goals as search professionals:

  • Be omnipresent: Now is the time to work with other channels to improve organic brand awareness and stay in the mind of the user at every stage of the journey.
  • Remove friction: Now that we can get answers right off the search engine results page without even clicking to explore more, speed is the new normal, and anything that makes the journey slower is an abandonment risk. Getting your customers what they want straight off the bat (being transparent with your offer, removing unnecessary steps to find information, and improving user experience to complete an action) prevents them from going to seek better results from competitors.
  • Preserve your authenticity: Users want to trust you and feel safe in their choices, so don’t fall into the hype of scalability that could harm your brand.
  • Get to know your customers deeper: Keyword data is no longer enough. We need to know their emotional states when they search, what their frustrations are, and what problems they are trying to solve. And most of all, how they feel about our brand, our product, and what they expect from us, so that we can really meet them where they are before a thousand other options come into play.

We’ve been there before. We’ll adapt again. And I think we’ll come out okay (maybe even more skilled) on the other side of the AI hype.

Featured Image: Stock-Asso/Shutterstock

Google AI Overviews Target Of Legal Complaints In The UK And EU via @sejournal, @martinibuster

The Movement For An Open Web and other organizations filed a legal challenge against Google, alleging harm to UK news publishers. The crux of the legal filing is the allegation that Google’s AI Overviews product is using news content as part of its summaries and for grounding AI answers, but not allowing publishers to opt out of that use without also opting out of appearing in search results.

The Movement For An Open Web (MOW) in the UK published details of a complaint to the UK’s Competition and Markets Authority (CMA):

“Last week, the CMA announced plans to consult on how to make Google search fairer, including providing “more control and transparency for publishers over how their content collected for search is used, including in AI-generated responses.” However, the complaint from Foxglove, the Alliance and MOW warns that news organisations are already being harmed in the UK and action is needed immediately.

In particular, publishers urgently need the ability to opt out of Google’s AI summaries without being removed from search altogether. This is a measure that has already been proposed by other leading regulators, including the US Department of Justice and the South African Competition Commission. Foxglove is warning that without immediate action, the UK – and its news industry – risks being left behind, while other states take steps to protect independent news from Google.

Foxglove is therefore seeking interim measures to prevent Google misusing publisher content pending the outcome of the CMA’s more detailed review.”

Reuters is reporting on an EU antitrust complaint filed in Brussels seeking relief for the same thing:

“Google’s core search engine service is misusing web content for Google’s AI Overviews in Google Search, which have caused, and continue to cause, significant harm to publishers, including news publishers in the form of traffic, readership and revenue loss.”

Publishers And SEOs Critical Of AI Overviews

Google is under increasing criticism from the publisher and SEO communities for sending fewer clicks to publishers, although Google itself insists it is sending more traffic than ever. This may be one of those occasions where the phrase “let the judge decide” describes where this is all going, because there are no signs that Google is backing down from its decade-long trend of showing fewer links and more answers.

Featured Image by Shutterstock/nitpicker