Consumer Trust And Perception Of AI In Marketing

This edited excerpt is from Ethical AI in Marketing by Nicole Alexander ©2025 and is reproduced and adapted with permission from Kogan Page Ltd.

Recent research highlights intriguing paradoxes in consumer attitudes toward AI-driven marketing. Consumers encounter AI-powered marketing interactions frequently, often without realizing it.

According to a 2022 Pew Research Center survey, 27% of Americans reported interacting with AI at least several times a day, while another 28% said they interact with AI about once a day or several times a week (Pew Research Center, 2023).

As AI adoption continues to expand across industries, marketing applications – from personalized recommendations to chatbots – are increasingly shaping consumer experiences.

According to McKinsey & Company (2023), AI-powered personalization can deliver five to eight times the ROI on marketing spend and significantly boost customer engagement.

In this rapidly evolving landscape, trust in AI has become a crucial factor for successful adoption and long-term engagement.

The World Economic Forum underscores that “trust is the foundation for AI’s widespread acceptance,” and emphasizes the necessity for companies to adopt self-governance frameworks that prioritize transparency, accountability, and fairness (World Economic Forum, 2025).

The Psychology Of AI Trust

Consumer trust in AI marketing systems operates fundamentally differently from traditional marketing trust mechanisms.

Where traditional marketing trust builds through brand familiarity and consistent experiences, AI trust involves additional psychological dimensions related to automation, decision-making autonomy, and perceived control.

Understanding these differences is crucial for organizations seeking to build and maintain consumer trust in their AI marketing initiatives.

Cognitive Dimensions

Neurological studies offer intriguing insights into how our brains react to AI. Research from Stanford University reveals that we process information differently when interacting with AI-powered systems.

For example, when evaluating AI-generated product recommendations, our brains activate distinct neural pathways compared to those triggered by recommendations from a human salesperson.

This crucial difference highlights the need for marketers to understand how consumers cognitively process AI-driven interactions.

Three key cognitive factors have emerged as critical influences on AI trust: perceived control, understanding of mechanisms, and value recognition.

Emotional Dimensions

Consumer trust in AI marketing is deeply influenced by emotional factors, which often override logical evaluations. These emotional responses shape trust in several key ways:

  • Anxiety and privacy concerns: Despite AI’s convenience, 67% of consumers express anxiety about how their data is used, reflecting persistent privacy concerns (Pew Research Center, 2023). This tension creates a paradoxical relationship where consumers benefit from AI-driven marketing while simultaneously fearing its potential misuse.
  • Trust through repeated interactions: Emotional trust in AI systems develops iteratively through repeated, successful interactions, particularly when systems demonstrate high accuracy, consistent performance, and empathetic behavior. Experimental studies show that emotional and behavioral trust accumulate over time, with early experiences strongly shaping later perceptions. In repeated legal decision-making tasks, users exhibited growing trust toward high-performing AI, with initial interactions significantly influencing long-term reliance (Kahr et al., 2023). Emotional trust can follow nonlinear pathways – dipping after failures but recovering through empathetic interventions or improved system performance (Tsumura and Yamada, 2023).
  • Honesty and transparency in AI content: Consumers increasingly value transparency regarding AI-generated content. Companies that openly disclose when AI has been used – for instance, in creating product descriptions – can empower customers by helping them feel more informed and in control of their choices. Such openness often strengthens customer trust and fosters positive perceptions of brands actively embracing transparency in their marketing practices.

Cultural Variations In AI Trust

The global nature of modern marketing requires a nuanced understanding of cultural differences in AI trust. These variations arise from deeply ingrained societal values, historical relationships with technology, and norms around privacy, automation, and decision-making.

For marketers leveraging AI in customer engagement, recognizing these cultural distinctions is crucial for developing trustworthy AI-driven campaigns, personalized experiences, and region-specific data strategies.

Diverging Cultural Trust In AI

Research reveals significant disparities in AI trust across global markets. A KPMG (2023) global survey found that 72% of Chinese consumers express trust in AI-driven services, while in the U.S., trust levels plummet to just 32%.

This stark difference reflects broader societal attitudes toward government-led AI innovation, data privacy concerns, and varying historical experiences with technology.

Another study found that AI-related job displacement fears vary greatly by region. In countries like the U.S., India, and Saudi Arabia, consumers express significant concerns about AI replacing human roles in professional sectors such as medicine, finance, and law.

In contrast, consumers in Japan, China, and Turkey exhibit lower levels of concern, signaling a higher acceptance of AI in professional settings (Quantum Zeitgeist, 2025).

This insight is invaluable for marketers crafting AI-driven customer service, financial tools, and healthcare applications, as perceptions of AI reliability and utility vary significantly by region.

As trust in AI diverges globally, understanding the role of cultural privacy norms becomes essential for marketers aiming to build trust through AI-driven services.

Cultural Privacy Targeting In AI Marketing

As AI-driven marketing becomes more integrated globally, the concept of cultural privacy targeting – the practice of aligning data collection, privacy messaging, and AI transparency with cultural values – has gained increasing importance. Consumer attitudes toward AI adoption and data privacy are highly regional, requiring marketers to adapt their strategies accordingly.

In more collectivist societies like Japan, AI applications that prioritize societal or community well-being are generally more accepted than those centered on individual convenience.

This is evident in Japan’s Society 5.0 initiative – a national vision introduced in 2016 that seeks to build a “super-smart” society by integrating AI, IoT, robotics, and big data to solve social challenges such as an aging population and strains on healthcare systems.

Businesses are central to this transformation, with government and industry collaboration encouraging companies to adopt digital technologies not just for efficiency, but to contribute to public welfare.

Across sectors – from manufacturing and healthcare to urban planning – firms are reimagining business models to align with societal needs, creating innovations that are both economically viable and socially beneficial.

In this context, AI is viewed more favorably when positioned as a tool to enhance collective well-being and address structural challenges. For instance, AI-powered health monitoring technologies in Japan have seen increased adoption when positioned as tools that contribute to broader public health outcomes.

Conversely, Germany, as an individualistic society with strong privacy norms and high uncertainty avoidance, places significant emphasis on consumer control over personal data. The EU’s GDPR and Germany’s support for the proposed Artificial Intelligence Act reinforce expectations for robust transparency, fairness, and user autonomy in AI systems.

According to the OECD (2024), campaigns in Germany that clearly communicate data usage, safeguard individual rights, and provide opt-in consent mechanisms experience higher levels of public trust and adoption.

These contrasting cultural orientations illustrate the strategic need for contextualized AI marketing – ensuring that data transparency and privacy are not treated as one-size-fits-all, but rather as culture-aware dimensions that shape trust and acceptance.

Hofstede’s (2011) cultural dimensions theory offers further insights into AI trust variations:

  • High individualism + high uncertainty avoidance (e.g., Germany, U.S.) → Consumers demand transparency, data protection, and human oversight in AI marketing.
  • Collectivist cultures with lower uncertainty avoidance (e.g., Japan, China, South Korea) → AI is seen as a tool that enhances societal progress, and data-sharing concerns are often lower when the societal benefits are clear (Gupta et al., 2021).

For marketers deploying AI in different regions, these insights help determine which features to emphasize:

  • Control and explainability in Western markets (focused on privacy and autonomy).
  • Seamless automation and societal progress in East Asian markets (focused on communal benefits and technological enhancement).

Understanding the cultural dimensions of AI trust is key for marketers crafting successful AI-powered campaigns.

By aligning AI personalization efforts with local cultural expectations and privacy norms, marketers can improve consumer trust and adoption in both individualistic and collectivist societies.

This culturally informed approach helps brands tailor privacy messaging and AI transparency to the unique preferences of consumers in various regions, building stronger relationships and enhancing overall engagement.

Avoiding Overgeneralization In AI Trust Strategies

While cultural differences are clear, overgeneralizing consumer attitudes can lead to marketing missteps.

A 2024 ISACA report warns against rigid AI segmentation, emphasizing that trust attitudes evolve with:

  • Media influence (e.g., growing fears of AI misinformation).
  • Regulatory changes (e.g., the EU AI Act’s impact on European consumer confidence).
  • Generational shifts (younger, digitally native consumers are often more AI-trusting, regardless of cultural background).

For AI marketing, this highlights the need for flexible, real-time AI trust monitoring rather than static cultural assumptions.

Marketers should adapt AI trust-building strategies based on region-specific consumer expectations:

  • North America and Europe: AI explainability, data transparency, and ethical AI labels increase trust.
  • East Asia: AI-driven personalization and seamless automation work best when framed as benefiting society.
  • Islamic-majority nations and ethical consumer segments: AI must be clearly aligned with fairness and ethical governance.
  • Global emerging markets: AI trust is rapidly increasing, making these markets prime opportunities for AI-driven financial inclusion and digital transformation.

Data drawn from the 2023 KPMG International survey underscores how cultural values such as collectivism, uncertainty avoidance, and openness to innovation shape public attitudes toward AI.

For example, trust levels in Germany and Japan remain low, reflecting high uncertainty avoidance and strong privacy expectations, while countries like India and Brazil exhibit notably higher trust, driven by optimism around AI’s role in societal and economic progress.

Measuring Trust In AI Marketing Systems

As AI becomes central to how brands engage customers – from personalization engines to chatbots – measuring consumer trust in these systems is no longer optional. It’s essential.

And yet, many marketing teams still rely on outdated metrics like Net Promoter Score (NPS) or basic satisfaction surveys to evaluate the impact of AI. These tools are helpful for broad feedback but miss the nuance and dynamics of trust in AI-powered experiences.

Recent research, including work from MIT Media Lab (n.d.) and leading behavioral scientists, makes one thing clear: Trust in AI is multi-dimensional, and it’s shaped by how people feel, think, and behave in real-time when interacting with automated systems.

Traditional metrics like NPS and CSAT (Customer Satisfaction Score) tell you if a customer is satisfied – but not why they trust (or don’t trust) your AI systems.

They don’t account for how transparent your algorithm is, how well it explains itself, or how emotionally resonant the interaction feels. In AI-driven environments, you need a smarter way to understand trust.

A Modern Framework For Trust: What CMOs Should Know

MIT Media Lab’s work on trust in human-AI interaction offers a powerful lens for marketers. It breaks trust into three key dimensions:

Behavioral Trust

This is about what customers do, not what they say. When customers engage frequently, opt in to data sharing, or return to your AI tools repeatedly, that’s a sign of behavioral trust. How to track it:

  • Repeat engagement with AI-driven tools (e.g., product recommenders, chatbots).
  • Opt-in rates for personalization features.
  • Drop-off points in AI-led journeys.
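In practice, the signals above come straight out of event logs. Here is a minimal sketch, assuming a hypothetical log of (user, step) events from an AI-led journey; the step names and log format are illustrative, not a real product schema:

```python
# Hypothetical event log: (user_id, step) pairs from an AI-led journey.
events = [
    ("u1", "open_chatbot"), ("u1", "get_recommendation"), ("u1", "purchase"),
    ("u2", "open_chatbot"), ("u2", "get_recommendation"),
    ("u3", "open_chatbot"),
]

FUNNEL = ["open_chatbot", "get_recommendation", "purchase"]

def step_reach_rates(events, funnel):
    """Share of users reaching each step; the gap between consecutive
    steps is the drop-off point worth investigating."""
    users_at_step = {step: {u for u, s in events if s == step} for step in funnel}
    total = len(users_at_step[funnel[0]]) or 1
    return {step: round(len(users_at_step[step]) / total, 2) for step in funnel}

print(step_reach_rates(events, FUNNEL))
# {'open_chatbot': 1.0, 'get_recommendation': 0.67, 'purchase': 0.33}
```

The same log yields opt-in rates (users who enable personalization divided by users prompted) and repeat engagement (users with more than one session).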

Emotional Trust

Trust is not just rational, it’s emotional. The tone of a voice assistant, the empathy in a chatbot’s reply, or how “human” a recommendation feels all play into emotional trust. How to track it:

  • Sentiment analysis from chat transcripts and reviews.
  • Customer frustration or delight signals from support tickets.
  • Tone and emotional language in user feedback.

Cognitive Trust

This is where understanding meets confidence. When your AI explains itself clearly – or when customers understand what it can and can’t do – they’re more likely to trust the output. How to track it:

  • Feedback on explainability (“I understood why I got this recommendation”).
  • Click-through or acceptance rates of AI-generated content or decisions.
  • Post-interaction surveys that assess clarity.

Today’s marketers are moving toward real-time trust dashboards – tools that monitor how users interact with AI systems across channels. These dashboards track behavior, sentiment, and comprehension all at once.

According to MIT Media Lab researchers, combining these signals provides a richer picture of trust than any single survey can. It also gives teams the agility to address trust breakdowns as they happen – like confusion over AI-generated content or friction in AI-powered customer journeys.
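One way such a dashboard might combine the three signals is a weighted composite. This is a hedged sketch, not a published MIT Media Lab metric: the normalization, inputs, and weights are all assumptions a team would tune for its own channels.

```python
def trust_score(behavioral, emotional, cognitive, weights=(0.4, 0.3, 0.3)):
    """Combine three normalized (0-1) trust signals into one composite.

    Illustrative inputs: behavioral could be an opt-in rate, emotional a
    mean sentiment score, cognitive the share of users rating an AI
    explanation as clear. Weights are arbitrary placeholders.
    """
    signals = (behavioral, emotional, cognitive)
    if not all(0.0 <= s <= 1.0 for s in signals):
        raise ValueError("signals must be normalized to the 0-1 range")
    return round(sum(w * s for w, s in zip(weights, signals)), 3)

# A dashboard tile might recompute this per channel as new events arrive.
print(trust_score(behavioral=0.72, emotional=0.55, cognitive=0.80))  # 0.693
```

Tracking the score per channel over time is what surfaces trust breakdowns as they happen, rather than weeks later in a survey.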

Customers don’t expect AI to be perfect. But they do expect it to be honest and understandable. That’s why brands should:

  • Label AI-generated content clearly.
  • Explain how decisions like pricing, recommendations, or targeting are made.
  • Give customers control over data and personalization.

Building trust is less about tech perfection and more about perceived fairness, clarity, and respect.

Measuring that trust means going deeper than satisfaction. Use behavioral, emotional, and cognitive signals to track trust in real-time – and design AI systems that earn it.


To read the full book, SEJ readers have an exclusive 25% discount code and free shipping to the US and UK. Use promo code ‘SEJ25’ at koganpage.com.



References

  • Hofstede, G (2011) Dimensionalizing Cultures: The Hofstede Model in Context, Online Readings in Psychology and Culture, 2 (1), scholarworks.gvsu.edu/cgi/viewcontent.cgi?article=1014&context=orpc (archived at https://perma.cc/B7EP-94CQ)
  • ISACA (2024) AI Ethics: Navigating Different Cultural Contexts, December 6, www.isaca.org/resources/news-and-trends/isaca-now-blog/2024/ai-ethics-navigating-different-cultural-contexts (archived at https://perma.cc/3XLA-MRDE)
  • Kahr, P K, Meijer, S A, Willemsen, M C, and Snijders, C C P (2023) It Seems Smart, But It Acts Stupid: Development of Trust in AI Advice in a Repeated Legal Decision-Making Task, Proceedings of the 28th International Conference on Intelligent User Interfaces. doi.org/10.1145/3581641.3584058 (archived at https://perma.cc/SZF8-TSK2)
  • KPMG International and The University of Queensland (2023) Trust in Artificial Intelligence: A Global Study, assets.kpmg.com/content/dam/kpmg/au/pdf/2023/trust-in-ai-global-insights-2023.pdf (archived at https://perma.cc/MPZ2-UWJY)
  • McKinsey & Company (2023) The State of AI in 2023: Generative AI’s Breakout Year, www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year (archived at https://perma.cc/V29V-QU6R)
  • MIT Media Lab (n.d.) Research Projects, accessed April 8, 2025
  • OECD (2024) OECD Artificial Intelligence Review of Germany, www.oecd.org/en/publications/2024/06/oecd-artificial-intelligence-review-of-germany_c1c35ccf.html (archived at https://perma.cc/5DBS-LVLV)
  • Pew Research Center (2023) Public Awareness of Artificial Intelligence in Everyday Activities, February, www.pewresearch.org/wp-content/uploads/sites/20/2023/02/PS_2023.02.15_AI-awareness_REPORT.pdf (archived at https://perma.cc/V3SE-L2BM)
  • Quantum Zeitgeist (2025) How Cultural Differences Shape Fear of AI in the Workplace, Quantum News, February 22, quantumzeitgeist.com/how-cultural-differences-shape-fear-of-ai-in-the-workplace-a-global-study-across-20-countries/ (archived at https://perma.cc/3EFL-LTKM)
  • Tsumura, T and Yamada, S (2023) Making an Agent’s Trust Stable in a Series of Success and Failure Tasks Through Empathy, arXiv. arxiv.org/abs/2306.09447 (archived at https://perma.cc/L7HN-B3ZC)
  • World Economic Forum (2025) How AI Can Move from Hype to Global Solutions, www.weforum.org/stories/2025/01/ai-transformation-industries-responsible-innovation/ (archived at https://perma.cc/5ALX-MDXB)


Perplexity Comet Browser Vulnerable To Prompt Injection Exploit

Brave published details about a security issue with Comet, Perplexity’s AI browser, that enables an attacker to inject a prompt into the browser and gain access to data in other open browser tabs.

Comet AI Browser Vulnerability

Brave described a vulnerability that can be activated when a user asks the Comet AI browser to summarize a web page. The LLM will read the web page, including any embedded prompts that command the LLM to take action on any open tabs.

According to Brave:

“The vulnerability we’re discussing in this post lies in how Comet processes webpage content: when users ask it to “Summarize this webpage,” Comet feeds a part of the webpage directly to its LLM without distinguishing between the user’s instructions and untrusted content from the webpage. This allows attackers to embed indirect prompt injection payloads that the AI will execute as commands. For instance, an attacker could gain access to a user’s emails from a prepared piece of text in a page in another tab.”

A post on Simon Willison’s Weblog shared that Perplexity tried to patch the vulnerability but the fix does not work.

A developer posted the following on X:

“Why is no one talking about this?

This is why I don’t use an AI browser

You can literally get prompt injected and your bank account drained by doomscrolling on reddit:”

Things aren’t looking good for Comet Browser at this time.

How lidar measures the cost of climate disasters

The wildfires that swept through Los Angeles County in January 2025 left an indelible mark on the Southern California landscape. The Eaton and Palisades fires raged for 24 days, killing 29 people and destroying 16,000 structures, with losses estimated at $60 billion. More than 55,000 acres were consumed, and the landscape itself was physically transformed.

Researchers are now using lidar (light detection and ranging) technology to precisely measure these changes in the landscape’s geometry—helping them understand the effects of climate disasters.

Lidar, which measures how long it takes for pulses of laser light to bounce off surfaces and return, has been used in topographic mapping for decades. Today, airborne lidar from planes and drones maps the Earth’s surface in high detail. Scientists can then “diff” the data—compare before-and-after snapshots and highlight all the changes—to identify more subtle consequences of a disaster, including fault-line shifts, volcanic eruptions, and mudslides.
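At its core, the “diff” is elementwise subtraction of two gridded elevation snapshots. The sketch below uses tiny hard-coded arrays as stand-in elevation grids; real lidar tiles would be loaded from point-cloud or raster files, and the 0.5 m noise threshold is an assumption.

```python
import numpy as np

# Hypothetical before/after elevation grids (meters) for the same area.
before = np.array([[102.0, 103.5, 101.0],
                   [ 98.0, 110.0,  99.5],
                   [ 97.0,  96.5,  96.0]])
after  = np.array([[102.0, 103.5, 101.0],
                   [ 98.0, 101.5,  99.5],
                   [ 97.0,  96.5,  98.0]])

diff = after - before      # negative = lost elevation (e.g., a burned structure)
lost = diff < -0.5         # threshold filters out sensor noise
gained = diff > 0.5        # e.g., vegetation growth or new construction

print(diff[1, 1])                # -8.5, roughly a destroyed building's height
print(lost.sum(), gained.sum())  # 1 1
```

Visualizations like Brigham’s map the `lost` cells in red and the `gained` cells in blue.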

Falko Kuester, an engineering professor at the University of California, San Diego, co-directs ALERTCalifornia, a public safety program that uses real-time remote sensing to help detect wildfires. Kuester says lidar snapshots can tell a story over time.

“They give us a lay of the land,” he says. “This is what a particular region has been like at this point in time. Now, if you have consecutive flights at a later time, you can do a ‘difference.’ Show me what it looked like. Show me what it looks like. Tell me what changed. Was something constructed? Something burned down? Did something fall down? Did vegetation grow?” 

Shortly after the fires were contained in late January 2025, ALERTCalifornia sponsored new lidar flights over the Eaton and Palisades burn areas. NV5, an inspection and engineering firm, conducted the scans, and the US Geological Survey is now hosting the public data sets.  

Comparing a 2016 lidar snapshot and the January 2025 snapshot, Cassandra Brigham and her team at Arizona State University visualized the elevation changes—revealing the buildings, trees, and structures that had disappeared.

“We said, what would be a useful product for people to have as quickly as possible, since we’re doing this a couple weeks after the end of the fires?” says Brigham. Her team cleaned and reformatted the older, lower-resolution data and then subtracted the newer data. The resulting visualizations reveal the scale of devastation in ways satellite imagery can’t match. Red shows lost elevation (like when a building burns), and blue shows a gain (such as tree growth or new construction).

Lidar is helping scientists track the cascading effects of climate-driven disasters—from the damage to structures and vegetation destroyed by wildfires to the landslides and debris flows that often follow in their wake. “For the Eaton and Palisades fires, for example, entire hillsides burned. So all of that vegetation is removed,” Kuester says. “Now you have an atmospheric river coming in, dumping water. What happens next? You have debris flows, mud flows, landslides.”

Lidar’s usefulness for quantifying the costs of climate disasters underscores its value in preparing for future fires, floods, and earthquakes. But as policymakers weigh steep budget cuts to scientific research, these crucial lidar data collection projects could face an uncertain future.

Jon Keegan writes about technology and AI, and he publishes Beautiful Public Data (beautifulpublicdata.com), a curated collection of government data sets.

Framework for Ecommerce Merchandising

Merchandising is an ongoing process of presenting products to boost sales. Every ecommerce site merchandises its products either consciously or by default.

  • Nike features in-season sports and products on its home page.
  • Amazon reminds shoppers of their recent searches.
  • Wayfair bundles and cross-sells complementary items.

Many ecommerce platforms have basic merchandising built in directly or in popular themes. These built-ins make merchandising easy, but not necessarily optimized.

Nike’s home page hero slide is seasonal. It features sports or products, such as this example inspiring shoppers to dress like tennis star Carlos Alcaraz.

Strategy

Merchandising techniques vary. Some focus on visuals or product curation. Others use behavioral economics and personalization for maximum persuasion.

Still others think about merchandising as it applies to the buyer’s journey. The result is a five-step framework, where each step defines a set of tactics that move shoppers from curiosity to purchase.

Inspiration

The first set of tactics aims at product inspiration. It seeks to stimulate the shopper and help them imagine a lifestyle or a need your products solve.

These tactics often manifest themselves as:

  • Hero images featuring products or life situations,
  • Category headers with aspiration images,
  • Seasonal sections (think back to school),
  • Editorial content.

For example, the recipes found on Le Creuset’s site help a shopper imagine making the meal for a special occasion and experiencing the joy of sharing food with folks you love. The content (merchandising) evokes a feeling and, ultimately, sells the enamel-covered cast iron cookware needed to bring that feeling to life.

Content marketing can be a powerful form of product merchandising. Le Creuset uses recipes to inspire shoppers.

Guidance

The second set of tactics in this merchandising framework aims to reduce friction and guide shoppers toward products they are likely to buy.

These techniques often take the form of:

  • Straightforward or supplementary navigation like “Shop by Room” or “Shop by Activity,”
  • Thoughtful search results with autocomplete, synonym mapping, and intelligent ranking,
  • Sortable category pages with filtering by size, color, price, availability, or reviews.

Many ecommerce platforms include these sorts of navigational features, but they require optimizing so that a search results page is a thoughtful arrangement of relevant items, not a keyword-based SKU dump.


Origin allows shoppers to filter apparel for color, size, and activity, such as jiu-jitsu and hunting.

Persuasion

Next come the tactics meant to influence choices and encourage shoppers to make complementary purchases.

While a merchant will measure the success of these efforts in revenue and average order value, persuasive merchandising should seek the shopper’s best interest.

Don’t make frivolous recommendations or use tricks, but do describe your products in a way that encourages the sale. For example, persuasion merchandising might include:

  • Editorial content in written or video format,
  • Social proof such as ratings, reviews, or even “top seller” labels,
  • Product recommendations, including up-selling and cross-selling,
  • Indications of scarcity (“only 12 in stock”),
  • Additional discounts or free shipping messages.

Screenshot from Wasson, a watchmaker, showing a watch with only 58 remaining in stock.

A low inventory count can be persuasive.

Conversion

On-site merchandising often culminates near the buy-now button or on shopping cart and checkout pages. Here, the aim is to reassure and close the sale.

Conversion messages might be:

  • Clear return policies, guarantees, and payment security icons that convey trust,
  • Suggested add-ons or loyalty perks,
  • Product quality indicators.

A simple, well-placed message that removes a shopper’s final doubts or concerns effectively closes the sale.

These three small graphics — American made, easy returns, and free shipping — make a final argument to buy.

Retention

A final set of merchandising tactics enables retention marketing. It is the conversion after the sale.

Add AI

While each category of merchandising tactics in this framework requires human initiative and planning, AI can certainly help with execution, including content generation and personalization.

For Inspiration, for example, an ecommerce merchandiser might prepare a seasonal banner for back-to-school or the football season. Better still, AI personalization could select or generate dynamic banners to match a shopper’s personal profile and previous behavior.

Putting It Together

  • Inspiration: Spark desire.
  • Guidance: Help find.
  • Persuasion: Help choose.
  • Conversion: Help buy.
  • Retention: Connect.

Merchandising shapes how shoppers discover, evaluate, and purchase products. A sound framework ensures that each step of the shopping journey contributes to sales and customer relationships.

The 5-Step Process To Setting Crystal Clear PPC Goals

Many agencies and marketers believe that success in paid media is primarily down to the quality of your ads or the specificity of your landing pages.

While those elements are important, they’re meaningless unless they sit on a foundation of alignment with client needs.

The cleanest account structure and flawless creatives may hit every platform benchmark, but any success will be short-lived if you’re not clued into what’s actually important to your clients.

Higher revenues, more profit, better lead quality, shorter sales cycles – this is what typically matters to the people paying the bills.

At JXT Group, we make sure that the foundation is laid before building a single campaign by gathering a clear picture of how our clients make money, who their ideal customers are, and what a proper conversion looks like.

Here are the five phases we use to engineer that experience.

1. Understand The Business Model

Financially, most Google Ads clients can be split into one of two business models: those that sell products at face value and those that want leads who convert at a later date, typically through an offline interaction.

Verticals like ecommerce and info products sell their goods (physical or otherwise) at face value, allowing you to see revenue figures inside of Google Ads.

Verticals like local services and SaaS rely on capturing interest in the form of phone calls, form fills, and chat sessions. These leads may or may not turn into actual sales later.

Anyone dealing with physical products also has to factor cash flow, procurement costs, shipping fees, and return rates into both how much they can spend as well as how much return they need on their ad spend.

This means that the same 4x return on ad spend (ROAS) can be great for one brand with low expenses, but put another underwater.

It’s why you cannot use platform metrics like ROAS while ignoring what actually results in net profit after fulfillment.
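The arithmetic behind that claim is simple to sketch. The cost structures below are hypothetical, and the model ignores refund-timing and recovered-goods nuances; it only shows how the same 4x ROAS lands on opposite sides of break-even.

```python
def net_profit(ad_spend, roas, cogs_rate, fulfillment_rate, return_rate):
    """Rough net profit after fulfillment; all rates are fractions of revenue."""
    revenue = ad_spend * roas
    kept_revenue = revenue * (1 - return_rate)        # revenue surviving returns
    costs = revenue * (cogs_rate + fulfillment_rate) + ad_spend
    return kept_revenue - costs

# Lean brand: cheap goods, low returns -> 4x ROAS is comfortably profitable.
print(net_profit(10_000, 4.0, cogs_rate=0.30, fulfillment_rate=0.10, return_rate=0.05))

# Heavy brand: costly goods, shipping, returns -> the same 4x ROAS loses money.
print(net_profit(10_000, 4.0, cogs_rate=0.55, fulfillment_rate=0.15, return_rate=0.12))
```

The first brand nets roughly $12,000 on $10,000 of spend; the second loses about $2,800 at the identical platform-reported ROAS.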

And leads need to be both high in quality and catered to promptly; otherwise, brands run the risk of low final conversion rates.

As marketers, we want to drive the right type of leads at a cost that matches a client’s close rates and order values. That requires longer feedback loops and tighter customer relationship management (CRM) integration so we can optimize to actual revenue.

2. Match Goals To Client Priorities

Simply put, not every client is chasing the same outcome.

Some want to scale aggressively and are comfortable with a higher cost-per-acquisition (CPA), while others are laser-focused on efficiency and won’t move unless the numbers are dialed in.

I’ve worked with brands whose main goal was a clean presence, ensuring their ads show only on high-quality placements and live up to their internal values.

There are other niche goals, like outbidding a certain competitor or positioning themselves with a certain audience. All of these are valid, but they require different approaches.

Obviously, you can’t do anything until you figure out what matters most to the client. It might sound obvious, but too many agencies make assumptions based on platform key performance indicators (KPIs).

Just because Google says a campaign is performing “well” doesn’t mean it’s aligned with your client’s goals.

We start by asking the right questions, such as:

  • What would success look like six to 12 months from now?
  • Is your first priority profitability, growth, market share, or brand presence?
  • Would you rather trade volume for efficiency or efficiency for volume?

Once that’s established, we structure everything else around it:

  • How much budget is required.
  • Which campaign types to run and how to structure them.
  • What bid strategies we use.
  • How broad or narrow our targeting needs to be.
  • Messaging on ads and landing pages.
  • Negative keyword lists.
  • Targets for impression share, ROAS/CPA, and other KPIs.

Without these first foundational layers, everything else you do is just guesswork.

3. Set Comprehensive And Specific Goals

Once we understand the client’s business model and goals, it’s time to layer in our expertise. This part involves setting realistic goals that balance client desires with what we know is possible.

We’ll typically call on our vertical knowledge, experiences with past clients, and our understanding of unit economics and fulfillment to paint a complete picture.

There’s no room for mistakes like setting an arbitrary ROAS goal without asking what that revenue actually does for the business. After all, a 3x ROAS doesn’t mean much if the margins are thin or there are hidden costs later on.

With lead generation, the conversion doesn’t end with our intake form. In fact, it’s only the first step. The real value happens offline, when the lead turns into a paying customer, a step Google has no visibility into.

That gap is where the greatest insights and opportunities lie, and it’s vital that we account for it.

Here’s how to goal-set so that media performance ties back to real-world business needs.

Ecommerce

1. Look at the numbers behind the numbers.

This means breaking down the client’s cost structure.

What’s the cost of goods sold? How much does shipping cost per order? Are there fulfillment fees, returns, or seasonal procurement issues? How many other vendors get paid whose fees need to be accounted for in the ROAS target?

These offline costs directly impact ad sustainability.

2. Understand margins at the SKU or category level.

Not every product has the same margin, so some items can scale at a lower ROAS while others need to stay profitable at first touch.

We try to segment products by margin so we can set different targets where it makes sense.

3. Factor in blended performance.

A customer might enter the funnel through Google Ads but convert through another channel, like email.

We’ll study how Google fits into the entire ecosystem rather than trust a narrow window of last-click attribution, so that we can temper expectations based on how it all fits together.

4. Set realistic ROAS targets.

Once we understand the financials, it’s time to work backwards.

What’s the minimum ROAS needed to break even? What target ROAS will let the brand hit profitability goals?

This becomes our baseline and gives us a platform from which to build situational variance for things like seasonal demand, new product launches, and what competitors are doing.
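The working-backwards step can be written as a simple formula (the contribution margins and profit goals below are hypothetical, and fixed costs are ignored for brevity):

```python
def target_roas(contribution_margin, desired_net_margin=0.0):
    """ROAS needed so the desired share of revenue is left as profit
    after ad spend. Derivation:
        revenue * cm - spend = revenue * desired_net_margin
        => spend = revenue * (cm - desired_net_margin)
        => ROAS  = revenue / spend = 1 / (cm - desired_net_margin)
    With desired_net_margin = 0 this is the break-even ROAS.
    """
    return 1 / (contribution_margin - desired_net_margin)

# Hypothetical brand with a 40% contribution margin:
base = target_roas(0.40)         # 2.5x just to break even
goal = target_roas(0.40, 0.15)   # 4.0x to keep 15% of revenue as profit
```

The break-even figure becomes the floor, and the profitability figure becomes the working target before layering in situational variance.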

5. Clarify the business objective behind the spend.

Not all brands spend on ads for the same reason. Some want to acquire new customers, others want to clear out inventory, and others still are launching a new product or range.

Each of these goals needs its own approach to bidding, creative, and measurement.

Lead Generation

1. Map the full conversion journey.

What happens after a lead submits a form or makes a call? Who follows up, how quickly, and what’s the typical close rate?

There is a full post-click sales flow that exists after someone registers their interest. If we don’t understand it, we’re optimizing in the dark.

2. Quantify the value of a lead.

Different leads have different values, and Google is not privy to any of this unless you share that data back as offline conversions.

For lead gen clients, we look at historical data on how many leads turn into sales and how quickly, what the average deal size is, and what the margin looks like.

Then, we set up integrations between Google Ads and their CRM to feed this data back and optimize against it.

3. Use the funnel to set a target CPA.

Once we know things like typical deal value and close rate, we can reverse engineer our way to a CPA that leaves enough margin on the plate.

For example, needing 30 leads to close one deal worth $1,000 leaves very limited margins and risks pricing us out of the auction.

A client that closes 1 in 10 leads with a $5,000 average sale gives us a much higher ceiling on what they can pay per lead while staying profitable.
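That reverse engineering can be written down directly. A sketch using the numbers above (the 40% gross margin and the 50/50 split between profit and ad spend are assumptions for illustration):

```python
def max_cpa(close_rate, avg_deal_value, gross_margin, profit_share=0.5):
    """Highest lead cost that still leaves room for profit.

    Gross profit per lead = close rate * deal value * margin.
    We keep `profit_share` of that as profit and let the rest
    go to ads. All inputs are illustrations, not benchmarks.
    """
    profit_per_lead = close_rate * avg_deal_value * gross_margin
    return profit_per_lead * (1 - profit_share)

# 1 deal per 30 leads, $1,000 deals, 40% margin:
tight = max_cpa(1 / 30, 1_000, 0.40)   # about $6.67 per lead: very little room
# 1 in 10 leads closes a $5,000 sale at the same margin:
roomy = max_cpa(0.10, 5_000, 0.40)     # $100 per lead
```

The second client can afford roughly fifteen times the cost per lead of the first while staying profitable, which is exactly the ceiling difference described above.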

4. Control anything we can post-click.

Lead gen gives us a greater opportunity to influence conversions after they click. This means landing page user experience and messaging, form length and format, automated email follow-ups, and CRM workflows.

Small changes here can have an outsized impact on close rates and lead quality.

4. Employ Active Listening During Conversations

Meeting with a new client is a bit like hanging out with someone new for the first time. They might not be willing to dive deep or share as openly as we’d like, but it’s our job to make them feel comfortable enough to do so.

Surface-level answers will only take us so far. To set a truly solid strategy, we want to listen to what’s in the spaces between their words.

What are they really trying to solve? Are they really after more profit or market share, or do they just want cleaner reporting now that they have investors to answer to?

A client might say they want “more leads” when what they really need are better leads that their sales team can actually close, but you’ll never surface that if you take everything they say at face value.

Active listening shows up in the details:

  • Picking up on how the client talks about their sales process, not just the form submission.
  • Hearing concerns about inventory issues before pushing hard on a best-seller.
  • Noticing when a CEO cares more about market visibility than ROAS.

It’s a skill that takes time to develop, but it’s also the only way to avoid misalignment and really build trust.

Get this right, and your client will feel like you’re there to make them look great and are willing to run through brick walls for them.

5. Ask Probing, Leading Questions To Reveal The Full Picture

Potential clients who put up walls need you to cut through the noise.

These questions will help you get to the real motivation behind their desire to spend on paid search, as well as allow you to spot red flags that might indicate a difficult client.

Business Direction

  • What would success look like to you in the next six to 12 months? This helps them move beyond “more leads” or “better ROAS” and focus on outcomes.
  • If Google Ads disappeared tomorrow, what would break in your business? This reveals how critical paid media is to their revenue engine.
  • Is this about profitability, growth, or positioning? Most clients will say “all three,” but keep pressing, and they’ll tell you what they’d sacrifice first.
  • Are you looking to maintain, grow, or exit? You should know if they’re scaling to sell, which changes everything about risk tolerance and KPIs.

Finance & Economics

  • What’s your average profit margin after all costs, e.g., ads, fulfillment, labor? If they don’t have this information ready and can’t/won’t source it, that should be a red flag about their openness.
  • What do you pay to acquire a customer? What’s the most you can afford to pay? See if they’re thinking in terms of lifetime value or just looking at front-end performance.
  • Do we need to factor in any fixed costs that most media buyers wouldn’t know about? It opens the door to discussions about warehousing, returns, sales commissions, etc.

Lead Quality & Sales Process

  • What do you consider to be a “qualified” lead? This forces them to define quality, which is far superior to treating all leads the same or leaving the definition vague.
  • What happens after a lead comes through? You want to know how long it usually takes to close a deal and what their team does to facilitate that. The answer will show you how strong or weak their internal follow-up process is.
  • How often do you listen to sales calls or review what’s happening post-click? If the answer is never, it tells you the magnitude of the support they’ll need to improve close rates. This might not be something you can control.

Bottlenecks & Internal Dynamics

  • Who has the final say on marketing and business decisions? You’ll avoid many headaches and painful back-and-forth by establishing this upfront.
  • What have you tried in the past that didn’t work, and why not? Ask this to get insight into previous agency relationships, internal friction, or unrealistic expectations.
  • If we start today and in six months you’re unhappy, what will have gone wrong? This one is gold as it can expose fears, past traumas, and give you a roadmap on how to hit alignment.

But even if you get all these answers and follow all the advice in this article, communication with your clients is the key to establishing a relationship where you’re trusted and given space to operate.

Without proactive and consistent two-way communication, their perceptions may not align with what you’re doing.

Remember: You’re The Expert, But You’re Not In Charge

One thing many agencies and marketers tend to forget as they manage thousands or even millions of dollars in ad spend is that we build on leased land. These are not our accounts and campaigns, and we don’t pay the advertising bills.

So, even though it’s important for clients to defer to our expertise, ultimately, they’re the ones who call the shots when it comes to direction and strategy.

The other angle to this is that it’s not our job to make ourselves look good or even to get a solid case study out of an engagement; those are bonuses.

Our job is to service client needs, maximize results within the spend allocated to us, and make our clients look phenomenal in front of the people they answer to.


How To Leverage AI To Modernize B2B Go-To-Market via @sejournal, @alexanderkesler

In a post “growth-at-all-costs” era, B2B go-to-market (GTM) teams face a dual mandate: operate with greater efficiency while driving measurable business outcomes.

Many organizations see AI as the definitive means of achieving this efficiency.

The reality is that AI is no longer a speculative investment. It has emerged as a strategic enabler to unify data, align siloed teams, and adapt to complex buyer behaviors in real time.

According to an SAP study, 48% of executives use generative AI tools daily, while 15% use AI multiple times per day.

The opportunity for modern GTM leaders is not just to accelerate legacy tactics with AI, but to reimagine the architecture of their GTM strategy altogether.

This shift represents an inflection point. AI has the potential to power seamless and adaptive GTM systems: measurable, scalable, and deeply aligned with buyer needs.

In this article, I will share a practical framework to modernize B2B GTM using AI, from aligning internal teams and architecting modular workflows to measuring what truly drives revenue.

The Role Of AI In Modern GTM Strategies

For GTM leaders and practitioners, AI represents an opportunity to achieve efficiency without compromising performance.

Many organizations leverage new technology to automate repetitive, time-intensive tasks, such as prospect scoring and routing, sales forecasting, content personalization, and account prioritization.

But its true impact lies in transforming how GTM systems operate: consolidating data, coordinating actions, extracting insights, and enabling intelligent engagement across every stage of the buyer’s journey.

Where previous technologies offered automation, AI introduces sophisticated real-time orchestration.

Rather than layering AI onto existing workflows, AI can be used to enable previously unscalable capabilities such as:

  • Surfacing and aligning intent signals from disconnected platforms.
  • Predicting buyer stage and engagement timing.
  • Providing full pipeline visibility across sales, marketing, client success, and operations.
  • Standardizing inputs across teams and systems.
  • Enabling cross-functional collaboration in real time.
  • Forecasting potential revenue from campaigns.

With AI-powered data orchestration, GTM teams can align on what matters, act faster, and deliver more revenue with fewer resources.

AI is not merely an efficiency lever. It is a path to capabilities that were previously out of reach.

Framework: Building An AI-Native GTM Engine

Creating a modern GTM engine powered by AI demands a re-architecture of how teams align, how data is managed, and how decisions are executed at every level.

Below is a five-part framework that explains how to centralize data, build modular workflows, and train your model:

1. Develop Centralized, Clean Data

AI performance is only as strong as the data it receives. Yet, in many organizations, data lives in disconnected silos.

Centralizing structured, validated, and accessible data across all departments at your organization is foundational.

AI needs clean, labeled, and timely inputs to make precise micro-decisions. These decisions, when chained together, power reliable macro-actions such as intelligent routing, content sequencing, and revenue forecasting.

In short, better data enables smarter orchestration and more consistent outcomes.

Luckily, AI can be used to break down these silos across marketing, sales, client success, and operations by leveraging a customer data platform (CDP), which integrates data from your customer relationship management (CRM), marketing automation (MAP), and customer success (CS) platforms.

The steps are as follows:

  • Appoint a data steward who owns data hygiene and access policies.
  • Select a CDP that pulls records from your CRM, MAP, and other tools with client data.
  • Configure deduplication and enrichment routines, and tag fields consistently.
  • Establish a shared, organization-wide dashboard so every team works from the same definitions.
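The deduplication and standardization step might look like this minimal sketch, assuming records exported from hypothetical CRM and MAP systems as plain dicts (a real CDP adds fuzzy matching and identity resolution on top):

```python
def normalize_email(email):
    """Use a normalized email as the shared account identifier."""
    return email.strip().lower()

def merge_records(sources):
    """Merge records from several systems, keyed on normalized email.
    Later sources fill gaps but never overwrite existing values,
    so the system of record listed first wins on conflicts."""
    merged = {}
    for source in sources:
        for record in source:
            key = normalize_email(record["email"])
            entry = merged.setdefault(key, {})
            for field, value in record.items():
                if field != "email" and value and field not in entry:
                    entry[field] = value
    return merged

# Hypothetical exports: same person, inconsistent casing and whitespace.
crm = [{"email": "Ana@Example.com ", "stage": "opportunity"}]
map_export = [{"email": "ana@example.com", "last_campaign": "webinar-q3"}]
unified = merge_records([crm, map_export])
```

The point of the sketch is the ordering rule: whichever system you designate as the system of record goes first, and every other source only enriches.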

Recommended starting point: Schedule a workshop with operations, analytics, and IT to map current data sources and choose one system of record for account identifiers.

2. Build An AI-Native Operating Model

Instead of layering AI onto legacy systems, organizations will be better suited to architect their GTM strategies from the ground up to be AI-native.

This requires designing adaptive workflows that rely on machine input and positioning AI as the operating core, not just a support layer.

AI can deliver the most value when it unifies previously fragmented processes.

Rather than simply accelerating isolated tasks like prospect scoring or email generation, AI should orchestrate entire GTM motions, seamlessly adapting messaging, channels, and timing based on buyer intent and journey stage.

Achieving this transformation demands new roles within the GTM organization, such as AI strategists, workflow architects, and data stewards.

In other words, experts focused on building and maintaining intelligent systems rather than executing manual processes.

AI-enabled GTM is not about automation alone; it’s about synchronization, intelligence, and scalability at every touchpoint.

Once you have committed to building an AI-native GTM model, the next step is to implement it through modular, data-driven workflows.

Recommended starting point: Assemble a cross-functional strike team and map one buyer journey end-to-end, highlighting every manual hand-off that could be streamlined by AI.

3. Break Down GTM Into Modular AI Workflows

A major reason AI initiatives fail is that organizations try to do too much at once. This is why large, monolithic projects often stall.

Success comes from deconstructing large GTM tasks into a series of focused, modular AI workflows.

Each workflow should perform a specific, deterministic task, such as:

  • Assessing prospect quality on certain clear, predefined inputs.
  • Prioritizing outreach.
  • Forecasting revenue contribution.

If we take the first workflow, which assesses prospect quality, this would entail integrating or implementing a lead scoring AI tool with your model and then feeding in data such as website activity, engagement, and CRM data. You can then instruct your model to automatically route top-scoring prospects to sales representatives, for example.

Similarly, for your forecasting workflow, connect forecasting tools to your model and train it on historical win/loss data, pipeline stages, and buyer activity logs.
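The scoring-and-routing workflow can be sketched with simple rule-based logic (the weights, field names, and routing threshold below are hypothetical; a production system would learn them from historical win/loss data):

```python
# Hypothetical predefined inputs and weights for prospect quality.
WEIGHTS = {"visited_pricing": 30, "opened_emails": 2, "demo_request": 50}
ROUTE_THRESHOLD = 60  # scores at or above this go straight to sales

def score(prospect):
    """Deterministic score from clear, predefined signals."""
    s = WEIGHTS["visited_pricing"] if prospect.get("visited_pricing") else 0
    s += WEIGHTS["opened_emails"] * prospect.get("opened_emails", 0)
    s += WEIGHTS["demo_request"] if prospect.get("demo_request") else 0
    return s

def route(prospects):
    """Send high scorers to sales reps; keep the rest in nurture."""
    to_sales, nurture = [], []
    for p in prospects:
        (to_sales if score(p) >= ROUTE_THRESHOLD else nurture).append(p)
    return to_sales, nurture

leads = [
    {"name": "A", "visited_pricing": True, "opened_emails": 4, "demo_request": True},
    {"name": "B", "opened_emails": 1},
]
hot, cold = route(leads)
```

Because every input and weight is explicit, the routing decision is predictable and explainable, which is the property the modular-workflow approach is after.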

To sum up:

  • Integrate only the data required.
  • Define clear success criteria.
  • Establish a feedback loop that compares model output with real outcomes.
  • Once the first workflow proves reliable, replicate the pattern for additional use cases.

When AI is trained on historical data with clearly defined criteria, its decisions become predictable, explainable, and scalable.

Recommended starting point: Draft a simple flow diagram with seven or fewer steps, identify one automation platform to orchestrate them, and assign service-level targets for speed and accuracy.

4. Continuously Test And Train AI Models

An AI-powered GTM engine is not static. It must be monitored, tested, and retrained continuously.

As markets, products, and buyer behaviors shift, these changing realities affect the accuracy and efficiency of your model.

Plus, according to OpenAI itself, one of the latest iterations of its large language model (LLM) can hallucinate up to 48% of the time. That underscores the importance of embedding rigorous validation processes, first-party data inputs, and ongoing human oversight to safeguard decision-making and maintain trust in predictive outputs.

Maintaining AI model efficiency requires three steps:

  1. Set clear validation checkpoints and build feedback loops that surface errors or inefficiencies.
  2. Establish thresholds for when AI should hand off to human teams and ensure that every automated decision is verified. Ongoing iteration is key to performance and trust.
  3. Set a regular cadence for evaluation. At a minimum, conduct performance audits monthly and retrain models quarterly based on new data or shifting GTM priorities.

During these maintenance cycles, use the following criteria to test the AI model:

  • Ensure accuracy: Regularly validate AI outputs against real-world outcomes to confirm predictions are reliable.
  • Maintain relevance: Continuously update models with fresh data to reflect changes in buyer behavior, market trends, and messaging strategies.
  • Optimize for efficiency: Monitor key performance indicators (KPIs) like time-to-action, conversion rates, and resource utilization to ensure AI is driving measurable gains.
  • Prioritize explainability: Choose models and workflows that offer transparent decision logic so GTM teams can interpret results, trust outputs, and make manual adjustments as needed.
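A validation checkpoint like the ones described can be sketched as a straight comparison of predictions against real-world outcomes (the retraining threshold and sample data are hypothetical):

```python
RETRAIN_BELOW = 0.80  # hypothetical accuracy floor for the monthly audit

def checkpoint(predictions, outcomes):
    """Compare predicted vs. actual conversions per account and flag
    the model for retraining when accuracy falls below the floor.
    `predictions` and `outcomes` are parallel lists of booleans."""
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    accuracy = correct / len(predictions)
    return {"accuracy": accuracy, "retrain": accuracy < RETRAIN_BELOW}

# Last month's predicted conversions vs. what the CRM actually recorded:
report = checkpoint(
    predictions=[True, True, False, True, False],
    outcomes=[True, False, False, True, True],
)
# 3 of 5 correct: accuracy 0.6, so the retrain flag fires
```

Running a check like this on a fixed cadence is what turns the monthly audit from a calendar ritual into a measurable gate.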

By combining cadence, accountability, and testing rigor, you create an AI engine for GTM that not only scales but improves continuously.

Recommended starting point: Put a recurring calendar invite on the books titled “AI Model Health Review” and attach an agenda covering validation metrics and required updates.

5. Focus On Outcomes, Not Features

Success is not defined by AI adoption, but by outcomes.

Benchmark AI performance against real business metrics such as:

  • Pipeline velocity.
  • Conversion rates.
  • Client acquisition cost (CAC).
  • Marketing-influenced revenue.

Focus on use cases that unlock new insights, streamline decision-making, or drive action that was previously impossible.

When a workflow stops improving its target metric, refine or retire it.

Recommended starting point: Demonstrate the AI model’s value to stakeholders by showing its impact on pipeline opportunity or revenue generation.

Common Pitfalls To Avoid

1. Over-Reliance On Vanity Metrics

Too often, GTM teams focus AI efforts on optimizing for surface-level KPIs, like marketing qualified lead (MQL) volume or click-through rates, without tying them to revenue outcomes.

AI that increases prospect quantity without improving prospect quality only accelerates inefficiency.

The true test of value is pipeline contribution: Is AI helping to identify, engage, and convert buying groups that close and drive revenue? If not, it is time to rethink how you measure its efficiency.

2. Treating AI As A Tool, Not A Transformation

Many teams introduce AI as a plug-in to existing workflows rather than as a catalyst for reinventing them. This results in fragmented implementations that underdeliver and confuse stakeholders.

AI is not just another tool in the tech stack or a silver bullet. It is a strategic enabler that requires changes in roles, processes, and even how success is defined.

Organizations that treat AI as a transformation initiative will gain exponential advantages over those who treat it as a checkbox.

A recommended approach for testing workflows is to build a lightweight AI system with APIs to connect fragmented systems without needing complicated development.

3. Ignoring Internal Alignment

AI cannot solve misalignment; it amplifies it.

When sales, marketing, and operations are not working from the same data, definitions, or goals, AI will surface inconsistencies rather than fix them.

A successful AI-driven GTM engine depends on tight internal alignment. This includes unified data sources, shared KPIs, and collaborative workflows.

Without this foundation, AI can easily become another point of friction rather than a force multiplier.

A Framework For The C-Level

AI is redefining what high-performance GTM leadership looks like.

For C-level executives, the mandate is clear: Lead with a vision that embraces transformation, executes with precision, and measures what drives value.

Below is a framework grounded in the core pillars modern GTM leaders must uphold:

Vision: Shift From Transactional Tactics To Value-Centric Growth

The future of GTM belongs to those who see beyond prospect quotas and focus on building lasting value across the entire buyer journey.

When narratives resonate with how decisions are really made (complex, collaborative, and cautious), they unlock deeper engagement.

GTM teams thrive when positioned as strategic allies. The power of AI lies not in volume, but in relevance: enhancing personalization, strengthening trust, and earning buyer attention.

This is a moment to lean into meaningful progress, not just for pipeline, but for the people behind every buying decision.

Execution: Invest In Buyer Intelligence, Not Just Outreach Volume

AI makes it easier than ever to scale outreach, but quantity alone no longer wins.

Today’s B2B buyers are defensive, independent, and value-driven.

Leadership teams that prioritize both technology and strategic market imperatives will enable their organizations to better understand buying signals, account context, and journey stage.

This intelligence-driven execution ensures resources are spent on the right accounts, at the right time, with the right message.

Measurement: Focus On Impact Metrics

Surface-level metrics no longer tell the full story.

Modern GTM demands a deeper, outcome-based lens – one that tracks what truly moves the business, such as pipeline velocity, deal conversion, CAC efficiency, and the impact of marketing across the entire revenue journey.

But the real promise of AI is meaningful connection. When early intent signals are tied to late-stage outcomes, GTM leaders gain the clarity to steer strategy with precision.

Executive dashboards should reflect the full funnel because that is where real growth and real accountability live.

Enablement: Equip Teams With Tools, Training, And Clarity

Transformation does not succeed without people. Leaders must ensure their teams are not only equipped with AI-powered tools but also trained to use them effectively.

Equally important is clarity around strategy, data definitions, and success criteria.

AI will not replace talent, but it will dramatically increase the gap between enabled teams and everyone else.

Key Takeaways

  • Redefine success metrics: Move beyond vanity KPIs like MQLs and focus on impact metrics: pipeline velocity, deal conversion, and CAC efficiency.
  • Build AI-native workflows: Treat AI as a foundational layer in your GTM architecture, not a bolt-on feature to existing processes.
  • Align around the buyer: Use AI to unify siloed data and teams, delivering synchronized, context-rich engagement throughout the buyer journey.
  • Lead with purposeful change: C-level executives must shift from transactional growth to value-led transformation by investing in buyer intelligence, team enablement, and outcome-driven execution.


Non-Profit Organization Announces Free Domain Names via @sejournal, @martinibuster

A non-profit organization that is supported by Cloudflare, GitHub, and other organizations has open-sourced domain names, making them available with no catches or hidden fees. The sponsor of the free domain names explains that their purpose is not to replace commercial domain names but to offer an open-source alternative for developers, students, and people who want to create a hobby site for free.

The goal is to encourage making the Internet a free and open space so that everyone can publish and express themselves online without financial barriers.

DigitalPlat

The open-source domains are offered by DigitalPlat, a non-profit organization that’s sponsored by 1Password, The Hack Club (The Hack Foundation), Twilio, GitHub, and Cloudflare.

The Hack Foundation is a certified non-profit organization of high school students that receives support from hundreds of supporters, including Google.org and Elon Musk. The organization was founded in 2016.

According to their website:

“In 2018, The Hack Foundation expanded to act as a nonprofit fiscal sponsor for Hack Clubs, hackathons, community organizations, and other for-good projects.

Today, hundreds of diverse groups ranging from a small town newspaper in Vermont to the largest high-school hackathon in Pennsylvania are fiscally sponsored by The Hack Foundation.”

A notice posted on The Hack Foundation donation web page explains their connection to DigitalPlat:

“The DigitalPlat Foundation is a global non-profit organization that supports open-source and community development while exploring innovative projects. All funds are supervised and managed by The Hack Foundation, and are strictly regulated in compliance with US IRS guidance and legal requirements under section 501(c)(3). “

DigitalPlat FreeDomain

The free domain names can be registered via DigitalPlat and the free domains project is open source, licensed under AGPL-3.0.

An announcement was made by the GitHub Projects Community on X, with a link to a GitHub page for the free domains, where the following domain extensions are listed as choices:

  • .DPDNS.ORG
  • .US.KG
  • .QZZ.IO
  • .XX.KG

Technically, those are subdomains. But so are .uk.com domains.

The official GitHub page for the domains recommends using Cloudflare, FreeDNS by Afraid.org, or Hostry for managing the DNS for zero cost.

.KG is the country-code domain of Kyrgyzstan, and DPDNS.ORG is the domain name of DigitalPlat FreeDomain. .US.KG is operated by the DigitalPlat Foundation, a non-profit charitable organization sponsored by The Hack Foundation.

The Open-Source Projects page for the free domains explains the purpose and goals of the free domain offers:

“The project is open source (licensed under AGPL-3.0), transparent, and backed by The Hack Foundation, a U.S. 501(c)(3) nonprofit. This isn’t a trial or a limited-time offer—it’s a sustainable effort to increase accessibility on the web.”

Full directions for registering a free domain name can be found here.


GEO Tools for SMBs

AI-powered search is a new way for shoppers to discover products. ChatGPT, Perplexity, Claude, Gemini, and even AI Overviews answer shopping-related questions directly — no additional clicks required.

For brands, that’s a double-edged sword. The good news is the potential for additional exposure. The challenge is replacing organic search traffic (see the Semrush study) and surfacing the company and its products in those AI-generated answers.

A growing set of generative engine optimization (GEO) tools promises to fix this problem by measuring and improving how products and brands appear in the responses.

Few GEO platforms offer SKU-level capability — tracking and optimizing individual products in AI answers. Most focus on page-level optimization and citations, making it difficult to bulk update products with optimized content.

Nonetheless, I recently evaluated over a dozen of these GEO platforms to see which are viable for small and mid-sized businesses. Below are three recommendations with use cases, overviews, and limitations.


Writesonic

Writesonic focuses on product page optimization. It lets merchants rewrite and optimize individual product pages or articles for genAI, then publish them directly to Shopify, BigCommerce, or WordPress.

Here’s the workflow:

  1. Identify target pages. Manually select SKUs with poor organic search traffic, using Search Console, Shopify analytics, or other SEO tools.
  2. Analyze in Writesonic. Paste product page content into Writesonic or connect via API.
  3. Optimize with content metric. Edit the pages in real time with Writesonic’s Content Score metric.
  4. Update product pages. Export and publish optimized content to the ecommerce platform, keeping metadata and formatting intact.

Overview

  • Pricing: Tiered plans start at $49 per month.
  • Ease of use: Self-service, minimal learning curve.
  • Integrations: Direct with WordPress; export for Shopify and BigCommerce.
  • Content optimization: Strong, with rewrites of product pages and articles.

Limitations

  • Does not surface underperforming SKUs on its own.
  • No historical performance tracking.
  • No SKU-level competitive benchmarking.

Peec AI

Peec AI provides competitive benchmarking, showing merchants where their products and brands appear in AI-generated answers and how they compare to competitors. Peec AI doesn’t (yet) create or publish content, but its SKU-level gap analysis can guide optimization.

To use:

  • Identify visibility gaps. Track which prompts cite your brand and products, and those of competitors.
  • Analyze competitors. Monitor competitor product visibility at the SKU level for missed opportunities.
  • Export data. Pull CSV files (or link via API) to feed into your search engine, content, or analytics tools.
  • Refine on-page content. Update product pages in Shopify, BigCommerce, or other platforms, closing identified gaps.

Overview

  • Pricing: Tiered plans start at €89 per month ($103).
  • Ease of use: Simple dashboards; quick start.
  • Integrations: No direct cart integrations.
  • Content optimization: Monitoring only; no optimization tools.

Limitations

  • Does not optimize or publish product content.

Profound

Profound is primarily a measurement platform, monitoring how brands appear across AI-powered search engines. It doesn’t optimize or publish content, but it offers deep discovery and measurement capabilities that can inform SKU-level strategy.

To use:

  • Identify visibility gaps. Use Profound’s dashboards to track your products, categories, or brand in AI answers.
  • Analyze competitors. Benchmark against competitors to pinpoint missed opportunities and find high-impact prompts to target.
  • Surface related prompts. Filter by geography, category, or topic to find prompts that align with your products for potential conversions.
  • Use insights to optimize content. Export reports or integrate with analytics and SEO tools to guide on-site optimization.

Overview

  • Pricing: $499 per month with custom plans available.
  • Ease of use: Training required to interpret fully.
  • Integrations: No direct ecommerce cart integrations.
  • Content optimization: None. Focus is on measurement.

Limitations

  • Does not optimize or publish product content.

Getting Started

Merchants do not require expensive tools to improve genAI visibility. To start:

  • Audit your presence. Use free trials or affordable tools such as Peec AI to see how your products appear in AI answers.
  • Identify high-intent prompts. Ask the genAI platforms, “Identify the most common customer questions about [product/category] by analyzing Reddit, Quora, product reviews, support tickets, and forums.”
  • Start small. Pick a half-dozen products and categories to track monthly. Adjust and expand over time.

AI may produce first-time customers, but loyalty programs, email marketing, and standout service will bring them back.

On the ground in Ukraine’s largest Starlink repair shop

Oleh Kovalskyy thinks that Starlink terminals are built as if someone assembled them with their feet. Or perhaps with their hands behind their back. 

To demonstrate this last image, Kovalskyy—a large, 47-year-old Ukrainian, clad in sweatpants and with tattoos stretching from his wrists up to his neck—leans over to wiggle his fingers in the air behind him, laughing as he does. Components often detach, he says through bleached-white teeth, and they’re sensitive to dust and moisture. “It’s terrible quality. Very terrible.” 

But even if he’s not particularly impressed by the production quality, he won’t dispute how important the satellite internet service has been to his country’s defense. 

Starlink is absolutely critical to Ukraine’s ability to continue in the fight against Russia: It’s how troops in battle zones stay connected with faraway HQs; it’s how many of the drones essential to Ukraine’s survival hit their targets; it’s even how soldiers stay in touch with spouses and children back home. 

At the time of my visit to Kovalskyy in March 2025, however, it had begun to seem like this vital support system may suddenly disappear. Reuters had just broken news that suggested Musk, who was then still deeply enmeshed in Trump world, would remove Ukraine’s access to the service should its government fail to toe the line in US-led peace negotiations. Musk denied the allegations shortly afterward, but given Trump’s fickle foreign policy and inconsistent support of Ukrainian president Volodymyr Zelensky, the uncertainty of the technology’s future had become—and remains—impossible to ignore.  

Kovalskyy’s unofficial Starlink repair shop may be the biggest of its kind in the world. Ordered chaos is the best way to describe it. (Photos: Elena Subach)

The stakes couldn’t be higher: Another Reuters report in late July revealed that Musk had ordered the restriction of Starlink in parts of Ukraine during a critical counteroffensive back in 2022. “Ukrainian troops suddenly faced a communications blackout,” the story explains. “Soldiers panicked, drones surveilling Russian forces went dark, and long-range artillery units, reliant on Starlink to aim their fire, struggled to hit targets.”

None of this is lost on Kovalskyy—and for now Starlink access largely comes down to the unofficial community of users and engineers of which Kovalskyy is just one part: Narodnyi Starlink.

The group, whose name translates to “The People’s Starlink,” was created back in March 2022 by a tech-savvy veteran of the previous battles against Russia-backed militias in Ukraine’s east. It started as a Facebook group for the country’s young but fast-growing community of Starlink users—a forum to share guidance and swap tips—but it very quickly emerged as a major support system for the new war effort. Today, it has grown to almost 20,000 members, including the unofficial expert “Dr. Starlink”—famous for his creative ways of customizing the systems—and other volunteer engineers like Kovalskyy and his men. It’s a prime example of the many informal, yet highly effective, volunteer networks that have kept Ukraine in the fight, both on and off the front line.

Kovalskyy and his crew of eight volunteers have repaired or customized more than 15,000 terminals since the war began in February 2022. Here, they test repaired units in a nearby parking lot. (Photos: Elena Subach)

Kovalskyy gave MIT Technology Review exclusive access to his unofficial Starlink repair workshop in the city of Lviv, about 300 miles west of Kyiv. Ordered chaos is the best way to describe it: Spread across a few small rooms in a nondescript two-story building behind a tile shop, sagging cardboard boxes filled with mud-splattered Starlink casings form alleyways among the rubble of spare parts. Like flying buttresses, green circuit boards seem to prop up the walls, and coils of cable sprout from every crevice.

Those acquainted with the workshop refer to it as the biggest of its kind in Ukraine—and, by extension, maybe the world. Official and unofficial estimates suggest that anywhere from 42,000 to 160,000 Starlink terminals operate in the country. Kovalskyy says he and his crew of eight volunteers have repaired or customized more than 15,000 terminals since the war began.

The informal, accessible nature of the Narodnyi Starlink community has been critical to its success. One military communications officer was inspired by Kovalskyy to set up his own repair workshop as part of Ukraine’s armed forces, but he says that official processes can be slower than private ones by a factor of 10. (Photo: Elena Subach)

Despite the pressure, the prospect that they might lose access to Starlink was not worrying volunteers like Kovalskyy at the time of my visit; in our conversations, it was clear they had more pressing concerns than the whims of a foreign tech mogul. Russia continues to launch frequent aerial bombardments of Ukrainian cities, sometimes sending more than 500 drones in a single night. The threat of involuntary mobilization to the front line looms on every street corner. How can one plan for a hypothetical future crisis when crisis defines every minute of one’s day?


Almost every inch of every axis of the battlefield in Ukraine is enabled by Starlink. It connects pilots near the trenches with reconnaissance drones soaring kilometers above them. It relays the video feeds from those drones to command centers in rear positions. And it even connects soldiers, via encrypted messaging services, with their family and friends living far from the front.  

Although some soldiers and volunteers, including members of Narodnyi Starlink, refer to Starlink as a luxury, the reality is that it’s an essential utility; without it, Ukrainian forces would need to rely on other, often less effective means of communication. These include wired-line networks, mobile internet, and older geostationary satellite technology—all of which provide connectivity that is either slower, more vulnerable to interference, or more difficult for untrained soldiers to set up. 

“If not for Starlink, we would already be counting rubles in Kyiv,” Kovalskyy says.

The workshop’s crew has learned to perform adjustments to terminals, especially in adapting them for battlefield conditions. At right, a volunteer engineer shows the fragments of shrapnel he has extracted from the terminals. (Photos: Elena Subach)

Despite being designed primarily for commercial use, Starlink provides a fantastic battlefield solution. The low-latency, high-bandwidth connection its terminals establish with its constellation of low-Earth-orbit satellites can transmit large streams of data while remaining very difficult for the enemy to jam—in part because the satellites, unlike geostationary ones, are in constant motion. 

It’s also fairly easy to use, so that soldiers with little or no technical knowledge can connect in minutes. And the system costs much less than other military technology; while the US and Polish governments pay business rates for many of Ukraine’s Starlink systems, individual soldiers or military units can purchase the hardware at the private rate of about $500, and subscribe for just $50 per month.

No alternatives match Starlink for cost, ease of use, or coverage—and none will in the near future. Its constellation of 8,000 satellites dwarfs that of its main competitor, a service called OneWeb sold by the French satellite operator Eutelsat, which has only 630 satellites. OneWeb’s hardware costs about 20 times more, and a subscription can run significantly higher, since OneWeb targets business customers. Amazon’s Project Kuiper, the most likely future competitor, started putting satellites in space only this year. 


Volodymyr Stepanets, a 51-year-old Ukrainian self-described “geek,” had been living in Krakow, Poland, with his family when Russia invaded in 2022. But before that, he had volunteered for several years on the front lines of the war against Russian-supported paramilitaries that began in 2014. 

He recalls, in those early months in eastern Ukraine, witnessing troops coordinating an air strike with rulers and a calculator; the whole process took them between 30 and 40 minutes. “All these calculations can be done in one minute,” he says he told them. “All we need is a very stupid computer and very easy software.” (The Ukrainian military declined to comment on this issue.)

Stepanets subsequently committed to helping this brigade, the 72nd, integrate modern technology into its operations. He says that within one year, he had taught them how to use modern communication platforms, positioning devices, and older satellite communication systems that predate Starlink. 

Narodnyi Starlink members ask each other for advice about how to adapt the systems: how to camouflage them from marauding Russian drones or resolve glitches in the software, for example. (Photo: Elena Subach)

So after Russian tanks rolled across the border, Stepanets was quick to see how Starlink’s service could provide an advantage to Ukraine’s armed forces. He also recognized that these units, as well as civilian users, would need support in utilizing the new technology. And that’s how he came up with the idea for Narodnyi Starlink, an open Facebook group he launched on March 21, just a few weeks after the full invasion began and the Ukrainian government requested the activation of Starlink.

Over the past few years, the Narodnyi Starlink digital community has grown to include volunteer engineers, resellers, and military service members interested in the satellite comms service. The group’s members post roughly three times per day, often sharing or asking for advice about adaptations, or seeking volunteers to fix broken equipment. A user called Igor Semenyak recently asked, for example, whether anyone knew how to mask his system from infrared cameras. “How do you protect yourself from heat radiation?” he wrote, to which someone suggested throwing special heat-proof fabric over the terminal.

Its most famous member is probably a man widely considered the brains of the group: Oleg Kutkov, a 36-year-old software engineer otherwise known to some members as “Dr. Starlink.” Kutkov had been privately studying Starlink technology from his home in Kyiv since 2021, having purchased a system to tinker with when service was still unavailable in the country; he believes that he may have been the country’s first Starlink user. Like Stepanets, he saw the immense potential for Starlink after Russia broke traditional communication lines ahead of its attack.

“Our infrastructure was very vulnerable because we did not have a lot of air defense,” says Kutkov, who still works full time as an engineer at the US networking company Ubiquiti’s R&D center in Kyiv. “Starlink quickly became a crucial part of our survival.”

Stepanets contacted Kutkov after coming across his popular Twitter feed and blog, which had been attracting a lot of attention as early Starlink users sought help. Kutkov still publishes the results of his own research there—experiments he performs in his spare time, sometimes staying up until 3 a.m. to complete them. In May, for example, he published a blog post explaining how users can physically move a user account from one terminal to another when the printed circuit board in one is “so severely damaged that repair is impossible or impractical.” 

“Oleg Kutkov is the coolest engineer I’ve met in my entire life,” Kovalskyy says.

When the fighting is at its worst, the workshop may receive 500 terminals to repair every month. The crew lives and sometimes even sleeps there. (Photos: Elena Subach)

Supported by Kutkov’s technical expertise and Stepanets’s organizational prowess, Kovalskyy’s warehouse became the major repair hub (though other volunteers also make repairs elsewhere). Over time, Kovalskyy—who co-owned a regional internet service provider before the war—and his crew have learned to perform adjustments to Starlink terminals, especially to adapt them for battlefield conditions. For example, they modified them to receive charge at the right voltage directly from vehicles, years before Starlink released a proprietary car adapter. They’ve also switched out Starlink’s proprietary SPX plugs—which Kovalskyy criticized as vulnerable to moisture and temperature changes—with standard ethernet ports. 

Together, the three civilians—Kutkov, Stepanets, and Kovalskyy—effectively lead Narodnyi Starlink. Along with several other members who wished to remain anonymous, they hold meetings every Monday over Zoom to discuss their activities, including recent Starlink-related developments on the battlefield, as well as information security. 

While the public group served as a suitable means of disseminating information in the early stages of the war, when speed was critical, they have since had to move much of their communication to private channels after discovering Russian surveillance; Stepanets says that at least as early as 2024, Russians had translated a 300-page educational document the group had produced and shared online. Now, as administrators of the Facebook group, the three men block the publication of any posts deemed to reveal information that might be useful to Russian forces.

Stepanets believes the threat extends beyond the group’s intel to its members’ physical safety. When we talked, he brought up the attempted assassination of the Ukrainian activist and volunteer Serhii Sternenko in May this year. Although Sternenko was unaffiliated with Narodnyi Starlink, the event served as a clear reminder of the risks even civilian volunteers undertake in wartime Ukraine. “The Russian FSB and other [security] services still understand the importance of participation in initiatives like [Narodnyi Starlink],” Stepanets says. He stresses that the group is not an organization with a centralized chain of command, but a community that would continue operating if any of its members were no longer able to perform their roles. 

“We have extremely professional engineers who are extremely intelligent,” Kovalskyy told me. “Repairing Starlink terminals for them is like shooting ducks with HIMARS [a vehicle-borne GPS-guided rocket launcher].” (Photo: Elena Subach)

The informal, accessible nature of this community has been critical to its success. Operating outside official structures has allowed Narodnyi Starlink to function much more efficiently than state channels. Yuri Krylach, a military communications officer who was inspired by Kovalskyy to set up his own repair workshop as part of Ukraine’s armed forces, says that official processes can be slower than private ones by a factor of 10; his own team’s work is often interrupted by other tasks that commanders deem more urgent, whereas members of the Narodnyi Starlink community can respond to requests quickly and directly. (The military declined to comment on this issue, or on any military connections with Narodnyi Starlink.)


Most of the Narodnyi Starlink members I spoke to, including active-duty soldiers, were unconcerned about the report that Musk might withdraw access to the service in Ukraine. They pointed out that doing so would involve terminating state contracts, including those with the US Department of Defense and Poland’s Ministry of Digitalization. Losing contracts worth hundreds of millions of dollars (the Polish government claims to pay $50 million per year in subscription fees), on top of the private subscriptions, would cost the company a significant amount of revenue. “I don’t really think that Musk would cut this money supply,” Kutkov says. “It would be quite stupid.” Oleksandr Dolynyak, an officer in the 103rd Separate Territorial Defense Brigade and a Narodnyi Starlink member since 2022, says: “As long as it is profitable for him, Starlink will work for us.”

Stepanets does believe, however, that Musk’s threats exposed an overreliance on the technology that few had properly considered. “Starlink has really become one of the powerful tools of defense of Ukraine,” he wrote in a March Facebook post entitled “Irreversible Starlink hegemony,” accompanied by an image of the evil Darth Sidious from Star Wars. “Now, the issue of the country’s dependence on the decisions of certain eccentric individuals … has reached [a] melting point.”

Even if telecommunications experts both inside and outside the military agree that Starlink has no direct substitute, Stepanets believes that Ukraine needs to diversify its portfolio of satellite communication tools anyway, integrating additional high-speed satellite communication services like OneWeb. This would relieve some of the pressure caused by Musk’s erratic, unpredictable personality and, he believes, give Ukraine some sense of control over its wartime communications. (SpaceX did not respond to a request for comment.) 

The Ukrainian military seems to agree with this notion. In late March, at a closed-door event in Kyiv, the country’s then-deputy minister of defense Kateryna Chernohorenko announced the formation of a special Space Policy Directorate “to consolidate internal and external capabilities to advance Ukraine’s military space sector.” The announcement referred to the creation of a domestic “satellite constellation,” which suggests that reliance on foreign services like Starlink had been a catalyst. “Ukraine needs to transition from the role of consumer to that of a full-fledged player in the space sector,” a government blog post stated. (Chernohorenko did not respond to a request for comment.)

Ukraine isn’t alone in this quandary. Recent discussions about a potential Starlink deal with the Italian government, for example, have stalled as a result of Musk’s behavior. And as Juliana Süss, an associate fellow at the UK’s Royal United Services Institute, points out, Taiwan chose SpaceX’s competitor Eutelsat when it sought a satellite communications partner in 2023.

“I think we always knew that SpaceX is not always the most reliable partner,” says Süss, who also hosts RUSI’s War in Space podcast, citing Musk’s controversial comments about the country’s status. “The Taiwan problems are a good example for how the rest of the world might be feeling about this.”

Nevertheless, Ukraine is about to become even more deeply enmeshed with Starlink; the country’s leading mobile operator Kyivstar announced in July that Ukraine will soon become the first European nation to offer Starlink direct-to-mobile services. Süss is cautious about placing too much emphasis on this development, though. “This step does increase dependency,” she says. “But that dependency is already there.” Adding an additional channel of communications as a possible backup is otherwise a logical action for a country at war, she says.


These issues can feel far away for the many Ukrainians who are just trying to make it through to the next day. Despite its location in the far west of Ukraine, Lviv, home to Kovalskyy’s shop, is still frequently hit by Russian kamikaze drones, and local military-affiliated sites are popular targets. 

Still, during our time together, Kovalskyy was far more worried by the prospect of his team’s possible mobilization. In March, the Ministry of Defense had removed the special status that had otherwise protected his people from involuntary conscription given the nature of their volunteer activities. They’re now at risk of being essentially picked up off the street by Ukraine’s dreaded military recruitment teams, known as the TCK, whenever they leave the house.

The repair shop displays patches from many different Ukrainian military units—each given as a gift for their services. “We sometimes perform miracles with Starlinks,” Kovalskyy said. (Photo: Courtesy of the author)

This is true even though there’s so much demand for the workshop’s services that during my visit, Kovalskyy expressed frustration at the vast amount of time they’ve had to dedicate solely to basic repairs. “We have extremely professional engineers who are extremely intelligent,” he told me. “Repairing Starlink terminals for them is like shooting ducks with HIMARS [a vehicle-borne GPS-guided rocket launcher].” 

At least the situation seemed to have become better on the front over the winter, Kovalskyy added, handing me a Starlink antenna whose flat, white surface had been ripped open by shrapnel. When the fighting is at its worst, the team might receive 500 terminals to repair every month, and the crew lives in the workshop, sometimes even sleeping there. But at that moment in time, it was receiving only a couple of hundred.

We ended our morning at the workshop by browsing its vast collection of varied military patches, pinned to the wall on large pieces of Velcro. Each had been given as a gift by a different unit as thanks for the services of Kovalskyy and his team, an indication of the diversity and size of Ukraine’s military: almost 1 million soldiers protecting a 600-mile front line. At the same time, it’s a physical reminder that they almost all rely on a single technology with just a few production factories located on another continent nearly 6,000 miles away.

“We sometimes perform miracles with Starlinks,” Kovalskyy says. 

He and his crew can only hope that they will still be able to do so for the foreseeable future—or, better yet, that they won’t need to at all.

Charlie Metcalfe is a British journalist. He writes for magazines and newspapers including Wired, the Guardian, and MIT Technology Review.

Why recycling isn’t enough to address the plastic problem

I remember using a princess toothbrush when I was little. The handle was purple, teal, and sparkly. Like most of the other pieces of plastic that have ever been made, it’s probably still out there somewhere, languishing in a landfill. (I just hope it’s not in the ocean.)

I’ve been thinking about that toothbrush again this week after UN talks about a plastic treaty broke down on Friday. Nations had gotten together to try to write a binding treaty to address plastic waste, but negotiators left without a deal.

Plastic is widely recognized as a huge source of environmental pollution—again, I’m wondering where that toothbrush is—but the material is also a contributor to climate change. Let’s dig into why talks fell apart and how we might address emissions from plastic.

I’ve defended plastic before in this newsletter (sort of). It’s a wildly useful material, integral in everything from eyeglass lenses to IV bags.

But the pace at which we’re producing and using plastic is absolutely bonkers. Plastic production has increased at an average rate of 9% every year since 1950. Production hit 460 million metric tons in 2019. And an estimated 52 million metric tons are dumped into the environment or burned each year.

So, in March 2022, the UN Environment Assembly set out to develop an international treaty to address plastic pollution. Pretty much everyone should agree that a bunch of plastic waste floating in the ocean is a bad thing. But as these talks have developed over the past few years, opinions have diverged on what to do about it and how any interventions should happen.

One phrase that’s become quite contentious is the “full life cycle” of plastic. Basically, some groups are hoping to go beyond efforts to address just the end of the plastic life cycle (collecting and recycling it) by pushing for limits on plastic production. There was even talk at the Assembly of a ban on single-use plastic.

Petroleum-producing nations strongly opposed production limits in the talks. Representatives from Saudi Arabia and Kuwait told the Guardian that they considered limits to plastic production outside the scope of talks. The US reportedly also slowed down talks and proposed to strike a treaty article that references the full life cycle of plastics.

Petrostates have a vested interest because oil, natural gas, and coal are all burned for energy used to make plastic, and they’re also used as raw materials. This stat surprised me: 12% of global oil demand and over 8% of natural gas demand are for plastic production.

That translates into a lot of greenhouse gas emissions. One report from Lawrence Berkeley National Lab found that plastics production accounted for 2.24 billion metric tons of carbon dioxide emissions in 2019—that’s roughly 5% of the global total.  

And looking into the future, emissions from plastics are only set to grow. Another estimate, from the Organisation for Economic Co-operation and Development, projects that emissions from plastics could swell from about 2 billion metric tons to 4 billion metric tons by 2060.

This chart is what really strikes me and makes the conclusion of the plastic treaty talks such a disappointment.

Recycling is a great tool, and new methods could make it possible to recycle more plastics and make it easier to do so. (I’m particularly interested in efforts to recycle a mix of plastics, cutting down on the slow and costly sorting process.)

But just addressing plastic at its end of life won’t be enough to address the climate impacts of the material. Most emissions from plastic come from making it. So we need new ways to make plastic, using different ingredients and fuels to take oil and gas out of the equation. And we need to be smarter about the volume of plastic we produce.  

One positive note here: The plastic treaty isn’t dead, just on hold for the moment. Officials say that there’s going to be an effort to revive the talks.

Less than 10% of plastic that’s ever been produced has been recycled. Whether it’s a water bottle, a polyester shirt you wore a few times, or a princess toothbrush from when you were a kid, it’s still out there somewhere in a landfill or in the environment. Maybe you already knew that. But also consider this: The greenhouse gases emitted to make the plastic are still in the atmosphere, too, contributing to climate change. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.