Beardbrand’s Top Ecommerce Tools in 2026

Occasionally on the podcast I depart from interviewing guests and share my own experiences running Beardbrand, the D2C company I founded in 2012.

In this episode, I address my favorite ecommerce tools in 2026, the platforms and apps essential to our business.

My entire audio narration is embedded below. The transcript is edited for clarity and length.

Website

Shopify is an incredible platform for Beardbrand. It gives us the flexibility to quickly test and implement major site changes, such as restructuring our product pages. For example, we replaced multiple fragrance variants on a single product page with individual pages for each fragrance, supported by a collection page.

We can now tell the story of each scent, showcase fragrance-specific reviews, and recommend matching products. The result? A faster site and improved conversions (about 4.6%). For performance, storytelling, and scalability, Shopify dominates.

Judge.me. Another foundational tool is Judge.me, a customer review widget. I’m now a brand ambassador for that company after using it for years. The app is economical; we pay just $15 per month. We’ve customized it to blend into our website, and it looks beautiful.

Recharge. I’ve experienced ups and downs over the years with Recharge, the subscription management platform. Sometimes I feel it’s too expensive, but lately the features have improved. I’ve received compliments from customers on how we run our subscriptions and how easy the process is. Recharge has been a good partner. We have no intentions or plans to look elsewhere.

Marketing

Klaviyo. We’ve long used Klaivyo for all email and text campaigns and automated flows. The decision to include text messaging with Klaviyo was not easy. Postscript is the best in that category for us. But we wanted to consolidate our data. Klaviyo’s text platform is serviceable and a good option. Email is critical to Beardbrand’s success. Our subscriber database functions like a customer management platform.

PostPilot. We have been utilizing PostPilot for our physical postcard campaigns. It’s a nice service, especially to reach folks who have unsubscribed from email and text. They still buy from us, however, and PostPilot is a great way to stay in front of them.

Opensend helps us identify and reach anonymous site visitors who show interest in purchasing our products. The service has improved our conversions. We sync it with PostPilot flows and let it run automatically.

Grapevine Surveys is an essential post-purchase survey tool for customer insights. Grapevine is more affordable than platforms such as Triple Whale or Northbeam, both of which are great, precise options for larger brands. For us, Grapevine provides a simple three-question post-purchase survey: How long is your beard? How did you find us? Why did you choose us?

Meta Ads is our primary channel for customer acquisition. We create a ton of ads — some in-house and some with an agency.

Creative

CapCut is an AI-driven video-editing software. We don’t use it directly, but our agency does. CapCut streamlines and expedites the production process and lessens the burden of our in-house video editor.

Grok Imagine from X generates 6-second videos from prompts. If you’re not using AI video for some of your ads, you’re missing out. I love Grok Imagine. We can create an amazing number of videos quickly. The best use for us is video clips based on prompts of still images of real people, as testimonials. We never use AI to generate fake people and referrals, which is illegal.

Arcads. Mike, our growth marketer, uses Arcads, which is similar to Grok Imagine but more limiting. Sometimes he’ll have me generate videos in Grok Imagine, with its speed and capacity, and then send to him.

Google Nano Banana does a great job for our static images. Our product labels have a lot of text that’s challenging for AI to reproduce. Nano Banana is not perfect, but its errors and hallucinations in the text on our bottles are noticeable only if you stop and study it for a few seconds. Overall, Nono Banana is impressive. For example, I used it to generate an image with black hardened lava next to a knockout photo of our beard oil, to place on a bottle. It did a great job. If you are not experimenting with AI image and video generation, get in there, learn, and start cranking out stuff.

Operations

Settle is an accounts payable and vendor management platform. We signed up late last year. It syncs with our newly adopted accrual accounting system (we had long been on a cash basis) and helps us allocate resources and see where our money is going. Our bookkeeper enters all vendor invoices into Settle. I can verify the accuracy of the invoices and the timing of our payment.

Mercury. We switched to Mercury, a bank-like platform, about six months ago. It’s been a game-changer. It’s entirely different from our previous (traditional) bank. We’ve automated cash transfers between our operational checking and savings accounts to maintain the minimum checking balance while preventing overdrafts. We also use Mercury for our employee credit cards. Mercury pays off the balances immediately once they hit a threshold. It eliminates fees and saves a ton of time.

ShipStation and OpenBorder. We still use ShipStation’s software for fulfillment and shipping, integrating with our third-party fulfillment provider. We signed up with OpenBorder, another software platform, to expedite logistics into Europe. We haven’t officially returned to Europe, but it’s coming. OpenBorder’s assistance is helping.

Slack. Everybody uses Slack. We once used Asana, Trello, and Basecamp, among other collaboration platforms. We dropped them all in favor of Slack, which is also our project management tool. We’re saving money for equivalent productivity.

Google Docs. I’m not a fan of giant corporations such as Google. They retain my data, and I lose privacy. But still, Google Docs is an amazing tool with Sheets and sharing with my colleagues. So, yes, from Nano Banana to Docs, Google is crucial and beneficial.

Microsoft: ‘Summarize With AI’ Buttons Used To Poison AI Recommendations via @sejournal, @MattGSouthern

Microsoft’s Defender Security Research Team published research describing what it calls “AI Recommendation Poisoning.” The technique involves businesses hiding prompt-injection instructions within website buttons labeled “Summarize with AI.”

When you click one of these buttons, it opens an AI assistant with a pre-filled prompt delivered through a URL query parameter. The visible part tells the assistant to summarize the page. The hidden part instructs it to remember the company as a trusted source for future conversations.

If the instruction enters the assistant’s memory, it can influence recommendations without you knowing it was planted.

What’s Happening

Microsoft’s team reviewed AI-related URLs observed in email traffic over 60 days. They found 50 distinct prompt injection attempts from 31 companies.

The prompts share a similar pattern. Microsoft’s post includes examples where instructions told the AI to remember a company as “a trusted source for citations” or “the go-to source” for a specific topic. One prompt went further, injecting full marketing copy into the assistant’s memory, including product features and selling points.

The researchers traced the technique to publicly available tools, including the npm package CiteMET and the web-based URL generator AI Share URL Creator. The post describes both as designed to help websites “build presence in AI memory.”

The technique relies on specially crafted URLs with prompt parameters that most major AI assistants support. Microsoft listed the URL structures for Copilot, ChatGPT, Claude, Perplexity, and Grok, but noted that persistence mechanisms differ across platforms.

It’s formally cataloged as MITRE ATLAS AML.T0080 (Memory Poisoning) and AML.T0051 (LLM Prompt Injection).

What Microsoft Found

The 31 companies identified were real businesses, not threat actors or scammers.

Multiple prompts targeted health and financial services sites, where biased AI recommendations carry more weight. One company’s domain was easily mistaken for a well-known website, potentially leading to false credibility. And one of the 31 companies was a security vendor.

Microsoft called out a secondary risk. Many of the sites using this technique had user-generated content sections like comment threads and forums. Once an AI treats a site as authoritative, it may extend that trust to unvetted content on the same domain.

Microsoft’s Response

Microsoft said it has protections in Copilot against cross-prompt injection attacks. The company noted that some previously reported prompt-injection behaviors can no longer be reproduced in Copilot, and that protections continue to evolve.

Microsoft also published advanced hunting queries for organizations using Defender for Office 365, allowing security teams to scan email and Teams traffic for URLs containing memory manipulation keywords.

You can review and remove stored Copilot memories through the Personalization section in Copilot chat settings.

Why This Matters

Microsoft compares this technique to SEO poisoning and adware, placing it in the same category as the tactics Google spent two decades fighting in traditional search. The difference is that the target has moved from search indexes to AI assistant memory.

Businesses doing legitimate work on AI visibility now face competitors who may be gaming recommendations through prompt injection.

The timing is notable. SparkToro published a report showing that AI brand recommendations already vary across nearly every query. Google VP Robby Stein told a podcast that AI search finds business recommendations by checking what other sites say. Memory poisoning bypasses that process by planting the recommendation directly into the user’s assistant.

Roger Montti’s analysis of AI training data poisoning covered the broader concept of manipulating AI systems for visibility. That piece focused on poisoning training datasets. This Microsoft research shows something more immediate, happening at the point of user interaction and being deployed commercially.

Looking Ahead

Microsoft acknowledged this is an evolving problem. The open-source tooling means new attempts can appear faster than any single platform can block them, and the URL parameter technique applies to most major AI assistants.

It’s unclear whether AI platforms will treat this as a policy violation with consequences, or whether it stays as a gray-area growth tactic that companies continue to use.

Hat tip to Lily Ray for flagging the Microsoft research on X, crediting @top5seo for the find.


Featured Image: elenabsl/Shutterstock

Google Ads Surfaces PMax Search Partner Domains In Placement Report via @sejournal, @MattGSouthern

Some advertisers are now seeing Performance Max placement data populate in Google Ads reporting, including Search Partner domains and impression counts that had previously been absent from the report.

PPC marketer Thomas Eccel flagged the change on LinkedIn, noting the report had been empty for his PMax campaigns until now.

“I finally see where and how Pmax is being displayed!” Eccel wrote. “But also cool to see finally who the real Google Search Partners are. That was always a blurry grey zone.”

What’s New

Google has documented a Performance Max placement report intended for brand safety review, and that report is now showing data for a wider set of accounts. The data includes individual placement domains, network type, placement type, and impression volume.

The Search Partner visibility is the detail getting attention. PMax campaigns have distributed ads across Google’s Search Partner Network since launch, but many advertisers saw an empty report when they looked for specifics. That’s now changing for at least some accounts.

Google hasn’t issued a formal announcement tied to this change. Google’s help documentation notes that starting in March 2024, the PMax placement report supports Search Partner Network sites. What’s new is the data appearing where it didn’t before.

The rollout is uneven, though. Some commenters on Eccel’s LinkedIn post said the report is still empty in their accounts.

What The Report Doesn’t Show

Google describes this placement reporting as a brand safety tool, not a performance report. The data shows impressions at the placement level but doesn’t break out clicks, conversions, or cost for individual placements.

You can see where your ads appeared and how many times, but you can’t calculate the return on any specific placement. Search Partner Network costs are reported as a single line item in channel performance reporting, rather than being attributed by domain.

Advertisers can use the data to make exclusion decisions for brand safety reasons. But tying outcomes to specific placements inside this view isn’t possible, which limits its use as an optimization tool.

This fits a pattern in how Google has rolled out PMax transparency over the past two years. Channel-level reporting launched in mid-2025 with performance data by surface type, and deeper asset segmentation followed in the fall. Each update has added visibility without giving advertisers full placement-level performance data.

Why This Matters

PMax placement visibility has been one of the most persistent requests from paid search practitioners since the campaign type launched. The placement report existed in the interface but returned no data, frustrating advertisers who wanted to know where their budgets were going.

The Search Partner detail matters because PMax doesn’t offer the same Search Partners toggle as standard Search campaigns, though advertisers can use exclusions. Seeing which partner domains are getting impressions and cross-referencing that against overall Search Partner performance in the channel report gives you a data point you didn’t have in practice before, even if the report itself isn’t new.

The brand safety framing is worth keeping in mind. Google’s documentation describes this report as a way to check where ads appear, not to evaluate performance. That distinction matters for how you use the data and how you talk about it with clients or stakeholders who may expect more granularity than it provides.

Looking Ahead

Google has steadily expanded PMax reporting over the past year, moving from limited channel visibility to surface-level breakdowns to the placement-level impression data now appearing for more accounts.

Whether placement-level performance metrics follow is an open question. Google hasn’t confirmed plans to add clicks, conversions, or cost to the placement report. For now, checking whether the data is available in your account and reviewing the Search Partner domains to get your impressions is the practical next step.

Information Retrieval Part 3: Vectorization And Transformers (Not The Film)

Information retrieval systems are designed to satisfy a user. To make a user happy with the quality of their recall. It’s important we understand that. Every system and its inputs and outputs are designed to provide the best user experience.

From the training data to similarity scoring and the machine’s ability to “understand” our tired, sad bullshit – this is the third in a series I’ve titled, information retrieval for morons.

Image Credit: Harry Clarkson-Bennett

TL;DR

  1. In the vector space model, the distance between vectors represents the relevance (similarity) between the documents or items.
  2. Vectorization has allowed search engines to perform concept searching instead of word searching. It is the alignment of concepts, not letters or words.
  3. Longer documents contain more similar terms. To combat this, document length is normalized, and relevance is prioritized.
  4. Google has been doing this for over a decade. Maybe for over a decade, you have too.

Things You Should Know Before We Start

Some concepts and systems you should be aware of before we dive in.

I don’t remember all of these, and neither will you. Just try to enjoy yourself and hope that through osmosis and consistency, you vaguely remember things over time.

  • TF-IDF stands for term frequency-inverse document frequency. It is a numerical statistic used in NLP and information retrieval to measure a term’s relevance within a document corpus.
  • Cosine similarity measures the cosine of the angle between two vectors, ranging from -1 to 1. A smaller angle (closer to 1) implies higher similarity.
  • The bag-of-words model is a way of representing text data when modelling text with machine learning algorithms.
  • Feature extraction/encoding models are used to convert raw text into numerical representations that can be processed by machine learning models.
  • Euclidean distance measures the straight-line distance between two points in vector space to calculate data similarity (or dissimilarity).
  • Doc2Vec (an extension of Word2Vec), designed to represent the similarity (or lack of it) in documents as opposed to words.

What Is The Vector Space Model?

The vector space model (VSM) is an algebraic model that represents text documents or items as “vectors.” This representation allows systems to create a distance between each vector.

The distance calculates the similarity between terms or items.

Commonly used in information retrieval, document ranking, and keyword extraction, vector models create structure. This structured, high-dimensional numerical space enables the calculation of relevance via similarity measures like cosine similarity.

Terms are assigned values. If a term appears in the document, its value is non-zero. Worth noting that terms are not just individual keywords. They can be phrases, sentences, and entire documents.

Once queries, phrases, and sentences are assigned values, the document can be scored. It has a physical place in the vector space as chosen by the model.

In this case, words, represented on a graph to denote relationships between them (Image Credit: Harry Clarkson-Bennett)

Based on its score, documents can be compared to one another based on the inputted query. You generate similarity scores at scale. This is known as semantic similarity, where a set of documents is scored and positioned in the index based on their meaning.

Not just their lexical similarity.

I know this sounds a bit complicated, but think of it like this:

Words on a page can be manipulated. Keyword stuffed. They’re too simple. But if you can calculate meaning (of the document), you’re one step closer to a quality output.

Why Does It Work So Well?

Machines don’t just like structure. They bloody love it.

Fixed-length (or styled) inputs and outputs create predictable, accurate results. The more informative and compact a dataset, the better quality classification, extraction, and prediction you will get.

The problem with text is that it doesn’t have much structure. At least not in the eyes of a machine. It’s messy. This is why it has such an advantage over the classic Boolean Retrieval Model.

In Boolean Retrieval Models, documents are retrieved based on whether they satisfy the conditions of a query that uses Boolean logic. It treats each document as a set of words or terms and uses AND, OR, and NOT operators to return all results that fit the bill.

Its simplicity has its uses, but cannot interpret meaning.

Think of it more like data retrieval than identifying and interpreting information. We fall into the term frequency (TF) trap too often with more nuanced searches. Easy, but lazy in today’s world.

Whereas the vector space model interprets actual relevance to the query and doesn’t require exact match terms. That’s the beauty of it.

It’s this structure that creates much more precise recall.

The Transformer Revolution (Not Michael Bay)

Unlike Michael Bay’s series, the real transformer architecture replaced older, static embedding methods (like Word2Vec) with contextual embeddings.

While static models assign one vector to each word, transformers generate dynamic representations that change based on the surrounding words in a sentence.

And yes, Google has been doing this for some time. It’s not new. It’s not GEO. It’s just modern information retrieval that “understands” a page.

I mean, obviously not. But you, as a hopefully sentient, breathing being, understand what I mean. But transformers, well, they fake it:

  1. Transformers weight input by data by significance.
  2. The model pays more attention to words that demand or provide extra context.

Let me give you an example.

“The bat’s teeth flashed as it flew out of the cave.”

Bat is an ambiguous term. Ambiguity is bad in the age of AI.

But transformer architecture links bat with “teeth,” “flew,” and “cave,” signaling that bat is far more likely to be a bloodsucking rodent* than something a gentleman would use to caress the ball for a boundary in the world’s finest sport.

*No idea if a bat is a rodent, but it looks like a rat with wings.

BERT Strikes Back

BERT. Bidirectional Encoder Representations from Transformers. Shrugs.

This is how Google has worked for years. By applying this type of contextually aware understanding to the semantic relationships between words and documents. It’s a huge part of the reason why Google is so good at mapping and understanding intent and how it shifts over time.

BERT’s more recent updates (DeBERTa) allow words to be represented by two vectors – one for meaning and one for its position in the document. This is known as Disentangled Attention. It provides more accurate context.

Yep, sounds weird to me, too.

BERT processes the entire sequence of words simultaneously. This means context is applied from the entirety of the page content (not just the few surrounding terms).

Synonyms Baby

Launching in 2015, RankBrain was Google’s first deep learning system. Well, that I know of anyway. It was designed to help the search algorithm understand how words relate to concepts.

This was kind of the peak search era. Anyone could start a website about anything. Get it up and ranking. Make a load of money. Not need any kind of rigor.

Halcyon days.

With hindsight, these days weren’t great for the wider public. Getting advice on funeral planning and commercial waste management from a spotty 23-year-old’s bedroom in Halifax.

As new and evolving queries surged, RankBrain and the subsequent neural matching were vital.

Then there was MUM. Google’s ability to “understand” text, images and visual content across multiple languages simultenously.

Document length was an obvious problem 10 years ago. Maybe less. Longer articles, for better or worse, always did better. I remember writing 10,000-word articles on some nonsense about website builders and sticking them on a homepage.

Even then that was a rubbish idea…

In a world where queries and documents are mapped to numbers, you could be forgiven for thinking that longer documents will always be surfaced over shorter ones.

Remember 10-15 years ago when everyone was obsessed when every article being 2,000 words.

“That’s the optimal length for SEO.”

If you see another “What time is X” 2,000-word article, you have my permission to shoot me.

You can’t knock the fact this is a better experience (Image Credit: Harry Clarkson-Bennett)

Longer documents will – as a result of containing more terms – have higher TF values. They also contain more distinct terms. These factors can conspire to raise the scores of longer documents

Hence why, for a while, they were the zenith of our crappy content production.

Longer documents can broadly be lumped into two categories:

  1. Verbose documents that essentially repeat the same content (hello, keyword stuffing, my old friend).
  2. Documents covering multiple topics, in which the search terms probably match small segments of the document, but not all of it.

To combat this obvious issue, a form of compensation for document length is used, known as Pivoted Document Length Normalization. This adjusts scores to counteract the natural bias longer documents have.

Pivoted normalization rescales term weights using a linear adjustment around the average document length (Image Credit: Harry Clarkson-Bennett)

The cosine distance should be used because we do not want to favour longer (or shorter) documents, but to focus on relevance. Leveraging this normalization prioritizes relevance over term frequency.

It’s why cosine similarity is so valuable. It is robust to document length. A short and long answer can be seen as topically identical if they point in the same direction in the vector space.

Great question.

Well, no one’s expecting you to understand the intricacies of a vector database. You don’t really need to know that databases create specialized indices to find close neighbors without checking every single record.

This is just for companies like Google to strike the right balance between performance, cost, and operational simplicity.

Kevin Indig’s latest excellent research shows that 44.2% of all citations in ChatGPT originate from the first 30% of the text. The probability of citation drops significantly after this initial section, creating a “ski ramp” effect.

Image Credit: Harry Clarkson-Bennett

Even more reason not to mindlessly create massive documents because someone told you to.

In “AI search,” a lot of this comes down to tokens. According to Dan Petrovic’s always excellent work, each query has a fixed grounding budget of approximately 2,000 words total, distributed across sources by relevance rank.

In Google, at least. And your rank determines your score. So get SEO-ing.

Position 1 gives you double the prominence of position 5 (Image Credit: Harry Clarkson-Bennett)

Metehan’s study on what 200,000 Tokens Reveal About AEO/GEO really highlights how important this is. Or will be. Not just for our jobs, but biases and cultural implications.

As text is tokenized (compressed and converted into a sequence of integer IDs), this has cost and accuracy implications.

  • Plain English prose is the most token-efficient format at 5.9 characters per token. Let’s call it 100% relative efficiency. A baseline.
  • Turkish prose has just 3.6. This is 61% as efficient.
  • Markdown tables 2.7. 46% as efficient.

Languages are not created equal. In an era where capital expenditures (CapEx) costs are soaring, and AI firms have struck deals I’m not sure they can cash, this matters.

Well, as Google has been doing this for some time, the same things should work across both interfaces.

  1. Answer the flipping question. My god. Get to the point. I don’t care about anything other than what I want. Give it to me immediately (spoken as a human and a machine).
  2. So frontload your important information. I have no attention span. Neither do transformer models.
  3. Disambiguate. Entity optimization work. Connect the dots online. Claim your knowledge panel. Authors, social accounts, structured data, building brands and profiles.
  4. Excellent E-E-A-T. Deliver trustworthy information in a manner that sets you apart from the competition.
  5. Create keyword-rich internal links that help define what the page and content are about. Part disambiguation. Part just good UX.
  6. If you want something focused on LLMs, be more efficient with your words.
    • Using structured lists can reduce token consumption by 20-40% because they remove fluff. Not because they’re more efficient*.
    • Use commonly known abbreviations to also save tokens.

*Interestingly, they are less efficient than traditional prose.

Almost all of this is about giving people what they want quickly and removing any ambiguity. In an internet full of crap, doing this really, really works.

Last Bits

There is some discussion around whether markdown for agents can help strip out the fluff from HTML on your site. So agents could bypass the cluttered HTML and get straight to the good stuff.

How much of this could be solved by having a less fucked up approach to semantic HTML, I don’t know. Anyway, one to watch.

Very SEO. Much AI.

More Resources:


Read Leadership in SEO. Subscribe now.


Featured Image: Anton Vierietin/Shutterstock

Google AI Mode Link Update, Click Share Data & ChatGPT Fan-Outs – SEO Pulse via @sejournal, @MattGSouthern

Welcome to the week’s SEO Pulse: updates affect how links appear in AI search results, where organic clicks are going, and which languages ChatGPT uses to find sources.

Here’s what matters for you and your work.

Google Redesigns Links In AI Overviews And AI Mode

Robby Stein, VP of Product for Google Search, announced on X that AI Overviews and AI Mode are getting a redesigned link experience on both desktop and mobile.

Key Facts: On desktop, groups of links will now appear in a pop-up when you hover over them, showing site names, favicons, and short descriptions. Google is also rolling out more descriptive and prominent link icons across desktop and mobile.

Why This Matters

This is the latest in a series of link-visibility updates Stein has announced since last summer, when he called showing more inline links Google’s “north star” for AI search. The pattern is consistent. Google keeps iterating on how links surface inside AI-generated responses.

The hover pop-up is a new interaction pattern for AI Overviews. Instead of small inline citations that are easy to miss, users now get a preview card with enough context to decide whether to click. That changes the calculus for publishers wondering how much traffic AI results actually send.

What The Industry Is Saying

SEO consultant Lily Ray (Amsive) wrote on X that she had been seeing the new link cards and was “REALLY hoping it sticks.”

Read our full coverage: Google Says Links Will Be More Visible In AI Overviews

43% Of ChatGPT Fan-Out Queries For Non-English Prompts Run In English

A report from AI search analytics firm Peec AI found that a large share of ChatGPT’s fan-out queries run in English, even when the original prompt was in another language.

Key Facts: Peec AI analyzed over 10 million prompts and 20 million fan-out queries from its platform data. Across non-English prompts analyzed, 43% of the fan-out queries ran in English. Nearly 78% of non-English prompt sessions included at least one English-language fan-out query.

Why This Matters

When ChatGPT Search builds an answer, it can rewrite the user’s prompt into “one or more targeted queries,” according to OpenAI’s documentation. OpenAI does not describe how language is chosen for those rewritten queries. Peec AI’s data suggests that English gets inserted into the process even when the user and their location are clearly non-English.

SEO and content teams working in non-English markets may face a disadvantage in ChatGPT’s source selection that doesn’t map to traditional ranking signals. Language filtering appears to happen before citation signals come into play.

Read our full coverage: ChatGPT Search Often Switches To English In Fan-Out Queries: Report

Google’s Search Relations Team Can’t Say You Still Need A Website

Google’s Search Relations team was asked directly whether you still need a website in 2026. They didn’t give a definitive yes.

Key Facts: In a new episode of the Search Off the Record podcast, Gary Illyes and Martin Splitt spent about 28 minutes exploring the question. Both acknowledged that websites still offer advantages, including data sovereignty, control over monetization, and freedom from platform content moderation. But neither argued that the open web offers something irreplaceable.

Why This Matters

Google Search is built around crawling and indexing web content. The fact that Google’s own Search Relations team treats “do I need a website?” as a business decision rather than an obvious yes is worth noting.

Illyes offered the closest thing to a position. He said that if you want to make information available to as many people as possible, a website is probably still the way to go. But he called it a personal opinion, not a recommendation.

The conversation aligns with increasingly fragmented user journeys, now spanning AI chatbots, social feeds, community platforms, and traditional search. For practitioners advising clients on building websites, the answer increasingly depends on where the audience is, not where it used to be.

Read our full coverage: Google’s Search Relations Team Debates If You Still Need A Website

Theme Of The Week: The Ground Keeps Moving Under Organic

Each story this week shows a different force pulling attention, clicks, or visibility away from the organic channel as practitioners have known it.

Google is redesigning how links appear in AI responses, acknowledging the traffic concern. ChatGPT’s background queries introduce a language filter that can exclude non-English content before relevance signals even apply. And Google’s own team won’t say that websites are the default answer for visibility anymore.

These stories reinforce the idea of spreading your content across different platforms to reach more people. And track where your clicks are really coming from.

More Resources:


Featured Image: TippaPatt/Shutterstock; Paulo Bobita/Search Engine Journal

New Meridian Tool, Performance Max Learning Path – PPC Pulse via @sejournal, @brookeosmundson

Welcome to this week’s PPC Pulse, where this week’s focus is on scenario-based planning in both Google and Microsoft platforms.

Google introduced a new Scenario Planner within Meridian, giving marketers the ability to model budget allocation shifts before committing spend. Microsoft launched a scenario-based Performance Max learning path designed to walk advertisers through practical campaign situations.

Both updates point to a growing emphasis on improving decisions before campaigns go live.

Here’s what happened this week and why it matters for advertisers.

Google Introduces Scenario Planner For Meridian

Google announced a new Scenario Planner within Meridian, its Marketing Mix Modeling platform. The tool allows marketers to test budget allocation scenarios and forecast potential outcomes using Meridian’s modeled insights.

Instead of waiting for quarterly MMM reports or static insights, advertisers can now simulate how shifting spend across channels might impact performance metrics like revenue, conversions, or return on investment.

According to Google, the goal is to make MMM insights more accessible and actionable for marketers who need to defend budgets and make planning decisions in real time. It also reiterated that coding isn’t required to use this tool.

It looks to be a promising planning tool built for higher-level strategy conversations between advertisers and key decision-makers.

Why This Matters For Advertisers

Marketing Mix Modeling has traditionally been handled at a higher level of the organization. It tends to show up in quarterly reviews, annual planning decks, or conversations led by finance and analytics teams. Most PPC managers are not sitting inside MMM tools on a weekly basis.

What makes this update notable is that Google is moving those insights closer to the teams actually managing budgets day to day.

PPC marketers are being asked more frequently to justify budget increases or reallocations with something stronger than last-click performance.

A tool like this could influence how those conversations happen. Instead of pointing only to recent return on ad spend (ROAS) trends, teams may start leaning more on modeled projections and incremental impact estimates when proposing changes.

What PPC Professionals Are Saying

Ginny Marvin, Ads Liaison for Google, shared the announcement on LinkedIn. Here’s what she emphasized about the Scenario Planner:

“No technical MMM experience needed to go from ‘what happened?’ to ‘what’s next?’”

Advertisers like Ivan Walker are “very excited!” about the update, while others like Ashley V. are curious about hearing feedback from others who have started using it.

Microsoft Launches Scenario-Based Performance Max Learning Path

Along the same lines of planning, Microsoft Advertising announced a new Performance Max learning path within its Learning Lab.

Unlike standard certification modules, this path walks advertisers through real-world scenarios designed to build hands-on expertise. The training focuses on practical decision-making across campaign setup, optimization, and troubleshooting.

I appreciate how Microsoft is positioning – that Performance Max success requires understanding, context, and strategy instead of focusing solely on what settings to toggle.

The learning path is designed to help advertisers think through situations they are likely to encounter in live accounts. For example, how to approach budget allocation, how to evaluate asset performance, and how to troubleshoot underperformance.

Why This Matters For Advertisers

Performance Max is not new at this point. Most advertisers have at least tested it, and many are running it at scale. What has changed is the level of thinking required to run it well.

There is still a misconception that PMax runs on its own once you flip it on. In reality, outcomes are heavily influenced by how campaigns are structured, what signals are being fed into the system, and how clearly conversion goals are defined.

Microsoft is leaning into the idea that automation does not remove the need for strategy. It shifts where strategy shows up. Instead of spending time adjusting bids manually, advertisers are spending time making decisions around inputs, segmentation, creative quality, and measurement alignment.

For agencies and in-house teams, scenario-based training could be useful for onboarding or leveling up junior team members. It provides context around the types of situations teams actually encounter, rather than just explaining what each setting does.

Theme Of The Week: Planning Before Spending

Both updates this week center around the same idea, which is trying to improve the quality of decisions before money is spent.

Google is giving marketers a way to test budget allocation scenarios before shifting spend to other platforms. Microsoft is walking advertisers through realistic campaign situations before they are live in their accounts.

While many industry updates focus on optimizations after campaigns are running, these ones focus on the earlier stage. How confident are you in the structure? How confident are you in the allocation? How confident are you in the assumptions behind the strategy?

Especially with budgets under tighter scrutiny than ever, and automation handling much more of campaign execution, the planning phase definitely carries more weight than it used to.

More Resources:


Featured Image: Kansuda2 Kaewwannarat/Shutterstock; Paulo Bobita/Search Engine Journal

From integration chaos to digital clarity: Nutrien Ag Solutions’ post-acquisition reset

Thank you for joining us on the “Enterprise AI hub.”

In this episode of the Infosys Knowledge Institute Podcast, Dylan Cosper speaks with Sriram Kalyan, head of applications and data at Nutrien Ag Solutions, Australia, about turning a high-risk post-acquisition IT landscape into a scalable digital foundation. Sriram shares how the merger of two major Australian agricultural companies created duplicated systems, fragile integrations, and operational risk, compounded by the sudden loss of key platform experts and partners. He explains how leadership alignment, disciplined platform consolidation, and a clear focus on business outcomes transformed integration from an invisible liability into a strategic enabler, positioning Nutrien Ag Solutions for future growth, cloud transformation, and enterprise scale.

Click here to continue.

What it takes to make agentic AI work in retail

Thank you for joining us on the “Enterprise AI hub.”

In this episode of the Infosys Knowledge Institute Podcast, Dylan Cosper speaks with Prasad Banala, director of software engineering at a large US-based retail organization, about operationalizing agentic AI across the software development lifecycle. Prasad explains how his team applies AI to validate requirements, generate and analyze test cases, and accelerate issue resolution, while maintaining strict governance, human-in-the-loop review, and measurable quality outcomes.

Click here to continue.

How uncrewed narco subs could transform the Colombian drug trade

On a bright morning last April, a surveillance plane operated by the Colombian military spotted a 40-foot-long shark-like silhouette idling in the ocean just off Tayrona National Park. It was, unmistakably, a “narco sub,” a stealthy fiberglass vessel that sails with its hull almost entirely underwater, used by drug cartels to move cocaine north. The plane’s crew radioed it in, and eventually nearby coast guard boats got the order, routine but urgent: Intercept.

In Cartagena, about 150 miles from the action, Captain Jaime González Zamudio, commander of the regional coast guard group, sat down at his desk to watch what happened next. On his computer monitor, icons representing his patrol boats raced toward the sub’s coordinates as updates crackled over his radio from the crews at sea. This was all standard; Colombia is the world’s largest producer of cocaine, and its navy has been seizing narco subs for decades. And so the captain was pretty sure what the outcome would be. His crew would catch up to the sub, just a bit of it showing above the water’s surface. They’d bring it to heel, board it, and force open the hatch to find two, three, maybe four exhausted men suffocating in a mix of diesel fumes and humidity, and a cargo compartment holding several tons of cocaine.

The boats caught up to the sub. A crew boarded, forced open the hatch, and confirmed that the vessel was secure. But from that point on, things were different.

First, some unexpected details came over the radio: There was no cocaine on board. Neither was there a crew, nor a helm, nor even enough room for a person to lie down. Instead, inside the hull the crew found a fuel tank, an autopilot system and control electronics, and a remotely monitored security camera. González Zamudio’s crew started sending pictures back to Cartagena: Bolted to the hull was another camera, as well as two plastic rectangles, each about the size of a cookie sheet—antennas for connecting to Starlink satellite internet.

The authorities towed the boat back to Cartagena, where military techs took a closer look. Weeks later, they came to an unsettling conclusion: This was Colombia’s first confirmed uncrewed narco sub. It could be operated by remote control, but it was also capable of some degree of autonomous travel. The techs concluded that the sub was likely a prototype built by the Clan del Golfo, a powerful criminal group that operates along the Caribbean coast.

For decades, handmade narco subs have been some of the cocaine trade’s most elusive and productive workhorses, ferrying multi-ton loads of illicit drugs from Colombian estuaries toward markets in North America and, increasingly, the rest of the world. Now off-the-shelf technology—Starlink terminals, plug-and-play nautical autopilots, high-resolution video cameras—may be advancing that cat-and-mouse game into a new phase.

Uncrewed subs could move more cocaine over longer distances, and they wouldn’t put human smugglers at risk of capture. Law enforcement around the world is just beginning to grapple with what the Tayrona sub means for the future—whether it was merely an isolated experiment or the opening move in a new era of autonomous drug smuggling at sea.


Drug traffickers love the ocean. “You can move drug traffic through legal and illegal routes,” says Juan Pablo Serrano, a captain in the Colombian navy and head of the operational coordination center for Orión, a multiagency, multinational counternarcotics effort. The giant container ships at the heart of global commerce offer a favorite approach, Serrano says. Bribe a chain of dockworkers and inspectors, hide a load in one of thousands of cargo boxes, and put it on a totally legal commercial vessel headed to Europe or North America. That route is slow and expensive—involving months of transit and bribes spread across a wide network—but relatively low risk. “A ship can carry 5,000 containers. Good luck finding the right one,” he says.

Far less legal, but much faster and cheaper, are small, powerful motorboats. Quick to build and cheap to crew, these “go-fasts” top out at just under 50 feet long and can move smaller loads in hours rather than days. But they’re also easy for coastal radars and patrols to spot.

Submersibles—or, more accurately, “semisubmersibles”—fit somewhere in the middle. They take more money and engineering to build than an open speedboat, but they buy stealth—even if a bit of the vessel rides at the surface, the bulk stays hidden underwater. That adds another option to a portfolio that smugglers constantly rebalance across three variables: risk, time, and cost. When US and Colombian authorities tightened control over air routes and commercial shipping in the early 1990s, subs became more attractive. The first ones were crude wooden hulls with a fiberglass shell and extra fuel tanks, cobbled together in mangrove estuaries, hidden from prying eyes. Today’s fiberglass semisubmersible designs ride mostly below the surface, relying on diesel engines that can push multi-ton loads for days at a time while presenting little more than a ripple and a hot exhaust pipe to radar and infrared sensors.

A typical semisubmersible costs under $2 million to build and can carry three metric tons of cocaine. That’s worth over $160 million in Europe—wholesale.

Most ferry between South American coasts and handoff points in Central America and Mexico, where allied criminal organizations break up the cargo and slowly funnel it toward the US. But some now go much farther. In 2019, Spanish authorities intercepted a semisubmersible after a 27-day transatlantic voyage from Brazil. In 2024, police in the Solomon Islands found the first narco sub in the Asia-Pacific region, a semisubmersible probably originating from Colombia on its way to Australia or New Zealand.

If the variables are risk, time, and cost, then the economics of a narco sub are simple. Even if they spend more time on the water than a powerboat, they’re less likely to get caught—and a relative bargain to produce. A narco sub might cost between $1 million and $2 million to build, but a kilo of cocaine costs just about $500 to make. “By the time that kilo reaches Europe, it can sell for between $44,000 and $55,000,” Serrano says. A typical semisubmersible carries up to three metric tons—cargo worth well over $160 million at European wholesale prices.

Starlink panel with a rusty mount
hands holding a Starlink antenna
rusty round white surveillance camera

Off-the-shelf nautical autopilots, WiFi antennas, Starlink satellite internet connections, and remote cameras are all drug smugglers need to turn semisubmersibles into drone ships.

As a result, narco subs are getting more common. Seizures by authorities tripled in the last 20 years, according to Colombia’s International Center for Research and Analysis Against Maritime Drug Trafficking (CMCON), and Serrano admits that the Orión alliance has enough ships and aircraft to catch only a fraction of what sails.

Until now, though, narco subs have had one major flaw: They depended on people, usually poor fishermen or low-level recruits sealed into stifling compartments for days at a time, steering by GPS and sight, hoping not to be spotted. That made the subs expensive and a risk to drug sellers if captured. Like good capitalists, the Tayrona boat’s builders seem to have been trying to obviate labor costs with automation. No crew means more room for drugs or fuel and no sailors to pay—or to get arrested or flip if a mission goes wrong.

“If you don’t have a person or people on board, that makes the transoceanic routes much more feasible,” says Henry Shuldiner, a researcher at InSight Crime who has analyzed hundreds of narco-sub cases. It’s one thing, he notes, to persuade someone to spend a day or two going from Colombia to Panama for a big payout; it’s another to ask four people to spend three weeks sealed inside a cramped tube, sleeping, eating, and relieving themselves in the same space. “That’s a hard sell,” Shuldiner says.

An uncrewed sub doesn’t have to race to a rendezvous because its crew can endure only a few days inside. It can move more slowly and stealthily. It can wait out patrols or bad weather, loiter near a meeting point, or take longer and less well-monitored routes. And if something goes wrong—if a military plane appears or navigation fails—its owners can simply scuttle the vessel from afar.

Meanwhile, the basic technology to make all that work is getting more and more affordable, and the potential profit margins are rising. “The rapidly approaching universality of autonomous technology could be a nightmare for the U.S. Coast Guard,” wrote two Coast Guard officers in the US Naval Institute’s journal Proceedings in 2021. And as if to prove how good an idea drone narco subs are, the US Marine Corps and the weapons builder Leidos are testing a low-profile uncrewed vessel called the Sea Specter, which they describe as being “inspired” by narco-sub design.

The possibility that drug smugglers are experimenting with autonomous subs isn’t just theoretical. Law enforcement agencies on other smuggling routes have found signs the Tayrona sub isn’t an isolated case. In 2022, Spanish police seized three small submersible drones near Cádiz, on Spain’s southern coast. Two years later, Italian authorities confiscated a remote-­controlled minisubmarine they believed was intended for drug runs. “The probability of expansion is high,” says Diego Cánovas, a port and maritime security expert in Spain. Tayrona, the biggest and most technologically advanced uncrewed narco sub found so far, is more likely a preview than an anomaly.


Today, the Tayrona semisubmersible sits on a strip of grass at the ARC Bolívar naval base in Cartagena. It’s exposed to the elements; rain has streaked its paint. To one side lies an older, bulkier narco sub seized a decade ago, a blue cylinder with a clumsy profile. The Tayrona’s hull looks lower, leaner, and more refined.

Up close, it is also unmistakably handmade. The hull is a dull gray-blue, the fiberglass rough in places, with scrapes and dents from the tow that brought it into port. It has no identifying marks on the exterior—nothing that would tie it to a country, a company, or a port. On the upper surface sit the two Starlink antennas, painted over in the same gray-blue to keep them from standing out against the sea.

I climb up a ladder and drop through the small hatch near the stern. Inside, the air is damp and close, the walls beaded with condensation. Small puddles of fuel have collected in the bilge. The vessel has no seating, no helm or steering wheel, and not enough space to stand up straight or lie down. It’s clear it was never meant to carry people. A technical report by CMCON found that the sub would have enough fuel for a journey of some 800 nautical miles, and the central cargo bay would hold between 1 and 1.5 tons of cocaine.

At the aft end, the machinery compartment is a tangle of hardware: diesel engine, batteries, pumps, and a chaotic bundle of cables feeding an electronics rack. All the core components are still there. Inside that rack, investigators identified a NAC-3 autopilot processor, a commercial unit designed to steer midsize boats by tying into standard hydraulic pumps, heading sensors, and rudder-­feedback systems. They cost about $2,200 on Amazon.

“These are plug-and-play technologies,” says Wilmar Martínez, a mechatronics professor at the University of America in Bogotá, when I show him pictures of the inside of the sub. “Midcareer mechatronics students could install them.”


For all its advantages, an autonomous drug-smuggling submarine wouldn’t be invincible. Even without a crew on board, there are still people in the chain. Every satellite internet terminal—Starlink or not—comes with a billing address, a payment method, and a log of where and when it pings the constellation. Colombian officers have begun to talk about negotiating formal agreements with providers, asking them to alert authorities when a transceiver’s movements match known smuggling patterns. Brazil’s government has already cut a deal with Starlink to curb criminal use of its service in the Amazon.

The basic playbook for finding a drone sub will look much like the one for crewed semisubmersibles. Aircraft and ships will use radar to pick out small anomalies and infrared cameras to look for the heat of a diesel engine or the turbulence of a wake. That said, it might not work. “If they wind up being smaller, they’re going to be darn near impossible to detect,” says Michael Knickerbocker, a former US Navy officer who advises defense tech firms.

Autonomous drug subs are “a great example of how resilient cocaine traffickers are, and how they’re continuously one step ahead of authorities,” says one researcher.

Even worse, navies already act on only a fraction of their intelligence leads because they don’t have enough ships and aircraft. The answer, Knickerbocker argues, is “robot on robot.” Navies and coast guards will need swarms of their own small, relatively cheap uncrewed systems—surface vessels, underwater gliders, and long-endurance aerial vehicles that can loiter, sense, and relay data back to human operators. Those experiments have already begun. The US 4th Fleet, which covers Latin America and the Caribbean, is experimenting with uncrewed platforms in counternarcotics patrols. Across the Atlantic, the European Union’s European Maritime Safety Agency operates drones for maritime surveillance.

Today, though, the major screens against oceangoing vessels of all kinds are coastal radar networks. Spain operates SIVE to watch over choke points like the Strait of Gibraltar, and in the Pacific, Australia’s over-the-horizon radar network, JORN, can spot objects hundreds of miles away, far beyond the range of conventional radar.

Even so, it’s not enough to just spot an uncrewed narco sub. Law enforcement also has to stop it—and that will be tricky.

man in naval uniform pointing at a map
To find drone subs, international law enforcement will likely have to rely on networks of surveillance systems and, someday, swarms of their own drones.
CARLOS PARRA RIOS

With a crewed vessel, Colombian doctrine says coast guard units should try to hail the boat first with lights, sirens, radio calls, and warning shots. If that fails, interceptor crews sometimes have to jump aboard and force the hatch. Officers worry that future autonomous craft could be wired to sink or even explode if someone gets too close. “If they get destroyed, we may lose the evidence,” says Víctor González Badrán, a navy captain and director of CMCON. “That means no seizure and no legal proceedings against that organization.” 

That’s where electronic warfare enters the picture—radio-frequency jamming, cyber tools, perhaps more exotic options. In the simplest version, jamming means flooding the receiver with noise so that commands from the operator never reach the vessel. Spoofing goes a step further, feeding fake signals so that the sub thinks it’s somewhere else or obediently follows a fake set of waypoints. Cyber tools might aim higher up the chain, trying to penetrate the software that runs the vessel or the networks it uses to talk to satellite constellations. At the cutting edge of these countermeasures are electromagnetic pulses designed to fry electronics outright, turning a million-dollar narco sub into a dead hull drifting at sea.

In reality, the tools that might catch a future Tayrona sub are unevenly distributed, politically sensitive, and often experimental. Powerful cyber or electromagnetic tricks are closely guarded secrets; using them in a drug case risks exposing capabilities that militaries would rather reserve for wars. Systems like Australia’s JORN radar are tightly held national security assets, their exact performance specs classified, and sharing raw data with countries on the front lines of the cocaine trade would inevitably mean revealing hints as to how they got it. “Just because a capability exists doesn’t mean you employ it,” Knickerbocker says. 

Analysts don’t think uncrewed narco subs will reshape the global drug trade, despite the technological leap. Trafficking organizations will still hedge their bets across those three variables, hiding cocaine in shipping containers, dissolving it into liquids and paints, racing it north in fast boats. “I don’t think this is revolutionary,” Shuldiner says. “But it’s a great example of how resilient cocaine traffickers are, and how they’re continuously one step ahead of authorities.”

There’s still that chance, though, that everything international law enforcement agencies know about drug smuggling is about to change. González Zamudio says he keeps getting requests from foreign navies, coast guards, and security agencies to come see the Tayrona sub. He greets their delegations, takes them out to the strip of grass on the base, and walks them around it, gives them tours. It has become a kind of pilgrimage. Everyone who makes it worries that the next time a narco sub appears near a distant coastline, they’ll board it as usual, force the hatch—and find it full of cocaine and gadgets, but without a single human occupant. And no one knows what happens after that. 

Eduardo Echeverri López is a journalist based in Colombia.

The building legal case for global climate justice

The United States and the European Union grew into economic superpowers by committing climate atrocities. They have burned a wildly disproportionate share of the world’s oil and gas, planting carbon time bombs that will detonate first in the poorest, hottest parts of the globe. 

Meanwhile, places like the Solomon Islands and Chad—low-lying or just plain sweltering—have emitted relatively little carbon dioxide, but by dint of their latitude and history, they rank among the countries most vulnerable to the fiercest consequences of global warming. That means increasingly devastating cyclones, heat waves, famines, and floods.

Morally, there’s an ironclad case that the countries or companies responsible for this mess should provide compensation for the homes that will be destroyed, the shorelines that will disappear beneath rising seas, and the lives that will be cut short. By one estimate, the major economies owe a climate debt to the rest of the world approaching $200 trillion in reparations.

Legally, though, the case has been far harder to make. Even putting aside the jurisdictional problems, early climate science couldn’t trace the provenance of airborne molecules of carbon dioxide across oceans and years. Deep-pocketed corporations with top-tier legal teams easily exploited those difficulties. 

Now those tides might be turning. More climate-related lawsuits are getting filed, particularly in the Global South. Governments, nonprofits, and citizens in the most climate-exposed nations continue to test new legal arguments in new courts, and some of those courts are showing a new willingness to put nations and their industries on the hook as a matter of human rights. In addition, the science of figuring out exactly who is to blame for specific weather disasters, and to what degree, is getting better and better. 

It’s true that no court has yet held any climate emitter liable for climate-related damages. For starters, nations are generally immune from lawsuits originating in other countries. That’s why most cases have focused on major carbon producers. But they’ve leaned on a pretty powerful defense. 

While oil and gas companies extract, refine, and sell the world’s fossil fuels, most of the emissions come out of “the vehicles, power plants, and factories that burn the fuel,” as Michael Gerrard and Jessica Wentz, of Columbia Law School’s Sabin Center, note in a recent piece in Nature. In other words, companies just dig the stuff up. It’s not their fault someone else sets it on fire.

So victims of extreme weather events continue to try new legal avenues and approaches, backed by ever-more-convincing science. Plaintiffs in the Philippines recently sued the oil giant Shell over its role in driving Super Typhoon Odette, a 2021 storm that killed more than 400 people and displaced nearly 800,000. The case relies partially on an attribution study that found climate change made extreme rainfall like that seen in Odette twice as likely. 

IVAN JOESEFF GUIWANON/GREENPEACE

Overall, evidence of corporate culpability—linking a specific company’s fossil fuel to a specific disaster—is getting easier to find. For example, a study published in Nature in September was able to determine how much particular companies contributed to a series of 21st-century heat waves.

A number of recent legal decisions signal improving odds for these kinds of suits. Notably, a handful of determinations in climate cases before the European Court of Human Rights affirmed that states have legal obligations to protect people from the effects of climate change. And though it dismissed the case of a Peruvian farmer who sued a German power company over fears that a melting alpine glacier could destroy his property, a German court determined that major carbon polluters could in principle be found liable for climate damages tied to their emissions. 

At least one lawsuit has already emerged that could test that principle: Dozens of Pakistani farmers whose land was deluged during the massive flooding events of 2022 have sued a pair of major German power and cement companies.

Even if the lawsuit fails, that would be a problem with the system, not the science. Major carbon-polluting countries and companies have a disproportionate responsibility for climate-change-powered disasters. 

Wealthy nations continued to encourage business practices that pollute the atmosphere, even as the threat of climate change grew increasingly grave. And oil and gas companies remain the kingpin suppliers to a fossil-fuel-addicted world. They have operated with the full knowledge of the massive social, environmental, and human cost imposed by their business while lobbying fiercely against any rules that would force them to pay for those harms or clean up their act. 

They did it. They knew. In a civil society where rule of law matters, they should pay the price. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.