Digital Goods Could Now Face Tariffs

For nearly three decades, the nations of the World Trade Organization have protected software and digital downloads from duties and tariffs. But what had been a rule is now negotiable.

The WTO’s 14th Ministerial Conference on March 26-29 in Yaoundé, Cameroon, “ended in impasse, after an agreement among 164 WTO members to extend the Moratorium on Customs Duties on Electronic Transmissions to December 31, 2030, was blocked by Brazil and Turkey,” wrote the Office of the United States Trade Representative in a release.

The moratorium expired on March 31. The lapse does not immediately change how most businesses operate, but it removes a foundational protection for cross-border digital products, opening the door to tariffs on software, downloads, and potentially SaaS platforms.

For ecommerce businesses, the impasse may foretell a shift away from global uniformity toward fragmented, country-specific rules.

History

Since 1998, WTO members have agreed not to charge tariffs on “electronic transmissions.” While the term was never precisely defined, it has generally covered everything from stock photography and streaming video to ecommerce software.

The no-tariff-or-duty agreement was, at least originally, temporary by design. Members renewed it every two years, creating a stable, if somewhat fragile, foundation for global digital commerce and software distribution.

The Trump Administration has worked to make the moratorium permanent, striking individual deals with countries.

Most member states in developed economies have agreed.

At its March conference, WTO member nations did not maintain the moratorium on digital transaction duties. Photo: WTO.

Disagreement

The WTO, however, operates by consensus. All members must agree to extend agreements, such as the moratorium on electronic transmissions. This time, they did not.

A relatively small group of member nations, including Brazil and Turkey, blocked renewal. Their position reflects a broader divide: many developing countries believe they are forgoing potential tax revenue and limiting their ability to regulate domestic digital markets.

The disagreement is perhaps less about the mechanics of digital ecommerce and more about control of the digital economy: who benefits, who collects revenue, and who sets the rules.

While not directly related, the lapse is at least adjacent to ongoing concerns around cryptocurrencies. The WTO typically and correctly treats digital currency as a financial asset or part of financial services, and not as a “digital good” crossing a border in the same way as a downloaded file. Nonetheless, there is at least a philosophical and economic connection.

Both software and digital currencies move value across borders without customs checkpoints. For governments seeking more direct administrative control, this raises concerns.

National Alliances

The expiration introduces uncertainties.

First, the WTO no longer blocks digital tariffs. Countries can impose duties on downloads, media, and potentially digital services. Whether and how they will do so remains unclear. One open question concerns AI: is using an AI system an “electronic transmission” under the rule?

Second, compliance may become complex. Merchants selling digital products or relying on cross-border SaaS tools could face different tax treatments, reporting requirements, or definitions of what constitutes a digital import.

“Fortunately, the United States has secured commitments from dozens of countries — and nearly all of our major trading partners — not to impose tariffs on U.S. digital transmissions. If the WTO cannot achieve this commonsense aim, the United States will work outside of the WTO with all interested partners to get it done. To that end, the United States invites all trading partners to commit to a plurilateral, ecommerce moratorium agreement,” said U.S. Trade Ambassador Jamieson Greer.

Essentially, without a WTO rule, digital trade will increasingly depend on regional agreements, deals, and country-specific policies. Ecommerce begins to resemble other regulated domains, such as privacy, where rules vary by market.

Operations

The potential challenges arising from the lapsed moratorium are not only financial but operational.

Determining where a digital transaction occurs, whether at the buyer’s location, the seller’s headquarters, or the server hosting the service, is not always straightforward. Jurisdictions may apply different standards, creating uncertainty for merchants operating across borders.

Payments and billing systems could also require adjustment. In some cases, platforms or payment processors may be responsible for collecting and remitting any applicable duties, as is the case with value-added tax in many regions today. That shift could introduce additional fees or administrative steps, particularly for smaller merchants without dedicated tax resources.

Definitions will matter. One country may classify a SaaS subscription as an imported digital good, while another may not. Over time, the inconsistencies may require companies to adopt more localized pricing, infrastructure, or compliance strategies.

Don’t Go Chasing AI Yet: A Framework for Prioritizing SEO vs. AI Search via @sejournal, @hethr_campbell

Everyone is scrambling to incorporate AI. But what takes priority?

Is generative engine optimization (GEO) replacing traditional SEO?

Should you shift budget from traditional SEO to AI content experiments?

Watch this on-demand SEO webinar to see how to prioritize SEO vs. AI search based on your business model.

Before You Reallocate SEO Budget, Validate Where AI Will Drive Incremental Growth In Channel Mix

In this session, DAC’s Alex Hernandez, Associate Director of SEO, and Orli Millstein, Director of Content Strategy, challenge the assumption that more AI optimization automatically equals more growth. Instead, you’ll see how business model, product complexity, and customer journey determine whether AI visibility should be accelerated, balanced, or deprioritized.

You’ll walk away with a structured way to evaluate strategic fit, content readiness, and revenue impact before reallocating budget or rewriting your roadmap.

Watch the on-demand webinar now to build an AI search strategy that strengthens performance rather than dilutes it!

Your Owned Content Is Losing To A Stranger’s Reddit Comment via @sejournal, @DuaneForrester

The next time you ask an AI what product to buy, which agency to hire, or which software platform actually works, pay attention to where the answer comes from. Increasingly, it does not come from the vendor’s own website. It comes from a stranger’s Reddit comment written eighteen months ago, upvoted 847 times by people who tried the thing themselves.

This is not an accident. It’s architecture.

The Reddit Effect

The financial architecture behind Reddit’s presence in AI answers became public in early 2024. Google signed an initial licensing agreement with Reddit worth a reported $60 million per year, with total disclosed licensing across multiple AI companies reaching $203 million. That arrangement gave Google real-time access to Reddit’s posts and comments for training its AI models and powering AI Overviews, and the terms are now being renegotiated upward. Reddit executives have said current agreements undervalue the platform’s discussions, which now fuel everything from ChatGPT to Google’s generative answers.

The citation data confirms how central Reddit has become. Between August 2024 and June 2025, Reddit was the most cited domain in both Google AI Overviews and Perplexity, and the second most cited source in ChatGPT, trailing only Wikipedia. In Google’s AI Overviews specifically, Reddit citations grew 450% between March and June 2025. A separate study from early 2024 found Reddit appearing in results more than 97% of the time for queries related to products and reviews.

Reddit’s visibility in traditional search has fluctuated over this period, with organic rankings dropping noticeably in early January 2025. But its foothold in the AI answer layer has proven more durable than its SERP position, because these are different systems pulling from the same data source. Reddit’s hold on the AI layer reflects something structural about the content itself, not just a licensing arrangement.

Why Community Signals Work For AI

To understand why community platforms have become load-bearing infrastructure for AI answers, you need to hold two ideas at once.

First, community signals enter AI systems through two distinct pathways, not one. In the parametric pathway, community content gets baked into model weights during training and becomes part of what the model knows before anyone types a query. In the retrieval pathway, community content gets pulled in real time through retrieval-augmented generation (RAG) when the model needs current, specific, or contested information. Brands absent from community platforms before a model’s training cutoff face a significantly harder problem than brands simply absent from recent crawls. They are invisible at both layers simultaneously.

Second, the quality filtering that community platforms apply, through upvotes, accepted answers, reply chains, and sustained engagement, functions as a proxy signal that training pipelines have learned to weight. OpenAI’s training data hierarchy explicitly places Reddit content with three or more upvotes at Tier 2, directly below Wikipedia and licensed publisher partners. A heavily upvoted Reddit thread is treated as more credible input than most published content on the open web, because it carries the accumulated validation of hundreds or thousands of independent human judgments.

When multiple independent voices converge on the same recommendation across a thread, that convergence pattern looks different to a retrieval system than a single authoritative publication making the same claim. It is the AI equivalent of a strong link graph: distributed, uncoordinated agreement that no single actor manufactured. About 48% of AI citations now come from community platforms like Reddit and YouTube, with 85% of brand mentions originating from third-party pages rather than owned domains. The model is telling you something about where it trusts the signal.
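
That weighting is easy to picture in code. Below is a minimal, illustrative Python sketch of the retrieval pathway: score community snippets by a blend of query similarity and accumulated upvotes, then stuff the winners into a prompt. The corpus, the log-damped upvote weighting, and the scoring formula are assumptions for illustration, not any vendor’s actual pipeline.

```python
# Toy retrieval-augmented generation (RAG) step: rank community snippets
# by similarity * community validation, then build a prompt from the top hits.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity over bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical community snippets with their accumulated upvotes.
corpus = [
    {"text": "great for small teams easy onboarding", "upvotes": 847},
    {"text": "enterprise pricing is confusing", "upvotes": 212},
    {"text": "support responds fast to small teams", "upvotes": 35},
]

def retrieve(query: str, k: int = 2):
    q = Counter(query.lower().split())
    def score(doc):
        sim = cosine(q, Counter(doc["text"].split()))
        # Assumed proxy: damp raw upvotes with log so one viral thread
        # cannot dominate relevance entirely.
        return sim * math.log1p(doc["upvotes"])
    return sorted(corpus, key=score, reverse=True)[:k]

context = "\n".join(d["text"] for d in retrieve("tool for small teams"))
print(f"Answer using this community context:\n{context}")
```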

The Manipulation Risk

Any system that rewards community consensus will attract people who want to manufacture it, and this one is no exception. The SEO parallel is exact: The same logic that made link spam profitable for decades is now making fake community engagement attractive to anyone who understands how AI systems weigh these signals.

The Trap Plan incident in late 2025 is the clearest recent case study. A marketing firm posted approximately 100 fake organic comments promoting a game on Reddit, then published a blog post documenting the campaign’s approach. The screenshots circulated everywhere. The post was ultimately deleted, but the reputational damage was not. A thread naming the company indexed in Google and sat in search results alongside legitimate coverage, visible to every potential customer searching the brand.

The detection infrastructure is more robust than in the early link spam era. Reddit’s automated systems flag coordinated inauthentic behavior through patterns in posting timing, account age, karma accumulation, and comment structure, and moderator communities actively watch for coordinated campaigns. The community itself maintains a strong norm against manufactured consensus, and the backlash when a campaign is exposed tends to be proportional to how authentic it claimed to be.

There is also a structural dimension that goes beyond individual campaigns. Research by Originality.ai found that 15% of Reddit posts in 2025 were likely AI-generated, up from 13% in 2024. That is not just brands gaming the system. It is a broader contamination of the community signal itself, creating a feedback loop where AI trains on Reddit content that increasingly contains AI-generated material designed to look like human consensus. The argument for building authentic community presence now, before detection systems become more aggressive about filtering synthetic signals, is a strategic one, not a moral one. Manufactured signals degrade faster than authentic ones, and the penalty when they collapse is worse than the benefit while they worked.

What Brands Should Actually Do

The practical implication is not “post more on Reddit.” It is more precise than that.

Monitor brand mentions across Reddit, Stack Overflow, Quora, and review platforms not as a reputation exercise but as entity intelligence. The narrative that forms in community discussions, the specific language, the repeated associations, the persistent objections, is the narrative AI systems are more likely to reproduce than anything on your own website. If community threads consistently describe your enterprise product as “great for small teams,” that characterization will surface in AI answers regardless of how your positioning page reads.

Ensure subject matter experts are participating in relevant communities under their real identities, contributing answers to questions they actually know well. The upvote accumulation those answers generate is a durable quality signal that persists across training cycles. One genuinely helpful response in a relevant technical subreddit or a well-supported Stack Overflow answer does more long-term structural work than ten pieces of owned content, because it carries community validation that owned content cannot provide.

Create content that community members actively want to reference. Original research, specific benchmarks, documented case studies with real numbers, these are the formats that generate organic community citations, which in turn generate the kind of third-party mentions that AI systems treat as consensus rather than marketing. A practical rule of thumb that holds in community engagement generally: 80% of participation should contribute genuine value with no promotional intent, and the 20% that mentions your product should only appear when it is the honest answer to the question being asked.

Think of community presence as a context moat with a long construction timeline. Unlike most marketing assets, authentic community reputation compounds slowly and is genuinely difficult for competitors to replicate quickly. A brand that has been a good-faith participant in its relevant communities for two years has something that cannot be acquired in a quarter.

The Review Layer

Most brands managing reviews understand that aggregate star ratings affect purchase decisions. Fewer understand that the specific review content, the language customers use, the features they praise or criticize, the comparisons they draw to competitors, is increasingly the raw material for how AI describes your brand at the moment of recommendation.

The numbers make the stakes concrete. Domains with profiles on review platforms are three times more likely to be chosen by ChatGPT as a source than sites without such a presence. In a G2 survey of B2B software buyers in August 2025, 87% reported that AI chatbots are changing how they research products, and half now start their buying journey in an AI chatbot rather than Google, a 71% increase in just four months. When a procurement director asks an AI to recommend CRM options for a 50-person team, the answer draws from review platform content, not from vendor websites.

Here is where the landscape shifts in a way that most review management programs have not caught up with yet. Not all review platforms are accessible to AI retrieval systems, and the differences are significant.

A June 2025 analysis of 456,570 AI citations found that review platforms divide into three distinct categories based on crawler access policies. Platforms like Clutch and SourceForge allow full crawler access, and their content surfaces regularly in AI-generated answers. Platforms like G2 and Capterra operate with selective access that permits some retrieval. Major platforms (Yelp is an example) block AI crawlers at the robots.txt level, which means reviews written there, however numerous or positive, are structurally unavailable to AI retrieval at the point of recommendation.
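
Checking where a given platform falls is straightforward. Below is a minimal check using Python’s standard library; the crawler tokens are real user-agent names, but the site URL is a placeholder, and access policies change, so treat any output as a point-in-time reading rather than a verdict.

```python
# Check whether a named AI crawler is permitted to fetch a page,
# according to the site's live robots.txt. Standard library only.
from urllib.robotparser import RobotFileParser

def crawler_allowed(site: str, user_agent: str, path: str = "/") -> bool:
    rp = RobotFileParser()
    rp.set_url(f"{site.rstrip('/')}/robots.txt")
    rp.read()  # fetches and parses the live robots.txt
    return rp.can_fetch(user_agent, f"{site.rstrip('/')}{path}")

# "https://www.example.com" is a placeholder, not a real review platform.
for bot in ["GPTBot", "PerplexityBot", "ClaudeBot"]:
    print(bot, crawler_allowed("https://www.example.com", bot))
```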

The citation data reflects this directly. For Perplexity, 75% of review site citations in the software category come from G2. Clutch dominates AI citations in the agency and digital services category. The market prominence of a review platform and its accessibility to AI crawlers are different variables, and review management strategy that conflates them is directing effort toward platforms where the AI visibility signal cannot be retrieved regardless of review volume.

This is not an argument that major platform reviews are worthless. They still matter significantly for direct consumer decision-making, traditional search, and brand reputation overall. It is an argument that the AI visibility value of a review depends specifically on whether the platform permits retrieval, and that understanding has material consequences for where teams prioritize cultivating review volume when AI answer visibility is the goal.

One additional layer of complexity: robots.txt compliance among AI crawlers is not guaranteed. Analysis by Tollbit found that 13.26% of AI bot requests ignored robots.txt directives in Q2 2025, up from 3.3% in Q4 2024. The boundary between “blocked” and “accessible” is not as clean in practice as it is in policy. The implication is to treat your entire review footprint as potentially accessible to AI retrieval while being deliberate about which platforms receive active cultivation for AI visibility specifically.

The Broader Picture

Community presence has always been a trust signal. What has changed is that the systems making purchase recommendations at scale are now reading those signals directly, at the platform level, and weighting them above the content brands produce about themselves.

SEO professionals who have spent years optimizing owned content for search visibility now face a layer of visibility that operates on fundamentally different inputs. The link-building parallel is not rhetorical. Just as the profession eventually accepted that links from authoritative external sources outweigh on-page optimization in many contexts, the community signal layer is demonstrating the same dynamic for AI-generated answers. Authority comes from outside the brand’s control, which means the work of building it looks less like content production and more like sustained, authentic participation in the places where buyers actually talk.

The brands that start building authentic community presence now are constructing a signal that compounds. Genuine community reputation is difficult to manufacture at scale, hard for competitors to replicate quickly, and structurally favored by the same AI systems that are increasingly the first stop in the purchase journey. Later entrants will find it expensive to match.

If you want to learn more about topics like these, take a look at my newest book on Amazon: The Machine Layer: How to Stay Visible and Trusted in the Age of AI Search. It’s written to help you understand the topics I cover here, learn more about LLMs and consumer behavior, and build ways to grow conversations within your organization; it also serves as a workbook, with multiple frameworks included.

Featured Image: ginger_polina_bublik/Shutterstock; Paulo Bobita/Search Engine Journal

Why Your New SEO Vendor Can’t Build on a Broken Foundation via @sejournal, @TaylorDanRW

There is a common expectation in SEO that needs to be challenged, and it usually appears as soon as a new agency or consultant takes over performance.

Many businesses assume that fresh expertise should lead to quick wins, as if changing vendors resets everything and removes the issues that held performance back before. This view ignores how search works and overlooks the lasting impact of previous decisions.

Quick wins can still exist, but they should be seen as small steps rather than complete solutions. Changes such as improving page titles, updating content, or fixing isolated issues can lead to short-term improvements, but they do not address deeper problems.

Relying too heavily on these quick fixes can create a false sense of progress while leaving the core issues unresolved.

Inheriting History Is Never Starting Fresh

A new SEO vendor does not start with a clean slate, and they are never working in isolation from what came before. They inherit the full history of the website, including past strategies, technical decisions, and content choices, whether those were effective or not. That inherited position becomes the real starting point, and in many cases, it is far more restrictive than stakeholders expect.

Poor SEO does not simply fail to deliver results; it often creates long-term problems that take time to fix. If a site has built up low-quality backlinks, published thin or duplicated content, or ignored technical issues, it develops a reputation that search engines take into account. This means that even strong improvements can take time to show results, as they must first counterbalance what already exists.

The impact of past decisions tends to build over time, shaping how a domain is viewed and ranked. Practices such as buying links at scale, creating large volumes of low-value pages, or focusing only on short-term gains often leave a lasting footprint. Search engines respond to this by becoming more cautious, which affects not just old content but also any new work that is introduced.

Technical debt is another major factor that often goes unnoticed until a new vendor begins to investigate properly. Many websites grow over time without clear structure or oversight, leading to issues such as broken internal links, inefficient crawl paths, duplicate content, and problems with how pages are rendered. These issues directly affect how search engines access and understand a site, which makes them a priority to fix before growth can happen.

Stabilization Comes Before Growth

The early stages of a new SEO engagement are usually focused on stabilizing the site rather than driving immediate growth. This involves detailed audits, identifying crawl and indexation issues, and making sure important pages are accessible and prioritized correctly. Although this work is essential, it does not always lead to instant improvements in rankings, which can be frustrating if expectations are not aligned.

Rebuilding trust is another key part of the process, especially if a site has used poor practices in the past. Search engines are designed to reward consistency and reliability over time, and trust cannot be restored through quick changes. It requires steady improvements in content quality, link profile, and overall site structure, supported by signals that show genuine value to users.

Brand Strength As A Limiting Factor

Brand strength also plays a larger role than many businesses realize, and its absence can limit SEO performance even when technical issues are addressed. Websites with little presence outside their own domain, few mentions across the web, and low branded search demand often struggle to compete. Search engines look for signals that a brand is recognized and trusted, which means visibility beyond the site itself matters.

A lack of investment in brand building creates additional work for any new SEO vendor, as they may need to introduce digital PR, content promotion, and strategies that increase visibility across relevant platforms. These efforts take time to build momentum and rarely deliver immediate results, which reinforces the need for a longer-term view.

Frustration often comes from the fact that much of the early work is not visible in the form of traffic or ranking gains. Audits, clean-up work, and structural improvements are not always obvious to stakeholders, but they are necessary to remove the barriers that limit performance. Without this foundation, any gains from new content or links are likely to be limited.

Accountability And Communication

Accountability still matters, and a new SEO vendor should be clear about what they are doing and why. They should explain the starting position, set realistic timelines, and outline the steps needed to improve performance. Clear communication helps build trust and ensures that stakeholders understand what progress looks like at each stage.

Realistic timelines are often longer than businesses expect, especially when there are significant issues to address. The first few months are usually focused on fixing problems and improving site health, followed by a period where early signals begin to improve. More noticeable growth in rankings and traffic often comes later, once the foundation is stronger and search engines respond to consistent improvements.

A shift in mindset is needed to get the most from SEO, moving away from the idea of quick fixes towards a more long-term approach. Past decisions, whether they involve shortcuts or a lack of investment, shape current performance and cannot be undone instantly. Accepting this reality allows businesses to focus on building sustainable growth rather than chasing immediate results.

The Long-Term View

Bringing in a new SEO vendor should be seen as the start of a process rather than the end of a problem. The best outcomes come from understanding the starting point, investing in the necessary work, and staying consistent over time. This approach creates the conditions for steady improvement rather than short bursts of activity that do not last.

The key point is that SEO reflects both history and current effort, and ignoring the past leads to unrealistic expectations. A new vendor can bring structure, expertise, and direction, but they cannot remove the impact of previous actions overnight. What they can do is build a stronger position over time, provided they are given the space and support to do it properly.

Featured Image: Anton Vierietin/Shutterstock

Google’s CEO Predicts Search Will Become An AI Agent Manager via @sejournal, @martinibuster

In a recent interview, Google’s CEO, Sundar Pichai, explained how search is changing in response to advances in AI. The discussion centered on a simple question: If AI can act, plan, and execute, then what role will search play in the future?

Information Queries May Become Agent AI Search

The interviewer asked whether search remains a product or becomes something else as AI systems begin handling tasks instead of returning results.

They asked:

“What do you view as a future of search? Is it a distribution mechanism? Is it a future product? Is it one of N ways people are going to interact with the world?”

Had Pichai been interviewed by members of the publishing and SEO community, his answer might have received some pushback. He answered that search does not get replaced but continues to expand as new capabilities are introduced and user expectations change.

He said:

“I feel like in search, with every shift, you’re able to do more with it.

And we have to absorb those new capabilities and keep evolving the product frontier.

If it’s mobile, the product evolved pretty quickly, you’re getting out of a New York subway, you’re looking for web pages, you want to go somewhere, how do you find it? So you’re constantly shifting, people’s expectations shift, and you’re moving along.

If I fast forward, a lot of what are just information seeking queries will be agentic search. You will be completing tasks, you have many threads running.”

In the first example of a person coming out of a New York subway, yes, someone may search for a web page, but will Google show the user a web page or treat it like data by summarizing it?

The second example completely removes the user from search and inserts agents in the middle. That scenario implicitly treats web pages as data.

Will Search Exist In Ten Years?

Pichai was asked what the future of search will be like in ten years. His answer suggests that the future of search will involve many information-seeking queries being handled as tasks carried out by agentic AI systems. Furthermore, search will be more like an orchestration layer that sits between the user and AI agents.

The exact question he was asked is:

“Will search exist in ten years?”

Google’s CEO responded:

“It keeps evolving. Search would be an agent manager, right, in which you’re doing a lot of things.

I think in some ways, I use Antigravity today, and you have a bunch of agents doing stuff.

And I can see search doing versions of those things, and you’re getting a bunch of stuff done.”

At this point, the interviewer tried to get Pichai to return to the question of the actual search paradigm and whether it will exist in ten years. Pichai declined to expressly state whether the search paradigm will still exist.

He continued his answer:

“Today in AI mode in search, people do deep research queries. So that doesn’t quite fit the definition of what you’re saying. But kind of people adapted to that.

So I think people will do long-running tasks, can be asynchronous.”

What he described is a version of search that manages actions across multiple steps, where multiple processes can run at once instead of returning a single set of ranked results. And yet, it’s weirdly abstract because he’s talking about queries but fails to mention websites or web pages in that specific context.

What’s going on? His next answer brings it into sharper focus.

Who Is The Flea And Who Is The Dog?

The interviewer picked up on Pichai’s mention of adaptation, made an analogy to evolution, and then asked:

“It’s almost like, does that former version or paradigm eventually go away? And what was search becomes an agent and your future interface is an agent, and the search box in ten years or n years is no longer the–“

Pichai interrupted the interviewer to say that it’s no longer possible to look ahead five or ten years because the models are changing, what people do is rapidly changing, and given that pace, the only thing to do is to embrace it.

He explained:

“The form factor of devices are going to change. I/O is going to radically change. And so …I think you can paralyze yourself thinking ten years ahead. But we are fortunate to be in a moment where you can think a year ahead, and the curve is so steep. It’s exciting to just do that year ahead, right?

Whereas in the past, you may need to sit and envision five years out, unlike the models are going to be dramatically different in a year’s time.

…I think it’ll evolve, but it’s an expansionary moment. I think what a lot of people underestimate in these moments is, it feels so far from a zero-sum game to me, right? The value of what people are going to be able to do is also on some crazy curve, right?

I think the more you view it as a zero-sum game, it looks difficult. It can become a zero-sum game if you’re innovating or the product is not evolving.

But as long as you’re at the cutting edge of doing those things, and we’re doing both search and Gemini, and so they will overlap in certain ways. They will profoundly diverge in certain ways, right? And so I think it’s good to have both and embrace it.”

What Google’s CEO is doing is rejecting the possibility of obsolescence by deliberately focusing on competitive agility and embracing uncertainty as a strategic advantage.

That might work for Google, but what about websites?

I think businesses also need to embrace competitive agility and get out of the mindset of being fleas on the dog. And yet, online businesses, publishers, and the SEO community are not fleas, because Google itself is the one feeding off the web’s content.

What About Websites?

The interview lasted for over an hour, and at no point did Pichai mention websites. He mentioned web pages twice, once as something to understand with technology and once in the example of a person emerging from a subway who is looking for a web page. In both of those instances, the context was not Google Search looking for or fetching a web page in response to a query.

Given that Google Search is used by billions of people every day, it’s a bit odd that websites aren’t mentioned at all by the CEO of the world’s most successful search engine.

Breaking Content & SEO Silos To Build Entity Authority in AI Search

This post was sponsored by Victorious. The opinions expressed in this article are the sponsor’s own. 

Improving search visibility across traditional and AI search requires evolving our methods and updating how teams work together.

Content teams and SEO teams have always needed each other. But with AI search raising the bar on entity authority, the cost of operating in silos has never been higher. This framework is how you close that gap.

Why AEO Makes SEO & Content Collaboration Non-Negotiable

Historically, content and SEO teams have both pursued organic visibility, though they often worked independently. While it’s always been ideal for these teams to collaborate effectively, with answer engine optimization (AEO), it’s more critical than ever that they work together to strengthen a site’s entity associations and improve its retrieval opportunities.

What Is AEO?

AEO, which is also called generative engine optimization (GEO), is the process of improving a website’s content and technical foundations to make it easier for AI crawlers to read and extract content. AEO aims to improve brand citations and mentions and requires SEO and content teams to work together to improve entity targeting, semantic associations, content quality, content comprehensiveness, and content structure, among other things.

Without entity-level coordination, brands may fail to gain traction in AI search surfaces and lose AI citation and mention opportunities to competitors. Let’s break it down. AI Overviews (those AI-generated snippets at the top of Google search results) cite websites that demonstrate concentrated authority (backed by external sources) on specific entities. Websites with consistent messaging around their core services and products, backed by external corroboration like backlinks and PR mentions, appear in knowledge panels and other search features. So, when content depth and external link validation operate independently, sites miss retrieval opportunities across AI-powered search.

Entities provide the framework for this collaboration. When content and SEO strategies align around building authority for the same entities, teams can execute coordinated work that strengthens both content comprehensiveness and external validation.

How Entities Provide a Shared Framework

Entities are distinct concepts that search systems can uniquely identify and connect. Unlike keywords, entities are semantic concepts with attributes and relationships. “Customer onboarding” as an entity connects to “user adoption,” “product activation,” “time to value,” and “customer success.” To get cited, brands need to build entity authority.

What Is Entity Authority?

Entity authority is the degree to which search systems recognize your brand as a credible, well-corroborated source on a specific entity. A site with strong entity authority for “resource planning” has comprehensive content on the topic, earns links from sources that also discuss it, and structures that content so search systems can map the relationships between related concepts.

Search systems evaluate entity authority on three dimensions:

  • Recognition: Can they identify which entities your content addresses?
  • Relationships: Do they understand how those entities connect?
  • Corroboration: Do external sources validate your entity representations?

These evaluation criteria create natural points of coordination. When both teams work toward the same entity authority goals, their work reinforces the same recognition, relationship, and corroboration signals that search systems use to evaluate expertise.

Why Neither Team Can Do This Alone

SEO teams could identify target entities and pursue entity-focused optimization independently. But without comprehensive content coverage, the technical infrastructure (schema, internal linking, site architecture) would connect thin, scattered content that doesn’t demonstrate depth. Conversely, content teams could create full-funnel entity coverage independently. But without the technical entity infrastructure and external corroboration through entity-relevant backlinks, the content lacks the structural and external signals that strengthen entity authority.

The coordination creates what neither discipline can build alone: comprehensive content backed by both technical entity infrastructure and external sources.

Putting Entity Authority Into Practice

Start by choosing 3–5 core topics your business wants to be known for, then consistently build content and links around those topics. Instead of spreading effort across dozens of disconnected ideas, SEO and content teams focus on reinforcing the same few areas until search systems clearly associate your brand with them.

Entities work as an organizing principle because they’re specific enough to guide both disciplines. Instead of content planning around vague topics and SEO chasing domain authority, both teams can focus on, say, “resource planning,” specifically.

Content creates guides, research, and comparisons on resource planning. SEO builds links from publications discussing resource planning. Both reinforce the same entity signals, and the compounding effect of that alignment is what separates brands that gain AI retrieval from those that don’t.

What an Entity-Focused Collaboration Workflow Looks Like

We propose a four-phase workflow that enables teams to test entity strategies and adapt based on performance.

Image created by Victorious, March 2026

Phase 1: SEO Conducts Entity Research

SEO begins by identifying entities aligned to the business’s services or products. Through vector embedding analysis (using tools like Google’s Natural Language API or Semrush to create a numerical representation of semantic associations), the team identifies related topics (entity associations) that would build authority for these main entities. This analysis reveals patterns of topic similarity and competitive gaps.
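
To make the embedding step concrete, here is a small Python sketch of the similarity comparison it performs. The vectors are synthetic stand-ins constructed to mimic related and unrelated topics; a real run would use embeddings from a model or a tool like those named above, and the entity names are just the running example.

```python
# Rank candidate entity associations by cosine similarity to a main entity.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(42)
project_management = rng.normal(size=128)  # main entity (synthetic vector)
candidates = {
    # Related entities: share most of the main entity's direction.
    "resource planning": project_management + 0.4 * rng.normal(size=128),
    "capacity management": project_management + 0.6 * rng.normal(size=128),
    # Unrelated entity: an independent vector, so similarity near zero.
    "office catering": rng.normal(size=128),
}
ranked = sorted(candidates.items(),
                key=lambda kv: cosine_similarity(project_management, kv[1]),
                reverse=True)
for name, vec in ranked:
    print(f"{name}: {cosine_similarity(project_management, vec):.2f}")
```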

During this phase, SEO also analyzes link velocity requirements for each main entity, with the understanding that link building will be distributed across the entity cluster. This entity cluster would include pages with different search intents that cover different aspects of the same concept (entity). The output is a shortlist of main entities with their associated entities, aligned with business objectives and realistic resource constraints.

For a project management platform, the main entity might be “project management,” with associated entities like “resource planning,” “capacity management,” and “project forecasting.” Focusing on a limited number of main entities allows both teams to commit sufficient resources to build depth rather than scattering effort across too many targets.

Phase 2: SEO and Content Teams Analyze Content Gaps and Prioritize Impact

The teams review existing content coverage for each target entity together. They identify gaps across the buyer journey (awareness, consideration, decision) and prioritize which assets to create based on competitive need, business impact, and available resources. This isn’t content asking “what should we write?” or SEO saying “we need these pieces.”

Both teams evaluate comprehensiveness together:

  • Does the entity coverage span formats (research, guides, comparisons, how-tos)?
  • Does it address different stages of the buyer journey?
  • Does it create the depth that AI systems recognize as authority?

At this point, the teams also align on success metrics. Each team needs to agree on what entity authority looks like for the target entities and which signals will indicate progress, taking into account current content performance. This shared measurement framework ensures both teams work toward the same definition of success.

At the end of this phase, the teams should have a prioritized content plan showing which assets support which entities, target publication dates, and metrics for measuring entity authority growth.

Where Most Teams Break Down

Content and SEO often report into different leaders, operate on different timelines, and measure success differently. Content teams may focus on production and engagement, while SEO teams may focus on rankings and links. Without a shared framework, priorities drift and execution becomes fragmented.

Aligning around entities gives both teams a common target, so decisions about what to create, what to promote, and what to fix all point in the same direction.

Phase 3: Both Teams Execute on the Plan

Content creates and publishes the planned assets. SEO implements schema markup to highlight entity relationships, analyzes and fixes internal linking between entity clusters, and executes backlink building using entity-relevant anchor text and targeting publications that discuss those entities.
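
As one concrete illustration of the schema work, the snippet below marks up a guide using schema.org’s `about` and `mentions` properties, which exist for exactly this entity-relationship purpose. The headline, entity names, and `sameAs` URL are hypothetical placeholders from the running example, not prescribed markup.

```html
<!-- Illustrative JSON-LD: declare the page's main entity ("about") and
     the related entities in its cluster ("mentions"). Values are examples. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "A Complete Guide to Resource Planning",
  "about": {
    "@type": "Thing",
    "name": "Resource planning",
    "sameAs": "https://en.wikipedia.org/wiki/Resource_management"
  },
  "mentions": [
    { "@type": "Thing", "name": "Capacity management" },
    { "@type": "Thing", "name": "Project forecasting" }
  ]
}
</script>
```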

When prioritizing internal linking fixes, SEO focuses first on pages that already have topical relevance to the target entity but lack incoming links from related content, as these represent the fastest wins for entity cluster cohesion. For anchor text, the goal is to show natural variation rather than exact-match repetition to avoid over-optimization. Links also need not point only to newly published content. What matters is that link velocity, anchor text, and link sources all reinforce the same entity associations that the content is building.

The goal here is entity-level coordination over piece-level coordination. Content and SEO teams work toward improving entity authority together.

Phase 4: Teams Assess Performance and Refine Plan

Together, the teams track implementation progress and entity authority signals to determine whether their efforts are improving brand visibility and, ultimately, the bottom line for the business.

They’ll monitor ranking increases for related terms, since organic visibility influences AI citation opportunities. They also track AI Overview citations when users search entity-related queries (e.g., “[entity] best practices,” “[entity] solutions”) and frequency of brand mentions in AI-generated responses.

Traditional metrics like traffic and conversions emerge later as lagging indicators. Teams use the early signals to refine the plan: maintain the current approach, accelerate investment in high-performing entity clusters, or adjust tactics for underperforming entities.

Example: Resource Planning Entity in Action

Vector embedding analysis at a SaaS project management platform reveals “resource planning” as an entity association with strong similarity to their main “project management” entity. Building authority on resource planning would strengthen their overall project management authority. Competitive analysis shows they need consistent link velocity over six months to reach parity. (This six-month timeline assumes a moderately competitive landscape. In more saturated categories, building to parity may take longer, and teams should calibrate expectations based on their specific competitive environment before committing to a roadmap.)

A joint review of existing coverage reveals one surface-level blog post on resource planning basics. Competitive sites have research on resource allocation trends, comprehensive guides on capacity planning, comparison content evaluating resource planning approaches, and implementation how-tos. The gap is clear.

Together, they prioritize:

  • Awareness: Original research on resource planning practices
  • Consideration: A comprehensive resource planning guide
  • Consideration: A comparison of resource planning methodologies
  • Decision: Implementation guides for different team structures

Over three months, the content team publishes the planned assets while SEO implements schema, tightens internal linking across the entity cluster, and builds links from project management publications to pages across the site, not just the new content. They start looking for organic ranking changes, branded traffic changes, and AI citation rates.

After four months, visibility increases for resource planning queries across multiple pages, not just the newly published content. The research piece earns two AI Overview citations. These results reflect the entity strategy working as designed: content depth, technical infrastructure, and external corroboration all reinforcing the same entity signals together. Neither outcome would have happened on the same timeline if the teams had executed independently. That’s the compounding effect of entity-level coordination in practice.

It’s Time To Move Toward Structured Experimentation

Entity-focused collaboration isn’t a fixed formula but rather a framework for structured experimentation. Teams will need to test which entity associations drive the strongest authority signals, which content formats generate the most AI citations, and which link-building strategies accelerate entity recognition most effectively.

Though the workflow outlined here provides a starting structure, iteration is expected. You’ll likely find that entity clusters don’t build authority at the same pace, buyer journey stages that seem less critical may drive unexpected retrieval, link velocity requirements vary by competitive landscape, and the measurement signals themselves evolve as AI search capabilities change.

Flexibility is essential. Teams need space to test approaches, measure what works, and adapt quickly. Tighter coordination between content and SEO enables faster learning cycles. When both teams work from the same entity framework and shared success metrics, they can identify what’s working and shift resources accordingly. The brands that establish entity authority now, before AI search surfaces fully mature, will be significantly harder to displace later.


Image Credits

Featured Image: Image by Victorious. Used with permission.

OpenAI, Meta, ByteDance Lead AI Bot Traffic In Publishing via @sejournal, @MattGSouthern

Akamai analyzed AI bot activity by examining application-layer traffic from its bot management tools.

Commerce drew the most AI bot traffic at 48%. Media, which includes publishing, video, social media, and broadcasting, came second at 13%.

Publishing companies accounted for 40% of all AI bot activity in media, ahead of broadcast and OTT at 29%.

OpenAI generated the most AI bot traffic hitting media companies, with 40% of its media requests going to publishing companies. That’s partly because OpenAI runs multiple bots. GPTBot handles training, OAI-SearchBot powers AI search, and ChatGPT-User retrieves content in real time.

Meta and ByteDance were the second- and third-largest operators. Anthropic and Perplexity rounded out the top five at lower volumes.

Why Akamai Says Fetcher Bots Are The Bigger Concern

The report groups AI bots into four types based on behavior.

Training crawlers and fetchers account for most of the AI bot activity Akamai saw in media, which includes publishing. Training crawlers collect content to build language models. They made up 63% of AI bot activity targeting media in H2 2025.

Fetcher bots grab specific pages in real time when someone asks an AI chatbot a question. They made up 24%, and publishing accounted for 43% of that fetcher activity.

Akamai argues that fetcher bots are the more immediate revenue concern, even though training crawlers generate more total traffic. When a fetcher bot pulls an article to answer a chatbot query, the user gets the information without visiting the publisher’s site.
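
The distinction is operational, not just conceptual, because the two behaviors arrive under different user agents. A robots.txt sketch using OpenAI’s crawler tokens as described above might separate them like this; whether to allow or block each is a publisher’s policy call, not a recommendation:

```
# Training crawler: affects what future models know
User-agent: GPTBot
Disallow: /

# Search index bot: affects citations in AI search
User-agent: OAI-SearchBot
Allow: /

# Real-time fetcher: affects live chatbot answers
User-agent: ChatGPT-User
Allow: /
```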

How Publishers Are Responding

It’s worth noting that Akamai sells bot management tools, and the report’s recommendations point toward its own products and partners.

The most common responses among Akamai’s customers are deny (blocking requests outright), tarpit (holding connections open to waste bot resources), and delay (adding a pause before responding). One unnamed publisher chose tarpitting over blocking, controlled 97% of AI bot requests, and kept the door open to potential licensing deals.
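
For a sense of the mechanics, here is a toy sketch of the “delay” response as standard-library WSGI middleware in Python. Real bot management relies on far richer signals than a User-Agent substring; the bot list, pause length, and matching logic here are illustrative assumptions only.

```python
# Delay responses to suspected AI bots instead of blocking them outright.
import time
from wsgiref.simple_server import make_server

SUSPECTED_BOTS = ("GPTBot", "ClaudeBot", "Bytespider")  # illustrative list

def delay_bots(app, pause_seconds: int = 10):
    def middleware(environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "")
        if any(bot in ua for bot in SUSPECTED_BOTS):
            time.sleep(pause_seconds)  # hold the request before replying
        return app(environ, start_response)
    return middleware

def site(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

if __name__ == "__main__":
    make_server("", 8000, delay_bots(site)).serve_forever()
```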

The report argues against blanket blocking, saying some AI companies are willing to pay for content access and that blocking all bots removes that option.

Looking Ahead

The report’s top takeaway is the distinction between training crawlers and fetcher bots. Blocking a training crawler can influence how your content helps build future AI models. Blocking a fetcher bot affects whether your content appears in AI responses right now.


Featured Image: la pico de gallo/Shutterstock

The Download: water threats in Iran and AI’s impact on what entrepreneurs make

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Desalination plants in the Middle East are increasingly vulnerable 

As the conflict in Iran has escalated, a crucial resource is under fire: the desalination technology that supplies water in the region. 
 
President Donald Trump has threatened to destroy “possibly all desalinization plants” in Iran if the Strait of Hormuz is not reopened. The impact on farming, industry, and—crucially—drinking water in the Middle East could be severe. Find out why

—Casey Crownhart 

This story is part of MIT Technology Review Explains, our series untangling the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here. 

AI is changing how small online sellers decide what to make 

For small entrepreneurs, deciding what to sell and where to make it has traditionally been a slow, labor-intensive process. Now that work is increasingly being done by AI.   

Tools like Alibaba’s Accio compress weeks of product research and supplier hunting into a single chat. Business owners and e-commerce experts say they’re making sourcing more accessible—and slashing the time from product idea to launch.  

Read the full story on how AI is leveling the path to global manufacturing

—Caiwei Chen 

The gig workers who are training humanoid robots at home 

When Zeus, a medical student in Nigeria, returns to his apartment from a long day at the hospital, he straps his iPhone to his forehead and records himself doing chores.  
 
Zeus is a data recorder for Micro1, which sells the data he collects to robotics firms. As these companies race to build humanoids, videos from workers like Zeus have become the hottest new way to train them.   
 
Micro1 has hired thousands of them in more than 50 countries, including India, Nigeria, and Argentina. The jobs pay well locally, but raise thorny questions around privacy and informed consent. The work can be challenging—and weird. Read the full story.  

—Michelle Kim 

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released. 

The must-reads 

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 

1 Anthropic’s new model found security problems in every OS and browser 
Claude Mythos has been heralded as a cybersecurity “reckoning.” (The Verge) 
+ Anthropic is limiting the rollout over hacking fears. (CNBC) 
+ It’s also launching a project that lets Mythos flag vulnerabilities. (Gizmodo) 
+ Apple, Google, and Microsoft have joined the initiative. (ZDNET) 

2 Iranian hackers are targeting American critical infrastructure 
Their focus is on energy and water infrastructure. (Wired) 
+ They’re targeting industrial control devices. (TechCrunch) 

3 Google’s AI Overviews deliver millions of incorrect answers per hour 
Despite a 90% accuracy rate. (NYT $) 
+ AI means the end of internet search as we’ve known it. (MIT Technology Review) 

4 Elon Musk is trying to oust OpenAI CEO Sam Altman in a lawsuit 
As remedies for Altman allegedly defrauding him. (CNBC) 
+ Musk wants any damages given to OpenAI’s nonprofit arm. (WSJ $) 

5 ICE has admitted it’s using powerful spyware 
The tools can intercept encrypted messages. (NPR) 
+ Immigration agencies are also weaponizing AI videos. (MIT Technology Review) 

6 Greece has joined the countries banning kids from social media 
Under-15s will be blocked from 2027. (Reuters) 
+ Australia introduced the world’s first social media ban for children. (Guardian) 
+ Indonesia recently rolled out the first one in Southeast Asia. (DW) 
+ Experts say they’re a lazy fix. (CNBC) 

7 Intel will help Elon Musk build his Terafab in Texas 
They aim to manufacture chips for AI projects. (Engadget) 
+ Musk says it will be the largest-ever semiconductor factory. (Engadget) 
+ Future AI chips could be built on glass. (MIT Technology Review) 

8 TikTok is building a second billion-euro data center in Finland 
It’s moving data storage for European users. (Reuters) 
+ Finland has become a magnet for data centers. (Bloomberg $) 
+ But nobody wants one in their backyard. (MIT Technology Review) 

9 Plans for Canada’s first “virtual gated community” have sparked a row 
The AI-powered surveillance system has divided neighbors. (Guardian) 
+ Is the Pentagon allowed to surveil Americans with AI? (MIT Technology Review) 

10 The high-tech engineering of the “space toilet” has been revealed 
Artemis II is the first mission to carry one around the world. (Vox) 

Quote of the day 

“This case has always been about Elon generating more power and more money for what he wants. His lawsuit remains nothing more than a harassment campaign that’s driven by ego, jealousy and a desire to slow down a competitor.” 

—OpenAI criticizes Musk’s legal action in an X post

One More Thing 

Inside the US government’s brilliantly boring websites 

You may not notice it, but your experience on every US government website is carefully crafted. 

Each site draws on an official web design system and a custom typeface. Together they aim to make government websites not only good-looking but accessible and functional for all. 

MIT Technology Review dug into the system’s history and features. Find out what we discovered

—Jon Keegan 

We can still have nice things 

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.) 

+ Rejoice in the splendor of the “Earthset” image captured by Artemis II. 
+ Meet the fearless cat chasing off bears. 
+ This document vividly explains what makes the octopus so unique. 
+ Revealed: the rhythmic secret that makes emo music so angsty

Mustafa Suleyman: AI development won’t hit a wall anytime soon—here’s why

We evolved for a linear world. If you walk for an hour, you cover a certain distance. Walk for two hours and you cover double that distance. This intuition served us well on the savannah. But it catastrophically fails when confronting AI and the core exponential trends at its heart.

From the time I began work on AI in 2010 to now, the compute used to train frontier AI models has grown by a staggering 1 trillion times—from roughly 10¹⁴ flops (floating-point operations, the core unit of computation) for early systems to over 10²⁶ flops for today’s largest models. This is an explosion. Everything else in AI follows from this fact.

The skeptics keep predicting walls. And they keep being wrong in the face of this epic generational compute ramp. Often, they point out that Moore’s Law is slowing. They also mention a lack of data, or they cite limitations on energy.

But when you look at the combined forces driving this revolution, the exponential trend seems quite predictable. To understand why, it’s worth looking at the complex and fast-moving reality beneath the headlines.

Think of AI training as a room full of people working calculators. For years, adding computational power meant adding more people with calculators to that room. Much of the time those workers sat idle, drumming their fingers on desks, waiting for the numbers to come through for their next calculation. Every pause was wasted potential. Today’s revolution goes beyond more and better calculators (although it delivers those); it is actually about ensuring that all those calculators never stop, and that they work together as one.

Three advances are now converging to enable this. First, the basic calculators got faster. Nvidia’s chips have delivered an over sevenfold increase in raw performance in just six years, from 312 teraflops in 2020 to 2,250 teraflops today. Our own Maia 200 chip, launched this January, delivers 30% better performance per dollar than any other hardware in our fleet. Second, the numbers arrive faster thanks to a technology called HBM, or high bandwidth memory, which stacks chips vertically like tiny skyscrapers; the latest generation, HBM3, triples the bandwidth of its predecessor, feeding data to processors fast enough to keep them busy all the time. Third, the room of people with calculators became an office and then a whole campus or city. Technologies like NVLink and InfiniBand connect hundreds of thousands of GPUs into warehouse-size supercomputers that function as single cognitive entities. A few years ago this was impossible.

These gains all come together to deliver dramatically more compute. Where training a language model took 167 minutes on eight GPUs in 2020, it now takes under four minutes on equivalent modern hardware. To put this in perspective: Moore’s Law would predict only about a 5x improvement over this period. We saw 50x. We’ve gone from two GPUs training AlexNet, the image recognition model that kicked off the modern boom in deep learning in 2012, to over 100,000 GPUs in today’s largest clusters, each one individually far more powerful than its predecessors.
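These ratios are easy to verify. Here is a minimal back-of-the-envelope sketch; the five-year span and the two-year Moore’s Law doubling period are our assumptions, inferred from the figures quoted above:

```python
# Back-of-the-envelope check of the training-speedup figures above.
minutes_2020 = 167        # training time on eight GPUs in 2020
minutes_now = 4           # "under four minutes" on modern hardware

observed = minutes_2020 / minutes_now
print(f"Observed speedup: ~{observed:.0f}x")   # ~42x, the "50x" ballpark

# Moore's Law alone, assuming a ~2-year doubling over a ~5-year span
# (both assumptions ours, not the author's exact inputs):
years, doubling = 5, 2.0
moore = 2 ** (years / doubling)
print(f"Moore's Law alone: ~{moore:.1f}x")     # ~5.7x, the "about 5x" figure
```

The gap between the two numbers is the point: most of the gain came from better interconnects, memory bandwidth, and utilization, not from transistor density alone.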

Then there’s the revolution in software. Research from Epoch AI suggests that the compute required to reach a fixed performance level halves approximately every eight months, much faster than the traditional 18-to-24-month doubling of Moore’s Law. The costs of serving some recent models have collapsed by a factor of up to 900 on an annualized basis. AI is becoming radically cheaper to deploy.
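To see how much faster an eight-month halving is than hardware improvement alone, compare the two cadences over the same window. A quick sketch; the four-year window and the 21-month midpoint of the 18-to-24-month range are our illustrative choices:

```python
# Compare the ~8-month halving in compute needed for fixed performance
# (the Epoch AI figure above) with a Moore's Law hardware cadence.
months = 48  # a four-year window, chosen for illustration

algo_gain = 2 ** (months / 8)     # 6 halvings -> ~64x less compute needed
moore_gain = 2 ** (months / 21)   # ~4.9x from hardware alone
print(f"Algorithmic efficiency over {months} months: ~{algo_gain:.0f}x")
print(f"Moore's Law over the same window: ~{moore_gain:.1f}x")
```

Because the two effects compound, cheaper algorithms running on faster hardware can cut serving costs by factors in the hundreds, which is consistent with the up-to-900x collapse cited above.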

The numbers for the near future are just as staggering. Consider that leading labs are growing capacity at nearly 4x annually. Since 2020, the compute used to train frontier models has grown 5x every year. Global AI-relevant compute is forecast to hit 100 million H100-equivalents by 2027, a tenfold increase in three years. Put all this together and we’re looking at something like another 1,000x in effective compute by the end of 2028. It’s plausible that by 2030 we’ll bring an additional 200 gigawatts of compute online every year—akin to the combined peak power demand of the UK, France, Germany, and Italy.
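As a sanity check, the 1,000x projection falls out of the quoted 5x annual growth rate alone. A minimal sketch; the implied mid-2024 starting point is our inference, not a figure from the text:

```python
import math

# Sanity-check "another 1,000x in effective compute by the end of 2028"
# against the quoted 5x annual growth in frontier training compute.
annual_growth = 5.0
target = 1000.0

years_needed = math.log(target) / math.log(annual_growth)
print(f"Years of 5x/year growth to reach 1,000x: ~{years_needed:.1f}")  # ~4.3
# Roughly mid-2024 through the end of 2028, consistent with the projection,
# before counting any algorithmic-efficiency gains on top.
```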

What does all this get us? I believe it will drive the transition from chatbots to nearly human-level agents—semiautonomous systems capable of writing code for days, carrying out weeks- and months-long projects, making calls, negotiating contracts, managing logistics. Forget basic assistants that answer questions. Think teams of AI workers that deliberate, collaborate, and execute. Right now we’re only in the foothills of this transition, and the implications stretch far beyond tech. Every industry built on cognitive work will be transformed.

The obvious constraint here is energy. A single refrigerator-size AI rack consumes 120 kilowatts, the draw of roughly 100 homes. But this hunger collides with another exponential: solar costs have fallen by a factor of nearly 100 over 50 years, and battery prices have dropped 97% over three decades. A pathway to clean scaling is coming into view.
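Translated into annual rates, those declines compound quickly. A rough sketch, using only the two figures quoted above:

```python
# Convert the quoted long-run cost declines into implied annual rates.
solar_factor, solar_years = 100, 50     # solar ~100x cheaper over 50 years
battery_drop, battery_years = 0.97, 30  # batteries 97% cheaper over 30 years

solar_annual = 1 - (1 / solar_factor) ** (1 / solar_years)
battery_annual = 1 - (1 - battery_drop) ** (1 / battery_years)
print(f"Solar: ~{solar_annual:.1%} cheaper per year")        # ~8.8%/yr
print(f"Batteries: ~{battery_annual:.1%} cheaper per year")  # ~11.0%/yr
```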

The capital is deployed. The engineering is delivering. The $100 billion clusters, the 10-gigawatt power draws, the warehouse-scale supercomputers … these are no longer science fiction. Ground is being broken for these projects now across the US and the world. As a result, we are heading toward true cognitive abundance. At Microsoft AI, this is the world our superintelligence lab is planning for and building.

Skeptics accustomed to a linear world will continue predicting diminishing returns. They will continue being surprised. The compute explosion is the technological story of our time, full stop. And it is still only just beginning.

Mustafa Suleyman is CEO of Microsoft AI.

New Ecommerce Tools: April 8, 2026

This week’s rundown of new services for ecommerce merchants includes updates on B2B tools, fulfillment, AI reporting, advertising platforms, agentic commerce, payments, affiliate marketing, and business formation services.

Got an ecommerce product release? Email updates@practicalecommerce.com.

New Tools for Merchants

Shopify extends native B2B features to all merchants. Shopify is extending its foundational B2B features to merchants on its Basic, Grow, and Advanced plans at no extra cost, letting them manage wholesale and direct-to-consumer operations without plugins or patchwork. Native features on those plans include company profiles for wholesale buyers, up to three custom catalogs with tailored pricing, volume discounts and quantity rules, vaulted credit cards, and payment terms.

Web page for Shopify's B2B and D2C management

Shopify: B2B and D2C management

ShipMonk opens fulfillment center designed for apparel brands. ShipMonk, a global fulfillment provider for ecommerce brands, has opened a center in Louisville, Kentucky. The facility has 406,000 square feet, 60 dock doors, and more than 300,000 storage locations. Per ShipMonk, the facility offers layouts optimized for high-SKU density, dedicated rework stations, on-site embroidery for premium customization, specialized workflows for wholesale complexity, and floor-ready presentation, as well as bespoke fulfillment services.

Swyft Filings launches an app to start an LLC inside ChatGPT. Swyft Filings, a provider of business formation services, has launched an app inside ChatGPT, enabling entrepreneurs to start an LLC, register a corporation, and handle other business formation tasks. According to Swyft Filings, the app leverages generative AI for real-time guidance, faster filings, and more intuitive access to business formation services. The app is now available in the ChatGPT store.

Affiliate platform Levanta acquires Perch+ for Amazon sellers. Levanta, an affiliate and creator platform for ecommerce, has acquired Perch+, an affiliate network for Amazon sellers. According to Levanta, the move enables Perch+ brands to work directly with 60,000 partners in Levanta’s marketplace, with support for Amazon attribution and creator connections. Brands can run paid placement campaigns, automate product sampling, and surface who’s talking about their brand across social, per Levanta.

Miva releases 26 R1 with an embedded AI reporting assistant. Ecommerce platform Miva has released its 26 R1 update, with an embedded reporting assistant to help merchants make smarter, data-driven decisions. At the center of the release is AI Insights, an in-line reporting assistant. Merchants can use natural-language prompts to query store data, generate performance summaries, analyze conversion rates, and identify top-performing products by category or date range, all without exporting reports or building complex dashboards, according to Miva.

Home page of Miva

Miva

Commercetools partners with TradeCentric on B2B commerce. TradeCentric, a B2B e-procurement provider, has partnered with Commercetools, an ecommerce platform for global enterprises. The companies state that the partnership enables seamless integration between commerce and e-procurement systems, eliminating manual order processing, reducing errors, and accelerating order-to-cash cycles. Commercetools customers gain access to these capabilities while integrating with major e-procurement systems, including the 220 procurement platforms supported by TradeCentric.

Bitly introduces AI-powered features for marketing analytics. Bitly, the URL shortener, has launched Assist and Weekly Insights for marketing. Bitly Assist is an AI-powered chat assistant built directly into the platform. It enables customers to ask questions conversationally about link and QR code performance. Weekly Insights highlights meaningful changes in link and QR code activity, identifying patterns across referrers, geographies, and devices.

Durable launches Discoverability for AI search. Durable, an AI business builder, has launched Discoverability, a feature that helps businesses get found on generative AI platforms such as ChatGPT, Gemini, Grok, and Perplexity. According to Durable, Discoverability provides (i) a single measure of how findable a business is across all online channels, (ii) guided suggestions to improve results over time, and (iii) rankings against competitors in genAI visibility.

Goflow introduces Order-Level P&L for real-time profit tracking. Goflow, a multichannel operating system for Amazon-first merchants, has launched Order-Level P&L for real-time profit tracking across orders, analytics, and reporting. The new feature estimates margins using inputs such as inventory batch costs and shipping rates. Each value is labeled by source, ensuring transparency between actual and estimated figures. The feature makes profit and margin visible at the order, SKU, and channel levels, helping sellers identify issues quickly, per Goflow.

Home page of Goflow

Goflow

BitRail launches merchant payment suite to help eliminate fees. Fintech company BitRail has announced an expanded suite of merchant payment services in partnership with Payment Lock, an enterprise security and payment processing platform. These tools build on BitRail’s core platform, which gives merchants branded checkout, branded digital wallets, and branded payment infrastructure. The features include compliant pricing for cash and credit, checkout and pay-now buttons, API integration, customer card vault, and reporting.

BQool brings Amazon AI advertising solutions to growing brands. BQool, a platform for Amazon sellers to automate operations, has announced the availability of its AI-powered advertising feature, designed to simplify campaign management and improve performance. The feature provides a one-click system that continuously analyzes large volumes of data, automates repetitive advertising tasks, and optimizes campaigns. The feature includes “auto-harvesting,” which identifies and adds high-performing keywords, and enables sellers to adopt intelligent automation while maintaining control and visibility, per BQool.

SDLC Corp launches free Odoo-WooCommerce connector. SDLC Corp, a technology integration provider, has launched a free connector that enables data synchronization between WooCommerce storefronts and Odoo back-office systems. According to SDLC Corp, the connector supports day-to-day operations with webhook-based event processing, scheduled jobs for inventory and catalog synchronization, configurable field mapping inside Odoo, and centralized logging with error-handling capabilities — all across single- and multi-store environments.

Criteo expands its Go self-service ad platforms to SMBs. Criteo, an advertising retargeting platform, has expanded its Go self-service ad tool with full access for small and midsize businesses. Criteo Go unifies display, video, native, and social within a single campaign environment. The platform optimizes spend, while built-in genAI creative tools produce and adapt ad formats, including video, to maintain consistent messaging across channels. Advertisers can create an account, enter billing details, and launch campaigns in as few as five clicks, according to Criteo.

Web page for Criteo Go

Criteo Go