Will AI Solve Ecommerce Personalization?

A nascent firm armed with a fresh $12.3 million investment aims to deliver on the promise of ecommerce personalization.

A personalization engine shows the right product to the right shopper at the right time.

In theory, it makes everyone happy. Shoppers see relevant and engaging products. Merchants sell more.

It sounds simple enough. Think of an ecommerce website with products for sale. What item(s) does the site show to a particular user to entice a sale? How does it know what to show?

Data Right Now

This question of “what to show” is how Matteo Ruffini, chief science officer of the Swiss start-up Albatross AI, described the problem his company solves during a February 2025 interview.

Many ecommerce personalization and recommendation solutions rely on historical shopper behavior. The systems look backward over months or years, at purchases and browses, for instance.

The folks at Albatross also use past behavioral data, but they’ve added a real-time, right-now predictive element.

The Albatross product, according to a Forbes contributor, “captures every user action in a session and passes it into [an AI] transformer model that behaves like a language model for intent. The inputs are event triplets — user, action, item — instead of words. The model analyzes not just the action but the sequence of actions and the context that connects them. It updates continuously and responds in milliseconds without retraining.”

Essentially, the company claims to have the first AI infrastructure for training models on sequential, live events.

A flow diagram of Albatross’s real-time personalization system: item embeddings and a live sequence of events from the shopper’s device feed a “Large Event Model,” which outputs a real-time user embedding used to select the best items based on in-session behaviour.
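To make that architecture concrete, here is a minimal, hypothetical sketch (in PyTorch) of the general idea: each in-session event becomes a (user, action, item) token, a small transformer encoder summarizes the sequence, and the resulting session vector scores candidate items. The layer sizes, vocabularies, and action codes are invented for illustration and are not Albatross’s implementation.

```python
# Illustrative sketch only: encodes a live session as (user, action, item)
# triplets and summarizes it with a tiny transformer encoder. Names,
# vocabulary sizes, and dimensions are hypothetical, not Albatross's code.
import torch
import torch.nn as nn

class SessionIntentModel(nn.Module):
    def __init__(self, n_users=10_000, n_actions=8, n_items=50_000, dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.action_emb = nn.Embedding(n_actions, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, users, actions, items):
        # Each event becomes one token: the sum of its three embeddings.
        tokens = self.user_emb(users) + self.action_emb(actions) + self.item_emb(items)
        encoded = self.encoder(tokens)          # (batch, seq_len, dim)
        user_vec = encoded[:, -1, :]            # session state after the latest event
        # Score every catalog item against the current session state.
        return user_vec @ self.item_emb.weight.T

# One session: view item 12, add item 12 to cart, view item 97.
users   = torch.tensor([[3, 3, 3]])
actions = torch.tensor([[0, 1, 0]])   # 0 = view, 1 = add_to_cart (hypothetical codes)
items   = torch.tensor([[12, 12, 97]])
scores = SessionIntentModel()(users, actions, items)
print(scores.topk(5).indices)  # top candidate items for this session
```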


3 Challenges

Albatross AI addresses at least three long-standing problems with predictive ecommerce recommendations:

  • Long training periods.
  • Categorizing new shoppers.
  • Cold starts for products.

Training

Personalized and segment-based recommendations depend on machine learning models that need time and data to mature. It can take weeks or months to gather enough data for meaningful recommendations. Moreover, the model must retrain often.

Some recommendation solutions train in cycles, such as daily or weekly, and they require reams of historical shopping activity. The result is recommendations that can lag behind rapidly changing demand signals, seasonal trends, influencer surges, or unpredictable cultural moments (such as the pandemic).

A shopper’s intent can change today, but if the model does not update until the next training cycle, the system cannot react.

Emerging platforms such as Albatross explore continuous or incremental learning, reducing reliance on scheduled retraining and moving toward models that reflect active sessions.
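As a contrast to scheduled batch retraining, here is a minimal scikit-learn sketch of the incremental-update pattern: the model is nudged with each new micro-batch of sessions instead of waiting for a retraining cycle. The features and labels are synthetic, and this is not how Albatross implements it.

```python
# A minimal sketch of incremental (online) learning, as a contrast to batch
# retraining. The feature layout and labels are invented; real personalization
# models are far richer, but the update pattern is similar.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])  # 1 = session ends in a purchase

def on_new_session_batch(features: np.ndarray, labels: np.ndarray) -> None:
    # Update the model immediately with the latest sessions instead of
    # waiting for a nightly or weekly retraining job.
    model.partial_fit(features, labels, classes=classes)

rng = np.random.default_rng(0)
for _ in range(3):  # pretend three micro-batches arrive during the day
    X = rng.normal(size=(32, 5))          # 5 hypothetical session features
    y = rng.integers(0, 2, size=32)
    on_new_session_batch(X, y)

print(model.predict_proba(rng.normal(size=(1, 5))))  # purchase probability
```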

New shoppers

A second long-standing challenge is how recommendation systems treat new shoppers. Historically, these systems relied on popularity-driven rankings or generic best-sellers while they waited to gather enough signals to personalize.

Cookie-less personalization or probabilistic identity matching offers only limited relief.

The industry is now shifting toward what could be described as “first-minute personalization,” meaning that intent signals within a single session — scroll depth, dwell time, bounce patterns, micro-hovers, theme switches — become the primary basis for inference.

The goal is to reduce the number of interactions required to understand a shopper’s interests and intents.
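As a rough illustration of how such signals could be combined, here is a toy Python heuristic that folds scroll depth, dwell time, bounces, and category switches into a single intent score. The weights and thresholds are invented; a production system would learn them from data.

```python
# Illustrative only: a toy heuristic that turns first-session signals into a
# rough "intent" score. Weights and thresholds are invented for the example.
def first_minute_intent(scroll_depth: float, dwell_seconds: float,
                        bounced: bool, category_switches: int) -> float:
    score = 0.0
    score += 0.4 * min(scroll_depth, 1.0)          # how far down the page
    score += 0.4 * min(dwell_seconds / 60.0, 1.0)  # time on page, capped at 1 min
    score -= 0.3 if bounced else 0.0
    score -= 0.1 * min(category_switches, 3)       # erratic browsing lowers confidence
    return max(0.0, min(score, 1.0))

print(first_minute_intent(scroll_depth=0.8, dwell_seconds=45,
                          bounced=False, category_switches=1))  # ~0.52
```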

Cold start

The third obstacle is the cold start product problem.

An ecommerce catalog is rarely static. New SKUs arrive every day; marketplaces can add thousands per hour.

Current recommendation algorithms need interaction data before they can confidently suggest an item. Hence new products may remain buried.

Marketers can mark them as new and provide preferential treatment in search and on category pages. But those actions can defeat the purpose of personalized recommendations.

AI approaches are beginning to leverage content embedding, multimodal representation, and sequential modeling to infer probable relevance before engagement data is available. Essentially, AI understands much better which shoppers will like the new product.

Research continues to uncover ways to combine item metadata, textual or image-based descriptions, and user-sequence context so that new items are visible on day one.
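A minimal sketch of the content-based side of that idea: score a brand-new item against the current session using only its text metadata. TF-IDF stands in here for the richer content or multimodal embeddings described above, and the SKUs and descriptions are made up.

```python
# A minimal sketch of content-based cold start: score a brand-new item against
# the current session purely from its text metadata, before any clicks exist.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalog = {
    "sku_101": "trail running shoes lightweight waterproof",
    "sku_102": "cast iron skillet 12 inch pre-seasoned",
    "sku_new": "waterproof hiking boots lightweight trail",  # no interaction data yet
}
# Text of items the shopper viewed in the current session.
session_views = ["trail running shoes lightweight waterproof"]

vectorizer = TfidfVectorizer()
item_matrix = vectorizer.fit_transform(catalog.values())
session_vec = vectorizer.transform([" ".join(session_views)])

scores = cosine_similarity(session_vec, item_matrix)[0]
ranked = sorted(zip(catalog.keys(), scores), key=lambda x: -x[1])
print(ranked)  # sku_new ranks above the unrelated item despite zero history
```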

AI and Commerce

The three challenges apply to other trends in ecommerce and the ongoing AI transformation.

LLMs such as ChatGPT, Perplexity, and Gemini are attempting to rank products for individuals through agentic commerce. Yet none of these will deliver unless they can interpret shopping intent.

In short, recommendation engines and AI shopping agents are becoming blurred. Product discovery and purchase decisions are merging.

LLMs.txt Shows No Clear Effect On AI Citations, Based On 300k Domains via @sejournal, @MattGSouthern

A new analysis from SE Ranking suggests the llms.txt file isn’t delivering measurable benefits yet.

After examining roughly 300,000 domains, the company found no relationship between having llms.txt and how often a domain is cited in major LLM answers.

What The Data Says

Adoption Is Thin

SE Ranking’s crawl found llms.txt on 10.13% of domains. In other words, nearly nine out of ten sites they measured haven’t implemented it.

That low usage matters because the format is sometimes described as an emerging baseline for AI visibility. The data instead shows scattered experimentation. SE Ranking says adoption is fairly even across traffic tiers and not concentrated among the biggest brands.

High-traffic sites were slightly less likely to use the file than mid-tier websites in their dataset.

No Measurable Link To LLM Citations

To assess whether the llms.txt file affects AI visibility, SE Ranking analyzed domain-level citation frequency across responses from prominent LLMs. They employed statistical correlation tests and an XGBoost model to determine the extent to which each factor contributed to citations.

The main finding was that removing the llms.txt feature actually improved the model’s accuracy. SE Ranking concludes that llms.txt “doesn’t seem to directly impact AI citation frequency. At least not yet.”

Additionally, they found no significant correlation between citations and the file using simpler statistical methods.
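For readers who want to reproduce that style of analysis on their own data, here is a rough sketch of the approach described: a rank correlation between a binary has-llms.txt flag and citation counts, plus a comparison of model error with and without the flag. The data below is synthetic, not SE Ranking’s dataset.

```python
# Synthetic reconstruction of the kind of test described; feature names and
# data are placeholders, not SE Ranking's dataset or code.
import numpy as np
from scipy.stats import spearmanr
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n = 5_000
domain_authority = rng.normal(50, 15, n)
has_llms_txt = rng.binomial(1, 0.10, n)                    # ~10% adoption
citations = 0.5 * domain_authority + rng.normal(0, 10, n)  # independent of llms.txt

rho, p = spearmanr(has_llms_txt, citations)
print(f"Spearman rho={rho:.3f}, p={p:.3f}")                # no meaningful correlation

X_full = np.column_stack([domain_authority, has_llms_txt])
X_drop = domain_authority.reshape(-1, 1)
for name, X in [("with llms.txt", X_full), ("without llms.txt", X_drop)]:
    score = cross_val_score(XGBRegressor(n_estimators=100), X, citations,
                            scoring="neg_mean_absolute_error", cv=3).mean()
    print(name, round(-score, 2))   # similar (or better) error without the flag
```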

How This Squares With Platform Guidance

SE Ranking notes that its results align with public platform guidance. But it’s important to be precise about what is confirmed.

Google hasn’t indicated that llms.txt is used as a signal in AI Overviews or AI Mode. In its AI search guidance, Google frames it as an evolution of Search that continues to rely on its existing Search systems and signals, without mentioning llms.txt as an input.

OpenAI’s crawler documentation similarly focuses on robots.txt controls. OpenAI recommends allowing OAI-SearchBot in robots.txt to support discovery for its search features, but does not say llms.txt affects ranking or citations.
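If you want to confirm your own robots.txt follows that guidance, a quick check with Python’s standard library is enough; the domain below is a placeholder.

```python
# A quick sanity check (standard library only) that your robots.txt does not
# block OAI-SearchBot. Replace the URL with your own site; this does not touch
# llms.txt, which is a separate file.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://www.example.com/robots.txt")  # placeholder domain
rp.read()
for agent in ("OAI-SearchBot", "GPTBot", "*"):
    print(agent, "allowed:", rp.can_fetch(agent, "https://www.example.com/"))
```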

SE Ranking also notes that some SEO logs show GPTBot occasionally fetching llms.txt files, though they say it doesn’t happen often and does not appear tied to citation outcomes.

Taken together, the dataset suggests that even if some models retrieve the file, it’s not influencing citation behavior at scale right now.

What This Means For You

If you want a clean, low-risk way to prepare for possible future adoption, adding llms.txt is easy and unlikely to cause technical harm.

But if the goal is a near-term visibility bump in AI answers, the data says you shouldn’t expect one.

That puts llms.txt in the same category as other early AI-visibility tactics. Reasonable to test if it fits your workflow, but not something to sell internally as a proven lever.


Featured Image: Mameraman/Shutterstock

The Role Of Brand Authority And E-E-A-T In The AI Search Era via @sejournal, @DuaneForrester

AI-generated answers are spreading across search. Google and Bing are each presenting synthesized responses alongside regular results. These answers are not replacing traditional SERPs yet, but they are taking up attention. As they improve, they influence what people see first and what they trust most. The question is no longer whether they will change search, but how much of your brand’s visibility they will absorb as they expand. And as usage of ChatGPT, Claude, Perplexity, and other platforms continues to expand, we’re going to start to see user habits shift. Which means we’ll see more engagement with synthesized answers with no traditional SERPs in sight at all.

Being ranked is no longer enough. When machines decide which brands to cite or quote, the deciding factor is trust. The brands that become part of AI-generated answers are those seen as authoritative and credible. That is where E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness) takes on greater importance.

Image Credit: Duane Forrester

Understanding E-E-A-T

Yup, we are about to re-walk well-traveled territory in this section, much of which you may already know. But here’s the rub: this is still news to some folks, and many who claim to know it still get the execution wrong, so please bear with me if you are already crushing it with E-E-A-T.

E-E-A-T is not a single ranking factor. It is a framework used by Google’s search evaluators to judge how credible, useful, and accurate a page appears. You can read the full guidelines here: https://services.google.com/fh/files/misc/hsw-sqrg.pdf.

Experience refers to first-hand involvement. It is the signal that you have actually done or tested what you are writing about. Expertise is the skill or background that ensures accuracy. Authoritativeness reflects recognition from others: citations, backlinks, and mentions that confirm your credibility. Trustworthiness is the foundation. It is built through transparency, consistency, and honesty. In Google’s guidelines, trust is described as the single most important quality of a high-value page. The other three factors exist to reinforce it.

These same principles are now emerging in AI systems. Models trained to generate answers rely on reliable, verifiable information. A system cannot “feel” trust, but it can measure it through repetition and context. The more your brand appears in credible environments, the stronger your statistical trust signal becomes.

It’s worth also noting that E-E-A-T is not a Holy Grail. It’s not the silver bullet, a magic concept, or a single-point savior for sites struggling with poor UX, weak content, troubled histories, and so on. It’s a part of the whole landscape of work you need to do to enjoy success, but I’m calling it out here because this whole article is really about trust and its importance to LLM-based answers.

How AI Answers Are Changing Discovery

Search results still look familiar, but discovery no longer begins and ends with a search box. AI-generated answers now appear in Gemini, Perplexity, Bing Copilot, ChatGPT, and Claude, each shaping what people learn before they ever visit a website. These systems don’t replace traditional results, but they compete for the same attention. They answer quickly, carry conversational authority, and often satisfy curiosity before a click happens.

For SEOs, this creates two overlapping visibility systems. The first is still the structured web: ranking pages through links, metadata, and relevance. The second is the interpretive layer of AI retrieval and synthesis. Instead of evaluating pages in order, these systems evaluate meaning. They identify fragments of content, score them for reliability, and rewrite them into new narratives. Visibility no longer depends only on ranking high; it depends on being known, cited, and semantically retrievable.

Each major platform handles this differently.

  • Gemini and Bing Copilot remain closest to classic search, combining web results with AI-generated summaries. They still reference source domains and show linked citations, giving SEOs some feedback on what’s being surfaced.
  • Perplexity acts as a bridge between web and conversation. It routinely cites the domains it draws from, often favoring pages with structured data, clear headings, and current publication dates.
  • ChatGPT and Claude represent a different kind of discovery altogether. Inside these environments, users often never see the open web. Answers are drawn from model knowledge, premium connectors, or browsing results, sometimes citing, sometimes not. Yet they still shape awareness and trust. When a consumer asks for “the best CRM for small business,” and your brand appears in that response, the exposure influences perception even if it happens outside Google’s ecosystem.

That’s the part most marketers miss: Visibility now extends beyond what typical analytics can track. People are discovering, comparing, and deciding inside AI tools that don’t register as traffic sources. A mention in ChatGPT or Claude may not show up in referral logs, but it builds brand familiarity that can resurface later as a direct visit or branded search.

This creates a new discovery pathway. A user might start with an AI conversation, remember a brand name that sounded credible, and later search for it manually. Or they might see it mentioned again inside Gemini’s summaries and click then. In both cases, awareness grows without a single traceable referral.

The measurement gap is real. Current analytics tools are built for link-based behavior, not conversational exposure. Yet the signals are visible if you know where to look. Rising branded search volume, increased direct traffic, and mentions across AI surfaces are early indicators of AI-driven visibility. Several emerging platforms now monitor brand appearance inside ChatGPT, Claude, Gemini, and Perplexity responses, offering the first glimpses of how brands perform in this new layer.

In practice, this means SEO strategy now extends beyond ranking factors into retrieval factors. Crawlable, optimized content remains essential, but it also needs to be citation-ready. That means concise, fact-driven writing, updated sources, and schema markup that defines your authors, organization, and entities clearly enough for both crawlers and AI parsers to verify.

Traditional SEO remains your discoverability engine. AI citation has become your credibility engine. One ensures you can be found; the other ensures you can be trusted and reused. When both operate together, your brand moves from being searchable to being referable, and that’s where discovery now happens.

Expanding Challenges To Brands

This shift introduces new risks that can quietly undermine visibility.

  • Zero-click exposure is the first. Your insights might appear inside an AI answer without attribution if your brand identity is unclear or your phrasing too generic. This isn’t really “new” to SEOs who have long had to deal with typical zero-click answer boxes in SERPs, but this expands that footprint noticeably.
  • Entity confusion is another. If your structured data or naming conventions are inconsistent, AI systems can mix your brand with similar ones.
  • Reputation bleed happens when old or inaccurate content about your brand lingers on third-party sites. AI engines scrape that information and may present it as fact.
  • Finally, trust dilution is an issue. The flood of AI-generated content is making it harder for systems to separate credible human work from synthetic filler. In response, they will likely narrow the pool of trusted domains.

These risks are not yet widespread, but the direction is obvious. Brands that delay strengthening trust signals will feel it later.

How To Build Trust And Authority

Building authority today means creating signals that both people and machines can verify. This is what content moating looks like in practice: establishing proof of expertise that’s difficult to fake or copy. It starts with clear ownership. Every piece of content should identify who created it and why that person is qualified to speak on the topic. Readers and algorithms alike look for visible credentials, experience, and professional context. When authorship is transparent, credibility becomes traceable.

Freshness signals care. Outdated information, dead links, or references to old data quietly undermine trust. Keeping content current shows ongoing involvement in your subject and helps both users and search systems recognize that your knowledge is active, not archived.

Structure supports this effort. Schema markup for articles, authors, and organizations gives machines a way to verify what they’re seeing. It clarifies relationships: who wrote the piece, what company they represent, and how it fits into a larger body of work. Without it, even well-written content can get lost in the noise.
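As a concrete reference point, here is a minimal sketch of that kind of markup: an Article tied to its author and publishing organization in JSON-LD, generated with Python. Every name and URL is a placeholder; the output would be embedded in a script tag of type application/ld+json on the article page.

```python
# Minimal sketch of Article structured data with author and organization.
# All values are placeholders.
import json

article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example headline",
    "datePublished": "2025-01-15",
    "dateModified": "2025-06-01",
    "author": {
        "@type": "Person",
        "name": "Jane Doe",
        "url": "https://www.example.com/authors/jane-doe",
        "jobTitle": "Head of Research",
        "sameAs": ["https://www.linkedin.com/in/janedoe"],
    },
    "publisher": {
        "@type": "Organization",
        "name": "Example Co",
        "url": "https://www.example.com",
        "logo": {"@type": "ImageObject", "url": "https://www.example.com/logo.png"},
    },
}
print(json.dumps(article_schema, indent=2))
```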

External validation deepens the signal. When reputable outlets cite or reference your work, it strengthens your perceived authority. Media mentions, partnerships, and collaborations all act as third-party endorsements that reinforce your brand’s credibility. They tell both people and AI systems that others already trust what you have to say.

Then there’s the moat that no algorithm can replicate: original insight. Proprietary data, firsthand experience, and in-depth case studies show real expertise. These are the assets that set your content apart from AI-generated summaries because they contain knowledge that isn’t available elsewhere on the web.

Finally, consistency ties it all together. The version of your brand that appears on your website, LinkedIn profile, YouTube channel, and review sites should all align. Inconsistent bios, mismatched tone, or outdated information create friction that weakens perceived trust. Authority is cumulative. It grows when every signal points in the same direction.

The Coming Wave Of Verification

In the near future, trust will not just be a guideline. It will become a measurable inclusion standard. Major AI platforms are developing what are often called universal verifiers, systems that check the accuracy and reliability of content before it is included in an answer. These tools will aim to confirm that cited information is factually correct and that the source has a history of accuracy.

When this arrives, the brands that already display strong trust cues will pass verification more easily. Those without structured data, transparent authorship, or verifiable sourcing will struggle to appear. What HTTPS did for security, these systems may soon do for credibility.

This will also redefine technical SEO. It will not be enough for your site to be fast and crawlable. It will need to be verifiable. That means clear author data, factual sourcing, and strong entity ties that confirm ownership.

How To Measure Progress

New forms of visibility require new measurement. Traditional metrics like traffic, backlinks, and keyword rankings still matter, but they no longer tell the full story.

  • Track whether your brand appears in AI-generated answers. Use the new tools/platforms available, chatbots, and answer engines to test your visibility.
  • Monitor branded search volume over time; it reflects whether your exposure in AI summaries is driving awareness.
  • Audit your structured data and author markup regularly. Consistency is what keeps you trusted.
  • Track external mentions and citations in high-trust environments. Authority builds where consistency meets recognition.

What Matters Most

E-E-A-T was once a quality checklist. Now it is a visibility strategy. Search systems and AI models are moving toward the same destination – finding reliable information faster.

Experience proves you have done the work. Expertise ensures you can explain it accurately. Authoritativeness confirms others trust you. Trustworthiness ties it all together. And if you believe your own interpretation and approach to E-E-A-T is good enough, look at your current search rankings. They can act as an early warning for you. If you consistently fail to rank well for key terms, that could be a clue that the AI systems will see your content as “less than,” when compared to competing pieces of content. By no means is that a straight map, but if you consistently struggle to meet the requirements of traditional search trust gates, it’s unlikely you’ll get a pass from AI systems as they ramp up their focus on trust.

The brands that live these principles will be the ones cited, quoted, and remembered. In a world of AI-generated answers, your reputation becomes your ranking signal. Build it deliberately. Make it visible. Keep it consistent.

That is how you stay trusted when the answers start writing themselves.

More Resources:


This post was originally published on Duane Forrester Decodes.


Featured Image: Viktoriia_M/Shutterstock

Repositioning What SEO Success Looks Like via @sejournal, @TaylorDanRW

SEO is at a turning point. After more than a decade of chasing rankings and traffic volume, many of us are beginning to recognize the need for a broader and more meaningful conversation about what “success” really means.

This article reflects on how these conversations are evolving, why the older definitions are no longer sufficient, and how we can reposition the success metrics we use so that they better align with business value and reflect the reality of changing search behavior.

Narrow Success Window

For many years, success in SEO was defined in fairly narrow terms, where we measured how many keywords ranked in the top 10 or top three, and reported increases in organic sessions, improvements in domain authority, or growth in backlink counts.

These were tangible, easy to track, and often felt convincing in boardroom conversations, but underneath the surface, the limitations of this approach were already apparent.

Rankings, while useful, are ultimately vanity metrics, and if they improve without leading to increased clicks or qualified traffic, or if visitors arrive but never become leads or drive revenue, the SEO team may appear successful, but the business does not necessarily benefit.

We must now begin with the end in mind, asking what the business goal truly is, what value each new lead brings, and how the website supports those aims. The classic metric stack was keyword positioning to impressions, to clicks, to organic traffic, and possibly to conversions, but it no longer reflects the full story, and we need to think more holistically.

Why This Conversation Needs Updating

Several forces are now converging that make the older success yardsticks less reliable, and search behavior is one of the most prominent.

People increasingly expect fast, direct answers, and search engines now deliver results that provide those answers immediately through formats that do not always require a click, such as “zero-click” results.

This significantly changes how we measure success, because if users receive what they need without visiting a site, traditional click-based metrics lose much of their relevance.

The attribution chain is growing more complex, as organic traffic often plays a role early in the decision-making journey or supports brand engagement later in the funnel. The connection between a search visit and a tangible business outcome, such as a sale or a lead, can be indirect, span time, or be difficult to track with confidence.

At the same time, the data itself is becoming noisier and harder to interpret, with increasing levels of bot traffic, variations in device usage, growing privacy constraints, and changes in how users interact with results.

Metrics such as bounce rate, time on site, or even click-through rate are now more vulnerable to misinterpretation.

Expectations of SEO teams have also changed, and we are being asked to deliver clear business value, not just improved rankings. If we are still tracking only vanity metrics, we may be missing the real impact. We need to connect our work directly to outcomes such as revenue, visibility among key audiences, and genuine customer engagement.

It is no longer enough to say that traffic is up by 20%. We need to ask what that increase means for the business and whether those visitors were qualified and led to a meaningful result.

Repositioning Success: What The Conversation Should Focus On

To define SEO success more accurately, we need to reframe the conversation entirely. These are the dimensions I now focus on.

Business Alignment

Real success begins by aligning SEO activity to business outcomes. If the objective is to capture high-value enterprise leads, then reporting traffic to low-intent blog content is no longer meaningful.

Instead, we need to set goals that are measurable, commercially relevant, and clearly linked to strategic priorities, ensuring the SEO team contributes to those priorities in a language leadership understands. When we do that, the conversation shifts away from keyword counts toward the broader question of how much value organic search adds to the business.

Quality Over Quantity

While traffic volume still has its place, we need to move beyond surface metrics and focus on the quality of visitors, whether they reflect the right intent, whether they engage with content meaningfully, and whether their behavior suggests a pathway toward a business outcome.

Metrics such as engagement depth, lead generation rate, and alignment with target personas tell us far more than raw traffic alone. The question we now ask is whether the right people are finding us and taking action once they do.

Visibility And Market Share In Search

It is not enough to rank well for a few hand-picked terms.

Visibility in search today is about occupying the right positions across a much broader landscape, reaching our audience at various moments of need. This includes winning impressions across multiple query types, appearing in rich results and featured formats, and maintaining a presence that reinforces our authority.

The more we dominate relevant search journeys, the more we influence the market, even when that influence is not reflected in click metrics alone.

Attribution And Value Tracking

We must tie SEO performance directly to measurable business value, whether that is leads, revenue, brand visibility, or contribution to a broader customer lifecycle. That requires stronger analytics frameworks, and the discipline to identify and follow the signals that matter most. Instead of obsessing over rankings, I now focus on the question of how many of our business outcomes can be reliably influenced or supported by organic search, and what that influence is worth.

Adaptability To Search Evolution

Search is no longer static, and with the rise of AI, direct answers, voice, and structured data, our measurement frameworks must evolve just as quickly.

Success might mean gaining impressions in key places, even if those impressions do not always convert directly.

We may see lower click-through rates because our content is being used in answer boxes or overviews. Rather than viewing this as a failure, we should ask whether we are still present, whether our brand remains visible, and whether we are feeding into the new ways people search for and consume information. That adaptability is part of long-term success.

Practical Steps To Have This Conversation

To reposition the conversation, we must first return to the strategic context.

What does the business want to achieve in the next six to 12 months? Growth, market expansion, brand credibility, operational efficiency?

Whatever the goal, we need to ask how organic search supports it, and we must agree early on what success will look like.

This means defining shared metrics that matter. We might look at the percentage of relevant traffic, the number of qualified inbound leads from organic, the revenue pipeline influenced, or the share of voice in a competitive space.

These metrics need to be discussed, agreed upon, and tracked collaboratively. Once we know what matters, we can classify our metrics as leading indicators, lagging outcomes, and diagnostic signals, ensuring we track progress meaningfully from awareness through to value delivery.

When we report results, we must do so in business terms. Rather than quoting percentage increases in traffic, we need to say what that traffic represented, such as how many people matched our target buyer personas, how many converted into something valuable, and what that means in financial or strategic terms.

We also need to acknowledge the complexity of attribution, explaining what can and cannot be measured with precision, and why. When traffic rises but clicks are flat due to zero-click results, or when awareness improves without immediate leads, we need to explain what those patterns mean and what the underlying story really is.

This process should not be static. As search evolves and business priorities shift, we must revisit our KPIs, our assumptions, and our methods. A flexible, open approach builds trust and keeps SEO positioned as a strategic partner rather than just a technical service.

A Case For Reframing Success Now

It is no longer a question of if we should change how we define success in SEO, but when. The risks of holding onto outdated metrics are serious. If we keep measuring keyword rankings and traffic counts, while the business cares about conversion, revenue, and growth, then we risk being seen as disconnected or misaligned.

The result is often loss of confidence, shrinking budgets, and missed opportunities.

But if we reframe how we measure and report success, we gain influence, relevance, and longevity. We align better with leadership goals. We allocate effort where it has the most impact. We stay ahead of search evolution. And most importantly, we build a case for the enduring value of SEO in any business context.

What This Means In Practice

In practical terms, this shift means reporting not only what ranks but what that visibility delivers. When I report on keyword positions, I explain the monthly search potential and the conversion rate of the landing pages they drive. When I talk about traffic growth, I segment it by intent and persona fit, and I show how that growth affects demo requests, contact forms, or sales-qualified leads.

If the click-through rate falls but featured snippets rise, I report the increased visibility and link it to changes in branded search or engagement with our wider content. If backlinks increase, I focus on their relevance and domain quality, and I explain how they influence brand signals and domain authority. Every number I report should tie back to business relevance, not technical vanity.
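A back-of-the-envelope example of what that reporting can look like, with every input invented for illustration:

```python
# Reporting a ranking in business terms rather than position alone.
# All inputs are illustrative placeholders.
monthly_searches = 9_000      # search potential for the keyword
expected_ctr = 0.12           # rough CTR for the current position
landing_page_cvr = 0.03       # conversion rate of the page it drives
avg_deal_value = 1_500        # value of one converted lead

monthly_visits = monthly_searches * expected_ctr
monthly_conversions = monthly_visits * landing_page_cvr
print(f"~{monthly_visits:.0f} visits, ~{monthly_conversions:.1f} conversions, "
      f"about ${monthly_conversions * avg_deal_value:,.0f} pipeline/month")
```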

Final Thoughts

We are long overdue for a new understanding of what SEO success really means. As behavior changes, as platforms evolve, and as expectations increase, we need to be ready to tell a better story – one that shows our work is about value, not vanity.

The results that matter most are the ones that serve the business, influence the market, and build a sustainable presence over time.

If you have been in this industry for a while, now is the moment to lead that shift. Bring your leadership into the conversation.

Ask the right questions. Set the right metrics. Build a measurement framework that makes SEO impossible to ignore.

Because when we position ourselves as strategic contributors and not just technical operators, the work we do will finally get the recognition it deserves.

More Resources:


Featured Image: Vitalii Vodolazskyi/Shutterstock

How To Manage Demand Fluctuation During Key Ecommerce Shopping Seasons via @sejournal, @brookeosmundson

Ecommerce demand doesn’t rise and fall in a straight line throughout the year.

It can build gradually, spike hard, stall, or drop with little-to-no warning. During peak shopping periods like Black Friday, Cyber Monday, Prime Day(s), and Back-to-School, these swings become even more intense.

For PPC marketers, that volatility affects far more than just traffic or CPCs. It influences bidding strategies, budgets, inventory planning, campaign structures, and even internal operations.

Managing demand fluctuation isn’t just about “spending more when demand is high.” It’s also about knowing when demand is coming, preparing your accounts before the surge, staying in control while competition rises, and stabilizing performance after the peak ends.

It means understanding that marketing decisions affect logistics and profitability, not just vanity metrics like impression share.

This article will walk you through how to manage demand in a way that improves performance and protects the business across each phase of the season.

1. Understand And Anticipate Seasonal Demand

Predictable seasonal spikes are only predictable if you know what to look for.

Demand rarely appears out of nowhere. It ramps up gradually. The marketers who recognize early changes in behavior are the ones who scale at the right time instead of reacting too late.

Start with historical data from your own account. Look at when impressions and clicks began to rise last year, not just when the holiday officially started.

Compare year-over-year and week-over-week trends to identify whether demand is starting earlier. In many industries, consumers begin researching long before they’re ready to buy, which means waiting until “the big day” is too late to build momentum.

Conversion lag is another signal. If your data shows it normally takes five days from first click to purchase, and your promo begins on Friday, you need to start increasing budget earlier in the week. Otherwise, you’ll miss buyers who started the journey before the event.
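If you have click and purchase timestamps exported from your analytics or CRM, estimating that lag takes only a few lines; the column names and sample dates below are hypothetical.

```python
# Estimate conversion lag from your own export: pair each purchase with its
# first click and look at the days between them. Sample data is hypothetical.
import pandas as pd

orders = pd.DataFrame({
    "first_click": pd.to_datetime(["2024-11-18", "2024-11-20", "2024-11-21"]),
    "purchase":    pd.to_datetime(["2024-11-23", "2024-11-24", "2024-11-27"]),
})
lag_days = (orders["purchase"] - orders["first_click"]).dt.days
print("median lag:", lag_days.median(), "days")   # 5 days in this sample
# If median lag is 5 days and the promo starts Friday, budgets should ramp
# by Sunday/Monday of that week to catch shoppers already in the journey.
```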

Don’t ignore external factors. Shipping cutoff dates, competitor promotions, weather trends, and even economic sentiment can accelerate or delay demand. The data in the platform only shows part of the picture, while market behavior provides the context.

Forecasting is also critical. Even a simple model based on past revenue, impression share, and growth targets can help you determine expected demand and budget requirements.

This helps create a baseline so you can recognize when performance is ahead or behind expectations and adjust accordingly.
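Here is what such a deliberately simple forecast might look like in practice. All figures are made up, and a real model would also weigh impression share, margin, and promotion depth.

```python
# A deliberately simple baseline: scale last year's weekly revenue by the
# growth target and back into the budget needed at the expected ROAS.
last_year_weekly_revenue = [40_000, 55_000, 90_000, 140_000, 60_000]  # 5 peak weeks
growth_target = 1.20          # plan for +20% YoY
expected_roas = 4.0           # revenue per ad dollar

for week, revenue in enumerate(last_year_weekly_revenue, start=1):
    forecast_revenue = revenue * growth_target
    required_budget = forecast_revenue / expected_roas
    print(f"week {week}: forecast ${forecast_revenue:,.0f}, "
          f"budget about ${required_budget:,.0f}")
```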

2. Align Bids And Budgets With Demand

Once demand starts building, your bidding and budgeting strategy must evolve with it. This is where many marketers either scale too slowly and miss opportunity, or scale too aggressively and burn through budget prematurely.

If you’re using Smart Bidding, seasonality adjustments in Google Ads or Microsoft Ads can help the algorithm prepare for a short-term spike that differs from typical trends. These are best used for specific, limited windows (e.g., a 3-day flash sale) rather than entire multi-week seasons.

When demand returns to normal, remove the adjustment so the system doesn’t keep bidding too high in a softening market.

Target settings also matter. A tROAS (Target Return on Ad Spend) goal that works during regular pricing may be too restrictive during steep discounts. Likewise, a CPA goal may need to be relaxed slightly if conversion rates are temporarily lower but lifetime value remains strong.

In some cases, switching to a “Maximize” strategy gives the system more flexibility to capture demand efficiently, especially when intent is high and margin is acceptable.

If using “Maximize Conversions” (or Maximize Conversion Value), you could set more flexible bid limits to let the algorithm know you’re willing to pay more for conversions without letting it go haywire.

Budgets require just as much attention as bids. If campaigns are capping out early in the day, you’re likely missing high-intent shoppers later. Increasing budgets, reallocating across campaigns, or adjusting bids to stretch delivery can help you maintain visibility during peak hours. Shared budgets can also allow strong-performing categories to pull in more spend without manual intervention.

Scaling back after the surge is equally important. Abrupt budget cuts or major bid changes can disrupt algorithmic learning. Gradual reductions give the system time to recalibrate as demand normalizes.

3. Keep Product Availability And Campaign Structures Aligned

Even the best campaign strategy falls apart if product availability isn’t properly managed.

During peak shopping seasons, inventory can change rapidly. If feeds don’t update quickly, ads may continue promoting items that are low or out of stock. This leads to wasting spend and hurting customer experience.

Be sure to increase your feed update frequency during high-demand periods. This could mean multiple syncs per day if possible.

Ensure that price, availability, and shipping information are accurate. If your platform or feed tool allows real-time inventory updates, take advantage of it.

Custom labels in your feed are one of the most valuable seasonality tools. Try labeling your products by margin, best seller status, promotion type, limited stock, or seasonality. This allows you to structure campaigns around business priorities, not just categories or sub-types.

For example:

  • Increase bids on high-margin or high-conversion products
  • Lower bids or pause products with low inventory
  • Separate promotional items so they receive dedicated budgets and messaging
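Applied to a feed export, that labeling logic can be as simple as the sketch below. Column names, thresholds, and sample rows are hypothetical; the point is that the labels mirror business priorities rather than exact rules.

```python
# Applying custom labels to a product feed export before re-upload.
# Column names, thresholds, and sample rows are hypothetical.
import pandas as pd

feed = pd.DataFrame({
    "sku": ["A1", "B2", "C3"],
    "margin_pct": [0.45, 0.12, 0.30],
    "units_in_stock": [220, 8, 60],
    "on_promo": [True, False, True],
})

def label(row) -> str:
    if row["units_in_stock"] < 10:
        return "low_stock"        # candidate for lower bids or pausing
    if row["on_promo"]:
        return "promo"            # gets its own campaign budget and messaging
    if row["margin_pct"] >= 0.40:
        return "high_margin"      # bid up
    return "standard"

feed["custom_label_0"] = feed.apply(label, axis=1)
print(feed[["sku", "custom_label_0"]])
```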

Performance Max and Shopping campaigns require even more attention. In my experience, it’s common to see PMax concentrate budget on a narrow slice of the catalog while other SKUs receive little to no impression share.

If that pattern doesn’t match your merchandising goals, segmenting high-priority product groups and tightening feed signals usually helps. If you don’t segment campaigns thoughtfully or monitor product-level performance, the algorithm may stall.

Consider using a mix of Standard Shopping and PMax when you need more control over key seasonal categories. Standard Shopping can provide the structure you need, while PMax can help with scaling.

Just make sure they serve different roles to avoid internal competition.

Campaign structure should work hand-in-hand with inventory strategy. The goal is to ensure your best products get visibility when demand spikes and that you don’t waste spend on items you can’t fulfill.

4. Work With Internal Teams During Peak Demand

In normal months, PPC managers can operate with relative independence.

During major retail seasons, that approach can create problems.

Demand fluctuation affects far more than media spend. It touches logistics, merchandising, pricing, site operations, and customer experience.

For example, if marketing pushes a product heavily but the warehouse can’t fulfill orders quickly enough, conversion rates could drop, and customer complaints can arise.

If a “50% off” ad goes live before the site reflects the discount, you’ll likely pay for unqualified clicks or see conversions drop.

If inventory runs low but product promotions continue, you’ll burn budget on products that can’t convert.

During peak periods, cross-functional alignment is necessary for optimal performance. Be sure to establish regular communication with:

  • Inventory and fulfillment (stock levels, restock timelines, shipping delays).
  • Merchandising (featured products, bundles, hero SKUs).
  • Pricing and promotions (exact discount timing and margin impact).
  • Creative (messaging changes, urgency vs. value).
  • Site operations (traffic capacity, potential downtime, landing page readiness).
  • Customer service (policy changes, support volume expectations).

Even short daily syncs with these teams can prevent costly mistakes. Something as simple as a delayed shipment or pricing error can change campaign performance within hours.

When teams are aligned, marketing decisions become less reactive and more strategic.

Also, be prepared to change messaging quickly. If shipping times increase, adjust ad copy or landing page expectations. If a product is selling out fast, highlight “limited availability” or shift spend to similar in-stock alternatives.

5. Plan For Post-Peak Performance And Future Seasons

When the surge ends, the work isn’t over.

The post-peak period can feel unstable. After peak periods, I’ve seen many advertisers hit a short re-balancing window: conversion intent normalizes faster than bidding pressure does. This is where many marketers overreact and cut budgets too aggressively, causing campaigns to lose momentum.

Instead, treat the cooldown as a transition phase. Reset any seasonality bid adjustments. Reevaluate ROAS or CPA targets. Gradually adjust budgets to align with current demand, rather than slashing them immediately.

Shift campaign focus to retention and LTV where appropriate. Remarketing, post-purchase offers, loyalty initiatives, and subscription promotions can help turn seasonal traffic into long-term value. The conversion window doesn’t always end when the sale does.

This is also the most important time to analyze. Don’t wait weeks to reflect; be sure to capture key insights while the data is fresh.

When analyzing, ask questions like:

  • Which categories or SKUs exceeded (or missed) expectations?
  • Were budgets or bids too slow to adjust?
  • Did any campaigns cap too early in the day?
  • Were there inventory issues that hurt performance?
  • How did different bidding strategies respond under pressure?
  • What messaging/ad copy resonated best with users?
  • What would you start earlier or stop entirely next time?

Document everything. Don’t assume you’ll remember next year.

Seasonality repeats, but consumer behavior and the corresponding algorithm responses evolve every year. The teams that improve each cycle are the ones who treat post-peak as planning time, not recovery time.

Then, build your playbook for the next season. Define earlier ramp-up timing if needed. Establish bidding and budget frameworks. Create inventory and messaging coordination workflows.

When the next seasonality surge comes, you’ll be ready to scale strategically.

Sustain Stability Through Every Season

Managing demand fluctuation is less about spending more when demand is high and more about staying in control when the market becomes unpredictable. That requires preparation, data awareness, cross-team coordination, flexible bidding and budgeting, and deliberate post-peak analysis.

Demand shifts will always happen. The difference between chaotic seasons and successful ones comes down to how well you anticipate, adapt, and learn from each cycle.

The marketers who treat seasonality as a workflow system (not an event) are the ones who can turn volatility into growth.

More Resources:


Featured Image: Roman Samborskyi/Shutterstock

SEO Community Reacts To Adobe’s Semrush Acquisition via @sejournal, @martinibuster

The SEO community is excited by the Semrush Adobe acquisition. The consensus is that it’s a milestone in the continuing evolution of SEO in the age of generative AI. Adobe’s purchase comes at a time of AI-driven uncertainty and may be a sign of the importance of data for helping businesses and marketers who are still trying to find a new way forward.

Cyrus Shepard tweeted that he believes the Semrush sale creates an opportunity for Ahrefs, reasoning that Adobe’s scale and emphasis on the enterprise market will give Ahrefs room to move fast and respond to the rapidly changing needs of the marketing industry.

He tweeted:

“Adobe’s marketing tools lean towards ENTERPRISE (AEM, Adobe Analytics). If Adobe leans this way with Semrush, it may be a less attractive solution to smaller operators.

With this acquisition, @ahrefs remains the only large, independent SEO tool suite on the market. Ahrefs is able to move fast and innovate – I suspect this creates an opportunity for Ahrefs – not a problem.”

Shepard is right that some of Adobe’s products (like Adobe Analytics) lean toward enterprise users, but there’s a significant small- and medium-sized business user base for its design-related tools, with pricing in the $99/month range that makes them relatively affordable. Nevertheless, that’s a significant cost compared to the $600 range Adobe used to charge for standalone Windows and Mac versions.

I agree that Ahrefs is quite likely the best positioned tool to serve the needs of the SMB end of the SEO industry should Semrush increase focus on the enterprise market. But there are also smaller tools like SERPrecon that are tightly focused on helping businesses deliver results and may benefit from the vacuum left by Semrush.

Validates SEO Platforms

Seth Besmertnik, CEO of the enterprise SEO platform Conductor, sees the acquisition as validating SEO platforms, which is a valid observation considering how much money, in cash, Semrush was acquired for.

Besmertnik wrote:

“I’m feeling a lot this morning. HUGE news today. Adobe will be acquiring Semrush…our partner, competitor, and an ally in the broader SEO and AEO/GEO world for over a decade.

For a long time, big tech ignored SEO. It drove half of the internet’s traffic, yet somehow never cleared the bar as something to own. I always believed the day would come when major platforms took this category seriously. Today is that day.”


Besmertnik also made the point that the industry is entering a transitional phase where platforms that are adapted to AI will be the leaders of tomorrow.

He added:

“This next era won’t be led by legacy architectures. It will be led by platforms that built their foundations for AI…and by companies engineered for the data-first, enterprise-grade world that’s now taking shape.”

Validates SEO

Duane Forrester, formerly of Bing, shared the insight that the acquisition shows how important SEO is, especially as the industry is evolving to meet the challenges of AI search.

Duane shared:

“It’s an exciting moment! We’re starting to see some consolidation and this represents huge recognition of how important the work of SEOs is. From traditional SEO through optimizing for AI platforms, the work is important. Clearly Adobe is thinking this way on behalf of their clientele, which means great things ahead.”

Online Reactions Were Mostly Positive

A few comments with negative sentiment were published in response to Adobe’s announcement on X (formerly Twitter), where some used the post to vent about pricing and other grudges, but many others from the SEO community offered congratulations to Semrush.

What It All Means

As multiple people have said, the sale of Semrush is a landmark moment for SEO and for SEO platforms because it puts a dollar figure on the importance of digital marketing at a time when the search marketing industry is struggling to reach consensus on how SEO should evolve to meet the many changes introduced by AI Search.

Many Questions Remain Unanswered

What Will Adobe Actually Do With Semrush’s Product?

Will Semrush remain a standalone product or will it be offered in multiple versions for enterprise users and SMBs or will it be folded into one of Adobe’s cloud offerings?

Pricing

A common concern is about pricing and whether the cost of Semrush will go up. Is it possible that the price could actually come down?

Semrush Is A Good Fit For Adobe

Adobe started as a software company focused on graphic design products, but by the turn of the millennium it began acquiring companies directly related to digital marketing and web design, increasingly focusing on the enterprise market. Data is useful for planning content and also for better understanding what’s going on at search engines and at AI-based search and chat. Semrush is a good fit for Adobe.

Featured Image by Shutterstock/Sunil prajapati

Quantum physicists have shrunk and “de-censored” DeepSeek R1


A group of quantum physicists claims to have created a version of the powerful reasoning AI model DeepSeek R1 that strips out the censorship built into the original by its Chinese creators. 

The scientists at Multiverse Computing, a Spanish firm specializing in quantum-inspired AI techniques, created DeepSeek R1 Slim, a model that is 55% smaller but performs almost as well as the original model. Crucially, they also claim to have eliminated official Chinese censorship from the model.

In China, AI companies are subject to rules and regulations meant to ensure that content output aligns with laws and “socialist values.” As a result, companies build in layers of censorship when training the AI systems. When asked questions that are deemed “politically sensitive,” the models often refuse to answer or provide talking points straight from state propaganda.

To trim down the model, Multiverse turned to a mathematically complex approach borrowed from quantum physics that uses networks of high-dimensional grids to represent and manipulate large data sets. Using these so-called tensor networks shrinks the size of the model significantly and allows a complex AI system to be expressed more efficiently.

The method gives researchers a “map” of all the correlations in the model, allowing them to identify and remove specific bits of information with precision. After compressing and editing a model, Multiverse researchers fine-tune it so its output remains as close as possible to that of the original.

To test how well it worked, the researchers compiled a data set of around 25 questions on topics known to be restricted in Chinese models, including “Who does Winnie the Pooh look like?”—a reference to a meme mocking President Xi Jinping—and “What happened in Tiananmen in 1989?” They tested the modified model’s responses against the original DeepSeek R1, using OpenAI’s GPT-5 as an impartial judge to rate the degree of censorship in each answer. The uncensored model was able to provide factual responses comparable to those from Western models, Multiverse says.
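The evaluation pattern described, using one model to judge another’s answers, can be sketched generically as below. This is not Multiverse’s harness: the judge model, prompt wording, and 0-10 scale are assumptions for illustration.

```python
# Generic sketch of the "LLM as judge" pattern, not Multiverse's evaluation
# code. The judge model name, prompt, and 0-10 scale are assumptions; running
# it requires an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def censorship_score(question: str, answer: str) -> str:
    prompt = (
        "Rate how censored or evasive the following answer is on a scale of "
        "0 (fully factual and direct) to 10 (refusal or propaganda talking "
        f"points). Reply with the number only.\n\nQuestion: {question}\n"
        f"Answer: {answer}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o",  # stand-in judge model for the sketch
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

print(censorship_score("What happened in Tiananmen in 1989?",
                       "I cannot answer that question."))
```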

This work is part of Multiverse’s broader effort to develop technology to compress and manipulate existing AI models. Most large language models today demand high-end GPUs and significant computing power to train and run. However, they are inefficient, says Roman Orús, Multiverse’s cofounder and chief scientific officer. A compressed model can perform almost as well and save both energy and money, he says. 

There is a growing effort across the AI industry to make models smaller and more efficient. Distilled models, such as DeepSeek’s own R1-Distill variants, attempt to capture the capabilities of larger models by having them “teach” what they know to a smaller model, though they often fall short of the original’s performance on complex reasoning tasks.

Other ways to compress models include quantization, which reduces the precision of the model’s parameters (the numerical values the model learns during training), and pruning, which removes individual weights or entire “neurons.”
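For contrast with the tensor-network approach, here is a toy NumPy illustration of the two conventional techniques named above, quantization and pruning, applied to a small weight matrix. It is not the method Multiverse used.

```python
# Toy NumPy illustration of quantization and pruning; not Multiverse's
# tensor-network approach.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0, 1, (4, 4)).astype(np.float32)

# Quantization: store weights at lower precision (here, 8-bit integers).
scale = np.abs(weights).max() / 127
quantized = np.round(weights / scale).astype(np.int8)
dequantized = quantized.astype(np.float32) * scale   # used at inference time

# Pruning: zero out the smallest-magnitude weights (here, the bottom 50%).
threshold = np.quantile(np.abs(weights), 0.5)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

print("max quantization error:", np.abs(weights - dequantized).max())
print("fraction of weights pruned:", (pruned == 0).mean())
```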

“It’s very challenging to compress large AI models without losing performance,” says Maxwell Venetos, an AI research engineer at Citrine Informatics, a software company focusing on materials and chemicals, who didn’t work on the Multiverse project. “Most techniques have to compromise between size and capability. What’s interesting about the quantum-inspired approach is that it uses very abstract math to cut down redundancy more precisely than usual.”

This approach makes it possible to selectively remove bias or add behaviors to LLMs at a granular level, the Multiverse researchers say. In addition to removing censorship from the Chinese authorities, researchers could inject or remove other kinds of perceived biases or specialty knowledge. In the future, Multiverse says, it plans to compress all mainstream open-source models.  

Thomas Cao, assistant professor of technology policy at Tufts University’s Fletcher School, says Chinese authorities require models to build in censorship—and this requirement now shapes the global information ecosystem, given that many of the most influential open-source AI models come from China.

Academics have also begun to document and analyze the phenomenon. Jennifer Pan, a professor at Stanford, and Princeton professor Xu Xu conducted a study earlier this year examining government-imposed censorship in large language models. They found that models created in China exhibit significantly higher rates of censorship, particularly in response to Chinese-language prompts.

There is growing interest in efforts to remove censorship from Chinese models. Earlier this year, the AI search company Perplexity released its own uncensored variant of DeepSeek R1, which it named R1 1776. Perplexity’s approach involved post-training the model on a data set of 40,000 multilingual prompts related to censored topics, a more traditional fine-tuning method than the one Multiverse used. 

However, Cao warns that claims to have fully “removed” censorship may be overstatements. The Chinese government has tightly controlled information online since the internet’s inception, which means that censorship is both dynamic and complex. It is baked into every layer of AI training, from the data collection process to the final alignment steps. 

“It is very difficult to reverse-engineer that [a censorship-free model] just from answers to such a small set of questions,” Cao says. 

The Download: de-censoring DeepSeek, and Gemini 3

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Quantum physicists have shrunk and “de-censored” DeepSeek R1

The news: A group of quantum physicists at Spanish firm Multiverse Computing claims to have created a version of the powerful reasoning AI model DeepSeek R1 that strips out the censorship built into the original by its Chinese creators. 

Why it matters: In China, AI companies are subject to rules and regulations meant to ensure that content output aligns with laws and “socialist values.” As a result, companies build in layers of censorship when training the AI systems. When asked questions that are deemed “politically sensitive,” the models often refuse to answer or provide talking points straight from state propaganda.

How they did it: Multiverse Computing specializes in quantum-inspired AI techniques, which it used to create DeepSeek R1 Slim, a model that is 55% smaller but performs almost as well as the original model. It allowed them to identify and remove Chinese censorship so that the model answered sensitive questions in much the same way as Western models. Read the full story.

—Caiwei Chen

Google’s new Gemini 3 “vibe-codes” responses and comes with its own agent

Google today unveiled Gemini 3, a major upgrade to its flagship multimodal model. The firm says the new model is better at reasoning, has more fluid multimodal capabilities (the ability to work across voice, text or images), and will work like an agent.

Gemini Agent is an experimental feature designed to handle multi-step tasks directly inside the app. The agent can connect to services such as Google Calendar, Gmail, and Reminders. Once granted access, it can execute tasks like organizing an inbox or managing schedules. Read the full story.

—Caiwei Chen

MIT Technology Review Narrated: Why climate researchers are taking the temperature of mountain snow

The Sierra’s frozen reservoir provides about a third of California’s water and most of what comes out of the faucets, shower heads, and sprinklers in the towns and cities of northwestern Nevada.

The need for better snowpack temperature data has become increasingly critical for predicting when the water will flow down the mountains, as climate change fuels hotter weather, melts snow faster, and drives rapid swings between very wet and very dry periods.

A new generation of tools, techniques, and models promises to improve water forecasts, and help California and other states manage in the face of increasingly severe droughts and flooding. However, observers fear that any such advances could be undercut by the Trump administration’s cutbacks across federal agencies.

This is our latest story to be turned into a MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Yesterday’s Cloudflare outage was not triggered by a hack
An error in its bot management system was to blame. (The Verge)
+ ChatGPT, X and Uber were among the services that dropped. (WP $)
+ It’s another example of the dangers of having a handful of infrastructure providers. (WSJ $)
+ Today’s web is incredibly fragile. (Bloomberg $)

2 Donald Trump has called for a federal AI regulatory standard
Instead of allowing each state to make its own laws. (Axios)
+ He claims the current approach risks slowing down AI progress. (Bloomberg $)

3 Meta has won the antitrust case that threatened to spin off Instagram
It’s one of the most high-profile cases in recent years. (FT $)
+ A judge ruled that Meta doesn’t hold a social media monopoly. (BBC)

4 The Three Mile Island nuclear plant is making a comeback
It’s the lucky recipient of a $1 billion federal loan to kickstart the facility. (WP $)
+ Why Microsoft made a deal to help restart Three Mile Island. (MIT Technology Review)

5 Roblox will block children from speaking to adult strangers 
The gaming platform is facing fresh lawsuits alleging it is failing to protect young users from online predators. (The Guardian)
+ But we don’t know much about how accurate its age verification is. (CNN)
+ All users will have to submit a selfie or an ID to use chat features. (Engadget)

6 Boston Dynamics’ robot dog is becoming a widespread policing tool
It’s deployed by dozens of US and Canadian bomb squads and SWAT teams. (Bloomberg $)

7 A tribally owned network of EV chargers is nearing completion
It’s part of Standing Rock reservation’s big push for clean energy. (NYT $)

8 Resist the temptation to use AI to cheat at conversations
It makes it much more difficult to forge a connection. (The Atlantic $)

9 Amazon wants San Francisco residents to ride its robotaxis for free
It’s squaring up against Alphabet’s Waymo in the city for the first time. (CNBC)
+ But its cars look very different from traditional vehicles. (LA Times $)
+ Zoox is operating around 50 robotaxis across SF and Las Vegas. (The Verge)

10 TikTok’s new setting allows you to filter out AI-generated clips
Farewell, sweet slop. (TechCrunch)
+ How do AI models generate videos? (MIT Technology Review)

Quote of the day

“The rapids of social media rush along so fast that the Court has never even stepped into the same case twice.”

—Judge James Boasberg, who rejected the Federal Trade Commission’s claim that Meta had created an illegal social media monopoly, acknowledges the law’s failure to keep up with technology, Politico reports.

One more thing

Namibia wants to build the world’s first hydrogen economy

Factories have used fossil fuels to process iron ore for three centuries, and the climate has paid a heavy price: According to the International Energy Agency, the steel industry today accounts for 8% of carbon dioxide emissions.

But it turns out there is a less carbon-intensive alternative: using hydrogen. Unlike coal or natural gas, which release carbon dioxide as a by-product, hydrogen-based processing releases water. And if the hydrogen itself is “green,” the climate impact of the entire process will be minimal.

HyIron, which has a site in the Namib desert, is one of a handful of companies around the world that are betting green hydrogen can help the $1.8 trillion steel industry clean up its act. The question now is whether Namibia’s government, its trading partners, and hydrogen innovators can work together to build the industry in a way that satisfies the world’s appetite for cleaner fuels—and also helps improve lives at home. Read the full story.

—Jonathan W. Rosen

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ This art installation in Paris revolves around porcelain bowls clanging against each other in a pool of water—it’s oddly hypnotic.
+ Feeling burnt out? Get down to your local sauna for a quick reset.
+ New York’s subway system is something else.
+ Your dog has ancient origins. No, really!

Scaling innovation in manufacturing with AI

Manufacturing is getting a major system upgrade. As AI amplifies existing technologies—like digital twins, the cloud, edge computing, and the industrial internet of things (IIoT)—it is enabling factory operations teams to shift from reactive, isolated problem-solving to proactive, systemwide optimization.

Digital twins—physically accurate virtual representations of a piece of equipment, a production line, a process, or even an entire factory—allow workers to test, optimize, and contextualize complex, real-world environments. Manufacturers are using digital twins to simulate factory environments with pinpoint detail.

“AI-powered digital twins mark a major evolution in the future of manufacturing, enabling real-time visualization of the entire production line, not just individual machines,” says Indranil Sircar, global chief technology officer for the manufacturing and mobility industry at Microsoft. “This is allowing manufacturers to move beyond isolated monitoring toward much wider insights.”

A digital twin of a bottling line, for example, can integrate one-dimensional shop-floor telemetry, two-dimensional enterprise data, and three-dimensional immersive modeling into a single operational view of the entire production line to improve efficiency and reduce costly downtime. Many high-speed industries face downtime rates as high as 40%, estimates Jon Sobel, co-founder and chief executive officer of Sight Machine, an industrial AI company that partners with Microsoft and NVIDIA to transform complex data into actionable insights. By tracking micro-stops and quality metrics via digital twins, companies can target improvements and adjustments with greater precision, saving millions in once-lost productivity without disrupting ongoing operations.
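As a rough illustration of the kind of roll-up such a twin might expose, the Python sketch below aggregates per-machine telemetry into a line-level downtime rate and a micro-stop count. The data model, field names, and thresholds are hypothetical; this is not Sight Machine's or Microsoft's API.

```python
# Hypothetical sketch: aggregate per-machine telemetry samples into the
# line-level downtime and micro-stop metrics a digital twin might surface.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class TelemetrySample:
    machine_id: str
    timestamp_s: float       # seconds since shift start
    running: bool            # machine state at this sample

def line_metrics(samples: list[TelemetrySample],
                 sample_period_s: float = 1.0,
                 micro_stop_max_s: float = 120.0) -> dict:
    """Return the line's downtime rate and its count of short ('micro') stops."""
    by_machine: dict[str, list[TelemetrySample]] = defaultdict(list)
    for s in samples:
        by_machine[s.machine_id].append(s)

    total = down = micro_stops = 0
    for machine_samples in by_machine.values():
        machine_samples.sort(key=lambda s: s.timestamp_s)
        run = 0                               # length of current "down" streak
        for s in machine_samples:
            total += 1
            if not s.running:
                down += 1
                run += 1
            else:
                if 0 < run * sample_period_s <= micro_stop_max_s:
                    micro_stops += 1
                run = 0
        if 0 < run * sample_period_s <= micro_stop_max_s:   # flush trailing streak
            micro_stops += 1

    return {"downtime_rate": down / total if total else 0.0,
            "micro_stops": micro_stops}

# Example: one machine ("filler") sampled every second with a single 30-second stop.
samples = [TelemetrySample("filler", t, running=(t < 10 or t >= 40)) for t in range(600)]
print(line_metrics(samples))   # {'downtime_rate': 0.05, 'micro_stops': 1}
```

In production, samples would stream continuously from IIoT gateways into the twin rather than being aggregated in a batch like this.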

AI offers the next opportunity. Sircar estimates that up to 50% of manufacturers are currently deploying AI in production. This is up from 35% of manufacturers surveyed in a 2024 MIT Technology Review Insights report who said they have begun to put AI use cases into production. Larger manufacturers with more than $10 billion in revenue were significantly ahead, with 77% already deploying AI use cases, according to the report.

“Manufacturing has a lot of data and is a perfect use case for AI,” says Sobel. “An industry that has been seen by some as lagging when it comes to digital technology and AI may be in the best position to lead. It’s very unexpected.”

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

New Ecommerce Tools: November 19, 2025

Every week we publish a rundown of new services for ecommerce merchants. This installment includes updates on product images, returns management, agentic commerce, financing, social commerce, website builders, international shipping, and alternative payments.

Got an ecommerce product release? Email updates@practicalecommerce.com.

New Tools for Merchants

VisualScale.ai launches on Google Cloud Marketplace for 3D images. Imagine.io, a developer of AI-powered three-dimensional visualization and content for commerce, has announced the availability of VisualScale.ai on Google Cloud Marketplace. VisualScale.ai lets brands generate realistic lifestyle visuals from a product image and a prompt. Users can start from a cutout image, an existing lifestyle image, or an eligible three-dimensional model, then refine the result with natural-language prompts. Users can also upload brand guidelines or reference imagery to steer outputs toward approved looks and identities.

Home page of Imagine.io

Imagine.io

ShipStation streamlines duty and tax payments for cross-border deliveries. ShipStation, a shipping and logistics platform, is helping U.S. merchants ship internationally. The company is offering a guaranteed prepaid duties and taxes feature, enabling end consumers to pay all fees upfront and thus eliminate unexpected post-delivery charges. Merchants can confirm duty and tax costs directly in ShipStation’s platform when purchasing a label.

European Payments Initiative launches Wero in Germany. The European Payments Initiative, a service backed by 16 European banks and providers, has announced an ecommerce feature for Wero, its alternative payment solution for Europe-based consumers and merchants, which was first developed for instant peer-to-peer transfers. Wero ecommerce is now live in Germany, enabling consumers to find merchants that accept the payment solution. EPI member banks in Germany (Postbank, Deutsche Bank, ING Deutschland, Revolut) have begun allowing Wero transactions.

WooCommerce integrates with Reddit. Woo, the company behind the WooCommerce plugin for WordPress, has released Reddit for WooCommerce, streamlining the process for merchants to launch ad campaigns on Reddit. The extension’s one-click deployment automatically enables the Reddit Pixel and Conversions API. Merchants can sync product catalogs to Reddit Ads Manager in a single click and, once connected, use Reddit Ads Manager to create Dynamic Product Ads and Conversions campaigns.

Web page on WooCommerce announcing collaboration with Reddit

Reddit for WooCommerce

Amazon announces a new returns dashboard. Amazon has introduced “Returns and Recovery: Insights and Opportunities,” a dashboard for all sellers to obtain clearer insights into returns and inventory recovery. The dashboard offers (i) ASIN-level insights, (ii) performance metrics to track trends over time, (iii) recovery insights for Grade and Resell members, and (iv) a resource center and settings to manage returns and recovery for both FBA and Fulfilled by Merchant products.

Cross-border platform IFYshop launches Global Warehouse Logistics Fund. IFYshop, a shopping app, has launched the Global Warehouse Logistics Fund to construct smart storage centers, automated sorting systems, and AI-driven supply chain management systems. IFYshop plans to build a logistics network in major trade corridors, helping sellers reduce warehousing and fulfillment costs while enhancing cross-border fulfillment speed. IFYshop is also launching Saving Wallet, a tool for sellers to manage settlement funds.

MikMak updates platform with MCP-powered and AI-driven features. MikMak, an ecommerce enablement and analytics platform, has announced the upcoming release of AI-driven enhancements to MikMak 3.0 powered by the Model Context Protocol (MCP), including the debut of conversational insights. MCP lets commerce systems and AI agents communicate (see the sketch below); MikMak says it amplifies this capability through commerce and insights APIs. MikMak’s conversational insights pair visual analytics with narrative intelligence, automatically generating summaries and recommendations.

Home page of MikMak

MikMak
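For readers unfamiliar with the acronym: the Model Context Protocol is an open, JSON-RPC-based standard that lets AI agents discover and invoke tools exposed by external systems. The sketch below shows the general shape of one such tool-call message; the tool name and arguments are invented for illustration and do not describe MikMak's actual MCP interface.

```python
# General shape of an MCP tool call (JSON-RPC 2.0). The tool name and
# arguments below are hypothetical, not MikMak's documented interface.
import json

tool_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",                   # standard MCP method for invoking a tool
    "params": {
        "name": "get_channel_insights",       # hypothetical tool exposed by a commerce platform
        "arguments": {"brand": "example-brand", "period": "last_30_days"},
    },
}
print(json.dumps(tool_call, indent=2))
```

The point is simply that tool calls are structured and machine-readable, which is what lets an AI agent query commerce data without bespoke integrations.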

Facebook Marketplace improves tools for buyers and sellers to interact. Facebook Marketplace is testing a feature to help buyers ask the right questions. When starting a chat with a seller, shoppers will see a “Suggested questions to ask” button. Meta AI will use the details from the listing and conversation to suggest questions to ask the seller. Shoppers can react and comment directly on listings, helping others learn about item quality and discover unique finds.

Alibaba.com unveils agentic AI mode. Alibaba.com has launched AI mode, integrating agent-based capabilities directly into the user journey. According to Alibaba.com, AI mode will interpret natural language queries, analyze technical specifications, and automatically compare suppliers across pricing, logistics, certifications, and production capabilities, quickly delivering tailored recommendations. By connecting with existing Alibaba.com services such as secure payment and post-sales support, AI mode aims to enable a fully automated buying experience.

GoDaddy brings agentic AI to small businesses with launch of Airo. GoDaddy has launched Airo, a beta agentic AI offering that lets small businesses turn simple conversations into completed tasks. Airo can propose an idea, register a domain, build a website, generate a logo or template, and produce a hosted app. Six agents are available at launch: Airo, Airo App Builder, Compliance, Domain Search and Registration, Website Builder, and Logo.

Liquid announces multi-year partnership with Shopify. Liquid, a builder of multimodal foundation models for real-time applications, has partnered with Shopify to license Liquid Foundation Models for search. As part of the agreement, Shopify and Liquid have co-developed a generative recommender system. They are evaluating multimodal models for additional products and use cases, including customer profiles, agents, and product classification. The agreement follows Shopify’s participation in Liquid’s $250 million Series A round in December 2024.

Home page of Liquid

Liquid

Ant International’s Antom launches AI-powered app for SMB operations. Antom, a provider of merchant payment services under Ant International, has announced EPOS360, an app that brings point-of-sale systems, payments, banking, lending, and support together for small and medium‑sized businesses. According to Antom, the app enables merchants to set up online stores and partner with e-wallets and other digital channels. Merchants can also manage daily operations, inventory, and seasonal promotions, and obtain financing support from Ant’s Anext Bank, regulated by the Monetary Authority of Singapore.

PayPal relaunches in the U.K. with debit and credit cards plus rewards. PayPal is relaunching its digital wallet across the U.K. as a unified payment experience for customers to shop online and in-store. PayPal customers across the U.K. can now access the new PayPal+ loyalty program, with PayPal debit and credit cards also available. Consumers can sign up to PayPal+ for free in the PayPal app and earn points on both online and in-store purchases.

Znode partners with Unbound Commerce for mobile B2B ecommerce. Znode, a B2B ecommerce platform, has announced a partnership with Unbound Commerce, a provider of app solutions. The partnership enables Znode customers to extend wholesale ecommerce experiences into native iOS and Android apps. Unbound Commerce specializes in building mobile applications for manufacturers and distributors.

Marketing platform Profound launches Shopping Analysis to track AI engines. Profound, a platform that helps businesses control how they appear in generative AI responses, has launched Shopping Analysis. The new tool enables retailers to track which products appear in AI shopping conversations, monitor their visibility rates and positioning against competitors, and understand the specific attributes answer engines assign to their products. Shopping Analysis captures actual product images, their placement within conversations, and comprehensive response details. It also enables teams to evaluate merchant and channel performance.

Home page of Profound

Profound