Product recommendation widgets are powerful tools in ecommerce, driving engagement, enhancing user experience, and increasing conversions. However, measuring their true impact is not always straightforward. In this article, we'll explore two primary methods for assessing the performance of recommendation widgets: attribution and incrementality. We’ll dive deep into each, covering concepts like session-level attribution, A/B testing for incrementality, and tools that can help you optimize your analysis.
Attribution models help us understand how recommendation widgets contribute to a shopper’s journey, measuring their impact on sessions and orders. By implementing robust attribution, we can identify which sessions or orders were influenced by the recommendations and gauge the widget’s effect.
The key metrics when assessing attribution are sessions with recommendation clicks, orders containing recommended products, and revenue from those products. Tracking the percentage of influenced sessions and orders, along with attributed revenue, shows how well recommendations contribute to overall sales and customer interaction. By looking at these metrics over time, we can identify patterns and measure the sustained value of recommendation strategies.
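As a rough illustration, here is a minimal Python sketch that computes these three metrics from session-level data. The DataFrame columns (`clicked_reco`, `reco_revenue`, and so on) are hypothetical stand-ins for whatever your analytics export actually provides:

```python
import pandas as pd

# Hypothetical session log; in practice, export this from your
# analytics pipeline. Column names are illustrative assumptions.
sessions = pd.DataFrame({
    "session_id":    [1, 2, 3, 4, 5],
    "clicked_reco":  [True, False, True, False, True],   # clicked a widget item
    "placed_order":  [True, True, True, False, False],
    "order_revenue": [120.0, 80.0, 60.0, 0.0, 0.0],
    "reco_revenue":  [120.0, 0.0, 30.0, 0.0, 0.0],       # revenue from recommended SKUs
})

influenced = sessions["clicked_reco"]
metrics = {
    "pct_sessions_with_reco_clicks": influenced.mean() * 100,
    "pct_orders_influenced": (influenced & sessions["placed_order"]).sum()
                             / sessions["placed_order"].sum() * 100,
    "pct_revenue_from_recos": sessions["reco_revenue"].sum()
                              / sessions["order_revenue"].sum() * 100,
}
print(metrics)
```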
Direct Attribution: Direct attribution measures only the clicks directly on the recommendation widget that immediately led to a conversion. This approach is straightforward and more conservative, but can miss the bigger picture, as recommendation widgets can influence decisions even when users don’t click directly on them or end up purchasing other products.
Assisted Attribution: In contrast, assisted attribution includes indirect contributions. For instance, if a user interacts with a recommended product but ends up purchasing another product, the revenue would still be attributed to the product recommendations. This approach provides a more holistic view of the widget’s influence, capturing the “bigger picture” of its contributions.
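To make the distinction concrete, here is a small sketch of how a single order might be classified under each model. The `classify_attribution` helper and its inputs are illustrative, not any vendor's actual API:

```python
def classify_attribution(order_skus, clicked_reco_skus):
    """Classify an order as 'direct' if a clicked recommendation was
    purchased, 'assisted' if a recommendation was clicked but other
    products were bought, and None if recommendations played no role."""
    if not clicked_reco_skus:
        return None
    if set(order_skus) & set(clicked_reco_skus):
        return "direct"    # clicked recommendation ended up in the order
    return "assisted"      # clicked a recommendation, bought something else

print(classify_attribution(["sku-42"], ["sku-42"]))  # direct
print(classify_attribution(["sku-99"], ["sku-42"]))  # assisted
print(classify_attribution(["sku-99"], []))          # None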
The attribution window defines the timeframe within which interactions with the recommendation widget are counted toward conversions. For instance, you might set a 7-day attribution window to capture purchases made within a week of an initial interaction with the widget. Choosing an appropriate window depends on your customers’ buying behavior; short cycles may warrant a 1–3 day window, while longer purchase journeys might benefit from 30 days or more.
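In practice, applying the window can be as simple as comparing timestamps. The sketch below assumes you have the recommendation click time and order time available, and uses the 7-day window from the example above:

```python
from datetime import datetime, timedelta

ATTRIBUTION_WINDOW = timedelta(days=7)  # tune to your purchase cycle

def is_attributable(reco_click_at: datetime, order_at: datetime) -> bool:
    """True if the order occurred within the attribution window
    after the recommendation click."""
    return timedelta(0) <= order_at - reco_click_at <= ATTRIBUTION_WINDOW

click = datetime(2024, 5, 1, 10, 0)
print(is_attributable(click, datetime(2024, 5, 5)))   # True: 4 days later
print(is_attributable(click, datetime(2024, 5, 12)))  # False: outside window
```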
👍 Aqurate's advice
Most personalization tools provide a dashboard for attribution analysis, but make sure to ask your provider how their attribution model works in detail, so you know what numbers you're actually looking at.
Incrementality analysis helps determine if the recommendation widget is generating additional revenue or simply influencing purchases that would have occurred regardless. There are two main approaches here: A/B testing and time-series analysis.
In an A/B test, users are randomly divided into two groups:

- A control group that browses the store without the recommendation widget
- A treatment group that sees the recommendation widget as usual
By comparing metrics between the two groups, you can isolate the true incremental effect of the widget. This approach provides clear, data-backed insights, but it requires traffic segmentation and may impact short-term revenue.
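As a simple illustration, here is how the incremental lift on revenue per session might be computed from the two groups' aggregates. The figures are placeholders, not real results:

```python
# Aggregates per group; in practice, pull these from your testing tool.
control   = {"sessions": 50_000, "revenue": 150_000.0}  # widget hidden
treatment = {"sessions": 50_000, "revenue": 162_000.0}  # widget shown

rps_control   = control["revenue"] / control["sessions"]
rps_treatment = treatment["revenue"] / treatment["sessions"]
lift = (rps_treatment - rps_control) / rps_control * 100

print(f"Revenue/session: control {rps_control:.2f}, treatment {rps_treatment:.2f}")
print(f"Incremental lift: {lift:+.1f}%")
```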
Here’s how to set up a successful A/B test for a recommendation widget:

1. Define your hypothesis and a primary metric up front (e.g. revenue per session).
2. Randomly split traffic between the control and treatment groups.
3. Run the test long enough to reach statistical significance, covering at least one full business cycle.
4. Analyze the results and keep the widget only if the treatment group wins on your primary metric.
👍 Aqurate's advice
For a standard A/B test to reach significance, make sure you have at least 1,000 orders per month, and focus on revenue per session as your north star metric.
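If you can export per-session revenue for both groups, a standard significance check might look like the sketch below. It runs a Welch t-test via SciPy on synthetic placeholder data; real data would come from your analytics or testing tool:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic per-session revenue: most sessions convert to zero revenue,
# a small share converts with a gamma-distributed basket value.
control   = np.where(rng.random(20_000) < 0.030, rng.gamma(2.0, 40.0, 20_000), 0.0)
treatment = np.where(rng.random(20_000) < 0.033, rng.gamma(2.0, 40.0, 20_000), 0.0)

# Welch's t-test on revenue per session (does not assume equal variances).
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"treatment RPS={treatment.mean():.2f}  control RPS={control.mean():.2f}")
print(f"t={t_stat:.2f}, p={p_value:.4f} -> significant at 5%? {p_value < 0.05}")
```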
When A/B testing isn’t feasible, time-series analysis can be used to gauge incrementality by observing changes in conversion rates, revenue, or other metrics over time. For example, you might compare the performance of the site before and after implementing the widget. While this approach can reveal trends, it is susceptible to confounding factors (like seasonality or external events) that may influence results.
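A minimal before/after comparison might look like the following sketch. The daily conversion rates and launch date are invented placeholders, and any real analysis should control for the confounders mentioned above:

```python
import pandas as pd

# Placeholder daily metrics; export the real series from your analytics tool.
daily = pd.DataFrame({
    "date": pd.date_range("2024-04-01", periods=60),
    "conversion_rate": [0.021] * 30 + [0.024] * 30,
})
launch = pd.Timestamp("2024-05-01")  # hypothetical widget launch date

before = daily.loc[daily["date"] < launch, "conversion_rate"].mean()
after  = daily.loc[daily["date"] >= launch, "conversion_rate"].mean()
print(f"Before: {before:.3%}  After: {after:.3%}  Change: {(after / before - 1):+.1%}")
```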
| Method | Benefits | Drawbacks |
|---|---|---|
| A/B Testing | Clear measurement of incrementality, high accuracy | Requires traffic segmentation, may reduce short-term revenue |
| Time-Series | Works without traffic segmentation, good for trends | Limited accuracy due to external factors, less precise |
Various tools make it easier to implement and measure A/B testing for recommendation widgets. Here are two popular platforms:
Omniconvert Explore: A comprehensive CRO tool that allows for easy segmentation and A/B testing, with specific support for ecommerce experiments. Its robust analytics can help you understand how recommendation widgets are impacting revenue.
abconvert.io: abconvert.io provides extensive A/B testing options tailored for ecommerce, including split testing and audience segmentation. We recommend it for Shopify stores, as it is very easy to set up and run tests on Theme versions.
To fully leverage product recommendation widgets, you need a clear strategy for assessing their impact. Attribution analysis helps you understand the widget’s role in the user journey, while incrementality analysis isolates its true effect on revenue. Combining both methods, along with A/B testing tools like Omniconvert, will provide you with the insights you need to optimize your recommendation strategy and drive growth. By investing time in accurately assessing these widgets’ performance, your ecommerce business can ensure its recommendation efforts are truly moving the needle on sales.