Resources & reports for eCommerce businesses

How to Measure the Performance of Personalized Product Recommendation Widgets in eCommerce

Written by Tudor Goicea | Nov 14, 2024 10:34:40 AM

Product recommendation widgets are powerful tools in ecommerce, driving engagement, enhancing user experience, and increasing conversions. However, measuring their true impact is not always straightforward. In this article, we'll explore two primary methods for assessing the performance of recommendation widgets: attribution and incrementality. We’ll dive deep into each, covering concepts like session-level attribution, A/B testing for incrementality, and tools that can help you optimize your analysis.

1. Attribution: Understanding the Role of Product Recommendations

Attribution models help us understand how recommendation widgets contribute to a shopper’s journey, measuring their impact on sessions and orders. By implementing robust attribution, we can identify which sessions or orders were influenced by the recommendations and gauge the widget’s effect.

Clicks → Orders → Revenue

The key metrics when assessing attribution are sessions with recommendation clicks, orders containing recommended products, and revenue from those products. By tracking the percentage of influenced sessions, influenced orders, and attributed revenue, we can assess how well recommendations contribute to overall sales and customer interaction. Looking at these metrics over time also reveals patterns and shows the sustained value of a recommendation strategy.
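As a concrete illustration, here is a minimal sketch of how these percentages could be computed from raw session and order data. The data structures and field names (`sessions`, `orders`, `rec_clicks`, `recommended`) are hypothetical examples, not the output format of any specific analytics tool.

```python
# Minimal sketch: computing attribution metrics from raw session and order data.
# The input structures and field names below are hypothetical examples.

sessions = [
    {"session_id": "s1", "rec_clicks": 2},   # session with recommendation clicks
    {"session_id": "s2", "rec_clicks": 0},
    {"session_id": "s3", "rec_clicks": 1},
]

orders = [
    # each order lists its line items and flags which ones were recommended products
    {"order_id": "o1", "items": [{"sku": "A", "revenue": 40.0, "recommended": True},
                                 {"sku": "B", "revenue": 25.0, "recommended": False}]},
    {"order_id": "o2", "items": [{"sku": "C", "revenue": 60.0, "recommended": False}]},
]

# % of sessions influenced by the widget
influenced_sessions = sum(1 for s in sessions if s["rec_clicks"] > 0)
pct_sessions = influenced_sessions / len(sessions) * 100

# % of orders containing at least one recommended product
influenced_orders = sum(1 for o in orders if any(i["recommended"] for i in o["items"]))
pct_orders = influenced_orders / len(orders) * 100

# share of revenue coming from recommended products
rec_revenue = sum(i["revenue"] for o in orders for i in o["items"] if i["recommended"])
total_revenue = sum(i["revenue"] for o in orders for i in o["items"])
pct_revenue = rec_revenue / total_revenue * 100

print(f"Sessions influenced: {pct_sessions:.1f}%")
print(f"Orders influenced:   {pct_orders:.1f}%")
print(f"Revenue share:       {pct_revenue:.1f}%")
```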

Direct Attribution vs. Assisted Attribution

  • Direct Attribution: Direct attribution counts only clicks on the recommendation widget that immediately led to a conversion. This approach is straightforward and more conservative, but it can miss the bigger picture, as recommendation widgets can influence decisions even when users don’t click on them directly, or when they click but end up purchasing other products.

  • Assisted Attribution: In contrast, assisted attribution includes indirect contributions. For instance, if a user interacts with a recommended product but ends up purchasing another product, the revenue would still be attributed to the product recommendations. This approach provides a more holistic view of the widget’s influence, capturing the “bigger picture” of its contributions.
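
To make the distinction concrete, here is a minimal sketch of how a single order could be classified under each model. The `attribute_order` helper and its inputs are hypothetical, intended only to illustrate the logic rather than any vendor's actual attribution code.

```python
# Minimal sketch: classifying one order under direct vs. assisted attribution.
# The helper and its input format are hypothetical illustrations.

def attribute_order(order_skus, rec_clicked_skus):
    """Return (direct, assisted) flags for one order.

    direct:   the shopper clicked a recommendation and bought that same product
    assisted: the shopper clicked a recommendation during the journey,
              even if the purchased products are different
    """
    direct = any(sku in rec_clicked_skus for sku in order_skus)
    assisted = len(rec_clicked_skus) > 0
    return direct, assisted

# Shopper clicked recommended product "A" but ended up buying "B" and "C".
direct, assisted = attribute_order(order_skus={"B", "C"}, rec_clicked_skus={"A"})
print(direct, assisted)  # False True -> counted only under assisted attribution
```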

Attribution Windows

The attribution window defines the timeframe within which interactions with the recommendation widget are counted toward conversions. For instance, you might set a 7-day attribution window to capture purchases made within a week of an initial interaction with the widget. Choosing an appropriate window depends on your customers’ buying behavior; short cycles may warrant a 1–3 day window, while longer purchase journeys might benefit from 30 days or more.
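Below is a minimal sketch of how an attribution window check could look in practice; the 7-day window, field names, and timestamps are illustrative assumptions.

```python
# Minimal sketch: checking whether a widget interaction falls inside the
# attribution window of a purchase. The 7-day window and timestamps are
# illustrative assumptions.
from datetime import datetime, timedelta

ATTRIBUTION_WINDOW = timedelta(days=7)

def within_window(click_time: datetime, purchase_time: datetime) -> bool:
    """True if the recommendation click happened within the window before the purchase."""
    return timedelta(0) <= purchase_time - click_time <= ATTRIBUTION_WINDOW

click = datetime(2024, 11, 1, 10, 0)
purchase = datetime(2024, 11, 5, 18, 30)
print(within_window(click, purchase))  # True: 4 days later, inside the 7-day window
```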

👍 Aqurate's advice

Most personalization tools provide a dashboard for attribution analysis, but make sure to ask your provider how their attribution model works in detail, so you know what numbers you're actually looking at.

2. Incrementality: Isolating the Impact of Recommendations

Incrementality analysis helps determine if the recommendation widget is generating additional revenue or simply influencing purchases that would have occurred regardless. There are two main approaches here: A/B testing and time-series analysis.

A/B Testing: A Gold Standard for Incrementality

In an A/B test, users are divided into two groups:

  • Group A sees the recommendation widgets.
  • Group B does not see the widgets.
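
As a rough illustration of how this split might be implemented, the sketch below assigns each user deterministically to one of the two groups by hashing their ID, so a visitor stays in the same group across visits. The salt and the 50/50 bucketing scheme are assumptions; in practice, a testing tool usually handles this for you.

```python
# Minimal sketch: deterministic assignment of users to A/B groups.
# The salt and 50/50 split are illustrative assumptions; most testing
# tools perform this segmentation for you.
import hashlib

SALT = "rec-widget-test-2024"  # hypothetical experiment identifier

def assign_group(user_id: str) -> str:
    """Hash the user id so each visitor lands in the same group on every visit."""
    bucket = int(hashlib.sha256(f"{SALT}:{user_id}".encode()).hexdigest(), 16) % 100
    return "A_with_widget" if bucket < 50 else "B_without_widget"

print(assign_group("user-123"))
```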

By comparing metrics between the two groups, you can isolate the true incremental effect of the widget. This approach provides clear, data-backed insights, but it requires traffic segmentation and may impact short-term revenue.

Here’s how to set up a successful A/B test for a recommendation widget:

  1. Define the Goal: Decide which metric you’re assessing for incrementality, such as average order value, conversion rate, or revenue per session (recommended).
  2. Segment Your Audience: Randomly assign users to control and test groups, ensuring a statistically significant sample size. 
  3. Establish a Testing Period: Run the test long enough to capture meaningful results. A standard A/B test might run for two to four weeks to account for any fluctuations in shopping behavior.
  4. Analyze Results: Compare the results of the test group (with widget) to the control group (without widget) to measure any lift in conversions or revenue.
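
As a sketch of step 4, the snippet below compares revenue per session between the two groups and runs a Welch's t-test via SciPy to check whether the difference is statistically significant. The figures are made up; a real analysis would use your full session-level exports.

```python
# Minimal sketch: measuring lift in revenue per session and checking significance.
# The data below is made up; a real analysis would use full session-level exports.
from statistics import mean
from scipy.stats import ttest_ind  # Welch's t-test when equal_var=False

# Revenue per session (0.0 for sessions without a purchase), one list per group.
group_a = [0.0, 0.0, 35.0, 0.0, 80.0, 0.0, 12.5, 0.0]   # with widget
group_b = [0.0, 0.0, 0.0, 28.0, 0.0, 0.0, 40.0, 0.0]    # without widget

lift = (mean(group_a) - mean(group_b)) / mean(group_b) * 100
t_stat, p_value = ttest_ind(group_a, group_b, equal_var=False)

print(f"Revenue per session lift: {lift:+.1f}%")
print(f"p-value: {p_value:.3f}  (below 0.05 would suggest a significant difference)")
```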

👍 Aqurate's advice

For a standard A/B test to be significant, make sure you have at least 1,000 orders per month, and focus on revenue per session as your north star metric.

Time-Series Analysis: Observing Trends Over Time

When A/B testing isn’t feasible, time-series analysis can be used to gauge incrementality by observing changes in conversion rates, revenue, or other metrics over time. For example, you might compare the performance of the site before and after implementing the widget. While this approach can reveal trends, it is susceptible to confounding factors (like seasonality or external events) that may influence results.
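A minimal sketch of such a pre/post comparison is shown below. The daily revenue figures and launch timing are illustrative, and a real analysis should also control for seasonality, for example by comparing against the same weeks of the previous year.

```python
# Minimal sketch: pre/post comparison of daily revenue around the widget launch.
# The figures are illustrative; real data should also be checked against
# seasonality (e.g. the same weeks in the previous year).
from statistics import mean

daily_revenue_before = [1180, 1250, 990, 1320, 1105, 1270, 1215]   # week before launch
daily_revenue_after  = [1290, 1410, 1150, 1385, 1240, 1455, 1330]  # week after launch

change = (mean(daily_revenue_after) - mean(daily_revenue_before)) / mean(daily_revenue_before) * 100
print(f"Average daily revenue change after launch: {change:+.1f}%")
# Caveat: unlike an A/B test, this change may include external effects
# (promotions, seasonality), so treat it as a trend indicator, not proof.
```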

Comparing Incrementality Methods: Benefits and Drawbacks

Method      | Benefits                                               | Drawbacks
A/B Testing | Clear measurement of incrementality, high accuracy     | Requires traffic segmentation, may reduce short-term revenue
Time-Series | Works without traffic segmentation, good for trends    | Limited accuracy due to external factors, less precise


3. Tools for Testing and Optimization

Various tools make it easier to implement and measure A/B testing for recommendation widgets. Here are two popular platforms:

  • Omniconvert Explore: A comprehensive CRO tool that allows for easy segmentation and A/B testing, with specific support for ecommerce experiments. Its robust analytics can help you understand how recommendation widgets are impacting revenue.

  • abconvert.io: abconvert.io provides extensive A/B testing options tailored for ecommerce, including split testing and audience segmentation. We recommend it for Shopify stores, as it is very easy to set up and run tests on Theme versions.

Conclusion

To fully leverage product recommendation widgets, you need a clear strategy for assessing their impact. Attribution analysis helps you understand the widget’s role in the user journey, while incrementality analysis isolates its true effect on revenue. Combining both methods, along with A/B testing tools like Omniconvert, will provide you with the insights you need to optimize your recommendation strategy and drive growth. By investing time in accurately assessing these widgets’ performance, your ecommerce business can ensure its recommendation efforts are truly moving the needle on sales.