Funnel Testing Input for Sample Size Purposes
I believe I know the answer to this, but I want to erase any doubt from my already overloaded CRO brain. Thanks ahead for your replies.
We are embarking on a series of funnel tests for an eCommerce store. The current setup is a 4-step process. The overall conversion rate of the site hovers around 4-5%; however, once a user reaches step 1 of the funnel, there is a 60% completion rate through the 4 steps to an order.
Question: For any given test within this 4-step process, should we use the completion rate as our baseline conversion rate when computing estimated test duration?
In my opinion you should use the conversion rate after step 1 to determine which variation is better. There can be numerous reasons for a user not to take step 1 (high cost, desired product not found), and those reasons tell you nothing about which variation performed better.
For purposes of estimating the sample size required for the test, it depends on which metric you are using as your KPI:
A- If your experiment's KPI is "Visitor Conversion" (orders per visitor) then you would use your 4-5% rate.
B- If your experiment's KPI is "Step 1 to Step 2 Conversion", then you would use that rate.
C- If your experiment's KPI is "Checkout Funnel Conversion" (Step 1 through Receipt), then you would use the 60% rate.
Which of those metrics you *should* be using is a different conversation.
While B is the best indicator of how your change affects Step 1 to Step 2, IMO the main KPI should always be through to the receipt. For example, you could replace all the form fields on Step 1 with a big NEXT button and put everything on Step 2, which would make your Step 1 to Step 2 results look awesome but would probably hurt your overall conversion.
You should definitely use your 60% completion rate as the baseline. The reason for this is simple: the calculator is helping you understand how much you need to affect the conversion rate of users in the experiment to get a statistically meaningful result. The 4-5% conversion rate is not relevant (as far as this experiment is concerned) because that rate is based on total visitors to the site, many of whom would never even be eligible to enter this experiment because they don't reach that step of the funnel.
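To see why the baseline matters so much, here's a rough sketch of the standard two-proportion sample-size approximation (the kind most test-duration calculators use under the hood). The function name, the +5% relative lift, and the z-values for 95% significance / 80% power are my own illustrative choices, not anything from this thread:

```python
from math import ceil

def sample_size_per_variation(baseline, relative_lift,
                              alpha_z=1.96, power_z=0.8416):
    """Approximate visitors needed per variation for a two-proportion test.

    baseline:      baseline conversion rate (e.g. 0.60)
    relative_lift: minimum detectable relative effect (e.g. 0.05 for +5%)
    alpha_z:       z-score for two-sided alpha = 0.05
    power_z:       z-score for 80% power
    """
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    # Sum of the variances of the two proportions, divided by the
    # squared absolute difference you want to be able to detect.
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((alpha_z + power_z) ** 2 * variance / (p2 - p1) ** 2)

# Same +5% relative lift, very different traffic requirements:
n_funnel = sample_size_per_variation(0.60, 0.05)   # 60% funnel baseline
n_site = sample_size_per_variation(0.045, 0.05)    # ~4.5% sitewide baseline
```

With these assumed numbers, the 60% baseline needs on the order of a few thousand funnel entrants per variation, while the sitewide 4.5% baseline would demand tens of times more visitors for the same relative lift, which is exactly why the rate of the population actually entering the experiment is the one to plug in.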
@ryanlillis The other consideration was to use revenue (e.g. at the receipt), but since we aren't really affecting revenue, in the sense of an up-sell or other mechanism in the funnel, we thought the conversion rate was more relevant. Any best practices on that thought?
In the future we are thinking of running a promo box test, where we collapse the promo box on each page of the funnel. Right now it appears on each page as part of the module that shows the products. In that case, both revenue and throughput seem to be goals? Thoughts on that one?
In the example you're mentioning, it definitely sounds like you should be focusing on conversion rate and not revenue. Any time you're testing in the funnel, unless you're doing something that gives users the ability to modify either A) the items in their carts (e.g. different placement or algorithm for showing related/recommended products, etc.) or B) the price of the items in their carts (e.g. promo/coupon codes), the focus should be on conversion rate.
And yes, if you are focusing on one of those items, it's good to consider both factors. For example, imagine you showed an offer in the checkout funnel for 99% off and you only focused on your conversion rate. That wouldn't make sense: your conversion rate would be sure to shoot up, but the negative impact to revenue would be huge as well.
It's important to understand where that line is before you run your experiment, to avoid headaches when you're reviewing results. In other words, you should be saying something like, "We're willing to accept a __% drop in conversion rate if our revenue goes up by __%" or "We're willing to accept a __% drop in revenue if our conversion rate goes up by __%." These are sometimes difficult questions to answer with perfect precision, but even attempting to answer them before you run the test is likely to set you up for clearer results that everyone agrees on once they start to roll in.
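The deep-discount thought experiment above boils down to comparing revenue per visitor, not conversion rate alone. A minimal sketch, with entirely hypothetical conversion rates and order values chosen to illustrate the point:

```python
def revenue_per_visitor(conversion_rate, avg_order_value):
    # Revenue per visitor = conversion rate x average order value.
    # A variant can win on conversion and still lose on this metric.
    return conversion_rate * avg_order_value

# Hypothetical numbers: control converts at 60% with a $50 average order;
# a heavy discount lifts conversion to 70% but cuts the average order to $30.
control = revenue_per_visitor(0.60, 50.0)   # 30.0
variant = revenue_per_visitor(0.70, 30.0)   # 21.0
# Conversion went up, but revenue per visitor went down.
```

Framing the acceptable trade-off in these terms before launch (e.g. "the variant must not drop revenue per visitor") makes the fill-in-the-blank statements above concrete numbers rather than a post-hoc debate.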