
Subsequent Tests don't have similar conversion rates, why?

EBTesting 05-29-14

Subsequent Tests don't have similar conversion rates, why?

Hi, I need a little help explaining why conversion rates are different between tests.

 

We've run one successful test on our homepage with the winning variation converting around 2%. We then updated the homepage to include those winning changes. Afterwards, we launched the next test, where the control is the winning variation from the previous test. However, the control is converting around 0.9% instead of 2% like before. Why is that? Thanks!

 

 

DonPeppers 05-29-14
 

Re: Subsequent Tests don't have similar conversion rates, why?

This is a very common result. You should always EXPECT lower results on roll-out than you get from a test, for the simple reason that you only roll out winning tests - no one ever rolls out a failing test. But test results have an element of randomness in them (and greater variance, because of the smaller sample). As a result, SOME tests will succeed for essentially random reasons, while some will fail for random reasons. But we don't roll out the random failures, only the random successes. So there is a much greater likelihood that a roll-out will under-perform a test than over-perform it.
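
A quick simulation makes that selection effect concrete. The sketch below is purely illustrative (the rates, visitor counts, and test counts are made up for the example): both branches share the same true conversion rate, yet the tests we would choose to roll out still look better than reality.

```python
# Illustrative only: simulate many A/B tests where the variation's true
# conversion rate EQUALS the control's, keep only the "winners", and compare
# their measured test-time rate to the true rate.
import random

random.seed(42)

TRUE_RATE = 0.01   # assumed true conversion rate for both branches
VISITORS = 5_000   # assumed visitors per branch in each test
N_TESTS = 2_000    # number of simulated experiments

rolled_out_rates = []
for _ in range(N_TESTS):
    control = sum(random.random() < TRUE_RATE for _ in range(VISITORS))
    variation = sum(random.random() < TRUE_RATE for _ in range(VISITORS))
    if variation > control:                       # we only roll out "winners"
        rolled_out_rates.append(variation / VISITORS)

avg_winner_rate = sum(rolled_out_rates) / len(rolled_out_rates)
print(f"True rate: {TRUE_RATE:.3%}")
print(f"Average test-time rate of rolled-out winners: {avg_winner_rate:.3%}")
# The winners' measured rate sits consistently above the true rate, so the
# roll-out "under-performs" the test even though nothing actually changed.
```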
adzeds 05-30-14
 

Re: Subsequent Tests don't have similar conversion rates, why?

This could be due to a number of factors. Is there any variance in your traffic now compared to your previous testing period, e.g. more PPC traffic then than now?

I would check the traffic sources first, as that is normally where you find the answer to this sort of problem. You could then look at seasonal traffic trends: are we in a lower-performing period of the month now, compared to a higher-performing part of the month when you ran your previous test?

David Shaw
Level 11
Kathryn 06-02-14
 

Re: Subsequent Tests don't have similar conversion rates, why?

Hi @EBTesting,

 

Whilst other commenters have offered useful suggestions, I'd also like to add that it's possible that the test was called before reaching statistical significance.

 

I'd like to direct you to this sample size calculator, which helps decide how many visitors should have been included in each branch (variation) of your test before achieving a conclusive result.

 

To break down the terms:

 

Baseline conversion rate is your conversion rate for the original page.

Minimum detectable effect is the % uplift you're seeing on the variation.

The calculator should also be set to "relative".

 

Once you've input the values, you'll be given the number of visitors required for each variation before the results can be considered statistically significant. If the test had not reached the recommended number of visitors by the time you launched the winner, this may be the reason behind the lower-than-expected conversion rate.
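
To make those terms concrete, here is a rough sketch of the standard two-proportion sample-size calculation that tools like this are typically based on (the calculator's exact formula isn't shown here, and the function name plus the 5% significance / 80% power defaults are just assumptions for the example).

```python
# Approximate visitors needed per branch of an A/B test, using the standard
# two-proportion sample-size formula with a RELATIVE minimum detectable effect.
from math import sqrt
from statistics import NormalDist

def visitors_per_variation(baseline_cr, relative_mde, alpha=0.05, power=0.80):
    """Rough estimate of visitors needed in EACH branch."""
    p1 = baseline_cr
    p2 = baseline_cr * (1 + relative_mde)          # expected rate on the variation
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)
    pooled = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * pooled * (1 - pooled))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p1 - p2) ** 2)
    return int(n) + 1

# Example: 2% baseline, hoping to detect a 20% relative lift
print(visitors_per_variation(0.02, 0.20))   # roughly 21,000 visitors per branch
```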

 

If you've any follow-up questions, just let me know.

Optimizely
zemaniac 06-04-14
 

Re: Subsequent Tests don't have similar conversion rates, why?


Your conversion rate was 2%. Good, but what about the error interval? If it was (2 ± 1)%, then you shouldn't be surprised at all. The conversion rate itself doesn't mean much if you don't keep an eye on its "error bars".
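
For example, a rough way to compute those error bars (a normal-approximation sketch; the conversion and visitor counts are purely illustrative):

```python
# 95% Wald-style confidence interval around an observed conversion rate.
from math import sqrt

def conversion_rate_ci(conversions, visitors, z=1.96):
    p = conversions / visitors
    margin = z * sqrt(p * (1 - p) / visitors)   # normal approximation
    return p - margin, p + margin

# e.g. 100 conversions out of 5,000 visitors -> 2.0% observed rate
low, high = conversion_rate_ci(100, 5_000)
print(f"{low:.2%} - {high:.2%}")   # roughly 1.6% - 2.4%
```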

Other answers also explain what you are experiencing. It's just statistics.

Alessio Romito
Project Manager at 21DIAMONDS GmbH
Level 2
EBTesting 06-09-14
 

Re: Subsequent Tests don't have similar conversion rates, why?

Thank you all so much, this definitely helps!

adzeds 06-09-14
 

Re: Subsequent Tests don't have similar conversion rates, why?

That's what Optiverse is here for, I guess!
David Shaw
Level 11