
Multivariate Experiments - How much is too much

pocketderm 05-02-15


Hi Community,

 

I've just joined the Optiverse, so please be kind. I'm looking to run a multivariate test with copy and imagery variations, and I wanted to get everyone's feedback: how much is too much? I'm planning to run an experiment with 4 sections and 3 to 4 variations on each. In your experience, is that too many combinations to clearly choose a "winner"?

 

Any and all feedback would be greatly appreciated.

 

Thanks!

td_evans 05-02-15
 

Re: Multivariate Experiments - How much is too much

Depends on your traffic size and conversion rates, but that's up to 256 combinations in total (4 sections with up to 4 variations each gives 4^4 full-factorial combinations). Choosing a statistically significant winning combination will be difficult unless you're running a lot of traffic through it.
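As a rough sanity check on the combination count (assuming a full-factorial test, with the section and variation counts taken from the original question):

```python
# Back-of-the-envelope count of full-factorial MVT combinations.
# Assumption: 4 sections with 3 or 4 variations each, as described
# in the original question.

def total_combinations(variations_per_section):
    """Full-factorial MVT: combinations = product of options per section."""
    total = 1
    for n in variations_per_section:
        total *= n
    return total

low = total_combinations([3, 3, 3, 3])    # 3 variations in every section
high = total_combinations([4, 4, 4, 4])   # 4 variations in every section
print(low, high)  # 81 256
```

Every one of those combinations gets its own slice of your traffic, which is why the traffic requirement grows so quickly.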

Thomas Evans
Technical Product Manager - Conversion @ Secret Escapes

Mike_in_SF 05-04-15
 

Re: Multivariate Experiments - How much is too much

Ditto to TD's comment above.

 

You can use Optimizely's sample size calculator to get an idea of how much traffic each variation would need to see before you could declare a winner at 95%+ statistical confidence.

 

If you're interested in statistical strategy and philosophy, I suggest reading Avinash Kaushik's post on statistical significance in analytics.

- Mike

Re: Multivariate Experiments - How much is too much


Hi pocketderm,

As the others said, you're going to need a lot of traffic to get anywhere with that many variations. We recently ran an MVT with around that many combinations, and after 250k visitors we hadn't reached confidence on any of them. We could see a significant lift for certain element variations and identify trends around which elements were making the most impact, but the results for individual combinations didn't reach confidence. Had we let it run longer it eventually would have, but we needed to pull it to run other things.
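The arithmetic here is sobering: in a full-factorial test the traffic splits across every combination. The 250k figure is from our test; the 256-combination count is an illustrative assumption (4 sections with 4 variations each):

```python
# Why 250k visitors may not be enough: a full-factorial MVT spreads
# traffic evenly across every combination.
visitors = 250_000
combinations = 256          # illustrative: 4 sections x 4 variations = 4**4
per_combination = visitors // combinations
print(per_combination)  # 976
```

Under a thousand visitors per combination is nowhere near what a low-single-digit conversion rate needs to reach confidence.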

The other thing about MVTs in Optimizely is that there isn't a clear way to determine which element of an MVT contributed the most to the lift you're seeing for a given variation. Say one combination produced a big lift: it would be helpful to know which element within that combination (the copy, the image, the color, etc.) contributed the most. I believe some testing tools provide that kind of data; Optimizely doesn't, as far as I know. If you know your way around statistics, though, you can work it out on your own.
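One way to work it out by hand is a simple main-effects analysis: average the conversion rate over all combinations that share a given element. The data below is entirely made up for illustration, and this sketch ignores interaction effects between elements:

```python
# Hand-rolled main-effects estimate for MVT results.
# results maps each combination (one element choice per section)
# to (visitors, conversions). All numbers are illustrative.
from collections import defaultdict

results = {
    ("headline_A", "image_1"): (1000, 30),
    ("headline_A", "image_2"): (1000, 34),
    ("headline_B", "image_1"): (1000, 41),
    ("headline_B", "image_2"): (1000, 45),
}

def main_effects(results):
    """Pooled conversion rate per element, averaged across combinations."""
    visits = defaultdict(int)
    convs = defaultdict(int)
    for combo, (v, c) in results.items():
        for element in combo:
            visits[element] += v
            convs[element] += c
    return {e: convs[e] / visits[e] for e in visits}

effects = main_effects(results)
# Here headline_B beats headline_A regardless of image, and by a wider
# margin than image_2 beats image_1 -- the headline is driving the lift.
print(effects)
```

With real data you'd still want a significance test on each element-level comparison before trusting it, but this shows where the lift is coming from.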

 

Here's a sample size calculator that lets you factor in the number of combinations: https://vwo.com/ab-split-test-duration/

Hope that helps,

- Brian