Simultaneous Testing - Running two different tests at the same time
This is a hot topic at our organization. My understanding, and our IT team's, is that when a visitor is subject to two different split tests, the results for each test end up skewed. This article (https://help.optimizely.com/hc/en-us/articles/200064329-Simultaneous-Testing-Running-two-different-t... explains that with enough volume, Optimizely did not see the issues I'm referring to, but still: is this a good practice? Will this really let us test more, or just make us collect more data so we can be more certain of the results we see for both tests?
It just seems wrong to me.
We made the decision that the benefit of a high testing velocity was worth the risk of having a few conflicts. But we are trying to avoid conflicts between A/B tests as much as possible.
For example, I run several experiments at the same time on the same page, but only A/B tests on different elements of the page (one around price display and one around reviews), and never two independent experiments on the same element at the same time (like two A/B tests for the header).
If you feel that experiments might interfere with each other, I would recommend running a multivariate test (MVT) instead, or waiting and running them one after the other.
Hi @Pianist718 ,
We're asked this question a lot. There are certainly differing opinions, but the advice we usually give our customers is that it's OK to run several tests simultaneously. The basic idea is that even if one test's variation is having an impact on another test's variations, the effect is spread proportionally across all the variations in the latter test, and therefore the results should not be materially affected.
Let's walk through a hypothetical, but very real-world example:
Homepage Test: You're running two versions of a promotional banner's text, let's call them "Variation A" and "Variation B."
Product Page Test: You're running two versions of the Add-to-Cart button, let's call them "Variation C" and "Variation D."
In the homepage test, Variation B is significantly outperforming Variation A. There's a good chance that even in this case, where there's a big difference between variations, it won't affect user behavior on the product page (and if it does, the effect is likely to be very minimal). But for the sake of argument, let's assume there IS something in Variation B on the homepage that affects user behavior on the product page, and that effect favors Variation C in the product page test. Even then, Variation C (and Variation D, for that matter) is still getting an even mix of visitors who saw Variation A and visitors who saw Variation B in the homepage test.
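The argument above hinges on independent bucketing: because each test assigns visitors at random, every product-page variation ends up with roughly the same mix of homepage variations, so any cross-test influence cancels out in the comparison. Here's a minimal simulation sketching that idea (the visitor count and variation names are hypothetical, not from the article):

```python
import random

random.seed(42)

# Simulate visitors who are independently bucketed into two simultaneous
# tests: a homepage banner test (A vs B) and a product-page button test (C vs D).
n = 100_000
counts = {("A", "C"): 0, ("A", "D"): 0, ("B", "C"): 0, ("B", "D"): 0}
for _ in range(n):
    home = random.choice(["A", "B"])     # homepage test assignment
    product = random.choice(["C", "D"])  # product-page test assignment
    counts[(home, product)] += 1

# Because the two assignments are independent, each product-page variation
# sees roughly the same share (~50%) of homepage-B visitors, so any effect
# B has on downstream behavior hits C and D equally.
share_b_given_c = counts[("B", "C")] / (counts[("A", "C")] + counts[("B", "C")])
share_b_given_d = counts[("B", "D")] / (counts[("A", "D")] + counts[("B", "D")])
print(round(share_b_given_c, 3), round(share_b_given_d, 3))
```

With enough volume both shares converge to 0.5, which is the "proportional effect" the article relies on.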
This pretty much sums up the content in the Help Center article you referenced, but I'll add that from a practical perspective, all of the clients I have worked with over more than four years of testing run tests simultaneously.
Let us know if you have other questions!
Great response. We have reached the same conclusion ourselves. One additional thought: if the first test's winning variation has a very strong positive effect, it can influence the difference in conversion rates in the latter test, because visitors who see the winning variation of the first test are positively primed and may go through the second treatment without caring which variation they see.
In this case, the latter treatment just needs more time to run in order to reach statistical significance.
But this carries no risk of calling the wrong variation the winner, as the influence is always in the same direction.
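One way to see why the second test may need more time: if the first test lifts conversion for everyone, absolute rates rise while the gap between variations stays the same, and higher rates mean higher variance, which pushes up the sample size needed for significance. A rough sketch using the standard two-proportion sample-size approximation (all conversion rates here are hypothetical):

```python
from math import ceil

def visitors_per_variation(p_control, p_variant, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variation to detect the difference
    between two conversion rates at ~95% confidence and ~80% power,
    using the standard two-proportion formula."""
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p_control - p_variant) ** 2)

# Baseline scenario for the second test: 5.0% vs 5.5% conversion.
base = visitors_per_variation(0.050, 0.055)

# If the first test's winner lifts everyone by one point (6.0% vs 6.5%),
# the same absolute gap sits on a higher, noisier base rate, so more
# visitors are needed to reach significance.
lifted = visitors_per_variation(0.060, 0.065)

print(base, lifted)
```

The direction of the winner is unchanged in both scenarios; only the required traffic grows, which matches the "needs more time, not a different conclusion" point above.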