Pause a single variation
We have been running a blocker test for a month now and built an MVT based on two elements: button color (3 extra variations) and framework color (2 extra variations). So we have 12 single variations in total.
Now about 5 or 6 of the single variations are doing fine, but the other 6 or 7 aren't performing the way we want.
So we just want to continue with the well-performing variations. What's the best option here?
I already tried duplicating the old test and deleting the bad variations, but if I delete, for example, the "Green button", all variations with the green button are gone, while we only want to stop one of the variations that includes the green button.
Is this possible?
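(For anyone puzzling over the numbers: the 12 variations come from full-factorial crossing of the two sections. A minimal sketch, with placeholder color names, showing the count and why deleting one section value removes several combinations at once:)

```python
from itertools import product

# Hypothetical values: the original plus the extra variations described above
button_colors = ["original", "green", "blue", "red"]  # 1 original + 3 extra
framework_colors = ["original", "red", "dark"]        # 1 original + 2 extra

# Full-factorial crossing: every button color paired with every framework color
combos = list(product(button_colors, framework_colors))
print(len(combos))  # 12 combinations in total

# Deleting the "green" button value removes every combination containing it
without_green = [c for c in combos if c[0] != "green"]
print(len(without_green))  # 9 -- three combinations gone, not just one
```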
Could you not take those 5/6 variations that are doing well and create them as variations in an A/B test? It might be an extra bit of setup work, but it would allow you to focus your traffic on just those variations.
I'm afraid the extra bit of setup is necessary, as you said. I will start with this solution, but I hope Optimizely will consider adding the option to pause a single variation in an MVT. Then you would only have to duplicate and pause instead of building a whole new A/B test.
Thanks for your reply anyway!
You can't pause a single variation. As I said, if I want to pause the "green button / red framework" variation, I can't.
I can only pause all the green-button variations or all the red-framework variations.
Forgive me if I am misunderstanding the question in any way, but you can pause just one variation in an MVT, as @Aicke mentioned.
Simply click on "Options" > "Traffic Allocation" and you'll see a summary of all the different combinations. In the example screenshot, I am pausing just variation 1 within section 1.
In your example, you would just find the Green button within the relevant section and hit pause. Note that this will only apply to *new* users of your site. All visitors who originally saw that combination will continue to do so. To get around this, you'll need to pause the experiment, duplicate it, and start the new one with the adjusted traffic allocation.
Let me know if this answers your question!
I'm sorry, but I don't think that's right.
In your example, you're pausing variation 1 OF section 1, not a combination WITHIN it. Let's give the variations and sections names:
Section 1 = Button color
Section 2 = Button text
Variation 1 (Section 1) = green
Variation 1 (Section 2) = bold
So in your example, you're turning off all combinations with "Button color - Green" (Variation 1, Section 1), and all traffic will see the original button color. I can't set the single combination "green button + bold text" to 0%; it's either all combinations with the green button or all combinations with the bold text.
I'm looking forward to your reply.
Thanks for your help in this case.
EDIT: sorry, I see this solution was already discussed above.
I would say your suggested solution makes Optimizely more complex to use than it is helpful.
For your example, I think the solution would be to have 4 variations in a normal A/B test (in your case an A/B/C/D test) rather than a multivariate one:
Original: color original + weight original
Variant 1: color variant + weight original
Variant 2: color original + weight variant
Variant 3: color variant + weight variant
I used "original" and "variant" naming, to be flexible with the values, and I don't know your variant button color and original font weight.
You are correct in noting that pausing a specific MVT combination is not currently possible in Optimizely. Your recommendation to create a new test with the winning combination as the only variation and run 100% of your traffic to it is the best solution.
Director, Experience Optimization | BVAccel
You can read in the Optimizely documentation that:
"...you would funnel visitors to all possible combinations of these elements. This is also known as full factorial testing, ..."
What you are trying to do is not a full-factorial MVT, so you can't do it using Optimizely.
A humble piece of advice: the first thing one has to do in order to draw conclusions from an experiment is to design it from a statistical point of view. If you do so, you won't end up with questions that make no sense for your experiment.
I'm saying this because I'm noticing a few questions here that are related to this fact.
/Web (developer|analyst|CRO specialist)/g
I'm facing the same issue.
I'm seeing a particular variation that is significantly underperforming on one of the main goals I am tracking. My experiment has over 30 combinations, though.
I was hoping to dismiss the combinations that were not performing and end up with only the variations that were performing better.
I'm interested in hearing more about designing the experiment from a statistical point of view. Any advice or links to further info would be appreciated.