
Traffic allocation not working correctly

rodsacc 10-12-15


Hi,

I have three versions of a page in a split test. At first, each version received 33.33% of the traffic. A few days ago I paused one version, so the remaining two now each have a 50% allocation of traffic.

The problem is that the paused version is still getting traffic, and one of the versions that should be receiving 50% of the traffic is receiving none.

The test is a page-redirect test. Any help?


Re: Traffic allocation not working correctly

Hi there,

Thanks for posting to the community. Can you post a screenshot of the traffic allocation you have set up? Also, can you post a screenshot of your results page?

In this scenario, I may have to take a closer look at the experiment to see what is going on. Please either post your experiment ID here or send me a direct message with the information.

Thanks and looking forward to hearing from you!

Best,
Amy Herbertson
Customer Success
JDahlinANF 10-12-15

Re: Traffic allocation not working correctly

When you pause a variation, users already allocated to that variation stay in that variation.

New users are allocated 50-50 between the two remaining variations.
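That sticky-bucketing behavior can be sketched in a few lines. The hash-based assignment below is my own illustration, not Optimizely's actual implementation:

```python
import hashlib

def bucket(user_id: str, variations: list[str]) -> str:
    """Deterministically assign a user to one of the given variations."""
    h = int(hashlib.md5(user_id.encode()).hexdigest(), 16)
    return variations[h % len(variations)]

# Before the pause: three live variations, ~33% each.
live = ["A", "B", "C"]
assignments = {f"user{i}": bucket(f"user{i}", live) for i in range(1000)}

# After pausing "C": users already bucketed keep their old assignment;
# only users never seen before are split 50-50 between A and B.
live = ["A", "B"]

def assign(user_id: str) -> str:
    if user_id in assignments:      # sticky: already bucketed, even into "C"
        return assignments[user_id]
    return bucket(user_id, live)    # new traffic: 50-50 A/B

print(assign("user1"))      # may still be "C" if that user was bucketed there
print(assign("user9999"))   # a new user: only ever "A" or "B"
```

So the paused variation keeps receiving its returning visitors; only the split of new visitors changes.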

I'm not aware of an easy way to address this.

When we have to do this, we typically clone the experiment and start the new one fresh (the upside being that no one sees the now-obsolete variation; the downside being that some users who had one experience yesterday get a different experience today... but that is how it works when you launch a test anyway, so it's usually not a big deal).

If you know ahead of time that one variation may be paused, you can set up your experiment as a 6-variation experiment:

variations A-B: control

variations C-D: test 1

variations E-F: test 2

Suppose you want to pause "test 1"...

copy the code from the control into variation C

copy the code from test 2 into variation D

You'll end up with:

variations A-B-C: control

variations D-E-F: test 2
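The arithmetic behind this setup: each of the six variations gets 1/6 of traffic, so the merged groups stay evenly balanced before and after the repurposing. A quick check:

```python
from fractions import Fraction

share = Fraction(1, 6)  # each of the six variations gets an equal slice

# Before the pause: two variations per arm -> 1/3 each.
before = {arm: 2 * share for arm in ("control", "test 1", "test 2")}

# After repurposing C and D: three variations per remaining arm -> 1/2 each.
after = {"control": 3 * share, "test 2": 3 * share}

assert all(v == Fraction(1, 3) for v in before.values())
assert all(v == Fraction(1, 2) for v in after.values())
```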

 

This isn't the most elegant setup if you are using Optimizely's dashboard for your reporting: your audience is split across multiple variations, and with more than one variation running the exact same code, no single variation will ever reach statistical significance. But this is easy to overcome if you are using SiteCatalyst or another reporting system that lets you combine segments for analysis (e.g., "Visitors where s.prop1 = 'A or B or C'" vs. "Visitors where s.prop1 = 'D or E or F'").
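The merge step in an external reporting tool amounts to summing the per-variation counts before computing rates. A rough sketch with made-up numbers:

```python
# Per-variation results exported from a reporting tool (hypothetical numbers).
results = {
    "A": {"visitors": 1650, "conversions": 99},
    "B": {"visitors": 1700, "conversions": 104},
    "C": {"visitors": 1680, "conversions": 98},   # now also serving the control
    "D": {"visitors": 1640, "conversions": 131},  # now also serving test 2
    "E": {"visitors": 1710, "conversions": 140},
    "F": {"visitors": 1620, "conversions": 128},
}

# Combine the segments the same way the s.prop1 filters would.
segments = {"control": ["A", "B", "C"], "test 2": ["D", "E", "F"]}

for name, variations in segments.items():
    visitors = sum(results[v]["visitors"] for v in variations)
    conversions = sum(results[v]["conversions"] for v in variations)
    print(f"{name}: {conversions}/{visitors} = {conversions / visitors:.2%}")
```

Significance testing then runs on the two merged groups rather than on any single variation.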