
Issue in reporting with uneven traffic allocation?

Peter 12-07-15


I had an experiment running with a 50/50 traffic allocation. I later changed it to 75/25, and then to 99/1. Since then, the test keeps reporting increasingly positive results for my test variant.


*However*, this includes “statistically significant” positive results for a custom event that the experiment should have zero effect on. In fact, the change I’m testing just applies a class to the body tag, and the visual changes don’t appear in the user flow until after this event fires. This goal was partly meant as a sanity check: it should show no difference, while the other goals should.


This is the test: https://app.optimizely.com/results2?experiment_id=3595931812#


When I filter the date range down to Dec 3–Dec 7, it gets even more extreme: it states with >99% statistical significance that there’s a 310.2% improvement in the rate at which people open checkout.


Have any issues like this been identified before, or would you need more info to dig deeper? In the meantime I have cloned the experiment, paused the old one, and started the new one fresh at a 95/5 traffic allocation to see if the results look any different.


Re: Issue in reporting with uneven traffic allocation?

Hi @Peter,


Great question, and we do actually see this once in a while.  The main issue is that we don't recommend changing traffic allocation in the middle of a running experiment, because it can produce unexpected results.  Essentially, you're introducing new data that Stats Engine has to calculate in conjunction with previous data, and traffic allocation is one of the factors considered when evaluating goal/variation pairs.
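To make that concrete, here is a minimal sketch in plain Python (not Optimizely's actual Stats Engine, and the numbers are hypothetical) of how pooling data across an allocation change can manufacture a lift when the site-wide conversion rate drifts between periods:

```python
# Hypothetical illustration of Simpson's paradox under an allocation change:
# within each period, control and variation convert at exactly the same rate,
# yet the pooled rates suggest the variation is winning.

def pooled_rates(periods):
    """periods: list of (ctrl_visitors, ctrl_convs, var_visitors, var_convs)."""
    cv = sum(p[0] for p in periods)
    cc = sum(p[1] for p in periods)
    vv = sum(p[2] for p in periods)
    vc = sum(p[3] for p in periods)
    return cc / cv, vc / vv

periods = [
    (5000, 100, 5000, 100),  # 50/50 split, 2% conversion in both arms
    (100, 6, 9900, 594),     # 1/99 split during a 6%-conversion week
]

control_rate, variation_rate = pooled_rates(periods)
print(control_rate, variation_rate)  # roughly 0.021 vs 0.047
```

Neither arm ever outperformed the other within a period; the apparent ~124% lift comes entirely from the variation receiving 99% of its traffic during the higher-converting week.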


Another thing to keep in mind about drilling down into specific date ranges on the Results Page is that we recalculate results as if the experiment only ran in that window of time.  So, if you're drilling down into a period when traffic was split 99/1, there will certainly be a higher raw volume of visitors and conversions in the variation receiving 99% of the traffic.  However, Stats Engine establishes significance based on differences in conversion rate, not conversion volume, so the percentage of visitors who convert in each variation still needs to be higher due to a real difference for us to declare a winner.
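As a rough illustration (using a classic two-proportion z-test as a stand-in for Stats Engine's sequential methodology), identical conversion rates produce no significance no matter how lopsided the visitor counts are:

```python
# Sketch: significance depends on conversion *rates*, not raw conversion volume.
from math import sqrt, erf

def two_proportion_z(n1, c1, n2, c2):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p1, p2 = c1 / n1, c2 / n2
    p = (c1 + c2) / (n1 + n2)                 # pooled conversion rate
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 1/99 split, identical 3% conversion rate in both arms:
z, p_value = two_proportion_z(100, 3, 9900, 297)
print(z, p_value)  # z = 0.0, p_value = 1.0
```

The 99% arm has 99x the conversions, but with equal rates the test (correctly) reports no effect at all.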


Over the life of the experiment, we do see a 52% increase in the Open Checkout goal, so directionally at least, the larger improvement makes some sense.


Were there any external factors that drove additional traffic to your site and/or incentivized checkouts in the Dec 3-7 window?  Was there a marketing campaign, a new promotion, or urgency messaged elsewhere on the site?

Harrison Krat
Solutions Architect | Optimizely, Inc.
harrison@optimizely.com
Peter 12-15-15

Re: Issue in reporting with uneven traffic allocation?

Reviewing the project more, I had also changed the target URL for the experiment to target the first page of the checkout, which led all traffic (in both tests) to trigger the checkoutOpen event.


Might the change in the distribution of that event, combined with the change in traffic allocation, have led to broken results?


More generally, could changing traffic allocation alone lead to bad results in any experiment?


Re: Issue in reporting with uneven traffic allocation?

Very generally, yes and yes.


If you're looking for a better way to mark certain changes like these, our Enterprise plan offers the ability to annotate the Results Page.  This allows you to note the date of changes in your experiment design or external events that could contribute to peaks or valleys in your data.

Harrison Krat
Solutions Architect | Optimizely, Inc.
harrison@optimizely.com