Mutually Exclusive Experiments

Status: Done
by RyanPoznikoff
April 16, 2014 (edited April 16, 2014)

It would be great to have an option to make experiments mutually exclusive without writing complex JS.  A simple option within targeting would be helpful.  

 

For example, if a user is entered into experiment 1, then don't allow the user into experiment 2. Or, if a user is included in any live experiments, then don't allow the user into the current experiment.

Comments
by igerber
April 17, 2014

I've run into this so many times, I would love the feature as well.

by ShaneHale
April 21, 2014
I've run into this as well. Having a mutually exclusive option would be a nice feature. - shane
by zemaniac
April 22, 2014

THIS.

by tasin
April 24, 2014

I agree that this option should be available and easy to apply to experiments.

 

As far as I know, at the moment, the only way to do this is to use Custom JavaScript targeting, where you read the Optimizely cookie to check whether someone is part of an active experiment or not.
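The cookie check described above can be sketched as a small helper. This is a hypothetical illustration: the legacy cookie name `optimizelyBuckets` and its URL-encoded JSON payload are assumptions based on the classic snippet, so verify them against your own site's cookies.

```javascript
// Hypothetical sketch of the cookie check described above.
// Assumption: the classic snippet stores buckets in an "optimizelyBuckets"
// cookie whose value is a URL-encoded JSON map of {experimentId: variationId}.
function activeExperimentIds(cookieString) {
  var match = cookieString.match(/optimizelyBuckets=([^;]*)/);
  if (!match) return []; // cookie absent: visitor is in no experiment
  var payload = decodeURIComponent(match[1]);
  var ids = payload.match(/\d{6,}/g); // experiment/variation IDs are long numbers
  return ids || [];
}

// As a custom JS targeting condition, an expression like the one below would
// admit only visitors who are not yet in any experiment:
// activeExperimentIds(document.cookie).length === 0;
```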

 

Regards

Tasin

by Optimizely
April 25, 2014 (edited April 25, 2014)

This is a great product idea, but there are some edge cases that need to be considered.

 

Consider experiment A running on the homepage and Experiment B running on the entire site's navigation bar.

 

 

1) What if a user lands on the homepage and isn't in Experiment A or Experiment B? Should the tool choose one of the two mutually exclusive experiments at random?

 

2) What if a user lands on a product page and isn't in Experiment A or Experiment B? Should the tool automatically include them in Experiment B because they are not eligible for Experiment A? If the likelihood that visitors enter the site outside the homepage is high, then the traffic to Experiment A will be much lower than the traffic to Experiment B.

 

 

In summary, I think it's pretty straightforward to configure for experiments whose URL targeting conditions are the same, but it gets more complicated with experiments that target different parts of the site and have different likelihoods of being visited.
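One way a tool could implement the "choose at random" behavior from question 1 while keeping assignments stable is to hash the visitor ID once and map it to a single member of the exclusive set. This is only an illustrative sketch, not Optimizely's actual bucketing algorithm:

```javascript
// Illustrative only: NOT Optimizely's real bucketing algorithm.
// Hash the visitor ID once so the same visitor always lands in the
// same (single) member of a mutually exclusive set of experiments.
function exclusiveBucket(visitorId, experimentIds) {
  var hash = 0;
  for (var i = 0; i < visitorId.length; i++) {
    hash = (hash * 31 + visitorId.charCodeAt(i)) >>> 0; // simple 32-bit rolling hash
  }
  return experimentIds[hash % experimentIds.length];
}
```

Because the assignment is a pure function of the visitor ID, a visitor who first lands on a product page and later visits the homepage still gets the same single bucket. That addresses the consistency half of the problem, though not the uneven-traffic concern in question 2.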

 

 

 

by
April 28, 2014
Status changed to: New
 
by Alexis
April 29, 2014 (edited May 1, 2014)

If you have two experiments that you want to exclude from one another, you could try using the following in the custom JS targeting:

 

!window['optimizely'].variationMap.hasOwnProperty('ExperimentID');
 
Just drop in the ID of the experiment you want to exclude. The mutually exclusive JS documented in the support section is definitely useful (especially for excluding groups of experiments), but if you're looking for a simple exclusion of one experiment from another, give the above code a try. Hope that's helpful!
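If you need to exclude a visitor who is in any of several experiments (rather than just one), the same idea extends to a list of IDs. A minimal sketch, with placeholder IDs, assuming the classic variationMap shape used in the one-liner above:

```javascript
// Returns true only if the visitor is in none of the listed experiments.
// The variationMap shape ({experimentId: variationId}) follows the classic
// API used in the one-liner above; the IDs here are placeholders.
function notInAny(variationMap, excludedIds) {
  return excludedIds.every(function (id) {
    return !variationMap.hasOwnProperty(id);
  });
}

// As a targeting condition:
// notInAny(window['optimizely'].variationMap, ['111111111', '222222222']);
```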
by
May 15, 2014 (edited May 15, 2014)

This is a request I've had since our evaluation period with Optimizely nearly 2 years ago. I sympathize with their desire to prioritize feature requests based on demand. That said, I'm delighted to see this thread and would encourage all who read it to add a +1 comment if you want to see this as a built-in targeting option.

 

Meanwhile, as a counterpoint to the post from @Alexis, I'd like to offer a solution for those who wish to exclude ALL experiments from one another (as is the case with our organization).

 

Building off of the recommended JS for custom targeting criteria, I posed the question of whether or not this could be made more generic. In other words, could we not simply pull the list of active experiments from the Optimizely object rather than having to manually create an array in each experiment (then maintain those arrays when other experiments came online)?

 

With the aid of Optimizely support, I've refactored this code to sit in all experiments and automatically exclude all other active experiments. The 4 biggest considerations were:

 

  1. How do we ensure that we're only including running experiments in our random-distribution list? Otherwise, the code would treat every non-archived experiment ID as a potential assignment and significantly reduce the traffic allotment.
  2. What if we want to run the occasional "hotfix" experiment independent of all the normally exclusive ones?
  3. How do we accurately predict traffic percentages when we have multiple exclusive experiments?
  4. There's no way (yet) to make this code fully generic. We will always have to manually set the curExperiment ID for each test.

The answer to question 1 is to check typeof 'enabled' when building the experiment array; only running experiments have that property defined.

 

The answer to question 2 is simply not to include the custom targeting JS in the experiment that we want to run in tandem.

 

Although I haven't had time to prove or disprove my hypothesis, I believe that this solution will only work if the 'hotfix' experiment is the most recently created experiment. If true, this would mean that to keep it alive, when new exclusive experiments are introduced, we would have to duplicate the 'hotfix' experiment and restart it after all other exclusive experiments are created.


Question 3 becomes complicated because traffic allocation is all selected from the same percentage. In other words, if we have 3 experiments running at 20% traffic, that doesn't mean we'll be using 60% of our site traffic divided 3 ways. All 3 experiments will be sharing the same 20% of the site traffic.

[This last point is based on my calculations from a year ago and may no longer be the case; Optimizely may have changed how it calculates traffic percentages since then.]
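Assuming allocation still works as described above (one shared slice, split among the exclusive experiments rather than summed), the per-experiment share of total site traffic is just the slice divided by the experiment count:

```javascript
// Per-experiment share of TOTAL site traffic when N mutually exclusive
// experiments share one traffic slice (per the behavior described above,
// which may have changed in newer versions of Optimizely).
function sharedSlicePerExperiment(slicePct, experimentCount) {
  return slicePct / experimentCount;
}

// Three experiments sharing a 20% slice:
// sharedSlicePerExperiment(20, 3) → about 6.67% of total traffic each,
// not 20% each (and not 60% combined).
```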

 

The 4th consideration, that there is unfortunately no way to identify the experiment running the targeting evaluation, exists because this evaluation code runs only once for all active experiments and is therefore agnostic to any individual experiment. This means that we are still required to manually update the curExperiment ID value for each new exclusive experiment.

 

Anyway, here's the final code we use. I hope it's helpful:

 


var currExpArray = []; // IDs of CURRENTLY ACTIVE experiments
var optyExpArray = Object.keys(optimizely.allExperiments); // all experiments in the project
for (var i = 0; i < optyExpArray.length; i++) {
  if (typeof optimizely.allExperiments[optyExpArray[i]].enabled !== 'undefined') { // only running experiments have 'enabled'
    currExpArray.push(optyExpArray[i]);
  }
}

var curExperiment = "999999999"; // this experiment's ID; get it from the URL in the editor

var groupName = "__groupA";

if (!optimizely[groupName]) {
  var optlyCookie = document.cookie.match("optimizelyBuckets=([^;]*)");
  var regexMatch = currExpArray.join("|");
  optimizely[groupName] = (optlyCookie && optlyCookie[1].match(regexMatch)) ? optlyCookie[1].match(regexMatch)[0] : null;
}

// optimizely[groupName] now holds an ID from currExpArray (set here or by a
// previous experiment's targeting code), or it is still empty.
// If empty, choose one at random from currExpArray.
optimizely[groupName] = optimizely[groupName] || currExpArray[Math.floor(Math.random() * currExpArray.length)];

// The targeting condition passes only when this experiment is the chosen one.
(optimizely[groupName] == curExperiment);

 

 

by
May 15, 2014

@james 

 

Wouldn't using groupName in the originally posted custom JS be a way to selectively exclude experiments that target separate pages?

by HeatherW
June 2, 2014
Status changed to: Great Idea!
 
by timoschca
July 24, 2014

Any progress on this subject?

by
May 11, 2015

@timoschca

Hopefully, you've found this on your own already, but in case you haven't:

 

There is still no built-in option to simply mark an experiment as exclusive or inclusive. However, there have been some strides in that direction. 

 

Optimizely now supports project-level JavaScript (on certain Enterprise plans).

See the "Option 2" section of this page for instructions on how to use this feature to render experiments mutually exclusive.

 

by Optimizely
August 8, 2017
Status changed to: New

We're thrilled to roll out out-of-the-box mutually exclusive experiments in Optimizely X! Learn more about them here!

by
December 19, 2017

@Jason-GSell

This is definitely a giant step in the right direction. 

Thank you and the team for pulling this together. 

 

Advanced Exclusion

As a follow-up, I would also request the ability to exclude experiments individually. In other words, to mark an experiment such that it is exclusive from all others, regardless of exclusion groups.

The problem with this recently released solution is that if we have just one experiment that is guaranteed not to play well with others, the only way to isolate it is to put all running/active experiments into one exclusion group. This forces the traffic slicing onto all running experiments and is ultimately impractical for larger scale testing projects. 

 

A possible implementation of this would be to either add a setting on exclusion groups to make them 'universally exclusive' or, perhaps, for each project to have one system defined universal exclusion group. In this case, "Universal Exclusion" would mean that experiments in this group could not overlap with any other experiment whether in or out of any other exclusion group. 

 

I realize this complicates the matter of traffic allocation. In theory, it really just adds a layer of allocation by which an exclusion group would be given its own traffic percentage setting. 

 

Example: Universal Exclusion Group [UEG] traffic setting is: 50%. 

This means that all experiments or exclusion groups not in the UEG would have the remaining 50% of traffic from which to draw.

 

In this scenario,

  • An experiment set for 100% of traffic would really be getting 50% of Total traffic. 
  • An experiment set for 50% of traffic would be getting 50% of 50% = 25% of Total traffic. 
  • A non-universal exclusion group with 5 experiments, each set to 20% would actually be splitting 50% of traffic resulting in 10% of Total Traffic per experiment.

In practice, this is really not as confusing as it seems. The calculations to determine "Actual Traffic Percentage" are fairly straightforward.
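The "Actual Traffic Percentage" arithmetic in the bullets above reduces to a single multiplication. The function below is a sketch of that calculation for the proposed feature, not an Optimizely API:

```javascript
// Actual share of total site traffic for an experiment, given a proposed
// Universal Exclusion Group (UEG) that reserves uegPct of traffic for its
// members. Illustration of the proposal above; not an Optimizely API.
function actualTrafficPct(uegPct, inUeg, experimentPct) {
  var layerPct = inUeg ? uegPct : 100 - uegPct;
  return layerPct * experimentPct / 100;
}

// With a 50% UEG, for experiments OUTSIDE the group:
// actualTrafficPct(50, false, 100) → 50  (bullet 1)
// actualTrafficPct(50, false, 50)  → 25  (bullet 2)
// actualTrafficPct(50, false, 20)  → 10  (bullet 3, per experiment in the group of 5)
```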

by Optimizely
January 4, 2018
Status changed to: Done
 
by Optimizely
January 4, 2018

Hi @cubelodyte

 

Thanks for your suggestion! I see how this would be a valuable enhancement. I've shared this with our product team who oversees mutually exclusive experiments. I'll update you if they decide to move forward with it.
