About once a month or so, someone asks questions about how to move all users into the winning variation.
Currently, when an experiment is over and it is time to declare a winner, we do the following to make everyone see the "winning" variation:
1- clone the experiment
2- in the clone, set the winning variation to 100%
3- pause the original
4- start the clone
5- wait for dev to integrate the winning experience into the site's baseline
6- pause the clone
I propose the creation of a "Declare Winner" option that would effectively accomplish steps 1, 2, 3, and 4.
It would set the winning variation to 100% and would force users bucketed into any other variation into the winner.
(There may be other considerations regarding dashboard indicators, etc.)
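The steps above (specifically 1 through 4) could be sketched as a small helper against a REST API. To be clear, the endpoint paths, payload fields, and the idea that a clone call returns a predictable id are all illustrative assumptions on my part, not the actual Optimizely API:

```python
# Hypothetical sketch of a "Declare Winner" helper built on a REST API.
# All endpoint paths and field names below are assumptions for illustration.

def declare_winner_calls(experiment_id, winning_variation_id):
    """Return the sequence of (method, path, payload) calls that would
    perform steps 1-4: clone the experiment, set the winning variation
    to 100% in the clone, pause the original, and start the clone."""
    # Assumption: the clone endpoint returns the new experiment's id;
    # a fake derived id stands in for it here.
    clone_id = f"{experiment_id}-clone"
    return [
        ("POST", f"/experiments/{experiment_id}/clone", {}),
        ("PUT", f"/experiments/{clone_id}/variations/{winning_variation_id}",
         {"traffic_allocation": 100}),
        ("PUT", f"/experiments/{experiment_id}", {"status": "paused"}),
        ("PUT", f"/experiments/{clone_id}", {"status": "running"}),
    ]
```

A single "Declare Winner" button in the UI would collapse those four calls into one action, leaving only steps 5 and 6 as manual work.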
We use the same analytics and heatmapping integrations for almost every test, and most of our tests use the same goals as well. It would make a lot of sense to have a "default settings" area that we could still adjust if needed.
We use several Google Analytics dimensions, but for every new experiment I first have to click through all the running experiments to find out which dimensions are not in use at the moment.
It would be awesome if Optimizely would show me the dimensions already occupied by my currently running experiments. This information is best shown in the modal where I have to choose the dimension for my new experiment.
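The lookup I do by hand is trivial to express in code once you have the list of running experiments and their slots. This is only a sketch of the logic: the data shape and the slot count are assumptions (GA's free tier has 20 custom dimension slots, but adjust to taste):

```python
# Illustrative sketch: given the GA custom dimension slot used by each
# running experiment, report which slots are free.
# The input shape and total_slots default are assumptions.

def free_dimensions(running_experiments, total_slots=20):
    """running_experiments: list of dicts like
    {"name": "homepage-hero", "ga_dimension": 3}."""
    used = {e["ga_dimension"] for e in running_experiments}
    return sorted(set(range(1, total_slots + 1)) - used)

experiments = [
    {"name": "homepage-hero", "ga_dimension": 1},
    {"name": "checkout-cta", "ga_dimension": 3},
]
print(free_dimensions(experiments, total_slots=5))  # → [2, 4, 5]
```

Showing exactly this output in the dimension-picker modal would save the click-through entirely.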
ok, I cross fingers that there is an empty spot on the roadmap for this feature ;-)
I'd like to see more ways of tracking goals. Currently the only two ways of tracking goals are users converted and revenue per user.
Some goal tracking scenarios I'd like to see:
- Events per user (How many purchases did each user make?)
- Events per session (How many product pages did a user view in one visit?)
- Numeric values for goals besides rev/user (Avg order value, number of items in cart, options selected, etc)
- Negative correlation (Which variation reduced errors the most? Minor issue; currently I just look for the biggest loser.)
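To make the distinction concrete, here is a toy event log showing how the requested metrics differ from plain "users converted". The field names are made up for illustration:

```python
# Sketch: the metrics above computed from a toy event log.
# Field names ("user", "session", "value") are illustrative assumptions.

events = [
    {"user": "u1", "session": "s1", "event": "purchase", "value": 30.0},
    {"user": "u1", "session": "s2", "event": "purchase", "value": 20.0},
    {"user": "u2", "session": "s3", "event": "purchase", "value": 50.0},
]

users = {e["user"] for e in events}
sessions = {e["session"] for e in events}

users_converted = len(users)                # today's metric: 2
events_per_user = len(events) / len(users)  # requested: 1.5 purchases/user
events_per_session = len(events) / len(sessions)  # requested: 1.0
avg_order_value = sum(e["value"] for e in events) / len(events)  # numeric goal
```

Note that "users converted" reports the same 2 whether u1 bought once or ten times, which is exactly the information events-per-user would surface.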
It would be great to be able to get the change log via the API.
Currently we do a software release every week, and I'd like to be able to include a list of experiments that were turned on/off during that period. Ideally the API would allow me to search the change logs filtering by change type (created/started/paused/etc.); project or all projects; experiment or all experiments; and date range.
I'd get back:
Change Type (reset results; paused; started; archived; unarchived; etc)
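To sketch the kind of request I have in mind: a single endpoint where every filter is optional. The endpoint path and parameter names below are entirely made up; this is a proposal, not an existing API:

```python
from urllib.parse import urlencode

# Hypothetical sketch of the proposed change-log query.
# The "/changelog" path and all parameter names are illustrative assumptions.

def changelog_url(base, change_type=None, project=None,
                  experiment=None, start=None, end=None):
    """Build a change-log query URL; omitted filters mean 'all'."""
    params = {k: v for k, v in {
        "change_type": change_type,  # created/started/paused/...
        "project": project,          # omit for all projects
        "experiment": experiment,    # omit for all experiments
        "start": start,              # date range start
        "end": end,                  # date range end
    }.items() if v is not None}
    return f"{base}/changelog?{urlencode(params)}"

# e.g. everything paused during one release week:
print(changelog_url("https://api.example.com/v1",
                    change_type="paused",
                    start="2014-06-01", end="2014-06-07"))
```

For the weekly release notes I'd call this twice, once with change_type="started" and once with change_type="paused", scoped to the release's date range.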