Optimizely sets an Engagement goal as a default in every experiment; it measures any click on any element. Beyond that, you have no view of the industry-standard bounce rate, and the two metrics are quite different in nature. For me it is unthinkable not to measure bounce rate as a primary KPI for a landing page, which is exactly what Optimizely should be designed for. "Engagement" is such a vague term. And getting an accurate bounce rate during A/B tests is difficult and often misleading, especially if you use redirects to other domains or subdomains.
Man, just give us the bounce rate.
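In the meantime, the closest workaround I've found is to approximate bounce rate with a custom event goal. Here's a minimal sketch, assuming Optimizely Classic's trackEvent API; the cookie name and event name are my own placeholders:

```js
// Fire a "non_bounce" custom event on the second pageview of a session.
// The conversion rate on that goal is then (1 - bounce rate) per variation.
(function () {
  var match = document.cookie.match(/(?:^|;\s*)ox_pageviews=(\d+)/);
  var views = match ? parseInt(match[1], 10) + 1 : 1;
  document.cookie = 'ox_pageviews=' + views + '; path=/'; // session cookie

  if (views === 2) {
    window['optimizely'] = window['optimizely'] || [];
    window['optimizely'].push(['trackEvent', 'non_bounce']);
  }
})();
```

Of course this breaks down exactly where it hurts most: redirects to other domains or subdomains lose the cookie, so a native bounce rate metric would still be far better.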
One thing I feel is lacking (and sometimes frustrating) is the ability to look back at experiment variations.
After I have archived a completed experiment, I'll often want to reference it for ideas or learnings. The problem is that production code changes may have occurred since the experiment was turned off, and the variation design no longer renders correctly.
It would be nice if Optimizely could take a snapshot of each variation, just as a visual reference to the experiment's variations.
Allow For Full Raw Data Export
Currently, when we want to match 1-to-1 between Optimizely test results and our database to do additional analysis, we have to request the raw data file from our Optimizely Team. It would be great if I could export a .csv of the full visitor-level results and avoid this extra step in the future!
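To illustrate what that export would unlock: the join against our own data becomes a one-off script instead of a support request. This is only a sketch with a hypothetical schema (visitor_id, variation, converted columns); the real raw data file may look different:

```js
// Join a hypothetical visitor-level CSV export against our own user records.
var fs = require('fs');

var rows = fs.readFileSync('optimizely_export.csv', 'utf8')
  .trim().split('\n').slice(1) // drop the header row
  .map(function (line) {
    var cols = line.split(',');
    return { visitorId: cols[0], variation: cols[1], converted: cols[2] === '1' };
  });

// our_users.json is assumed to map visitor IDs to internal records
var ourUsers = JSON.parse(fs.readFileSync('our_users.json', 'utf8'));
rows.forEach(function (row) {
  var user = ourUsers[row.visitorId];
  if (user) {
    console.log(row.visitorId, row.variation, row.converted, user.lifetimeValue);
  }
});
```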
It would be great to be able to post notes regarding certain Goals or dates within the 'View Results' section. Currently, we are at the mercy of the Goal Name and other Optimizely terms. Being able to input notes/details about certain goals would help other team members understand what each change is affecting without having to figure it out in QA.
Also, it would help to be able to add annotations, as certain events may need to be noted within the results (e.g., a release went out, reasons for traffic shifts, etc.).
It would be great to have an option to make experiments mutually exclusive without writing complex JS. A simple option within targeting would be helpful.
For example, if a user is entered into experiment 1, then don't allow the user into experiment 2. Or, if a user is included in any live experiments, then don't allow the user in the current experiment.
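For reference, here is the kind of JS we end up writing today: a sticky random bucket assigned in project code, with each experiment then targeting one bucket via a cookie condition. A minimal sketch (the cookie name is arbitrary):

```js
// Assign each new visitor to bucket "A" or "B" once, and persist it.
// Experiment 1 then targets exp_bucket=A, experiment 2 targets exp_bucket=B,
// so no visitor can be in both.
(function () {
  if (!document.cookie.match(/(?:^|;\s*)exp_bucket=/)) {
    var bucket = Math.random() < 0.5 ? 'A' : 'B';
    var expires = new Date(Date.now() + 90 * 24 * 60 * 60 * 1000); // 90 days
    document.cookie = 'exp_bucket=' + bucket +
      '; path=/; expires=' + expires.toUTCString();
  }
})();
```

A built-in targeting option would replace all of this with a checkbox.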
It would be great if we could re-order the variations after adding them in the experiment.
Currently a new experiment has an "Original" and a "Variation 1" by default. As more variations are added, they are appended after the last variation tab.
When organizing the variations, it would be nice to be able to rearrange their order by clicking and dragging a tab to its new position, in the same way tabs can be moved around in Chrome.
See video demo here: http://screencast.com/t/umjf6RZ485
For me, the most painful day-to-day part of using Optimizely right now is waiting for the iframe of my site to load before I can begin using the editor. Would it be possible to lazy load the experiment page and make the test configuration options and variation code immediately available for interaction?
IMHO, this would be a tremendous improvement. Thanks!
It would be great to be able to pick OR conditions between different types of targeting, for example, "visitors who come to this URL OR have this cookie". It would also be good to allow for AND conditions within a category, for example, "visitors who have the query parameter utm_source AND utm_campaign in the URL."
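Today the only way I know to get these combinations is a custom JavaScript condition. A sketch of what that looks like, assuming the condition just has to evaluate to true or false (the URL, cookie name, and parameters are examples only):

```js
// OR across two different condition types: this URL OR this cookie.
var matchesUrl = /\/landing-page\//.test(location.href);
var hasCookie  = /(?:^|;\s*)promo_seen=/.test(document.cookie);

// AND within one category: both query parameters must be present.
var hasSource   = /[?&]utm_source=/.test(location.search);
var hasCampaign = /[?&]utm_campaign=/.test(location.search);

// Evaluate whichever grouping the audience needs, e.g.:
(matchesUrl || hasCookie) && (hasSource && hasCampaign);
```

Native OR/AND controls in the targeting UI would make this hand-rolled logic unnecessary.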
I think it would be very helpful if there were a way to measure time on page for each variation. This would allow us to measure engagement in a different way, and it would be awesome if it could be measured directly through Optimizely.
For example, we have a better, more informative product page on our site that we are comparing against our standard page. If I could set time on page as a goal and measure it directly through Optimizely, that would save me needing to dig through GA or another analytics tool.
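Until then, the workaround I can think of is to treat "stayed N seconds" as a custom event goal. A minimal sketch, assuming Optimizely Classic's trackEvent API (the 30-second threshold and event name are arbitrary):

```js
// Fire a custom event once the visitor has been on the page for 30 seconds.
// The goal's conversion rate then approximates "% of visitors staying 30s+".
setTimeout(function () {
  window['optimizely'] = window['optimizely'] || [];
  window['optimizely'].push(['trackEvent', 'time_on_page_30s']);
}, 30 * 1000);
```

That only gives a threshold, not an actual average time on page, which is why native support would be so valuable.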
It's frustrating to have to use a new email address for every Optimizely account you get added to. Ideally, I would like to use my own work email address across every account, so when I log in with that email I see a splash screen listing the accounts I've been added to and can then dive into each of them. Google Analytics has a similar screen on login.
There are likely to be a number of use cases for being a user on multiple accounts:
- agency with multiple clients each with unique account
- multinational with different accounts/business units per market but centralised testing team
- strategy or development freelancer who does ad-hoc consulting on different accounts
It would be nice if the colors assigned to variations were consistent between goals on the results page. Here are some screenshots of 2 different goals for an experiment, demonstrating the current inconsistency:
On the first goal, the "Original" variation is orange, but on the second goal it is blue. These shots were taken after initial page load, no configuration changes or any other page manipulation was performed.
I could have sworn that these colors were consistent, but I had a client point this out to me today. I went back and looked at the results pages of several other experiments and, sure enough, the colors change from goal to goal. I guess I never noticed because I'm usually looking at the numbers and the order of the variations is consistent, but this does cause some confusion when doing "at-a-glance" monitoring or when using screenshots of goal graphs to assemble a report.
I just read your article on how long to run a test.
About once a month or so, someone asks questions about how to move all users into the winning variation.
Currently, when an experiment is over and it is time to declare a winner, to make everyone see the "winning" variation, we do the following:
1- clone the experiment
2- in the clone, set the winning variation to 100%
3- pause the original
4- start the clone
5- wait for dev to integrate the winning experience into the site's baseline
6- pause the clone
I propose the creation of a "Declare Winner" option that would effectively accomplish steps 1, 2, 3, and 4.
It would set the winning variation to 100% and would force users bucketed into any other variation into the winner.
(There may be other considerations regarding dashboard indicators, etc.)
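To make the idea concrete, here is a rough sketch of what "Declare Winner" might do via the REST API. The endpoint path, the Token header, and the 0-10000 weight scale are my reading of the v1 Experiment API and should be treated as assumptions; the IDs are placeholders:

```js
// Route 100% of traffic to the winning variation by rewriting weights.
var API = 'https://www.optimizelyapis.com/experiment/v1';
var headers = { 'Token': 'YOUR_API_TOKEN', 'Content-Type': 'application/json' };
var WINNING_VARIATION_ID = 111111111; // placeholder
var LOSING_VARIATION_ID  = 222222222; // placeholder

function setWeight(variationId, weight) {
  return fetch(API + '/variations/' + variationId, {
    method: 'PUT',
    headers: headers,
    body: JSON.stringify({ weight: weight }) // weights sum to 10000 (100%)
  });
}

Promise.all([
  setWeight(WINNING_VARIATION_ID, 10000),
  setWeight(LOSING_VARIATION_ID, 0)
]).then(function () {
  console.log('All traffic now routed to the winning variation');
});
```

Note that weight changes alone don't re-bucket visitors who already saw a losing variation, which is exactly why we clone the experiment today, and why a first-class feature would need to handle the forced re-bucketing described above.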
I think it would be very nice if the Experiment Edit page included a Hypothesis tab.
Then the editor would be able to write the hypothesis or give a short description of the experiment.
This feature would help give a better idea of what the experiment is about, especially if a few people are working on it.
It would be useful to grab it later from the API as well.
Attached are two pictures of how it could look.