In the experiments list, it would be nice to have an expansion button to display the current percentages for an experiment's variants, and also to change them.
Current behavior of:
click on experiment -> scroll down to variants -> change percentage -> scroll to bottom -> save
is way too much work.
Optimizely has this Engagement goal set as a default goal in every experiment. It measures any click on any element. But it tells you nothing about the industry-standard bounce rate, and the two are quite different in nature. It is unthinkable for me not to measure bounce rate as a primary KPI for a landing page, which is exactly what Optimizely should be designed for. Engagement is such a vague term. Getting an accurate bounce rate during A/B tests is difficult and misleading, especially if you use redirects to other domains or subdomains.
Man, just give us the bounce rate.
One thing I feel is lacking (and sometimes frustrating) is the ability to look back at experiment variations.
After I have archived a completed experiment, I'll often want to reference it for ideas or learnings. The problem is that production code changes may have occurred since the experiment was turned off, and the variation design no longer renders correctly.
It would be nice if Optimizely could take a snapshot of the variation, just as a visual reference to the experiment's variations.
Allow For Full Raw Data Export
Currently, when we want to match 1-to-1 between Optimizely test results and our database to do additional analysis, we have to request the raw data file from our Optimizely Team. It would be great if I could export a .csv of the full visitor-level results and avoid this extra step in the future!
It would be great to be able to post notes regarding certain Goals or dates within the 'View Results' section. Currently, we are at the mercy of the Goal Name and other Optimizely terms. Being able to input notes/details about certain goals would help other team members understand what each change is affecting without having to figure it out in QA.
Also, the ability to add annotations would be useful, as certain events may need to be noted within the results (e.g., a release went out, reasons for traffic shifts, etc.).
It would be great to have an option to make experiments mutually exclusive without writing complex JS. A simple option within targeting would be helpful.
For example, if a user is entered into experiment 1, then don't allow the user into experiment 2. Or, if a user is included in any live experiments, then don't allow the user in the current experiment.
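Today this takes custom JS. A minimal sketch of what the built-in option could do under the hood, assuming deterministic hash-based bucketing (this is generic illustration, not an Optimizely API; all names here are hypothetical):

```javascript
// Sketch: mutually exclusive experiment bucketing via a deterministic hash.
// Each visitor lands in exactly one of N buckets, and an experiment only
// activates for visitors in its own bucket, so no visitor is in two experiments.

// Simple deterministic string hash (djb2 variant).
function hashVisitorId(visitorId) {
  let hash = 5381;
  for (let i = 0; i < visitorId.length; i++) {
    hash = ((hash * 33) + visitorId.charCodeAt(i)) >>> 0;
  }
  return hash;
}

// True if this visitor belongs to the given experiment's bucket.
function isEligible(visitorId, experimentIndex, totalExperiments) {
  return hashVisitorId(visitorId) % totalExperiments === experimentIndex;
}

// With two mutually exclusive experiments, a visitor is eligible for exactly one.
const id = 'visitor-123';
console.log(isEligible(id, 0, 2), isEligible(id, 1, 2)); // exactly one is true
```

Because the hash is deterministic, the same visitor always falls in the same bucket across page loads, which is what makes the exclusion stable without any server-side state.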
For me, the most painful day-in-day-out part of using Optimizely right now is waiting for the iframe of my site to load before I can begin using the editor. Would it be possible to lazy load the experiment page and make the test configuration options and variation code immediately available for interaction?
IMHO, this would be a tremendous improvement. Thanks!
It would be great if we could re-order the variations after adding them in the experiment.
Currently a new experiment has an "Original" and a "Variation 1" by default. As more variations are added, they are appended after the last variation tab.
When organizing the variations, it would be nice to be able to rearrange their order by clicking and dragging the tab to its new position - in the same way tabs can be moved around in Chrome.
See video demo here: http://screencast.com/t/umjf6RZ485
It would be great to be able to pick OR conditions between different types of targeting, for example, "visitors who come to this URL OR have this cookie". It would also be good to allow for AND conditions within a category, for example, "visitors who have the query parameter utm_source AND utm_campaign in the URL."
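The requested logic is just boolean composition over targeting conditions. A rough sketch of what it would evaluate to, with conditions as plain predicates over a visitor context (the condition names and the `/landing` URL are illustrative assumptions, not Optimizely features):

```javascript
// Sketch: OR between targeting categories, AND within a category.
// Conditions are plain predicates over a hypothetical visitor context.

const matchesUrl = (ctx) => ctx.url.startsWith('https://example.com/landing');
const hasCookie = (ctx) => Boolean(ctx.cookies['promo']);
const hasUtmSource = (ctx) => ctx.url.includes('utm_source=');
const hasUtmCampaign = (ctx) => ctx.url.includes('utm_campaign=');

const all = (...preds) => (ctx) => preds.every((p) => p(ctx)); // AND within a category
const any = (...preds) => (ctx) => preds.some((p) => p(ctx));  // OR across categories

// "visitors who come to this URL OR have this cookie":
const target = any(matchesUrl, hasCookie);
// "visitors who have utm_source AND utm_campaign in the URL":
const utmTarget = all(hasUtmSource, hasUtmCampaign);

const ctx = {
  url: 'https://example.com/landing?utm_source=news&utm_campaign=spring',
  cookies: {},
};
console.log(target(ctx), utmTarget(ctx)); // true true
```

A simple dropdown choosing `any`/`all` per group in the targeting UI would cover both examples above without requiring any custom code from the user.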
In Optimizely Classic, it was very easy to click to see total vs unique data on each metric in the results page. Now with Optimizely X, you either have to create two metrics (one unique and one total), or go back in and edit a metric to change it to total in order to see the total data. This is time-consuming and frustrating vs. the easy way this was done in Classic. Our experiments are unlike the 'standard' ones which only have a few metrics. We have many that are based on increasing engagement on the page, and as such we need to measure how many different things are affected or changing. We have lots of metrics we are tracking. And for many of these, if they are click-based, we need to see the totals. It takes a lot of time now to go in and pull this data.
It's frustrating to have to use a new email address for every Optimizely account you get added to. Ideally I would like to use my own work email address across every account. So when I log in with that email I see a list of the accounts I've been added to in a splash screen and can then dive into each of these accounts. Google Analytics has a similar screen on log in.
There are likely to be a number of use cases for being a user on multiple accounts:
- agency with multiple clients each with unique account
- multinational with different accounts/business units per market but centralised testing team
- strategy or development freelancer who does ad-hoc consulting on different accounts
I think it would be very helpful if there were a way to measure time on page for each variation. This would allow us to measure engagement in a different way, and it would be awesome if it could be measured directly through Optimizely.
For example, we have a better, more informative product page on our site that we are comparing against our standard page. If I could set time on page as a goal and measure it directly through Optimizely, that would save me needing to dig through GA or another analytics tool.
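Until such a goal exists, a rough client-side workaround looks like the sketch below. The tracking endpoint and event name are made up for illustration; only the timing helper and the standard browser APIs are real:

```javascript
// Sketch: measuring time on page yourself and reporting it on page exit.
// The '/track' endpoint and 'time_on_page' event name are hypothetical.

// Pure helper: whole seconds spent on page given enter/leave timestamps in ms.
function secondsOnPage(enteredAtMs, leftAtMs) {
  return Math.max(0, Math.round((leftAtMs - enteredAtMs) / 1000));
}

// In a browser you would capture the timestamps like this:
// const enteredAt = Date.now();
// window.addEventListener('beforeunload', () => {
//   const seconds = secondsOnPage(enteredAt, Date.now());
//   // sendBeacon survives page unload better than a normal XHR/fetch.
//   navigator.sendBeacon('/track', JSON.stringify({ event: 'time_on_page', seconds }));
// });

console.log(secondsOnPage(0, 95400)); // 95
```

Per-variation time on page would then just mean attaching the variation ID to that payload, which is exactly the bookkeeping it would be nice for Optimizely to do natively.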
It would be nice if the colors assigned to variations were consistent between goals on the results page. Here are some screenshots of 2 different goals for an experiment, demonstrating the current inconsistency:
On the first goal, the "Original" variation is orange, but on the second goal it is blue. These shots were taken after initial page load, no configuration changes or any other page manipulation was performed.
I could have sworn that these colors were consistent, but I had a client point this out to me today. I went back and looked at the results pages of several other experiments and, sure enough, the colors change from goal to goal. I guess I never noticed it because I'm usually looking at the numbers and the order of the variations is consistent, but this does appear to cause some confusion when doing "at-a-glance" monitoring or when using screenshots of goal graphs to assemble a report.
I just read your article on how long to run a test.
I think it would be very nice if the Experiment Edit page included a Hypothesis tab.
Then the editor would be able to write the hypothesis or give a short description of the experiment.
This feature would help with giving a better idea about the experiment, especially if a few people are working on it.
It would be useful to grab it later from the API as well.
Attached are 2 pictures of how it could look.