I'd like the option to export the data as a clean, presentable PDF. This would serve two purposes. First, sharing the information with less screen-savvy people. Second, the PDF can double as a file backup of testing results.
As a bonus, PDFs are editable in tools like Adobe Illustrator, so I could tweak them for a prettier presentation.
It would be great to have a tab that lists all the tests running at 100%. The current list setup makes it challenging to keep track of the experiments; we often need to see what is at 100% on that page before setting up a test. This would help manage and organize the list.
When I look at the home screen and results on an iPad (landscape orientation), the page does not fit the screen and I have to scroll horizontally. Even when I do, I am not able to see the sidebar with the experiment details.
Supposedly, this should be an easy fix with media queries.
If I am correct, users are unbucketed from a variation of an experiment when the experiment is 'paused'.
The problem is that when you restart the experiment, they can end up in a different variant than the one they were originally in, making the experiment invalid.
So my proposal is to keep users bucketed in the same variant even when an experiment is paused and restarted, and only unbucket users once the experiment gets archived.
(see info in ticket Request #99731).
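To make the proposal concrete, here is a minimal sketch of the kind of sticky bucketing I have in mind, assuming assignment is a deterministic hash of user ID and experiment ID (the function and IDs below are hypothetical illustrations, not Optimizely's actual implementation):

```python
import hashlib

def bucket(user_id, experiment_id, variants):
    """Deterministically assign a user to one of `variants`.

    Because the result depends only on user_id and experiment_id,
    a user who returns after the experiment is paused and restarted
    lands in the same variant as before. Archiving the experiment
    would simply stop calling this function, unbucketing everyone.
    """
    digest = hashlib.sha256(
        "{}:{}".format(experiment_id, user_id).encode()
    ).hexdigest()
    return variants[int(digest, 16) % len(variants)]
```

Since the assignment is a pure function of the two IDs, a pause/restart cycle cannot shuffle returning users into a different variant.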
At the moment it is not possible to delete projects.
Once set up, we cannot remove them from our dashboard.
Please add an easy way to remove obsolete, unwanted, or unused projects
from our account.
I'm at the point with some of my clients where I have 50+ goals set up for them. It gets somewhat tedious having to scroll through that big list every time I want to add a saved goal to a new experiment. It would be really nice if the "Saved Goals" selection dialog had a text box that filtered the list by goal name as you type. Also, maybe checkboxes/tabs for each of the goal types (pageview, click, event).
Maybe this has been said before.
An option to disable the proxy by default for all of a project's experiments, instead of having to keep adding "&optimizely_disable_proxy=true" to the end of the URL of the Experiment Editing Page.
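In the meantime we script the manual step; a tiny helper like this (just a sketch of our workaround, not an Optimizely API) appends the parameter while preserving any existing query string:

```python
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def disable_proxy(url):
    """Return `url` with optimizely_disable_proxy=true appended,
    keeping any query parameters that are already present."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query["optimizely_disable_proxy"] = "true"
    return urlunparse(parts._replace(query=urlencode(query)))
```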
A lot of my tests use the same page URL targeting options, but I have to keep copying them into each test.
It would be good if I could create URL targeting groups/conditions that I could apply to a test with one click, rather than having to copy all the URLs into each test.
Not sure if this is something that others may also find useful.
To speed up our development cycle we push experiments and variations up through the API. That's all well and good for JS & CSS, but for images we have to go through a few more steps (adding the image to a variation and then getting the URL to use in our code). If there were an API way to add images to a variation, or even a simple UI to upload images to the CDN and get the URL, that would save some steps.
I installed the v1.0.1 Optimizely WordPress Plugin, and in the settings it says:
Optimizely project code
You can find your project code on your project's experiments page. Go to optimizely.com/experiments, make sure you've selected the right project and click on <Project Code>, then click on 'Copy to Clipboard'. You can then paste the code in the box below. Your project code should start with "<script" and end with "</script>".
It would be nice if that were updated to reflect the current location of the project code: My Web Projects > Settings > Implementation.
It'd be great to have an easy way to measure the impact of a section in MVT results. This is one of the purposes of MVT testing and would be very helpful. Right now, I can't see an easy way to get a section's impact, even after playing with the drop-down menu for the baseline.
On your Resources > MVT vs. A/B page, you discuss this regarding the advantages of MVT testing:
"Multivariate testing is a powerful way to help you target redesign efforts to the elements of your page where they will have the most impact. This is especially useful when designing landing page campaigns, for example, as the data about the impact of a certain element’s design can be applied to future campaigns, even if the context of the element has changed."
So, essentially, it would be nice to have an easy way to see the impact of a section so we can zero in on that area.
I attached a generic screenshot I found on Google Images which demonstrates it well. You can see an example of how impact is shown in the 2nd column for each of the sections.
Thanks so much!
It would be great to be able to get the change log through the API.
Currently we do a software release every week, and I'd like to be able to include a list of experiments that were turned on/off during that period. Ideally the API would allow me to search the change logs, filtering by change type (created/started/paused/etc.); project or all projects; experiment or all experiments; and date range.
I'd get back:
Change Type (reset results; paused; started; archived; unarchived; etc)
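To make the request concrete, here is a sketch of the filtering I'd want the endpoint to support, written as a plain function over hypothetical change-log records (the field names here are my own invention, not Optimizely's schema):

```python
from datetime import date

def filter_changes(changes, change_types=None, project_id=None,
                   start=None, end=None):
    """Filter hypothetical change-log records.

    Each record is a dict like:
      {"change_type": "started", "project_id": 1,
       "experiment_id": 7, "date": date(2014, 5, 2)}
    Any filter left as None is skipped.
    """
    out = []
    for c in changes:
        if change_types and c["change_type"] not in change_types:
            continue
        if project_id is not None and c["project_id"] != project_id:
            continue
        if start and c["date"] < start:
            continue
        if end and c["date"] > end:
            continue
        out.append(c)
    return out
```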
Hi there - I posted this in support, and at this stage it doesn't exist, so I was encouraged to add it as an idea.
We do almost all of our test analysis in Excel, so that we can add together only specific goals and calculate significance across multiple goals.
At the moment this is a very time-consuming process, as we have to export the results from each test individually. Ideally, we would be able to do this for an entire project. That would make it easier for us to automate reporting and save a significant amount of time each week.
It's a generally accepted consequence that major updates/enhancements open the door to more requests. Even when everything is moving in a positive direction, getting what we want inescapably leads to wanting more.
The current presentation of the Results page incorporates so many things I've requested over the years that I must pause first to acknowledge what a great advancement this page is over its former incarnation.
That said, my very first attempt at creating Custom Views immediately inspired a new feature request:
The ability to create a Custom View based on an existing Custom View.
The moment I used my new Custom View, I went to the center drop down and realized I'd love to see this view with all of the Goals set to "Chance to Beat Baseline" and be able to quickly switch between the states. I knew that I'd have to start all over from scratch and set:
- The beginning date
- Each Goal I wanted to track
- Reorder them, because I intuitively expect each additional Goal to be added at the bottom of the list, but they are added to the top
- Choose my segmentation
then, once I had an exact replica of the first view, add the graph view setting to each goal.
So, of course, I began looking for a way to copy the current Custom View so I could modify it rather than go through all the work to create it again. Alas, no such function was found.
No "Save As New".
No "Save as Template".
No "Create New From".
It's worth noting that this request is about more than convenience*.
The most significant consideration when dealing with repetitive activity is the fact that manual operations such as this are prone to error and oversight. One can be much more confident that a set of identical views with one variation has been successfully generated if all the common elements can be recreated automatically.
* "If Necessity is the mother of invention, then Laziness is the father."
- Thomas A. Fischer
At the moment it looks like it's only possible to launch the winning variation with the highest conversion rate of all the variations.
When we segment by mobile/non-mobile, or weight different conversions and actions, sometimes we would like to launch a different variation from the one presented by Optimizely.
At the moment the workaround seems to be to clone the experiment and fiddle with the traffic allocation, but it would be much simpler to be able to launch the variation that we choose.
Should possibly be labelled a bug...
We are currently running an experiment with a large number of variations (11 in total), and I've noticed that the results page does an ajax-style load of some of the variations as the page is scrolled.
This is great when the content is browsed interactively, but when I export to CSV for processing in Excel, I have to scroll through the whole page first; otherwise the data for all the variations is not included in the CSV output.
We now have the Annotation feature on the results page. I use it to document changes in the test and external factors like marketing events. Often, I want to see the impact of these changes on the test. It would be nice if we could select the annotation dates as the start/end dates in the Date Range selector. Right now I have to do this manually, which is quite painful.
In addition, it would be nice if the interface could remember my Date Range selection (via query string or custom view).