Or at least have the ability to turn this off. My 'campaign' list is littered with HUNDREDS of values and they aren't typically helpful as campaigns have really small data samples for us.
I would prefer if Optimizely used the utm_medium parameter instead of tracking utm_campaign. That way we're looking at the marketing-channel impact of a test, which matters because we can focus on segments with larger data pools.
Not sure why this was removed. I used to be able to right-click on a test and bring up a number of actions. I probably defaulted to that functionality for all of my test actions and didn't use the buttons in the UI, but now you're forcing me to. Not sure why. And it seems like actions in the new UI are spread out all over the place.
Just bring back that simple dialog box please!
Also, what happened to delete? Some tests aren't worth archiving...
In the Experiments dashboard, I would love some sort of visual cue/icon that signifies that there is a conclusive winner in an active Experiment. At the moment, drilling into each Experiment to see if there are any winners is kind of a pain, especially when I'm just doing a quick morning check.
I 'shopped a quick mockup of what I'm describing:
Something simple like that would be a nice little improvement for those of us running several active campaigns who only really need to click into each one if there is a winner.
Maybe a Phase-2 would be a modal window pop-up if a user hovers their mouse over the icon, which would then display some type of abridged results overview. Basically, how can we get the dashboard to be a better dashboard so we don't need to click off the page to view a snapshot of the results.
Just my two cents for the moment - Thanks!
I have a project set up specifically for a mobile version of a website, so it would be a small time saver if, for every experiment I set up under that project, the editor defaulted to a mobile view.
I think it would be valuable to support multiple revenue goals. That way you could track revenue deltas by product type, etc. It would make a huge difference for companies like ours: some of our product lines cost over $1,000 and others typically cost under $100, and to get the best test results I need to measure their performance separately to maintain some semblance of a normal distribution and not get horribly skewed by outliers.
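For illustration, here's roughly how we fire the single revenue goal today (assuming I'm remembering the classic trackEvent revenue call correctly; value in cents, and orderTotalCents is just a placeholder), versus the kind of per-product-line split I'd like. The extra event names are made up:

window.optimizely = window.optimizely || [];
// today: one revenue goal catches every purchase, big-ticket or small
window.optimizely.push(["trackEvent", "purchaseComplete", { revenue: orderTotalCents }]);
// what I'd like: separate revenue goals per product line (hypothetical event names)
// window.optimizely.push(["trackEvent", "purchaseHighTicket", { revenue: orderTotalCents }]);
// window.optimizely.push(["trackEvent", "purchaseLowTicket", { revenue: orderTotalCents }]);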
I would like to be notified via email when an experiment has found a winning variation, as an out of the box feature. It would be helpful when I have several tests running at once, so I don't have to check each one manually all the time. Thanks, Billy
Could there be either a confirmation pop-up once you've agreed to 'launch' the winning variation, or could the results page change to reflect the launch so that we know it's been actioned? An additional change that would be useful: if you hit the 'launch' button again, a message appears to tell you that this has already been actioned; otherwise it just lets the user keep hitting 'launch' as if it didn't work the first time.
One of our tests has just won against the control, so I've utilised the new 'launch' button on the results page to send 100% of traffic to the variation, but I can't see whether anything has changed.
There's no pop-up message to tell me that the launch has been actioned, and the results page looks exactly the same so without just leaving it for a few hours to see if the traffic migrates, I'll have no way of knowing whether 100% of traffic is now going to the winner.
This idea comes up quite often in the message boards...
"How do I push people into a specific variation of a specific experiment?"
The standard answer is:
a- you can't
b- to accomplish that, you need to run two different experiments that contain no alternate variations
This comes up short in the following ways:
1- This forces us to use Optimizely to split the traffic. Why can't we split the traffic on our own?
For example... split testing emails where a single cohort group receives either email A or email B. Links in the email contain different campaign codes and we'd like to track this as a single experiment so we can use Optimizely's dashboard for analysis
2- Presenting a consistent view to users who happen to cross domains or Projects. For example, one of the companies I used to work at had over 100 sites with different code bases but a shared, common (branded) checkout flow. When we tested changes to the branding on the main site, we wanted those changes reflected in the checkout flow too!
This should be as simple as changing this:
window.optimizely = window.optimizely || []; window.optimizely.push(["activate", EXPERIMENT_ID]);
to support this:
window.optimizely = window.optimizely || []; window.optimizely.push(["activate", EXPERIMENT_ID, VARIATION_ID]);
Also - this would give additional flexibility for when Optimizely is being used for personalization instead of testing. Rather than forcing each personalization to be a "separate experiment with no variations", I could have one "experiment" for each type of personalization that I want to run (for example, re-ordering the categories on the homepage based on the gender the user shops for). Variation A is for "Mens shoppers", Variation B is for "Womens shoppers" - that way my code for each gender is isolated but I can easily turn personalization on or off.
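To make the personalization case concrete, here's a rough sketch of how I'd imagine calling the proposed API (the third argument is the hypothetical part; shopperGender and the ID placeholders are just for illustration):

window.optimizely = window.optimizely || [];
// hypothetical: force this visitor into a specific variation of the experiment
var variationId = (shopperGender === "womens") ? WOMENS_VARIATION_ID : MENS_VARIATION_ID;
window.optimizely.push(["activate", EXPERIMENT_ID, variationId]);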
We are in the process of integrating Infusionsoft into the email campaigns that we run, and the first question that came up was: is there any way to test emails?
I did some research and did not find much information about this - not just about Infusionsoft and Optimizely, I didn't find much info about A/B testing for emails in general. I ran some tests in MailChimp in the past, but their A/B testing tools are very basic.
It would be great to be able to comment on the results page. Each comment would be a specific event that we can title, so it's not one long rambling list, and could explain things that were happening at the time. It would help give context to the people we share results with, or serve as a reminder of when a change happened during a test.
This is based on a question I saw a user just submit about changes. It would be nice to have version control over experiments. That way, if an unexpected error occurred, or an editor made a change that broke an experiment, one could go back to an earlier version of the experiment instead of spending time working out what went wrong.
I miss seeing the experiment name in the browser tab. When having several tests running at the same time, I would have each test results opened up in a separate tab and the title on the tab would tell me what test I would be looking for. Now the tabs just say "Optimizely".
The page URL was available in Classic but not in X when downloading the raw list of results. The closest thing we have is the page id, but this is irrelevant if the page is set to the whole site.
I've taken over a project with a lot of redundant audiences and have to archive them one by one, which is time-consuming. It would be more convenient if there were checkboxes next to each audience and a button to archive them all at once.
It would be great to have a tab that listed all the tests at 100%. The current list setup makes it challenging to keep track of the experiments. Many times we need to see what is at 100% on that page before setting up a test. This would help manage and organize the list.
To speed up our development cycle we push experiments and variations up through the API. That's all well and good for JS & CSS, but for images we have to go through a few more steps (adding the image to a variation and then getting the URL to use in our code). If there were an API way to add images to a variation, or even a simple UI to upload images to the CDN and get the URL, that would save some steps.
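Something along these lines is what I'm imagining - a purely hypothetical endpoint, sketched only to show the shape of the request (the /images path, the cdn_url field, and the variable names are made up):

// hypothetical upload call - no such endpoint exists today
var form = new FormData();
form.append("file", imageFile); // the image we want hosted on the CDN
fetch("https://api.optimizely.com/v2/projects/" + projectId + "/images", {
  method: "POST",
  headers: { "Authorization": "Bearer " + apiToken },
  body: form
})
  .then(function (res) { return res.json(); })
  .then(function (data) {
    console.log(data.cdn_url); // drop this URL straight into the variation code
  });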
It'd be great to have an easy way to measure the impact of a section in MVT results. This is one of the purposes of MVT testing and would be very helpful. Right now, I can't see an easy way to get to a section's impact, even after playing with the drop-down menu for baseline.
On your Resources > MVT vs. A/B page, you say this regarding the advantages of MVT testing:
"Multivariate testing is a powerful way to help you target redesign efforts to the elements of your page where they will have the most impact. This is especially useful when designing landing page campaigns, for example, as the data about the impact of a certain element’s design can be applied to future campaigns, even if the context of the element has changed."
So, essentially, it would be nice to have an easy way to see the impact of a section so we can zone in on that area.
I attached a generic screenshot I found on Google Images which demonstrates it well. You can see an example of how impact is shown in the 2nd column for each of the sections.
Thanks so much!
Hi there - I posted this in support, and at this stage it doesn't exist, so I was encouraged to add it as an idea.
We do almost all of our test analysis in excel, so that we can add together only specific goals and calculate significance across multiple goals.
At the moment this is a very time-consuming process, since we have to export the results from each test individually. Ideally, we would be able to do this for an entire project. That would make it easier for us to automate reporting and save a significant amount of time each week.
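For context, this is the kind of arithmetic we end up doing in the spreadsheet - a plain two-proportion z-test on conversions and visitors summed across only the goals we care about (our real weighting is a bit more involved; this is just a sketch):

// convA/visA = summed conversions/visitors for control, convB/visB for the variation
function zScore(convA, visA, convB, visB) {
  var pA = convA / visA;
  var pB = convB / visB;
  var pPool = (convA + convB) / (visA + visB);
  var se = Math.sqrt(pPool * (1 - pPool) * (1 / visA + 1 / visB));
  return (pB - pA) / se; // |z| above roughly 1.96 means ~95% confidence, two-sided
}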
At the moment it looks like it's only possible to launch the winning variation with the highest conversion rate of all the variations.
When we segment by mobile/non-mobile, or weight different conversions and actions, sometimes we would like to launch a different variation from the one presented by Optimizely.
At the moment the workaround seems to be to clone the experiment and fiddle with the traffic allocation, but it would be much simpler to be able to launch the variation that we choose.
Should possibly be labelled a bug...
We are currently running an experiment with a large number of variations (11 in total), and I've noticed that the results page does an ajax-style load of some of the variations as the page is scrolled.
This is great when the content is browsed interactively, but when I export to CSV for processing in Excel, I have to scroll through the whole page first, otherwise the data for all the variations is not included in the CSV output.
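As a stop-gap, I've been scrolling to the bottom before exporting; something like this pasted into the browser console forces the lazy-loaded rows to render first (just a workaround, not a fix):

// keep scrolling in steps so each ajax-loaded block gets a chance to render
(function scrollDown() {
  window.scrollBy(0, 1000);
  if (window.innerHeight + window.scrollY < document.body.scrollHeight) {
    setTimeout(scrollDown, 500);
  }
})();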