This idea comes up quite often in the message boards...
"How do I push people into a specific variation of a specific experiment?"
The standard answer is:
a- you can't
b- to accomplish that, you need to run two separate experiments, each containing no alternate variations
This comes up short in the following ways:
1- This forces us to use Optimizely to split the traffic. Why can't we split the traffic on our own?
For example... split testing emails where a single cohort group receives either email A or email B. Links in the email contain different campaign codes and we'd like to track this as a single experiment so we can use Optimizely's dashboard for analysis
2- Presenting a consistent view to users who happen to cross domains or Projects. For example, one of the companies I used to work at had over 100 sites with different code bases but a shared (branded) checkout flow. When we tested changes to the branding on the main site, we wanted those changes reflected in the checkout flow too!
This should be as simple as changing this:
window.optimizely = window.optimizely || []; window.optimizely.push(["activate", EXPERIMENT_ID]);
to support this:
window.optimizely = window.optimizely || []; window.optimizely.push(["activate", EXPERIMENT_ID, VARIATION_ID]);
Also - this would give additional flexibility when Optimizely is being used for personalization instead of testing. Rather than forcing each personalization into a "separate experiment with no variations", I could have one "experiment" for each type of personalization that I want to run (for example, re-ordering the categories on the homepage based on the gender the user shops for). Variation A is for "Mens shoppers", Variation B is for "Womens shoppers" - that way my code for each gender is isolated but I can easily turn personalization on or off.
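The personalization routing described above could look something like this. A minimal sketch: the variation IDs, the pickVariation helper, and the three-argument "activate" form are all hypothetical, since the current API only accepts an experiment ID.

```javascript
// Hypothetical sketch of gender-based personalization routing.
// EXPERIMENT_ID and the variation IDs below are placeholders.
function pickVariation(gender) {
  // Variation A for "Mens shoppers", Variation B for "Womens shoppers".
  var variations = { mens: 'VARIATION_A_ID', womens: 'VARIATION_B_ID' };
  return variations[gender] || null;
}

var EXPERIMENT_ID = 1234567890; // placeholder
var variationId = pickVariation('womens');

if (typeof window !== 'undefined' && variationId) {
  window.optimizely = window.optimizely || [];
  // Proposed three-argument form: activate this experiment AND force
  // this visitor into a specific variation (not supported today).
  window.optimizely.push(['activate', EXPERIMENT_ID, variationId]);
}
```

Turning the personalization off would then just mean pausing the one "experiment", without touching the per-gender code paths.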
We are in the process of integrating Infusionsoft
into the email campaigns that we run, and the first question that came up was: is there any way to test emails?
I did some research and didn't find much information about this - not only about Infusionsoft and Optimizely, but about A/B testing for emails in general. I ran some tests on MailChimp in the past, but their A/B testing tools are very basic.
It would be great to be able to comment on the results page. Each comment would be a specific event that we can title, so it's not one long rambling list, and could explain events that might be happening. It would help give context to those we are sharing results with, or serve as a note to remind us when a change happened during a test.
This is based on a question I saw a user just submitted about changes. It would be nice to have version control over experiments. That way, if an unexpected error occurred, or an editor made a change that broke an experiment, one could roll back to an earlier version of the experiment instead of spending time figuring out what went wrong.
I miss seeing the experiment name in the browser tab. When I have several tests running at the same time, I keep each test's results open in a separate tab, and the title on the tab would tell me which test I was looking at. Now the tabs just say "Optimizely".
The page URL was available in Classic but not in X when downloading the raw results list. The closest thing we have is the page ID, but this is irrelevant if the page is set to the whole site.
I've taken over a project with a lot of redundant audiences and have to archive them one by one, which is time-consuming. It would be more convenient if there were checkboxes next to each audience and a button to archive them all at once.
It would be great to have a tab that listed all the tests at 100%. The current list setup makes it challenging to keep track of the experiments. Many times we need to see what is at 100% on that page before setting up a test. This would help manage and organize the list.
For me, the most painful day-in-day-out part of using Optimizely right now is waiting for the iframe of my site to load before I can begin using the editor. Would it be possible to lazy load the experiment page and make the test configuration options and variation code immediately available for interaction?
IMHO, this would be a tremendous improvement. Thanks!
A couple of improvements for List Targeting:
1- value in a cookie (vs value of a cookie)
-- cookies are a valuable commodity - we can't arbitrarily create new ones whenever we want, so instead I've started storing valuable information inside a JSON-formatted cookie. I'd like List Targeting to be able to detect a value like "ABC123" inside the cookie's JSON contents.
2- use localStorage and sessionStorage
-- similar to the above... I'd like to look for a value inside of localStorage or sessionStorage
window.localStorage.eid = 'ABC123'
This would drastically improve the flexibility of using List Targeting.
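A rough sketch of the kind of check we'd like List Targeting to perform. The cookie name visitor_data, the eid key, and both helpers are made-up names for illustration, and the cookie is assumed to hold URL-encoded JSON such as {"eid":"ABC123"}.

```javascript
// Read a cookie by name and parse its value as JSON; returns null if
// the cookie is missing or its contents aren't valid JSON.
function readJsonCookie(cookieString, name) {
  var match = cookieString.split('; ').filter(function (pair) {
    return pair.indexOf(name + '=') === 0;
  })[0];
  if (!match) return null;
  try {
    return JSON.parse(decodeURIComponent(match.slice(name.length + 1)));
  } catch (e) {
    return null;
  }
}

// Match if the value appears inside the JSON cookie OR in web storage
// (localStorage / sessionStorage would be passed in as `storage`).
function matchesList(cookieString, storage) {
  var data = readJsonCookie(cookieString, 'visitor_data');
  return (data && data.eid === 'ABC123') || storage.eid === 'ABC123';
}
```

In the browser this would be called as matchesList(document.cookie, window.localStorage), covering both improvement 1 and improvement 2 with the same lookup.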
To speed up our development cycle we push up experiments and variations through the API. That's all well and good for JS & CSS, but for images we have to go through a few more steps (adding the image to a variation and then getting the URL to use in our code). If there were an API way to add images to a variation, or even a simple UI to upload images to the CDN and get the URL, that would save some steps.
Or at least have the ability to turn this off. My 'campaign' list is littered with HUNDREDS of values and they aren't typically helpful as campaigns have really small data samples for us.
I would prefer if Optimizely used the utm_medium parameter instead of tracking utm_campaign. That way we're looking at the marketing-channel impact of a test, which matters because we can focus on segments with larger data pools.
It'd be great to have an easy way to measure impact of a section for MVT results. This is one of the purposes of MVT testing and would be very helpful. Right now, I can't see an easy way to get to a section impact even with playing with the drop-down menu for baseline.
On your Resources > MVT vs. A/B page, you discuss this regarding the advantages of MVT testing:
"Multivariate testing is a powerful way to help you target redesign efforts to the elements of your page where they will have the most impact. This is especially useful when designing landing page campaigns, for example, as the data about the impact of a certain element’s design can be applied to future campaigns, even if the context of the element has changed."
So, essentially, it would be nice to have an easy way to see the impact of a section so we can zero in on that area.
I attached a generic screenshot I found on Google Images which demonstrates it well. You can see an example of how impact is shown in the 2nd column for each of the sections.
Thanks so much!
Add the option for audiences to be joined by an AND instead of the current ANY (OR) for targeting.
By default, a visitor matching ANY of the audiences is included in the experiment. However, you may want to use multiple audiences together to avoid having to create weird hybrids.
Example: A campaign targeted to mobile phones
Audience 1: All mobile devices
Audience 2: query param = your campaign
If you had an "AND", we could use these two audiences (thereby being able to always have a reusable mobile device audience). However, at present, we would have to create a 3rd audience that has both the mobile targeting AND the query param.
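To illustrate the difference, here's a toy sketch of OR vs. AND audience matching. The audience predicates and the visitor shape are invented for illustration; this is not Optimizely's actual evaluation code.

```javascript
// Each audience is modeled as a predicate over a visitor.
var isMobile = function (visitor) {
  return visitor.deviceType === 'mobile';
};
var hasCampaignParam = function (visitor) {
  return visitor.query.utm_campaign === 'your_campaign';
};

// Current behavior: include the visitor if ANY audience matches (OR).
function matchesAny(audiences, visitor) {
  return audiences.some(function (a) { return a(visitor); });
}

// Proposed behavior: include the visitor only if ALL audiences match (AND).
function matchesAll(audiences, visitor) {
  return audiences.every(function (a) { return a(visitor); });
}
```

With matchesAll, the reusable "All mobile devices" audience and the campaign-parameter audience could be attached directly to the experiment, with no third hybrid audience needed.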
Hi there - I posted this in support, and at this stage it doesn't exist, so I was encouraged to add it as an idea.
We do almost all of our test analysis in Excel, so that we can add together only specific goals and calculate significance across multiple goals.
At the moment, this is a very time consuming process having to export the results from each test individually. Ideally, we would be able to do this for an entire project. That would make it easier for us to automate reporting and save a significant amount of time each week.
Not sure why this was removed. I used to be able to right-click on a test and bring up a number of actions. I probably defaulted to this functionality for all of my test actions and didn't use the buttons in the UI, but now you're forcing me to. And it seems like actions in the new UI are spread out all over the place.
Just bring back that simple dialog box please!
Also, what happened to delete? Some tests aren't worth archiving...
I manage multiple separate Optimizely accounts for each of my CRO clients. Whenever I get a notification email (e.g., a visitor overage alert) there's no way for me to tell which Optimizely account the overage is for, based on the content of the email alone. Same thing with billing invoices. I can eventually figure it out if I look into the account settings and line up the relevant data (visitor count, invoice date, etc.), but it would be really great if account names were placed in emails and on invoices.
In the Experiments dashboard, I would love some sort of visual cue/icon that signifies that there is a conclusive winner in an active Experiment. At the moment, drilling into each Experiment to see if there are any winners is kind of a pain, especially when I'm just doing a quick morning check.
I 'shopped a quick mockup of what I'm describing:
Something simple like that would be a nice little improvement for those of us running several active campaigns, who only really need to click into each one if there is a winner.
Maybe a phase 2 would be a modal pop-up when a user hovers over the icon, displaying some type of abridged results overview. Basically: how can we make the dashboard a better dashboard, so we don't need to click off the page to view a snapshot of the results?
Just my two cents for the moment - Thanks!
It would be very helpful if it would be possible to share the results link with a custom starting/ending date.
For example, it could record every change made to the GUI in the URL, which would let you share links and have people see exactly what you are looking at.
For now, whenever the link is shared, it only shows the standard date range (from when the experiment was started).
The range can be changed only after the link is loaded.
It looks like this:
This isn't a great solution, because if we want to share results and point out a specific period of time, we have to tell people to set the dates manually.
This is an example of what I have in mind:
For example, instead of 11th September, it would be 1st October.
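The idea boils down to encoding the visible date range into the shared results URL as query parameters. A sketch, with the caveat that the parameter names ("start" / "end") and the buildShareLink helper are hypothetical, not part of Optimizely's URL scheme:

```javascript
// Hypothetical: append the currently selected date range to a results
// URL so the recipient lands on exactly the period being discussed.
function buildShareLink(baseUrl, startDate, endDate) {
  var sep = baseUrl.indexOf('?') === -1 ? '?' : '&';
  return baseUrl + sep +
    'start=' + encodeURIComponent(startDate) +
    '&end=' + encodeURIComponent(endDate);
}
```

The results page would then read those parameters on load and apply the range, instead of always defaulting to the experiment's start date.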
I have a project specifically set up for the mobile version of a website, so it would be a small time-saver if, for every experiment I set up under that project, the editor defaulted to a mobile view.