I usually mention someone in a reply to a comment, since many people turn off a lot of notifications. It looks like the dropdown to @ mention someone shows up in a post, but it doesn't seem to come up in comment responses. Would be a great feature add.
First, I have to say I love the tool; I have found only one issue. I've already sent feedback about it, but since it's not fixed I'll add some pressure and share my idea with everyone.
IP filtering is very frustrating: it is not an easy or quick thing to do, especially when we could have about 50-100 addresses to filter, and the 500-character limit is also going to be an issue for us. Couldn't you just provide X number of fields (like in Excel) into which we could copy and paste our IP addresses and forget those regular expressions (or at least not have to use them by default)? Like this:
Add IP to filter:
+ Click this to add a new IP to the filter.
Or an easier and faster way would be to just import a .csv file with all the IP addresses and have Optimizely automatically add them to IP filtering. And make it possible to add at least 100-200 IP addresses to exclude.
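A minimal sketch of what that import could look like (the one-address-per-line CSV shape and the example addresses are my assumptions, not anything Optimizely supports today):

```javascript
// Sketch: turn a pasted .csv of IP addresses into a filter list,
// no regular expressions required. Assumes one address per line.
const csv = `203.0.113.5
203.0.113.6
198.51.100.20`;

const ipFilter = csv
  .split('\n')
  .map(line => line.trim())
  .filter(line => line.length > 0); // skip blank lines

console.log(ipFilter.length); // 3
console.log(ipFilter[0]);     // "203.0.113.5"
```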
The change history function is awesome, but I would love to be able to add my own notes/annotations to specific events. It gives details for some things (e.g., adding a click goal) but not others. I'm not sure what constitutes additional info, but there are a bunch of line items that just say "updated" with no details. The notes section in the experiment details pane is supposed to be for test hypotheses, etc., but I've started using it for change-control notes.
Even though I find myself on the Results page almost every day, it would be great to have a "Notifications Center" where I could create automatic alerts when my tests pass certain thresholds. A bonus would be making text alerts an option as well.
I frequently re-use code pieces when setting up experiments, and I'm sure many other people do as well. I often find myself wanting to grab some code that I've written for an older, completed experiment but am unable to because the page that the experiment runs on no longer loads in the editor. Maybe the page is gone, maybe the Optimizely project code was removed from it, etc. I'd like to be able to get at the variation code in that situation.
I know I could maybe do a workaround by putting a different page into the editor, but even that won't work if the project code is no longer on the page.
The current results view can show graphs of several kinds of data. I am interested in Unique Conversions and Visitors. The charts show the number of conversions or visitors at an instant in time, so the total number of conversions or visitors is the area under the curve. There is also a feature where the time range of the data can be selected by clicking and dragging horizontally in the chart (the zoom feature).
So the feature I am interested in is the ability to get the absolute counts of conversions and/or visitors over a range of time. For instance, if I care about 9:00am to noon, I could easily get the number of Unique Conversions and the number of Visitors during that period. The use case is that I would like to compare conversion counts and visitor counts against data in our database or Omniture for the same time range while the test runs; basically, I want to spot-check whether my database and/or Omniture and/or other data sources agree with the tests.
The only way I know of to do this now is to use the nightly data available to enterprise customers. That is a heavy solution for a simple requirement. Also, I can only get data for previous days; I can't look at data for this morning, because the data is aggregated by day and written at midnight.
One way to provide absolute counts would be to add two new views to the results; Total Unique Conversions and Total Visitors. These curves would start as zero when the test was launched and then go up to the right as conversions and visitors are registered. With this feature, I could move my mouse on the chart to the start of the range I am interested in and hover over the chart to get my starting value for the range. I would then do the same at the end of my range to get the ending value. Very manual, but doable. It would seem like the results page must already have the necessary underlying data available to it in order to show the Unique Conversions and Visitors views that it already shows.
Another way to handle this is to show the absolute numbers for whatever range is showing on the current Unique Conversions and Visitors views, perhaps in a box in the chart area, or perhaps by changing (or adding to) the data in the tables below the chart so that they correspond to the selected range. This would work well since those tables already include absolute counts. It could be a checkbox or pulldown that turns the feature on for a given table.
This kind of feature would help us understand if our tests are tracking as we expect by allowing us to spot check against other datasources while the test runs.
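To make the request concrete, here is a rough sketch of the calculation I'd like the results page to do for me (the data shape and all names here are my assumptions, not Optimizely's internals):

```javascript
// Given per-interval data points like those behind the Unique
// Conversions chart, sum the counts whose timestamps fall inside a
// selected range. This is the number the results page could surface
// directly for any zoomed-in window.
function totalInRange(points, startMs, endMs) {
  return points
    .filter(p => p.t >= startMs && p.t < endMs)
    .reduce((sum, p) => sum + p.count, 0);
}

// e.g. hourly conversion counts, asking about 9:00am to noon
const points = [
  { t: Date.UTC(2015, 0, 5, 8),  count: 12 },
  { t: Date.UTC(2015, 0, 5, 9),  count: 20 },
  { t: Date.UTC(2015, 0, 5, 10), count: 17 },
  { t: Date.UTC(2015, 0, 5, 11), count: 25 },
  { t: Date.UTC(2015, 0, 5, 12), count: 9 },
];
const total = totalInRange(points,
  Date.UTC(2015, 0, 5, 9), Date.UTC(2015, 0, 5, 12));
console.log(total); // 62
```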
It would be very nice if you could reward bug hunters who report bugs on your website with some swag, or some sort of appreciation greeting card from your company signed by your officials.
Currently, when we have a test, we initially run a soft launch at 90% (original) / 10% (test) for a day, and if no issues occur we fully launch the test at 50%/50%. Between the soft launch and the full launch we have to clone the soft-launch test to avoid the visitor-bucketing issue.
It would be nice to see this addressed so we don't end up with many similar archived tests: e.g., a dev version, a soft-launch version, a full version, and then the winner version until the developers implement it directly on the site.
Maybe a "Reset visitors" option every time we adjust the traffic allocation?
We use Adobe Analytics as our general analytics suite. One of its most powerful features is the custom segmenting ability and conversion variable mappings (eVars).
Since we've already built out a number of segments there, it would be very powerful to have Optimizely hook into it via Genesis (Data Connectors) to let me target and build audiences around those segments/dimensions.
We always use the same analytics and heatmapping integrations for almost every test. Most of our tests use the same goals as well. It would make a lot of sense to have a "default settings" area that we could still adjust if needed.
If I understand correctly, users are unbucketed from a variation of an experiment when the experiment is paused.
The problem is that when you restart the experiment, they can get bucketed into a different variation than the one they were originally in, making the experiment invalid.
So my proposal is to keep users bucketed in the same variation even when the experiment is paused and restarted, and to un-bucket users only once the experiment gets archived.
(see info in ticket Request #99731).
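A sketch of the behavior I'm proposing (the storage key and functions are my own illustration, not Optimizely's actual implementation):

```javascript
// Remember a visitor's variation and reuse it after a pause/restart;
// only archiving the experiment clears the assignment.
const store = {}; // stand-in for a cookie or localStorage

function getVariation(expId, variations) {
  const key = 'bucket_' + expId;
  if (!(key in store)) {
    // first visit: bucket randomly and remember the choice
    store[key] = variations[Math.floor(Math.random() * variations.length)];
  }
  return store[key];
}

function archiveExperiment(expId) {
  delete store['bucket_' + expId]; // only archiving un-buckets
}

const first = getVariation('exp_1', ['original', 'variation_1']);
// ...experiment paused and restarted...
const second = getVariation('exp_1', ['original', 'variation_1']);
console.log(first === second); // true: same bucket after restart
```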
I have a personal account with Optimizely that I enjoy using for dummy experiments or showcasing Optimizely to people/groups that I work with that might be interested in the product. I also use Optimizely for work. It would be great to have a way to navigate between multiple accounts so that I don't necessarily have to log in or out of one to access the other.
Optimizely sets an Engagement goal as a default in every experiment; it measures any click on any element. Beyond that, you have no view of the industry-standard bounce rate, and the two metrics are quite different in nature. It is unthinkable for me not to measure bounce rate as a primary KPI for a landing page, which is exactly what Optimizely should be designed for. Engagement is such a vague term, and getting an accurate bounce rate during A/B tests is difficult and misleading, especially if you use redirects to other domains or subdomains.
Man, just give us the bounce rate.
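To be clear about the distinction, here is a minimal sketch (my own bounce definition and data shape, purely illustrative):

```javascript
// A "bounce" here = a session with exactly one pageview and no
// interaction. Contrast with the Engagement goal, which counts a
// session as engaged on any click anywhere.
function bounceRate(sessions) {
  const bounces = sessions.filter(
    s => s.pageviews === 1 && !s.interacted
  ).length;
  return bounces / sessions.length;
}

const sessions = [
  { pageviews: 1, interacted: false }, // bounce
  { pageviews: 3, interacted: true },
  { pageviews: 1, interacted: true },  // engaged, but not a multi-page visit
  { pageviews: 2, interacted: false },
];
console.log(bounceRate(sessions)); // 0.25
```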
I just read your article on how long to run a test.
It would be great to have an option to make experiments mutually exclusive without writing complex JS. A simple option within targeting would be helpful.
For example, if a user is entered into experiment 1, then don't allow the user into experiment 2. Or, if a user is included in any live experiments, then don't allow the user in the current experiment.
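Something like the following logic, but built into targeting so we don't have to write it ourselves (all names here are illustrative, and the in-memory set stands in for however membership is actually tracked):

```javascript
// Skip activation if the visitor is already in a conflicting
// experiment.
const activeExperiments = new Set();

function activate(expId, excludeIfIn) {
  const conflict = excludeIfIn.some(id => activeExperiments.has(id));
  if (conflict) return false;      // mutually exclusive: stay out
  activeExperiments.add(expId);
  return true;
}

console.log(activate('exp_1', []));        // true: entered
console.log(activate('exp_2', ['exp_1'])); // false: excluded by exp_1
console.log(activate('exp_3', ['exp_9'])); // true: no conflict
```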
A couple of improvements for List Targeting:
1- value in a cookie (vs value of a cookie)
-- Cookies are a valuable commodity; we cannot arbitrarily create new ones whenever we want, so instead I've started storing valuable information inside a JSON-formatted cookie. In the following example, I'd like List Targeting to be able to detect "ABC123" inside the cookie, where the contents are:
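For illustration (the key names are my assumption, mirroring the eid key used in the localStorage example in point 2), such a JSON cookie value might be:

```javascript
// Hypothetical JSON-formatted cookie value; "ABC123" is the value
// List Targeting should be able to detect inside it.
const cookieValue = JSON.stringify({ eid: 'ABC123', theme: 'dark' });
console.log(cookieValue); // {"eid":"ABC123","theme":"dark"}
```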
2- use localStorage and sessionStorage
-- similar to the above... I'd like to look for a value inside of localStorage or sessionStorage
window.localStorage.eid = 'ABC123'
This would drastically improve the flexibility of using List Targeting.
I've had a few requests now to test things within an iFrame element (e.g., social sharing buttons, video player assets, etc.). It'd be great if there were a way to access iFrame elements and test them, or a way to overlay those elements so that designs could be manipulated for them.
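Part of why this is hard, as I understand it (an assumption based on the standard browser same-origin policy): variation code in the parent page can only script an iframe whose document shares the parent's origin, which is exactly why built-in support would help. A sketch of that constraint:

```javascript
// Illustrative check of the same-origin rule: two URLs share an
// origin when scheme and host (including port) both match.
function sameOrigin(parentUrl, frameUrl) {
  const a = new URL(parentUrl);
  const b = new URL(frameUrl);
  return a.protocol === b.protocol && a.host === b.host;
}

console.log(sameOrigin('https://shop.example.com/p/1',
                       'https://shop.example.com/widgets/share')); // true
console.log(sameOrigin('https://shop.example.com/p/1',
                       'https://www.youtube.com/embed/xyz'));      // false
```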