My company has a standard QA process using JIRA. Basically, for every experiment we push to our dev site, we have our QA team look at it, and if all looks good we push to production.
It would be so awesome to have the following:
-Designate a project ID as a dev project
-Have an "Export for QA" / "Send to QA" button for JIRA (this might include screenshots of the changed elements, URLs, etc.)
Or at minimum, it could spit out text that we can copy and paste. It could look like this:
We have three new variations to test:
-Smaller Image (http://dev.mycompany.com/home/?optimizely_x1433070805=1)
-Logo removed (http://dev.mycompany.com/home/?optimizely_x1433070805=2)
-Add to Cart button centered (http://dev.mycompany.com/home/?optimizely_x1433070805=3)
By having this info readily available, we can send to our QA team for easy testing!
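Even without a built-in button, the copy-paste text above is simple enough to generate from a variation list. Here's a rough sketch; the function name, base URL, and variation names are made up for illustration, but the forcing-parameter format (`?optimizely_x<experimentId>=<index>`) matches the preview URLs shown above:

```javascript
// Sketch: build the QA hand-off text from a list of variation names.
// Assumes variation index 1 is the first variation (index 0 is usually
// the original), matching the preview URLs above.
function qaHandoffText(baseUrl, experimentId, variationNames) {
  const lines = variationNames.map(function (name, i) {
    return "-" + name + " (" + baseUrl + "?optimizely_x" + experimentId + "=" + (i + 1) + ")";
  });
  return "We have " + variationNames.length + " new variations to test:\n" + lines.join("\n");
}

console.log(qaHandoffText(
  "http://dev.mycompany.com/home/",
  1433070805,
  ["Smaller Image", "Logo removed", "Add to Cart button centered"]
));
```

Something like this could paste straight into a JIRA ticket until a native export exists.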
It would be great if the "Cross Browser Testing" feature worked for "internal" or behind-a-login pages as well.
Perhaps you could be prompted to provide CBT test login information, or that information could be shared after opting in.
Frequently we run tests on numerous URLs that don't share a simple match pattern, which means we have to add all 50 URLs one at a time. This can be tedious and time-consuming. It would be great if there were a way to upload an Excel file that includes all of the URLs we want to target and have them added to the test.
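In the meantime, one possible stopgap is to export the URL list from Excel as plain text (one URL per line) and collapse it into a single regular expression that can be pasted into one "URL matches regex" targeting condition. This is just a sketch under that assumption; the file name and format are hypothetical:

```javascript
// Sketch: collapse a list of target URLs (one per line, exported from
// Excel as CSV/text) into a single regex alternation for one
// "matches regex" URL targeting condition.

function escapeRegex(s) {
  // Escape regex metacharacters so each URL matches literally.
  return s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

function urlsToRegex(fileText) {
  const urls = fileText
    .split(/\r?\n/)
    .map(function (line) { return line.trim(); })
    .filter(Boolean);
  return "^(" + urls.map(escapeRegex).join("|") + ")$";
}

// Hypothetical usage with Node:
// const fs = require("fs");
// console.log(urlsToRegex(fs.readFileSync("target-urls.txt", "utf8")));
```

One pasted regex instead of 50 individual conditions — not as nice as a real upload feature, but it saves the clicking.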
Today, my Development Director asked me to turn on the winning variation at 100% until the release at the end of the week.
Although I've been generally inclined to handle this manually, through the Experiment Editor, I decided to give the new Results view/functionality a ride.
To my surprise, the result of "launching" the winning variation was to pause the other variations in the experiment.
What it didn't do was kick the traffic for the experiment up to 100% (as I had expected). Instead, it left the experiment at its overall traffic setting.
This, I suppose, is a great topic for discussion. On most occasions, I find that software which assumes too much in order to be "helpful" ends up being confusing.
Strange how, when it comes to something I want, it no longer seems like assuming too much. It seems like the feature fell short.
This, of course, reveals the subjective nature of such decisions and causes me, as a Usability Analyst, to fall back on my normal "Less is More" foundational principles.
Conclusion: I agree with the decision, but am frustrated with the outcome.
Since my experience is limited enough that I don't easily imagine a use for "Launching" at anything other than 100% traffic, I would love to see some additional comments here to expose the contrasting views.
Meanwhile, I can envision a variation of this feature that allows you to choose the launch percentage from the confirmation dialog.
I'd like the ability to bucket people into a variation based on URL query string.
Here's my example. I'd like to run an experiment that actually starts with an A/B test through my email marketing system. I want to send two emails that run different special offers. Links from each email would have corresponding campaign codes in the query string.
I'd like to use the value of that campaign parameter to show the user the corresponding special offer.
That's just a specific example though. You can imagine running other campaigns - for example through Google adwords, touting different offers or similar.
Being able to plug in to the query string in the URL would allow me to run integrated tests spanning multiple systems.
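Until something like this is supported natively, one workaround I can imagine is a tiny snippet on the page that maps the email campaign code to the variation-forcing parameter Optimizely already uses in preview links (`?optimizely_x<experimentId>=<index>`, as seen in the QA URLs in an earlier post). The campaign parameter name, codes, and experiment ID below are all hypothetical:

```javascript
// Sketch of a client-side workaround: map a campaign code in the query
// string to Optimizely's variation-forcing parameter and redirect once.
// "utm_campaign", the offer codes, and the experiment ID are assumptions.
var EXPERIMENT_ID = 1433070805;
var CAMPAIGN_TO_VARIATION = { "offer-a": 1, "offer-b": 2 };

function forcedVariationUrl(href) {
  var match = href.match(/[?&]utm_campaign=([^&#]*)/);
  if (!match) return null;
  var variation = CAMPAIGN_TO_VARIATION[match[1]];
  if (variation === undefined) return null;
  var sep = href.indexOf("?") === -1 ? "?" : "&";
  return href + sep + "optimizely_x" + EXPERIMENT_ID + "=" + variation;
}

// In the browser, redirect before Optimizely runs; the optimizely_x
// check prevents a redirect loop.
if (typeof window !== "undefined") {
  var forced = forcedVariationUrl(window.location.href);
  if (forced && window.location.href.indexOf("optimizely_x") === -1) {
    window.location.replace(forced);
  }
}
```

It's clunky (an extra redirect, and forced buckets may not report like organic ones), which is exactly why first-class query string bucketing would be better.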
Add query string parameter targeting at the URL level for multi-page tests. Other stuff would be cool as well (like checking a JS variable value), but the query string is what's causing us pain right now, so I won't be too greedy ;-)
Here's the current scenario:
Since I believe the following is true:
- targeting conditions other than the URL don't apply when the test code actually runs, after the user has already been slotted into a test
the following is not possible in a single multipage test:
- targeting changes to a dynamic page where the only difference in the URL is the query string parameters and values. In other words, where what loads on the page is determined by what's in the query string, not the base URL.
I just had to create a test that was essentially four Optimizely tests, but could have been a single multipage test if the URL parameters were available in URL targeting.
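For anyone hitting the same wall: the workaround I know of is to set the experiment to manual activation and then activate it from the page only when the query string matches, using Optimizely Classic's `["activate", experimentId]` call. A minimal sketch — the parameter name, value, and experiment ID are made up:

```javascript
// Sketch: activate a manually-activated experiment only when a query
// string parameter matches. "category=shoes" and the experiment ID are
// hypothetical; the value is assumed to contain no regex metacharacters.
var EXPERIMENT_ID = 2233445566;

function queryHasParam(search, name, value) {
  // search is location.search, e.g. "?category=shoes&sort=price"
  var pattern = new RegExp("[?&]" + name + "=" + value + "([&#]|$)");
  return pattern.test(search);
}

if (typeof window !== "undefined" &&
    queryHasParam(window.location.search, "category", "shoes")) {
  window.optimizely = window.optimizely || [];
  window.optimizely.push(["activate", EXPERIMENT_ID]);
}
```

That still means one snippet (and one activation decision) per page variant, which is why proper query string targeting per URL would collapse my four tests back into one.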
It might be cool if you guys had a place for users to tell you where there were bugs or inaccuracies in Optiverse and the developer portal. That way we can help you keep it up to date :-)
For example: when you use browser search (Ctrl+F) on the developer site (once you are in one of the sections), the header gets messed up. I'd attach a screenshot, but it doesn't look like I can. Essentially, big text appears overlapping the logo area. This happens for me in Chrome and IE, but not Safari. Not a big deal, but something I noticed and thought I would pass on.