We now have the Annotation feature on the results page. I use it to document changes in a test caused by external factors like marketing events. Often I want to see the impact of these changes on the test. It would be nice if we could select the annotation dates as the start/end dates in the Date Range selector. Right now I have to do this manually, which is quite painful.
In addition, it would be nice if the interface could remember my Date Range selection (via query string or custom view).
As an Optimizely user, I have come across this situation where my page has two or more buttons (each with a unique CTA) pointing to the same form and a common thank-you page.
To track which CTA is sending the user to the thank-you page (goal URL), I currently have to set up a custom event in a separate experiment that then pushes the result into my main experiment for the CTAs that caused the conversion.
As of now, in Optimizely it is possible to select multiple buttons when setting click goals.
Could this be extended to allow choosing multiple buttons that lead to the same thank-you page, with the ability to track which CTA produced the thank-you goal?
It would be great if we had the option to bucket data about our visitors in Optimizely, so that we could push relevant data back to Optimizely and store it with the results.
Specifically, I am currently looking for a solution that would allow me to push some conversion data back to Optimizely, such as a visitor's email address when they convert.
This could then be used when reporting on things like long-term revenue gains from testing, as we could better judge the lifetime value of customers based on which bucket they were in during a test.
In my case, we are looking to run incentivized signups and would like to log visitors' email addresses in the Optimizely results so that we know which version of the page they saw. This is helpful if you are testing several incentives, since you would want to know which incentive each visitor was offered.
Just a thought, as I am finding it difficult to come up with a workaround.
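As a rough sketch of what I have in mind, something like this on the thank-you page. `buildConversionTag` is an illustrative helper, and while the `customTag`/`trackEvent` calls follow Optimizely's Classic JS API, the exact keys and event name here are assumptions:

```javascript
// Illustrative helper: bundle up the visitor data we'd like stored with the
// conversion. The field names here are assumptions, not an existing API.
function buildConversionTag(email, incentive) {
  return {
    email: email,        // who converted
    incentive: incentive // which incentive they were offered
  };
}

// In the browser, on the thank-you page:
if (typeof window !== "undefined") {
  var tag = buildConversionTag("visitor@example.com", "free-month");
  window.optimizely = window.optimizely || [];
  window.optimizely.push(["customTag", "email", tag.email]);
  window.optimizely.push(["customTag", "incentive", tag.incentive]);
  window.optimizely.push(["trackEvent", "incentivized_signup"]);
}
```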
This is a duplicate idea. We are merging these 2 ideas together.
When using Preview while editing an experiment and using the Events tab, it would be useful if you could click on an event and see which test it came from. This would help with debugging.
As many of you know, Hybris is a huge CMS. My company is just about to switch to Hybris, and it would be great if Optimizely could easily be hooked up to it. I am not sure what would be involved, as we are still in the early stages of setting up Hybris, but it would be fantastic if we could launch all new features through Optimizely so we could test them and get actionable data.
This idea has been merged with another post that had the same idea.
I have a problem that is very frustrating!!
Whenever we preview our tests on mobile devices, we keep running into the same problems with the Optimizely footer bar.
On iPhones there appears to be no way of getting rid of the preview pane. It is a fixed width, and we cannot scroll across to collapse or close it, which means that testing anything on an iPhone is near impossible unless you put the test live.
I have also noticed on my Nexus 5 that it is really difficult to collapse the bar even when you can scroll across to it; I tend to have to press the button about 20 times before it collapses.
Given that more and more testing will be done on mobile in the coming months/years it would be good to get this fixed as soon as possible. It just needs to be responsive.
My company has a standard QA process using JIRA. Basically, for every experiment we push to our dev site, we have our QA team look at it, and if all looks good we push to production.
It would be so awesome to have the following:
-Designate a project ID as a dev project
-Have an export for QA / Send to QA button for JIRA (This might include screenshots of the changed elements, URLs, etc)
Or at minimum, it could spit out text that we can copy and paste. It could look like this:
We have three new variations to test:
-Smaller Image (http://dev.mycompany.com/home/?optimizely_x1433070805=1)
-Logo removed (http://dev.mycompany.com/home/?optimizely_x1433070805=2)
-Add to Cart button centered (http://dev.mycompany.com/home/?optimizely_x1433070805=3)
By having this info readily available, we can send to our QA team for easy testing!
It might be that I am missing something really obvious, but when assigning goals I am never able to test that they are tracking correctly on the variations that have been created. It only lets you see what is being tracked on the original page. This can be a bit annoying when creating new CTAs and you want to verify that the goal recognises them.
Am I missing anything obvious?
We run quite a lot of multivariate tests. The old reports had functionality that let us group different multivariate changes together. For example, if I am testing 3 variations in the header and 3 variations in the footer, I could group the results so they showed me the performance of the three header variations only. By default you might see:
- Header A, Footer B = X% Conv Rate
- Header B, Footer B = X% Conv Rate
- Header C, Footer B = X% Conv Rate
- Header A, Footer A = X% Conv Rate
- .... and so on, so you'd have 9 rows.
In the old report, I could group these together to see the average conversion rate for all variations with Header A, Header B, and Header C, so I could tell my client which of those variations had the most impact, which of the footer variations had the most impact, and what the winning combination of the two was.
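To illustrate the grouping I'm describing, here is a small sketch for a 2x2 case (the numbers are made up, and a real report would also weight each rate by its visitor count):

```javascript
// Made-up per-combination results for a 2x2 multivariate test.
var results = [
  { header: "A", footer: "A", convRate: 0.10 },
  { header: "A", footer: "B", convRate: 0.12 },
  { header: "B", footer: "A", convRate: 0.08 },
  { header: "B", footer: "B", convRate: 0.10 }
];

// Average the conversion rates across every combination that shares the same
// variation in one section ("header" or "footer"). Note this is an unweighted
// average; a real report would weight by visitors per combination.
function averageBySection(rows, section) {
  var sums = {}, counts = {};
  rows.forEach(function (row) {
    var key = row[section];
    sums[key] = (sums[key] || 0) + row.convRate;
    counts[key] = (counts[key] || 0) + 1;
  });
  var averages = {};
  for (var key in sums) {
    averages[key] = sums[key] / counts[key];
  }
  return averages;
}
```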
It would be great if the "Cross Browser Testing" feature worked for "internal" or behind-a-login pages as well.
Perhaps you could be prompted to provide CBT login information, or that information could be "shared" after opting in.
Frequently we run tests on numerous URLs that don't have a simple match pattern, which means we have to add all 50 URLs one at a time. This is tedious and time-consuming. It would be great if we could upload an Excel file containing all of the URLs we want to target and have them added to a test.
Today, my Development Director asked me to turn the winning variation up to 100% until the release at the end of the week.
Although I've generally been inclined to handle this manually through the Experiment Editor, I decided to give the new Results view functionality a try.
To my surprise, the result of "launching" the winning variation was that the other variations in the experiment were paused.
What it didn't do was bump the experiment's traffic up to 100% (as I had expected). Instead, it left the experiment at its overall traffic setting.
This, I suppose, is a great topic for discussion. On most occasions, I find that software which assumes too much in order to be "helpful" just ends up confusing.
Strange how, when it comes to something I want, it no longer seems like assuming too much; it seems like the feature fell short.
This, of course, reveals the subjective nature of such decisions and causes me, as a Usability Analyst, to fall back on my usual "less is more" foundational principles.
Conclusion: I agree with the decision, but am frustrated with the outcome.
Since my experience is limited enough that I can't easily imagine a use for "launching" at anything other than 100% traffic, I would love to see some additional comments here exposing the contrasting views.
Meanwhile, I can envision a variation of this feature that lets you choose the launch percentage from the confirmation dialog.
I'd like the ability to bucket people into a variation based on URL query string.
Here's my example. I'd like to run an experiment that actually starts with an A/B test through my email marketing system. I want to send two emails that run different special offers. Links from each email would have corresponding campaign codes in the query string.
I'd like to use the value of that campaign parameter to show the user the corresponding special offer.
That's just one specific example, though. You can imagine running other campaigns, for example through Google AdWords, touting different offers.
Being able to plug in to the query string in the URL would allow me to run integrated tests spanning multiple systems.
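A minimal sketch of what I mean, assuming some hypothetical `utm_campaign` values; the `bucketVisitor` call follows Optimizely's Classic JS API, but the experiment ID and campaign codes below are made up for illustration:

```javascript
// Hypothetical mapping from email-campaign codes to variation indexes.
var CAMPAIGN_TO_VARIATION = {
  "offer_a": 0, // visitors from the first email see the first special offer
  "offer_b": 1  // visitors from the second email see the second special offer
};

// Pull the campaign code out of the query string and return the matching
// variation index, or null if no recognised campaign code is present.
function variationForSearch(search) {
  var match = /[?&]utm_campaign=([^&]*)/.exec(search);
  var code = match ? decodeURIComponent(match[1]) : null;
  return code !== null && CAMPAIGN_TO_VARIATION.hasOwnProperty(code)
    ? CAMPAIGN_TO_VARIATION[code]
    : null;
}

// In the browser, before the experiment activates (the experiment ID is made up):
if (typeof window !== "undefined" && window.location) {
  var EXPERIMENT_ID = 1433070805; // hypothetical
  var variation = variationForSearch(window.location.search);
  if (variation !== null) {
    window.optimizely = window.optimizely || [];
    window.optimizely.push(["bucketVisitor", EXPERIMENT_ID, variation]);
  }
}
```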
We use Adobe Analytics as our general analytics suite. One of its most powerful features is the custom segmentation ability and conversion variable mappings (eVars).
Since we've already built out a number of segments there, it would be very powerful to have Optimizely hook into them via Genesis (Data Connectors) to let me target and build audiences around those segments/dimensions.
This idea is a duplicate of another product idea. Please view the parent idea for updates and comments.
Add query-string parameter targeting at the URL level for multi-page tests. Other things would be cool as well (like checking a JS variable's value), but the query string is what's causing us pain right now, so I won't be too greedy ;-)
Here's the current scenario:
Since I believe the following is true:
- targeting conditions other than the URL don't apply when the test code actually runs, after the user has already been slotted into a test
the following is not possible in a single multi-page test:
- targeting changes on a dynamic page where the only difference in the URL is the query-string parameters and values; in other words, where what loads on the page is determined by the query string, not the base URL.
I just had to create what was essentially 4 Optimizely tests that could have been a single multi-page test if URL parameters were available in URL targeting.
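For what it's worth, here is a sketch of the kind of rule I'd like URL targeting to support. `matchesTarget` and the rule shape are hypothetical, not an existing Optimizely API:

```javascript
// Match a URL by base path plus required query-string parameters. Parameter
// values are assumed to be plain strings with no regex special characters.
function matchesTarget(url, rule) {
  // rule: { base: "https://example.com/products", params: { view: "grid" } }
  var qIndex = url.indexOf("?");
  var base = qIndex === -1 ? url : url.slice(0, qIndex);
  if (base !== rule.base) return false;
  var search = qIndex === -1 ? "" : url.slice(qIndex);
  for (var key in rule.params) {
    var re = new RegExp("[?&]" + key + "=" + rule.params[key] + "([&#]|$)");
    if (!re.test(search)) return false;
  }
  return true;
}
```

With a rule like that, each page of a multi-page test could be targeted by its own parameter set instead of needing a separate experiment.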