Reading results for two tests sharing the same goals
I'm currently trying to set up a test for my site's homepage that will test the current homepage vs. a new homepage (served as a variation through a redirect URL). This will be running at the same time as another redirect experiment (on a different page), and these two experiments will share the same engagement/conversion goals.
We would like to differentiate the engagement/conversion data between these two redirect variations.
Is there a way to track these goals based on where the user is entering from? Visitors in experiment 1 can organically get into experiment 2 by clicking a certain button. Is there a way to see which variation of the first redirect experiment those visitors were bucketed into? Can this easily be broken out on the Results page? For example, if a user in experiment 1 is put into variation 2 (the redirect homepage), then clicks a certain button and is bucketed into the redirect page of experiment 2, can the first experiment's results show where its users are being bucketed in the second experiment? In other words, is there an easy way to tell the goals apart on the Results page when two tests are running with the same goals?
Also, is this the kind of test I should set up as a multi-page test? If so, can a multi-page test redirect to two different URLs?
Thank you in advance for your help!
Thanks for reaching out.
I don't think it is possible to get that level of granularity when running two separate experiments: there is no guarantee that a visitor who sees variation 1 in experiment 1 will also see variation 1 in experiment 2. A multi-page test, however, guarantees that a visitor who is in variation 1 on the first page will also be in variation 1 on the second page. That would let you reason about the path your visitors took, since you would know for certain which variation they saw on each of the two pages.
The Results page shows the conversions of each goal per variation, but it doesn't tell you which variation your visitors were in for previous experiments.
An alternative would be to run one test after the other; that would eliminate the need to measure the impact of experiment 1 on the conversions of experiment 2.
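If you do have to run them in parallel, one possible workaround is to tag each visitor with the variation they were bucketed into in experiment 1, and then segment the shared goals by that tag on the Results page. A minimal sketch, assuming Optimizely Web's client-side `window.optimizely` API; the experiment ID `'1234567890'` and the attribute name `exp1_variation` are placeholders you would replace with your own:

```javascript
// Sketch: tag each visitor with the variation they were bucketed into
// in experiment 1, so shared goals can be segmented per variation.
// Assumes Optimizely Web's window.optimizely API; the experiment ID
// ('1234567890') and attribute name ('exp1_variation') are placeholders.

// Pure helper: pull one experiment's variation name out of the map
// returned by state.getVariationMap() ({ experimentId: { id, name } }).
function variationFor(variationMap, experimentId) {
  var entry = variationMap[experimentId];
  return entry ? (entry.name || entry.id) : null;
}

// Browser-only part (guarded so the helper stays usable on its own):
if (typeof window !== 'undefined' && window.optimizely && window.optimizely.get) {
  var state = window.optimizely.get('state');
  var variation = variationFor(state.getVariationMap(), '1234567890');
  if (variation) {
    // Store it as a custom visitor attribute for segmenting results.
    window.optimizely.push({
      type: 'user',
      attributes: { exp1_variation: variation }
    });
  }
}
```

The attribute would need to be created in your project first so it shows up as a segment on the Results page.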
Thank you for getting back to me so soon and answering my questions! I have a few follow-up questions that I'm hoping you can also help me with.
Is it possible on the Results page to create a custom view that shows the same goal twice but segmented by different URLs (rather than different audiences)? This article https://help.optimizely.com/Analyze_Results/Segment_your_results made me think it might be possible.
I don't have the option to run one test after the other. Is there any way I can set up my tests so I can distinguish whether the conversions being tracked came from one experiment rather than the other, given that they share the same goals?
I wonder if you are using an analytics package such as Google Analytics or Adobe SiteCatalyst.
If so, you can use Optimizely's integrations to push test data into it. You will then be able to see much more about what users did before and after the test, how many tests they entered, and the sequence in which they were exposed to multiple tests.
Your idea of using Optimizely dimensions is also very good, but from my experience, you would be amazed at what integrating analytics data with Optimizely gives you.
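As a rough illustration of what such an integration captures, with Google Analytics's analytics.js you could forward the visitor's bucketing across all running experiments as a custom dimension. A sketch only, assuming Optimizely Web's `window.optimizely` API and the global `ga()` function; the slot `'dimension1'` is a placeholder for a custom dimension you would configure in your GA property:

```javascript
// Sketch: forward Optimizely bucketing to Google Analytics so reports
// can be segmented by experiment and variation. Assumes analytics.js
// (the global ga() function) and Optimizely Web's window.optimizely API;
// 'dimension1' is a placeholder custom-dimension slot.

// Pure helper: flatten the variation map into a compact label such as
// "1111:original|2222:redirect_home".
function bucketingLabel(variationMap) {
  return Object.keys(variationMap)
    .map(function (expId) {
      var v = variationMap[expId];
      return expId + ':' + (v.name || v.id);
    })
    .join('|');
}

// Browser-only part:
if (typeof window !== 'undefined' && window.optimizely && typeof window.ga === 'function') {
  var label = bucketingLabel(window.optimizely.get('state').getVariationMap());
  window.ga('set', 'dimension1', label); // placeholder dimension slot
  window.ga('send', 'event', 'optimizely', 'bucketing', label, { nonInteraction: true });
}
```

With that in place, GA reports for your shared goals can be segmented by which combination of variations each visitor saw, which is exactly the cross-experiment view the Results page alone doesn't give you.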