Iterating on Results: Your Stories
My colleague @AdamA, education manager at Optimizely, has just published this thoughtfully curated synopsis of how to turn your experiment results into meaningful iteration. I helped him write it, so I am a little biased when I say it's a great resource for those of you looking for something to chew on! Check it out here - making decisions on your Optimizely results
Reviewing the polished final version got me thinking again about how difficult, exciting, and rewarding it is when a testing team gets in the habit of turning ALL results, not just the wins, into an iteration strategy.
I'd love to hear from those of you out there who have struggled with translating tough results into next steps, and those of you that have had great success doing so.
Share! We want to learn from your experience.
Hi @Hudson - Thanks for sharing. Would you be able to link to the article you helped write? I'm curious -- we have a pretty decent strategy for turning winning tests into a new & iterative test, but we struggle with what to do with the losing tests. Can you provide some tips for turning the non-winning tests into an iterative strategy? How do you investigate why the variations did not work and how do you come up with new ideas? Do you just go back to the drawing board and start over with a new hypothesis?
Please see the article (now linked to!).
Essentially, with losing tests, the playbook is relatively simple, but there is an art to it.
Let me know if what we've outlined there makes sense based on your experience, and I'll be happy to discuss it here.
Thanks @Hudson for the article. It is helpful, and I understand that for smaller tests you should just move on to a new idea and a new test. But for more extensive tests, I guess my question boils down to this:
If we find that our variation is a loser, then is it valuable to investigate why this variation was a loser? For example, if we tested a new product page layout, wouldn't it be important to learn *why* the customers' behavior resulted in an underperforming design? I.e., maybe there is a fundamental issue that we should avoid site-wide? Maybe we should avoid images with people in them and instead stay product focused (as an example).
Hey @JohnH ,
Your point is spot-on.
What I think the article could do a better job of emphasizing is that each of those strategies is entirely rooted in iterative learning.
So, like you said, if a certain image underperformed, I think a follow-up test comparing categorically discrete images (person vs. product vs. person with product) would be very wise; it would lend insight to your learning and weight to your decisions that could affect other marketing channels as well. In that example, I would recommend using categorically similar images that are executionally different (i.e., Test 2 uses person b and product image b, vs. Test 1, which tests person a and product image a), so you can be sure it isn't an issue with the executional style of the images.
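To make the design concrete, here is a minimal sketch of that follow-up-test structure. The category names and test labels are hypothetical placeholders, not anything from Optimizely's product: each test compares the same discrete image categories, and repeating the comparison with a second executional style lets you attribute a result to the category rather than to one particular image.

```python
# Hypothetical follow-up-test design: compare categorically discrete
# images, then repeat the comparison with a second executional style
# so a loss can be attributed to the category, not the execution.
image_categories = ["person", "product", "person_with_product"]
executions = ["a", "b"]

# One test per executional style; each test holds all three categories.
tests = {
    f"test_{i + 1}": [f"{category} (execution {execution})"
                      for category in image_categories]
    for i, execution in enumerate(executions)
}

for name, variations in tests.items():
    print(name, "->", variations)
```

If the same category wins (or loses) in both tests, you have much stronger evidence that the category itself, rather than a single image's execution, is driving the behavior.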
The broader story is that for winning, losing, and inconclusive tests, there is an appropriate follow-up test that should either help you better capture value or better understand user behavior, or hopefully both.