
Best practices for getting buy-in on experiments?

vanessa_krumb 02-04-16

Sometimes we have to convince somebody that an experiment is a good idea before we can run it. Let's say whoever has final approval over experiments isn't a CRO expert, but they generally think testing is a good idea. What points are most effective for getting buy-in on experiments? Where do you tend to see the most friction when getting buy-in?

Vanessa

corneliusdo 02-05-16

Re: Best practices for getting buy-in on experiments?

I think it depends on the person, really. Including dollar figures is always a strong point. Something along the lines of: "This page (or these pages) currently converts at X%, giving us $Y per conversion. We think we can push the needle by 10%, leading to a potential $Z increase in revenue per conversion." Adapt for your situation, of course!
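The dollar-figure framing above boils down to a short back-of-the-envelope calculation. Here is a minimal sketch; all numbers are hypothetical placeholders for your own X, Y, and Z:

```python
# Hypothetical inputs -- substitute your own site's numbers.
monthly_visitors = 50_000
conversion_rate = 0.02         # X: current conversion rate (2%)
revenue_per_conversion = 80.0  # $Y: average value of a conversion
expected_lift = 0.10           # relative lift we think the test can achieve

baseline_conversions = monthly_visitors * conversion_rate
baseline_revenue = baseline_conversions * revenue_per_conversion
potential_gain = baseline_revenue * expected_lift  # $Z: the headline number

print(f"Baseline revenue: ${baseline_revenue:,.0f}/month")
print(f"Potential gain at {expected_lift:.0%} lift: ${potential_gain:,.0f}/month")
```

Leading with the projected $Z figure, rather than the percentage lift, is usually what lands with a non-CRO decision maker.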

I would also combine that with presenting three or four experiment options together, along with the one you think is best. I assume that because you think it's a good idea, there's data to back it up. You'll be able to rationalise and sell it better when you compare it against two or three other experiments that are not as good or not as ready.


Hope that helped!

Cornelius Do
Digital Channels @ CommBank

MartijnSch 02-09-16

Re: Best practices for getting buy-in on experiments?

I agree with Cornelius: including a dollar figure for a test's potential will help your case. It also helps to know up front what the likely impact could be, based on older tests you may have run on the same part of your site.

If you embrace the fact that testing ultimately impacts the bottom line, you'll always have an argument for your case.

glvzLIFT 02-25-16

Re: Best practices for getting buy-in on experiments?

The key areas that cause the most friction are around risk/exposure, questions around prioritization, and lack of confidence in the impact the treatment can make. Underlying all of these things is the relationship aspect of the work, specifically building trust.

* The risk/exposure issue can often be mitigated with throttling and/or segmentation.
* The prioritization questions I've seen largely come from disagreement over which problems should be solved first. If your experiment is ready to go, it can help to frame it as not wasting a launch window while development starts on an experiment that targets the "higher" priority problem.
* To address the lack of confidence in the treatment, it can be effective to "pre-flight" the test idea at a smaller scale using tools like UsabilityHub. Being able to show some data that the treatment can move the needle is pretty compelling.
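The throttling idea in the first bullet is easy to demonstrate concretely. Below is a minimal sketch of hash-based traffic allocation, the general technique behind throttled exposure; this is an illustration of the concept, not how Optimizely implements it, and the 5% fraction is an arbitrary example:

```python
import hashlib

def in_experiment(user_id: str, traffic_fraction: float = 0.05) -> bool:
    """Deterministically decide whether a user is exposed to the experiment.

    Hashing the user ID gives a stable assignment, so the same user always
    sees the same experience, while only ~traffic_fraction of all users
    are exposed to the risk of the treatment.
    """
    digest = hashlib.md5(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 10_000          # stable bucket in [0, 10000)
    return bucket < traffic_fraction * 10_000  # expose only the low buckets

# Check that roughly 5% of a simulated population is exposed.
exposed = sum(in_experiment(f"user-{i}") for i in range(100_000))
print(f"Exposed: {exposed / 100_000:.1%}")
```

Being able to say "only 5% of visitors will ever see the treatment, and we can dial that down further" directly answers the risk/exposure objection.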

Hoping this thread gets a few more posts, since this is an area that directly affects optimization program health.
Michael Galvez
Optimization Strategist, Pointmarc