Advice for tests not reaching statistical significance quickly
Hi, I am fairly new to Optimizely and CRO testing and have a question.
I'm running my first test, a very simple one: changing the colour of the add-to-cart button to a more eye-catching colour to improve its prominence.
The test has been running for 42 days now.
The problem I have is that the site doesn't get a great deal of traffic, so the number of visitors who actually add a product to cart is even smaller.
In those 42 days, the variation has averaged a 33% improvement over the original, but the test has a statistical significance of less than 1%, with 2,300 visitors remaining.
What advice can people give about this test? Should I keep it running, accept that I don't have enough traffic to test this, or conclude that the change isn't as impactful as I thought?
The experiment is here:
Hi @ColinEdge45, thanks for asking this! I'm sure that others will reply with their own tips, but I did want to share an article we've written on this exact topic with you: https://help.optimizely.com/hc/en-us/articles/202595450
There are a few approaches you can take, but typically when you have less traffic to spare, you'll want to prioritize the broader, more "global" changes where you're more likely to see a difference in conversion rates. The article gets into more detail on some things you can do as you test, as well.
Please let me know whether this helps you out!
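To put some numbers on why a low-traffic test stalls, here's a back-of-the-envelope sample-size calculation for a two-proportion z-test. This is a sketch, not Optimizely's Stats Engine, and the 2% baseline add-to-cart rate is a made-up assumption — substitute your site's actual rate:

```python
# Rough sample size needed per variation to detect a given relative lift
# in a conversion rate, using the standard two-proportion z-test formula.
from math import sqrt
from statistics import NormalDist

def sample_size_per_variation(p1, relative_lift, alpha=0.05, power=0.8):
    """Visitors needed in EACH variation to detect the lift at the
    given significance level (alpha) and statistical power."""
    p2 = p1 * (1 + relative_lift)               # expected variation rate
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2                       # pooled rate under H0
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return numerator / (p2 - p1) ** 2

# Hypothetical 2% baseline add-to-cart rate, 33% relative lift:
n = sample_size_per_variation(0.02, 0.33)
print(round(n))  # on the order of 8,000 visitors per variation
```

The takeaway: at low baseline rates, even a large relative lift needs thousands of converting-page visitors per variation, which is why tests on rarer actions (like add-to-cart) take so much longer than tests on broader, higher-traffic goals.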
For example, you could track an event for when a user mouses over the button you are testing (as opposed to clicking on it). That could give you a better idea about the prominence of the button, which is what you are actually testing. Another option is to use a heatmap on each variation to see where users are looking.
I'm assuming that your main purpose here is to try out Optimizely specifically. Alternatively, you could use a combination of tools like Mechanical Turk and Validately to have people test mockups of your flow, or go back even further with user interviews and live UX testing.