
Why I have trust issues...

santhony7 03-08-17

Why I have trust issues...

Running an experiment on our site now that is a 50/50 split between the original and a variation. However, the variation is empty; no changes were made.

Interesting stuff :-)

Anthony Smith
[Screenshot attached: 1489002984.png]
tomfuertes 03-09-17
 

Re: Why I have trust issues...

That's why you have to look at a test across all segments and run the revenue through the quartile export (might be Classic only)!
____
- Tom Fuertes | CTO @ CROmetrics / LinkedIn
"Most Impactful Use of Personalization" and "Experience of the Year" Optie award winner.


emgao-fsa 03-09-17
 

Re: Why I have trust issues...

We always have 2-way, 3-way, and 4-way empty tests running to measure random noise. When we get a significant result from a test, we look at the same timeframe in our random-noise detector tests. That gives us additional context, especially because our site sees a lot of seasonality! It's not an exact science, but setting up ongoing empty tests can be helpful the next time your results page looks like this.
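
A minimal sketch of that "random noise detector" idea in Python, assuming a made-up site-wide conversion rate and daily traffic (the numbers and the simulation approach are illustrative, not how Optimizely computes its results):

```python
# Simulate many empty A/A tests at an assumed traffic level and conversion
# rate, and look at how large the observed "lift" can get from chance alone.
import numpy as np

rng = np.random.default_rng(42)

baseline_rate = 0.035      # assumed site-wide conversion rate (illustrative)
visitors_per_arm = 7500    # roughly a day of traffic per variation (illustrative)
n_simulations = 10_000     # number of simulated empty A/A tests

# Both arms share the same true rate, so any observed "lift" is pure noise.
orders_a = rng.binomial(visitors_per_arm, baseline_rate, n_simulations)
orders_b = rng.binomial(visitors_per_arm, baseline_rate, n_simulations)
lift = orders_b / orders_a - 1.0

print(f"median |lift| from noise alone: {np.median(np.abs(lift)):.1%}")
print(f"95th percentile of |lift|:      {np.percentile(np.abs(lift), 95):.1%}")
```

Under assumptions like these, single-digit percentage "lifts" show up routinely from noise alone, which is exactly what an ongoing empty test makes visible.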
santhony7 03-09-17
 

Re: Why I have trust issues...

That is a great idea. I never thought about keeping these things running all the time. Totally doing that.
Anthony Smith
santhony7 03-09-17
 

Re: Why I have trust issues...

I actually duplicated the empty test, so now I have two running at the same time, and there are differences even between the two for the same time period. Definitely going to use this data when making decisions.
Anthony Smith
santhony7 03-09-17
 

Re: Why I have trust issues...

 
Anthony Smith
[Screenshot attached: 1489074305.png]
emgao-fsa 03-09-17
 

Re: Why I have trust issues...

Hm, I'd guess that the discrepancy in total orders across the two variations comes from differences in when the data was last updated. I can see that you set both to 8:35 AM, but I've always been skeptical of the accuracy of that timestamp. :)

It looks like total visitors in the first screenshot is 3,685 while total visitors in the second is 3,639, so I think the timeframes of the two are slightly off. It might be more accurate to check again once the duplicate has run for a full 24 hours, so you can compare midnight to midnight for both.
santhony7 03-09-17
 

Re: Why I have trust issues...

I agree. I'm going to let it ride and then check again tomorrow. I'll try to post the final results...
Anthony Smith
santhony7 03-10-17
 

Re: Why I have trust issues...

Yesterday's results...

264 orders vs. 255 orders: a 9-order difference.
Side note: Google Analytics recorded 240 orders.

7,505 users vs. 7,442 users: a 63-user difference.

+7.99% improvement vs. -4.02% improvement: a 12.01 percentage point spread.

 

I am not saying these numbers should be exactly the same, but to me it certainly means that Optimizely results should be taken with a grain of salt and should be part of a larger conversation.
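
As a rough sanity check on those numbers, here is a plain two-proportion z-test, assuming 264/7,505 and 255/7,442 are independent conversion counts out of unique visitors (a simplification; Optimizely's stats engine works differently):

```python
# Two-proportion z-test on the reported A/A numbers from the post above.
from math import sqrt
from statistics import NormalDist

orders_a, visitors_a = 264, 7505
orders_b, visitors_b = 255, 7442

p_a, p_b = orders_a / visitors_a, orders_b / visitors_b
pooled = (orders_a + orders_b) / (visitors_a + visitors_b)
se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
z = (p_a - p_b) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"lift: {p_a / p_b - 1:+.2%}, z = {z:.2f}, two-sided p ≈ {p_value:.2f}")
```

Run as written, this gives a z-score around 0.3 and a two-sided p-value around 0.76, i.e. the gap between two identical variations is well within random noise.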

 

Anthony Smith
[Screenshot attached: 1489157158.png]
JasonDahlin 03-10-17
 

Re: Why I have trust issues...

Small numbers suck, especially for low-power experiments.

 

It doesn't look like either of these has reached "significance" yet.  At best, anything that has not achieved significance should be treated as "directionally positive/negative".  If you let them run longer, the expectation is that these A/A tests would eventually settle on "no directionality".
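
To put rough numbers on the power point, here is a standard two-proportion sample-size approximation, using an assumed ~3.5% baseline conversion rate (the rate and lift targets are illustrative, not taken from the thread):

```python
# Approximate visitors per variation needed to detect a given relative lift
# at 95% significance and 80% power, using the normal approximation.
from statistics import NormalDist

def visitors_needed_per_arm(base_rate, mde, alpha=0.05, power=0.80):
    """Approximate visitors per variation to detect a relative lift of
    `mde` over `base_rate` with the given significance and power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p2 = base_rate * (1 + mde)
    diff = p2 - base_rate
    variance = base_rate * (1 - base_rate) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / diff ** 2

for mde in (0.05, 0.10, 0.20):
    print(f"{mde:.0%} lift -> ~{visitors_needed_per_arm(0.035, mde):,.0f} visitors per arm")
```

With these assumptions it takes tens of thousands of visitors per arm to reliably detect a 5-10% relative lift, so a day of ~7,500 visitors per variation is nowhere near enough traffic for the lift estimate to stabilize.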

--Jason Dahlin
Analytics and Testing Guru :)