12-06-18

# Stats Engine question

Hi all,

I don't think I've properly understood the following page (I hope I haven't!): https://help.optimizely.com/Analyze_Results/Why_Stats_Engine_results_sometimes_differ_from_classical...

> Optimizely calculates a series of 100 successive confidence intervals through the course of an experiment. Each of those intervals receives a distinct p-value and a confidence interval. The p-value that appears in the Results page reflects the smallest p-value that Optimizely saw over the course of all of these sequential intervals.

When I see a "statistical significance" of 95% in Optimizely, does that mean a hundred t-tests were run and at least one of them produced p < 0.05?

Imagine a scientist who, not seeing publishable results, ran the exact same study 99 more times, then published the best result and ignored the rest. Surely that's not what Optimizely does, so can someone explain what I'm misunderstanding here?
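To make the worry concrete, here's a minimal simulation (my own illustration, not Optimizely's code): if 100 independent t-tests are run when there is no real effect and only the smallest p-value is kept, a "significant" result shows up almost every time, since 1 − 0.95^100 ≈ 0.994.

```python
import random

random.seed(0)

def min_p_of_100():
    # Under the null hypothesis, each test's p-value is uniform on [0, 1],
    # so the minimum of 100 of them dips below 0.05 with probability
    # 1 - 0.95**100, about 99.4%.
    return min(random.random() for _ in range(100))

trials = 10_000
hits = sum(min_p_of_100() < 0.05 for _ in range(trials))
print(hits / trials)  # roughly 0.994 even though no real effect exists
```

This is exactly the naive "pick the best of 100 runs" scenario; the question is how Stats Engine's sequential p-values avoid this inflation.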

Any help would be greatly appreciated.

Michal 12-07-18

## Re: Stats Engine question

Hi there,

To start with, it's probably fair to mention that I'm not a statistician myself. That said, if I understand the definition of the p-value correctly, using the lowest recorded p-value actually means that Optimizely is choosing the conservative approach here.

To use your scientist example: under the Optimizely model, the scientist would publish according to the most conservative/worst result he got out of his 100 experiments, not the other way around.

To add to this, one of the benefits of using Stats Engine is that it helps control the false discovery rate.
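For illustration, the Benjamini–Hochberg step-up procedure is one standard textbook way to control the false discovery rate across many simultaneous tests (a sketch of the general idea, not necessarily Optimizely's exact implementation):

```python
def benjamini_hochberg(pvals, q=0.05):
    # Sort p-values ascending; reject every hypothesis up to the largest
    # rank k with p_(k) <= q * k / m. This bounds the expected share of
    # false discoveries among the rejections at q.
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k = rank
    return sorted(order[:k])  # indices of the rejected hypotheses

# Five hypothetical variation p-values; only the first two survive FDR control.
print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.6]))  # [0, 1]
```

Note how the two borderline p-values (0.039 and 0.041) would each pass a naive 0.05 cutoff but are not rejected once the correction accounts for the number of tests.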

I hope this helps,

Michal
Optimizely