On Wed, 20 Apr 2016 01:50 am, Steven D'Aprano wrote:

> - Is the effect due to chance? Remember, with a p-value of 0.05 (the
>   so-called 95% significance level), one in twenty experiments will
>   give a positive result just by chance. A p-value of 0.05 does not
>   mean "these results are proven", it just means "if every single
>   thing about this experiment is perfect, then the chances that these
>   results are due by chance alone is 1 in 20".
Arggh! The above is, of course, *wrong*. This is why statistical
significance is so hard: I know the correct interpretation[1] of
p-values and I still got it wrong.

p-values give the probability of a positive result by chance if the null
hypothesis is true, that is, the chance of getting a false positive
result: "We detected a difference that actually isn't there."

They *don't* tell you anything about false negative results: "We failed
to detect a difference which actually is there." And they certainly
don't tell you the chances that the results are true.

More here:

https://www.sciencenews.org/article/odds-are-its-wrong

[1] At least, I'm confident I understand p-values with a 95%
significance level.

-- 
Steven

-- 
https://mail.python.org/mailman/listinfo/python-list
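P.S. The false-positive reading is easy to check empirically. Here is a
quick simulation (my own sketch, standard library only; the z-test
helper, sample size, and trial count are illustrative choices, not from
the post above): the null hypothesis is true by construction, yet about
1 test in 20 still comes back "significant" at p < 0.05.

```python
import math
import random

def one_sample_z_test(sample):
    """Two-sided z-test of H0: mean == 0, for data drawn from N(0, 1)."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)  # SE of the mean is 1/sqrt(n)
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(42)
trials = 10_000
false_positives = 0
for _ in range(trials):
    # The null hypothesis really is true: the data are N(0, 1).
    sample = [random.gauss(0, 1) for _ in range(30)]
    if one_sample_z_test(sample) < 0.05:
        false_positives += 1

rate = false_positives / trials
print(f"False positive rate: {rate:.3f}")  # hovers around 0.05
```

Note the simulation says nothing about power (false negatives): every
sample here was drawn with no real effect, so it cannot tell you how
often a real difference would be missed.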