Themis;373434 Wrote: 
> You don't get the point. What you call "with sufficient statistics" will
> always remain below the confidence point.
> If -say- there can only be 0.01% samples which can be different,
> whatever the number of samples examined, it won't change this
> percentage: there will only be 0.01% of chance in finding *any*
> difference.
> So, the test will fail. Because 99.99% of samples (and thus, 99.99% of
> partial tests) won't show any difference.

I don't mean to be rude, but that's just... wrong.  If 0.01% of the
samples are different, then with enough data they will eventually show
up as a statistically significant effect, no matter where the confidence
threshold is set.

Would you like me to prove that mathematically?

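To make that concrete, here's a rough Python sketch (my own
illustration, not a description of any particular test protocol, and the
5%/80% numbers are just the usual conventions).  Assume that on a
fraction f of trials the difference is actually audible and the listener
gets those right, and that on the rest they simply guess at 50%.  The
per-trial success rate is then p = 0.5 + f/2, and the standard error of
the observed score shrinks like 1/sqrt(N), so for any fixed f > 0 there
is always an N beyond which the excess over chance is significant:

from math import ceil

def trials_needed(f, z_alpha=1.645, z_power=0.842):
    """Approximate number of trials needed to detect an audible
    fraction f at ~5% significance (one-sided) with ~80% power,
    using the normal approximation to the binomial."""
    delta = f / 2.0          # excess of the true success rate over 0.5
    sigma = 0.5              # worst-case per-trial standard deviation
    return ceil(((z_alpha + z_power) * sigma / delta) ** 2)

for f in (0.0001, 0.001, 0.01, 0.1):   # 0.01%, 0.1%, 1%, 10% of trials audible
    print(f"audible fraction {f:.4%}: roughly {trials_needed(f):,} trials")

The point isn't that a 0.01% difference is practical to detect (it takes
hundreds of millions of trials); it's that no confidence threshold makes
it *impossible*.  The required sample size just grows as the effect
shrinks.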
> And, in fact, the broader the audience, the closer to the theoretical
> difference the test gets : the more people making the test, the closer
> to the 0.01% of people having passed the test we get.
> So, in fact, in order that the test could *possibly* succeed in showing
> *any* differences, the differences should cover 92% of the samples. This
> is why I say it measures something else. ;)
> 

I'm not sure where the 92% came from...  but again, that just isn't
true.  And to make it even easier, in many tests of this type the
experimenter takes the group of "golden ears" that performed the best
and tests them again, to determine whether they succeeded by chance or
through actual ability.

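For what it's worth, here's a tiny sketch of why that retest matters
(the 12-of-16 numbers below are purely illustrative).  A pure guesser
might clear a single screening round by luck, but the chance of clearing
two independent rounds is the square of that, which is how the follow-up
round separates luck from ability:

from math import comb

def p_pass_by_luck(n=16, k=12, p=0.5):
    # probability a pure guesser gets at least k of n trials right
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

q = p_pass_by_luck()
print(f"pass one round by luck:   {q:.4f}")     # about 0.038
print(f"pass both rounds by luck: {q**2:.6f}")  # about 0.0015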

-- 
opaqueice
