In article <9g9k9f$h4c$[EMAIL PROTECTED]>,
Eric Bohlman <[EMAIL PROTECTED]> wrote:
>In sci.stat.consult Tracey Continelli <[EMAIL PROTECTED]> wrote:
>> value.  I'm not sure why you'd want to reduce the size of the data
>> set, since for the most part the larger the "N" the better.

>Actually, for datasets of the OP's size, the increase in power from the 
>large size is a mixed blessing, for the same reason that many 
>hard-of-hearing people don't terribly like wearing hearing aids: they 
>bring up the background noise just as much as the signal.  With an N of 
>one million, practically *any* effect you can test for is going to be 
>significant, regardless of how small it is.
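
To make the quoted point concrete, here is a minimal sketch in
Python (assumed numbers for illustration, not from the thread):
with a true mean difference of only 0.01 standard deviations and
a million observations per group, a t-test rejects decisively.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)      # fixed seed for reproducibility
n = 1_000_000
a = rng.normal(0.00, 1.0, n)        # group A: true mean 0
b = rng.normal(0.01, 1.0, n)        # group B: true mean 0.01 sd higher

t, p = stats.ttest_ind(a, b)        # two-sample t-test
print(f"t = {t:.2f}, p = {p:.2g}")  # p comes out around 1e-12

The effect is trivial by any practical standard, yet the test
flags it as highly significant.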


This just points out another stupidity of the use of 
"significance testing".  Since the null hypothesis is
false anyhow, why should we care about the probability
of rejecting it when it is true?

State the REAL problem, and attack it directly.
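
One way to state the real problem: the question of interest is
the size of the effect, not whether it is exactly zero.  A
minimal sketch of that restatement (an interpretation, not
Rubin's own prescription; same assumed data as above): report
the estimated difference with an interval and judge its
practical importance.

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
a = rng.normal(0.00, 1.0, n)
b = rng.normal(0.01, 1.0, n)

diff = b.mean() - a.mean()                       # point estimate
se = np.sqrt(a.var(ddof=1)/n + b.var(ddof=1)/n)  # standard error
lo, hi = diff - 1.96*se, diff + 1.96*se          # 95% interval
print(f"difference = {diff:.4f}, 95% CI ({lo:.4f}, {hi:.4f})")

The interval shows the effect is about 0.01 sd -- "significant,"
but arguably negligible; the decision becomes a substantive one
about magnitude rather than a mechanical one about p-values.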

-- 
This address is for information only.  I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette, IN 47907-1399
[EMAIL PROTECTED]         Phone: (765)494-6054   FAX: (765)494-0558

