Herman Rubin wrote:
> 
> In article <9g9k9f$h4c$[EMAIL PROTECTED]>,
> Eric Bohlman <[EMAIL PROTECTED]> wrote:
> >In sci.stat.consult Tracey Continelli <[EMAIL PROTECTED]> wrote:
> >> value.  I'm not sure why you'd want to reduce the size of the data
> >> set, since for the most part the larger the "N" the better.
> 
> >Actually, for datasets of the OP's size, the increase in power from the
> >large size is a mixed blessing, for the same reason that many
> >hard-of-hearing people don't terribly like wearing hearing aids: they
> >bring up the background noise just as much as the signal.  With an N of
> >one million, practically *any* effect you can test for is going to be
> >significant, regardless of how small it is.
> 
> This just points out another stupidity of the use of
> "significance testing".  Since the null hypothesis is
> false anyhow, why should we care what happens to be
> the probability of rejecting when it is true?
> 
> State the REAL problem, and attack this.

How true! The only real drawback to more rather than less data for
inferential purposes is the extra cost of computation, not the
inconvenience posed to significance-testing methodology.
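
Eric's point about an N of one million is easy to check numerically.
Here is a quick sketch of my own, in Python; the true effect of 0.01
standard deviations and the seed are arbitrary choices, not anything
from the thread:

    import math
    import random

    # With N = 1,000,000, even a tiny true effect (0.01 standard
    # deviations) is overwhelmingly "significant" under a z test.
    random.seed(0)
    n = 1_000_000
    sample = [random.gauss(0.01, 1.0) for _ in range(n)]

    mean = sum(sample) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    z = mean / (sd / math.sqrt(n))        # one-sample z test of H0: mu = 0
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value, normal tails
    print(f"z = {z:.1f}, p = {p:.1e}")    # z comes out near 10; p is tiny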

There is a significant philosophical question lurking here. It is a
reminder of how attached we become to the tools we use: we sometimes
turn their bugs into features. Significance testing is a make-do
construction of classical statistical inference, in some sense an
indirect way of characterizing the uncertainty surrounding a
parameter estimate. The Bayesian approach attempts to characterize
such uncertainty directly rather than indirectly, and further, by
pushing the parameter through some functional transformation, to
characterize directly the uncertainty surrounding whatever
consequential loss or profit function is critical to a real-world
decision. That is clearly laudable... if it can be justified.
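
To make that concrete, here is a toy sketch of my own (the
conversion-rate setting, the uniform Beta(1, 1) prior, and the profit
figures are all hypothetical, not anything from this thread): put a
posterior on the parameter, then push the posterior draws through the
profit function that actually matters to the decision.

    import random
    import statistics

    random.seed(1)
    k, n = 30, 200    # hypothetical data: 30 successes in 200 trials
    # Under a uniform Beta(1, 1) prior, the posterior for the success
    # rate is Beta(1 + k, 1 + n - k); draw from it directly.
    draws = [random.betavariate(1 + k, 1 + n - k) for _ in range(100_000)]

    def profit(p):
        # Hypothetical profit function: $5 per success, less a $0.50
        # cost per trial.
        return 5.0 * p - 0.50

    profits = [profit(p) for p in draws]
    print(f"P(profit > 0)   = {sum(x > 0 for x in profits) / len(profits):.3f}")
    print(f"expected profit = {statistics.mean(profits):.3f} per trial")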

Clearly, from a classicist's perspective, the Bayesians have failed
in this attempt at justification; otherwise one would have to be a
masochist to stick with the sheer torture of classical inferential
methods. Besides, the Bayesians themselves indulge not a little in
turning bugs into features.

At any rate, I say all that to say this: once it is recognized that
there is a valid (extended) likelihood calculus, as easy to
manipulate as the probability calculus, for directly characterizing
the uncertainty surrounding statistical model parameters, the gap
between the two schools ought to be closed.
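
As a crude sketch of what a direct likelihood characterization can
look like (the binomial data and the conventional 1/8 cutoff are my
own illustrative choices, not anything from this post):

    # Relative likelihood R(p) = L(p) / L(p_hat) for a binomial
    # parameter, read directly as a measure of support for each value
    # of p, with no tail probabilities involved.
    k, n = 7, 20    # hypothetical data: 7 successes in 20 trials
    p_hat = k / n

    def rel_lik(p):
        return (p / p_hat) ** k * ((1 - p) / (1 - p_hat)) ** (n - k)

    # A 1/8 likelihood interval: every p whose likelihood is within a
    # factor of 8 of the maximum.
    grid = [i / 1000 for i in range(1, 1000)]
    inside = [p for p in grid if rel_lik(p) >= 1 / 8]
    print(f"p_hat = {p_hat:.2f}; 1/8 likelihood interval is roughly "
          f"[{min(inside):.3f}, {max(inside):.3f}]")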

I'm not holding my breath, as this may take several generations. We
all reach for the tool we know how to use, not necessarily the best
tool for the job.

> Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
> [EMAIL PROTECTED]         Phone: (765)494-6054   FAX: (765)494-0558

Regards,
S. F. Thomas


