In article <8sill5$gvf$[EMAIL PROTECTED]>,
 <[EMAIL PROTECTED]> wrote:
>In article <[EMAIL PROTECTED]>,
>  [EMAIL PROTECTED] (Robert J. MacG. Dawson) wrote:


                        .................

>> Fair enough: but I would argue that the right question is rarely "if
>> there were no effect whatsoever, and the following model applied, what
>> is the probability that we would observe a value of the following
>> statistic at least as great as what was observed?" and hence that a
>> hypothesis test is rarely the right way to obtain the right answer.
>> Hypothesis testing does what it sets out to do perfectly well; the
>> only question, in most cases, is why one would want that done.

>I agree with this. From what I gauge from your rephrasing of the
>research question, there seems to be no reason why most research
>questions could not be phrased in this manner. Rather, it seems that the
>problems with hypothesis testing result from people misusing it. Like I
>said before, I don't think this can be seen as a problem with hypothesis
>testing; but it is a matter for hypothesis *testers*.

I disagree.  This may be the case for questions of
philosophical belief, but not for action, and publishing an
article, or even discussion with colleagues, is action.

Robert Dawson is quite right; few who understand what
hypothesis testing actually does would use it.  Those
who started out using it, more than two centuries ago, had
the mistaken belief that the significance level was, if not
the probability of the truth of the hypothesis, at least a
good indication of that.  The situation, however, is
generally that the hypothesis, stated in terms of the
distribution of the observations, is at least almost always
false.  So why should the probability that we would observe
a value of the statistic at least as great as what was
observed, from a model which we would not believe anyhow,
even be of importance?
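A quick simulation (my own sketch, not from the post) makes the point
concrete: if the true mean departs from the null by a practically
negligible amount, a large enough sample will still produce an
astronomically small p-value.  The sample size and shift below are
arbitrary choices for illustration.

```python
# Sketch: a point null that is "almost always false" gets rejected
# at huge n even when the departure is practically irrelevant.
import math
import random

random.seed(0)
n = 1_000_000
mu_true = 0.01                 # tiny, practically meaningless shift from H0: mu = 0
data = [random.gauss(mu_true, 1.0) for _ in range(n)]

mean = sum(data) / n
z = mean / (1.0 / math.sqrt(n))          # z statistic for H0: mu = 0, sigma = 1 known
p = math.erfc(abs(z) / math.sqrt(2))     # two-sided p-value from the normal tail

print(f"z = {z:.2f}, p = {p:.2e}")
```

The test answers its own question correctly: the null really is false.
Whether that answer is worth anything, given that almost no point null
is exactly true, is the question being raised above.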

This does not mean that we should not do hypothesis
testing.  The null hypothesis might well be the most useful
approximation available, given the observations.  A more
accurate model need not be more useful.  One must consider
all the consequences.

>> Fair enough... I do not argue with your support of proper controls.
>> However, in the real world, insisting on this would be tantamount to
>> ending experimental research in the social sciences and many
>> disciplines within the life sciences. (You may draw your own
>> conclusions as to the advisability of this <grin>.)

>Certainly, one could argue that anyone who wants to test a hypothesis
>needs to adhere to the same guidelines. The fact that this frequently
>doesn't happen is, again, the fault of people not principles. One quick
>glance at the social psychology literature, for example, reveals a
>history replete with low power, inadequate controls and spurious
>conclusions based on doubtful stats. (I'm going to annoy somebody here I
>just know it <grin>).

One must also consider the consequences of the action in other
states of nature.  Starting out with classical statistics
makes it much harder to consider the full problem.

Hypothesis testing has become a religion.  The belief that 
there must be something this simple is what is driving it.
-- 
This address is for information only.  I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette, IN 47907-1399
[EMAIL PROTECTED]         Phone: (765)494-6054   FAX: (765)494-0558


=================================================================
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
                  http://jse.stat.ncsu.edu/
=================================================================