----- Original Message -----
From: Michael Granaas <[EMAIL PROTECTED]>
To: EDSTAT list <[EMAIL PROTECTED]>
Sent: Thursday, April 13, 2000 8:23 AM
Subject: Re: Hypothesis testing and magic - episode 2
> In addition to defining the variables, some areas do a better job of
> defining, and therefore testing, their models.  The ag example is one where
> not only the variables but also the models are relatively clear.  That is,
> there is one highly plausible reason for rejecting a null that fertilizer
> does not affect crop production:  fertilizer increases crop production.
> You have rejected a model of no effect in favor of a model positing an
> effect.
>
> But in some areas in psychology you will have a situation where many
> theoretical perspectives predict the same outcome relative to a zero
> valued null while the zero valued null reflects no theoretical
> perspective.  In this situation rejecting a zero valued null supports all
> theoretical perspectives equally and differentiates among none of them.
>
> In a recent example a student was citing the research literature
> supporting the convergent validity of some measure.  The evidence used by
> all investigators was that the null of rho = 0 was rejected.  I've seen
> this same thing many times, but this time I saw something different.  The
> smallest sample (n about 95) failed to reject rho = 0 while the remaining
> samples (all n's > 200) successfully rejected rho = 0 and convergent
> validity was declared.  (No r's were actually reported in this review.)
>
> A quick thought experiment, and check of critical value tables, suggests
> that the best estimate of rho from the evidence provided is  some value
> greater than 0 but less than .20.
>
> In this case it seems to me that testing the default zero valued null was
> misleading rather than informative.  In addition to convergent validity it
> seems to me that correlations in the range 0 - .20 could easily be
> explained by at least a couple of other competing models that would not
> support the conclusions drawn.  Only the most trivial link between
> theoretical models and statistical hypotheses exists in this case.
>
> Using Alan's ethnicity and statistical ability example, and assuming for
> the moment that all measures were useful, the first time we reject a no
> effect null we have some sort of useful information.  Now, imagine that 12
> researchers generate 12 different hypotheses explaining the cause of these
> differences.  Current practice has all 12 of these researchers collecting
> data and testing to eliminate the chance model and then declaring that
> their hypothesis  has been confirmed.
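The "quick thought experiment" above can be checked numerically. The sketch below is a hypothetical reconstruction (the original post reports no actual r values): it uses the Fisher z approximation, with standard error 1/sqrt(n - 3), to estimate the smallest |r| that rejects rho = 0 at the two-tailed alpha = .05 level for each sample size mentioned.

```python
import math

def critical_r(n, z_crit=1.96):
    """Approximate critical |r| for rejecting rho = 0 at two-tailed
    alpha = .05, via the Fisher z transform: atanh(r) is roughly normal
    with standard error 1/sqrt(n - 3), so the cutoff back-transforms
    through tanh."""
    return math.tanh(z_crit / math.sqrt(n - 3))

for n in (95, 200, 400):
    print(f"n = {n:4d}: |r| must exceed about {critical_r(n):.3f}")
```

For n of about 95 the cutoff is roughly .20, while for n > 200 it falls below .14, which is consistent with the inference that rho lies somewhere above 0 but below .20 when only the smallest sample fails to reject.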
............................................................................
Good example of many of the current problems.

1. If testing the null hypothesis provides no conclusive information, why
structure the experiment around the null hypothesis? I quoted R.A. Fisher in
a previous message, so rather than repeat it here, I say etcetera. If the
hypothesis explains the measured outcome, the question is whether it does so
conclusively. There is a lot of very well done psychology work that comes to
valid conclusions and gets published in Science. The point is that the work
was very thorough. You have to be very careful in establishing the research
objectives and the roadmap.

2. Re the 12 researchers with different claimed valid hypotheses: this
happens all the time. Any significant work with startling claims will be
retested using alternate approaches. In this case the proof is not the
statistical test, but the fact that others can demonstrate, under different
conditions, that the cause put forth by one of the researchers produces the
observed result. The theory works. If they can't repeat the findings, then
regardless of the statistics, the theory is not accepted. There is a lot of
stuff in the "hard sciences" that gets disproved because it just doesn't
hold up under a more careful experiment.

3. If what is being done is just mathematical exercises (the main output
from the bulk of the university stat departments), then sure, arguing
endlessly about the null hypothesis is fine. One gets visibility among one's
peers when this is done. But it sure doesn't help the researchers build up a
really good plan and method to do a first-class investigation. I fail to see
why there is so much emphasis on a null hypothesis test if the result really
is not important.

DAHeiser
Not Associated with any Stat Department, School or University




===========================================================================
This list is open to everyone.  Occasionally, less thoughtful
people send inappropriate messages.  Please DO NOT COMPLAIN TO
THE POSTMASTER about these messages because the postmaster has no
way of controlling them, and excessive complaints will result in
termination of the list.

For information about this list, including information about the
problem of inappropriate messages and information about how to
unsubscribe, please see the web page at
http://jse.stat.ncsu.edu/
===========================================================================
