Yes, I think we need to be more careful about what is being discussed.  If
you are using "nonparametric" to refer only to tests based on ranks, like
the Mann-Whitney (M-W), that is a very different definition than if you are
referring to tests based on permutation of test statistics under the null
hypothesis.  While M-W and other rank tests can be evaluated within, and
indeed were developed from, a permutation framework, the permutation
framework supports a far greater range of tests.

We can do permutation tests for conditional means in linear models, where
the permutation version of the F-test will differ little from the
normal-theory version, but clearly we are testing estimates of parameters.
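As a minimal sketch of that first case (Python, with simulated data; the
function and variable names are just for illustration), the same overall
F-statistic can be referred either to its permutation distribution or to
the normal-theory F distribution:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def f_statistic(X, y):
        # Overall F-statistic for y = Xb + e, where X includes an intercept column.
        n, p = X.shape
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)        # residual SS, full model
        tss = np.sum((y - y.mean()) ** 2)        # residual SS, intercept-only model
        return ((tss - rss) / (p - 1)) / (rss / (n - p))

    # Simulated data: one predictor with a modest effect.
    n = 40
    x = rng.normal(size=n)
    y = 0.5 * x + rng.normal(size=n)
    X = np.column_stack([np.ones(n), x])

    f_obs = f_statistic(X, y)

    # Permutation reference distribution: shuffle y against X and recompute F.
    n_perm = 9999
    f_perm = np.array([f_statistic(X, rng.permutation(y)) for _ in range(n_perm)])
    p_perm = (np.sum(f_perm >= f_obs) + 1) / (n_perm + 1)

    # Normal-theory p-value for comparison.
    p_norm = stats.f.sf(f_obs, X.shape[1] - 1, n - X.shape[1])

    print(f"F = {f_obs:.2f}  permutation p = {p_perm:.4f}  normal-theory p = {p_norm:.4f}")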
We can do similar procedures where we are instead estimating conditional
medians (or some other quantile) in a linear model, and those estimates and
test statistics will have very different statistical performance than
estimates and permutation tests for conditional means.  But these are still
tests of estimates of parameters, just not of the mean.  We can also perform
omnibus distributional tests such as MRPP (multi-response permutation
procedures), where no specific parameter is being tested.

The real advantage of thinking about "nonparametric" or "distribution-free"
approaches is that, by judicious use of certain test statistics or estimates
evaluated via permutation theory, it is possible to detect important,
relevant effects that are not detected well with tests and estimates of
means (whether those are evaluated by permutation theory or normal theory).
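A minimal sketch of that last point (again Python with simulated,
heavy-tailed data; the statistic functions are just illustrative): the same
permutation machinery is applied to two different statistics, a difference
in means and a difference in medians, and the two tests can behave quite
differently on the same data:

    import numpy as np

    rng = np.random.default_rng(1)

    def perm_pvalue(a, b, stat, n_perm=9999):
        # Two-sided permutation p-value for stat(a, b) under random relabelling.
        obs = stat(a, b)
        pooled = np.concatenate([a, b])
        count = 0
        for _ in range(n_perm):
            p = rng.permutation(pooled)
            if abs(stat(p[:len(a)], p[len(a):])) >= abs(obs):
                count += 1
        return (count + 1) / (n_perm + 1)

    mean_diff = lambda a, b: a.mean() - b.mean()
    median_diff = lambda a, b: np.median(a) - np.median(b)

    # Heavy-tailed data with a genuine location shift between groups.
    a = rng.standard_t(df=2, size=30) + 1.0
    b = rng.standard_t(df=2, size=30)

    print("permutation p, difference in means:  ", perm_pvalue(a, b, mean_diff))
    print("permutation p, difference in medians:", perm_pvalue(a, b, median_diff))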

Brian Cade (USGS)

Rich Ulrich wrote:

>  - I have a comment on an offhand remark of Glen's, at the start of
> his interesting posting -
>
> On Tue, 07 Dec 1999 15:58:11 +1100, Glen Barnett
> <[EMAIL PROTECTED]> wrote:
>
> > Alex Yu wrote:
> > >
> > > Disadvantages of non-parametric tests:
> > >
> > > Losing precision: Edgington (1995) asserted that when more precise
> > > measurements are available, it is unwise to degrade the precision by
> > > transforming the measurements into ranked data.
> >
> > So this is an argument against rank-based nonparametric tests
> > rather than nonparametric tests in general. In fact, I think
> > you'll find Edgington highly supportive of randomization procedures,
> > which are nonparametric.
> >
>  - In my vocabulary, these days, "nonparametric"  starts out with data
> being ranked, or otherwise being placed into categories -- it is the
> infinite parameters involved in that sort of non-reversible re-scoring
> which earns the label, nonparametric.  (I am still trying to get my
> definition to be complete and concise.)
>
> I know that when *nonparametric*  and  *distribution-free*  were the
> two alternatives to ANOVAs, either of the two labels was slapped onto
> people's pet procedures, fairly  indiscriminately;  and a lack of
> discrimination seems to have widened to encompass  *robust*,  later
> on.  Okay, I see that exact evaluation by randomization of a fixed
> sample does not use a t or F distribution for its p-levels.   Okay, I
> see that it is not ANOVA.   But, I'm sorry,  I don't regard a test as
> nonparametric which *does*  preserve and use the original metric and
> means.  Comparison of means is parametric, and that contrasts to
> nonparametric.
>
> Similarly, bootstrapping is a method of "robust variance estimation"
> but it does not change the metric like a power transformation does, or
> abandon the metric like a rank-order transformation does.  If it were
> proper  terminology to say randomization is nonparametric, you would
> probably want to say bootstrapping is nonparametric, too.  (I think
> some people have done so; but it is not widespread.)
>
> --
> Rich Ulrich, [EMAIL PROTECTED]
> http://www.pitt.edu/~wpilib/index.html
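
To make the bootstrap point above concrete, here is a minimal sketch of
bootstrap variance estimation for a sample mean (Python, simulated skewed
data): the resampled statistic is still the mean on the original
measurement scale; only its sampling variability is estimated without a
normal-theory formula.

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.lognormal(mean=0.0, sigma=1.0, size=50)   # skewed data, original units

    # Resample the data with replacement and recompute the mean each time.
    boot_means = np.array([rng.choice(x, size=x.size, replace=True).mean()
                           for _ in range(10000)])

    print(f"sample mean  = {x.mean():.3f}")
    print(f"bootstrap SE = {boot_means.std(ddof=1):.3f}")
    print(f"classical SE = {x.std(ddof=1) / np.sqrt(x.size):.3f}")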

