Dennis Roberts asked, imagining a testing-free universe:

>> what would the vast majority of folks who either do inferential work
>> and/or teach it ... DO????
>> what analyses would they be doing? what would they be teaching?

I wrote:
> *  students would be told in their compulsory intro stats that
>    "a posterior probability of 95% or greater is called
>    'statistically significant', and we say 'we believe the
>    hypothesis'. Anything less than that is called 'not
>    statistically significant', and we say 'we disbelieve the
>    hypothesis'."

and Herman Rubin responded:

> Why?  What should be done is to use the risk of the procedure,
> not the posterior probability.  The term "statistically significant"
> needs abandoning; it is whether the effect is important enough
> that it pays to take it into account.

Dennis asked what _would_ happen, not what _should_.  Most of the abuses we
see around us are not the fault of hypothesis testing _per_se_, but of
statistics users who believe:

    (a) that their discipline ought to be a science;
    (b) that statistics must be used to make this so;
    (c) and that it is unreasonable to expect them to _understand_
        statistics just because of (a) and (b).

Granted, if they did understand statistics, they would not test hypotheses
nearly as often as they do. That said, I am not entirely persuaded
that risk calculation is the whole story, either. In many pure research
situations, "risk" is just not well defined. What is the risk involved in
believing (say) that the universe is closed rather than open?
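
    To make the contrast concrete, here is a rough sketch (in Python, with
invented numbers; it is nobody's actual procedure, least of all Herman's) of
the two decision rules under discussion. The thing to notice is that the
risk-based rule cannot even be written down until someone supplies the two
loss values:

# Two ways to decide about a hypothesis H, given a posterior probability
# P(H | data).  All numbers below are invented for illustration.

def posterior_threshold_rule(posterior, threshold=0.95):
    # The parody intro-stats rule: "believe" H iff the posterior clears a cutoff.
    return "believe H" if posterior >= threshold else "disbelieve H"

def risk_based_rule(posterior, loss_believe_when_false, loss_disbelieve_when_true):
    # A decision-theoretic rule: take the action with the smaller expected loss.
    # It requires the analyst to supply the two losses -- which, for questions
    # of pure belief (is the universe open or closed?), may not be well defined.
    expected_loss_believe = (1 - posterior) * loss_believe_when_false
    expected_loss_disbelieve = posterior * loss_disbelieve_when_true
    return ("believe H" if expected_loss_believe <= expected_loss_disbelieve
            else "disbelieve H")

posterior = 0.80                                   # hypothetical posterior for H
print(posterior_threshold_rule(posterior))         # -> disbelieve H (below 0.95)
print(risk_based_rule(posterior, 1.0, 1.0))        # symmetric losses -> believe H
print(risk_based_rule(posterior, 50.0, 1.0))       # inflated loss   -> disbelieve H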

    Moreover, suppose we elected Herman to the post of Emperor of Inference
(with the power of the "Bars and the Axes"?) to enforce a risk-based
approach to statistics (not that he'd take it, but bear with me...). Would
the situation really improve?

    My own feeling is that, in many "soft" science papers of the sort where
the research is not immediately applied to the real world, but may affect
public policy and personal politics, a "risk" approach would be disastrous.
If the researcher had to assign "risks" to outcomes that were merely a
matter of correct or incorrect belief, it would be all too tempting to
assign a large risk to an outcome that "would set back the cause of X fifty
years" and, conversely, a small risk to accepting a belief that might be
considered "if not true, at least a useful myth." (Exercise: provide your
own examples).  Everything would be lowered to the level of Pascal's Wager -
surely the canonical example of the limitations of a risk-based approach?
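
    And to spell out why such guessed-at risks swamp the evidence, here is a
back-of-the-envelope version of the Wager in the same notation as the sketch
above (again, all numbers invented):

# Pascal's Wager in miniature: however weak the evidence, a sufficiently
# large guessed loss for "disbelieving when true" forces belief.
posterior = 0.01                                  # hypothetical, and tiny
loss_believe_when_false = 1.0
loss_disbelieve_when_true = 1_000_000.0           # "sets back the cause of X fifty years"

expected_loss_believe = (1 - posterior) * loss_believe_when_false      # 0.99
expected_loss_disbelieve = posterior * loss_disbelieve_when_true       # 10000.0
# The minimum-risk action is "believe H", no matter what the data said.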

    One might argue that in such a situation the rare reader who intends to
take action, and not the writer, should do the statistics. Unfortunately, in
the real world, that won't wash. People want simple answers, and given the
flood of information we must deal with just to keep up with the literature
in any subject today, this is not an entirely foolish or lazy
desire. It is considered the author's responsibility to reach a conclusion,
not just to present a mass of undigested data for posterity to analyze.
Thus, it would be unrealistic to expect any discipline, forced to use
risk-based inference, to do other than have the author guess at risks (and
work with those guesses) in situations where objective measurements of risk
don't exist.

    -Robert Dawson




