I've been involved in off-list discussion with Duncan Murdoch.  At one 
stage there I was about to retire in disgrace.  But sighs of relief... his 
objection is Bayesian.  OK.  The p value is a device to put in a 
publication to communicate something about the precision of an estimate of 
an effect, under the assumption of no prior knowledge of the magnitude of 
the true value of the effect.  If we assume no prior knowledge of the true 
value, then my claim stands: the p value for a one-tailed test is the 
probability of an opposite true effect, that is, any true effect opposite 
in sign or impact to that observed.
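
To make the equivalence concrete, here is a minimal sketch in Python.  It 
assumes a normally distributed estimate with known standard error and a 
flat prior on the true value; the effect (d) and standard error (se) are 
hypothetical numbers, not from any real study.

    # One-tailed p value vs. posterior probability of an opposite-sign
    # true effect, under a flat prior and a normal sampling distribution.
    from scipy.stats import norm

    d, se = 1.8, 1.0  # hypothetical observed effect and its standard error

    # One-tailed p: P(estimate >= d) when the true effect is zero.
    p_one_tailed = 1 - norm.cdf(d / se)

    # With a flat prior the posterior for the true effect is Normal(d, se^2),
    # so the probability that the true effect is opposite in sign to d is:
    p_opposite_sign = norm.cdf(0, loc=d, scale=se)

    print(p_one_tailed, p_opposite_sign)  # both ~0.036: the two coincide

Both numbers come out to about 0.036, which is the point: with no prior 
knowledge, the one-tailed p and the probability of an opposite true effect 
are the same quantity.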

I can't see how a Bayesian perspective dilutes or invalidates this 
interpretation.  The same Bayesian perspective would make you re-evaluate 
the p value under its conventional interpretation.  In other words, if you 
have some other reason for believing that the true value has the same sign 
as the observed value, reduce the p value in your mind; if you believe it 
has the opposite sign, increase it.

If we are stuck with p values, then I believe we should start showing 
one-tailed p values, along with 95% confidence limits for the effect.  Both 
of these are far, far easier to understand than hypothesis testing and 
statistical significance.  Put a note in the Methods saying something like: 
"The p values, which were all derived from one-tailed tests, represent the 
probability that the true value of the effect is opposite in sign 
(correlations; differences or changes in means) or impact (relative risks, 
odds ratios) to that observed."
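
In practice the reporting is straightforward.  Here is a sketch, again in 
Python with hypothetical numbers, assuming a normally distributed estimate 
with known standard error:

    # Report 95% confidence limits plus a one-tailed p value
    # (the chance of an opposite-sign true effect, given a flat prior).
    from scipy.stats import norm

    d, se = 1.8, 1.0                       # hypothetical effect and standard error
    p = norm.cdf(-abs(d) / se)             # one-tailed p, works for either sign of d
    lo, hi = d - 1.96 * se, d + 1.96 * se  # 95% confidence limits

    print(f"effect = {d}, 95% CL = ({lo:.2f} to {hi:.2f}), one-tailed p = {p:.3f}")

For effects like relative risks or odds ratios, the same calculation would 
be done on the log scale before back-transforming the limits.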

Will


