In article <[EMAIL PROTECTED]>,
Jerry Dallal  <[EMAIL PROTECTED]> wrote:
>Herman Rubin wrote:

>> I will address this point first, as it is equally valid no
>> matter what one's view of statistical inference happens to
>> be.  The point is that the probability of publication
>> depends on the p-value.  If you have 100 studies published,
>> and the distribution of p-values is roughly uniform between
>> 0 and .05, this should be looked upon as selection bias,
>> and not as meaning anything.  On the other hand, if one
>> had 100 studies with the p-values coming from a normal
>> distribution with mean 1 and variance 1, while these
>> p-values would be considerably larger, the evidence of
>> the effect would be overwhelming.

>I missed something.  A truncated N(0,1)?  But why would 
>the evidence be overwhelming if the P values were larger?

What was meant was that the p-values came from statistics which
were a sample from a N(1,1) distribution, but were computed
using the right tail of a N(0,1) distribution.

If this is the model, the sufficient statistic is the sum of the
100 statistics, which is N(100,100) under the alternative and
N(0,100) under the null.  A sum of 50 already corresponds to a
one-sided p-value of less than 10^-6, and the probability under
the alternative that the sum fails to exceed 50 is just as small.
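
(A quick numerical sketch, not part of the original exchange: the
Python code below, using numpy and scipy, draws 100 statistics from
N(1,1), computes right-tail p-values under N(0,1), and combines them
through the sufficient statistic.  The seed and sample size are
illustrative only.)

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    n_studies = 100

    # Each study's test statistic is a draw from N(1, 1) ...
    z = rng.normal(loc=1.0, scale=1.0, size=n_studies)
    # ... but its p-value is the right tail of N(0, 1).
    p = norm.sf(z)

    # Most individual studies are not significant at .05.
    print("fraction of p-values below .05:", np.mean(p < 0.05))

    # The sum is sufficient: N(0, 100) under the null, N(100, 100) here.
    s = z.sum()
    print("combined one-sided p-value:", norm.sf(s, loc=0.0, scale=10.0))

    # A sum of only 50 already gives p = norm.sf(5.0), about 2.9e-7 < 1e-6.
    print("p-value at sum = 50:", norm.sf(50.0, loc=0.0, scale=10.0))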

Most of the approaches to meta-analysis assume that all
studies are published.  If the decision about whether
something is published depends on the p-value, that 
needs to be taken into account in the analysis.
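
(Again only an illustrative sketch, not from the original post: if one
simulates a large number of null studies and keeps only those with
p < .05, the surviving p-values are roughly uniform on (0, .05) even
though there is no effect at all, which is the selection bias described
above.)

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)

    # 200,000 studies with no effect at all: statistics are N(0, 1).
    z = rng.normal(loc=0.0, scale=1.0, size=200_000)
    p = norm.sf(z)                      # one-sided p-values

    published = p[p < 0.05]             # only "significant" results appear in print

    print("published studies:", published.size)
    print("mean published p-value:", published.mean())  # about .025: uniform on (0, .05)

    # A meta-analysis treating these as all the studies run would see
    # overwhelming "evidence" manufactured entirely by the selection rule.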

-- 
This address is for information only.  I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
[EMAIL PROTECTED]         Phone: (765)494-6054   FAX: (765)494-0558


