Dear Friends,
    According to Gelman et al. (2003), "...Bayesian p-values are defined as
the probability that the replicated data could be more extreme than the
observed data, as measured by the test quantity: p = Pr[T(y_rep, theta) >=
T(y, theta) | y]...", where p is the Bayesian p-value, T is the test
statistic, y_rep is data from a replicated experiment, y is data from the
original experiment, and theta is the parameter (vector) of interest. My
question is: how do I calculate p (the Bayesian p-value) in R from the chain
I obtained from the Gibbs sampler? I have a matrix 'samp' (10,000 x 86) in
which I stored the results of the 10,000 iterations for each of the 86
variables of interest.
    Something I want to add is that Gelman also states that "...in practice,
we usually compute the posterior predictive distribution using simulation.
If we already have L simulations from the posterior density of theta, we
just draw one y_rep from the predictive distribution for each simulated
theta; we now have L draws from the joint posterior distribution,
p(y_rep, theta | y). The posterior predictive check is the comparison between
the realized test quantities, T(y, theta^l), and the predictive test
quantities, T(y_rep, theta^l). The estimated p-value is just the proportion
of these L simulations for which the test quantity equals or exceeds its
realized value; that is, for which T(y_rep, theta^l) >= T(y, theta^l)..."
    Does this mean that the usual p-value calculation applies, i.e.,
pv <- 1 - chosen_CDF(parameters)? Can anybody clarify this for me?
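    To make my question concrete, here is a rough sketch of what I think the
mechanics would look like for a toy example (a normal model with known sd = 1
and a chi-square-type discrepancy as T). This is only my guess at the
procedure, not my actual model; the data, model, and test quantity are made
up for illustration:

## Toy example, NOT my real model: normal data with known sd = 1,
## unknown mean mu, and discrepancy T(y, theta) = sum((y - mu)^2)
set.seed(1)
y <- rnorm(30, mean = 0, sd = 1)      # stand-in for the observed data
n <- length(y)

## L posterior draws of mu (here from the conjugate posterior;
## in my case they would come from a column of 'samp')
L  <- 10000
mu <- rnorm(L, mean = mean(y), sd = 1 / sqrt(n))

Trep <- numeric(L)
Tobs <- numeric(L)
for (l in 1:L) {
  yrep    <- rnorm(n, mean = mu[l], sd = 1)   # one replicated data set per draw
  Trep[l] <- sum((yrep - mu[l])^2)            # T(y_rep, theta^l)
  Tobs[l] <- sum((y    - mu[l])^2)            # T(y, theta^l), the realized value
}

## estimated Bayesian p-value: proportion of draws where the predictive
## test quantity equals or exceeds the realized one
pval <- mean(Trep >= Tobs)
pval

Is this the right idea, i.e. the p-value is simply the Monte Carlo proportion
mean(Trep >= Tobs), rather than 1 minus some closed-form CDF?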
Thanks for your help.
Jorge
