OK, I'm going to expose my ignorance and biases here and give y'all an
opportunity to set me straight.

People keep saying things about Bayesians that I just don't get. For
example, that Bayesians require precise probabilities and that they can't
represent the *uncertainty* of our probability estimates.

But a standard introductory example in Bayesian estimation is inferring the
probability of Heads for an unknown coin. And as given in, for example,
Sivia, you watch as an initially broad distribution becomes
more and more narrow as you gather evidence about the coin.
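
For instance, a quick sketch in Python (my own toy numbers, invented just
to show the narrowing; not taken from any book):

    # Beta-Bernoulli updating: the posterior for p(Heads) narrows
    # as evidence accumulates. Flip counts below are made up.
    from scipy.stats import beta

    for flips, heads in [(10, 6), (100, 55), (1000, 520)]:
        post = beta(1 + heads, 1 + flips - heads)   # flat Beta(1,1) prior
        lo, hi = post.interval(0.95)
        print(f"{flips:4d} flips: 95% interval for p(H) = ({lo:.2f}, {hi:.2f})")

Run it and the 95% interval shrinks from roughly (0.31, 0.85) at 10 flips
to about (0.49, 0.55) at 1000.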

Isn't that all we need? Doesn't the flatness of our pdf encapsulate
"degree of ignorance"? 

Rolf Haenni writes about degrees of support:
}What would Bayesians do in such a case? They would start by saying 
}p(X|A)=1 and p(A)=0.1. So what is p(X)?
}    p(X) = p(X|A)p(A)+p(X|NOT-A)p(NOT-A) = 0.1 + p(X|NOT-A)*0.9.
}Correct. But what is p(X|NOT-A)??? Bayesians tend then to assume 
}p(X|NOT-A)=0.5 and to compute p(X)=0.55.

I would have thought such a "max entropy" Bayesian would put a flat prior
between 0 and 1 on p(X|NOT-A), rather than a Dirac delta at p=0.5.
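
That keeps the ignorance on the books. A quick sketch (the Monte Carlo
here just illustrates the algebra: with p(X|NOT-A) uniform on [0,1],
p(X) = 0.1 + 0.9*p(X|NOT-A) is uniform on [0.1, 1.0]):

    # Flat prior on theta = p(X|NOT-A) instead of a point mass at 0.5.
    import numpy as np

    rng = np.random.default_rng(0)
    theta = rng.uniform(0.0, 1.0, size=100_000)  # flat prior on p(X|NOT-A)
    p_x = 0.1 + 0.9 * theta                      # law of total probability
    print(f"mean of p(X) = {p_x.mean():.3f}")    # ~0.55, Haenni's point value
    print(f"sd of p(X)   = {p_x.std():.3f}")     # ~0.26: the ignorance survives

Same 0.55 expectation Haenni gets, but p(X) is spread over [0.1, 1.0]
rather than pinned at a single point.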

Am I missing something?

-Charles

--
Charles R. Twardy, Res.Fellow,  Monash University, School of CSSE
ctwardy at alumni indiana edu   +61(3) 9905 5823 (w)  5146 (fax)
