Jacques Basaldúa wrote:
Hello,
Just a clarification of something I may have explained badly; I see we
agree on the fundamentals.
Correcting the bias in that estimate should lead to better sampling.
This is usually called a "continuity correction":
http://en.wikipedia.org/wiki/Continuity_correction
> Well, the "assumption" that p is estimated from the binomial because we
> are counting Bernoulli experiments of constant p is a mathematically
> sound method used universally. It does not require Go knowledge; that's
> what I meant. When n is big enough, the binomial converges to the
> normal and ...
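Concretely, a minimal sketch (assuming SciPy; the counts are made up)
of the normal approximation mentioned in that quote, with and without
the +/-0.5 continuity correction from the Wikipedia link above:

    # Compare the exact Binomial(n, p) tail probability with its normal
    # approximation, with and without the +/-0.5 continuity correction.
    import math
    from scipy.stats import binom, norm

    n, p, w = 50, 0.4, 25
    exact = binom.cdf(w, n, p)                # exact P(W <= w)
    mu, sigma = n * p, math.sqrt(n * p * (1 - p))
    plain = norm.cdf(w, mu, sigma)            # plain normal approximation
    corrected = norm.cdf(w + 0.5, mu, sigma)  # with continuity correction
    print(f"exact={exact:.4f} plain={plain:.4f} corrected={corrected:.4f}")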
I respond to various items below. Sections of the original e-mail that
I'm not responding to were completely deleted.
Jacques Basaldúa wrote:
> Hello Jason
> I think what you are trying to do can be done more easily.
I guess the key question is "what am I trying to do?".
In UCT, the next move ...
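For context on that last point, a minimal sketch of the standard UCB1
rule UCT uses to pick the next move to explore (the exploration
constant C and the (wins, visits) pairs are illustrative assumptions):

    import math

    def uct_select(children, total_visits, C=1.4):
        """Index of the child maximizing win rate plus exploration bonus."""
        def ucb1(wins, visits):
            if visits == 0:
                return float("inf")   # unvisited moves are tried first
            return wins / visits + C * math.sqrt(
                math.log(total_visits) / visits)
        return max(range(len(children)),
                   key=lambda i: ucb1(*children[i]))

    # Three candidate moves with (wins, visits) statistics:
    print(uct_select([(3, 10), (5, 12), (0, 0)], total_visits=22))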
Hello Jason
I think what you are trying to do can be done more easily.
A. You have a Bernoulli random variable whose result is 0 or 1
following an unknown probability p. (Excuse me for explaining
obvious things; this is for anyone who reads it.) You want to
estimate p from a random sample. ...
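A minimal sketch of this setup, assuming NumPy (the true p and the
sample size are made up for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    p_true = 0.37                        # unknown in practice
    sample = rng.random(1000) < p_true   # 1,000 Bernoulli(p) draws
    p_hat = sample.mean()                # estimate: successes / trials
    stderr = np.sqrt(p_hat * (1 - p_hat) / sample.size)
    print(f"p_hat={p_hat:.3f} +/- {stderr:.3f}")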
> I'm actually kind of surprised at the dissimilarity between the
> normal and multinormal. I'd expect the multinormal to boil down to the
> normal, but it looks like the standard normal has additional terms.
the multivariate normal has the 1-d normal as a special case, but
instead of normalizing ...
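That special case is easy to check numerically; a minimal sketch
assuming SciPy (the mean and variance are arbitrary):

    from scipy.stats import multivariate_normal, norm

    mu, var = 0.5, 2.0
    x = 1.3
    print(multivariate_normal.pdf(x, mean=mu, cov=var))  # 1-d case
    print(norm.pdf(x, loc=mu, scale=var ** 0.5))         # same value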
Actually, the example given (and the first element in the table) is
exactly what I stumbled upon. In the scope of MC, I think this style of
analysis is 100% correct for evaluating leaf nodes. I guess time will
tell whether assuming a conjugate distribution with prior
hyperparameters is a good ...
> Maybe other simple solutions exist,
You might want to check out those distributions that magically
have nice properties with respect to the Bayesian integral.
They're called conjugate priors, and lots of distributions have
nice, easy-to-calculate conjugate priors.
There's a table here:
http://en.wikipedia.org/wiki/Conjugate_prior
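As an illustration of the first entry in that table, the Beta
distribution is conjugate to the Bernoulli/binomial likelihood: a
Beta(alpha, beta) prior on the win probability p, after w wins in n
simulations, becomes a Beta(alpha + w, beta + n - w) posterior. A
minimal sketch (the counts are made up):

    def beta_update(alpha, beta, w, n):
        """Posterior hyperparameters after w wins in n Bernoulli trials."""
        return alpha + w, beta + (n - w)

    alpha, beta = 1.0, 1.0   # uniform Beta(1, 1) prior
    w, n = 7, 10             # example counts
    a_post, b_post = beta_update(alpha, beta, w, n)
    print(f"posterior Beta({a_post}, {b_post}), "
          f"mean={a_post / (a_post + b_post):.3f}")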
Based on my analysis, estimating a move's probability of winning by
taking the number of winning simulations (w) and dividing it by the
total number of simulations (n) is actually biased. I tried to break
this e-mail up into sections for easy digestion by the various people
who might read this.
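A minimal sketch of the contrast at issue (example counts made up, not
from the e-mail): the raw frequency w/n versus the Beta(1, 1) posterior
mean (w + 1)/(n + 2) that the conjugate-prior analysis above yields;
for small n the two differ noticeably:

    # Raw win-rate w/n versus the Beta(1, 1) posterior mean.
    for w, n in [(0, 1), (1, 2), (5, 10), (50, 100)]:
        raw = w / n
        post = (w + 1) / (n + 2)
        print(f"w={w:3d} n={n:3d}  w/n={raw:.3f}  (w+1)/(n+2)={post:.3f}")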