Hello all,

We just presented our paper describing MoGo's improvements at ICML,
and we thought we would pass on some of the feedback and corrections
we have received.
(http://www.machinelearning.org/proceedings/icml2007/papers/387.pdf)

The way that we incorporate prior knowledge in UCT can be seen as a
Bayesian prior, and corresponds exactly to the Dirichlet prior (more
precisely to the Beta prior, since the outcomes here are binomial).
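For the curious, here is a minimal sketch of that equivalence, assuming the usual "virtual counts" formulation (the function name and the Beta parameterisation below are illustrative, not taken from our code): initialising a node with m_prior virtual wins out of n_prior virtual simulations amounts to a Beta(m_prior, n_prior - m_prior) prior on that node's win probability, and the node's value estimate becomes the posterior mean.

```python
def value_with_prior(wins, visits, m_prior, n_prior):
    """Posterior-mean win-rate estimate for a node: real simulation
    counts plus virtual counts encoding a Beta prior.

    Equivalent to a Beta(m_prior, n_prior - m_prior) prior on the
    node's win probability, updated with `wins` out of `visits`.
    """
    return (wins + m_prior) / (visits + n_prior)

# A fresh node (no real simulations) falls back to the prior mean:
print(value_with_prior(0, 0, 3, 4))    # prior mean m_prior/n_prior = 0.75

# As real simulations accumulate, they dominate the virtual counts:
print(value_with_prior(10, 20, 3, 4))  # (10 + 3) / (20 + 4)
```

As visits grows, the estimate converges to the empirical win rate wins/visits, so the prior only steers the search early on.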

The cumulative result is only given with the prior knowledge on top
of RAVE, but it could have been done the other way round with the
same type of results. Each particular improvement is largely
independent of the others.

In Figure 5, the label of the horizontal axis should read m_prior
rather than n_prior.

All experiments (except the default policy) were played against GnuGo
level 10, not level 8.

Any other comments are welcome!
Sylvain & David
_______________________________________________
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/
