You have an input that represents whose turn it is (one input for white, one
for black; the value is one if that player is on move and zero otherwise). I
think that's in the original Tesauro setup, isn't it?
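A minimal sketch of that two-input turn encoding (the function name and the string representation of the player are illustrative, not gnubg's actual input layout):

```python
def turn_inputs(player_on_move):
    """Return (white_on_move, black_on_move) as 1.0 / 0.0 network inputs."""
    assert player_on_move in ("white", "black")
    white = 1.0 if player_on_move == "white" else 0.0
    # Exactly one of the two indicator inputs is ever set.
    return (white, 1.0 - white)
```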
On Dec 10, 2011, at 1:10 AM, Joseph Heled jhe...@gmail.com wrote:
Well, I am not sure how
On Fri, 9 Dec 2011, Mark Higgins wrote:
I took a look through eval.c but it's a bit daunting. :)
The attached graph may be useful when trying to understand gnubg's
evaluation code.
[Attachment: pprof1742.0.pdf, Adobe PDF document]
Bug-gnubg mailing list
Hi Mark,
If I take a given board and translate the position into the inputs and then
evaluate the network, it gives me a probability of win. If I then flip the
board's perspective (i.e. white vs. black) and do the same, I get another
probability of win. Those two probabilities should sum to one.
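That symmetry makes a handy sanity check on an evaluator. A sketch, where `net_eval` and `flip` are hypothetical stand-ins for the real network evaluation and perspective-flip functions:

```python
def check_symmetry(net_eval, flip, board, tol=1e-6):
    """True if the win estimates from both perspectives sum to ~1."""
    p = net_eval(board)                 # win prob for the side on move
    p_flipped = net_eval(flip(board))   # same position, other perspective
    return abs((p + p_flipped) - 1.0) < tol
```

A perfectly symmetric net passes this for every position; a net trained without the symmetry built in will typically show small violations.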
Thx! Makes sense. Though I wonder whether adding back the whose-move-is-it
input and halving the hidden-to-output weights ends up as a net benefit for
training. Maybe I'll test it out.
On Dec 10, 2011, at 2:06 PM, Frank Berger fr...@bgblitz.com wrote:
Hi Mark,
If I take a given
I notice that in gnubg and other backgammon neural networks the probability of
gammon gets its own output node, alongside the probability of (any kind of)
win. Doesn't this sometimes mean that the estimated probability of gammon could
be larger than the probability of win, since both sigmoid outputs run
independently from 0 to 1?
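The separate sigmoid outputs are indeed unconstrained relative to each other, so one common remedy is to clamp the raw estimates after the forward pass. A minimal sketch of that idea (the clamping scheme is illustrative, not necessarily gnubg's exact post-processing):

```python
def sanitize(p_win, p_gammon):
    """Enforce the logical ordering of the two win-related outputs."""
    # You cannot win a gammon without winning, so cap one by the other.
    p_gammon = min(p_gammon, p_win)
    return p_win, p_gammon
```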