Re: [Computer-go] Creating the playout NN

2016-06-13 Thread Stefan Kaitschick
>
> The purpose is to see if there is some sort of "simplification" available
> to the emerged complex functions encoded in the weights. It is a typical
> reductionist strategy, especially where there is an attempt to converge on
> human conceptualization.
>
>
That's an interesting way to look at it. If you did this with several
smaller NNs of varying complexity and measured how good each one is, you
would get some kind of numeric estimate of the complexity of the encoded
concepts. Of course, there is the slight problem that we would also need
to map those "simple" NNs to concepts somehow.
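
To make the capacity sweep concrete, here is a rough sketch of how it could
look, assuming a PyTorch setup and a hypothetical frozen "teacher" policy net
that maps (batch, 8, 19, 19) feature planes to 361 move logits. All names,
sizes and the feature encoding are made up for illustration, not taken from
any actual engine.

import torch
import torch.nn as nn
import torch.nn.functional as F

BOARD, PLANES = 19, 8   # assumed encoding: 8 feature planes per board point

class SmallPolicyNet(nn.Module):
    """A small convolutional policy net whose capacity is set by `channels`."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(PLANES, channels, 3, padding=1)
        self.head = nn.Conv2d(channels, 1, 1)      # 1x1 conv to per-point logits

    def forward(self, x):                          # x: (batch, PLANES, 19, 19)
        return self.head(F.relu(self.conv(x))).flatten(1)   # (batch, 361)

def top1_agreement(student, teacher, boards):
    """Fraction of positions where the small net picks the big net's top move."""
    with torch.no_grad():
        return (student(boards).argmax(1) ==
                teacher(boards).argmax(1)).float().mean().item()

# Sweep capacity and watch where agreement with the big net saturates:
# for ch in (4, 8, 16, 32, 64):
#     student = SmallPolicyNet(ch)
#     ... train student to imitate teacher on a large position set ...
#     print(ch, top1_agreement(student, teacher, held_out_boards))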

Re: [Computer-go] Creating the playout NN

2016-06-13 Thread Stefan Kaitschick
>
> BTW, by improvement, I don't mean higher Go playing skill...I mean
> appearing close to the same level of Go playing skill _per_ _move_ with far
> less computational cost. It's the total game outcomes that will fall.
>
>
For the playouts you always need a relatively inexpensive computation,
because for every invocation of the main NN in the tree you need several
hundred cheaper calls in the playout. So the playout NN will have to be
orders of magnitude faster. Still, replacing a crude fast NN with a
slightly less crude fast NN would surely be beneficial.
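
Just to put rough numbers on the speed gap (the figures are made up to show
the scale, not measurements from any engine):

# Illustrative budget only: assume ~3 ms per main-NN evaluation and ~300
# playout moves per tree evaluation; then the playout net gets roughly
# main_nn_time / 300 per move before it dominates the cost.
main_nn_ms = 3.0
playout_moves_per_eval = 300
budget_us = main_nn_ms * 1000 / playout_moves_per_eval
print(budget_us)   # ~10 microseconds per playout move, i.e. orders of magnitude faster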
I don't know whether bots other than AlphaGo are already using the
self-play improvement, but when they do, it will be helpful there too,
because the added knowledge of the main NN can be transferred down to the
playout NN.
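
One standard way to do that transfer is distillation: train the small playout
net on the main net's move distribution (soft targets) instead of on raw game
data. A minimal sketch of one training step, again assuming PyTorch, a frozen
"teacher" main net that outputs 361 move logits, and an illustrative
temperature parameter:

import torch
import torch.nn.functional as F

def distill_step(student, teacher, boards, optimizer, temperature=1.0):
    """One step of fitting the small playout net to the main net's soft targets."""
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(boards) / temperature, dim=1)
    student_logp = F.log_softmax(student(boards) / temperature, dim=1)
    # KL divergence between the student's and teacher's move distributions.
    loss = F.kl_div(student_logp, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()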