When TensorFlow was first released, I used it to implement a CNN for move
prediction and evaluation, and I requested the addition of a function to
implement chain pooling: https://github.com/tensorflow/tensorflow/issues/549

It's now implemented here:
https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/unsorted-segment-max
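
If anyone wants to experiment with it: once you have per-point segment
ids, the whole pooling step is a single call in the Python API. A minimal
sketch, with dummy features and ids (the names feats/chain_ids are mine,
not from the issue):

    import tensorflow as tf

    # Toy example: feats is a flattened 19x19 feature plane, chain_ids[i]
    # is the chain index of point i (empty points get unique ids so they
    # pool only with themselves), num_chains is the number of distinct ids.
    feats = tf.random.normal([361])            # dummy feature values
    chain_ids = tf.range(361, dtype=tf.int32)  # dummy ids: each point alone
    num_chains = 361

    pooled = tf.math.unsorted_segment_max(feats, chain_ids, num_chains)
    # Broadcast each chain's maximum back onto the board:
    chain_pooled = tf.gather(pooled, chain_ids)
    # The sum variant David mentions is tf.math.unsorted_segment_sum,
    # with the same arguments.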

By the time they got around to implementing it I wasn't actively doing
computer go anymore (I went back to chess for a while), so I haven't
actually used it. But it is a very natural idea.
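
The segment ids themselves (one id per chain of solidly-connected stones,
as David describes below, plus a unique id for each empty point) are cheap
to compute on the CPU before feeding the graph. A hypothetical flood-fill
sketch, not code from any actual engine:

    # board[y][x] is 0 for empty, 1 for black, 2 for white.
    def chain_ids(board, size=19):
        ids = [[-1] * size for _ in range(size)]
        next_id = 0
        for y in range(size):
            for x in range(size):
                if ids[y][x] != -1:
                    continue
                ids[y][x] = next_id
                color = board[y][x]
                if color != 0:  # flood-fill this stone's whole chain
                    stack = [(y, x)]
                    while stack:
                        cy, cx = stack.pop()
                        for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                       (cy, cx - 1), (cy, cx + 1)):
                            if (0 <= ny < size and 0 <= nx < size
                                    and ids[ny][nx] == -1
                                    and board[ny][nx] == color):
                                ids[ny][nx] = next_id
                                stack.append((ny, nx))
                next_id += 1
        return ids, next_id  # next_id doubles as num_chains for the op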

Regards,
Álvaro.

On Fri, Aug 18, 2017 at 2:14 PM, David Wu <lightvec...@gmail.com> wrote:

> While browsing online, I found an interesting idea, "chain pooling",
> presented here:
> https://github.com/jmgilmer/GoCNN
>
> The idea is to have some early layers that perform a max-pool across
> solidly-connected stones. I could also imagine it being useful to perform a
> sum. So the input would be a 19x19 layer, and the output would be a 19x19
> layer where the output at a given position, if that position is occupied by
> a stone, is equal to the maximum (or the sum) of all the values in the
> input layer across all stones that are solidly connected to that stone.
>
> One might imagine going further and allowing the neural net some early
> convolutional layers that determine the connectivity strength for this
> pooling between groups, so that it could choose to pool across definite
> single-point eyes or bamboo joints, etc. One would probably not want to
> force every feature through this operation, so perhaps only some feature
> planes would be fed through it, or perhaps all of them, with the identity
> transformation also passed along as an output of the layer to feed into
> the next.
>
> Speculatively, in the best case this might improve the neural net's
> ability to evaluate large semeai or to judge the status of large dragons,
> by letting it propagate liberty-count information (including virtual
> liberties due to approach moves) and information about eyes across the
> board more rapidly than a series of local convolutions could. In fact,
> convolutional layers followed by an early pooling of this sort would make
> liberties easy for the neural net to compute on its own, so they would no
> longer be strictly necessary as an input feature, although one would
> probably still want to provide them to save the network the effort of
> learning them.
>
> Of course, this idea could also easily turn out to be worthless. One
> thing I'm not at all sure about is how GPU-friendly this kind of
> operation can be made, since I don't understand GPUs. Any thoughts?
>
_______________________________________________
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go
