It all comes down to having a reasonable way to search the MCTS’s tree. An
elegant tool would be wonderful, but even something basic would allow a
determined person to find interesting things. When I was debugging SlugGo,
which had a tree as wide as 24 and as deep as 10, with nothing other than
Major changes in the evaluation probability would likely have a horizon of
a few moves behind them that would be interesting to evaluate more closely.
With a small window like that, a deeper/more exhaustive search might work.
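The search described above can be sketched as a tree walk that flags nodes whose evaluation probability swings sharply relative to their parent, then returns the few preceding moves as the window worth a deeper re-search. The node layout, swing threshold, and window size below are all illustrative assumptions, not SlugGo's actual structures:

```python
# Sketch: scan an MCTS-style tree for nodes whose evaluation probability
# jumps sharply from the parent's, then report the small window of moves
# leading up to the swing. Node fields, threshold, and window size are
# assumptions for illustration only.

class Node:
    def __init__(self, move, win_prob, children=None):
        self.move = move            # move that produced this position
        self.win_prob = win_prob    # evaluation probability at this node
        self.children = children or []

def find_swings(node, threshold=0.15, path=()):
    """Yield paths ending at nodes where win_prob jumps by >= threshold."""
    for child in node.children:
        here = path + (child,)
        if abs(child.win_prob - node.win_prob) >= threshold:
            yield here
        yield from find_swings(child, threshold, here)

def research_window(path, window=3):
    """Return the last few moves before the swing -- the horizon worth
    a deeper/more exhaustive local search."""
    return [n.move for n in path[-window:]]

# Tiny example tree: a quiet line, then a sudden drop in evaluation.
root = Node(None, 0.50, [
    Node("D4", 0.52, [
        Node("Q16", 0.51, [
            Node("R4", 0.20),   # large swing: 0.51 -> 0.20
        ]),
    ]),
])

for path in find_swings(root):
    print(research_window(path))
```

Even a basic scan like this would let a determined person pull out the interesting positions without an elegant tool.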
On Mar 31, 2016 10:21 AM, "Petr Baudis" wrote:
> On Thu,
Petr,
You know, just exploring this conversation is motivating to me, even if I
am still seeing it as a huge risk with a small payoff.
I like your line of thinking: from a top-down approach, we start
simple and just push it as far as it can go, acknowledging we won't likely
get anywhere near
On Thu, Mar 31, 2016 at 08:51:30AM -0500, Jim O'Flaherty wrote:
> What I was addressing was more around what Robert Jasiek is describing in
> his joseki books and other materials he's produced. And it is exactly why I
> think the "explanation of the suggested moves" requires a much deeper
> baking
Petr suggests "caption Go positions based on game commentary".
Without doubt, there has to be a lot of mileage in looking for a way
for a machine to learn from expert commentaries.
I see a difference between labelling a cat in a photo and labelling a
stone configuration in a picture of a board.
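One way to make a stone configuration "picture-like" for a captioning model is to encode the board as image-style feature planes (channels), the usual form in which DCNNs consume a Go position. The minimal three-plane encoding below is an illustrative assumption; real Go networks use many more planes:

```python
import numpy as np

# Sketch: encode a 19x19 Go position as image-like feature planes, so a
# captioning-style model could consume it the way it consumes photo pixels.
# The plane choice (black, white, empty) is a minimal illustrative assumption.

SIZE = 19
EMPTY, BLACK, WHITE = 0, 1, 2

def board_to_planes(board):
    """board: SIZE x SIZE array of EMPTY/BLACK/WHITE -> (3, SIZE, SIZE) floats."""
    board = np.asarray(board)
    return np.stack([
        board == BLACK,   # plane 0: black stones
        board == WHITE,   # plane 1: white stones
        board == EMPTY,   # plane 2: empty points
    ]).astype(np.float32)

board = np.full((SIZE, SIZE), EMPTY)
board[3, 3] = BLACK       # a stone on a 4-4 point
board[15, 15] = WHITE
planes = board_to_planes(board)
print(planes.shape)       # (3, 19, 19)
```

Unlike a photo of a cat, the "image" here is exact and discrete, which is part of why labelling stone configurations differs from labelling natural images.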
On Wed, Mar 30, 2016 at 09:58:48AM -0500, Jim O'Flaherty wrote:
> My own study says that we cannot top down include "English explanations" of
> how the ANNs (Artificial Neural Networks, of which DCNN is just one type)
> arrive at conclusions.
I don't think that's obvious at all. My current avenue