> 
> So I believe a better approach is a heavy playout approach with NO
> tree.  Instead, rules would evolve based on knowledge learned from each
> playout - rules that would eventually turn uniformly random moves into
> highly directed ones.  All-moves-as-first teaches us that, in the
> general case, a move that is good now is good later, and vice versa.  But
> it needs to go way farther than that.  It needs to "act like a tree"
> when something specific needs to be handled, and generalize when that is
> most appropriate.  If something like this could be made to work, a
> tree could probably be built on top of it if desired.  This would be a
> super-playout approach. 
> 
This looks very much like the way human players work (albeit with a
tree): read local sequences and outcomes that can be kept in reserve for
a long time, but called upon at any time depending on the situation. Big
chunks.
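For readers unfamiliar with the all-moves-as-first (AMAF) heuristic
mentioned above, here is a minimal sketch of the bookkeeping involved. It
uses a hypothetical toy game (not Go, and not anyone's actual engine):
players alternately draw moves from a shared pool, and Black wins exactly
when Black ends up holding a hidden "key" move. AMAF then credits every
move the winner played as if it had been played first, so the key move's
AMAF value should stand out after many random playouts.

```python
import random

# Toy stand-in for a Go position: 9 possible moves, one hidden key point.
# Black wins a playout iff Black ends up playing the key move. (KEY and
# MOVES are assumptions for illustration, not part of any real engine.)
MOVES = list(range(9))
KEY = 4

def playout(rng):
    """One uniformly random playout: shuffle the pool, alternate turns."""
    pool = MOVES[:]
    rng.shuffle(pool)
    black = pool[0::2]          # moves made on Black's turns
    white = pool[1::2]          # moves made on White's turns
    return KEY in black, black, white

def amaf_stats(n_playouts, seed=0):
    """AMAF bookkeeping: credit every move Black played in a winning
    playout as if it had been Black's first move."""
    rng = random.Random(seed)
    wins = {m: 0 for m in MOVES}
    plays = {m: 0 for m in MOVES}
    for _ in range(n_playouts):
        black_wins, black, _ = playout(rng)
        for m in black:
            plays[m] += 1
            wins[m] += black_wins
    return {m: wins[m] / plays[m] for m in MOVES if plays[m]}

stats = amaf_stats(5000)
best = max(stats, key=stats.get)
```

In this toy setup the key move gets an AMAF value of 1.0 while every
other move sits near 0.5, which is the "a move that is good now is good
later" generalization: the statistic ignores *when* the move was played,
trading positional precision for much faster learning per playout.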


Jonas
_______________________________________________
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/
