Fine Art won.
(;GM[1]SZ[19]
PB[zen]
PW[fineart]
DT[2017-03-19]RE[W+R]KM[6.5]TM[30]RU[Japanese]PC[UEC, Tokyo]
;B[qd];W[dc];B[pq];W[dp];B[oc];W[po];B[qo];W[qn];B[qp];W[pm]
;B[pj];W[oq];B[pp];W[op];B[oo];W[pn];B[no];W[or];B[pr];W[lq]
;B[lo];W[rn];B[kq];W[kr];B[mr];W[mq];B[lr];W[kp];B[jq];W[lp]
;B[jr]
Fine Art is very, very strong.
-----Original Message-----
From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of Hiroshi Yamashita
Sent: Saturday, March 18, 2017 10:38 PM
To: computer-go@computer-go.org
Subject: Re: [Computer-go] UEC cup 2nd day
The final is
Fine Art vs Zen
Training a policy network is simple, and I have found that a Residual Network
with Batch Normalization works very well. Training a value network is far more
challenging, however: it is very easy to overfit unless one also uses the
final territory as an additional prediction target.
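To make this concrete, here is a minimal PyTorch sketch (my own illustration
under stated assumptions, not Fine Art's code): a small residual trunk with
Batch Normalization feeding both a scalar win/lose head and an auxiliary
per-point territory head, so the value target is regularized by the ownership
prediction. IN_PLANES, the trunk size, and the loss weighting are all assumed.

import torch
import torch.nn as nn
import torch.nn.functional as F

BOARD = 19
IN_PLANES = 18  # number of input feature planes; an assumption, engines differ

class ResBlock(nn.Module):
    # Residual block with Batch Normalization, the same building block
    # that works well for the policy network.
    def __init__(self, c):
        super().__init__()
        self.conv1 = nn.Conv2d(c, c, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(c)
        self.conv2 = nn.Conv2d(c, c, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(c)

    def forward(self, x):
        h = F.relu(self.bn1(self.conv1(x)))
        return F.relu(x + self.bn2(self.conv2(h)))

class ValueNet(nn.Module):
    # Value network with an auxiliary territory head to curb overfitting.
    def __init__(self, c=64, blocks=4):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(IN_PLANES, c, 3, padding=1, bias=False),
            nn.BatchNorm2d(c), nn.ReLU())
        self.trunk = nn.Sequential(*[ResBlock(c) for _ in range(blocks)])
        # Scalar win/lose prediction in [-1, 1].
        self.value_head = nn.Sequential(
            nn.Conv2d(c, 1, 1), nn.Flatten(),
            nn.Linear(BOARD * BOARD, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Tanh())
        # Auxiliary head: predicted final ownership of every point, in [-1, 1].
        self.territory_head = nn.Conv2d(c, 1, 1)

    def forward(self, x):
        h = self.trunk(self.stem(x))
        return self.value_head(h), torch.tanh(self.territory_head(h))

def loss_fn(value, territory, z, ownership, aux_weight=1.0):
    # z: final result in {-1, +1}; ownership: per-point territory in [-1, 1].
    return (F.mse_loss(value.squeeze(1), z)
            + aux_weight * F.mse_loss(territory.squeeze(1), ownership))

Here aux_weight trades off the two targets; the ownership labels come from
the finished game's final territory map, which any self-play or game-record
pipeline can compute alongside the win/lose result.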
A few more words:
*) Pushing this idea to the extreme, one might want to build a "Tree
Network" whose output tries to somehow fit the whole Monte-Carlo Search
Tree (including all the win/lose numbers, etc.) for the board position. As
we know, a deep network can fit anything.
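One hedged reading of that idea, sketched below in the same spirit: rather
than fitting the whole tree, fit the root-level search statistics, namely the
normalized visit counts over all moves plus the root win rate. TreeNet, the
362-move encoding (361 points plus pass), and the loss are all illustrative
assumptions of mine, not a description of any existing engine.

import torch.nn as nn
import torch.nn.functional as F

MOVES = 19 * 19 + 1  # 361 board points plus pass; an assumed move encoding

class TreeNet(nn.Module):
    # Sketch: fit root-level MCTS statistics rather than the whole tree.
    def __init__(self, in_planes=18, c=64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_planes, c, 3, padding=1, bias=False),
            nn.BatchNorm2d(c), nn.ReLU(),
            nn.Conv2d(c, c, 3, padding=1, bias=False),
            nn.BatchNorm2d(c), nn.ReLU())
        # Logits over all moves, to match the search's visit distribution.
        self.visit_head = nn.Sequential(
            nn.Conv2d(c, 2, 1), nn.Flatten(), nn.Linear(2 * 19 * 19, MOVES))
        # Root win-rate estimate in [-1, 1].
        self.win_head = nn.Sequential(
            nn.Conv2d(c, 1, 1), nn.Flatten(), nn.Linear(19 * 19, 1), nn.Tanh())

    def forward(self, x):
        h = self.trunk(x)
        return self.visit_head(h), self.win_head(h)

def tree_loss(visit_logits, win, visit_counts, root_value):
    # Cross-entropy against the normalized visit counts, plus MSE on the
    # root value; both targets come from a completed MCTS at this position.
    target = visit_counts / visit_counts.sum(dim=1, keepdim=True)
    ce = -(target * F.log_softmax(visit_logits, dim=1)).sum(dim=1).mean()
    return ce + F.mse_loss(win.squeeze(1), root_value)

Fitting visit counts this way amounts to distilling the searcher into the
network: the net would learn where search effort concentrates, not just
which single move the search finally prefers.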