Thanks Philippe, fascinating stuff.

Did you also re-roll the benchmark data? Or was this not used? I remember
what a big effort it was the first time round.

I noticed that in the earlier post you added some positions with your own
selection of best move. This must have been very labour-intensive, going
back to the earliest days of bot training with expert knowledge. I take my
hat off to you!

What would be the next stage of bot training? Repeating the rollout
process and re-training? Or would it be better to search for new positions
that the bot does not understand well?

-- Ian

-----Original Message-----
From: Philippe Michel [mailto:[email protected]]
Sent: 15 June 2015 22:56
To: Ian Shaw
Cc: [email protected]
Subject: Re: [Bug-gnubg] Confused

On Mon, 15 Jun 2015, Ian Shaw wrote:

> I searched through the archives for a report of how this was achieved,
> but I can't find it right now. If Philippe is reading this, I'd love
> to know what was done. Longer training? Different initial weights? The
> inputs and number of hidden nodes are unchanged, I believe.

The post in the list archives Michael mentions elsewhere in this thread
was from an intermediate stage. What I did is what I mentioned at the end
of that post: rolling out the training database. I think this, a better
training database, is the main reason for the nets' improvement.

Training was most probably longer than for the previous nets, since it
took a few weeks on a computer maybe 10 times faster than what Joseph may
have had ten years earlier. Preparing the training database took much
longer.

Initial weights were small random values.

Inputs and number of hidden nodes are unchanged for the main nets. The
pruning nets have a larger number of hidden nodes, but this probably
shouldn't make much of a difference. That was really to make them a
multiple of 8, so they can be evaluated with AVX instructions.

_______________________________________________
Bug-gnubg mailing list
[email protected]
https://lists.gnu.org/mailman/listinfo/bug-gnubg
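[A note on the AVX point above, for readers less familiar with it: a
256-bit AVX register holds 8 single-precision floats, so a hidden layer
whose size is a multiple of 8 can be evaluated in whole-register chunks
with no scalar remainder loop. The C sketch below illustrates the idea
only; it is hypothetical and not gnubg's actual evaluation code, and it
uses plain scalar loops where each 8-float chunk stands in for one AVX
multiply-add.]

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch of evaluating one neuron's pre-activation when the
 * input (hidden-node) count is a multiple of 8.  Each inner chunk of 8
 * multiply-adds corresponds to a single 256-bit AVX operation; padding
 * the layer to a multiple of 8 means no leftover scalar tail is needed. */
float dot_chunks_of_8(const float *w, const float *x, size_t n)
{
    float acc[8] = {0};             /* stands in for one ymm accumulator */

    for (size_t i = 0; i < n; i += 8)       /* n assumed a multiple of 8 */
        for (size_t j = 0; j < 8; j++)      /* one AVX mul-add per chunk */
            acc[j] += w[i + j] * x[i + j];

    float sum = 0.0f;                       /* horizontal sum at the end */
    for (size_t j = 0; j < 8; j++)
        sum += acc[j];
    return sum;
}
```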
