I scrolled back through the archives, and it seems it has been a while since gnubg got smarter.

I have always suspected that the only thing holding gnubg back was an obscene amount of compute. A large single model should be able to outperform a set of essentially disconnected specialized ones, a la Snowie, if and only if the training is deep enough and runs long enough. Connecting backgames and containment with early-game play should/might/could happen.

I've been building a little compute node in my basement, a few Xeon Phi boards and an 8-core Xeon processor, thinking that if the models were ported to OpenMP or OpenACC and tweaked a bit, we might find a corporate sponsor with heavy metal to run them (a rough sketch of what such a port might look like is appended below). Intel is pushing the Xeon Phi architecture pretty heavily; they might lend us some CPU time. Amazon, Google, Baidu... there is bound to be someone with spare cycles out there.

Who is the neural-networks guru in residence? Where do I look for the docs, pseudocode, and scribblings? I'm at the "going to take the Andrew Ng course" point in the project, but I'm pretty good at assembling and porting things. Hopefully, by the time I have code familiarity, I'll have some current knowledge that can be applied to the real challenges.

R, numpy, and pymic have all been ported to run on the Intel MIC libraries. Intel's C/C++ compiler and VTune are available to students, and the Phis are cheap like borscht for setting up a development environment/sandbox. I've been scrounging hardware for the better part of a year at this point, but I'm within spitting distance of booting and getting CentOS installed. I'd also like to use this as a stepping stone to doing some Kaggle contests and generally building up my data-analysis and machine-learning chops.

Thanks for any thoughts or suggestions,

Robert
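P.S. To make the OpenMP idea concrete, here's a minimal sketch of how one hidden layer of a feed-forward evaluation might be parallelized. The names here (evaluate_hidden, N_INPUTS, N_HIDDEN) are hypothetical stand-ins, not gnubg's actual internals:

/* Minimal sketch: one hidden layer of a feed-forward net,
 * parallelized with OpenMP.  Compile with e.g.
 *   gcc -fopenmp -O2 -c sketch.c
 * Function and constant names are hypothetical, not gnubg's. */
#include <math.h>

#define N_INPUTS 250
#define N_HIDDEN 128

static float sigmoid(float x)
{
    return 1.0f / (1.0f + expf(-x));
}

/* weights[j][i] is the weight from input i to hidden unit j. */
void evaluate_hidden(const float input[N_INPUTS],
                     const float weights[N_HIDDEN][N_INPUTS],
                     const float bias[N_HIDDEN],
                     float hidden[N_HIDDEN])
{
    /* Each hidden unit's dot product is independent of the others,
     * so the outer loop splits cleanly across cores. */
    #pragma omp parallel for schedule(static)
    for (int j = 0; j < N_HIDDEN; j++) {
        float sum = bias[j];
        for (int i = 0; i < N_INPUTS; i++)
            sum += weights[j][i] * input[i];
        hidden[j] = sigmoid(sum);
    }
}

Since the hidden units don't share state, this kind of loop is the natural place to fan work out across the Phi's many cores; the inner dot-product loop is also a candidate for vectorization, which Intel's compiler will typically do on its own at -O2.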