Hmm, I don't know. I think there are a lot of ideas floating around, but also some miscommunication.
The aim is to devise a computer program that beats the strongest human Go players. I keep hearing that "Monte-Carlo with UCT is proven to be scalable to perfect play." As I understand it, this essentially says that as the sample size grows to infinity, the technique approaches the accuracy of an algorithm that has solved Go (in the sense that 5x5 was solved), much as enumerating the entire game tree would. MC with UCT is just far easier in practice. But that statement says nothing about the rate at which it approaches perfect play as the sample size increases, and I didn't see anything about that rate in the papers I have read.

Which brings us back to what our aim actually is: beating human players at Go. Nothing has been proven about practical scalability, which is what we need. Convergence in the limit alone does not show the problem is tractable. It was said that the MoGo developers found a double-strength version beats the original about 63% of the time; as they mentioned, extrapolating that would ideally take about 30 years. And there could be a point of diminishing returns as it relates to beating a human. Saying UCT is "proven scalable to perfect play" is like saying that if you generated every possible game and stored it in a database with good access, you could play perfectly. That is true, but it does not help us actually do it.
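For anyone following along, the selection rule at the heart of UCT is small enough to sketch. This is a minimal, hypothetical illustration (names and constants are mine, not from any particular engine): each move's score is its observed win rate plus an exploration bonus that shrinks as the move is sampled more, which is what drives the convergence-in-the-limit guarantee being discussed.

```python
import math

def ucb1_select(children, total_visits, c=1.414):
    """Return the index of the child maximizing the UCB1 score.

    children: list of (wins, visits) tuples for each candidate move.
    total_visits: total number of simulations through the parent node.
    c: exploration constant (sqrt(2) is the textbook choice).
    """
    def score(wins, visits):
        if visits == 0:
            return float('inf')  # always try unvisited moves first
        # exploitation term (win rate) + exploration term (uncertainty bonus)
        return wins / visits + c * math.sqrt(math.log(total_visits) / visits)

    return max(range(len(children)), key=lambda i: score(*children[i]))
```

For example, with children `[(6, 10), (3, 4), (0, 1)]` and 15 total visits, the rarely tried third move has the largest exploration bonus and gets selected, even though its observed win rate is zero. The guarantee is only that, as visits go to infinity, the visit counts concentrate on the best move; it says nothing about how fast.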
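To make that 63% figure concrete, here is a quick back-of-the-envelope calculation (the `elo_gain` helper is my own, not from the MoGo work): under the standard logistic Elo model, a 63% win rate corresponds to roughly 92 Elo points gained per doubling of compute.

```python
import math

def elo_gain(win_rate):
    """Elo-point difference implied by a win rate, per the logistic Elo model:
    expected score = 1 / (1 + 10**(-d/400)), solved for d."""
    return 400 * math.log10(win_rate / (1 - win_rate))

gain_per_doubling = elo_gain(0.63)  # roughly 92 Elo per doubling of compute
```

Whether that linear-in-doublings gain holds up against humans, rather than against the program's own weaker self, is exactly the diminishing-returns question.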
_______________________________________________ computer-go mailing list computer-go@computer-go.org http://www.computer-go.org/mailman/listinfo/computer-go/