I first thought I would keep my ideas secret until the Amsterdam
tournament, but now that I have submitted my paper, I cannot wait to
share it. So, here it is:

http://remi.coulom.free.fr/Amsterdam2007/

Comments and questions are very welcome.

I'd like to propose a potential direction for further research.  In
your paper, you acknowledge that the strong assumption that each
feature's Elo can be added to form the feature team's Elo may not
always hold.

The Stern/Herbrich/Graepel method did not need to make this assumption
because for them a feature team was its own first-class feature
(leading to exponential growth in the number of features).  You could
evaluate the degree to which each feature violates the additive-Elo
assumption by distributing that feature across all the other features
and retesting the prediction rate.

For example, instead of having features {Pass, Capture, Extension},
you would evaluate the Pass feature additive-Elo assumption by testing
with features {Capture, Extension, Pass-Capture, Pass-Extension}.

This obviously leads to more first-class features, but you can test
them one at a time to see whether each is worth it, or at least to
validate that the additive-Elo assumption is okay in most cases.
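To make the proposal concrete, here is a minimal sketch of the test in
Python.  All feature names and Elo values below are hypothetical
illustrations, not fitted numbers; the only assumption carried over
from the paper is the generalized Bradley-Terry form, where a feature
team's gamma is the product of its features' gammas (equivalently, its
Elo is the sum of feature Elos):

```python
import math

def gamma(elo):
    """Bradley-Terry strength corresponding to an Elo rating."""
    return 10.0 ** (elo / 400.0)

def team_gamma(features, elos):
    """Additive-Elo assumption: a team's Elo is the sum of its
    features' Elos, so its gamma is the product of feature gammas."""
    return math.prod(gamma(elos[f]) for f in features)

def choice_prob(team, candidates, elos):
    """Probability that `team` is the chosen move among `candidates`
    (which includes `team`), under generalized Bradley-Terry."""
    total = sum(team_gamma(t, elos) for t in candidates)
    return team_gamma(team, elos) / total

# Additive model: three independent features (values are made up).
additive_elos = {"Pass": -150.0, "Capture": 120.0, "Extension": 60.0}

# Distributed model for testing "Pass": it disappears as a standalone
# feature and is folded into combination features, whose Elos would be
# fitted independently and need not equal the sums of their parts.
distributed_elos = {
    "Capture": 120.0,
    "Extension": 60.0,
    "Pass-Capture": -40.0,     # need not equal Pass + Capture
    "Pass-Extension": -120.0,  # need not equal Pass + Extension
}
```

Comparing the prediction rate of `additive_elos` against
`distributed_elos` on held-out game records would then measure how
badly "Pass" violates additivity.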
_______________________________________________
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/