On Tue, Feb 3, 2009 at 1:53 PM, Isaac Deutsch <i...@gmx.ch> wrote:

> Hi Jason,
>
> Thanks for your numbers. I might try to limit my bot to 50k playouts and 1
> core, but I usually simulate as long as time permits.


That kind of setup should make it easier to compare.  There have been a few
times in the past where multiple authors posted similar bots with the same
configurations and let them all duke it out on the server for a while.  Once
upon a time, I owned a bot that was performing far worse than yours, and I was
trying to figure out why.  I forget who owns myctest, but they were very
willing to run a bot to help me out.




> Do you suspect my
> pure UCT version has bugs, too, judging from its rating?


Maybe, but it's not all that likely.  It could be that your RAVE
implementation isn't buggy either.  IIRC, when I was running my bots, there
were very few bots in the 1600-2100 ELO range.  That may have skewed my
results.



> I find it hard to
> come up with good tests for the correctness of a program that depends on
> "randomness", and even harder to find bugs.


I created a bunch of unit tests to test very specific cases with my search.
Looking at
http://housebot.svn.sourceforge.net/viewvc/housebot/branches/0.7/search/uct.d?view=markup
those tests are as follows:

   - (line 743) Exploitation of perfect children - Two children with 100%
   winning rate.  Run 100 simulations and ensure that only one child was
   explored.
   - (line 767) Evaluation of forced sequences - Commented out.  I never
   added that functionality (auto-expand when only one follow-up move is
   available)
   - (line 777) Recognition of a lost position - The end of the game is one
   move away, and the game is lost.  Ensure that each leaf is evaluated once
   and that the search stops and declares the position as lost
   - (line 793) Recognition of a won position - The end of the game is one
   move away and the game is won.  Ensure that only one leaf is evaluated and
   that the search stops and declares the position as won
   - (line 809) Two ply search of a won position
   - (line 834) Searches with solved children (some won, some lost)
   - (line 862) Searches where a winning subtree gets solved through a
   transposed position - Multiple paths exist to the same subtree.  Force the
   evaluation through one path to complete and then force the evaluation
   through another path and ensure the conclusion is picked up
   - (line 894) Searches where a losing subtree gets solved through a
   transposed position
   - (line 924) Searches where a losing terminal gets solved through a
   transposed position

I focused most of the tests on terminal positions, where it's easy to reason
about what the search should conclude.  Besides turning up several bugs that
had been plaguing my bot, this had the nice side effect of helping me speed up
my endgame handling.
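To make one of these concrete, here is a rough sketch of the "recognition of
a won position" test.  This is Python purely for illustration (HouseBot is
written in D), and the Node/backup_proven names are invented for the sketch,
not taken from uct.d: the game ends one move from the root, each leaf is
evaluated exactly once, the proven result is backed up, and the root should
come out declared won.

    WIN, LOSS, UNKNOWN = 1, -1, 0

    class Node:
        # proven holds the game-theoretic value from the point of view of the
        # side to move at this node; UNKNOWN until the subtree is solved.
        def __init__(self, proven=UNKNOWN):
            self.children = []
            self.visits = 0
            self.proven = proven

    def backup_proven(node):
        # Negamax-style proof backup: a child that is a proven loss for the
        # opponent makes this node a proven win; if every child is a proven
        # win for the opponent, this node is a proven loss.
        values = [c.proven for c in node.children]
        if any(v == LOSS for v in values):
            node.proven = WIN
        elif values and all(v == WIN for v in values):
            node.proven = LOSS

    def test_recognize_won_position():
        # One move from the end of the game, and one of the moves wins.
        root = Node()
        root.children = [Node(proven=LOSS), Node(proven=WIN)]
        for leaf in root.children:
            leaf.visits += 1          # each leaf evaluated exactly once
        backup_proven(root)
        assert root.proven == WIN     # search declares the position won
        assert all(leaf.visits == 1 for leaf in root.children)

    test_recognize_won_position()

In a real search the backup would of course run during the tree descent, but
even a toy like this pins down the convention for whose "win" a proven value
belongs to, which is where my bugs tended to hide.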



> Maybe we can set up our bots to play under
> similar circumstances on CGOS.



Yes, I'll set it up soon (but probably not today).




>
>
> Regards,
> Isaac
>
> > My bot is about 1/4-1/2 of the speed of yours, but here are my strength
> > numbers (median of reported bayeselo numbers below):
> >           HouseBot     Rango_004
> > RAVE      1778 ELO     1721 ELO
> > UCT       1587 ELO     1642 ELO
> >
> > Given the tremendous speed difference between our bots, I suspect you may
> > have some debugging to do :(  I'll try to put my bot up on CGOS again in
> > the
> > next few days.  Maybe some head to head games will best answer how our
> > implementations compare to each other.
>
>
_______________________________________________
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/