Re: [computer-go] simple MC reference bot and specification

2008-10-13 Thread steve uurtamo
sorry to be pedantic, but:

13. Chinese scoring.

s.

On Sat, Oct 11, 2008 at 9:11 AM, Don Dailey [EMAIL PROTECTED] wrote:
 On Sat, 2008-10-11 at 13:33 +0100, Claus Reinke wrote:
 I have a rough idea of what that might be. And I suspect that keeping
 this de facto standard implicit has been hiding some actual differences
 in what different people think that standard is. Some of my questions
 arise from trying to pin down where and why different authors have
 different ideas of what the standard is. If there has been some
 explicit standardisation since those papers were published, I'd be
 interested in a pointer to that standard and its rationale.

 I'm going to publish a real simple java reference program and some docs
 to go with it and a program to test it for black box conformance.
 (Actually, it will test 2 or more and compare them.)   I would like to
 get someone who writes better than I do to write up the standard in less
 casual language but it goes something like this:

  1. A complete game playing program so it can also be tested in real
 games.

  2. Play uniformly random moves, except that 1-point eyes are never
 filled and the simple-ko restriction is obeyed.  When no such move
 is possible, pass.

  3.  Playout ends after 2 consecutive pass moves (1 for each side.)

  4.  A 1-point eye is an empty point surrounded by friendly stones for
 the side to move.  Additionally, we have 2 cases.  If the point is
 NOT on any edge (where the corner counts as an edge) there must be
 no more than one diagonal enemy stone.  If the point in question
 is on the edge, there must be NO diagonal enemy stones.

  5.  In the playouts, statistics are taken on moves played during the
 playouts.  If a move is played FIRST (during the playout) by the
 side to move, it is one data point, and a win/loss record is
 maintained.

  6.  The move with the highest statistical win rate is the one selected
 for play in the actual game.

  7.  In the case of moves with equal scores, a random selection is made
 between them.

  8.  A pass move is never selected as the final move to play unless no
 other non-eye-filling move is possible.

  9.  The random number generator is unspecified - your program should
 simply pass the black box test, and as a further optional test it
 should score close to 50% against other properly implemented
 programs.

  10.  Suicide is not allowed in the playouts or in games it plays.

  11.  When selecting moves to play in the actual game (not playouts)
 superko is checked and forbidden.

  12.  If a move has NO STATS taken (which is highly unlikely unless you
 do very few playouts) it is ignored for move selection.

 Did I miss anything?  I would like to get feedback and agreement on
 this.

 Please note - a few GTP commands will be added in order to instrument
 any conforming programs.  I haven't figured those out yet, but they
 will be designed so that a program can report number of nodes, number
 of playouts, average score of playouts, etc.  So the tester may set up
 some position and a ko, and ask for statistics based on a specified
 number of playouts.

 - Don

Re: [computer-go] Anyone With Measures for MiniMax LightSimulations

2008-10-13 Thread terry mcintyre
Nick's remarks about teaching computer programs only josekis which they 
understand rang a bell.

Recently, I operated a beta version of MFG12 at the Cotsen tournament. It 
appears to have a very strong tendency to stake out a large center territory. 
If the players permit this to be solidified, MFG wins. But in the pursuit of 
this huge central territory, MFG makes lots of hanes (diagonal plays) which 
leave cutting points behind. When players aggressively attack the weak points, 
they win. 

Qualitatively, subject to the caveat that I only have a sample of five games to 
work with, this would be an instance of not knowing how to follow up on an 
objective - it stakes out a large territory but doesn't know how to keep it 
stable, and the root appears to be not so much a lack of tactical skill as a 
lack of what Go players refer to as good shape - good shapes (patterns) are 
easily defended; bad shapes are not. 

Such shape patterns require a bit of light tactical reading to be effective. 
Yilun Yang (in a book published by Slate and Shell) has a nice short 
presentation of good shape. To cite the most obvious blunder: a double hane is 
sometimes effective, sometimes not. A bit of local tactical reading is required 
to discern the difference.

 
When using a joseki, an MC-based program will need to have enough knowledge to 
read out whether it works or not. Perhaps a directed expansion of the top-level 
tree, of all the reasonable branches, including the known semeais, trick plays, 
and blunders and their refutations? If a ladder is involved, this directed 
expansion might add some interesting ladder breakers to the tree - even trying 
them in advance of playing the joseki, in order to constrain the opponent's 
plays to its advantage. 

Terry McIntyre [EMAIL PROTECTED]


We must stop dressing up the slaughter of foreigners as a great national cause. 
-- Sheldon Richman

 My impression of Joseki for computers was that it was really Joseki 
 for 5-kyus.
 
 Suppose you want to teach joseki to a 5-kyu, with the objective of 
 making him into a 4-kyu.  Assume that you have no higher objective such 
 as his one day becoming 1-dan.  It is sensible to teach him to play 
 josekis in which he understands what he has achieved (such as making 
 third-line territory), so he won't screw up later.  You should teach him 
 to avoid josekis which are sound in the hands of a strong player 
 (because of say the central influence they give), but which the 5-kyu 
 won't know how to follow up on.
 
 Nick
 -- 
 Nick Wedd  [EMAIL PROTECTED]

Re: [computer-go] komi study with CGOS data

2008-10-13 Thread Erik van der Werf
Don, thanks for providing these statistics!

Overall it suggests that on CGOS White only has a small advantage. I
still don't like this, but it is not nearly as bad as I initially
suspected.

The initially decreasing percentages are somewhat puzzling. One might
speculate that up to a certain level Black's strategy to dominate the
center is easier, and White needs significant resources to learn how
to build two sufficiently large living groups.

My guess is that for 5-minute CGOS games the average level of play is
still weak enough to keep White's advantage relatively close to 50%.
I'm not sure how many programs actually play at peak strength on CGOS,
but in the future we may expect to see an increasing group suffering
from the large komi.

Maybe it would be interesting to compare recent games to older games
to see if there is a trend in the top group?

Erik



On Thu, Oct 9, 2008 at 6:08 AM, Don Dailey [EMAIL PROTECTED] wrote:
 Ok, I'm doing the komi study.   I hope this data formats properly on
 your email clients.

 I am not including the first day or two of games because I remember
 that I started out with 6.5 komi but I think that only lasted a few
 hours.

 I'm including ALL games unless they ended with an illegal move.

 I'm using bayeselo ratings.  So each bot has only 1 rating over its
 entire lifetime of games.

 I do not include games where either player played fewer than 20 games.
 20 games does not give a very accurate rating, but I had to draw the
 line somewhere.  However it's probably within 100 ELO of being
 correct.

 I require opponents to be within 100 ELO of each other.  I ran this
 many times using different minimum ELO values.

 The data seems to indicate that white's winning percentage at 7.5 komi
 DECREASES with the strength of the players in general.

 HOWEVER, the 2500 ELO entry is intriguing.  It shows a sudden jump
 with a sample of 3400 games.  Does anyone have an explanation for
 that?  Is this just sample error?  Or are the programs finally strong
 enough to start seeing that white wins at 7.5 komi?

 Another interesting fact is that White's win percentage drops below 50%
 with some of these entries (stronger players.)


   DIFF  MIN ELO    WHITE     TOTAL  PERCENT
  -----  -------  -------  --------  -------
    100        0    74513    141579   52.630
    100      100    74513    141579   52.630
    100      200    74511    141577   52.629
    100      300    74191    141054   52.598
    100      400    73723    140308   52.544
    100      500    73524    140009   52.514
    100      600    73427    139874   52.495
    100      700    72763    138921   52.377
    100      800    72335    138212   52.336
    100      900    71227    136490   52.185
    100     1000    71192    136432   52.181
    100     1100    71084    136231   52.179
    100     1200    68862    132356   52.028
    100     1300    67765    130428   51.956
    100     1400    66562    128193   51.923
    100     1500    64143    123672   51.865
    100     1600    56458    108767   51.907
    100     1700    53943    103828   51.954
    100     1800    40790     78999   51.634
    100     1900    18403     36247   50.771
    100     2000    16859     33340   50.567
    100     2100    13793     27399   50.341
    100     2200     8986     18072   49.723
    100     2300     8555     17266   49.548
    100     2400     7686     15569   49.367
    100     2500     1801      3400   52.971
    100     2600       12        28   42.857

 When I use a window of 200 ELO the data looks very similar.

 Here is the data when I require the difference to be 50 ELO or less:

   DIFF  MIN ELO    WHITE     TOTAL  PERCENT
  -----  -------  -------  --------  -------
     50        0    46184     87556   52.748
     50      100    46184     87556   52.748
     50      200    46183     87555   52.747
     50      300    46031     87326   52.712
     50      400    45890     87100   52.687
     50      500    45815     86996   52.663
     50      600    45813     86993   52.663
     50      700    45376     86377   52.533
     50      800    45108     85932   52.493
     50      900    44050     84302   52.253
     50     1000    44032     84277   52.247
     50     1100    43969     84163   52.243
     50     1200    42840     82193   52.121
     50     1300    42361     81354   52.070
     50     1400    41503     79735   52.051
     50     1500    41003     78853   51.999
     50     1600    37959     72933   52.046
     50     1700    36280     69569   52.150
     50     1800    28478     54845   51.925
     50     1900     8535     16668   51.206
     50     2000     7896     15433   51.163
     50     2100     6823     13349   51.112
     50     2200     4145      8120   51.047
     50     2300     3917      7704   50.844
     50     2400     3627      7166   50.614
     50     2500     1497      2871   52.142
     50     2600       12        28   42.857
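
 For anyone who wants to reproduce a table like this from their own
 game records, here is a minimal sketch of the filtering described
 above.  The GameRecord class and its fields are hypothetical
 stand-ins (not the actual study code), and MIN ELO is taken to mean
 that both players are rated at least that high:

     import java.util.List;

     // Hypothetical record of one CGOS game; the class and field names
     // are illustrative stand-ins, not the actual study code.
     class GameRecord {
         double whiteElo, blackElo;  // bayeselo ratings, one per bot lifetime
         boolean whiteWon;
         GameRecord(double w, double b, boolean won) {
             whiteElo = w; blackElo = b; whiteWon = won;
         }
     }

     class KomiStudy {
         // White's win percentage over games where both ratings are at
         // least minElo and the opponents are within maxDiff ELO.
         static double whiteWinPercent(List<GameRecord> games,
                                       double minElo, double maxDiff) {
             long total = 0, whiteWins = 0;
             for (GameRecord g : games) {
                 if (Math.min(g.whiteElo, g.blackElo) < minElo) continue;
                 if (Math.abs(g.whiteElo - g.blackElo) > maxDiff) continue;
                 total++;
                 if (g.whiteWon) whiteWins++;
             }
             return total == 0 ? 0.0 : 100.0 * whiteWins / total;
         }
     }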

Re: [computer-go] simple MC reference bot and specification

2008-10-13 Thread Don Dailey
I think I already added a 13, so you must mean 14 :-)

- Don


On Mon, 2008-10-13 at 09:48 -0400, steve uurtamo wrote:
 sorry to be pedantic, but:
 
 13. Chinese scoring.
 
 s.
 

Re: [computer-go] Congratulations to ManyFaces and to MoGo!

2008-10-13 Thread Seo Sanghyeon
2008/10/14 Nick Wedd [EMAIL PROTECTED]:
 The results of yesterday's KGS bot tournament are now available at
 http://www.weddslist.com/kgs/past/43/index.html

The Formal division round 1 position has a technical name; you may want
to mention it:
http://senseis.xmp.net/?SendingTwoReturningOne

-- 
Seo Sanghyeon


Re: [computer-go] Congratulations to ManyFaces and to MoGo!

2008-10-13 Thread Jason House

On Oct 13, 2008, at 4:41 PM, Nick Wedd [EMAIL PROTECTED] wrote:


The results of yesterday's KGS bot tournament are now available at
http://www.weddslist.com/kgs/past/43/index.html

As always, I look forward to your corrections.


You give HBotSVN too much credit in the round 8 open game. Seki is  
completely unrecognized in playouts and scoring. HouseBot views the  
game as having one follow-up move, and it clearly leads to a loss. If  
all moves lose, it resigns. Before the end, HouseBot retained hope the  
opponent would break the seki first.


I also think the hardware listed for HBotSVN may be wrong, but I'll let
Urban confirm.



[computer-go] java reference bot

2008-10-13 Thread Don Dailey
I made a reference bot and I want someone(s) to help me check it out
with equivalent data from their own program.  There are no guarantees
that I have this correct of course.

Doing 1 million play-outs from the opening position I get the following
numbers for various komi:

   playouts: 1,000,000
   komi:     5.5
   moves:    111,030,705
   score:    0.445677

   playouts: 1,000,000
   komi:     6.0
   moves:    111,066,273
   score:    0.446729

   playouts: 1,000,000
   komi:     6.5
   moves:    111,040,546
   score:    0.447138

   playouts: 1,000,000
   komi:     7.0
   moves:    111,029,204
   score:    0.4333795

   playouts: 1,000,000
   komi:     7.5
   moves:    111,047,843
   score:    0.421281

(I also get a score of 0.524478 for 0.0 komi)

Score is from black's point of view.  Score is not the score of the
best move, of course, but the combined average score of all 1 million
play-outs using the stated komi, and it ranges from zero to one.
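
(A sanity check on these numbers: a score of 0.445677 at 5.5 komi
means black won 445,677 of the 1,000,000 play-outs, and 111,030,705
moves over 1,000,000 play-outs is about 111 moves per play-out, which
looks about right if these are 9x9 play-outs ending in two passes.)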

I am going to build a test harness to compare multiple bots side by
side using gtp commands.  I made up two private gtp commands to
facilitate this:

   ref-nodes - return total moves executed in play-outs (including
   both pass moves at the end of each play-out.)

   ref-score - return total win fraction for black.

   NOTE: both commands report stats from the last genmove search.
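
For concreteness, here is a minimal sketch of how these two commands
might be dispatched inside a bot's GTP loop.  The Searcher interface
and its accessors are hypothetical stand-ins; only the command names
and their meaning come from the description above:

    // Minimal dispatch for the two instrumentation commands.  The
    // Searcher type stands in for whatever object remembers totals
    // from the last genmove search.
    class RefGtp {
        interface Searcher {
            long totalPlayoutMoves();  // counts the two passes ending each play-out
            double blackWinFraction(); // black wins / total play-outs
        }

        // Returns the GTP reply, or null if the command is not one of
        // ours and should fall through to the standard GTP handler.
        static String handle(String command, Searcher searcher) {
            if (command.equals("ref-nodes")) {
                return Long.toString(searcher.totalPlayoutMoves());
            } else if (command.equals("ref-score")) {
                return Double.toString(searcher.blackWinFraction());
            }
            return null;
        }
    }

A tester can then send genmove to two bots from the same position,
follow it with ref-nodes and ref-score on each, and compare the
replies.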

   

I hope to get people's opinions on the following implementation
specification.  I'm definitely not a writer, so I need to know if this
very informal spec is enough, at least for experienced MC bot authors,
or where there are still some ambiguous points.


I'm using the following implementation specification:

[ bot implementation specification ]

This is an informal implementation specification document for
writing a simple Monte Carlo Bot program.  The idea is to build a bot
like this in ANY language and test it for performance (and
conformity.)  It can be used as a general language benchmark, but it
is as much about the implementation as the language.  This
specification assumes some knowledge of go and Monte Carlo go
programs.  (If you don't like it, please write a better one for me!)



  1. Must be able to play complete games for comprehensive conformity
 testing.

  2. In the play-out phase, moves must be chosen uniformly at random
     from among the legal moves that do not fill 1-point eyes and that
     obey the simple-ko restriction.

     When no such move is possible, a pass is given.

  3. Play-outs stop after 2 consecutive pass moves, OR when N*N*3
     moves have been completed, where N is the size of the board,
     except that at least 1 move gets tried.  So if the board is 9x9,
     the play-out is stopped after 9*9*3 = 81*3 = 243 moves, assuming
     at least one move has been tried in the play-out.

  4.  A 1-point eye is an empty point surrounded by friendly stones
      for the side to move.  Additionally, we have 2 cases.  If the
      point is NOT on any edge (where the corner counts as an edge)
      there must be no more than one diagonal enemy stone.  If the
      point in question is on the edge, there must be NO diagonal
      enemy stones.  (See the sketch after this list.)

  5.  Scoring is Chinese scoring.  When a play-out completes, the
      score is taken, accounting for komi, and statistics are kept.

  6.  Scoring for game play uses AMAF - all moves as first.  In the
  play-outs, statistics are taken on moves played during the
  play-outs.  Statistics are taken only on moves that are played by
  the side to move, and only if the move in question is being
  played for the first time in the play-out (by either side.)  A
  win/loss record is kept for these moves.

  7.  The move with the highest statistical win rate is the one
      selected for play in the actual game.  In the case of moves with
      equal win rates the choice is made randomly between them.

  8.  A pass move is never selected as the final move to play unless
      no other non-eye-filling move is possible.

  9.  The random number generator is unspecified - your program should
      simply pass the black box test and possibly an optional
      additional test which consists of long matches against other
      known conforming bots.  Your program should score close to 50%
      against other properly implemented programs.

 10.  Suicide is not allowed in the play-outs or in games it plays.
 
 11.  When selecting moves to play in the actual game (not play-outs)
  positional superko is checked and forbidden.

 12.  If a move was never seen in the play-outs (its stats have a
      count of zero), it is ignored for move selection.
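
As promised in item 4, here is a sketch of the eye rule on a plain
2-D array board.  The EMPTY constant, the array layout, and the
method names are assumptions for illustration - the actual javabot
representation may differ:

    // Sketch of the 1-point-eye test from item 4.  Off-board
    // neighbours are simply skipped, so an edge point only needs
    // friendly stones on its on-board sides.
    class EyeRule {
        static final int EMPTY = 0;

        // True if board[x][y] is a 1-point eye for `friend`.
        static boolean isOnePointEye(int[][] board, int x, int y, int friend) {
            int n = board.length;
            if (board[x][y] != EMPTY) return false;

            // Every on-board orthogonal neighbour must be friendly.
            int[][] orth = {{1,0},{-1,0},{0,1},{0,-1}};
            for (int[] d : orth) {
                int nx = x + d[0], ny = y + d[1];
                if (nx < 0 || ny < 0 || nx >= n || ny >= n) continue;
                if (board[nx][ny] != friend) return false;
            }

            // Count enemy stones on the on-board diagonals.
            int enemies = 0;
            int[][] diag = {{1,1},{1,-1},{-1,1},{-1,-1}};
            for (int[] d : diag) {
                int nx = x + d[0], ny = y + d[1];
                if (nx < 0 || ny < 0 || nx >= n || ny >= n) continue;
                if (board[nx][ny] != EMPTY && board[nx][ny] != friend) enemies++;
            }

            boolean onEdge = (x == 0 || y == 0 || x == n - 1 || y == n - 1);
            return onEdge ? enemies == 0 : enemies <= 1;
        }
    }

And the AMAF bookkeeping of items 6, 7 and 12 amounts to something
like the following (again an illustrative sketch, not the reference
source):

    // AMAF bookkeeping: credit a play-out's result to every point the
    // side to move played first in that play-out (items 6 and 12), and
    // pick the best-scoring move with uniform tie-breaking (item 7).
    class AmafStats {
        final int[] wins, visits;
        final java.util.Random rng = new java.util.Random();

        AmafStats(int points) {
            wins = new int[points];
            visits = new int[points];
        }

        // firstPlayer[p] = side that played point p first in this
        // play-out, or -1 if p was never played; `won` says whether
        // the side to move won the play-out.
        void record(int[] firstPlayer, int sideToMove, boolean won) {
            for (int p = 0; p < firstPlayer.length; p++) {
                if (firstPlayer[p] == sideToMove) {
                    visits[p]++;
                    if (won) wins[p]++;
                }
            }
        }

        // Highest win rate wins; ties are broken uniformly at random;
        // moves with zero visits are ignored (item 12).
        int bestMove() {
            double best = -1.0;
            int pick = -1, ties = 0;
            for (int p = 0; p < visits.length; p++) {
                if (visits[p] == 0) continue;
                double rate = (double) wins[p] / visits[p];
                if (rate > best) { best = rate; pick = p; ties = 1; }
                else if (rate == best && rng.nextInt(++ties) == 0) pick = p;
            }
            return pick;  // -1 means no scored move: pass (item 8)
        }
    }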


Re: [computer-go] java reference bot

2008-10-13 Thread Don Dailey
A minor correction to the GTP ref-score command.  Score is not from
black's point of view, but from the point of view of the player whose
turn it is to move.

- Don


On Mon, 2008-10-13 at 19:14 -0400, Don Dailey wrote:
 I made a reference bot and I want someone(s) to help me check it out
 with equivalent data from their own program.  There are no guarantees
 that I have this correct of course.

Re: [computer-go] java reference bot

2008-10-13 Thread Joshua Shriver
Is the source available? It would be neat to see.

-Josh

On Mon, Oct 13, 2008 at 7:14 PM, Don Dailey [EMAIL PROTECTED] wrote:

 I made a reference bot and I want someone(s) to help me check it out
 with equivalent data from their own program.  There are no guarantees
 that I have this correct of course.



Re: [computer-go] java reference bot

2008-10-13 Thread Don Dailey

On Mon, 2008-10-13 at 23:21 -0400, Joshua Shriver wrote:
 Is the source available would be neat to see.

Yes,  get it here:  http://cgos.boardspace.net/public/javabot.zip

It includes a simple unix-style Makefile.

For you java programmers: I'm sure you won't like it - I'm not a java
programmer, but I did try to comment it fairly well and make it
readable, because it's supposed to be a reference bot.

If anyone wants to clean it up, make it more readable, or speed it up
(without uglying it up), I would be interested and would incorporate
that into the final reference bot.  I don't know the java-specific
do's and don'ts and idioms for getting faster code.

But of course first I want to find all the bugs.  

- Don
