PS: how can I make my responses to this mailing list correctly threaded
(indented)? (For example, I would have liked this one to appear as a reply to
my previous post.)


 I have made some experiments with my AMAF implementation. It is both an
attempt at understanding how the beast scales, and another attempt at
verifying that it really does what I want it to do. So if someone could
confirm that he gets the same results with the same kind of engine...
(especially, perhaps, for the low-CPU-cost experiments).
 
 Three bots are represented here:
 GNUGO  : GNU Go 3.7.11, level 1 (without any fancy options)
 RANDOM : plays equiprobably among the empty points, without filling its own
"pseudo-eyes"
 AMAF   : uses AMAF with the given number of playouts. It is a "first player
who played there" type of AMAF.
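
The "pseudo-eye" filter mentioned for the RANDOM bot is a standard light-playout heuristic. A minimal sketch of the usual test (my own illustrative code, not the poster's implementation):

```python
# Pseudo-eye test for a light playout policy (illustrative sketch).
# Board: dict mapping (x, y) -> 'B' or 'W'; empty points are absent.
SIZE = 9

def on_board(p):
    x, y = p
    return 0 <= x < SIZE and 0 <= y < SIZE

def is_pseudo_eye(board, p, color):
    """True if p looks like an own eye for `color`: every orthogonal
    neighbour is our own stone (or off-board), and the opponent controls
    at most one diagonal (none at all on the edge or in a corner)."""
    x, y = p
    opp = 'W' if color == 'B' else 'B'
    for n in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
        if on_board(n) and board.get(n) != color:
            return False
    diags = [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]
    off = sum(1 for d in diags if not on_board(d))
    opp_diag = sum(1 for d in diags if on_board(d) and board.get(d) == opp)
    # interior: tolerate one enemy diagonal; edge/corner: tolerate none
    return opp_diag + (1 if off > 0 else 0) <= 1
```

A playout policy then simply skips any empty point for which `is_pseudo_eye` is true.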
 
 
-----------------------------------------------------------
 Every victory ratio was measured over a 1000-game match,
 alternating black and white. Komi is 6.5.
-----------------------------------------------------------
 Here are the results:
-----------------------

-----------------
GNUGO vs AMAF
-----------------
 Number_of_AMAF_simulations | Victory_%_for_GNUGO | Victory_%_for_AMAF
 300                        |        97.9         |        2.1
 500                        |        94.6         |        5.4
 1000                       |        90.5         |        9.5
 5000                       |        87.0         |       13.0
 10000                      |        88.7         |       11.3
 
-----------------
AMAF vs AMAF
-----------------
 Simulations_for_A1 | Simulations_for_A2 | Victory_%_for_A1 | Victory_%_for_A2
 20                 |       100          |        0.8       |       99.2
 50                 |       100          |       12.6       |       87.4
 70                 |       100          |       27.6       |       72.4

 100                |       1000         |        0.1       |       99.9
 250                |       1000         |        6.6       |       93.4
 500                |       1000         |       26.3       |       73.7
 750                |       1000         |       41.4       |       58.6

-----------------
AMAF vs RANDOM
-----------------
 Number_of_AMAF_simulations | Victory_%_for_AMAF | Victory_%_for_RANDOM
 5                          |        89.3        |       10.7
 10                         |        97.4        |        2.6
 20                         |        99.5        |        0.5
 100                        |       100.0        |        0.0


=========================================================================================
Note: in the 10000-simulation AMAF vs GNUGO match, one game did not finish
because of a triple ko (I ran another one instead).

There are certainly a number of interesting things to read there, although we
would probably need more data before drawing any conclusions. In particular,
I'd like to understand better how 10000 simulations can be worse than 5000.
Unfortunately, the matches with 10000 simulations take quite a lot of CPU
time. Can we degrade performance further with even more simulations? :) I
wonder how 5000-AMAF fares against 10000-AMAF. Although I'm more interested
in the upscaling than the downscaling :)



------------------------------------- Answer to another post ---------------------------
Ingo said :
-----------
Some of you may want to stone me for this heresy,
but read carefully before.

When you have MCTS/UCT for Go that can work with
real-valued scores (or at least a version that can
work with three-valued scores: win, draw, loss),
you may look at Go with different scoring systems.

Example:
A win by 0.5+k points is worth 100+k.
A loss by 0.5+k points gives score -100-k.
Let's call b=100 the base score.
Question: how does the playing style change with b?
(Of course, for very large b you should have almost the
normal playing style.)

Ingo.

PS: Such evaluations may also help to reduce the problem
of MC's laziness.

--------
Answer :
--------
 This is, of course, a good idea. So good, in fact, that many people have
tried it, with the conclusion that it mainly helps to degrade how often a
program wins. MC's laziness seems to be a golden one indeed.
 Yet I do not know whether the discussion of it has been exhausted when
applied to win vs. draw.
 
 For example, we could model a theoretical situation where developers get
paid based on the performance of their bots. If they win, they get X $ as a
reward; if they lose, they lose X $; and if they draw, there is no cost and no
reward involved. Then it would be interesting to find out the best balance
between playing for a win and playing for a draw. When a Monte Carlo bot
can't find a draw, it will often throw all its stones out of the window in a
vain attempt to achieve a win no matter what... So it could be interesting to
try to balance this.
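
Ingo's proposed evaluation can be written as a simple transform of the playout's point margin. A sketch, with `margin` being the signed score from the evaluated side's point of view (illustrative names, not anyone's actual code):

```python
def ingo_score(margin, b=100.0):
    """Map a signed point margin (> 0 = win) to Ingo's real-valued result.

    A win by 0.5 + k points is worth b + k; a loss by 0.5 + k points gives
    -(b + k). b is the 'base score': the larger b is, the more the values
    collapse toward a pure win/loss signal, recovering the normal MC style.
    """
    k = abs(margin) - 0.5
    return (b + k) if margin > 0 else -(b + k)
```

Backing such values up through MCTS/UCT instead of a 0/1 result is exactly the kind of experiment Ingo's question about varying `b` suggests.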
 
  Anyway, it is not connected to the "laziness" problem in any way I can see.
Most probably because I do not see the "laziness" as a "problem" per se, but
more as a feature.
  
  
------------  
Claus said :
------------  
What about that claim that "the program
could figure out the rule by itself"?


---------
Answer :
---------
I made experiments with no Go knowledge (no eye knowledge). From what I
remember about them, it is 100% true that a program without Go knowledge can
figure out the rules. Albeit it gets a bit hard to bear having to run
thousands of simulations only to be able to play a roughly random game
without filling one's own eyes :)
I think what I did was just play a random number of moves, and score as if
the game had terminated. Using that, I remember I could get a bot that was
able not to fill its own eyes :) That's about as strong as it got, though.
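
The crude protocol described above (play a random number of random moves, then score as if the game had ended) can be sketched like this. The `legal_moves`, `play` and `score` parameters are placeholders for whatever the engine provides, not the poster's actual API:

```python
import random

def noknowledge_playout(board, to_move, legal_moves, play, score):
    """Play a random number of uniformly random moves, then score the
    position as if the game had terminated (no Go knowledge at all).

    legal_moves(board, color) -> list of moves
    play(board, move, color)  -> new board
    score(board)              -> numeric result of the truncated game
    """
    n = random.randint(1, 200)   # random game length
    for _ in range(n):
        moves = legal_moves(board, to_move)
        if not moves:
            break
        board = play(board, random.choice(moves), to_move)
        to_move = 'W' if to_move == 'B' else 'B'
    return score(board)          # area-score the truncated game
```

With only this, a bot can indeed learn to avoid filling its own eyes (filling them turns won games into losses in the samples), but little more.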


------------------------------------- Answers to previous responses ---------------------------
Jason House said :
------------------
Looking superficially...
The game length appears to be in the right ballpark. I seem to remember
110-112 moves, depending on how passes are counted.
20k playouts/core/sec seems reasonable for lightly optimized code.
The center bias also looks correct.

The win rates don't look right to me. A 7.5 komi gives white a
significant edge in random playouts, resulting in roughly a 40-60 split
(don't take those numbers as exact).

Answer :
--------
This particular experiment was intended to take a komi of 0.5 into account
when determining the winning percentage.
(Note: the average score doesn't take komi into account.)
My implementation uses an integer komi (truncating the 0.5 part), with black
losing in case of a draw.
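
That convention (integer komi, black loses draws) is equivalent to a half-point komi and can be sketched in one line. This is my own illustration of the described rule, not the poster's code:

```python
def winner(black_points, white_points, komi=6):
    """Decide the winner with an integer komi (the 0.5 is truncated).

    Black loses draws, which plays the role of the missing half point:
    requiring black > white + 6 is the same as beating a 6.5 komi.
    Returns 'B' or 'W'.
    """
    return 'B' if black_points > white_points + komi else 'W'
```

With `komi=0` this gives exactly the 0.5-komi behaviour used for the winning-percentage experiment above.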


Don Dailey said :
-----------------
Just as a suggestion, you
might as well put in the infrastructure for GTP and simple game logic so
that you have reference bots.

Answer : 
-------
I just did that.


Don Dailey said :
-----------------
Do you mean "black is the first player to move to this point?"  or do
you mean, black at some point in the game moved there?   (I'm not sure
it would be much different, but it should be checked.)

Answer : 
--------
"black is the first player to move to this point". Is correct.
  I maintain a "has already been played" map for each simulation,
  and i discard any move that has already been played on a point that was 
previously occupied sooner in the simulation,
  while computing the AMAF score.

Christoph Birk said :
--------------------
> To Don and Christoph: I realize that I was probably not as clear as I
> thought I was.
> I have built a light simulator. There are no trees involved. It only
> chooses a move with equiprobability from the set of empty points on
> the board.

That's exactly what 'myCtest-xxk' is doing.

Answer :
--------
I do not consider it to be the same thing.
 I was talking about a SINGLE simulation at this point.
 This SINGLE simulation was done by picking a random move at each step,
 until the game eventually ends. However, I now have AMAF GTP bots as well.
 
 
 


_______________________________________________
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/