RE: [computer-go] Pure light playout + RAVE bot.

2009-12-19 Thread Denis fidaali

 Hi,

 Would it be possible to put one of those bots on the 19x19 server? I'd really like to know their strength (19x19) at some fixed setting, for future reference :)

> From: lukasz@gmail.com
> Date: Sat, 19 Dec 2009 05:53:29 +0100
> To: computer-go@computer-go.org
> Subject: [computer-go] Pure light playout + RAVE bot.
> 
> Just FYI:
> Bots: e.21a e.21b e.21c e.21d on CGOS
> are identical and make 100k light playouts per move. And have MCTS+RAVE.
> 
> Regards,
> Lukasz Lew

RE: [computer-go] Reference Montecarlo TreeDecision Bot.

2009-12-13 Thread Denis fidaali


I like standard reference bots a lot :) I think they are very useful for promoting computer go and helping newcomers: they give a newcomer a very good basis for confidence in a new implementation. So it would certainly be useful if people could agree on a reference Monte Carlo tree bot (and provide some reference implementations in popular languages).
It would obviously be based on the reference light bot.

 Usually people begin by trying their hand at 9x9. For 9x9, basic UCT is viable. I think you get about a 50% win rate against GNU Go with 50k light playouts and UCT.

First we have to agree on a "standard" UCT formula. I think the one at http://senseis.xmp.net/?UCT is okay. (first hit on Google for "uct algorithm" :) )

1/ UCT formula -> UCTValue(parent, n) = winrate + sqrt((ln(parent.visits))/(5*n.nodevisits))

UCT tries each node at least once, so we have to agree on an ordering policy for nodes. Simplicity would call for A1, A2 ... A9, B1 ... B9, C1 ... C9, etc.: upper-left corner first, then the upper edge, and so on. But I suppose most people would find that very uncomfortable (it gives priority to the moves known to be worst in the game :) ) so ...

2/ If parent.visits > 1000, pick a move according to the UCT policy; if parent.visits <= 1000, pick a move with the light playout policy.

I think once we agree on 1/ and 2/, we can build a reference for pure UCT light-playout bots, as sketched below.
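
A minimal Java sketch of rules 1/ and 2/, assuming a simple Node type; all names here (Node, UctSelect, select) are illustrative, not part of any agreed reference:

import java.util.List;
import java.util.Random;

class Node {
    int visits, wins;
    List<Node> children;
    double winRate() { return visits == 0 ? 0.0 : (double) wins / visits; }
}

class UctSelect {
    static final Random rng = new Random();

    // Rule 1/: winrate + sqrt(ln(parent.visits) / (5 * n.nodevisits)),
    // trying each unvisited child at least once.
    static Node byUct(Node parent) {
        Node best = null;
        double bestValue = Double.NEGATIVE_INFINITY;
        for (Node child : parent.children) {
            double value = (child.visits == 0)
                ? Double.POSITIVE_INFINITY
                : child.winRate()
                  + Math.sqrt(Math.log(parent.visits) / (5.0 * child.visits));
            if (value > bestValue) { bestValue = value; best = child; }
        }
        return best;
    }

    // Rule 2/: below 1000 parent visits, fall back to the light playout
    // policy (uniform random over the children here).
    static Node select(Node parent) {
        if (parent.visits <= 1000) {
            return parent.children.get(rng.nextInt(parent.children.size()));
        }
        return byUct(parent);
    }
}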

//-

 Alternatively, I would appreciate it if a reference UCT pure-light implementation existed for 19x19. I think the main difference here is that pure UCT is hardly an option. So I propose this:

 1/ UCT formula -> UCTValue(parent, n) = winrate + sqrt((ln(parent.visits))/(5*n.nodevisits))

 2/ If parent.visits > 1000, pick a move according to the UCT policy (where winrate would be (realVisit*realWinRate + AMAFwinRate)/(realVisit+1)); if parent.visits <= 1000, pick a move according to the AMAF light playout policy.
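
Transcribed literally into Java (field names hypothetical), the blend in 2/ weights the AMAF estimate like a single extra visit, so its influence fades as real visits accumulate:

static double blendedWinRate(int realVisits, double realWinRate, double amafWinRate) {
    // (realVisit * realWinRate + AMAFwinRate) / (realVisit + 1), as proposed above
    return (realVisits * realWinRate + amafWinRate) / (realVisits + 1);
}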








> Date: Sat, 12 Dec 2009 16:39:56 -0800
> From: br...@slesinsky.org
> To: computer-go@computer-go.org
> Subject: [computer-go] Gongo: Go in Go
> 
> Thought I'd announce that I've ported the Java refbot to the Go
> language (with some modifications).
> 
> I'm getting about 10,000 random playouts/second on 9x9 using a single
> thread on a 32-bit iMac, using the gc compiler, which doesn't do any
> optimization. I suspect that a board structure that tracked
> pseudo-liberties could do better.
> 
> I probably won't have a chance to work on this for a while, but I
> think the next step might be some kind of tree search. I'd be
> interested in a particularly simple, standard, and easy-to-port
> implementation to use as reference.
> 
> Source code:
> http://github.com/skybrian/Gongo
> 
> Previous discussion on the Go language list:
> http://groups.google.com/group/golang-nuts/browse_thread/thread/99ab46f5b7219a5b/22e58d9223db10ef
> 
> - Brian

RE: [computer-go] [ANNOUNCE] BitmapGo mockup version 2.0 (for MS Visual C++ and g++)

2009-12-08 Thread Denis fidaali


 In (at least) some implementations, it is far easier to check for captures than to check for atari.

 It sounds very close to what you do, the difference being:
"does the group have exactly 1 liberty?" (atari)
versus "does the group have exactly 0 liberties?" / "at least 1 liberty?"

The latter is faster:
 - it is faster when using the concept of pseudo-liberties (which are faster to maintain incrementally);
 - unless you have some really fast way of counting the 1s and 0s in an integer, it is also faster when using bitmaps for liberties (you then only have to check whether the map is all zeros or not).

BUT most advanced engines have a LOT of use for the atari status of strings. If speed is your only goal, you'll probably have to use the "has no liberty?" test as much as you can. This probably leads to: make the move, then check the zero-liberty status of the adjacent enemy strings and of the newly merged friendly group, and cancel the move if it turns out to be illegal. A sketch of that test follows.
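
A sketch of that "play first, check after" test in Java, assuming a hypothetical Board type that maintains pseudo-liberty counts incrementally (all names are mine, not from any particular engine):

interface Board {
    void play(int point, int color);      // place a stone, update pseudo-liberties
    void undo();                          // roll back the move, including captures
    int[] neighbors(int point);           // the 2-4 orthogonal neighbors
    int colorAt(int point);
    int pseudoLiberties(int point);       // pseudo-liberty count of the string at point
    void removeString(int point);         // capture the string at point
}

final class Legality {
    static int opponent(int color) { return -color; }   // e.g. BLACK = 1, WHITE = -1

    static boolean tryPlay(Board board, int point, int color) {
        board.play(point, color);                        // optimistic: place the stone
        // Capture any adjacent enemy string whose pseudo-liberty count hit zero.
        for (int n : board.neighbors(point)) {
            if (board.colorAt(n) == opponent(color) && board.pseudoLiberties(n) == 0) {
                board.removeString(n);
            }
        }
        // If the newly merged friendly string still has zero (pseudo-)liberties,
        // the move was suicide: cancel it.
        if (board.pseudoLiberties(point) == 0) {
            board.undo();
            return false;
        }
        return true;
    }
}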

To me, 10k light playouts (with no self pseudo-eye fills) on 19x19 sounds very fast :)
I don't know how it compares to libego though. But then, does libego provide the "is in atari" status for strings?

It seems to me that having the bitmap for each group of stones on the board, plus the liberty map for each of those groups, is a strong basis for heavy-playout experiments. So your library (if it provides this) would have great potential for all the students running experiments on Monte Carlo tree decision for go :)

Date: Mon, 7 Dec 2009 11:43:05 -0800
Subject: Re: [computer-go] [ANNOUNCE] BitmapGo mockup version 2.0 (for MS   
Visual C++ and g++)
From: rene.vandeveerd...@gmail.com
To: computer-go@computer-go.org

"Strictly legal" refers to all the empty points that provide at least one 
liberty to a friendly group if a friendly stone would be placed on that point. 
The liberty can be provided (i) by an empty neighboring point, (ii) by a 
capture of a neighboring enemy string (i.e., a neighboring enemy string that 
currently is in atari), or (iii) by connecting to a neighboring friendly string 
that is not currently in atari (otherwise I just played on its last liberty). 
Both the second and third criteria require knowledge of the atari status of strings.
[...]
However, the question remains for the first part of the move selection, which 
determines which move candidates are considered "strictly legal".
René  

RE: [computer-go] [ANNOUNCE] BitmapGo mockup version 2.0 (for MS Visual C++ and g++)

2009-12-04 Thread Denis fidaali

So much complexity could be removed by not maintaining atari status for strings (especially for capturing moves), but atari information is essential for my move generation algorithm (to disallow suicides, etc.)

 Question: does the "do not play in your own eye" selection rule require atari information? (my guess is yes) Follow-up question: do you maintain atari status incrementally or determine it on demand?

- It first depends on what you call an eye. At least some bots use pseudo-eyes, which need no information about liberties or atari status. Basically: if a point is empty and surrounded by friendly stones, it is a candidate; then check whether it looks like a false eye. If not, this is a pseudo-eye. This is a very good approximation of an eye, and it prevents the problem of failing to connect when a participating string is in atari. I think most fast light implementations use this definition; a sketch is below.
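
One common way to write that test, sketched in Java on a hypothetical 1D board array with a border frame (the layout and helper arrays are assumptions, shown only to pin down the rule):

final class Eyes {
    static final int EMPTY = 0, BORDER = 2;   // colors: BLACK = 1, WHITE = -1

    static boolean isPseudoEye(int[] board, int[] orthoOffsets, int[] diagOffsets,
                               int point, int color) {
        if (board[point] != EMPTY) return false;
        boolean onEdge = false;
        for (int off : orthoOffsets) {
            int c = board[point + off];
            if (c == BORDER) onEdge = true;
            else if (c != color) return false;   // empty or enemy neighbor: no eye
        }
        // False-eye check on the diagonals: the board edge counts as one bad
        // diagonal, and two or more bad diagonals make the point a false eye.
        int badDiagonals = onEdge ? 1 : 0;
        for (int off : diagOffsets) {
            if (board[point + off] == -color) badDiagonals++;
        }
        return badDiagonals < 2;
    }
}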

- I checked (very quickly) your code. What would help non-C++ speakers like myself would be an overview of your algorithm. I think there are probably a lot of ways to implement bitmap playouts (I tried at least two very different ones), so a quick description of the key principles (like your definition of an eye in the "do not play in your own eye" rule) would be neat.

 You put this commentary right above the scoring method:
/**
 * assumptions
 *   all occupied points are considered alive
 *   black territory cannot have white neighbors
 *   no seki
 */
 I don't think you should need the "no seki" hypothesis for scoring. I think most use the following scoring policy: if an empty intersection is surrounded by friendly stones (no adjacent enemy stone or empty intersection), then it is a point for the surrounding color; any occupied point is a point for the corresponding color. If you score a final position this way and there is a seki, this policy should still give you the correct Chinese score, as in the sketch below.
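
A sketch of that policy in Java, on the same kind of bordered 1D array (layout assumed): stones score for their color, and an empty point scores only when all of its neighbors are stones of one color, which is why seki needs no special case (shared liberties touch both colors and score for nobody):

final class Scoring {
    static final int EMPTY = 0, BLACK = 1, WHITE = -1, BORDER = 2;

    // Returns black points minus white points (Chinese-style area score).
    static int score(int[] board, int[] orthoOffsets) {
        int score = 0;
        for (int p = 0; p < board.length; p++) {
            int c = board[p];
            if (c == BLACK || c == WHITE) { score += c; continue; }
            if (c != EMPTY) continue;                 // border point: scores nothing
            boolean blackAdj = false, whiteAdj = false, emptyAdj = false;
            for (int off : orthoOffsets) {
                int n = board[p + off];
                if (n == BLACK) blackAdj = true;
                else if (n == WHITE) whiteAdj = true;
                else if (n == EMPTY) emptyAdj = true;
            }
            if (!emptyAdj && blackAdj && !whiteAdj) score += 1;
            else if (!emptyAdj && whiteAdj && !blackAdj) score -= 1;
        }
        return score;
    }
}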



Here is a quick description of the bitmap simulator I mostly played with.
Minimal memory footprint: maintain only the black-stone, white-stone, and (constant) border maps between public method calls.
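
A guess at what that minimal signature looks like in Java (one bit per point, packed into longs; all names and the exact layout are assumptions):

final class BitmapBoard {
    static final int SIZE = 19;
    static final int WORDS = (SIZE * SIZE + 63) / 64;   // 6 words for 361 points

    final long[] black  = new long[WORDS];   // bit set = black stone on that point
    final long[] white  = new long[WORDS];   // bit set = white stone on that point
    final long[] border = new long[WORDS];   // constant guard bits, never changes

    static void set(long[] map, int p)    { map[p >>> 6] |= 1L << (p & 63); }
    static boolean get(long[] map, int p) { return (map[p >>> 6] & (1L << (p & 63))) != 0; }

    // The fast "captured?" test from the earlier message: a string is dead
    // exactly when its liberty map is all zeros.
    static boolean isAllZero(long[] map) {
        long acc = 0;
        for (long w : map) acc |= w;
        return acc == 0;
    }
}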

  

RE: [computer-go] [ANNOUNCE] Computer Go 2009 - "complete" state of art

2009-12-04 Thread Denis fidaali


 Hi,
I think it is a useful overview, which can help dig through this list, or even help avoid having to dig through it :)

> Date: Fri, 4 Dec 2009 12:51:00 +0100
> From: pa...@ucw.cz
> To: computer-go@computer-go.org
> Subject: [computer-go] [ANNOUNCE] Computer Go 2009 - "complete" state of art
> 
>   Hello!
> 
>   Today I held a presentation on the "complete" state of art in
> Computer Go, showing hopefully all the current widely applicable
> results, most successful strategies and worst current problems (in my
> interpretation, of course) - you can find it at:
> 
>   http://pasky.or.cz/~pasky/go/
> 
>   The presentation has a section introducing basic Go rules and tactics
> so it should be suitable even for AI researchers not very familiar with
> Go. Of course it is not perfect and some parts assume accompanying
> explanations and few formulas on blackboard, but I think it could still
> be good material to give quick overview on currently used approaches
> with sufficient depth to satisfy a computer science scholar, plus
> quick references to the papers.
> 
>   P.S.: One IMHO important thing I now realize I've missed in the "open
> problems" section is parallelization on GPU.
> 
>   P.P.S.: The presentation also shows some preliminary (noticeably
> positive) results of naive dynamic komi application in handicap games.
> I plan to publish this later in a paper when I try more approaches and
> do more comprehensive testing.
> 
>   Hope it's useful,
> 
>   Petr "Pasky" Baudis

RE: [computer-go] Where to find AmigoGtp ?

2009-12-03 Thread Denis fidaali


 Thanks a lot. Google is really nice when you just happen to know the right keywords.
Strangely, if I type "amigogtp" now, I do find links .. but I'm pretty sure I got totally unrelated stuff when I tried it. I don't think I was drunk ..
So I guess I should apologize for the question then.

> Date: Thu, 3 Dec 2009 13:20:29 +0100
> From: pa...@ucw.cz
> To: computer-go@computer-go.org
> Subject: Re: [computer-go] Where to find AmigoGtp ?
> 
>   Hi!
> 
> On Thu, Dec 03, 2009 at 09:24:33AM +0100, Denis fidaali wrote:
> >  AmigoGtp seems a very powerful tool for tuning very weak bots (like pure AMAF
> > bots ..) on 19x19. GnuGo tends to be much too strong to be of any use, at
> > least with my ultra-light playout policy (which doesn't even forbid
> > one-eye fills). On the contrary, the AmigoGtp on CGOS 19x19 seems to provide
> > an interesting opponent.
> > 
> >  So I'd like to get it working on my home computer with twogtp. Where can I
> > download a ready-for-Linux AmigoGtp build?
> 
>   A 10s Google query points me at http://amigogtp.sourceforge.net/ ;)
> 
>   Petr "Pasky" Baudis

[computer-go] Where to find AmigoGtp ?

2009-12-03 Thread Denis fidaali

Hello there.

 AmigoGtp seems a very powerful tool for tuning very weak bots (like pure AMAF bots ..) on 19x19. GnuGo tends to be much too strong to be of any use, at least with my ultra-light playout policy (which doesn't even forbid one-eye fills). On the contrary, the AmigoGtp on CGOS 19x19 seems to provide an interesting opponent.

 So I'd like to get it working on my home computer with twogtp. Where can I download a ready-for-Linux AmigoGtp build?


 
  

[computer-go] stoTest 19x19

2009-11-30 Thread Denis fidaali


 stoTest is a (most certainly buggy) AMAF Monte Carlo bot.

Its main feature is that it doesn't use the notion of an eye (it only uses capture rules). So basically it plays a random number of moves and "scores" the result: each stone gives a point to its owner, and each empty point surrounded by a single color gives one point to that color.

 test01 used "winning percentage" for the first three games, then I switched to "average score". Surprisingly, it managed to win some games on 19x19 (against the weakest bots, for sure).
  

RE: [computer-go] RefBot (thought-) experiments

2008-12-16 Thread Denis fidaali


 I agree that the experiment is interesting in itself.
 I also agree that it's hard to draw any conclusion
 from it :) Running the games to the end would probably
 give a near-0% win rate for the AMAF bot.

 Running the 5k bot against the 100k bot is certainly
 something you would want to do if you were to argue
 that 5k is indeed stronger. Although it might also
 be that for some reason the 5k bot is better in
 the opening. The 5k bot has a wider range of choices
 while playing than the 100k bot, so it's easy
 to imagine that it plays the good (opening) moves more often.

 All in all, trying to assess the strength of a bot
 is awfully hard. It can make very good moves
 and yet be very weak. It can have good global
 perception, or good move ordering, and be very
 weak. It can predict pro moves with incredible
 accuracy, and still be very weak. (although you
 could then use this prediction feature
 in a Monte Carlo bot - CrazyStone).

 I guess any hard data will always be welcome. Your
 experiment was very original, in that few people
 would have tried it. I have no idea what one should
 conclude from it. But it certainly can't hurt our
 understanding :) (or un-understanding) Maybe some day
 someone will look back at this particular experiment
 and come up with the next computer-go revolution :)

> Date: Mon, 15 Dec 2008 21:10:07 -0200
> From: tesujisoftw...@gmail.com
> To: computer-go@computer-go.org
> Subject: Re: [computer-go] RefBot (thought-) experiments
> 
> Weston,
> 
> Although those result sound intriguing, it also looks like a
> convoluted experiment. I wouldn't call gnu-go an expert judge,
> although it is an impartial one. The fact that it says that the 5K
> ref-bot is ahead after 10 moves 46% of the time alone makes it suspect
> in my eyes. But it is curious it consistently shows a much lower
> percentage for the bot with more playouts.
> 
> It would have been much more persuasive if you had simply run a 5K
> playout bot against a 100K bot and see which wins more. It shouldn't
> take much more than a day to gather a significant number of games.
> twogtp is perfect for this. Or connect both to CGOS and see which ends
> up with a higher rating. But in that case it will take a week or more
> before you get conclusive data. Unless the difference is really clear.
> 
> I did in fact put up a 100K+ ref-bot on CGOS for a little while, and
> it ended up with a rating slightly (possibly insignificantly) higher
> than the 2K ref-bot. Maybe I didn't put it there long enough,
> certainly not for thousands of games. But it didn't look anywhere near
> to supporting your findings.
> 
> I say 100K+ because I didn't set it to a specific number, just run as
> many as it could within time allowed. Generally it would reach well
> over 100K per move, probably more like 250K-500K. That should only
> make things worse according to your hypothesis.
> 
> So although I think the result of your experiment is very curious, I
> think it might be a bit hasty to draw your conclusion.
> 
> Mark
> 
> 
> On Mon, Dec 15, 2008 at 8:30 PM, Weston Markham
>  wrote:
> > Hi.  This is a continuation of a month-old conversation about the
> > possibility that the quality of AMAF Monte Carlo can degrade, as the
> > number of simulations increases:
> >
> > Me:  "running 10k playouts can be significantly worse than running 5k 
> > playouts."
> >
> > On Tue, Nov 18, 2008 at 2:27 PM, Don Dailey  wrote:
> >> On Tue, 2008-11-18 at 14:17 -0500, Weston Markham wrote:
> >>> On Tue, Nov 18, 2008 at 12:02 PM, Michael Williams
> >>>  wrote:
> >>> > It doesn't make any sense to me from a theoretical perspective.  Do you 
> >>> > have
> >>> > empirical evidence?
> >>>
> >>> I used to have data on this, from a program that I think was very
> >>> nearly identical to Don's reference spec.  When I get a chance, I'll
> >>> try to reproduce it.
> >>
> >> Unless the difference is large, you will have to run thousands of games
> >> to back this up.
> >>
> >> - Don
> >
> > I am comparing the behavior of the AMAF reference bot with 5000
> > playouts against the behavior with 10,000 playouts, and I am only
> > considering the first ten moves (five from each player) of the (9x9)
> > games.  I downloaded a copy of Don's reference bot, as well as a copy
> > of Mogo, which is used as an opponent for each of the two settings.
> > gnugo version 3.7.11 is also used, in order to judge which side won
> > (jrefgo or mogo) after each individual match.  gnugo was used because
> > it is simple to set it up for this sort of thing via command-line
> > options, and it seems plausible that it should give a somewhat
> > realistic assessment of the situation.
> >
> > jrefgo always plays black, and Mogo plays white.  Komi is set to 0.5,
> > so that jrefgo has a reasonable number of winning lines available to
> > it, although the general superiority of Mogo means that egregiously
> > bad individual moves will be punished.
> >
> > In the games played, Mogo would occasionally crash.  (This 

RE: [computer-go] latest and greatest ref bots?

2008-12-09 Thread Denis fidaali


Computer-go isn't exactly first on google searches.
It may get some attention latter,
if students and proffessors sudenly get interested
into it. It would probably mean quite some work
to get together the material that would make it
very easy to browse through everyones contribution
(for interested student) and make it easy
for teacher to gather  all the material they need.

It would be cool if it could become a standard project
for students to realize. I think it's easy enougth with
montecarlo techniques that it really has the potential.
AMAF is strong enougth that any beginner will be 
impressed (and the weighted AMAF is even better).
And then advanced student can always include
some tree-logic.

> From: [EMAIL PROTECTED]
> Subject: Re: [computer-go] latest and greatest ref bots?
> Date: Mon, 8 Dec 2008 20:32:50 -0200
> To: computer-go@computer-go.org
> CC: 
> 
> Well, I don't know about 'latest' or 'greatest', but I've posted a  
> Java implementation of Don's reference definition a while ago. And my  
> recent effort to come to a MCTS reference implementation is there as  
> well.
> 
> http://plug-and-go.dev.java.net
> 
> I don't think the site has seen much traffic yet.
> 
>   Mark
> 
> On 8-dec-08, at 19:41, terry mcintyre wrote:
> 
> > What's the status of the greatest-and-latest reference bots? Are  
> > the sources available anywhere?
> >
> >  Terry McIntyre <[EMAIL PROTECTED]>
> >
> >
> > -- Libertarians Do It With Consent!
> >
> >
> >
> >


RE: [computer-go] Mogo MCTS is not UCT ? (GENERALIZED_AMAF)

2008-12-02 Thread Denis fidaali

In the mentioned article it is said that:

"A weakness remains, due to the initial time: at the very beginning, neither AMAF values nor standard statistics are available"

I have developed a model that aims at healing that part. It produces virtual simulations out of an arbitrary ancestor, at a rate of 1/2 for each node you go through. I called that process "GENERALIZED_AMAF". I'm not familiar with mathematical terms, but I think the idea is dead simple:



 AMAF goes like this:
Evaluate the expected result KNOWING that move A has been played.

GENERALIZED_AMAF:
Evaluate the expected result KNOWING that move A AND move B AND move C have been played.


--

You can easily build, for example, an AMAF map which records both the moves that have been marked as "played" by black in one part, and the moves that have been "played" by white in the other part, associated with the result (black win/lose).

Given a node, classical AMAF would then check each of those maps, for each simulation starting from that node, wherever the child you are trying to score has been played.

Generalized AMAF can build statistics out of simulations from arbitrary ancestors of the considered node. You would get (1/2)^n of the ancestor's simulations as GENERALIZED_AMAF simulations (n being the number of choices made from the ancestor down to the node you assess).



Suppose that you have 1000 simulations for a root node R. Then AMAF gives you about 500 simulations for each child; let's call those children Cn, where n is the id of the child. Cn also denotes the move made from R to reach that child.

Then for each child of Cn, you can get about 250 generalized-AMAF simulations: from the set of simulations started at the root, you would keep only those where Cn has been played, and then aggregate the results in the usual AMAF fashion for each son of Cn. A sketch of both aggregations is below.
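
A sketch of both aggregations in Java, assuming one record per playout as described (every name here is illustrative):

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class PlayoutRecord {
    final Set<Integer> blackMoves = new HashSet<Integer>();   // moves "played" by black
    final Set<Integer> whiteMoves = new HashSet<Integer>();   // moves "played" by white
    boolean blackWin;                                         // the associated result
}

final class Amaf {
    // Classical AMAF: over the given playouts, the win rate of those where
    // `move` was played by the side we are scoring for.
    static double amafWinRate(List<PlayoutRecord> playouts, int move, boolean black) {
        int wins = 0, games = 0;
        for (PlayoutRecord p : playouts) {
            Set<Integer> moves = black ? p.blackMoves : p.whiteMoves;
            if (moves.contains(move)) {
                games++;
                if (p.blackWin == black) wins++;
            }
        }
        return games == 0 ? 0.5 : (double) wins / games;
    }

    // Generalized AMAF: keep only the ancestor's playouts in which the
    // conditioning move (e.g. Cn) was played, then aggregate as usual.
    // Each conditioning step roughly halves the usable playouts, which is
    // where the (1/2)^n rate above comes from.
    static double generalizedAmaf(List<PlayoutRecord> ancestorPlayouts,
                                  int conditionMove, boolean conditionBlack,
                                  int move, boolean black) {
        List<PlayoutRecord> kept = new ArrayList<PlayoutRecord>();
        for (PlayoutRecord p : ancestorPlayouts) {
            Set<Integer> moves = conditionBlack ? p.blackMoves : p.whiteMoves;
            if (moves.contains(conditionMove)) kept.add(p);
        }
        return amafWinRate(kept, move, black);
    }
}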

My prototype was scoring 30% wins against GNU Go level 0, without any tuning, using plain standard light simulations. It used generalized AMAF as a way to get RAVE values, and then guaranteed that the most promising nodes were explored exponentially more often (it used a raw non-deterministic algorithm). I did not, however, try to compare that win ratio with the win ratio I would have got out of standard AMAF.

Best regards,
Denis FIDAALI.

> Date: Mon, 1 Dec 2008 21:55:03 +0100
> From: [EMAIL PROTECTED]
> To: computer-go@computer-go.org
> Subject: Re: [computer-go] Mogo MCTS is not UCT ?
> 
> >  I think it's now well known that Mogo doesn't use UCT.
> > I realize that i have no idea at all what Mogo do use for
> > it's MCTS.
> 
> A complicated formula mixing
> (i) patterns (ii) rules (iii) rave values (iv) online statistics
> 
> Also we have a little learning (i.e. late parts of simulations
> are evolved based on online statistics and not only the early parts).
> 
> > I really wonder if there was an article describing
> > the new MCTS of mogo somewhere that i missed.
> > How is it better than UCT ?
> 
> http://www.lri.fr/~teytaud/eg.pdf contains most of the information
> (many other things
> have been tried and kept as they provided small improvements, but
> essentially the
> ideas are in this version)
> 
> Best regards,
> Olivier

RE: [computer-go] Mogo MCTS is not UCT ?

2008-12-01 Thread Denis fidaali


 Let's assume that the UCT formula is
UCTValue(parent, n) = winrate + sqrt((ln(parent.visits))/(5*n.nodevisits))
(taken from Sensei's Library)

What is the upper confidence bound term? Wouldn't that be sqrt((ln(parent.visits))/(5*n.nodevisits))?

I doubt that exploring only the move with the best winrate would lead to fast enough convergence, even on 9x9. Is that what you meant by dropping the upper confidence bound term? Otherwise, what does the formula without the upper confidence bound term look like?

My understanding is that MoGo dropped the upper confidence bound portion. That makes it a bit faster, but still deterministic for a given set of playout results. Heuristics and RAVE give a sufficiently good move ordering that less exploration is needed. IIRC, Valkyria still uses UCT, but with a very low coefficient on the upper confidence bound term.
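
For what it's worth, the two selection values being contrasted can be put side by side (a sketch; the names are mine). Dropping the exploration term leaves the bare winrate, plus whatever heuristic or RAVE priors are folded into it, which is exactly what makes the search deterministic for a fixed set of playout results:

static double uctValue(double winrate, int parentVisits, int visits) {
    return winrate + Math.sqrt(Math.log(parentVisits) / (5.0 * visits));
}

static double withoutUcbTerm(double winrate) {
    return winrate;   // exploration must then come from priors / move ordering
}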


[computer-go] Mogo MCTS is not UCT ?

2008-12-01 Thread Denis fidaali


 I think it's now well known that Mogo doesn't use UCT. I realize that I have no idea at all what Mogo does use for its MCTS.

There are only two things I dislike about UCT:
- It's slow to compute.
- It's deterministic.

I really wonder if there is an article somewhere, which I missed, describing the new MCTS of Mogo. How is it better than UCT?


RE: [computer-go] Mogo Opening, Building Strategy ?

2008-11-30 Thread Denis fidaali

Thanks a lot for your quick answer.

By conjecture, I suppose you mean that no experiment has yet been run to assess this hypothesis?

I think Sylvain (and maybe just everyone else) has tried at some point to use a UCT decision bot as a way to get the simulations done, and then to use those high-level simulations in another UCT decision tree (or AMAF, or first-move winning stats). From what I recall, the results were disappointing. Also, Don Dailey tried to build an AMAF over a bot that would use AMAF as a decision, with very little expectation that this would lead to anything worthy. I don't know how hard Sylvain tried at the time.

Yet you have this feeling that using high-level Mogo games as a way to get a simulation done could lead to interesting results. I also have this feeling. For example, it is well known that Mogo-style decisions (or Crazy Stone's) lead to a very poor understanding of seki (and/or semeai?). Wouldn't the use of high-level games as simulations lead to a better understanding of those really nasty situations?

 Then, I guess that if nobody has ever run any experiments measuring the efficiency of growing the UCT tree against using high-level simulations, there must be a reason ... Is it that it is known it would consume too much time and resources? Is it that knowing the result of this measurement would prove of little value?

If there is a point where high-level simulations really give a stronger evaluation function, wouldn't it be good to know about it?

Date: Sun, 30 Nov 2008 10:10:14 +0100
From: [EMAIL PROTECTED]
To: computer-go@computer-go.org
Subject: Re: [computer-go] Mogo Opening, Building Strategy ?

> Is there any theoretical reason for the Mogo opening being built out of
> self-play, rather than by spending time increasing the number of simulations
> at the root, and after a time, keeping what seems to be the best?

There are practical reasons: our approach can be used with humans or other programs as opponents as well; we can benefit from games launched for other purposes than opening-building; and we can easily parallelize the algorithm on grids.

No other reason, at least for me - but these reasons are enough I guess. The alternate approach is nice, but is difficult to use for tens of years of CPU - whereas using preemptable mode on grids, we can have access to a huge computational power.

From a more technical point of view, I think that the idea of using results of games of strong versions of mogo is better for avoiding biases in the MC. But it's only a conjecture.

Olivier
 



[computer-go] Mogo Opening, Building Strategy ?

2008-11-30 Thread Denis fidaali


 Hi.

Is there any theoretical reason for the Mogo opening being built out of self-play, rather than by spending time increasing the number of simulations at the root and, after a time, keeping what seems to be the best?

 Obviously one could object that increasing the number of simulations would lead to running out of memory. But then it seems that there are ways of pruning the less-explored branches. Another thing is that it may be far easier to get massively parallel processing of self-play.

 So I really wonder what would be best, both for time efficiency and for strength: spending a lot of computer power refining the tree, or using self-play from a position to assess its value? What was the reason behind the choice of the Mogo team?


RE: [computer-go] RAVE formula of David Silver (reposted)

2008-11-29 Thread Denis fidaali


 From my own experience, an important test case whenever trying to get AMAF to work is a scaling study.

 My prototype was quite strong considering that it used only 1000 light playouts (it scored 25-30% wins against GNU Go level 0), but it seemed not to get much above that as the number of playouts grew ... (it also had a serious exponential-complexity problem, which I never went to the trouble of investigating :) )

 I know that Zoe was about 2000 Elo with, I think, 50k simulations, and ... never got any real better as the number of simulations increased.

Both prototypes were toying with AMAF, so I really think you need a bit of a scalability study whenever trying to assess an engine employing it. (although it could very well be that the scalability trouble came out of some nasty bugs; both aforementioned prototypes were quite messy ...)

> From: [EMAIL PROTECTED]
> Subject: Re: [computer-go] RAVE formula of David Silver (reposted)
> Date: Sat, 29 Nov 2008 03:39:58 -0200
> To: computer-go@computer-go.org
> CC: 
> 
> 
> On 28-nov-08, at 17:28, [EMAIL PROTECTED] wrote:
> 
> > I would be very interested to see the RAVE code from Valkyria. I'm  
> > sure others would be too.
> >
> 
> I'm much more interested in a general concise description. If such a  
> description cannot be given easily, then I think there's little point  
> including it in the definition of a MCTS reference engine.
> 
> I found a serious flaw in my code collecting the AMAF scores, which  
> explains why I wasn't seeing any gains so far with AMAF turned on.  
> Now over the first 100+ games UCT+RAVE scores 90% over plain UCT. I'm  
> going to run a test overnight, but so far it looks good. It should  
> have collected a few thousand samples by tomorrow.
> 
> Hopefully next week I can go back to testing my list of playout  
> improvements, which is why I started making the MCTS reference  
> implementation in the first place. This RAVE stuff caused a bit of a  
> distraction, but it's nice to have if it works.
> 
> Mark

RE: [computer-go] Re: hex robot

2008-11-28 Thread Denis fidaali

Thanks a lot.
It seems that there are many more resources for hex out there than one would think looking at the results from Google. It's just scattered everywhere in an impossibly deep labyrinth :)

At some point, I expect to need a good player to get some feedback about some of the engines. Do you happen to have any suggestion how I should do that? (I don't expect to need it for at least a month though, so I ask just in case there is an obvious easy way.)

But then maybe I'll be able to simply use the link you provided as a good reference: there is a bot included. Do you happen to know how strong this version is? (beginner, average, good?)
I know very little about the strength of hex players. It seems that there are far fewer hex players than go players or chess players, so I wouldn't expect there to be many "super-experts" like go pros or chess grandmasters.

> Date: Fri, 28 Nov 2008 12:09:16 +0200
> From: [EMAIL PROTECTED]
> To: computer-go@computer-go.org
> Subject: Re: [computer-go] Re: hex robot
> 
> 
> > GTP you can simply use as-is, I don't see why that wouldn't work.
> > GoGui is also open-source and can possibly also be easily adapted to
> > Hex as well. But to be honest, I don't really need a Gui that much.
> > But twogtp is really useful.
> HexGui already exists. It uses GTP. Here's a link:
> http://mgame99.mg.funpic.de/downloads.php
> 
> -- 
>  Tapani Raiko, <[EMAIL PROTECTED]>, +358 50 5225750
>  http://www.cis.hut.fi/praiko/
> 

RE: [computer-go] Monte-Carlo Tree Search reference bot

2008-11-28 Thread Denis fidaali

 My own experiments showed a very big increase in strength, especially with about 1000 light playouts. I had many features and parameters which may make a difference, but my main hypothesis was that they shouldn't ...

I think that I weighted real simulations (i.e. the first move) 10 times more than virtual ones (i.e. AMAF moves).

I assume that by expansion you mean allocating data for every child of a node. So let's assume that your root node is called A: you would then allocate for every one of its children. Then, when running a simulation from A, you would first apply the UCT formula to determine which first move to make ... then you would both collect the AMAF score of every move played during the simulation, for virtual scoring, AND use the true first move for real scoring (the normal UCT part).

Of course, this shouldn't behave exactly like AMAF, because for one thing you get a better judgment of the first move because of UCT, and you also make the first move weigh more. (Although you probably should technically also do so with AMAF, despite the first move being still 100% random, which is part of what weighted AMAF does. But the first-move weight would be arbitrary, thus defeating the purpose of a standard way.)

Last, you could try to decrease the weight of the virtual score part as the number of simulations grows, like with (if nbsim < 1000):

winRatio = [realRatio*(1000+nbsim)]/3000 + [virtualRatio*(2000-(1000+nbsim))]/3000

I'm not sure about the formula :) but you get the idea of taking the realRatio more and more into account. I think what I did was not take the realRatio into account at all at first when estimating the winRatio, and then increase its weight linearly up to the point where I wouldn't take the virtualRatio into account at all. Something like the sketch below.
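
A normalized Java reading of that fade (the 1000-simulation horizon is the example value above; since the formula itself is stated as uncertain, this is one plausible interpretation, not the prototype's actual code):

static double blendedWinRatio(double realRatio, double virtualRatio, int nbSim) {
    if (nbSim >= 1000) return realRatio;    // past the horizon: real ratio only
    double w = nbSim / 1000.0;              // 0 -> virtual only, 1 -> real only
    return w * realRatio + (1.0 - w) * virtualRatio;
}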


The drawback with all that is that you will certainly have to make several versions, although they would share a lot of code. I don't think that trying to get one version to behave as several is a good idea. You should probably keep the code frozen once you have tested it a lot, for reference purposes. Then you'll be able to know exactly the strength of a particular piece of software, even two years from now. I didn't do that. I tried to make it evolve. And with poor svn versioning capacity, and poor management of the link between measures and software, I'm now at a point where all I can do is make random guesses, although I made millions of experiments in the past years :) I just can't link those experiments back to a fixed piece of software.

 Then, if by any means you think a piece of code is bug-free, which is hard to bet on, you really should put that somewhere, with all the proof (and experiments) you can associate with it (even on a web page if you ask me, or in a thesis). To me that is the most useful part of the standard ref bot: because a lot of people have tried it, you can be sure of the exact link between the software and the experimental results. It makes assertions reproducible, and that's really great.


> From: [EMAIL PROTECTED]
> Subject: Re: [computer-go] Monte-Carlo Tree Search reference bot
> Date: Fri, 28 Nov 2008 01:48:09 -0200
> To: computer-go@computer-go.org
> CC: 
> 
> 
> On 27-nov-08, at 19:50, Denis fidaali wrote:
> 
> > So, you use AMAF for "simulating" the first UCT evaluations ?
> > I though the classical way to use AMAF, was to affect only the
> > win/lose ratio portion of the uct equation.
> > Obvioulsy it should be allowed to use an arbitrary
> > large number of AMAF simulation accumulating them longer
> > than what it take to expand a node.
> > I think that a classical way to affect the win/ratio is to
> > decrease the effect of the AMAF correction as the number
> > of simulation grows.
> >
> > If you test with a very low number of simulation
> > (in the 1000 - 3000 range), i think you should be
> > able to get out a very nice improvement out of the
> > AMAF version. If you don't, i would think that something
> > is wrong somewhere.
> >
> > What test process do you use for this version ?
> 
> I tested it mostly doing 2,000 playouts. When AMAF is true I create a
> map of virtual-win values of all the moves played during a playout.
> These values get accumulated over all the playouts (not just the
> first ones). The 'virtual-value' of a move is calculated as follows:
> 
> exploration-factor * UCT + ((nr-wins*2 + nr-virtual-wins) / (nr-playouts*2 + nr-virtual-playouts))
> 
> where the exploration-factor is currently sqrt(0.2) and UCT is
> sqrt(log(nr-parent-playouts) / (nr-playouts+1))
> 
> Like I said, I haven't had time to experiment much so this formula
> may not be any good. I had also exp
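
Mark's virtual-value formula above, transcribed literally into Java for readability (the parameter names are mine, not from his code; callers must guard against a node with no real or virtual playouts):

static double virtualValue(int nrWins, int nrVirtualWins,
                           int nrPlayouts, int nrVirtualPlayouts,
                           int nrParentPlayouts) {
    double explorationFactor = Math.sqrt(0.2);
    double uct = Math.sqrt(Math.log(nrParentPlayouts) / (nrPlayouts + 1.0));
    return explorationFactor * uct
         + (nrWins * 2.0 + nrVirtualWins) / (nrPlayouts * 2.0 + nrVirtualPlayouts);
}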

RE: [computer-go] Monte-Carlo Tree Search reference bot

2008-11-27 Thread Denis fidaali

So, you use AMAF for "simulating" the first UCT evaluations? I thought the classical way to use AMAF was to affect only the win/lose-ratio portion of the UCT equation. Obviously it should be allowed to use an arbitrarily large number of AMAF simulations, accumulating them for longer than what it takes to expand a node. I think a classical way to affect the win ratio is to decrease the effect of the AMAF correction as the number of simulations grows.

If you test with a very low number of simulations (in the 1000-3000 range), I think you should be able to get a very nice improvement out of the AMAF version. If you don't, I would think that something is wrong somewhere.

What test process do you use for this version?

> From: [EMAIL PROTECTED]
> Date: Thu, 27 Nov 2008 18:15:34 -0200
> To: computer-go@computer-go.org
> CC: 
> Subject: [computer-go] Monte-Carlo Tree Search reference bot
> 
> I have added a MCTS implementation to my reference-bot project. It  
> allows you to specify how many nodes to search, after how many nodes  
> to expand and whether to use AMAF or not. If you specify the number  
> of nodes before expansion to be the same as the total number of nodes  
> to search and set AMAF to true you get the equivalent of Don's  
> original ref-bot. When you set AMAF to false and set the number of  
> nodes before expansion to 1 you get my original UCT search algorithm.
> 
> Between those extremes there might be a good formula to combine AMAF  
> with tree-search, but unfortunately I haven't had time lately to look  
> for one. The few small attempts I made show no benefit using AMAF in  
> tree-search, only when used on a single level. The contrast between  
> the two extremes is very stark, so I'm actually convinced there
> should be a way to use both. This implementation is also quite a bit  
> slower than my original search algorithm but I also didn't have time  
> yet to trace it. It might simply be due to the different expansion  
> method, which is much more expensive with a value of 1. Also,  
> searching for the best UCT node gets more expensive with more  
> (unused) nodes on each level. Using a higher expansion value may  
> alleviate the performance hit. Anyway I think this is a reasonable  
> starting point.
> 
> At first I intended to create a different project for the search  
> reference bot. But half of the code (the MC part) is the same. And I  
> don't want to end up having to maintain the same code in two places.  
> I also didn't want to separate out some code into a separate library  
> and making the structure for the simple ref-bot more complicated.  
> This organization may need some more thought though.
> 
> I'll update the project pages tomorrow.
> 
> Mark
> 
> 
> 

RE: [computer-go] Re: hex robot

2008-11-27 Thread Denis fidaali


 Thanks, all, for your support and suggestions.
I'll let you know if I happen to get any success.



> Date: Thu, 27 Nov 2008 16:08:24 +0100
> From: [EMAIL PROTECTED]
> To: computer-go@computer-go.org
> Subject: Re: [computer-go] Re: hex robot
> 
> Dave Dyer wrote:
> > At 01:52 AM 11/27/2008, Denis fidaali wrote:
> >
> > ...
> >   
> >> But what really lacks (or i wasn't able to find anyway) is a strong 
> >> community like there is for go.
> >>
> >> A CGOS equivalent.
> >> A GTP equivalent.
> >> A Gogui equivalent.
> >> A Kgs equivalent.
> >> 
> >
> >
> > I don't think there's a match between your goals and what boardspace 
> > wants, but you ought to discuss cloning and adapting CGOS/GTP for
> > Hex with Don.
> >
> >
> >
> 
> This may be what you want:
> http://www.cs.ualberta.ca/~mburo/ggsa/
> 
> You may also be interested in a few messages in the game-programming forum:
> http://www.grappa.univ-lille3.fr/icga/phpBB3/viewforum.php?f=8
> 
> Rémi

RE: [computer-go] RE: hex robot

2008-11-27 Thread Denis fidaali


 Hopefully you will be proved wrong by the end of January 2009. I don't feel like I should involve anyone in this before a working prototype is out. Your skepticism is probably statistically correct, so we'll resume this conversation in two months. If my algorithm indeed doesn't work, I'll certainly stop all work on go and hex altogether.

Sincerely,
Denis FIDAALI.

> Date: Thu, 27 Nov 2008 01:57:55 -0800
> To: computer-go@computer-go.org
> From: [EMAIL PROTECTED]
> Subject: [computer-go] RE:  hex robot
> 
> 
> Permit me to play the skeptic here; I think you're going about it absolutely 
> backwards - unless you already have a strong algorithm which depends on 128 
> bit rotations, and only lack an efficient hardware engine to run it on.
> 
> If your idea of fun is to really feel the bits squishing between your toes, 
> by all means do, but I don't think it's likely you will simultaneously make 
> advances in Hex theory or practice.
> 
> 
> At 12:19 AM 11/27/2008, Denis fidaali wrote:
> 
> 
> > My current plan is to tackle directly the power of x86-64bits architecture. 
> > Because it's now quite well represented. And there is this "larrabee" 
> > project that may get out in one or two years (48 x86-64bits processors). So 
> > my true goal is to try and see what i can do with my quad Q9300 2.5Ghz 
> > running a 64 bits linux. Obviously one need a bit of work before being able 
> > to hope to achieve something with assembly.
> 
> etc...
> 

RE: [computer-go] Re: hex robot

2008-11-27 Thread Denis fidaali


 Hi.
I have put a lot of thought into that hex bot stuff. I realize now how eager I am to try my ideas with a hex bot. It's been a long time since I first realized how much more elegant it would be to try those ideas on the game of hex anyway. Your site seems a great resource for that game; when I tried to find a computer-hex community, I stumbled upon it. But what is really lacking (or I wasn't able to find it anyway) is a strong community like there is for go:

 A CGOS equivalent.
 A GTP equivalent.
 A GoGui equivalent.
 A KGS equivalent.

That made it hard for me to commit a lot of work to the hex bot. But as I said, it seems much more reasonable to me to first try my ideas on hex rather than on go. It seems that Monte Carlo is as sound for hex as it is for go, given that some of the good-playing programs are Monte Carlo ones, although it seems the strongest one is not.
So what I will do is make a prototype that suits my goals: an assembly, 64-bit, Monte Carlo prototype, with all the tricks I want to try in it. I will then provide a Java GUI for playing against one instance of the bot whenever there is hardware online willing to support it. I will use a protocol that seems promising, or make up a new one from scratch if nothing strikes me as standard. It would be logical to use the same protocol that the Computer Games Olympiad uses, but I wasn't able to figure out where to find it.

 So if all goes well, I should have a prototype available for trying punctually through a Java applet by the end of January 2009. If this prototype shows promise, then I might try to port it to a more flexible language (like Java). The name of the prototype will be Stohex.



> Date: Wed, 26 Nov 2008 13:56:19 -0800
> To: computer-go@computer-go.org
> From: [EMAIL PROTECTED]
> Subject: [computer-go] Re: hex robot
> 
> At 01:31 PM 11/26/2008, Denis fidaali wrote:
> >Speaking of hex ... I really think it would be a nice intermediary game 
> >before tackling the complexity of go. Do you know of any good community (and 
> >protocol equivalent to GTP) where i could start to look for submitting a bot 
> >?
> 
> There are a couple of pretty decent programs and some nice underlying
> theory.  Also a perfect play bot for a small board.  I would start
> at the Hex wiki page to research them.
> 
> A credible Hex bot is on my wish list for Boardspace.   The one I wrote
> is laughably weak, but it will be a significant effort to write a better
> one.  If you're willing to work in Java and within the constraints of
> a realtime web site, lets talk.  
> 

RE: [computer-go] Re: hex robot

2008-11-27 Thread Denis fidaali



 My current plan is to tackle directly the power of the x86-64 architecture, because it's now quite well represented. And there is this "Larrabee" project that may come out in one or two years (48 x86-64 processors). So my true goal is to try and see what I can do with my quad Q9300 2.5GHz running 64-bit Linux. Obviously one needs a bit of work before being able to hope to achieve something in assembly.

 Although, if there really is nothing to be done about it, I may try to translate the intended assembly algorithm brutally into Java. But it may be quite slow, especially with the heavy use of 128-bit rotations (yes, x86 has a 128-bit rotation instruction). I don't know about Java, but the C rotation idiom is not specified for at least one (or both) of the extreme rotation counts (like 0 or 32 in 32-bit mode). I don't even know how to do a 64-bit rotation in Java (apart from a time-consuming combination of 32-bit ones). So there is a chance that my translation will be quite slow. Also, if I go for Java, I probably won't try too hard to optimize, because it is not my target platform.
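
For what it's worth, Java has had a built-in 64-bit rotation since 1.5: Long.rotateLeft / Long.rotateRight, which HotSpot typically compiles down to a single rotate instruction. A 128-bit rotate can then be composed from two longs; a minimal sketch (untested against any engine):

final class Rot {
    static long rot64(long x, int n) {
        return Long.rotateLeft(x, n);   // well-defined for any n, unlike the C idiom
    }

    // Rotate the 128-bit value hi:lo left by n bits, returning {hi, lo}.
    static long[] rot128(long hi, long lo, int n) {
        n &= 127;
        if (n >= 64) { long t = hi; hi = lo; lo = t; n -= 64; }
        if (n == 0) return new long[] { hi, lo };
        long newHi = (hi << n) | (lo >>> (64 - n));
        long newLo = (lo << n) | (hi >>> (64 - n));
        return new long[] { newHi, newLo };
    }
}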

 So it would have been a hundred times more motivating for me if I could run the bot you hope for in a quad 64-bit x86 environment. But if you provide a way to (roughly) test the strength of the bot, I guess I will try it even if it's in Java, because I'm so curious about how my ideas would do with hex. Hex is so much easier to look at :) No strange "capture" factors. If you give me a strong assurance (and a nice communication protocol) that you can provide this testing, I guess I could have a bot ready by the end of January 2009 (be it in Java or in assembly). If it's really fun, I may even end up trying to optimize it a bit :) Also, I would rather work with a fixed board size ... (but once again, I'm willing to do it anyway, although I won't try to optimize too hard with variable sizing)

 So the worst scenario would be that I try my ideas and they don't work out well by the end of January: the bot is slow and a CPU eater, and it doesn't even beat yours :) But I'm really willing to try.

 I don't understand your point about the real-time web site constraint. Do you mean that you would expect to have hundreds of people playing against the bot, all instances running on the same hardware and sharing resources with all the web components? If so, I think it gets really far away from what I can be involved in. My goal is to design a bot and then implement it in assembly, running on a quad Q9300 with 8GB of memory. Ideally I would try to use most of those resources. I may try to downgrade from that, but it probably won't do to get half a cycle every three seconds within a 64KB memory frame :). It is not one hundred percent excluded that I provide a full-time server of my own (although it would probably not be running a 64-bit system, nor dedicate all its resources to the running instance). But that would be running only one instance of the bot at a time. And we would have to agree on a communication protocol (like GTP for go). An HTP (hex text protocol) is probably the way to go if people are to get serious about hex anyway. I'm sure such a protocol already exists; one would just have to agree and pick a good, easy one.


> Date: Wed, 26 Nov 2008 13:56:19 -0800
> To: computer-go@computer-go.org
> From: [EMAIL PROTECTED]
> Subject: [computer-go] Re: hex robot
> 
> At 01:31 PM 11/26/2008, Denis fidaali wrote:
> >Speaking of hex ... I really think it would be a nice intermediary game 
> >before tackling the complexity of go. Do you know of any good community (and 
> >protocol equivalent to GTP) where i could start to look for submitting a bot 
> >?
> 
> There are a couple of pretty decent programs and some nice underlying
> theory.  Also a perfect play bot for a small board.  I would start
> at the Hex wiki page to research them.
> 
> A credible Hex bot is on my wish list for Boardspace.   The one I wrote
> is laughably weak, but it will be a significant effort to write a better
> one.  If you're willing to work in Java and within the constraints of
> a realtime web site, lets talk.  
> 

RE: [computer-go] Re: flash crowd from TV

2008-11-26 Thread Denis fidaali

Speaking of hex ... I really think it would be a nice intermediate game before tackling the complexity of go. Do you know of any good community (and a protocol equivalent to GTP) where I could start looking into submitting a bot?

> Date: Wed, 26 Nov 2008 11:04:07 -0800
> To: computer-go@computer-go.org
> From: [EMAIL PROTECTED]
> Subject: [computer-go] Re: flash crowd from TV
> 
> 
> >
> >That's impressive, especially considering the fairly long search path 
> >between "Go" and "igowin".
> 
> It happens.  One day recently I was idling at boardspace.net, when
> in the course a few minutes the site was overrun by about 30 guests,
> all speaking German and wanting to play Hex.   It turned out that 
> "A Beautiful Mind" was showing on German TV.
>   
> 

[computer-go] I will build a site for the Reference Bots, Specs, and hard data concerning them.

2008-11-24 Thread Denis fidaali


 Hi.
I feel like it's a shame not to publicize anything on all this reference bot work that has been done.
 So in the coming days, I'll do my best to get a web site up and running for it.
 It will probably be plain HTML. I have absolutely no design skills, so it may be a bit flat.

I'm open to all suggestions concerning what I should put on this site.
I think it would be good to also put there as many useful web links as we can think of.
The planned URL for this project is www.computer-go.net
Obviously there will be a link to this mailing list's archives.

I'm also open to any submissions of less than 2 megabytes (assuming there won't be that many).
 You can mail them to this email address.
 I have full control of the web site, so it probably is possible to set up a git repository,
 although I have just no idea how to do that.

Thanks in advance for any suggestion/advice/.css that might help me get this project on track :)


[computer-go] RE : UCT RefBot

2008-11-21 Thread Denis fidaali


 I think most people trying Go programming will at least experiment once 
with UCT.
The first logical step is to build an AMAF bot; the next logical step is to 
build a UCT bot. That's exactly the path I followed, and I bet many others have 
done the same, so it may be guessed that many more will. I feel that the 
most important thing is to be able to be rightly confident that the 
implementation is roughly correct. It's true that an implementation can also 
serve as a basis for something else, but that will not be possible if you 
can't gain strong confidence that it is done right.

 So I guess you should keep things as simple as you can in your 
reference implementation. Little tweaks will be easy to make once the 
specification is understandable. For example, I do not think the 
"explore the pass" lines are a must-have. You can test a UCT implementation 
that never passes as long as there are "valid" moves left (valid meaning 
neither suicide nor filling a pseudo-eye). That's simple, and yet I think the 
program can play strongly enough (given that enough simulations are made, say 50,000).

 UCT has many constants built in. (By the way, I don't really understand those 
factors of 2 and 10. Wouldn't they go into the exploration factor, as *sqrt(1/5)?) 
I guess any value would be good enough, as long as it makes the 
behavior of the bot rather clear, so other people can adjust this factor and 
compare their results. If, later on, after the implementation has caught some 
attention, one value becomes known to be better, there will still be time to put 
it in. It probably won't be a big fuss to adjust anyone's implementation to it 
anyway. You have to set a conventional value that people can use as a 
reference, even a bad one. So 1.0 (or sqrt(1/5)) would be okay, I suppose.
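
To make that knob explicit, here is a minimal Java sketch of the selection
rule I have in mind; Node is a hypothetical class, and the exploration
constant c is the single conventional value discussed above (e.g. 1.0 or
Math.sqrt(1.0 / 5.0)):

    class Node {
        int visits;
        double wins;
        java.util.List<Node> children = new java.util.ArrayList<Node>();
    }

    // Minimal UCT child selection: exploration constant c is the only tunable.
    static Node selectChild(Node parent, double c) {
        Node best = null;
        double bestValue = Double.NEGATIVE_INFINITY;
        for (Node child : parent.children) {
            double value = (child.visits == 0)
                ? Double.POSITIVE_INFINITY  // try each node at least once
                : child.wins / child.visits
                  + c * Math.sqrt(Math.log(parent.visits) / child.visits);
            if (value > bestValue) {
                bestValue = value;
                best = child;
            }
        }
        return best;
    }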

 I don't think that wasting simulations in the endgame is really a problem for 
a reference implementation.

The main problem I spot is that you may need a fair number of simulations to 
get reproducible data across implementations. Not everyone will be able or 
willing to put that much computing power into it. But even then, having a 
reference specification will never be a loss. Especially once the 
AMAF reference specification starts getting its own pages (if that's not already 
the case: I have always had great trouble tracking all the links; maybe 
it would be good to make it a CGOS partner or something :) Then all people 
have to know is where to find CGOS). So the goal of the UCT reference would be 
to be presented along with the AMAF reference, with all the data that has been 
collected on how to make "sure" that an implementation's behavior is correct. 
And maybe also along with some popular boost (like the weighting for the 
AMAF ref), or a basic way of making RAVE work with it. But that will be for later.



[computer-go] win_ratio VS mean_score : any hard values ?

2008-11-03 Thread Denis fidaali

Hi there.
This kind of calls for a large study :)
 ***
 So here is the final question (read on for explanations):
 ***
 How does best_score_goal scale against win_only_goal as the number of playouts 
increases (for both, but at varying rates, with best_score_goal getting a larger 
increase at each step)?
 
***



We've seen quite a few studies on AMAF lately.
I'd now like to ask a question about the behavior of
"UCT-like" Go analysers.

First, rather than talking about Go, I'd rather talk
about the sub-game of Go that the standard light simulator explores:
+ Suicide is not allowed
+ Playing in one's own pseudo-eye is not allowed
+ Passing is not allowed unless no empty legal intersection is available.

Now here is what I call "UCT-like" behavior:
- The node that got explored the most is the "best".
- The exploration strategy is such as to achieve logarithmic regret on a fixed 
distribution.
- The engine uses information gathered at the nodes to gradually tweak the 
probability
of selecting the first moves of the playout, from "all moves are equiprobable" to 
"only the best move is considered". We discard here any consideration of 
"AMAF" or any other fancy stuff.

 I tried to frame this definition so that any plain UCT bot using 
standard light playouts would conform. Any epsilon-greedy exploration 
strategy would also conform.
 
 
 
 Here are my questions:
 ---
 
 First, how do the different parameters affect scalability (both 
against gnugo level 1 and against other conforming engines)? What is the shape of the 
effect of each of those parameters, as long as we still have a conforming bot? For 
example, is a basic epsilon-greedy bot really different from a UCT bot in its 
efficiency?
 
 Now here is the real question: how do win_only_goal and 
best_score_goal compare to each other in terms of games won? It is obviously 
easier to say "this game is won for Black" than "Black 
can win this game by up to 4.5 points, no more, assuming perfect play by both 
players".
 
 By win_only_goal, I mean that the bot only tries to assess the best 
move with regard to winning the game at a fixed komi.
 By best_score_goal, I mean that the bot dynamically tries to find the 
absolute best move with regard to the theoretical min/max score.
 
 I think it is obvious that if a position is won, say for Black, then assuming 
enough playouts are made, both win_only_goal and best_score_goal will give a 
 sure win.
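
 For clarity, here is a tiny Java sketch of the two evaluations being
compared, assuming a playout that returns Black's raw score margin; the
names are made up for illustration:

    // win_only_goal: back up 1.0 for a Black win at the fixed komi, else 0.0.
    static double winOnlyGoal(double blackMargin, double komi) {
        return (blackMargin - komi > 0) ? 1.0 : 0.0;
    }

    // best_score_goal: back up the raw margin itself, so the engine
    // maximizes the expected score instead of the expected win rate.
    static double bestScoreGoal(double blackMargin) {
        return blackMargin;
    }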
 
 ***
 So here is the final question once more:
 ***
 How does best_score_goal scale against win_only_goal as the number of playouts 
increases (for both, but at varying rates, with best_score_goal getting a larger 
increase at each step)?
 
***


[computer-go] RE: My idea for a non-regression board-implementation graphical tool.

2008-10-23 Thread Denis fidaali

GoGui has proven its usefulness.
I currently mainly use it for GTP testing and for TwoGTP measurements.
At some point I implemented the analyze commands; however, I remember
it as a very time-consuming task, and it wasn't very useful to me.
Rather than just pointing randomly at GoGui, I'd rather you point me at either
a tutorial or documentation that specifically targets the needs I want 
covered.


The need I want covered is: 
*
A TOOL FOR UNIT AND NON-REGRESSION TESTING OF A BOARD IMPLEMENTATION.

I want it to be really simple to implement, and I want it to enable iterative
testing. I want it such that, in 30 minutes at most, you can both learn
how to use the tool and implement any specific code you need
to use it.

For example, I'd first like to be able to debug the board implementation
before the capture detection is completed.
Classically I do the following:
- Specify roughly the data structure
- Implement the basic things to get a board with black, white, empty (optionally 
edge)
- Implement (and heavily debug) the capture detection
- Implement the eye detection
- Implement the ko detection

- At any point, implement a way to get an empty intersection randomly 
suggested.
Sometimes I do this very late, sometimes very early.

- Then implement some more stuff to make standard light playouts 
possible
- Implement a full-random GTP bot version
- Then implement an AMAF GTP version

What I want here is a tool that I can use BEFORE the first full-random GTP bot.
A flexible tool that I can also use to build non-regression board tests, 
and to archive/load them as appropriate,
both for the same and for very different board implementations.


--
Now, if GoGui covers exactly those needs, how does it comply with the 
30-minute window?
Who can help me point out where to look to learn those particular 
features that cover my needs?
What can be done to make it even more worthy for all board-implementation 
debugging needs?

For now, I have rejected GoGui as a board-implementation debugging tool. I use 
GoGui only as a GTP bot testing tool.




[computer-go] My idea for a non-regression board-implementation graphical tool.

2008-10-23 Thread Denis fidaali




 Hi. I would like to build a graphical regression tool for board 
implementations.
 I try to set things up in this document. All suggestions/questions are welcome.
 
 It is first targeted at myself. Initially it will serve as a proof builder,
 to be distributed with my first attempt at implementing Don's standard. 
 This implementation of Don's standard will be targeted at readability.
 I realize that the standard for unit testing and non-regression is
 very high on this list. I don't feel like I can keep up with such
 a standard unless I have a generic tool that makes the task easier.
 
 So my project is to build a Java graphical utility to help reduce the time
 needed to create (and test) a new board implementation from scratch. It will 
use a communication
 protocol that is as simple as possible as an abstraction layer between the 
graphical tool
 and the board implementation. The end goal is to be able to share a non-
regression
 test suite between board implementations.
 
 
 There are two "actors" : 
 
 - the board-implementation (BI)
 - the graphical non regression tool (GRT)
 
 
 BI : It represents all the data and function for managing the board-state. It 
only responds to messages.
 GRT : it's a graphical tool that communicate with the BI as a controler. It 
sends requests to the BI,
 and interpret the answers as is needed.
 The developpement will be iterative. My goal here is to share the 
specification of the first iteration.
 The code name of this first iteration is GRT_V1_it1. It is closely associated 
to my attempt to
 comply with don's specification in the most readable possible way (code name 
BI_V1_it1)
 
 --
 There are currently two projects to be specified:
 --
 - GRT_V1_it1: first iteration of the GRT project, version 1.
 - BI_V1_it1: a board implementation counterpart to GRT_V1_it1 (complying with 
Don's standard)
 
 
 ++
 GRT_V1_it1 specification:
 ++
  In the first iteration, the GRT must be able to do the following:
  + Set up the client BI to be connected (a la GoGui with GTP clients)
  + Propose to the user a list of data types to visualize at any time (I call 
these VectKeys)
  + Propose to the user a list of actions to send to the board implementation (I 
call these ActionKeys)
    - A typical action would be: "play a stone at x,y"
  + A simple click must be enough to send an action. A parameter will be 
generated, representing the position
    - Typically, "play a stone at" is the action and (x,y) is the parameter 
representing the stone to play


 
 GRT protocol (V1, iteration 1)
 
 In the first iteration:
 - the GRT side only sends "requests"
 - the BI side only sends "responses"
 
 There are two data types from the communication point of view:
 - element: can be viewed as a string
 - list: can be viewed as a succession of elements (a list of strings)
 
 Requests: 
 + VectKey_list {request the list of VectKeys the BI can handle}
 + Action_list {request the list of Actions the BI can handle}
 + Vect_demand Parameter1(VectKey) {request the data associated with the 
VectKey given as argument}
 + Action_board Parameter1(Action) Parameter2(position) {request the BI to 
commit the Action at the position}
 
 Responses:
 + VectKey_list Parameter1(list of VectKeys) { send back the list of supported 
VectKeys }
 + Action_list Parameter1(list of Actions) { send back the list of supported 
Actions }
 + Vect_response { .. see details below .. }
 
 --
  The main difficulty is specifying how to send back the data representing the 
internal state of the board implementation.
  It has to be both standard and flexible. I usually have three types of data 
in the internal state:
  + Global data (example: the player to move, the current score 
evaluation)
  + Per-intersection data without edges (example: the black and white stones 
on the board, without taking the edges into account)
  + Per-intersection data with edges (example: the black and white stones, the 
empty intersections, the edge intersections)
  
  Then there is the problem of the board size. In the first iteration, it is 
the responsibility of the BI alone to know
  the board size.
  
  Here is how I chose to represent a Vect_response:
  Prequel_size (an integer)
  Postquel_size (an integer)
  Square_size (the usual board size, to be multiplied by itself; example: 9 for 
9x9)
  data: a list of elements
  
  Prequel_size=n indicates that the first n elements should be "ignored" (or 
considered global data)
  Postquel_size=n indicates that the last n elements should be "ignored" (or 
considered global data)
  Square_size makes it possible to format correctly the elements between the 
prequel and the postquel
  
  Example (for a 3x3 board)
  Prequ
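
  To make the framing concrete, here is a hedged Java sketch of decoding a
  Vect_response; the wire format (a flat list of strings whose first three
  entries are Prequel_size, Postquel_size and Square_size) is my assumption
  for illustration:

    // Decode a Vect_response: the data list holds [prequel globals]
    // [n*n per-intersection elements][postquel globals].
    static String[][] decodeVectResponse(java.util.List<String> msg) {
        int prequel = Integer.parseInt(msg.get(0));
        int postquel = Integer.parseInt(msg.get(1)); // trailing globals, ignored here
        int n = Integer.parseInt(msg.get(2));        // e.g. 9 for a 9x9 board
        java.util.List<String> data = msg.subList(3, msg.size());
        String[][] grid = new String[n][n];
        for (int i = 0; i < n * n; i++) {
            grid[i / n][i % n] = data.get(prequel + i);
        }
        return grid;
    }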

[computer-go] From zero to playing on CGOS in 10 minutes

2008-10-22 Thread Denis fidaali

[computer-go] From zero to playing on CGOS in 10 minutes
---
Don Dailey drdailey at cox.net
---
For one thing, komi is different. I used 0.5 for running this test.
I would have used 0.0, but some implementations don't like even komis.
- Don


--
--
The 114 length is worrisome. Have you tested it against one of Don's reference 
bots?

(By the way, I have great trouble managing all the links and URLs everyone gives 
all the time :) Is there a way
to collect them so that I can bookmark one place and have access to all of them? A 
wiki page connected with this list, maybe?)

Currently my own implementation seems to have the komi wrong somehow. It seems 
I give one point more to Black :)
My numbers for Black wins are a bit higher than Don's. I'll have to investigate, 
I guess.



[computer-go] Re: AMAF (from daren)

2008-10-22 Thread Denis fidaali

--
Darren Cook darren at dcook.org
--

(I'd like to hijack Denis's mail; I've changed the subject)

> My tests shows that even with as few as 10 amaf simulation per move,
> the win rate against the pure-random strategy is almost 100%.

I'd thought people were saying that AMAF only helped with weak bots.
Once the playouts were cleverer, and/or you were doing a lot of them, it
didn't help, or even became a liability.

But that is just an impression I'd picked up from day to day reading of
this list; and at CG2008 people were still talking about AMAF in their
papers (*). Can someone with a strong bot confirm or deny that AMAF is
still useful?

*: But those papers may have been comparing programs with relatively few
playouts (i.e. weak programs), due to the excessive CPU cycles needed to
statistically-significantly compare programs with more playouts.

---
Answer: 
---
My results suggest that you have to be prudent whenever using AMAF.
Still, I think MoGo used it for a long time, calling it RAVE.
(I guess you'd have to look for a paper :) I never really tried
to understand what exactly was going on there.)

My personal tests were exclusively with light simulations. I used AMAF
to bias the score in the tree phase, switching gradually to a more
standard node-scoring policy. The results I had were great at low simulation
counts. (I never really tried to make the thing scale,
as I wanted something quick to test.)
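
Something like the following Java sketch conveys the idea of that gradual
switch; the fade-out schedule (the 10.0 constant) is only an assumption for
illustration, not necessarily what my engine did:

    // Blend AMAF statistics with true statistics, fading from the AMAF
    // rate toward the real win rate as a node accumulates visits.
    static double blendedValue(double wins, int visits,
                               double amafWins, int amafVisits) {
        double trueRate = (visits > 0) ? wins / visits : 0.5;
        double amafRate = (amafVisits > 0) ? amafWins / amafVisits : 0.5;
        double beta = amafVisits / (amafVisits + 10.0 * visits + 1e-9);
        return beta * amafRate + (1.0 - beta) * trueRate;
    }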
  Zoe used heavy playouts (a project by Ivan Dubois that I was
distantly involved in). Using AMAF for hard pruning showed impressive results. 
Zoe was about 2000 on CGOS, although it had some scaling trouble. It used neither
pondering nor Zobrist hashing. I think it remains to be shown what
Zobrist hashing really does for you in the context of Go bots.

So I really think AMAF is still promising, although you have to be careful
about what you try to do with it. And you can't avoid a lot of testing, including 
thorough
scalability testing, before trying to conclude anything.
 However, I experienced wide issues with scaling. Proving the scaling property is 
very, very CPU-consuming,
but it can hardly be avoided before stating that something works (or doesn't, for 
that matter).
For example, the MoGo team reported that with really huge numbers of simulations, 
they suddenly wanted
more randomness in the playouts. That makes one wonder whether light playouts 
wouldn't suddenly become efficient
at really high numbers of simulations. It probably won't be known until we 
get more powerful hardware :)
 That's why I grew so interested in the "per light-playout" efficiency study: 
it was CPU-friendly. That's also why
I believe that having a really fast light-playout implementation could be 
something good. But it would have to be
rock-solid. That's why I'm so happy with the "standard light playout" project. 
You can get excellent confidence
that your conforming implementation is correct. (You can even test 
non-conforming ones, for that matter :) )


--
Darren Cook darren at dcook.org
--

> I used the following strategy ... though i have no hard-data to give
> you, it gave me a fair result.

Another thing I've picked up from this list is that when you get that
hard, statistically significant data you can frequently be surprised ;-).

Darren
---
Answer: 
---
I think I got the < sign wrong anyway :)

I did a lot of testing, BUT I lost the data :)
Besides, I would really have trouble figuring out what the thing exactly did
at the time, but I tried to give a simple description of something close enough.
 That's true, of course. You need a lot of tests before your statement has any 
strength.
I usually go for 1000 games before assuming I get an approximate feeling 
of how things go. 
Don seems to feel that 100,000 is more comfortable :)

 I think hard data is really what matters for modern Go programming. The more 
you have, the more confident you can be
about your results. You then have to cross-check with somebody else's 
data. It's really hard to get convinced
that a bot implementation conforms to what you think it should do. Ideas 
are of little value by themselves;
 measures of them are the real treasure. It can hardly be done by a lonely 
team with high enough confidence
that all is as it should be. I have heard so many reports of something that 
proved to be wrong, although it was thoroughly tested
by one team. I have been in this situation many times myself :) The main 
issue being bugs.

[computer-go] survey : Debuging a board implementation.

2008-10-22 Thread Denis fidaali


 Hi.

-
WHAT TOOLS DO YOU USE FOR DEBUGGING BOARD IMPLEMENTATIONS ?
-




-
Here is how I do it:
-

I have made quite a few board implementations in the past years.
Initially I used a custom-made, dirty Java Swing visualizer. It was very
useful to get started, as we had the goal of reaching the 20k simulations/s mark.
We used structures that were not very human-readable, so the visualizer was very 
helpful
for testing purposes. It enabled me to easily visualize
different views of the board state.

Later on, when switching to C and assembly, I fell back on ASCII output. I 
would then scroll to find something worth finding.
When working with those ASCII outputs, I had the most success doing it in 
two phases.
In the first phase I used a simple fixed sequence to verify that the structures were 
still coherent at the end of it.

+*+*+
+*+*+
+*.*+


This shape is built up, move after move, positioned on an edge.
Then Black plays to capture the two White stones inside the shape,
then White plays inside the eye to capture Black,
then Black captures the White stone again.

+ represents White
* represents Black

Then I simply make random moves and verify that all is going well 
(especially capture detection).
I usually deactivate the eye-filling interdiction for this phase; it enables the 
board to loop a few times. I then
make 1000 moves and check that everything is fine, even after the board 
has been wiped out a
few times.
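
The second phase could look like this minimal Java sketch, where Board and
its isLegal/play/checkInvariants methods are hypothetical stand-ins for the
implementation under test:

    // Random soak phase: make many random moves and re-check the board's
    // internal coherence after each one.
    static void randomSoakTest(Board board, int moves, java.util.Random rng) {
        for (int i = 0; i < moves; i++) {
            int point = rng.nextInt(board.numPoints());
            if (board.isLegal(point)) {
                board.play(point);
            }
            // e.g. recompute pseudo-liberties from scratch and compare
            board.checkInvariants();
        }
    }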


-
WHAT TOOLS DO YOU USE FOR DEBUGGING BOARD IMPLEMENTATIONS ?
-


[computer-go] Request for : Recursive Simulation Data + Tree decision standard

2008-10-22 Thread Denis fidaali

Hi.


--
 Recursive Simulation Data

Has anyone hard data on the performance of a recursive simulator bot?
(What the hell does that mean? :) )

You would use, for example, an AMAF strategy to make up a simulation of how the 
game may go
(rather than doing a very fast, nearly 100% random one).

[
My tests show that even with as few as 10 AMAF simulations per move, the win 
rate against
the pure random strategy is almost 100%. (There is nothing recursive about 
those results.)
]

 So I really wonder how an engine using this simulation policy would behave:
 for each move, you select one of the moves with the highest AMAF score after 10 
simulations. 
If all simulations give the same result
(Black wins, or Black loses), then the game is over (the winner being whoever is 
shown by the simulations).
Obviously you can use more simulations per move too. (A sketch follows below.)
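
As a hedged Java sketch (Board, AmafResult and amafSearch are hypothetical
helpers wrapping the k light playouts and their AMAF statistics):

    // One "recursive simulation": each move is picked by a small AMAF
    // search of k light playouts; stop early once every playout agrees.
    static int recursiveSimulation(Board board, int k, java.util.Random rng) {
        while (!board.gameOver()) {
            AmafResult r = amafSearch(board, k, rng);
            if (r.allPlayoutsSameWinner()) {
                return r.winner();        // unanimous: score the game now
            }
            board.play(r.bestAmafMove()); // a highest-AMAF-score move
        }
        return board.winner();
    }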

 This idea is very basic, so I would like to have data for the following cases:

 - Using AMAF on top of the "recursive simulations"
 - Using a well-known algorithm (like UCT) with those recursive simulations.

The reason it may not have been tried much is that it would obviously consume 
a lot of simulations (and thus be inefficient).
But I think it would be really interesting to have that data. (Which I probably 
should be working on myself, but well, maybe someone
has already done that particular study.) Monte Carlo is well known to be 
inefficient anyway :)

--
Tree decision standard


Now that we have a "standard light policy", maybe we should get our hands dirty 
with a standard tree-decision algorithm
based on that "standard light policy". This basic algorithm is known 
to scale well, although it is inefficient when
not using AMAF or RAVE or whatever. Still, it's something that almost everybody 
has done... so it's standard.

I would also like to discuss non-deterministic alternatives to UCT, as 
multi-threading would then be more natural.
So it's about a "standard tree-decision algorithm that is multi-threading 
friendly".


I used a simple stochastic alternative to UCT with good results.
+ Let nmove be the number of legal moves from the node.
+ Let best_nvisit_child be the number of visits to the "best" child of this node.

 I used the following strategy: 
draw a number R between 0 and best_nvisit_child + nmove. If R < best_nvisit_child, 
then explore the best move; otherwise pick another one at random.
I think it's an epsilon-greedy equivalent. I didn't try to tune it. But it's 
stochastic, and though I have no hard data to give you, it gave me a fair 
result.
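
As a hedged Java sketch (Node and its accessors are hypothetical; the
comparison is oriented so that the best child is picked with probability
best_nvisit_child / (best_nvisit_child + nmove)):

    // Stochastic alternative to UCT: the more the best child has been
    // visited, the more often it is chosen; the "explore" branch decays
    // like nMoves / (bestVisits + nMoves).
    static int selectMove(Node node, java.util.Random rng) {
        int nMoves = node.legalMoveCount();
        int bestVisits = node.bestChildVisits();
        int r = rng.nextInt(bestVisits + nMoves);
        if (r < bestVisits) {
            return node.bestMove();       // exploit the current best child
        }
        return node.randomOtherMove(rng); // explore another move at random
    }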

I made a bot that scored 30% wins against a version of GNU Go level 0, using this 
tree exploration strategy and 1000 light simulations. This bot, however, had a 
lot of experimental stuff in it,
aimed at optimizing the per-light-simulation performance. Its scaling was very 
limited as the number of simulations increased. 
(Something we easily run into when messing with AMAF :) )

cheers.


[computer-go] RE: jref, cref

2008-10-21 Thread Denis fidaali

[computer-go] jref, cref
623022 Jref-081016-2k 1330  191  2008-10-21 00:55:13 
623023 Cref-081020-2k 1330?  27  2008-10-21 00:55:13

-
Cheers for the great work :)
-

I just came back from a trip. 
I suddenly realize how hard and time-consuming it is to follow a conversation 
down there :)

I think I'll try to provide a conforming Java version as soon as you get the HTTP 
page done :) (with the spec on it)
Then it'll be easy to make sure it is indeed conforming. (Although I'm not 
very happy about the superko check;
maybe I'll skip it and just pretend it is there, as that probably won't be 
noticed by the black-box test.)

One difficulty is that I integrated a framework for a project that
I don't really want to make public yet. Although it has probably been explored 
already; I just don't know :p (The simple playouts generate data for this project.)

Right now my priority is to set up my alpha-beta experiment right, which 
appears to be time-consuming.
I don't know how and why I'm so slow at programming :) It's only a 
straightforward alpha-beta lookup.

I really look forward to the HTTP page, and I'll try my best to participate :)



[computer-go] Automated external light simulation validation.

2008-10-15 Thread Denis fidaali

--
Don Said
--
But first of all, thank you for you generous praise, I probably don't
deserve it (but I'll take what I can get :-)

+
Response :
+
It's hard to argue with that :)
Still, you have been a constant presence
on this list. As I recall, you may
well be the person with the
top posts-per-day score :)
It doesn't automatically mean that all
is good, but still, it gives
a feeling of security. If you are involved, we know
that the project can easily withstand the test of time :)
(if it's worthy enough, that is). That may be why I so much
want you to feel good about it :) Still, I probably mean all
that I said.

--
Don Said
--
An external tester will test for conformance and it will compare 2 bots,
one of which we "trust" as being conforming.   But the tester will not
be deterministic, it will throw random positions at the bots so that a
black box author cannot present it with something that has hard coded
answers.   Also, I don't want it to be tuned for any given position.  

+
Response :
+
I definitely agree with all of that. I guess that rather than just yes or no,
the test will output a probability of conformance :)

--
Don Said
--
I envision that you might be able to seed the tester with a random
number in order to duplicate the testing conditions and if this becomes
structured enough the "official" test would use a hidden standard seed.
This would be required so that different programs are not presented with
a different series of tests.  

+
Response :
+
 I'm not sure I really get the point. Or are you thinking past the
"light playout" contest? What's the problem with programs being
presented with different series of tests?

--
Don Said
--
GTP is pretty much a necessity and is also very much a standard.  It's
necessary for external game playing and testing and for having an
external tester.  It doesn't make sense to produce a different system.
We could make it spit out numbers that you key in to a spreadsheet of
some kind or you could do the math manually, but this becomes unwieldy
if the test is to be very sophisticated.   
+
Response :
+
I agree that we need the program
to communicate with the outside world.
I agree that GTP is the standard. I don't really think
we should avoid it.
I disagree if this means we shouldn't try to think out
a system that would help people take advantage
of this standard in a faster and easier way. That does not mean we should
 indeed implement anything, except if it's truly worth
it. Still, my opinion is that there is room for discussion here.

--
Don Said
--
In fact, this is the whole "raison d'être" (reason for being) of GTP,
for communicating with programs. 
+
Response :
+
I'm French, by the way :p
Well, I disagree. That's why GTP is 
standard: it allows communicating
with programs. It is a protocol for 
communicating with programs.
Still, it was engineered for the needs
of GNU Go (in particular regression testing, I think). 

When I look at my code, I use a sort
of home-made communication standard.
I use it as a set of function calls.
And it's engineered to be a whole lot
easier than outputting directly in the GTP
format.

For example, when I output a move,
I just output a number, not a vertex per se.
I have the feeling that a lot of people would
use a representation like the one I use, where
you give a number to each intersection,
for example from 1 to 81. I use 0 as the
pass message and -1 as the resign message.

It allows me to concentrate more on what is
meaningful... then I have a layer that translates
all that into effective GTP commands. Now it may
be (or not) that this 1..81 representation does indeed reduce
the thinking and testing time for interacting with GTP.
It does for me. So I'd be happy to know whether it would for others
too. I know that it was not very enjoyable for me to spend
so much time on the GTP part. What I would like is to come
up with a "better" way of representing things, one that would be easier
and more natural to implement. I have done this very informally
on my own systems. Still, it needs a lot more tuning and criticizing.

So the plan would be to do exactly what I do: propose to the contenders
a way of handling messages and responses that is easier to implement, then
add a little external module to translate those into well-formed GTP commands
(a sketch of that translation follows below).

Suppose you have a GTP server, let's say GOGUI. Then you woul
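
For the 1..81 representation above, the translation layer could look like
this hedged Java sketch (GTP column letters skip 'I'; treating -1 as the
usual "resign" answer to genmove is my convention, not a GTP vertex):

    // Translate an internal move number (1..N*N, 0 = pass, -1 = resign)
    // into the text a GTP layer would emit.
    static String toGtpMove(int move, int boardSize) {
        if (move == -1) return "resign";
        if (move == 0)  return "pass";
        int index = move - 1;
        int col = index % boardSize;               // 0-based column
        int row = index / boardSize + 1;           // 1-based row
        char colLetter = "ABCDEFGHJKLMNOPQRSTUVWXYZ".charAt(col);
        return "" + colLetter + row;               // e.g. 1 -> "A1", 81 -> "J9"
    }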

[computer-go] re : java reference bot

2008-10-14 Thread Denis fidaali

---
Don said
---
I don't really want to be too involved in drafting this as it's not my
forte.  However I hope someone else (who is a better writer) will be
willing.  

+++ Answer +++
+++
What you did up to now has great value. It gives us a strong basis
to build on. I think that the more it pleases you, the better it
will be able to serve as this basis. We have to help you,
because it's just too much to ask you to do all the hard work.
But you can't say that you are not skillful; that doesn't seem
true to me. Making up a draft all alone is probably not a task
anyone in their right mind feels comfortable with. But if enough people
help, it becomes doable :) What I like is that you seem quite
experienced, and so you easily get to the point and sort out the
issues. If things are to be kept on track, we most certainly need
someone able to do just that. And it seems to me that
you're sort of great for this :) Which by no means should result
in you doing all the work. Arbitrating what should and what
shouldn't be in the draft seems to be your forte :)



---
Don said
---
> 
> We'll probably have to get a bit deeper in the gtp-part ultimately.

Do you mean the explanation or the actual commands I added?   For the
explanation we could point them to the excellent GTP draft standard and
then explain this as 2 additional commands and then clarify what I
wrote.

I made only 2 GTP commands, as I wanted to be careful not to add anything
that would likely impose a "barrier" to implementation, and this
influenced the specification in many places too.  

+++ Answer 1 +++

The ref-nodes command returns
the number of moves that were made during the N simulations?

Another GTP command setting up N? 

So I guess we strongly enforce the number of simulations, then? For example, my 
N-thread implementation
doesn't quite guarantee that it won't do a few more simulations than it should.
Still, for most N big enough, it doesn't really change the average results...
but if you take the number of moves with the aim of dividing it by exactly N,
it won't give the right result... unless I calculate what the number of moves
should have been :)
What I like about the average number of moves is that it gives strong evidence
that the implementation is correct. But I don't really see it as an
absolute need for the black-box testing.

I propose that we use both the match against the reference bot (50% win/loss 
being the right value there,
for say 2000 games... the number of simulations unspecified, but the same 
for the reference bot and the bot under test),
and, to make sure, a command for getting the value of each 
legal move in a position.
If the bot passes those two tests, with all sound parameters, then it is valid. 
An implementation can't use
fancy tricks to get those two tests right, because we can come up with an 
infinity of positions to score,
and we can use different values for the number of simulations. So it gives both 
a lot of freedom in how
you implement it, and a really strong way of determining whether a bot conforms.


+++ Answer 2 +++

I really like the way you try (we all try) to zoom in
on what's really most meaningful. I think
we almost have it right now, as far as the 
board implementation is concerned (and the Monte Carlo & AMAF).

But GTP is also an important part. I would really have liked
not to have to take GTP into account :) But my bots wouldn't
have been able to interact with the outside world then.
Still, I think we have to make it as easy as possible. I guess
it's probably sound to just point to the GTP specification. 
Personally, I feel quite uncomfortable asking someone
to read the GTP specification, picking up all by himself
which commands to implement and which are not worthy.
I think nearly as much time will be spent on the GTP part
as on the first light-simulation part. I feel uncomfortable with that.

My personal feeling is that it would be best to provide an intermediate
layer more friendly to simple AMAF-only bots. That would mean making
the protocol part as simple as it can be. It would be consistent
with what we are trying to do with the AMAF-decision part... but it means
a lot more work for us. For one thing, we would have to provide a translator
to act as an intermediary between the simple AMAF implementation
and a true GTP protocol implementation. I think Java would make it easy to get
a cross-platform translator to encapsulate the target simple-AMAF program.
But then we would also have to devise a protocol that is indeed easier
to implement and more natural for the simple AMAF implementation. I have
been thinking about that for a long time, and I may have ideas. But once again,
it means quite some work and effort.

(GTP is now hidden to me by layers and layers of helpers)
here is the p

[computer-go] java reference bot

2008-10-14 Thread Denis fidaali


 Hi.
 First things first: I think the specification is enough as it is.
 I hope we can end up with something usable by anyone,
 even non-Go people, as a final goal. We'll have to get people without Go 
 experience to beta-read it (and criticize it) for us, I suppose. And we'll probably
 have to get an HTML version with some fancy illustrations (which I won't be 
helpful for)
 
 I really look forward to being able to involve non-Go people easily :)
 I'm pretty sure a lot of them accepting this contest would end up
 being very valuable for the community :)

We'll probably have to go a bit deeper into the GTP part ultimately.
-- 

===
   5.  Scoring is Chinese scoring.  When a play-out completes, the 
  score is taken accounting for komi and statistics are kept.  
  
I think I would like it if we just spelled out how it should be done,
using the eye definition we impose anyway.

-
 I propose: 
-
  5. Scoring is done at the end of a game (two consecutive passes),
   in the following way:
   each stone on the board gives a point to the player who
   owns it. An empty intersection gives a point to the player (if any) who
   has a stone on each orthogonal intersection around it.
   If Black's score is greater than 0.0, then it is scored as a Black win; 
otherwise
   it is scored as a White win.
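
 A minimal Java sketch of that rule follows (Board and its accessors are
hypothetical; subtracting komi before the 0.0 test is an assumption here):

    // Score a finished game per the proposal: a stone scores for its owner;
    // an empty point scores for the player owning all orthogonal neighbours.
    // soleOrthogonalOwner() is an assumed helper returning BLACK, WHITE,
    // or EMPTY when neither player qualifies.
    static boolean blackWins(Board b, double komi) {
        double black = 0, white = 0;
        for (int p = 0; p < b.numPoints(); p++) {
            int c = b.colorAt(p);
            if (c == Board.BLACK) black++;
            else if (c == Board.WHITE) white++;
            else {
                int owner = b.soleOrthogonalOwner(p);
                if (owner == Board.BLACK) black++;
                else if (owner == Board.WHITE) white++;
            }
        }
        return black - white - komi > 0.0;
    }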
 
 
 ===
   1. Must be able to play complete games for comprehensive conformity
 testing.
 
 I do not quite understand the point. But it can't really hurt either .. :)
 
 ===
   2. In the play-out phase, the moves must be chosen in a "uniformly
 random" way between legal moves that do not fill 1 point eyes and
 obey the simple-ko restriction.

 When a move in the play-out is not possible, a pass is given.
 
I'd like us to be more descriptive about the simple-ko restriction, if possible.
(I'll try to propose something, but I'm running low on time right now.)

===
  3. Play-outs stop after 2 consecutive pass moves, OR when N*N*3
 moves have been completed, except that at least 1 move gets tried
 where N is the size of the board.  So if the board is 9x9 then
 the game is stopped after 9*9*3 = 81*3 = 243 move assuming at
 least one move has been tried in the play-outs.
 
I don't quite get the point of the "except that at least 1 move gets tried"
part

===
   ref-nodes -> return total moves executed in play-outs
   (including both pass moves at end of each
   play-out.)

   ref-score -> return total win fraction for black.  
   
I do not find "ref-nodes" very descriptive for "return total moves executed 
in play-outs".
Maybe it is quite standard to call that number ref-nodes? As it's only AMAF,
there are no nodes per se, are there? What about a ref-numberOfMoves command?
--
Here are the data you requested, for the implementation I want
to use as a reference. I'm not able to get values for integer komi.
(My system does not account for draws.)

+++
Komi 0.5
mean score =0.5244261847821416
+++
komi 5.5
mean score =0.44674397181685754
+++
Komi 6.5
mean score =0.4467712426921182
+++
Komi 7.5
mean score =0.42132957622630574
+++
87317.15777498983 Playout/sec
Time=11.461321297
Number of playout=1000770.0
Mean moves per sim 111.06128680915695
+++

It uses the Mersenne Twister for random generation.
But this is a 4-thread test. I use nanoTime, combined with the thread
number, as a help to set up the per-thread seed.

I think it's interesting to give the playouts/sec score.
Then your reference bot "can" be used as a reference benchmark.
That is not perfect of course, but it gives something to chew on.
(I get quite a large variation in speed from run to run
with 1,000,000 simulations, ranging from less than 80k/s to close to 90k/s
with 4 threads over my 4 cores.)

My implementation is in Java too, and has nothing fancy to it,
so I might as well publish it later on. I should probably clean it
up a bit and make a few optimizations (by refactoring).




--
Don said :
--
I made a reference bot and I want someone(s) to help me check it out
with equivalent data from their own program.  There are no guarantees
that I have this correct of course.

Doing 1 million play-outs from the opening position I get the following
numbers for various komi:

  playouts:1,000,000
  komi:5.5
 moves:  111,030,705
 score:  0.445677

  playouts:1,000,000
  komi:6.0
 moves:  111,066,273
 score:  0.446729

  playouts:1,000,000
 

[computer-go] Anyone With Measures for MiniMax & LightSimulations

2008-10-12 Thread Denis fidaali


 Hi. I recall that Don ran some experiments with min-max (alpha-beta) 
algorithms a while ago. 
So I wonder whether anyone has hard data about "min-max"-equivalent algorithms 
using "winrate" and "light simulations".
I'd like to know what parameters were used back then, and as much of the actual 
experimental results as possible
(games against GNU Go, scaling, games against AMAF bots, CGOS rating, etc., as 
long as it concerns min-max & light playouts).
Thank you in advance.

I have made a few tests with AMAF & min-max in the past few days, all giving 
terrible results. My assumption is that my implementations were faulty.
So I have to design a path for getting it right somehow... Any hard data would 
help me a lot in knowing how close I am to having it right.


[computer-go] simple MC reference bot and specification

2008-10-12 Thread Denis fidaali

--
Don SAID :
--

David Fotland suggested another rule,  tell me what you think.

His rule is to stop the game at or beyond move N*N*2 where N is the
board size and score the board no matter what.  But in the play-outs he
first plays 1 move before making this test.  

If a game actually lasts that long,  the play-outs are not worth much
but the game is probably over anyway and the board gets scored as is.

My own implementation depends on there being no eyes bigger than 1
point, because it's fast and easy to score a board that has only 1 point
eyes.   But a simple modification makes it work without any appreciable
speed sacrifice:   If a game ends beyond move N*N*2 then use the slow
scoring routine that doesn't assume the board is complete.   It should
be very rare that this routine has to be used.

- Don

-
Answer: 
-
I like the principle; however, the value you propose seems a bit low...
That would mean 81*2 = 162 moves for 9x9, right?

I traced the histogram of game lengths, counting passes:
on the left, the length of the game;
on the right, the number of games that fall into that category.
I think you can then use a higher value, like N*N*3, which may work for larger 
boards as well.
(Maybe I should try to get the distribution for 19x19? :) )

--
There are two reasons why I do not like "counting one-stone captures".
The first one is that I'm not 100% sure it will set things right. Obviously,
limiting the game size can hardly have pathological behavior to it,
in terms of missing an infinite-loop case.

The second thing is that at one point I tried to implement a
board implementation with bitmaps. I don't remember all the details,
so maybe I'm wrong, but I'm quite sure I had a way of not playing
the ko wrong without counting the number of stones captured.
Anyway, the implementation at that time didn't show much promise;
it was like 3k sims/sec, while the goal at the time was to find a way
to produce between 100k and 200k simulations/sec per processor.
Anyway, I do not think I really exhausted all the possibilities with
bitmaps, and I want to give it another shot in the future.

Then, when considering the size problem, I think it is definitely easier
to get an implementation that is 9x9-tuned with bitmaps :)

So my suggestion is to aim the black-box test at 9x9.
It may also be a bit easier to give hard values to help people
make sure they are bug-free before they try the black-box
test suite.
--


FOR 9X9 :
mean score =2.164072737304306
85775.89950308551 Playout/sec
Time=11.663847372
Number of playout=1000477.0
Mean moves per sim 111.06255716023456

Game length : number of games in that category
74 : 1
75 : 5
76 : 14
77 : 31
78 : 82
79 : 159
80 : 333
81 : 596
82 : 1063
83 : 1676
84 : 2352
85 : 3529
86 : 4708
87 : 6323
88 : 7840
89 : 9899
90 : 11773
91 : 14314
92 : 16021
93 : 18832
94 : 20376
95 : 22754
96 : 24179
97 : 26590
98 : 27447
99 : 29852
100 : 29532
101 : 31527
102 : 30818
103 : 32565
104 : 3
105 : 31795
106 : 29896
107 : 30761
108 : 28561
109 : 28786
110 : 26408
111 : 26316
112 : 23620
113 : 22981
114 : 20407
115 : 20034
116 : 17571
117 : 16794
118 : 14478
119 : 13761
120 : 11977
121 : 10898
122 : 9505
123 : 8833
124 : 7626
125 : 7494
126 : 6306
127 : 6766
128 : 5661
129 : 7020
130 : 5862
131 : 7856
132 : 6427
133 : 9115
134 : 7349
135 : 10021
136 : 8230
137 : 10263
138 : 8299
139 : 9514
140 : 7923
141 : 8382
142 : 7106
143 : 7183
144 : 5908
145 : 5667
146 : 4894
147 : 4545
148 : 3855
149 : 3424
150 : 2856
151 : 2524
152 : 2022
153 : 1673
154 : 1505
155 : 1295
156 : 1061
157 : 908
158 : 724
159 : 644
160 : 510
161 : 427
162 : 345
163 : 286
164 : 255
165 : 205
166 : 156
167 : 125
168 : 109
169 : 76
170 : 66
171 : 62
172 : 54
173 : 37
174 : 30
175 : 25
176 : 27
177 : 19
178 : 19
179 : 15
180 : 8
181 : 6
182 : 6
183 : 3
184 : 6
185 : 3
187 : 2
188 : 1
189 : 1
193 : 1


FOR 19x19
---

mean score =1.907221916712493
17337.6306463181 Playout/sec
Time=57.686659752
Number of playout=1000150.0
Mean moves per sim 455.3779143128531

Komi = 0,5
Amaf Result :
||,000
||,493||,503||,500||,503||,504||,504||,503||,503||,504||,504||,504||,503||,503||,504||,504||,503||,501||,500||,492
||,502||,510||,513||,513||,515||,514||,514||,515||,514||,514||,515||,514||,514||,514||,514||,514||,512||,510||,501
||,500||,512||,515||,516||,516||,516||,516||,517||,516||,515||,517||,516||,516||,517||,517||,516||,514||,512||,500
||,502||,515||,516||,517||,517||,518||,517||,517||,516||,517||,517||,517||,517||,517||,517||,516||,517||,514||,503
||,503||,514||,517||,517||,518||,518||,517||,516||,516||,516||,516||,515||,516||,517||,517||,519||,515||,513||,504
||,504||,514||,517||,518||,517||,517||,516||,516||,516||,516||,516||,516||,517||,516||,517||,517||,516||,513||,503
||,504||,514||,516||,515||,516||,516||,517||,516||,515||,515||,516||,516||

[computer-go] simple MC reference bot and specification

2008-10-11 Thread Denis fidaali


On point 13:
one-stone captures may eventually be "hard" for some implementations.
I think using game length as a criterion gives more freedom.

Then you have to specify what to do with those pathological games anyway.
Do you score them "as they are", or do you drop them? I do count them.
The rule for counting is probably quite standard too: I give a point for 
each stone present; then for each empty point, I give a point to the side
that has all orthogonal control of it, if any.

I usually implement superko checking very late, if I do at all.

I often start by selecting either the first or the last "best" move,
rather than picking one at random, although picking at random
makes me more comfortable somehow. It could be interesting
to see how much difference it makes.

Finally, about the size: is it supposed to be 9x9 only?
9x9-only gives more freedom to fine-tune for that size.
It may make the implementations less useful if someone
wants another size the light policy could succeed on
(like maybe studying 7x7 and such?)


[computer-go] programming (languages) light-simulation contest

2008-10-10 Thread Denis fidaali




 The reason I have been so eager to make sure my implementation conformed
 to what I had in mind (this time...) is that I wanted to be able
 to test various (possibly many) different implementations
 in different environments (machines, languages, memory, etc.).
 
 So far I have developed simple light simulators for these environments
 (all single-threaded):
  - Java, 32-bit
  - C, 32-bit Linux
  - C, 64-bit Linux
  - Assembly, 32-bit, on an Intel duo without SSE
  - Assembly, 32-bit, on an Intel duo with some SSE
  - Assembly, 64-bit, with and without SSE
  - NVIDIA CUDA C

I was quite dissatisfied with the results. The development cycle was just too 
long;
there were just too many things to be tuned and tried. I got lost in the 
different versions,
not remembering how each had been validated, and more than often plainly losing 
track of where the code
was (I used more than 5 computers, most of which are not around anymore, with 
no clear backup strategy),
when it had not been changed to death anyway (without backups, so not 
much left to recover).

 So I figured out that what I needed, to begin with, was method.
 
-
  I have loosely followed those "hey, here is a new interesting language" 
conversations.
  My conclusion was always the same: it's just a confrontation of random 
thoughts.
  So I kind of stopped trying to follow them. I think things are starting to go in 
a good
  direction now: it's mostly useless to talk about ideas, or unverifiable data,
  on a list like this one.
  
   So what I like a lot about how things are going now is that the 
community
   seems to have a strong will to get something to talk about in the first 
place. Still, I hardly
   anticipate this giving something fruitful in the end. That would be 
because everyone's goal
   seems different from everyone else's. I guess there is a goal worth 
pursuing underlying 
   the conversation. I see such a goal for myself, which would be to answer the 
question: "what
   does a really good version of light playouts look like?". I chose "light 
playouts" because
   it's so easy now to agree on an implementation: most people seem to use 
quite exactly the
   same one. That's why I was able to simply ask others to confirm that mine 
was correct.
   
   By the way, I think that an AMAF implementation based on those light playouts 
is straightforward
   too. That's why I chose those to try to get the most out of everybody else's 
experience.
   As for me, I'm really NOT interested in knowing "what language is good for Go 
programming". That's
   simply not a question I can ask myself, or anyone else. This question 
doesn't make any sense to
   me. Still, if someone can get the "standard light playout" right in fewer than 
10 very understandable lines of code,
   I would be very happy to see it. But it would never mean to me that,
   "this language is BEST", even if the performance is optimal there. I think 
90% of the 
   "this language is the best" debate has its roots in some affective part of 
the people
   engaged in it. 
   
   So... I'm not interested at all in a language contest. I really don't care 
about that.
   Still, I find it interesting to argue over facts, rather than over random 
thoughts.
   Now, really setting up the frame rules of this "language contest" seems a 
challenge 
   interesting in itself. I don't think it will be easy to do. I think that
   it has to be such that most people agree it has been set up properly.
   And then it has to be such that absolutely anybody with the slightest 
interest
   in the matter will be able, and willing, to participate.
   
   I didn't really put a lot of thought into it, but here is what I can think of
   right now.
   
   - It should be useful. So let's make it a programming contest, rather than a 
pure
   language contest. It may be slower to get enough data to be able to compare 
different
   languages, but then more people can participate and find value in it.
   I would never try anything for a "language contest" myself, but I may well 
want to invest 
   some time in a programming contest. So when I say "it should be useful", the 
question I really
   ask is: what information (or benefits) should we hope to get out of this 
challenge?
   That would both help us get nice ideas about how to set it up, and give a basis 
for discussing whether what
   we proposed is right or wrong. And then later on we can compare the real 
benefit of it with
   what we expected to get.
   
   - It should allow a lot of freedom. Once again, this is based on the 
assumption that
   it is most probable that NOBODY will actually join the contest. So we have 
to make
   it as wide

[computer-go] AMAF Scalability study + Responses to previous

2008-10-09 Thread Denis fidaali

PS: how can I get my responses to this mailing list correctly 
threaded? 
(For example, I would have liked to set this one as a response to my previous 
post.)


 I have made some experiments with my AMAF implementation. It is both an 
attempt at understanding how the beast scales, and another attempt at 
asserting that it does what I'd like it to do. So if someone could confirm that he 
gets the same results for the same engine... (especially for the 
low-CPU-cost experiments).
 
 Three bots are represented here:
 GNUGO  : GNU Go 3.7.11, level 1 (without any fancy options)
 RANDOM : uses equiprobable play, with no filling of its own "pseudo-eyes"
 AMAF   : uses AMAF, with the given number of playouts. It is a "first player 
who played there" type of AMAF.
 
 
--
 Every victory-ratio measure was made over a 
 ---
 1000-game match, alternating black and white. Komi is 6,5.
 ---
 Here are the results:
---

-
GNUGO vs AMAF
-
 Number_of_Amaf_Simulations | Victory_percentage_for_GNUGO | Victory_percentage_for_AMAF
 300   |  97,9 |   2,1
 500   |  94,6 |   5,4
 1000  |  90,5 |   9,5
 5000  |  87,0 |  13,0
 10000 |  88,7 |  11,3
 
 
AMAF vs AMAF
-
 Number_of_Amaf_Simulations_A1 | Number_of_Amaf_Simulations_A2 | Victory_percentage_for_A1 | Victory_percentage_for_A2
 20  |  100  |   0,8 |  99,2
 50  |  100  |  12,6 |  87,4
 70  |  100  |  27,6 |  72,4
 
 100 |  1000 |   0,1 |  99,9
 250 |  1000 |   6,6 |  93,4
 500 |  1000 |  26,3 |  73,7
 750 |  1000 |  41,4 |  58,6


AMAF vs RANDOM
-
 Number_of_Amaf_Simulations | Victory_percentage_for_AMAF | Victory_percentage_for_RANDOM
 5   |  89,3 |  10,7
 10  |  97,4 |   2,6
 20  |  99,5 |   0,5
 100 |  100  |   0


=
Note: in the 10000-simulation AMAF vs GNU Go match, one game didn't finish 
because of a
triple ko. (I ran another one instead.)

There are certainly a number of interesting things to read out of this, although 
we would
probably need more data before drawing any conclusions. In particular, I'd like to 
understand better how 10000 simulations
can be worse than 5000. Unfortunately, the matches with 10000 take quite a lot 
of CPU
time. Can we degrade performance further with even more simulations? :) I wonder 
how 5000-simulation AMAF fares
against 10000-simulation AMAF. Although I'm more interested in the upscales 
than the downscales :)



- Answer to another post 
---
Ingo said :
---
Some of you may want to stone me for this heresy,
but read carefully before.

When you have MCTS/UCT for Go that can work for
real-valued scores (or at least a version that can
work for three-valued scores: win, draw, loss),
you may look at Go with different scoring systems.

Example: 
A win by 0.5+k points is worth 100+k.
A loss by 0.5+k points gives score -100-k.
Let's call b=100 the base score. 
Question: How does the playing style change with b?
(Of course, for very large b you should have almost the
normal playing style.)

Ingo.

PS: Such evaluations may also help to reduce the problem
of MC's laziness.


Answer: 

 This is, of course, a good idea. So good, in fact, that many people have tried 
it, with the conclusion that it mainly helps degrade how often a program 
wins. MC's laziness seems to be a golden one indeed.
 Yet I do not know whether the discussion of it has been exhausted when applied to 
WIN vs DRAW.
 
 For example, we could model a theoretical situation where developers get 
paid based on the performance of their bots. If they win, they get X $ as a 
reward; if they lose, they lose X $; and if they get a draw, there is 
no cost and no reward involved. Then it would be interesting to find out what 
the best balance is between looking out for a win and looking out for a draw. 
When a Monte Carlo bot can't find a draw, it will often throw all its stones out 
of the window in a vain attempt to achieve a win no matter what... So it could 
be interesting to try to balance this.
 
  Anyway, it's not connected to the "laziness" problem in any way I can see. 
Most probably because I do not per se see the "laziness" as a "problem", more 
as a feature.
  
  
-

[computer-go] More Characteristic values (AMAF Characteristics from empty board)

2008-10-08 Thread Denis fidaali

Hi,
Thanks, Dave Hillis, for this quick response. (And by the way, thanks to Rémi
Coulom for a previous response he made a while ago. I just thought that a post
containing only thanks would be noise to most people here...)

To Don and Christoph: I realize that I was probably not as clear as I thought
I was. I have built a light simulator. There is no tree involved. It simply
chooses a move with equiprobability from the set of empty points on the board.
If the move is not valid, it chooses another one. If it is a "pseudo-eye", it
again chooses another point. Whenever no point can be validly chosen, it
passes. Whenever two consecutive passes occur, the simulation ends.
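
 A minimal Java sketch of that loop (the Board interface and its methods are
hypothetical stand-ins for my engine, not a real API):

  import java.util.ArrayList;
  import java.util.List;
  import java.util.Random;

  interface Board {
      List<Integer> emptyPoints();
      boolean isLegal(int point);        // e.g. no suicide
      boolean isOwnPseudoEye(int point); // the mover's own pseudo-eye
      void play(int point);
      void pass();
  }

  class LightPlayout {
      static final int PASS = -1;

      // One playout: uniform random moves, skipping the mover's own
      // pseudo-eyes; two consecutive passes end the simulation.
      static void run(Board board, Random rng) {
          int consecutivePasses = 0;
          while (consecutivePasses < 2) {
              int move = pick(board, rng);
              if (move == PASS) { board.pass(); consecutivePasses++; }
              else { board.play(move); consecutivePasses = 0; }
          }
      }

      // Draw empty points without replacement until one is playable.
      static int pick(Board board, Random rng) {
          List<Integer> candidates = new ArrayList<>(board.emptyPoints());
          while (!candidates.isEmpty()) {
              int p = candidates.remove(rng.nextInt(candidates.size()));
              if (board.isLegal(p) && !board.isOwnPseudoEye(p)) return p;
          }
          return PASS;
      }
  }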

 Now, I wanted to make sure that my implementation had any chance of being
correct, so I thought I'd post the characteristic statistical values that I
get out of it. Indeed, I thought it could benefit others later on, in
particular if someone could corroborate them :)


So here is another set of values. It is the all-moves-as-first score from an
empty board for black stones (black plays first in my simulations). It gives
the probability of black winning the game (while playing randomly) if he has
played an intersection before white during the simulation.
 ,566 means 0.566 chances out of 1 that a game where black played there first
is a win for black.
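
 For what it's worth, the bookkeeping behind such a table could look like
this (a Java sketch under my own assumptions; the BLACK constant and the
array layout are hypothetical):

  // Called once per finished playout. moves[i]/colors[i] give the i-th move
  // of the simulation and who played it; for each point, only the first play
  // there counts ("first player who played there" AMAF).
  static void accumulateAmaf(int[] moves, int[] colors, boolean blackWon,
                             int[] blackFirstPlays, int[] blackFirstWins) {
      boolean[] seen = new boolean[blackFirstPlays.length];
      for (int i = 0; i < moves.length; i++) {
          int p = moves[i];
          if (p < 0 || seen[p]) continue;   // skip passes and replayed points
          seen[p] = true;
          if (colors[i] == BLACK) {         // black reached p before white
              blackFirstPlays[p]++;
              if (blackWon) blackFirstWins[p]++;
          }
      }
  }

 Each cell of the grid below would then be
blackFirstWins[p] / (double) blackFirstPlays[p].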

Could anyone once again confirm that these results sound correct?


mean score = 2.1726424464558005
79655.5328628921 playouts/sec
Time = 12.563621936 (in seconds)
Number of playouts = 1000762.0
Mean moves per sim 111.0673969035

AMAF result (from empty board, black plays first):
||,490||,507||,506||,511||,514||,511||,506||,507||,490
||,507||,524||,533||,539||,541||,539||,533||,524||,507
||,506||,534||,544||,550||,553||,551||,544||,533||,505
||,511||,539||,550||,558||,561||,558||,551||,539||,511
||,513||,541||,553||,562||,566||,560||,552||,541||,513
||,511||,539||,550||,558||,560||,558||,550||,539||,511
||,507||,532||,544||,551||,552||,550||,544||,532||,506
||,507||,523||,534||,538||,541||,538||,532||,523||,507
||,491||,507||,505||,512||,513||,511||,506||,507||,492


PS: how can I arrange for my response to this mailing list to be correctly
threaded (indented)? For example, I would have liked to set this one as a
response to my previous post.


[computer-go] Light simulation : Characteristic values

2008-10-07 Thread Denis fidaali

 Hi, I have been searching a bit through the archives and was not able to
find a complete list of the characteristic values for light simulations.

 By light simulation I mean: playing the valid moves at random (no suicide
allowed), but not in the "true" eyes. I have a rule for defining a true eye:
all vertically and horizontally adjacent points are the player's color, and,
depending on the position (a sketch of this rule follows the list below):
 - no edge adjacent: NOT (two or more diagonals are the opponent's color);
 - one or two edges adjacent: NOT (one or more diagonal is the opponent's color).
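
 Here is a sketch of that rule in Java (the board helpers are hypothetical;
orthogonalNeighbours and diagonalNeighbours are assumed to return only
on-board points):

  // True-eye test as described above.
  static boolean isTrueEye(int p, int color, Board b) {
      int[] sides = b.orthogonalNeighbours(p);
      for (int n : sides)
          if (b.colorAt(n) != color) return false;  // every side is ours
      int hostileDiagonals = 0;
      for (int d : b.diagonalNeighbours(p))
          if (b.colorAt(d) == opponent(color)) hostileDiagonals++;
      boolean onEdgeOrCorner = sides.length < 4;    // one or two edges around
      int limit = onEdgeOrCorner ? 1 : 2;           // see the two cases above
      return hostileDiagonals < limit;
  }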

The engine is written in Java and runs on a quad-core Q9300 @ 2.50 GHz.
The code has been lightly optimized and uses pseudo-liberties to detect
captures.
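
 The pseudo-liberty idea, roughly: each stone credits one pseudo-liberty per
adjacent empty point, duplicates included, and a count of zero implies zero
real liberties, which is all capture detection needs. A Java sketch with
hypothetical helpers (mergeChains is assumed to tolerate merging a chain with
itself, e.g. via union-find):

  void playStone(int p, int color) {
      Chain placed = newChainAt(p, color);
      for (int n : neighbours(p)) {
          if (isEmpty(n)) { placed.pseudoLiberties++; continue; }
          Chain c = chainAt(n);
          c.pseudoLiberties--;              // p now fills one of its sides
          if (colorAt(n) == color) {
              placed = mergeChains(placed, c);
          } else if (c.pseudoLiberties == 0) {
              // Capture: removing the stones credits pseudo-liberties back
              // to the chains around each removed stone.
              removeChain(c);
          }
      }
  }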

Here is what I get:
---------------------------------------------------------

mean score = 2.086382709065067 (black wins on average by 2.08 points)
78454.10819786233 playouts/sec (trying to take advantage of the 4 cores)
Time = 12.752397841 (in seconds)
Number of playouts = 1000478.0
Mean moves per sim 111.05742754963127 (number of moves per simulation,
counting pass as a move)
Mean moves (no pass) per sim 107.37764148736903 (number of moves per
simulation, not counting passes)

Note: as no seki can arise in these simulations, the scorer may well be buggy
for seki positions and still report the correct mean score here.


[computer-go] Re: mogo beats pro!

2008-08-11 Thread Denis fidaali

How long will it be until a computer system reaches pro-level play?
(answering Bob Hearn's question)

Maybe, rather than taking the raw speed of hardware as a reference, we could
use the raw number of simulations (per second) as a basis for speculation,
assuming a fixed game time with the same settings as the 9-stone game that
was played.

 + So the first question is: how much more strength can 1.7 million
simulations yield than they do in today's MoGo version? MoGo has been in a
very intense state of development lately. I have no figures on how the
efficiency per simulation scales, but it has to be taken into account when
trying to guess what it will all look like in ten years.

 + The second question is: how much more will the algorithm be tuned to the
hardware (or the hardware to the software)? I guess the theoretical
throughput of the system MoGo was run on is MUCH more than 1.7 million
simulations per second (taking into account all the tree logic). Given the
exact same properties of the simulations that were used, I'd think that in
ten years' time, with nothing more to do... it could easily reach 17 million
simulations per second on the same system.

 + The third question is: will Moore's law hold? So far Moore's law has been
linked to the mono-threading paradigm. It may be true that supercomputers
didn't improve by much (I don't know too much about that). But there is also
evidence that, with the explosion and democratization of multiprocessors,
Moore's law will not hold in its current form. (GPU cards have had their
efficiency increase by much more than x2 every two years.)

 + The fourth question is: will much more efficient go-solving methods be
born in the next few years? And also, won't MoGo get access to more and more
powerful hardware as it closes in on pro level... (lately the top raw
computing power accessible to the MoGo program seems to have increased by a
lot... was it running on a mono-core 1 GHz Pentium three years ago?)


 It's true that ten years is a short span of time indeed. It may seem a bit
optimistic to hope for Kim to fall in a fair even game, with a
one-day-per-move setting, within ten years from now. But I wouldn't call that
totally unrealistic either. I guess I would easily put a bet on it if the
odds were about 1/20, me getting 16 times the money I bet if a go program
reaches pro level on 19x19 by the end of the year 2018. I'd probably go as
far as 1/3 for the end of the year 2028.

-- Another interesting question is: how much time before MoGo can tackle a
three-stone handicap game against a pro? I have read somewhere that, asked
how many stones he would need for a sure win against God himself if his life
were at stake, a pro answered: three stones.





Bob Hearn wrote :
So -- quick back-of-the-envelope calculation, tell me where I am  
wrong. 63% win rate = about half a stone advantage in go. So we need  
4x processing power to increase by a stone. At the current rate of  
Moore's law, that's about 4 years. Kim estimated that the game with  
MoGo would be hard at 8 stones. That suggests that in 32 years a  
supercomputer comparable to the one that played in this match would be  
as strong as Kim.

This calculation is optimistic in assuming that you can meaningfully  
scale the 63% win rate indefinitely, especially when measuring  
strength against other opponents, and not a weaker version of itself.  
It's also pessimistic in assuming there will be no improvement in the  
Monte Carlo technique.

But still, 32 years seems like a surprisingly long time, much longer  
than the 10 years that seems intuitively reasonable. Naively, it would  
seem that improvements in the Monte Carlo algorithms could gain some  
small number of stones in strength for fixed computation, but that  
would just shrink the 32 years by maybe a decade.

How do others feel about this?

I guess I should also go on record as believing that if it really does  
take 32 years, we *will* have general-purpose AI before then.
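
For reference, the back-of-the-envelope arithmetic above, assuming one stone
per 4x of compute and one Moore's-law doubling every two years, works out as:

  \[
  8~\text{stones} \times \log_2 4~\text{doublings per stone} = 16~\text{doublings},
  \qquad 16~\text{doublings} \times 2~\text{years per doubling} = 32~\text{years}.
  \]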


[computer-go] Re: Strength of Monte-Carlo w/ UCT...

2008-08-10 Thread Denis fidaali


 Hi there.

I agree with your point, Robert Waite.
I have yet to see any paper that proves there is such a thing as scalability
on the basis of mathematical proof. So all your points criticizing the
"mathematical certainty" of the scalability are probably 100% right. There is
no such thing as mathematical certainty there.

It can be modeled easily, as you already did: what if the "evaluation
function" were deliberately giving wrong data? How would one mathematically
prove that it isn't? You would at a minimum have to know WHAT the "evaluation
function" ACTUALLY is... In fact, all the evidence we have gathered about
scalability may rather be surprising to some people: why on earth does all of
that work so well?

 But it's a "proven" fact that it does indeed work well so far. So it seems
perfectly natural to utter phrases such as "there is evidence that, given the
hardware we will have in twenty years, humans will be beaten by current
algorithms". I don't see how this evidence can be qualified with the term
"mathematical", but it is here (hiding among us!). Now, if someone has the
feeling that maybe there is a roadblock, it has to be considered for what it
is: a personal intuition. What is this intuition precisely based on? Why are
you trying to share it with us in the first place? For myself, I believe that
what you are trying to do is to begin analyzing all the data the community
has gathered so far, trying to understand why indeed it worked so well that
it even beat a pro with a 9-stone handicap and with as few as 1.7 million
evaluations/second (running on some 800 cores). To the point that the pro
felt he had no chance of winning at all with that much of a handicap. You are
trying to understand this, and are probably right on track for that goal. The
term "mathematical" is very valuable to you, and you'll find that it has a
much wider use (on this list) than you would like it to. But then,
"mathematics" has proven to be of little use in the context of go programming
lately. It is more of a physicist's world: you make up a (mathematical)
model, you test it against "reality" via experimentation, and you then get
"empirical" certainty that the model is indeed correct.

 There is no way of mathematically proving that the speed of light would
still be constant if I chose to dance naked on the Champs-Élysées some day.
You'll definitely find no paper on that. Yet to speak of it as mathematically
certain is probably not as wrong as it sounds.


 But as it is, I'm playing devil's advocate here. I totally agree with you. I
find your way of fighting irrationality very interesting indeed. It's been
very refreshing.


-
Robert Waite has wrote :
I would really like to see what paper you are referring to. Do you mean
"Bandit based Monte-Carlo Planning"? Please post the name of the paper which
you are referring to. I do not think that the empirical evidence is
overwhelming that it is scalable in a practical way for the problem of
beating a human.

Now the topic has moved to scalable to beat a human and I disagree with the
interpretation of the data. We are both interpreting data. Your data doesn't
count as a theory.. where you reduced my theory to one that has no data. We
are both interpreting the same data. Diminishing returns was just an example
of something that could be a roadblock. I was questioning how this
necessarily scales to humans. It seems more data is needed from MC-programs
vs. humans to make a rigorous theory of scalability. So far.. the only
scalability that seems proven is a case for solving the game... not beating
humans. There is some point between that would most likely in my opinion
lead to humans being beaten.. some amount of calculation before you solved
it.. but the shape of this curve is something I am unsure of. It doesn't
seem that unreasonable to question if there is a practical scalability.


[computer-go] GUI, GTP, and exploration of internal node states.

2008-07-31 Thread Denis fidaali

Hi there.

To the best of my knowledge, most people use GoGui and GTP. This provides
interesting ways to see analysis results, but only in a "flat" way.

When the need arises to look deeper into what the bot is thinking, one seems
to need to develop one's own system. That is quite hard in fact, and the
result lacks interactive features.

Personally, when I build a bot and try to debug and improve it, I like very
much to understand how it reaches the conclusions it reaches. I like to be
able to visualize not only the score of the top node, but the score of any
deep node.

PART 1:

 Basically, I'd like GTP (or one of its extensions, like those GoGui
provides) to feature a standard way to explore deep-node states, and not only
the root node.

For example, I may wonder why the program does not explore a pretty obviously
good branch. I'd like there to be a command that lets me find out. I'd
describe the branch I want to know more about, and then launch an analysis
command for that particular branch. It could be achieved through the concept
of a "path from the root node".

 Analysis commands would then be accepted not only for the root node but for
any node reached through a path, a path being a list of valid moves.
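
 A hypothetical exchange with such a command might look like this (the
command name, arguments, and output format are all invented here, just to
make the idea concrete):

  analyze_path b D4 w Q16 b Q4
  = C3 0.512 14231
    E5 0.498 9017
    F3 0.471 4402

 The moves after the command name address the node; each output line would
give, for one child of that node, the move, its winrate estimate, and its
visit count.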

Then one would need a GUI to use that easily. The only easy-to-use solution
that came to my mind (and I actually implemented it, in a somewhat dirty way)
was to duplicate the board view. So I would have two boards: one main board
that works exactly as the GoGui board does, and one VARIATION board used to
build up a path through clicks and visual interaction. The VARIATION board is
where I would like to see the result of the analysis_through_path command.



English is not my native language, so I have no idea whether what I said was
clear or not. (Feel free to interpret it and put it in better English if you
think you grabbed the idea...)

- Do you think that having a GUI and a protocol to help explore deeper nodes
would be useful? Would you like it to be integrated into an already mature
and nice environment like GoGui?

PART 2:

 Next, most bots use some kind of iterative exploration that eventually
converges to a good result, sometimes starting from a very bad understanding
of the board position. I'd like to be able to get more (interactive)
information about this behavior too. So maybe a parameter could be added to
the standard genmove command that would run the bot for a given number of
analysis iterations. Of course this is most useful together with what I ask
for in part 1, as the invented example below shows.
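
 For instance (again with invented command names), one could alternate a
fixed-size search step with path analysis and watch the estimates evolve:

  run_playouts 5000
  =
  analyze_path b D4 w Q16
  = C3 0.487 1204
    E5 0.465 840
  run_playouts 5000
  =
  analyze_path b D4 w Q16
  = C3 0.502 2688
    E5 0.471 1411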



 So when you play a tournament and your bot loses a position, you would be
able to launch GoGui, set up the position, and then explore visually what
your bot thinks of different variations, at any depth from the actual
position you want to analyze. And you'd be able to visualize how it converges
(or diverges) from what you think is right as the number of iterations grows
through repeated use of an analysis command.





I hope that I have been clear enough for this list to understand my point,
and I'm eager for your opinions on this topic.
