Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-27 Thread Don Dailey

I added 2 bots:

   Anchor_1k
   GenAnchor_1k

I'm not positive I found the correct source code, but I believe this
is the standard AnchorMan code.

 Anchor_1k  is AnchorMan  running at 1000 simulations.

 GenAnchor_1k is AnchorMan WITHOUT the move incentives.  No tricks to
 encourage it to avoid auto-atari or any other special code.

I think 1000 simulations is a good number since there is a sharp point
of diminishing returns beyond this.  Also, one could test a lot of bots
without consuming a lot of CPU cycles.

- - Don



Jason House wrote:
> On 9/27/07, *steve uurtamo* <[EMAIL PROTECTED]
> <mailto:[EMAIL PROTECTED]>> wrote:
> 
> Are you getting the same number of playouts as
> everyone else? 
> 
> 
> 
> That varies wildly from bot to bot.
> DoDo (not online right now) does 64 sims (weak - 800 ELO)
> ReadyFreddy does 500 sims
> Control Boy does 5,000 sims (> 1400 ELO)
> hb-amaf2 does > 5,000 sims (variable) (very weak - < 500 ELO)
> myCtest-10k-AMAF-3 does 10,000 sims
> myCtest-10k-AMAF-5 does 10,000 sims
> myCtest-10k-AMAF-8 does 10,000 sims (> 1400 ELO)
> myCtest-50k-AMAF-5 does 50,000 sims
> 
> Three more AMAF bots exist (that I'm aware of), but the author(s) have
> not joined into the discussions
> ego_allfirst2 (> 1400 ELO)
> libEGO_AMAF
> libEGO_AMAF2 (> 1400 ELO)
> ego110_allfirst (very weak - < 500 ELO)
>  
> 
> s.
> 
> 
> - Original Message 
> From: Jason House <[EMAIL PROTECTED]
> <mailto:[EMAIL PROTECTED]>>
> To: computer-go <mailto:computer-go@computer-go.org>
> Sent: Thursday, September 27, 2007 9:33:14 AM
> Subject: Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS
> 
> I've kicked off another all moves as first variant.  I think it
> matches all recommendations on how to improve the all moves as first
> performance, but still appears to be quite weak. 
> http://cgos.boardspace.net/9x9/cross/hb-amaf2.html
> 
> 
> 
> 
> 
> ___
> computer-go mailing list
> computer-go@computer-go.org
> http://www.computer-go.org/mailman/listinfo/computer-go/
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-27 Thread Urban Hafner


On Sep 27, 2007, at 15:52 , Jason House wrote:

Three more AMAF bots exist (that I'm aware of), but the author(s)  
have not joined into the discussions

ego_allfirst2 (> 1400 ELO)
libEGO_AMAF


libEGO-AMAF is based on libEGO (v0.114). The way it works is like this:

1. Find all legal moves and remove the moves into one's own eyes.
2. Select a random move from 1. and play a random simulation
3. Increment the play count of all moves the player made; if the player
   won, also increment the win count
4. go back to 2. unless move time is over
5. Play the move with the highest winning percentage
5.1 Resign if best move has winning percentage of 0.

Time management: move_time = max(0.05*time_left, 0.1)
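In Python, steps 3-5 plus the time-management formula look roughly like this (a sketch, not the actual libEGO-AMAF source; `amaf_select` and its playout-record format are hypothetical stand-ins):

```python
def move_time(time_left):
    """Time management as described: 5% of remaining time, 0.1s minimum."""
    return max(0.05 * time_left, 0.1)

def amaf_select(candidates, playouts):
    """Steps 3-5: tally play/win counts for every move the player made in
    each simulation, then pick the move with the best winning percentage.
    `playouts` is a list of (moves_played_by_us, we_won) pairs."""
    plays = {m: 0 for m in candidates}
    wins = {m: 0 for m in candidates}
    for moves_played, we_won in playouts:
        for m in set(moves_played):          # count each move once per game
            if m in plays:
                plays[m] += 1
                if we_won:
                    wins[m] += 1
    def rate(m):
        return wins[m] / plays[m] if plays[m] else 0.0
    best = max(candidates, key=rate)
    return "resign" if rate(best) == 0.0 else best   # step 5.1
```

For example, amaf_select(["A1","B2","C3"], [(["A1","B2"], True), (["A1","C3"], False)]) picks "B2" (1/1 wins) over "A1" (1/2) and "C3" (0/1).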


libEGO_AMAF2 (> 1400 ELO)


The same, except that instead of recording all moves, for a game with N
moves only the first N**0.75 moves are recorded (as in myCtest-AMAF-8).
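As a sketch (assuming `moves` is the sequence of one player's moves in game order; truncation toward zero is an assumption, the email doesn't say how fractional values are handled):

```python
def amaf_prefix(moves):
    """libEGO_AMAF2 rule: for a game with N moves, record only the
    first N**0.75 of them rather than all N."""
    n = len(moves)
    return moves[:int(n ** 0.75)]
```

For a 16-move game this keeps the first 8 moves (16**0.75 = 8).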

Both are available in a darcs repository (same license as libEGO, i.e.
GPLv2):

darcs get http://darcs.bettong.net/libEGO-AMAF

BTW, if anyone has any suggestions on improving it I'd like to hear
them!

Urban















Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-27 Thread Jason House
On 9/27/07, steve uurtamo <[EMAIL PROTECTED]> wrote:
>
> Are you getting the same number of playouts as
> everyone else?
>


That varies wildly from bot to bot.
DoDo (not online right now) does 64 sims (weak - 800 ELO)
ReadyFreddy does 500 sims
Control Boy does 5,000 sims (> 1400 ELO)
hb-amaf2 does > 5,000 sims (variable) (very weak - < 500 ELO)
myCtest-10k-AMAF-3 does 10,000 sims
myCtest-10k-AMAF-5 does 10,000 sims
myCtest-10k-AMAF-8 does 10,000 sims (> 1400 ELO)
myCtest-50k-AMAF-5 does 50,000 sims

Three more AMAF bots exist (that I'm aware of), but the author(s) have not
joined into the discussions
ego_allfirst2 (> 1400 ELO)
libEGO_AMAF
libEGO_AMAF2 (> 1400 ELO)
ego110_allfirst (very weak - < 500 ELO)


s.
>
>
> - Original Message 
> From: Jason House <[EMAIL PROTECTED]>
> To: computer-go 
> Sent: Thursday, September 27, 2007 9:33:14 AM
> Subject: Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS
>
> I've kicked off another all moves as first variant.  I think it matches
> all recommendations on how to improve the all moves as first performance,
> but still appears to be quite weak.
> http://cgos.boardspace.net/9x9/cross/hb-amaf2.html
>

Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-27 Thread steve uurtamo
Are you getting the same number of playouts as
everyone else?

s.


- Original Message 
From: Jason House <[EMAIL PROTECTED]>
To: computer-go 
Sent: Thursday, September 27, 2007 9:33:14 AM
Subject: Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

I've kicked off another all moves as first variant.  I think it matches all 
recommendations on how to improve the all moves as first performance, but still 
appears to be quite weak.  
http://cgos.boardspace.net/9x9/cross/hb-amaf2.html

Here's all the variants that have now run on CGOS.  Brief descriptions of
each are further down in the e-mail:
housebot-xxx-amaf - original eye, one pass to end game, random empty move then 
scan

hb-amaf-alteye - alternate eye rule #1, two passes to end game, random empty 
move then scan
hb-amaf-alteye2 - alternate eye rule #2, two passes to end game, random empty 
move then scan
hb-amaf-alt - original eye, two passes to end game, random empty move then scan

hb-amaf2 - alternate eye rule #2, two passes to end game, random empty legal 
move

What they all mean:

EYES
original eye - don't fill if all 4 neighbors must be the same chain.  (Can miss 
some forms of life)


alternate eye rule #1 - don't fill if all 4 neighbors are same color and 
opposing color can't capture by playing there (can yield chains of false eyes 
that can be captured)

alternate eye rule #2 - don't fill if all 4 neighbors are same color and 
diagonals can't prove it to be a false eye (one enemy stone along edge/corner, 
two enemy stones in center).  (Matches what Don indicates everyone else uses)


END OF GAME / SCORING
one pass to end game - Stop when one side has no more legal moves and assume 
their stones in atari are dead and their opponent wins an open ko (if it 
exists).  (can give incorrect results in strange ko situations and with very 
large groups in atari)


two passes to end game - Continue when one side passes from no legal moves. 
Only stop when both sides have no more legal moves.  No stones will be left in 
atari or a ko situation.  Scoring is unambiguous.  (Matches Don's 
recommendations)


RANDOM MOVE SELECTION
random empty move then scan - Pick a random empty board position.  Use it if 
legal.  If not legal, scan through list and pick next legal move found.  
(should match lib ego implementation)


random empty legal move - Pick a random empty board position.  Use it if legal. 
 If not legal, exclude from pool and pick a new random move.  (Matches Don's 
recommendations for pure randomness)







  


Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-27 Thread Jason House
I've kicked off another all moves as first variant.  I think it matches all
recommendations on how to improve the all moves as first performance, but
still appears to be quite weak.
http://cgos.boardspace.net/9x9/cross/hb-amaf2.html

Here's all the variants that have now run on CGOS.  Brief descriptions of
each are further down in the e-mail:
housebot-xxx-amaf - original eye, one pass to end game, random empty move
then scan
hb-amaf-alteye - alternate eye rule #1, two passes to end game, random empty
move then scan
hb-amaf-alteye2 - alternate eye rule #2, two passes to end game, random
empty move then scan
hb-amaf-alt - original eye, two passes to end game, random empty move then
scan
hb-amaf2 - alternate eye rule #2, two passes to end game, random empty legal
move

What they all mean:

EYES
original eye - don't fill if all 4 neighbors must be the same chain.  (Can
miss some forms of life)

alternate eye rule #1 - don't fill if all 4 neighbors are same color and
opposing color can't capture by playing there (can yield chains of false
eyes that can be captured)

alternate eye rule #2 - don't fill if all 4 neighbors are same color and
diagonals can't prove it to be a false eye (one enemy stone along
edge/corner, two enemy stones in center).  (Matches what Don indicates
everyone else uses)
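A sketch of "alternate eye rule #2" in Python (the board representation here — a dict mapping (x, y) to 'b'/'w' — is an assumption for illustration, not housebot's actual data structure):

```python
def is_eye_rule2(board, pt, color, size=9):
    """Don't fill `pt` for `color` if all on-board orthogonal neighbors are
    friendly and the diagonals can't prove a false eye: at most one enemy
    diagonal in the center, none for an edge/corner point."""
    x, y = pt
    on_board = lambda p: 0 <= p[0] < size and 0 <= p[1] < size
    neighbors = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    diagonals = [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]
    if any(board.get(p) != color for p in neighbors if on_board(p)):
        return False                 # some orthogonal neighbor is not friendly
    enemy = 'w' if color == 'b' else 'b'
    enemy_diags = sum(1 for p in diagonals
                      if on_board(p) and board.get(p) == enemy)
    # center point: one enemy diagonal tolerated; edge/corner: none
    limit = 1 if all(on_board(p) for p in neighbors) else 0
    return enemy_diags <= limit
```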

END OF GAME / SCORING
one pass to end game - Stop when one side has no more legal moves and assume
their stones in atari are dead and their opponent wins an open ko (if it
exists).  (can give incorrect results in strange ko situations and with very
large groups in atari)

two passes to end game - Continue when one side passes from no legal moves.
Only stop when both sides have no more legal moves.  No stones will be left
in atari or a ko situation.  Scoring is unambiguous.  (Matches Don's
recommendations)

RANDOM MOVE SELECTION
random empty move then scan - Pick a random empty board position.  Use it if
legal.  If not legal, scan through list and pick next legal move found.
(should match lib ego implementation)

random empty legal move - Pick a random empty board position.  Use it if
legal.  If not legal, exclude from pool and pick a new random move.
(Matches Don's recommendations for pure randomness)
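The two selection strategies can be sketched as follows (Python; `is_legal` is a stand-in predicate, not any particular bot's API). The scan variant is faster but slightly biases toward moves that directly follow runs of illegal points in the list; the rejection variant keeps every legal move equally likely:

```python
import random

def random_then_scan(empties, is_legal, rng=random):
    """Pick a random empty point; if illegal, scan forward (wrapping)
    for the next legal one."""
    if not empties:
        return None
    start = rng.randrange(len(empties))
    for k in range(len(empties)):
        p = empties[(start + k) % len(empties)]
        if is_legal(p):
            return p
    return None  # no legal move at all

def random_legal(empties, is_legal, rng=random):
    """Pick a random empty point; if illegal, drop it from the pool and
    redraw, so the final choice is uniform over legal moves."""
    pool = list(empties)
    while pool:
        p = pool.pop(rng.randrange(len(pool)))
        if is_legal(p):
            return p
    return None
```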

Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-21 Thread Christoph Birk

On Fri, 21 Sep 2007, Jason House wrote:

Are you using AMAF, UCT, or something else?


Nothing at all. Really pure random playouts.
I am working on an AMAF version for comparison ...


If it's no trouble to you, it
would be nice to see them running online while all of this AMAF stuff is
going on.


ok. I'll run them continuously.


I find it interesting that your 10k and 50k bots have wildly
different performance given what Don has indicated.


I think he was referring to his AnchorMan. The heavier the
playout, the smaller the improvement with more playouts, I guess.
I tried a 250k version, but that only increased the rating by
about 100 ELO.

Christoph


Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-21 Thread Jason House
On 9/21/07, Christoph Birk <[EMAIL PROTECTED]> wrote:
>
> It might be hard to compare your AMAF-bots with Don's since he
> uses quite some tricks to improve their performance. I suggest
> you compare with some plain-vanilla program I keep for comparison
> on CGOS
>
>   myCtest-10k (ELO ~1050)
>   myCtest-50k (ELO ~1350)
>
> They do just 10k (50k) pure random simulations per move. Your AMAF-bots
> should be at least that good if they have no significant bugs,
> correct?
> If you are interested I can run them 24/7 on CGOS (currently they
> only play once per week to keep them on the list).



Are you using AMAF, UCT, or something else?  If it's no trouble to you, it
would be nice to see them running online while all of this AMAF stuff is
going on.  I find it interesting that your 10k and 50k bots have wildly
different performance given what Don has indicated.

Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-21 Thread Jason House
On 9/21/07, Don Dailey <[EMAIL PROTECTED]> wrote:
>
>
> Jason,
>
> I noticed from several emails that you are probably doing a lot of
> little things differently and assuming they make no difference.



This thread has certainly helped highlight them.  I now have a list of
things and a crude ordering of what may be affecting the quality of the
simulations and the results.  I plan to experiment with changes to the
various things... but don't seem to get more than 20 minutes at a time to
dedicate to the experimentation.



For
> instance you still haven't tried the same exact eye-rule we are using so
> you can't really say with complete confidence that there is no difference.



Huh?  I thought my eye alternative #2 was the "exact eye rule".  A center
point is considered an eye if all 4 neighbors are friendly stones and no
more than one diagonal is an enemy stone.  An edge (corner) point is an eye
if all three (both) neighbors are friendly stones and none of the diagonals
are enemy stones.

I realize that alternative #1 wasn't correct, but I thought #2 was.  Please
let me know if that's an incorrect assumption.



If you really want to get to the bottom of this,  you should not assume
> anything or make approximations that you believe shouldn't make much
> difference even if you are right in most cases.   There could be one
> clearly wrong thing you are doing, or it could be many little things
> that all make it take a hit.



I'm systematically experimenting with these things.  So far, this has been
the eyes and playing a random game until one or two passes (in a row).

housebot-xxx-amaf - original eye, one pass to end game
hb-amaf-alteye - alternate eye rule #1, two passes to end game
hb-amaf-alteye2 - alternate eye rule #2, two passes to end game
hb-amaf-alt - original eye, two passes to end game

I still have yet to experiment with random move generation.  I'm 100%
confident that this is what "effective go library" does, but I don't
consider that evidence that it's the most correct method (only the fastest).


I am curious myself what the difference is and I'm willing to help you
> figure it out but we have to minimize the ambiguity.



I appreciate the help.  When we're all done, I'll take a crack at writing a
few pages describing the experiments and the outcomes.


I was going to suggest the random number generator next, but for these
> simple bots there doesn't seem to be a great deal of sensitivity to the
> quality of the random number generator if it's reasonable - at least for
> a few games.



Out of all things, I would suspect my PRNG the least.  It's an open-source
Mersenne Twister implementation (Copyright (C) 1997 - 2002, Makoto Matsumoto
and Takuji Nishimura).  My understanding is that it's considered a really
good random number generator.  I'll likely try alternatives to how random
moves are selected based on the random number generator.



Ogo (which is almost the same as AnchorMan) has a poor quality RNG and
> if you play a few hundred games you will discover lots of repeated
> results.   With a good quality generator I have never seen a repeated
> game.   So it could be a minor factor.
>
> One thing you mentioned earlier that bothers me is something about when
> you end the random simulations.  AnchorMan has a limit, but it's very
> conservative - a game is rarely ended early and I would say 99.9% of
> them get played to the bitter end.



I don't use a mercy rule, and have tried out playing games to the bitter end
(neither side has a legal non-eye-filling move).  I assume this would
resolve your concerns.


Are you cheating here?   I suggest you make the program as identical to
> mine as you can - within reason.



I agree and I'm slowly trying to do that.


If you are doing little things wrong
> they accumulate.   I learned this from computer chess.  Many
> improvements are worth 5-20 ELO and you can't even measure them without
> playing thousands of games - and yet if you put a few of them together
> it can put your program in another class.



I don't disagree with what you're saying.  At 20 ELO per fix, 800 ELO is
tough to overcome.  I'd hope I can't have that many things wrong with a
relatively pure monte carlo program ;)  I'm hoping to find at least one
really big flaw...  something that'd put it close to ReadyFreddy.

I want to try a breadth of changes to see if I can find something akin to a
magic bullet.  As I go forward, I will also try various combos of the
implemented hacks and see how they do.  It's easier to put up a new combo
and see how it does.  One small problem I have is that I can only run two
versions reliably, and rankings at my bots' current level seem to fluctuate
a lot based on which bots are currently running.



In my first marketable chess program I worked with my partner and I
> obsessed every day on little tiny speedups - most of them less than 5%
> speedups.  We found 2 or 3 of these every day for weeks it seem

Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-21 Thread Christoph Birk

On Fri, 21 Sep 2007, Jason House wrote:

I guess it really depends on what the point of the test is.  I'm trying to
understand the performance gap between my AMAF bot(s) and Don's AMAF bots.
For comparison, here's the ratings and # of simulations:

ELO
 1434 - ControlBoy      - 5000 simulations per move
 1398 - SuperDog        - 2000 simulations per move
 1059 - ReadyFreddy     - 256 simulations per move
  763 - DoDo            - 64 simulations per move
 <600 - all my amaf     - 5000-25000 simulations per move
 <300 - ego110_allfirst - ???


It might be hard to compare your AMAF-bots with Don's since he
uses quite some tricks to improve their performance. I suggest
you compare with some plain-vanilla program I keep for comparison
on CGOS

 myCtest-10k (ELO ~1050)
 myCtest-50k (ELO ~1350)

They do just 10k (50k) pure random simulations per move. Your AMAF-bots
should be at least that good if they have no significant bugs,
correct?
If you are interested I can run them 24/7 on CGOS (currently they
only play once per week to keep them on the list).

Christoph



Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-21 Thread Don Dailey
nd let them play 1000 games on my
> computer. It takes about a day and a half.
> 
> - Dave Hillis
> 
> 
> 
> -Original Message-
> From: Cenny Wenner <[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>>
> To: computer-go <mailto:computer-go@computer-go.org>
> Sent: Tue, 18 Sep 2007 3:33 pm
> Subject: Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS
> 
> By the data in your upper table, the results need to uphold their mean
> 
> for 40 times as many trials before you even get a significant*
> difference between #1 and #2.
> 
> Which are the two methods you used?
> 
> On 9/18/07, Jason House <
> [EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>> wrote:
> > original eye method = 407 ELO
> > alt eye method #1   = 583 ELO
> > alt eye method #2   = 518 ELO
> >
> > While both alternate methods are probably better than the original, I'm 
> not
> 
> > convinced there's a significant difference between the two alternate
> > methods.  The cross-tables for both are fairly close and could be luck 
> of
> > the draw (and even which weak bots were on at the time).  I put raw 
> numbers
> 
> > below.  Since I made one other change when doing the alt eye method, I
> > should rerun the original with that other change as well (how I end 
> random
> > playouts and score them to allow for other eye definitions).
> 
> >
> > While I think the alternate eye definitions helped, I don't think they
> > accounted for more than 100-200 ELO
> >
> > vs ego110_allfirst
> > orig= 33/46 = 71%
> > #1 =  17/20 = 85%
> 
> > #2 =  16/18 = 89%
> >
> > vs gotraxx-1.4.2a
> > orig=N/A
> > #1 = 2/8   = 25%
> > #2 = 3/19 = 16%
> >
> >
> > On 9/17/07, Jason House <
> [EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]> > wrote:
> > >
> > >
> > >
> > > On 9/17/07, Don Dailey <
> [EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>> wrote:
> > > > Another way to test this, to see if this is your problem,  is for 
> ME to
> > > > implement YOUR eye definition and see if/how much it hurts 
> AnchorMan.
> > > >
> 
> > > > I'm pretty much swamped with work today - but I may give this a try 
> at
> > > > some point.
> > > >
> > >
> > > I'd be interested in seeing that.  It looks like my first hack at an
> 
> > alternate eye implementation bought my AMAF version about 150 ELO (not
> > tested with anything else).  Of course, what I did isn't what others are
> > using.  I'll do another "alteye" version either today or tomorrow.  It 
> may
> 
> > be possible that some of my 150 was because I changed the lengths of the
> > random playouts.
> > >
> >
> >
> 
> 
> -- 
> Cenny Wenner
> 
> 
> 


Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-21 Thread Jason House
I guess it really depends on what the point of the test is.  I'm trying to
understand the performance gap between my AMAF bot(s) and Don's AMAF bots.
For comparison, here's the ratings and # of simulations:

ELO
 1434 - ControlBoy      - 5000 simulations per move
 1398 - SuperDog        - 2000 simulations per move
 1059 - ReadyFreddy     - 256 simulations per move
  763 - DoDo            - 64 simulations per move
 <600 - all my amaf     - 5000-25000 simulations per move
 <300 - ego110_allfirst - ???

Looking at the cross table with ReadyFreddy (which is doing 5% of the work
that my bots are), the results are 0/14, 0/20, 0/24, and 0/10.  Even with
the small samples, I'm quite certain that the performance of my bot is way
worse than any of Don's.

I'm not particularly concerned if alternate eye method #1 is marginally
better than #2 (or vice versa).  I'm reasonably confident that their
performance is similar and that their performance is better than my original
method.

I'm content for now to find out the major causes of performance gaps and
then revisit what is truly the best combo when I get around to doing quality
coding of features instead of quick hacks for testing.  Currently, both the
random move selection strategy and the game scoring strategy have come under
question.

On 9/20/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> I'm going to echo Cenny's comment. Small samples like this can be very
> misleading. For this kind of test, I usually give each algorithm 5000
> playouts per move and let them play 1000 games on my computer. It takes
> about a day and a half.
>
> - Dave Hillis
>
>
> -Original Message-
> From: Cenny Wenner <[EMAIL PROTECTED]>
> To: computer-go 
> Sent: Tue, 18 Sep 2007 3:33 pm
> Subject: Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS
>
> By the data in your upper table, the results need to uphold their mean
> for 40 times as many trials before you even get a significant*
> difference between #1 and #2.
>
> Which are the two methods you used?
>
> On 9/18/07, Jason House <[EMAIL PROTECTED]> wrote:
> > original eye method = 407 ELO
> > alt eye method #1   = 583 ELO
> > alt eye method #2   = 518 ELO
> >
> > While both alternate methods are probably better than the original, I'm not
> > convinced there's a significant difference between the two alternate
> > methods.  The cross-tables for both are fairly close and could be luck of
> > the draw (and even which weak bots were on at the time).  I put raw numbers
> > below.  Since I made one other change when doing the alt eye method, I
> > should rerun the original with that other change as well (how I end random
> > playouts and score them to allow for other eye definitions).
> >
> > While I think the alternate eye definitions helped, I don't think they
> > accounted for more than 100-200 ELO
> >
> > vs ego110_allfirst
> > orig= 33/46 = 71%
> > #1 =  17/20 = 85%
> > #2 =  16/18 = 89%
> >
> > vs gotraxx-1.4.2a
> > orig=N/A
> > #1 = 2/8   = 25%
> > #2 = 3/19 = 16%
> >
> >
> > On 9/17/07, Jason House <[EMAIL PROTECTED] > wrote:
> > >
> > >
> > >
> > > On 9/17/07, Don Dailey <[EMAIL PROTECTED]> wrote:
> > > > Another way to test this, to see if this is your problem,  is for ME to
> > > > implement YOUR eye definition and see if/how much it hurts AnchorMan.
> > > >
> > > > I'm pretty much swamped with work today - but I may give this a try at
> > > > some point.
> > > >
> > >
> > > I'd be interested in seeing that.  It looks like my first hack at an
> > alternate eye implementation bought my AMAF version about 150 ELO (not
> > tested with anything else).  Of course, what I did isn't what others are
> > using.  I'll do another "alteye" version either today or tomorrow.  It may
> > be possible that some of my 150 was because I changed the lengths of the
> > random playouts.
> > >
> >
> >
> > ___
> > computer-go mailing list
> > computer-go@computer-go.org
> > http://www.computer-go.org/mailman/listinfo/computer-go/
> >
>
>
> --
> Cenny Wenner

Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-20 Thread dhillismail
I'm going to echo Cenny's comment. Small samples like this can be very 
misleading. For this kind of test, I usually give each algorithm 5000 playouts 
per move and let them play 1000 games on my computer. It takes about a day and 
a half.

- Dave Hillis


-Original Message-
From: Cenny Wenner <[EMAIL PROTECTED]>
To: computer-go 
Sent: Tue, 18 Sep 2007 3:33 pm
Subject: Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS



By the data in your upper table, the results need to uphold their mean
for 40 times as many trials before you even get a significant*
difference between #1 and #2.

Which are the two methods you used?

On 9/18/07, Jason House <[EMAIL PROTECTED]> wrote:
> original eye method = 407 ELO
> alt eye method #1   = 583 ELO
> alt eye method #2   = 518 ELO
>
> While both alternate methods are probably better than the original, I'm not
> convinced there's a significant difference between the two alternate
> methods.  The cross-tables for both are fairly close and could be luck of
> the draw (and even which weak bots were on at the time).  I put raw numbers
> below.  Since I made one other change when doing the alt eye method, I
> should rerun the original with that other change as well (how I end random
> playouts and score them to allow for other eye definitions).
>
> While I think the alternate eye definitions helped, I don't think they
> accounted for more than 100-200 ELO
>
> vs ego110_allfirst
> orig= 33/46 = 71%
> #1 =  17/20 = 85%
> #2 =  16/18 = 89%
>
> vs gotraxx-1.4.2a
> orig=N/A
> #1 = 2/8   = 25%
> #2 = 3/19 = 16%
>
>
> On 9/17/07, Jason House <[EMAIL PROTECTED] > wrote:
> >
> >
> >
> > On 9/17/07, Don Dailey <[EMAIL PROTECTED]> wrote:
> > > Another way to test this, to see if this is your problem,  is for ME to
> > > implement YOUR eye definition and see if/how much it hurts AnchorMan.
> > >
> > > I'm pretty much swamped with work today - but I may give this a try at
> > > some point.
> > >
> >
> > I'd be interested in seeing that.  It looks like my first hack at an
> alternate eye implementation bought my AMAF version about 150 ELO (not
> tested with anything else).  Of course, what I did isn't what others are
> using.  I'll do another "alteye" version either today or tomorrow.  It may
> be possible that some of my 150 was because I changed the lengths of the
> random playouts.
> >
>
>
> ___
> computer-go mailing list
> computer-go@computer-go.org
> http://www.computer-go.org/mailman/listinfo/computer-go/
>


-- 
Cenny Wenner
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/




Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-20 Thread Cenny Wenner
Another alternatives for testing the performance would be
1. replay a game and count the number of right/near guesses. This
could be pro games, games againt fairly good players, or games played
by the bot before. The benefit of this is that you might be able to
estimate the strength on every few moves rather than an entire game.
2. play against earlier versions of the same bot.
3. keep bots yourself.
4. improve benchmark/regression scores instead of playing strength,
http://www.cs.ualberta.ca/~games/go/cgtc for instance.

See "twogtp" on how to have bots on your own computer play each other.

This might also be of interest:
http://www.andromeda.com/people/ddyer/go/shape-library.html

On 9/20/07, Jason House <[EMAIL PROTECTED]> wrote:
> Christoph Birk wrote:
> >>   // Loop to do #1 above
> >>   while (p != singletonSimplePass){
> >>   if (numMoves < keepMax)
> >>   moves[numMoves] = p;
> >>   workingCopy.play(c,p);
> >>   c = c.enemyColor();
> >>   p = randomLegalMove(c, workingCopy, twister);
> >>   numMoves++;
> >>   }
> >>
> >
> > Do you really stop the simulation after a single pass, ie. when
> > one side has no more move to play but the other does?
> > I believe that this would end many games before they are (really) over
> > and that might lead to false results in the simulations.
> >
> > Christoph
> >
>
> Actually, that's the only difference between housebot-621-amaf and
> hb-amaf-alt.  alt is playing games all the way to the end like you
> suggest.  Looking at the win rate against ego110_allfirst, it looks like
> it may be doing a bit worse (but more samples are needed).  It's
> unfortunate that ranks that low vary so much based on which bots are on
> CGOS.
>
> In the future, I'll probably offer the option to do either method.  My
> logic behind stopping at the first pass is that it's highly unlikely to
> form life in the void from captured stones.  Since capturing the stones
> would increase the length of the game and isn't very likely to change
> the outcome of the game, I figured it'd be a good compromise.


-- 
Cenny Wenner


Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-19 Thread Jason House

Christoph Birk wrote:

  // Loop to do #1 above: play random legal moves until a forced pass.
  while (p != singletonSimplePass){
      if (numMoves < keepMax)
          moves[numMoves] = p;   // record the move for AMAF statistics
      workingCopy.play(c,p);
      c = c.enemyColor();        // switch side to move
      p = randomLegalMove(c, workingCopy, twister);
      numMoves++;                // counts all moves, even unrecorded ones
  }
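The same loop, restated as a self-contained Python sketch.  `ToyBoard` is a deliberately trivial stand-in (each point can be played once, no captures) so the control flow is runnable; it is not HouseBot's board class:

```python
import random

PASS = None  # sentinel standing in for singletonSimplePass


def random_playout(board, color, rng, keep_max=100):
    """Play random legal moves until the side to move must pass.

    Records up to keep_max (color, point) pairs for AMAF-style statistics.
    """
    moves = []
    p = board.random_legal_move(color, rng)
    while p is not PASS:
        if len(moves) < keep_max:
            moves.append((color, p))  # remember who played where
        board.play(color, p)
        color = -color                # switch side to move
        p = board.random_legal_move(color, rng)
    return moves


class ToyBoard:
    """Toy stand-in: every empty point is legal, no captures or kos."""

    def __init__(self, n):
        self.empty = set(range(n * n))

    def random_legal_move(self, color, rng):
        return rng.choice(sorted(self.empty)) if self.empty else PASS

    def play(self, color, p):
        self.empty.discard(p)
```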



Do you really stop the simulation after a single pass, i.e., when
one side has no more moves to play but the other does?
I believe this would end many games before they are (really) over,
and that might lead to false results in the simulations.

Christoph
  


Actually, that's the only difference between housebot-621-amaf and 
hb-amaf-alt.  alt is playing games all the way to the end like you 
suggest.  Looking at the win rate against ego110_allfirst, it looks like 
it may be doing a bit worse (but more samples are needed).  It's 
unfortunate that ranks that low vary so much based on which bots are on 
CGOS.


In the future, I'll probably offer the option to do either method.  My 
logic behind stopping at the first pass is that it's highly unlikely to 
form life in the void from captured stones.  Since capturing the stones 
would increase the length of the game and isn't very likely to change 
the outcome of the game, I figured it'd be a good compromise.



Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-18 Thread Jason House
On 9/18/07, Jason House <[EMAIL PROTECTED]> wrote:
>
> Don't play in that spot if the "4" neighbors match your color
>


To avoid questions about corners and edges: the reason the four is in
quotes is that it's usually 4, but can be 3 on the edge and 2 in the
corner.  I tend to call stuff 4-neighbors and 8-neighbors even when the
true count ends up being smaller because some would be off the board.

Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-18 Thread Jason House
Method #1 - Don't play in that spot if the "4" neighbors match your color
and it'd be a suicide play for the opponent to play in that spot.

Method #2 - Don't play in that spot if the "4" neighbors match your color
and either...
  It's a corner/edge and no diagonals are an enemy stone
  Or it's a central point and no more than one diagonal is an enemy stone

For completeness, my original method was: don't play in that spot if
all "4" neighbors are in the same chain.  I had switched to that after
trying a stricter variant of #2 in which empty points were treated like
enemy stones.  Between the method I was using and the stricter #2, my
version applied in more cases.
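Method #2 can be sketched as a small predicate.  This is a hedged illustration, not HouseBot's code: the board is assumed to be a dict mapping (x, y) to +1/-1 stone colors, with empty points simply absent:

```python
def is_true_eye(board, pt, color, size):
    """Method #2: all orthogonal neighbors are our own stones, and
    edge/corner points allow no enemy diagonals while central points
    allow at most one."""
    x, y = pt
    neighbors = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    diagonals = [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]
    on = lambda p: 0 <= p[0] < size and 0 <= p[1] < size
    # Every on-board orthogonal neighbor must be a stone of our color.
    if any(board.get(p) != color for p in neighbors if on(p)):
        return False
    off_board = sum(1 for p in diagonals if not on(p))
    enemy = sum(1 for p in diagonals if on(p) and board.get(p) == -color)
    if off_board > 0:        # edge or corner point
        return enemy == 0    # no diagonal may be an enemy stone
    return enemy <= 1        # central point: at most one enemy diagonal
```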

I believe method #2 is what others use.  I think it alleviates my
concerns about long chains of false eyes: when an eye truly becomes
false, it becomes legal to fill it and connect the chains.  It is
possible to create two false eyes at the same time, but then it's still
possible to play the spot the enemy would play, repairing the issue.  I
don't think I have any remaining issues with that anti-eye-filling rule
(besides it taking more resources to detect).

After seeing a nice boost from this with AMAF, I'll likely officially
switch to method #2 down the road.  It's down the road because I think
I'll roll it into a more generic 3x3 pattern-matching framework within
the random playouts.  That will also stop most of the illegal moves the
random move selection currently picks.

I still plan on trying to figure out why my 1-ply AMAF implementation(s)
play so much worse than ReadyFreddy and AnchorMan.  From what I can tell,
HouseBot's AMAF search is doing about as much work as AnchorMan (minus
some heuristics) but with far worse results.  I suspect that getting my
AMAF to work nearly as well as Don's may give a big boost to all of my
other 1-ply search methods (and future multi-ply searches).
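The 1-ply AMAF bookkeeping being compared here can be sketched as follows.  Names are illustrative, not HouseBot's: after each playout, every point the root player occupied is credited with the playout's result, as if it had been the first move:

```python
from collections import defaultdict


def amaf_update(stats, playout_moves, root_color, won):
    """stats maps point -> [wins, games] for the root player's candidates."""
    seen = set()
    for color, point in playout_moves:
        if color == root_color and point not in seen:
            seen.add(point)          # credit each point once per playout
            s = stats[point]
            s[0] += 1 if won else 0  # wins
            s[1] += 1                # games


def best_amaf_move(stats):
    # Pick the candidate point with the highest AMAF win rate.
    return max(stats, key=lambda p: stats[p][0] / stats[p][1])
```

After many playouts, `best_amaf_move` is the move actually played.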


On 9/18/07, Cenny Wenner <[EMAIL PROTECTED]> wrote:
>
> By the data in your upper table, the results need to uphold their mean
> for 40 times as many trials before you even get a significant*
> difference between #1 and #2.
>
> Which are the two methods you used?
>

Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-18 Thread Cenny Wenner
By the data in your upper table, the results need to uphold their mean
for 40 times as many trials before you even get a significant*
difference between #1 and #2.

Which are the two methods you used?

On 9/18/07, Jason House <[EMAIL PROTECTED]> wrote:
> original eye method = 407 ELO
> alt eye method #1   = 583 ELO
> alt eye method #2   = 518 ELO
>
> While both alternate methods are probably better than the original, I'm not
> convinced there's a significant difference between the two alternate
> methods.  The cross-tables for both are fairly close and could be luck of
> the draw (and even which weak bots were on at the time).  I put raw numbers
> below.  Since I made one other change when doing the alt eye method, I
> should rerun the original with that other change as well (how I end random
> playouts and score them to allow for other eye definitions).
>
> While I think the alternate eye definitions helped, I don't think they
> accounted for more than 100-200 ELO
>
> vs ego110_allfirst
> orig= 33/46 = 71%
> #1 =  17/20 = 85%
> #2 =  16/18 = 89%
>
> vs gotraxx-1.4.2a
> orig=N/A
> #1 = 2/8   = 25%
> #2 = 3/19 = 16%
>
>
> On 9/17/07, Jason House <[EMAIL PROTECTED] > wrote:
> >
> >
> >
> > On 9/17/07, Don Dailey <[EMAIL PROTECTED]> wrote:
> > > Another way to test this, to see if this is your problem,  is for ME to
> > > implement YOUR eye definition and see if/how much it hurts AnchorMan.
> > >
> > > I'm pretty much swamped with work today - but I may give this a try at
> > > some point.
> > >
> >
> > I'd be interested in seeing that.  It looks like my first hack at an
> alternate eye implementation bought my AMAF version about 150 ELO (not
> tested with anything else).  Of course, what I did isn't what others are
> using.  I'll do another "alteye" version either today or tomorrow.  It may
> be possible that some of my 150 was because I changed the lengths of the
> random playouts.
> >
>
>


-- 
Cenny Wenner


Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-18 Thread Jason House
original eye method = 407 ELO
alt eye method #1   = 583 ELO
alt eye method #2   = 518 ELO

While both alternate methods are probably better than the original, I'm not
convinced there's a significant difference between the two alternate
methods.  The cross-tables for both are fairly close and could be luck of
the draw (and even which weak bots were on at the time).  I put raw numbers
below.  Since I made one other change when doing the alt eye method, I
should rerun the original with that other change as well (how I end random
playouts and score them to allow for other eye definitions).

While I think the alternate eye definitions helped, I don't think they
accounted for more than 100-200 ELO.

vs ego110_allfirst
orig= 33/46 = 71%
#1 =  17/20 = 85%
#2 =  16/18 = 89%

vs gotraxx-1.4.2a
orig=N/A
#1 = 2/8   = 25%
#2 = 3/19 = 16%
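For a sense of how many games win-rate comparisons like the ones above need, here is a back-of-the-envelope two-proportion check (my own arithmetic, assuming independent games; not a calculation from the thread):

```python
import math


def games_needed(p1, p2, z=1.96):
    """Games per method before a win-rate gap |p1-p2| reaches z sigma,
    using the standard error of a difference of two proportions."""
    se_unit = math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))  # SE for n = 1 per arm
    return math.ceil((z * se_unit / abs(p1 - p2)) ** 2)


# e.g. distinguishing 85% from 89% takes several hundred games per method
n = games_needed(0.85, 0.89)
```

With samples of 20 and 18 games, a 85% vs. 89% split is far from significant, which matches the caution expressed above.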

On 9/17/07, Jason House <[EMAIL PROTECTED]> wrote:
>
>
>
> On 9/17/07, Don Dailey <[EMAIL PROTECTED]> wrote:
> >
> > Another way to test this, to see if this is your problem,  is for ME to
> > implement YOUR eye definition and see if/how much it hurts AnchorMan.
> >
> > I'm pretty much swamped with work today - but I may give this a try at
> > some point.
> >
>
> I'd be interested in seeing that.  It looks like my first hack at an
> alternate eye implementation bought my AMAF version about 150 ELO (not
> tested with anything else).  Of course, what I did isn't what others are
> using.  I'll do another "alteye" version either today or tomorrow.  It may
> be possible that some of my 150 was because I changed the lengths of the
> random playouts.
>

Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-17 Thread Jason House
On 9/17/07, Don Dailey <[EMAIL PROTECTED]> wrote:
>
> Another way to test this, to see if this is your problem,  is for ME to
> implement YOUR eye definition and see if/how much it hurts AnchorMan.
>
> I'm pretty much swamped with work today - but I may give this a try at
> some point.
>

I'd be interested in seeing that.  It looks like my first hack at an
alternate eye implementation bought my AMAF version about 150 ELO (not
tested with anything else).  Of course, what I did isn't what others are
using.  I'll do another "alteye" version either today or tomorrow.  It may
be possible that some of my 150 was because I changed the lengths of the
random playouts.

Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-17 Thread Don Dailey
On Mon, 2007-09-17 at 08:50 -0400, Jason House wrote:
> 
> won't you miss any comb shape?
> 
> 
> I would.  I've started experimenting with some alternatives.

Another way to test this, to see if this is your problem,  is for ME to
implement YOUR eye definition and see if/how much it hurts AnchorMan. 

I'm pretty much swamped with work today - but I may give this a try at
some point.

- Don




Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-17 Thread Jason House
On 9/17/07, steve uurtamo <[EMAIL PROTECTED]> wrote:
>
> >> Yeah.  An eye point is defined as an empty point where all four
> >> neighbors are the same chain.
>
> where "all four" gets modified to "all three" or "all two" on the first
> line
> or corner respectively?



Right



won't you miss any comb shape?



I would.  I've started experimenting with some alternatives.

Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-17 Thread steve uurtamo
>> Yeah.  An eye point is defined as an empty point where all four
>> neighbors are the same chain.

where "all four" gets modified to "all three" or "all two" on the first line
or corner respectively?

won't you miss any comb shape?

o
oo##o
##.#o
#.##o
o###o
o

s.

>> This prevents weak combos of false
>> eyes, but does allow it to miss one kind of life.

> Do you mean that your program would fill black eyes there:
>
> #.#O.
> .##OO
> ##OO.
> O
>
> ? This sounds like a really very very bad idea. But I may have
> misunderstood.

Nah, you understood correctly.  I've never liked the idea that normal MC
engines will never defend #'s position like below if false eyes are
never filled.  2nd line creeps into territory seem like a fairly common
situation and not defending them seems like quite a problem.  The common
cases with life that I miss occur when there are two false eyes that
share the same two neighboring chains.  Maybe I should upgrade my eye
detection to find that...

##O
.#O
##O
.#O
##O
#OO
.#O
##O
#OO
.#O
##O
.OO



Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-16 Thread Don Dailey
On Sat, 2007-09-15 at 23:55 -0400, Jason House wrote:
> >  3. Are you sure the eye rule isn't somehow broken?
> >   
> 
> Yeah.  An eye point is defined as an empty point where all four 
> neighbors are the same chain.  This prevents weak combos of false
> eyes, 
> but does allow it to miss one kind of life. 

I think this is your problem.   Your rule is very strict, which might
make it always correct, but we are talking about random play-outs,
which are far from correct anyway.   I think your rule is too strict,
and it changes many results from what they should be.   Your random
play-outs are basically moving into eyes that AnchorMan's won't.  (Yes,
I know the common rule is occasionally wrong, but it's right much more
than it's wrong.)

I would suggest that you at least TRY the common rule most of us
use and see what happens.   For a simple test it doesn't have to be
fast, just correct.   Then you will know for sure whether that is
your problem or not.  

- Don




Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-16 Thread steve uurtamo
> > ? This sounds like a really very very bad idea. But I may have 
> > misunderstood.

> Nah, you understood correctly.

ouch.  it seems like you're forcing your eyes to be on the 2nd line
or above and all living groups to have stones on the 3rd
line or above.

right?

s.





   



Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-16 Thread Jason House

steve uurtamo wrote:
>>> ? This sounds like a really very very bad idea. But I may have
>>> misunderstood.
>>
>> Nah, you understood correctly.
>
> ouch.  it seems like you're forcing your eyes to be on the 2nd line
> or above and all living groups to have stones on the 3rd
> line or above.
>
> right?

No.  Eyes on the 1st line work too (see below for the simplest valid
eyes).  The point is really that not filling false eyes can frequently
lead to very significant faults in play.  False eyes along the first
line are the example I chose because they're both easy to draw and
really occur in games.


Simple Corner:
.#
##

Simple Edge:
##
.#
##

A more complex corner:
.##
#.#
###

A more complex edge:
##
.##
#.#
###


Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-16 Thread Jason House

Rémi Coulom wrote:
> Jason House wrote:
>> Yeah.  An eye point is defined as an empty point where all four
>> neighbors are the same chain.  This prevents weak combos of false
>> eyes, but does allow it to miss one kind of life.
>
> Do you mean that your program would fill black eyes there:
>
> #.#O.
> .##OO
> ##OO.
> O
>
> ? This sounds like a really very very bad idea. But I may have
> misunderstood.


Nah, you understood correctly.  I've never liked the idea that normal MC 
engines will never defend #'s position like below if false eyes are 
never filled.  2nd line creeps into territory seem like a fairly common 
situation and not defending them seems like quite a problem.  The common 
cases with life that I miss occur when there are two false eyes that 
share the same two neighboring chains.  Maybe I should upgrade my eye 
detection to find that...


##O
.#O
##O
.#O
##O
#OO
.#O
##O
#OO
.#O
##O
.OO



Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-16 Thread Jason House

elife wrote:
>>>  2. Do you include passes in the random games?  You should not.
>>
>> No passes are allowed in the random games.  If the random move
>> generator says to pass it means no legal moves are available and
>> the game gets scored.
>
> Hi,
>   I am confused about why passes are not allowed in the random
> games.  If so, won't the simulation part give wrong evaluations in
> cases such as seki and so on?


That is correct...  Passing in the middle of the game can cause real
issues: how does a game get scored when it's incomplete?  Who won?  The
side effect is that seki is missed.  I think this is typical of MC
engines (missing seki deep in random games).  Some may have top-level
analysis to detect a seki about to happen, but my bot is not that
advanced.



Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-16 Thread steve uurtamo
> Yeah.  An eye point is defined as an empty point where all four 
> neighbors are the same chain.  This prevents weak combos of false eyes, 
> but does allow it to miss one kind of life.

corner life is worth quite a few points, generally, and doesn't need to satisfy
these conditions.  in fact, it quite often won't.

s.





   



Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-16 Thread Rémi Coulom

Jason House wrote:
> Yeah.  An eye point is defined as an empty point where all four
> neighbors are the same chain.  This prevents weak combos of false
> eyes, but does allow it to miss one kind of life.

Do you mean that your program would fill black eyes there:

#.#O.
.##OO
##OO.
O

? This sounds like a really very very bad idea. But I may have
misunderstood.


Rémi


Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-15 Thread elife
>
>
> >  2. Do you include passes in the random games?  You should not.
> >
>
> No passes are allowed in the random games.  If the random move generator
> says to pass it means no legal moves are available and the game gets
> scored.
>
>
Hi,
  I am confused about why passes are not allowed in the random games.
If so, won't the simulation part give wrong evaluations in cases such
as seki and so on?

Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-15 Thread Jason House



Don Dailey wrote:
This appears to be my exact logic.  
  


That's a good start :)
Also, before we get too deep into this stuff, thanks for your help.


I can imagine many places where you might be doing something wrong.  


 1. Are you sure you are scoring the final game correctly including
the proper accounting for komi?
  
I'm pretty sure.  Playouts continue until one side has no legal moves 
(simple ko, suicides, and eye filling moves are all that remain).  
Scoring at that point is exceptionally easy...  Stones in atari are dead 
and count for the opponent.  Spots unfilled by simple ko are scored for 
the non-passer (because it can simply fill that spot).  The rest is 
exceptionally obvious.  In play against it and in automated games, it 
exhibits the classic MC behavior of winning many games by half a point.



 2. Do you include passes in the random games?  You should not.
  


No passes are allowed in the random games.  If the random move generator 
says to pass it means no legal moves are available and the game gets scored.



 3. Are you sure the eye rule isn't somehow broken?
  


Yeah.  An eye point is defined as an empty point where all four 
neighbors are the same chain.  This prevents weak combos of false eyes, 
but does allow it to miss one kind of life.




How do you generate random moves for a game?  A simple way, though not
the fastest, is to build a list of all possible moves, pick one at
random, and play it if it's legal.  If it's not, throw the move out and
pick randomly from the list again.  You won't get random moves if you
simply start at a random position but then just move forward in the
list until you find a legal move.


I implemented the latter after lots of online discussion about the
quality of random move generation methods.  Out of curiosity, what
method do you use?  Do you do the repeated random selection that you
describe?  (Likely with moving invalid selections to the end of the
list and picking from the remaining subset.)
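The two selection schemes under discussion can be sketched side by side.  This is an illustration, not either engine's code; `is_legal` is a stand-in predicate:

```python
import random


def pick_uniform(points, is_legal, rng):
    """Swap-rejected-moves-to-the-end scheme: uniform over legal moves."""
    pts = list(points)
    end = len(pts)
    while end > 0:
        i = rng.randrange(end)
        if is_legal(pts[i]):
            return pts[i]
        end -= 1
        pts[i], pts[end] = pts[end], pts[i]  # retire the illegal candidate
    return None  # no legal move left: pass


def pick_scan_forward(points, is_legal, rng):
    """Biased scheme: a legal move just after a run of illegal points
    is chosen far more often than its fair share."""
    pts = list(points)
    start = rng.randrange(len(pts))
    for k in range(len(pts)):
        p = pts[(start + k) % len(pts)]
        if is_legal(p):
            return p
    return None
```

With points 0..3 and only 0 and 3 legal, the scan-forward scheme returns 3 from three of the four start positions, i.e. a 3:1 bias, while the first scheme stays 50/50.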


I've been trying to watch the last several games of my AMAF version on
CGOS to see if it appears to bias toward near-illegal (or eye-filling)
plays.  I don't see evidence of it.  I can rationalize that biased
moves would be selected more frequently by both colors.  Maybe I'll try
another random move selection strategy, but I suspect it won't account
for a difference anywhere near 1000 ELO (the gap between HouseBot's
AMAF mode and AnchorMan, the best match for the number of sims).


Of course, libego uses the same random number generator and 
ego110_allfirst is comparably bad.



  4. Do you use a decent random number generator?  (Probably not an
     issue unless you have a really crappy one.)


I use a mersenne twister.  It's really good and really fast :)