I didn't see this:
>>148: D1 also wins?
> You are right. Thanks for correction.
Many Faces played D1, so change its score to 38 correct.
David
> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:computer-go-[EMAIL PROTECTED]] On Behalf Of Gunnar Farnebäck
> Sent: Tuesday, April 22, 2008
Traditional Many Faces (my current experimental version) gets 37 right. I
gave it about 10 seconds on each problem.
David
> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:computer-go-[EMAIL PROTECTED]] On Behalf Of Yamato
> Sent: Tuesday, April 22, 2008 7:42 PM
> To: computer-go
Gian-Carlo Pascutto wrote:
> Don Dailey wrote:
>
>> BOTH versions have NullMove Pruning and History Pruning turned off
>> because I feel that it would bias the test due to interactions
>> between selectivity and evaluation quality (I believe it would make
>> the strong version look even more scalable than it is.)
Thanks Gian-Carlo, Gunnar.
Current list of results.
GNU Go 3.7.12 level 0 : 24/50
GNU Go 3.7.12 level 10 : 34/50
GNU Go 3.7.12 level 15 : 37/50
GNU Go 3.7.12 mc, 1k : 30/50
GNU Go 3.7.12 mc, 10k : 31/50
GNU Go 3.7.12 mc, 100k : 38/50
GNU Go 3.7.
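The jump in the mc results from 1k to 100k playouts has a simple statistical core. A minimal sketch of why playout counts matter (this is not GNU Go's actual playout code, which plays random legal Go moves to the end; `p_true` here is an invented stand-in for a position's real win rate): the error of a playout average shrinks roughly as 1/sqrt(n), so 100k playouts can separate close problems that 1k cannot.

```python
import random

def mc_estimate(p_true, n, rng):
    """Fraction of n simulated playouts won (each won with probability p_true)."""
    return sum(rng.random() < p_true for _ in range(n)) / n

rng = random.Random(42)
p_true = 0.55  # position slightly favors the side to move (made-up value)
for n in (1_000, 10_000, 100_000):
    est = mc_estimate(p_true, n, rng)
    print(n, round(est, 3))  # estimates cluster ever tighter around 0.55
```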
Don Dailey wrote:
BOTH versions have NullMove Pruning and History Pruning turned off
because I feel that it would bias the test due to interactions
between selectivity and evaluation quality (I believe it would make
the strong version look even more scalable than it is.)
There is nothing in n
Gian-Carlo Pascutto wrote:
> Don Dailey wrote:
>
>>> The rest of your story is rather anecdotal and I won't comment on it.
>> Are you trying to be politely condescending?
>
> No! Thing is:
>
> 1) I disagree with quite a few things which I have no interest in
> arguing (much) about because...
> 2
Yamato wrote:
> Gunnar Farnebäck wrote:
>>>> 143: I don't see how A3 could win the semeai. A2 and C4 look more
>>>> effective.
>>> Typo, it was A2. C4 cannot work.
>> How does white defend against C4? I'm looking at B C4, W B4, B B5,
>> W B6, B A2 without finding a way out for white. Did I miss something?
Those results were actually published many years ago by Don Beal and I
asked a lot of questions at the time. I remember that some care was
taken to make the study fair. It went something like this:
1. A search of depth N was performed.
2. Pseudo random number generator was called at e
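The setup described above can be sketched on a toy game. This is purely an illustration, assuming tic-tac-toe as the stand-in game (Beal's study was on chess): a fixed-depth negamax whose leaf evaluation is nothing but a PRNG call. Even so, the deeper searcher tends to come out ahead, because it at least sees real wins and losses inside its horizon.

```python
import random

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != 0 and board[a] == board[b] == board[c]:
            return board[a]
    return 0

def moves(board):
    return [i for i, v in enumerate(board) if v == 0]

def negamax(board, player, depth, rng):
    # Value from the point of view of `player`, who is to move.
    if winner(board) != 0:
        return -1.0                       # the previous mover just won
    if not moves(board):
        return 0.0                        # draw
    if depth == 0:
        return 2.0 * rng.random() - 1.0   # step 2: PRNG called at each leaf
    return max(-negamax(board[:m] + (player,) + board[m + 1:],
                        -player, depth - 1, rng)
               for m in moves(board))

def best_move(board, player, depth, rng):
    # Step 1: a search of fixed depth picks the move.
    return max(moves(board),
               key=lambda m: -negamax(board[:m] + (player,) + board[m + 1:],
                                      -player, depth - 1, rng))

def play(depth_x, depth_o, rng):
    board, player = (0,) * 9, 1
    depth = {1: depth_x, -1: depth_o}
    while winner(board) == 0 and moves(board):
        m = best_move(board, player, depth[player], rng)
        board = board[:m] + (player,) + board[m + 1:]
        player = -player
    return winner(board)

rng = random.Random(1)
results = [play(4, 1, rng) for _ in range(100)]
print("depth-4 wins:", results.count(1), " depth-1 wins:", results.count(-1))
```

Whether this fairly reproduces the original experiment is debatable (tic-tac-toe is tiny, and the study's exact controls are truncated above), but it shows the shape of the claim: depth helps even when the static evaluation carries no game knowledge at all.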
Be careful about the term 'random'. When the game ends, does it count the
score randomly? If so, it throws the rules of the game out of the window. How
can it improve with depth? If not, then it's not completely random. As Don
mentioned earlier, if the evaluation function can evaluate t
> I attached the fixed version to this email. Thanks for your help.
Leela 0.3.14
1k -> 19/50 passes
10k -> 28/50 passes
100k -> 36/50 passes
--
GCP
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo
> Alpha-beta gets better with increasing depth even with a random
> evaluation.
http://www.cs.umd.edu/~nau/papers/pathology-aaai80.pdf
(this link is from an earlier discussion:
http://computer-go.org/pipermail/computer-go/2005-January/002344.html
)
AvK
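One common intuition for the claim in the links above, sketched under the simplifying assumption of i.i.d. uniform leaf values (an assumption of this sketch, not something taken from the papers): the backed-up maximum of k random leaves has expectation k/(k+1), so a purely random evaluation effectively rewards mobility, and deeper search compounds that preference.

```python
import random

def backed_up(k, trials, rng):
    """Average of the max of k i.i.d. uniform leaf values over many trials."""
    return sum(max(rng.random() for _ in range(k))
               for _ in range(trials)) / trials

rng = random.Random(0)
for k in (2, 5, 10):
    print(k, round(backed_up(k, 20_000, rng), 3))  # approaches k/(k+1)
```

So a node offering more moves scores higher on average, which is one non-pathological mechanism by which alpha-beta over random values still improves with depth.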