Those results were actually published many years ago by Don Beal, and I
asked a lot of questions at the time.  I remember that some care was
taken to make the study fair.  It went something like this:

   1.  A search of depth N was performed.
   2.  A pseudo-random number generator supplied the score at the end nodes.
   3.  Checkmates were scored exactly.
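
In rough form it was something like the sketch below (Python, with a
hypothetical position interface -- legal_moves, make, unmake,
is_checkmate, is_stalemate -- standing in for whatever Beal actually
used; the details of his implementation surely differed):

    import random

    MATE = 1000000   # any sufficiently large score works for mates

    def random_eval_search(pos, depth):
        # Fixed-depth negamax: mates get exact scores, every other
        # end node is scored by a call to the PRNG.
        if pos.is_checkmate():
            return -MATE              # side to move is mated
        if pos.is_stalemate():
            return 0
        if depth == 0:
            return random.random()    # the "evaluation" is just a random number
        best = -MATE - 1
        for move in pos.legal_moves():
            pos.make(move)
            best = max(best, -random_eval_search(pos, depth - 1))
            pos.unmake(move)
        return best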

This was compared against a version that basically just played a random
move.  HOWEVER, it was a little more complicated than that:

   1.  A search to depth N was still performed.
   2.  Checkmates were scored exactly.
   3.  If a checkmate (or a defense to one) was not found, a random move
was selected.
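
Again only as a rough sketch (reusing MATE and the hypothetical
position interface from the code above), the baseline player amounts
to something like:

    def mate_only_search(pos, depth):
        # The same fixed-depth search, but every non-mate end node
        # scores 0, so only mates (for or against) can be distinguished.
        if pos.is_checkmate():
            return -MATE
        if pos.is_stalemate() or depth == 0:
            return 0
        best = -MATE - 1
        for move in pos.legal_moves():
            pos.make(move)
            best = max(best, -mate_only_search(pos, depth - 1))
            pos.unmake(move)
        return best

    def baseline_move(pos, depth):
        # Force a mate, or defend against one, if the search sees it;
        # otherwise every move scores 0 and the choice among the
        # best-scoring moves is uniformly random.
        scored = []
        for move in pos.legal_moves():
            pos.make(move)
            scored.append((move, -mate_only_search(pos, depth - 1)))
            pos.unmake(move)
        best = max(score for _, score in scored)
        return random.choice([m for m, s in scored if s == best])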

With some thought it's easy to see that the random evaluation function
will favor moves that give you more mobility and restrict the opponent's
mobility.  The more options you have, the easier it is to find a line
that ends in a set of nodes for the opponent that all have low values.
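
You can see the effect with a trivial two-ply Monte-Carlo experiment
(Python again, nothing engine-specific about it); the expected root
value of a tree with uniform random leaves rises with your own
branching factor and falls with the opponent's:

    import random

    def root_value(my_moves, opp_moves):
        # Two-ply minimax with uniform random leaves: I pick the best of
        # my replies, the opponent then picks the leaf that is worst for me.
        return max(min(random.random() for _ in range(opp_moves))
                   for _ in range(my_moves))

    def average(my_moves, opp_moves, trials=100000):
        return sum(root_value(my_moves, opp_moves)
                   for _ in range(trials)) / trials

    # more mobility for me helps, more mobility for the opponent hurts:
    print(average(5, 5), average(20, 5), average(5, 20))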

As a fun little experiment I once added a random component to the
evaluation function and tested it in self-play games against a version
that didn't have a random component.  I only tested full width without
a hash table, because randomness can have bad side-effects with
selectivity.  The random component hurt the speed of the program by
about 20 or 30 percent, but at the same depth of search it showed a
modest but clear superiority.
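
All I mean by "random component" is something on the order of the
sketch below -- the noise size and the static_eval hook are just
illustrative placeholders, not my actual code:

    import random

    def noisy_eval(pos, static_eval, noise=10):
        # The normal static evaluation plus a small uniform random term
        # (in centipawns here); static_eval is whatever evaluation
        # function the program already has.
        return static_eval(pos) + random.randint(-noise, noise)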

- Don
   


  

[EMAIL PROTECTED] wrote:
> Be careful about the term 'random'.  When the game ends, does it count
> the score randomly?  If so, it throws the rules of the game out of the window.
> How can it improve with the depth?  If not, then it's not completely random.
> As Don mentioned earlier, if the evaluation function can evaluate the end
> score correctly and the search depth is reaching the game end, the so-called
> 'random' evaluation becomes a 100% correct evaluation function.
>
> DL
>
>
> -----Original Message-----
> From: A van Kessel <[EMAIL PROTECTED]>
> To: computer-go@computer-go.org
> Sent: Tue, 22 Apr 2008 4:16 am
> Subject: Re: [computer-go] scalability with the quality of play-outs.
>
>
>
>   
>> Alpha-beta gets better with increasing depth even with a random
>> evaluation.
>>     
>
> http://www.cs.umd.edu/~nau/papers/pathology-aaai80.pdf
>
> (this link is from an earlier discussion:
> http://computer-go.org/pipermail/computer-go/2005-January/002344.html
> )
>
> AvK
_______________________________________________
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/
