I am considering enforcing this basic protocol on the server soon:

     Programs of the same "family" will not be paired against each other.

A family of programs has the same name up to the first hyphen and the
same password.

So if I have these programs:

    Name               Password
    ---------------    ----------
    Lazarus-1.2        foobar
    Lazarus-1.3        foobar
    Lazarus-1.4        foobar
    Lazarus-1.5        winniepooh

Then Lazarus-1.5 will be allowed to play any of the other programs
listed, but those other programs will not be allowed to play each
other, as they are considered relatives.
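To make the rule concrete, here is a rough sketch in Python of how the family check could work. The function names and the exact tie-breaking are my own illustration, not the actual server code:

```python
def family_key(name: str, password: str) -> tuple[str, str]:
    """Identify a program's family: the name up to the first
    hyphen, together with its password.  "Lazarus-1.2" and
    "Lazarus-1.3" both have the base name "Lazarus"."""
    base = name.split("-", 1)[0]
    return (base, password)


def may_be_paired(a_name: str, a_pw: str, b_name: str, b_pw: str) -> bool:
    """Two programs may be paired unless they are in the same family."""
    return family_key(a_name, a_pw) != family_key(b_name, b_pw)
```

With the table above, `may_be_paired("Lazarus-1.2", "foobar", "Lazarus-1.3", "foobar")` is false, while Lazarus-1.5 (different password) may be paired with any of them.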

We cannot prevent programs from playing each other no matter what we do;
they can always change the name and password. However, this gives a
programmer the ability to prevent multiple versions of his program from
playing each other if he chooses. Most programmers probably log onto
CGOS in order to play other people's programs.

From time to time I have put highly experimental and very different
programs on CGOS, and I don't care if they play each other - they are
not really different versions of the same program. In this case I
always give them different names anyway. I pretty much always use the
same password, so I can control this easily with the name.

- Don

Rémi Coulom wrote:
> Don Dailey wrote:
>> It would be great if you would provide recommendations for a simple
>> conversion formula when you are ready based on this study.       Also,
>> if you have any suggestions in general for CGOS ratings the
>> cgos-developers would be willing to listen to your suggestions.
>>
>> - Don
> My suggestion would be to tell programmers to use a different login
> each time they change version or hardware (most do that, already), and
> use bayeselo to rank the programs.
>
> This would be best if combined with a mechanism to recognize that two
> logins are versions of the same program (for instance, if they use the
> same password), and avoid pairing them.
>
> Regarding correspondence with human ranks, and handicap value, I
> cannot tell yet. It is very clear to me that the Elo-rating model is
> very wrong for the game of Go, because strength is not
> one-dimensional, especially when mixing bots and humans. The best way
> to evaluate a bot in terms of human rating is to make it play against
> humans, on KGS for instance. Unfortunately, there is no 9x9 rating
> there. I will compute 9x9 ratings with the KGS data I have.
>
> What I have observed with Crazy Stone is that gaining Elo points
> against humans is more difficult than gaining Elo points against GNU
> Go, which is more difficult than gaining Elo points against MC
> programs, which is more difficult than gaining Elo points against
> itself. But it is more an intuition than a scientific study.
>
> Rémi
> _______________________________________________
> computer-go mailing list
> computer-go@computer-go.org
> http://www.computer-go.org/mailman/listinfo/computer-go/
>