In message <20110407213749.1047380...@mail.satirist.org>, Jay Scott
writes:

Álvaro Begué alvaro.be...@gmail.com:
> a method to evaluate proposed improvements that might
> be much better than playing a gazillion games. A search results in two
> things: A move and a probability of winning (or a score that can be
> mapped into a probability of winning, but let's ignore that issue
Currently, for Erica, I use two kinds of testing:
1. I have a lot of tactical positions, mainly collected from Erica's lost games
(KGS games against human players or from tournaments). Some of them were
artificially designed by myself for many specific tactical situations. I let
Erica ru
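A harness for this kind of tactical regression suite might look like the sketch below. Everything here is a hypothetical stand-in: `run_engine` is a canned stub where a real harness would send each position to the engine (e.g. over GTP) and read back its chosen move, and the position ids and expected answers are invented.

```python
# Minimal sketch of a tactical regression suite as described above.
# `run_engine`, the position ids, and the expected moves are all
# hypothetical stand-ins; a real harness would query the engine
# (e.g. over GTP) instead of using canned answers.

def run_engine(position_id):
    # Stub engine: canned replies so the harness itself is runnable.
    stub_answers = {"pos-001": "C3", "pos-002": "Q16", "pos-003": "D4"}
    return stub_answers[position_id]

def run_suite(suite):
    """suite: list of (position_id, set of acceptable moves).
    Returns the failing cases as (position_id, played, acceptable)."""
    failures = []
    for position_id, acceptable in suite:
        move = run_engine(position_id)
        if move not in acceptable:
            failures.append((position_id, move, acceptable))
    return failures

suite = [
    ("pos-001", {"C3"}),          # life-and-death: only one move lives
    ("pos-002", {"Q16", "R16"}),  # either answer is acceptable
    ("pos-003", {"C17"}),         # a position the engine historically missed
]

failures = run_suite(suite)   # the stub still misses pos-003
```

Re-running the suite after each change gives a quick pass/fail signal on exactly the positions the engine used to get wrong, without playing any full games.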
On Thu, Apr 7, 2011 at 11:23 PM, terry mcintyre wrote:
> When training a shot putter, one does not merely practice the entire
> activity; one looks for weaknesses and devises strategies [...]

This i
When training a shot putter, one does not merely practice the entire activity;
one looks for weaknesses and devises strategies to strengthen the athlete in
those areas. These strategies might include time on the weight bench.
We'll probably need a variety of ways to nudge programs to the next
>> ...the proper response
>> to every threat, until the job is done.
>>
>> For extra credit, test what happens if the opponent continues to thrash
>> after the situation becomes hopeless - the program should not [...] that
>> area if it is not needed, but should take sente elsewhere.
>>
>> In short, the program must master not only move A, but B, C, D, E, ... Z -
>> and one should test different move orders for the opponent, including
>> feints in other parts of the board.
>>
>> Having mastered a few such test cases, now introduce a ko or seki to make
>> the analysis a bit more complex. Rinse and repeat.
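This drill can be automated along the lines of the toy harness below: replay every opponent move order, including a feint elsewhere, and check the program answers each real threat but takes sente when no local answer is needed. The threat labels and `engine_reply` stub are hypothetical; a real test would query a search instead.

```python
# Toy harness for the drill above: try every permutation of the
# opponent's moves (threats plus a feint) and verify the program's
# reply is correct at each step. `engine_reply` is a hypothetical
# rule-based stub standing in for a real engine search.

from itertools import permutations

ANSWERS = {"threat-A": "answer-A", "threat-B": "answer-B"}
FEINT = "feint-elsewhere"   # a move that deserves no local answer

def engine_reply(opponent_move):
    # Stub policy: answer known threats, otherwise take sente.
    return ANSWERS.get(opponent_move, "take-sente")

def passes_drill(opponent_moves):
    """True iff the engine responds correctly under every move order."""
    for order in permutations(opponent_moves):
        for move in order:
            expected = ANSWERS.get(move, "take-sente")
            if engine_reply(move) != expected:
                return False
    return True

ok = passes_drill(["threat-A", "threat-B", FEINT])
```

Once the program passes with simple threats, the same loop can be pointed at the ko and seki variants of the position.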
>> Terry McIntyre
>>
>> Unix/Linux Systems Administration
>> Taking time to do it right saves having to do it twice.
From: Don Dailey
To: computer-go@dvandva.org
Sent: Thu, April 7, 2011 3:33:36 PM
Subject: Re: [Computer-go] Evaluating improvements differently
On Thu, Apr 7, 2011 at 2:27 PM, Álvaro Begué wrote:
> Hi,
>
> I haven't spent any time in go programming recently, but a few months
> ago I thought of a method to evaluate proposed improvements that might
> be much better than playing a gazillion games. A search results in two
> things: A move and a probability of winning (or a score that can be
> mapped into a probability of winning, but let's ignore that issue
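The proposal above (score an engine version by the win probabilities of the moves it chooses, rather than by full game results) might be wired up as in the sketch below. The reference table would come from a much deeper search; the positions, moves, and probabilities here are invented toy values purely to illustrate the bookkeeping.

```python
# Hedged sketch of the idea above: instead of playing a gazillion
# games, score a version by the *reference* win probability of the
# moves it picks on a fixed set of positions. The reference table is
# hypothetical; real values would come from a much deeper search.

reference = {
    # position -> {candidate move: win probability from a deep search}
    "p1": {"A": 0.55, "B": 0.52, "C": 0.40},
    "p2": {"A": 0.48, "B": 0.61},
}

def score_version(choices):
    """choices: position -> move picked by the version under test.
    Returns the mean reference win probability of its picks, so a
    version that finds better moves scores strictly higher."""
    total = sum(reference[pos][move] for pos, move in choices.items())
    return total / len(choices)

baseline = score_version({"p1": "B", "p2": "A"})   # 0.52 and 0.48
improved = score_version({"p1": "A", "p2": "B"})   # 0.55 and 0.61
```

Because every position contributes a signal (how much win probability the chosen move gives away), far fewer trials are needed than with win/loss game outcomes, at the cost of trusting the reference search.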