Quoting Gunnar Farnebäck [EMAIL PROTECTED]:
            10k    100k   1M
GNU Go CVS  0.079  0.387  0.475
This position seems to fit the extra knowledge of Valkyria well, but
not perfectly
          500   1k    10k   100k
Valkyria  0.76  0.76  0.64
Nick Wedd wrote:
In one of the British Championship Match games, a bit over ten years
ago, Zhang Shutai made an illegal ko move against Matthew Macfadyen, and
immediately conceded that he had lost the game.
Is the game record available? I am interested because I have only found 2
situations
On Dec 13, 2007 12:17 PM, Jacques Basaldúa [EMAIL PROTECTED] wrote:
Nick Wedd wrote:
In one of the British Championship Match games, a bit over ten years
ago, Zhang Shutai made an illegal ko move against Matthew Macfadyen, and
immediately conceded that he had lost the game.
Is the game
Hi,
It seems to me that in some ways Scheme is less feature-bloated than Common
Lisp.
in my DOS version of AUGOS, I embedded a small Lisp interpreter (Inflisp)
into the Pascal code. The Lisp files, which make up the inference engine,
are included in the runtime version downloadable from my
On Dec 12, 2007 10:19 PM, David Fotland [EMAIL PROTECTED] wrote:
Many Faces' life-and-death search is best-first and probability-based,
but I don't use UCT to select moves. I select the move that has the highest
probability of changing the value of the root (from success to fail or vice
This is an artifact of using the mercy rule.
You can change it in config.cpp
use_mercy_rule = true
Should I make it default?
Thanks,
Lukasz
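For readers unfamiliar with the term: a mercy rule ends a playout early once one side is hopelessly ahead in captures, trading a little accuracy for speed. A minimal sketch of the idea — the names and the threshold are assumptions for illustration, not the actual config.cpp code:

```python
# Toy illustration of a mercy-rule cutoff in a Monte Carlo playout.
# MERCY_THRESHOLD and all names here are invented for this sketch.

MERCY_THRESHOLD = 25  # a sizable fraction of the board's stones

def playout_result(moves, use_mercy_rule=True):
    """moves: iterable of (player, captures_delta) steps of a simulated game.
    Returns +1 if black wins, -1 if white wins (toy scoring)."""
    captures = {"black": 0, "white": 0}
    for player, delta in moves:
        captures[player] += delta
        if use_mercy_rule:
            lead = captures["black"] - captures["white"]
            if abs(lead) > MERCY_THRESHOLD:
                # Stop early: the side far ahead in captures is declared winner.
                return 1 if lead > 0 else -1
    # Otherwise fall through to normal end-of-playout scoring (stubbed here).
    return 1 if captures["black"] >= captures["white"] else -1
```

The artifact discussed above comes from exactly this early exit: positions scored by the cutoff never reach normal counting.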
On Dec 10, 2007 11:41 PM, Heikki Levanto [EMAIL PROTECTED] wrote:
On Mon, Dec 10, 2007 at 04:08:48PM -0500, Don Dailey wrote:
Would you rather be 95%
On Dec 13, 2007 2:03 AM, Harald Korneliussen [EMAIL PROTECTED] wrote:
Wed, 12 Dec 2007 07:14:48 -0800 (PST) terry mcintyre wrote:
Heading back to the central idea, of tuning the predicted winning
rates and evaluations: it might be useful to examine lost games, look
for divergence between
I just want to make some comments about MC evaluation to remove some
common misunderstandings.
I have seen some complaints about misevaluation, such as a program
having a 65% chance of winning in a game which is lost, and the other way
around. For example, arguments have been proposed in line
This was right on the mark! It exposed a lot of misconceptions and
wrong thinking about MC and evaluation.
- Don
Magnus Persson wrote:
I just want to make some comments about MC evaluation to remove some
common misunderstandings.
I have seen some complaints about misevaluation such as a
steve uurtamo wrote:
Currently there is no evidence whatsoever that probability estimates
are
inferior and they are the ones playing the best GO right now
are they?
Yes - in both 9x9 and 19x19 go.
- Don
On 12/11/07, Mark Boon [EMAIL PROTECTED] wrote:
Question: how do MC programs perform with a long ladder on the board?
My understanding of MC is limited, but thinking about it, a crucial
long ladder would automatically make the chances of any playout
winning 50-50, regardless of the actual
It's quite different from PN. PN expands a leaf node one ply and backs up
values to the root. I play a line as many ply as needed until I get a high
confidence evaluation of win or lose. In this sense I am doing something
like UCT with non-random playouts. PN typically doesn't use move
Eric,
Yes, as Magnus also stated, MC play-outs don't really accurately
estimate the real winning probability, but they still get the move order
right most of the time.
The situation is that if the position is really a win, it doesn't mean
that an MC program is able to find the proof tree. But it
Jason House wrote:
MoGo uses TD to predict win rates.
Really? Where did you get that information?
--
GCP
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/
Christoph,
Your bayeselo rating is 1942 on CGOS. I compiled a table that has
all players with 50 games or more which can be found here:
http://cgos.boardspace.net/9x9/hof2.html
- Don
Christoph Birk wrote:
On Tue, 11 Dec 2007, Don Dailey wrote:
Christoph,
Let me know when
Don,
This has taken me some time to formulate an answer, mainly because
you are making so many assumptions about what I understand or imagine
and what not. It makes for a confused discussion, and I didn't feel
like getting into arguments like no, that's not what I meant, etc.
Let me
I'm going to estimate that 100 ELO is roughly 1 rank based on this:
http://en.wikipedia.org/wiki/Go_ranks_and_ratings
This may not hold for 9x9. If a 1 kyu beats a 2 kyu about 64% of the
time in an even game at 19x19, it doesn't imply that he will do the
same at 9x9, but until I have a
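The 64% figure follows directly from the Elo model: a 100-point gap gives the stronger player an expected score of 1/(1 + 10^(-100/400)) ≈ 0.64. A quick check:

```python
# Sanity check of the ~64% figure under the standard Elo model.

def elo_expected_score(rating_gap):
    """Expected score for the higher-rated player, given the rating gap."""
    return 1.0 / (1.0 + 10.0 ** (-rating_gap / 400.0))

print(round(elo_expected_score(100), 3))  # prints 0.64
```

As noted later in the thread, whether 100 Elo also equals one rank is a separate empirical question; this only verifies the winning-percentage side of the claim.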
Don Dailey wrote:
We may be able to borrow KGS data of well established players playing
9x9 games against each other to estimate this. Would anyone like to
volunteer to do this?
Bill Shubert kindly provided this data to me. I am working on a study
about rating systems for the game of Go.
It would be great if you would provide recommendations for a simple
conversion formula when you are ready based on this study. Also,
if you have any suggestions in general for CGOS ratings the
cgos-developers would be willing to listen to your suggestions.
- Don
Rémi Coulom wrote:
Don
Hi Don,
There is not enough evidence to believe this.
Tast-3k has too few matches against each program, fewer than ten games,
and has no matches against the strongest programs, including Crazy Stone,
MoGo and greenpeep. In addition, there seems to be some bias, that is,
his winning rate against
On Dec 13, 2007 11:39 AM, Gian-Carlo Pascutto [EMAIL PROTECTED] wrote:
Jason House wrote:
MoGo uses TD to predict win rates.
Really? Where did you get that information?
I can't seem to load http://www.lri.fr/~gelly/MoGo.htm at the moment, but I
found it there. One of the papers you can
Hi Mark,
It wasn't my intention to sound argumentative about this, I apologize
for this.
Yes, I agree that the shorter mate sequence should be chosen and also
that if all else is equal, the bigger win should be the course to follow.
There is a misconception that MC favors winning by the
Don Dailey wrote:
It would be great if you would provide recommendations for a simple
conversion formula when you are ready based on this study. Also,
if you have any suggestions in general for CGOS ratings the
cgos-developers would be willing to listen to your suggestions.
- Don
My
I'd like to start a more specific discussion about ways to combine tactical
information with MC-UCT. Here's the scenario.
It's the bot's turn and, prior to starting any playouts, it runs a tactical
analyzer (for want of a better name) that labels each string as unconditionally
alive,
It's the approach I believe to be more human-like. Not necessarily the
playing style.
Human beings chunk.
What all this fuss suggests to me is a meta-mc program... You
include routines that work out good sequences, as a human would--and
then you have the random part of the program
On Dec 13, 2007 2:28 PM, Forrest Curo [EMAIL PROTECTED] wrote:
It's the approach I believe to be more human-like. Not necessarily the
playing style.
Human beings chunk.
What all this fuss suggests to me is a meta-mc program... You
include routines that work out good sequences, as a
I am considering enforcing this basic protocol on the server soon:
Programs of the same family will not be paired against each other.
A family of programs has the same name up to the first hyphen and the
same password.
So if I have these programs:
Name password
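The proposed rule can be sketched as follows; the helper names are invented for illustration and this is not CGOS's actual pairing code:

```python
def family(name):
    """A bot's family is its name up to the first hyphen."""
    return name.split("-", 1)[0]

def may_pair(bot_a, bot_b):
    """Bots must not be paired when they are in the same family,
    i.e. same family name AND same password."""
    same_family = (family(bot_a["name"]) == family(bot_b["name"])
                   and bot_a["password"] == bot_b["password"])
    return not same_family
```

Note how the underscore convention mentioned later in the thread works under this rule: "My_BotA-1" and "My_BotB-1" have families "My_BotA" and "My_BotB", so they may still be paired even with the same password.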
From time to time I have put highly experimental and very different
programs on CGOS and I don't care if they play themselves
What I meant to say is that I don't care if they play other programs of
mine.
- Don
Don Dailey wrote:
I am considering to enforce this basic protocol on the
Many Faces still finds the correct move on the first trial, but now it takes
74 nodes to prove the first move works, rather than one node.
It looks at a total of 114 nodes to prove that no other move works.
David
It looks like CGOS 19x19 is down again.
-David
-Original Message-
From: [EMAIL PROTECTED] On Behalf Of Don Dailey
Sent: Thursday, December 13, 2007 9:47 AM
To: Don Dailey
Cc: computer-go
Subject: Re: [computer-go] Where and How to Test the
Hi Rémi ,
Rémi Coulom: [EMAIL PROTECTED]:
Don Dailey wrote:
It would be great if you would provide recommendations for a simple
conversion formula when you are ready based on this study. Also,
if you have any suggestions in general for CGOS ratings the
cgos-developers would be willing
Isn't Greenpeep an alpha-beta searcher, not UCT/MC?
Since Go ranks are based on handicap stones, and 100 ELO points implies a
particular winning percentage, it would be an unlikely coincidence if 1 rank
is 100 ELO points. Any web site that claims this must be wrong :) and
should have little
David Fotland wrote:
Isn't Greenpeep an alpha-beta searcher, not UCT/MC?
Since Go ranks are based on handicap stones, and 100 ELO points implies a
particular winning percentage, it would be an unlikely coincidence if 1 rank
is 100 ELO points. Any web site that claims this must be wrong :)
On Dec 13, 2007 2:17 PM, [EMAIL PROTECTED] wrote:
I'd like to start a more specific discussion about ways to combine
tactical information with MC-UCT. Here's the scenario.
It's the bot's turn and, prior to starting any playouts, it runs a
tactical analyzer (for want of a better name) that
On Dec 13, 2007 2:37 PM, Don Dailey [EMAIL PROTECTED] wrote:
I am considering to enforce this basic protocol on the server soon:
Programs of the same family will not be paired against each other.
I frequently look at the games between my bot versions more than I look at
them with other
On Dec 13, 2007 3:09 PM, David Fotland [EMAIL PROTECTED] wrote:
Isn't Greenpeep an alpha-beta searcher, not UCT/MC?
I could have sworn I heard it described as UCT/MC with MoGo-like
enhancements.
Seems like the final solution to this would need to build out the
search tree to the end of the game, finding a winning line. And then
search again with a different evaluation function (one based on
points). If the second search cannot find a line that wins by more
than the first search did, just
Jason House:
Don't forget that local tactical analysis can be reused many moves
later if the local area has remained unaffected.
In a multi-core
system, it may become increasingly valuable to dedicate a core to
tactical analysis.
In another post, libego with a million playouts per move had
My program StoneGrid calculates unconditional life and death at every move,
in the UCT tree and in the random playouts. I think it helps its strength
a little bit, especially in the endgame. In the beginning of the game, it seems
to be completely useless. It is slow. But it makes the random playout
On Dec 13, 2007 3:33 PM, Chris Fant [EMAIL PROTECTED] wrote:
Seems like the final solution to this would need to build out the
search tree to the end of the game, finding a winning line. And then
search again with a different evaluation function (one based on
points). If the second search
Jason House wrote:
The paper introduces RAVE and
near the end talks about using heuristics for initial parameter
estimation. The heuristic they used was based on TD.
Ah, you're talking about RLGO. RLGO was trained with TD, but MoGo itself
doesn't use TD (directly).
There are posts from Sylvain
At the end of a playout there is probably some code that says something
like
reward = (score > komi) ? 1.0 : 0.0;
You can just replace it with
reward = 1 / (1 + exp(- K * (score - komi)));
A huge value of K will reproduce the old behaviour, a tiny value will result
in a program that tries to
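The two reward schemes side by side, as a sketch (the function and variable names are assumptions, not any particular program's code):

```python
import math

def hard_reward(score, komi):
    # Original behaviour: 1 for a win, 0 for a loss, regardless of margin.
    return 1.0 if score > komi else 0.0

def soft_reward(score, komi, k):
    # Sigmoid reward: sensitive to the winning margin. A huge k reproduces
    # hard_reward; a tiny k makes the program care mostly about the margin.
    return 1.0 / (1.0 + math.exp(-k * (score - komi)))
```

At k = 0 the soft reward is a constant 0.5 (the margin dominates entirely); as k grows, it converges pointwise to the hard 0/1 reward.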
I don't want to add more mechanisms. You can build your own mechanism
by making your own password naming convention or bot naming
convention. For instance you can use the underscore character to
build separate families of bots and still keep your own branding.
We might at some point make a
On Dec 13, 2007 3:40 PM, terry mcintyre [EMAIL PROTECTED] wrote:
Jason House:
Don't forget that local tactical analysis can be reused many moves later
if the local area has remained unaffected.
In a multi-core system, it may become increasingly valuable to dedicate
a core to tactical
On Dec 13, 2007 3:52 PM, Gian-Carlo Pascutto [EMAIL PROTECTED] wrote:
Jason House wrote:
The paper introduces RAVE and
near the end talks about using heuristics for initial parameter
estimation. The heuristic they used was based on TD.
Ah, you're talking about RLGO. RLGO was trained with
Nice idea and worth a try. I predict that this will weaken the
program no matter what value you use, but that there may indeed be a
reasonable compromise that gives you the better behavior with only a
very small decline in strength.
I think this bothers people so much that they would be
Quoting Álvaro Begué [EMAIL PROTECTED]:
On Dec 13, 2007 2:28 PM, Forrest Curo [EMAIL PROTECTED] wrote:
It's the approach I believe to be more human-like. Not necessarily the
playing style.
Human beings chunk.
What all this fuss suggests to me is a meta-mc program... You
include routines
On Dec 13, 2007 4:01 PM, Don Dailey [EMAIL PROTECTED] wrote:
I don't want to add more mechanisms. You can build your own mechanism
by making your own password naming convention or bot naming
convention. For instance you can use the underscore character to
build separate families of bots
That's a strong program, and interesting information. For clarity, I assume
that you mean something like Benson's algorithm, while my intended meaning was
alive assuming perfect play. Both are relevant, we just need to keep them
sorted out.
- Dave Hillis
Hi Begué and Don,
I did this in my earlier version of ggmc. The real code was:
reward = 0.5 * (1 + tanhf(K * (score - komi)));
# tanhf() is a float, not double, version of hyperbolic tangent
function.
# I use tanh() as exp() may cause overflow.
# You can see the code from http://www.gggo.jp/
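For what it's worth, the tanh and logistic forms are the same curve up to a factor of two in the gain, since 1/(1+e^(-x)) = (1 + tanh(x/2))/2 — so tanh with gain K matches the exp form with gain 2K, while tanh saturates gracefully where exp() would overflow. A quick numerical check:

```python
import math

# Identity: 1 / (1 + exp(-x)) == 0.5 * (1 + tanh(x / 2))

def logistic_reward(x):
    return 1.0 / (1.0 + math.exp(-x))

def tanh_reward(x):
    return 0.5 * (1.0 + math.tanh(x / 2.0))

for x in (-5.0, -0.5, 0.0, 0.5, 5.0):
    assert abs(logistic_reward(x) - tanh_reward(x)) < 1e-12

# tanh is safe for huge arguments where exp(-x) would overflow:
print(tanh_reward(-2000.0))  # prints 0.0
```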
Regarding correspondence with human ranks, and handicap value, I cannot
tell yet. It is very clear to me that the Elo-rating model is very wrong
for the game of Go, because strength is not one-dimensional, especially
when mixing bots and humans. The best way to evaluate a bot in terms of
human
Yes, StoneGrid only uses Benson's algorithm.
On Dec 13, 2007 4:30 PM, [EMAIL PROTECTED] wrote:
That's a strong program, and interesting information. For clarity, I
assume that you mean something like Benson's algorithm, while my intended
meaning was alive assuming perfect play. Both are
Please excuse me if this question has been answered before, my brief
look through the archives I have did not find it. How does one
compute unconditional life and death? Ideally, in an efficient
manner. In other words, I want to know, for each group of stones on
the board that share a common
On Dec 13, 2007 4:40 PM, George Dahl [EMAIL PROTECTED] wrote:
Please excuse me if this question has been answered before, my brief
look through the archives I have did not find it. How does one
compute unconditional life and death? Ideally, in an efficient
manner. In other words, I want to
Thanks!
- George
On 12/13/07, Jason House [EMAIL PROTECTED] wrote:
On Dec 13, 2007 4:40 PM, George Dahl [EMAIL PROTECTED] wrote:
Please excuse me if this question has been answered before, my brief
look through the archives I have did not find it. How does one
compute unconditional life
I think Martin Mueller published an improvement to Benson's algorithm that
is also proved correct.
David
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of John Fan
Sent: Thursday, December 13, 2007 1:36 PM
To: computer-go
Subject: Re: [computer-go] MC-UCT and tactical
Are you suggesting a mechanism that allows you to turn this off and on
at will and that is separate from the naming and password convention?
One thing I definitely would not do is allow you to select opponents you
prefer to play or not to play - whatever control we have will be limited
to our
There's some value to human-human games in this proposed tournament, I think.
Some humans might play worse at 5-minute time controls. Comparison with
longer games might be interesting.
Terry McIntyre [EMAIL PROTECTED]
They mean to govern well; but they mean to govern. They promise to be
Mark Boon wrote:
Let me therefore change the discussion a bit to see if this will make
things more clear. Consider a chess-playing program with an
unorthodox search method. When playing a human, after a while it
announces checkmate in thirty-four moves. Yet the human can clearly
see it's checkmate
The standard one is Benson's algorithm
http://senseis.xmp.net/?BensonsAlgorithm
The standard caveat is that this algorithm alone is very weak - it
typically applies to zero stones on a position played out using
Japanese rules. But you have to start
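For concreteness, here is a sketch of the iterative pruning phase at the heart of Benson's algorithm, assuming the board-level work (finding chains, enclosed regions, and which regions are vital to which chains) has already been done; the names are invented for illustration:

```python
# Benson's pruning fixpoint: a chain is unconditionally alive iff it
# survives repeatedly applying these two rules until nothing changes:
#   1. drop every chain with fewer than two (surviving) vital regions;
#   2. drop every region adjacent to a dropped chain.

def benson_alive(vital, adjacent):
    """vital:    chain -> set of regions vital to that chain.
    adjacent: region -> set of chains the region touches.
    Returns the set of unconditionally alive chains."""
    chains = set(vital)
    regions = {r for rs in vital.values() for r in rs}
    changed = True
    while changed:
        changed = False
        for c in list(chains):
            if len(vital[c] & regions) < 2:  # rule 1: fewer than two eyes
                chains.discard(c)
                changed = True
        for r in list(regions):
            if not adjacent[r] <= chains:    # rule 2: touches a dead chain
                regions.discard(r)
                changed = True
    return chains
```

A chain with two vital regions of its own (two secure eyes) survives; a chain whose second region depends on a chain that gets pruned is pruned in a later iteration, which is what makes this a fixpoint rather than a single pass.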
On Dec 13, 2007 4:51 PM, Don Dailey [EMAIL PROTECTED] wrote:
Do you have a suggestion for a specific mechanism for this?
I was mostly just thinking of a file that CGOS looks for that includes bot
names and the preferences. The "don't play" list would need obvious
restrictions like what you've
On Dec 13, 2007 4:50 PM, David Fotland [EMAIL PROTECTED] wrote:
I think Martin Mueller published an improvement to Benson's algorithm
that is also proved correct.
Yes. Safety under alternating play. It's more generally applicable but I
didn't think it met the needs of the original request.
Thomas Wolf wrote a life-and-death program some while back, with much stronger
abilities; he mentions that it only works for fully enclosed positions.
http://www.qmw.ac.uk/~ugah006/gotools/
You may wish to read http://lie.math.brocku.ca/twolf/papers/mono.pdf
Dave Dyer is too modest to refer
-Original Message-
From: Jason House [EMAIL PROTECTED]
To: computer-go computer-go@computer-go.org
Sent: Thu, 13 Dec 2007 3:20 pm
Subject: Re: [computer-go] MC-UCT and tactical information
On Dec 13, 2007 2:17 PM, [EMAIL PROTECTED] wrote:
I'd like to start a more specific
Don Dailey wrote:
I don't really know what you mean by one-dimensional. My
understanding of playing strength is that it's not one-dimensional,
meaning that it is foiled by intransitivities between players with
different styles. You may be able to beat me, but I might be able to
beat
Here is a card game I thought of while considering how to chunk
moves based on mc outcomes...
It is not in any way equivalent to programming go, but there are
significant similarities.
You have a deck of 360 cards numbered sequentially. (This is not as
complex as go, but the tree of
There's a sort of hierarchy of life-and-death methods, for which
Benson's algorithm is the base.
My status database is next above that, but it is actually a lookup table
based on a problem solver, such as Wolf's or mine. The unique thing
about the database is that it could be dropped in to a
The rule of thumb I try to follow is to connect when possible; try to
disconnect enemy groups - but don't bother with cuts if the separated groups
are alive; don't let yourself be cut off if you can't make two eyes.
These rules seem to make good sense; they're not just human style for the
This might be of interest given the recent interest in Go programming
in functional languages (Lisp).
http://lambda-the-ultimate.org/node/2533
What I mean is that if human player H beats computer C1 65% of the
time, and computer C2 also beats computer C1 65% of the time, then I
would expect that H would be stronger than C2, especially if both C1
and C2 are MC programs. If it is the case, then it would make it
difficult to compare
Impasse: noun,
1. There is no argument so elegant and compelling that it will prove the
negative that making UCT greedier could not possibly lead to more won games.
2. Everyone who has tried it one way will have tried some variations. It's not
as if it takes a lot of code. No one has reported
[EMAIL PROTECTED] wrote:
Impasse: noun,
1. There is no argument so elegant and compelling that it will prove
the negative that making UCT greedier could not possibly lead to more
won games.
I could hardly fail to disagree with you less.
Hi Don,
Don Dailey: [EMAIL PROTECTED]:
I want to clarify this:
The new CGOS chart uses bayeselo to recalculate all the ratings for the
players - it does not use CGOS ratings.
Hm, now I remember that there were quite a few games wrongly ended
and scored because of server hang-ups. In addition,
Many strong programs have 100% scores against many opponents and many
games. They cannot be hanging up very often.
When the server hangs, the current game you are playing is not scored.
I don't think there is a major problem here.
As far as network problems CGOS considers that part of
I got a smile out of that.
- Dave Hillis
-Original Message-
From: Don Dailey [EMAIL PROTECTED]
To: computer-go computer-go@computer-go.org
Sent: Thu, 13 Dec 2007 8:52 pm
Subject: Re: [computer-go] low-hanging fruit - yose
[EMAIL PROTECTED] wrote:
Impasse: noun,
1. There is no
Why don't you mention the several versions on one login name
problem?
And, I considered CGOS not as NASCAR-type commercial racing but as a
field to help developers improve their programs, say, in some
academic sense.
What is your reason to name it as 'Hall of fame'? I'm not Western
and can
I was thinking that it could be quicker to do prototyping in something
like python, while having fast low-level functions in C. ...
I have done a Python binding for the current libego. You can get it from
http://mjw.woodcraft.me.uk/2007/pyego/ .
I did this as an exercise in using Pyrex
Hi. My program greenpeep is currently UCT-based, with some MoGo-like
enhancements and some additional learning.
I described it more here:
http://computer-go.org/pipermail/computer-go/2007-October/011438.html
http://computer-go.org/pipermail/computer-go/2007-November/011865.html
Regarding the
Hideki Kato wrote:
Why don't you mention the several versions on one login name
problem?
I don't consider it a major problem. The theory is that a big
improvement against versions of the same program might not translate to
equivalent improvements vs other programs. I want to see that
Your sentences make me strongly believe it's too early.
I won't be against your idea. Again, just claiming it's too early.
Following your analogy to sports, there should be some guarantee of
fairness and agreement of participants.
Our presupposition was that only recent results were important.
Don't worry Hideki,
Nothing has changed on CGOS, only something has been added, and it has
no effect on what is already there.
The standard current standings page also stays the same. No change I
promise.
Different versions of a program running on CGOS have never been an issue
before, and
Don Dailey [EMAIL PROTECTED] writes:
I think it's very difficult to outperform C since C really is just
about at the level of assembly language.
No, in special cases it's not that hard to outperform C, because the
language spec dictates some not-so-efficient details. C has an ABI and
it's