[computer-go] IEEE Trans. CIAIG

2009-02-25 Thread Lucas, Simon M
 The IEEE Transactions on Computational Intelligence and AI in Games
 invites submissions on computer go - many of the ideas discussed on
 this list are of core interest to the journal.

 

 The journal offers an efficient and thorough review process.  Currently the
 average time between submission and first decision is less than six weeks.

 

 More details here:

 

  http://www.ieee-cis.org/pubs/tciaig/

 

 best wishes,

 

  Simon Lucas

  IEEE T-CIAIG EiC

 

 

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Presentation of my personal project : evolution of an artificial go player through random mutation and natural selection

2009-02-25 Thread Ernest Galbrun
Dave,
Thank you for taking the time to give me this advice. I will give you my
opinion about your last point first, because I think it is the most important
one: it stresses what I really wish to achieve with this project. I am
perfectly aware that I am very naive and bold in my approach to this problem.
That's the point. My project is not really a computer-go project; it doesn't
have much to do with AI. It's about natural selection only, and the go game is
a pretext. As such, my intent is not to express my art in evolving an
artificial neural network, it is to give my players the same opportunities
that our DNA ancestors had a few billion years ago.

With that in mind, here is how I feel about the other points you mentioned:
- Testing with smaller boards would indeed be wise, and I am running a single
9x9 ecosystem (OpenGo does not support smaller boards). The problem is that I
only have limited resources, and I think it is much more fun to evolve real go
players than ersatz practice-level contestants. Besides, if my approach ever
gives any results, a lot of "meta-evolution" needs to occur first (evolution
of the efficiency of evolution itself), and this will probably take just as
much time on a small board, time that is completely lost when/if I try to
scale up.
- I will certainly try to test it against other computer-go players. I have to
implement a GTP interface for my players; this is on my TODO list (a minimal
sketch of such an interface follows after this list).
- There is, in theory, a way for any internal function to duplicate and be
used elsewhere, through the definition of genes in my neural network. The
players will have to find out how to use this. And yes, I intend the players
to figure out the simplest go principles by themselves; I think this is what
evolution is best at (you know, actually evolving).
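
A minimal sketch of such a GTP skeleton, assuming Python; genmove_from_net()
is a hypothetical placeholder for whatever the evolved player actually does,
and command ids plus most optional commands are left out:

    import sys

    def genmove_from_net(color):
        # Hypothetical hook: ask the evolved network for a move ("D4", "pass", ...).
        return "pass"

    def gtp_loop():
        commands = ["protocol_version", "name", "version", "list_commands",
                    "boardsize", "clear_board", "komi", "play", "genmove", "quit"]
        for line in sys.stdin:
            fields = line.split()
            if not fields:
                continue
            cmd, args = fields[0], fields[1:]
            if cmd == "protocol_version":
                reply = "2"
            elif cmd == "name":
                reply = "evolved-player"
            elif cmd == "version":
                reply = "0.1"
            elif cmd == "list_commands":
                reply = "\n".join(commands)
            elif cmd in ("boardsize", "clear_board", "komi", "play"):
                reply = ""        # update the internal board state here
            elif cmd == "genmove":
                reply = genmove_from_net(args[0] if args else "b")
            elif cmd == "quit":
                print("=\n")
                break
            else:
                print("? unknown command\n")
                sys.stdout.flush()
                continue
            print("= %s\n" % reply)
            sys.stdout.flush()

    if __name__ == "__main__":
        gtp_loop()

Once that loop works, any GTP controller (GoGui, kgsGtp, the CGOS client) can
drive the player over stdin/stdout.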

Ernest Galbrun

On Tue, Feb 24, 2009 at 20:52,  wrote:

> Ernest,
> Fun stuff! I have a co-evolved neural net that used to play on KGS as
> “Antbot9x9”. I use the same net in the progressive widening part of my MCTS
> engine. I would guess that many people experiment along these lines but they
> rarely report results.
>
> Here are some suggestions that might be relevant:
> - If you test your approach on smaller board sizes you can get
> results orders of magnitude faster. 7x7 would be a good starting size. (If
> you use 5x5, make sure your super-ko handling is rock solid first.)
> - Take the strongest net at every generation and benchmark it
> against one or more computer opponents to measure progress over time.
> Suitable computer opponents would be light playouts (random), heavy playouts
> (a bit tougher), Wally (there’s nothing quite like getting trounced by the
> infamous Wally to goad one into a new burst of creativity), and Gnugo.  When
> you have a net you like, it can play against other bots online at CGOS and
> get a ranking.
> - Use a hierarchical architecture, or weight sharing or something
> to let your GA learn general principles that apply everywhere on the board.
> A self-atari move on one spot is going to be roughly as bad as on any other
> spot. You probably don’t want your GA to have to learn not to move into
> self-atari independently for every space on the board (see the sketch at
> the end of this list).
> - Use the “mercy rule” to end games early when one color has an
> overwhelming majority of the stones on the board.
> - Feed the net some simple features. To play well, it will have to
> be able to tell if a move would be self-atari, a rescue from atari, a
> capturing move,… Unless you think it might not need these after all, do you
> really want to wait for the net to learn things that are trivial to
> pre-calculate? You are probably reluctant to feed it any features at all. As
> a motivating exercise, you could try having the GA evolve a net to calculate
> one of those features directly.
> - GA application papers tend to convey the sense that the author
> threw a problem over a wall and the GA caught it and solved it for him.
> Really, there’s a lot of art to it and a lot of interactivity. Fortunately,
> that’s the fun part.
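>
> A minimal sketch of the weight-sharing idea, assuming the feature planes
> (self-atari, capture, ...) are computed by ordinary Go logic elsewhere; the
> plane layout, names and numbers below are illustrative, not anyone's actual
> engine:
>
>     import numpy as np
>
>     SIZE = 9
>     FEATURES = 3   # e.g. 0: friendly stone adjacent, 1: self-atari, 2: capture
>
>     def move_scores(planes, genome):
>         """planes: (FEATURES, SIZE, SIZE) array of 0/1 feature planes.
>         genome: (FEATURES,) shared weights -- the part the GA evolves."""
>         # The same weights score every intersection (weight sharing).
>         return np.tensordot(genome, planes, axes=1)   # -> (SIZE, SIZE) scores
>
>     # Toy usage with made-up planes and a genome that dislikes self-atari.
>     planes = np.zeros((FEATURES, SIZE, SIZE))
>     planes[1, 2, 2] = 1.0               # pretend the point (2, 2) is self-atari
>     genome = np.array([0.2, -1.0, 0.8])
>     scores = move_scores(planes, genome)
>     best_point = np.unravel_index(np.argmax(scores), scores.shape)
>
> Because one genome scores every point, "self-atari is bad" only has to be
> discovered once rather than separately for each intersection.
>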
> - Dave Hillis
>
>
>
> -----Original Message-----
> From: Ernest Galbrun 
> To: computer-go 
> Sent: Tue, 24 Feb 2009 7:28 am
> Subject: Re: [computer-go] Presentation of my personal project : evolution
> of an artificial go player through random mutation and natural selection
>
>>  I read a paper a couple years ago about a genetic algorithm to evolve
>>  a neural network for Go playing (SANE I think it was called?).  The
>> network would output a value from 0 to 1 for each board location, and
>> the location that had the highest output value was played as the next
>> move.  I had an idea that the outputs could be sorted to get the X
>> "best" moves, and that that set of moves could be used to direct a
>> minimax or monte carlo search.  I haven't had the chance to prototype
>> this, but I think it would be an interesting and possibly effective
>> way to combine neural networks with the current Go algorithms.
>>
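>> A rough sketch of that selection step, assuming the net returns one value
>> per point; net_outputs, legal_moves and top_x_moves() are illustrative
>> names, not taken from the SANE paper:
>>
>>     def top_x_moves(net_outputs, legal_moves, x=5):
>>         """net_outputs: dict mapping (row, col) -> value in [0, 1].
>>         legal_moves: iterable of (row, col) points that are legal to play.
>>         Returns the x legal moves with the highest network output."""
>>         ranked = sorted(legal_moves,
>>                         key=lambda p: net_outputs.get(p, 0.0), reverse=True)
>>         return ranked[:x]
>>
>> The minimax or Monte Carlo search would then expand only these x candidates
>> instead of every legal point.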
>