[Computer-go] ICGA Computer Olympiad in Taiwan

2017-12-19 Thread Rémi Coulom
Hi, The Computer Olympiad was announced yesterday: " Dear Colleagues, The ICGA is pleased to announce that the 2018 Computer Olympiad and the 10th International Conference on Computers and Games (CG 2018) will be held in Taiwan, from July 9th-13th inclusive. The Chess events, including the Wor

Re: [Computer-go] Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm

2017-12-19 Thread Fidel Santiago
Hello, I was thinking about this development and what it may mean from the point of view of a more general AI. I daresay the next experiment would be to have just one neural net playing the three games, right? To my understanding we still have three instances of the same *methodology* but not yet

Re: [Computer-go] Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm

2017-12-19 Thread Andy
Google has already announced their next step -- StarCraft II. But so far the results they have published aren't as mind-blowing as these.

Re: [Computer-go] Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm

2017-12-19 Thread Roel van Engelen
>I was thinking about this development and what it may mean from the point of view of a more general AI. >I daresay the next experiment would be to have just one neural net playing the three games, right? >To my understanding we still have three instances of the same *methodology* but not yet a si

Re: [Computer-go] Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm

2017-12-19 Thread Marc Landgraf
There is not much to achieve there, though. It is expected that an AI will be able to outplay a human opponent simply through micro tricks. Perfect single-unit micromanagement across the entire map can easily gain a large enough edge that the strategic decision-making with imperfect information doesn't

[Computer-go] mcts and tactics

2017-12-19 Thread Dan
Hello all, It is known that MCTS's weak point is tactics. How is AlphaZero able to resolve Go tactics such as ladders efficiently? If I recall correctly, many people were asking the same question during the Lee Sedol match -- and it seemed it didn't have any problem with ladders and such. In chess
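As a back-of-the-envelope illustration of the difficulty (the probabilities below are made-up assumptions, not measurements): if an unguided rollout picks the single correct reply with probability p at each of the ~2n plies of an n-step ladder, it reads the whole ladder out with probability roughly p^(2n), which collapses very quickly:

    # Illustrative only: chance that an unguided rollout follows an n-step
    # ladder, assuming the correct move is chosen with probability p per ply.
    def rollout_reads_ladder(p: float, n: int) -> float:
        return p ** (2 * n)

    for n in (5, 10, 20):
        print(n, rollout_reads_ladder(0.05, n))   # p = 0.05 ~ 1 of ~20 plausible moves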

Re: [Computer-go] mcts and tactics

2017-12-19 Thread Stephan K
2017-12-20 0:26 UTC+01:00, Dan : > Hello all, > It is known that MCTS's weak point is tactics. How is AlphaZero able to resolve Go tactics such as ladders efficiently? If I recall correctly, many people were asking the same question during the Lee Sedol match -- and it seemed it didn't have a

Re: [Computer-go] mcts and tactics

2017-12-19 Thread uurtamo .
You guys are killing me. Let's do what the space-science guys did: parallelize via slow computation. If you need me to handle errors, I can do ECCs. I know how to correct for errors. Why are we all trying to find compute power independently? Let's just add it up. There's no real money her

Re: [Computer-go] mcts and tactics

2017-12-19 Thread Andy
How do you interpret this quote from the AGZ paper? "Surprisingly, shicho (“ladder” capture sequences that may span the whole board) – one of the first elements of Go knowledge learned by humans – were only understood by AlphaGo Zero much later in training." To me "understood" means the neural net

Re: [Computer-go] mcts and tactics

2017-12-19 Thread David Wu
I wouldn't find it so surprising if eventually the 20- or 40-block networks develop a set of convolutional channels that trace possible ladders diagonally across the board. If they had enough examples of ladders of different lengths, including self-play games where game-critical ladders "failed to be
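A toy numpy sketch of that mechanism (my own illustration; the kernel and board values are assumptions, not anything from the paper): each layer shifts a feature one diagonal step, so stacking k such layers lets a signal propagate k points along a ladder path.

    import numpy as np

    # Toy illustration: one "layer" copies a signal a single step down-right,
    # i.e. the one nonzero tap of a 3x3 convolution. Stacking k such layers
    # carries a mark k points along a diagonal -- the kind of channel that
    # could trace a ladder across the board.
    def diag_step(plane: np.ndarray) -> np.ndarray:
        out = np.zeros_like(plane)
        out[1:, 1:] = plane[:-1, :-1]
        return out

    board = np.zeros((19, 19))
    board[3, 3] = 1.0                  # hypothetical ladder starting point

    signal = board
    for _ in range(10):                # 10 layers -> the mark reaches (13, 13)
        signal = diag_step(signal)
    print(np.argwhere(signal > 0))     # [[13 13]]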

Re: [Computer-go] mcts and tactics

2017-12-19 Thread Brian Sheppard via Computer-go
>I wouldn't find it so surprising if eventually the 20 or 40 block networks develop a set of convolutional channels that traces possible ladders diagonally across the board. Learning the deep tactics is more-or-less guaranteed because of the interaction between search and evaluation throug
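A highly simplified sketch of that search/evaluation feedback loop (placeholder names such as run_mcts, net and new_game are my own, not the DeepMind implementation): the search's visit counts become the policy target and the game outcome becomes the value target, so tactics discovered by the search are folded back into the evaluation.

    import numpy as np

    # Simplified self-play loop: search improves on the raw net, and the
    # improved (state, policy, outcome) triples become the net's training data.
    def self_play_game(net, run_mcts, new_game):
        game, examples = new_game(), []
        while not game.is_over():
            visits = run_mcts(game, net)           # search guided by the net
            pi = visits / visits.sum()             # search-improved policy
            examples.append((game.features(), pi))
            game.play(int(np.argmax(pi)))          # (move sampling omitted for brevity)
        z = game.result()                          # final outcome; per-player sign flip omitted
        return [(s, pi, z) for (s, pi) in examples]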

[Computer-go] Mcts and tactics

2017-12-19 Thread patrick.bardou via Computer-go
Hi Daniel, AGZ paper: a greedy player based on the policy network (= zero look-ahead) has an estimated Elo of 3000 (~ Fan Hui 2p). Professional-player level with zero look-ahead. For me, it is the other striking aspect of 'Zero'! ;-) IMO, this implies that the NN has indeed captured lots of tactics. Eve
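For reference, that zero-look-ahead player amounts to nothing more than an argmax over the policy head (a minimal sketch; policy_net and game are placeholders, not the AGZ API):

    import numpy as np

    # Greedy, zero look-ahead: one forward pass, pick the top-ranked legal move.
    def greedy_move(policy_net, game):
        priors = policy_net(game.features())    # prior probability for each move
        legal = game.legal_move_mask()          # zero out illegal moves
        return int(np.argmax(priors * legal))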