Hi Simon,
do you remember my silly remarks in an email discussion almost a year ago?
You had written:
>> So, yes, with all the exciting work in DCNN, it is very tempting
>> to also do DCNN. But I am not sure if we should do so.
And my silly reply had been:
> I think that DCNN is somehow in a d
I would just mention that Maven/Scrabble truncated rollouts are not comparable
to Go/MCTS truncated rollouts. An evaluation function in Scrabble is readily at
hand, because scoring points is hugely correlated with winning. There is no
evaluation function for Go that is readily at hand.
There ha
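To make the contrast concrete, here is a minimal, purely hypothetical sketch of a truncated rollout in a Scrabble-like score race, where the score lead doubles as a cheap static evaluation of a non-terminal leaf. The point is that Go offers no comparably cheap, reliable substitute for the `heuristic` branch below; every name here is invented for illustration.

```python
import math
import random

def truncated_rollout(score_lead, moves_left, max_depth, rng):
    """Play random score swings for up to max_depth plies, then fall
    back to a static evaluation of the leaf (a squashed score lead)."""
    depth = 0
    while moves_left > 0 and depth < max_depth:
        score_lead += rng.choice([-2, -1, 0, 1, 2])  # random playout move
        moves_left -= 1
        depth += 1
    if moves_left == 0:
        # terminal position: exact result, no evaluation needed
        return 1.0 if score_lead > 0 else 0.0
    # non-terminal leaf: static evaluation; easy when score correlates
    # with winning (Scrabble), unavailable in this form for Go
    return 1.0 / (1.0 + math.exp(-score_lead))
```

A full playout simply sets `max_depth` high enough to always reach a terminal position, which is what Go/MCTS programs did for lack of a usable leaf evaluation.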
> I'd propose these as the major technical points to consider when
> bringing a Go program (or a new one) to an Alpha-Go analog:
> ...
> * Are RL Policy Networks essential? ...
Figure 4b was really interesting (see also Extended Tables 7 and 9): any
2 of their 3 components, on a single machin
On 1/27/16 12:08 PM, Aja Huang wrote:
2016-01-27 18:46 GMT+00:00 Aja Huang <ajahu...@google.com>:
Hi all,
We are very excited to announce that our Go program, AlphaGo, has
beaten a professional player for the first time. AlphaGo beat the
European champion Fan Hui by 5 ga
I think the first goal was and is to find a pathway that clearly works to
reach into the upper echelons of human strength, even if the first version
used a huge amount of resources. Once found, then the approach can be
explored for efficiencies from both directions, top down (take this away
and see
On Thu, Jan 28, 2016 at 3:14 PM, Stefan Kaitschick
wrote:
> That "value network" is just amazing to me.
> It does what computer go failed at for over 20 years, and what MCTS was
> designed to sidestep.
Thought it worth a mention: Detlef posted about trying to train a CNN
on win rate as well in F
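For anyone wondering what "training on win rate" means at its smallest scale, here is a toy sketch (nothing like Detlef's or DeepMind's setups, and every name is invented): a logistic regressor fit on hypothetical (feature vector, won) pairs. A real value network replaces the hand-made features with convolutional layers over the board, but the training target is the same 0/1 game outcome.

```python
import math

def train_value(samples, epochs=200, lr=0.5):
    """Fit a logistic win-rate predictor.
    samples: list of (feature_vector, won) with won in {0, 1}."""
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                       # gradient of the log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def win_rate(w, b, x):
    """Predicted probability that the side to move wins."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))
```

On a separable toy set (positive feature means the winner's side is ahead), a few hundred passes are enough to push the predictions toward 0 and 1.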
On Thu, Jan 28, 2016 at 10:29:29AM -0600, Jim O'Flaherty wrote:
> I think the first goal was and is to find a pathway that clearly works to
> reach into the upper echelons of human strength, even if the first version
> used a huge amount of resources. Once found, then the approach can be
> explored
I always thought the same. But I don't think they tackled the decomposition
problem directly.
Achieving good (non-terminal) board evaluations must have reduced the
problem.
If you don't do full playouts, you get much less thrashing between
independent problems.
It also implies a useful static L&D ev
Hi!
Since I didn't say that yet, congratulations to DeepMind!
(I guess I'm a bit disappointed that no really new ML models had to be
invented for this though, I was wondering e.g. about capsule networks or
training simple iterative evaluation subroutines (for semeai etc.) by
NTM-based appro
Indeed – Congratulations to Google DeepMind!
It’s truly an immense achievement. I’m struggling
to think of other examples of reasonably mature
and strongly contested AI challenges where a new
system has made such a huge improvement over
existing systems – and I’m still struggling …
Simon Lucas
I think such analysis might not be too useful. At least chess players
think it is not very useful: usually, to learn, you need to "wake up" your
brain, so computer analysis without reasons is probably only marginally useful.
But very entertaining
2016-01-28 13:27 GMT+02:00 Michael Markefka :
> here a comment by Antti Törmänen
> http://gooften.net/2016/01/28/the-future-is-here-a-professional-level-go-ai/
Thanks, exactly what I was looking for. He points out black 85 and 95
might be mistakes, but didn't point out any dubious white (computer)
moves. He picks out a couple of white moves a
2016-01-28 12:23 GMT+01:00 Michael Markefka :
> I find it interesting that right until he ends his review, Antti only
> praises White's moves, which are the human ones. When he stops, he
> even considers a win by White as basically inevitable.
>
> White moves are the AI ones, check the players
That would make my writing nonsense of course. :)
Thanks for the pointer.
On Thu, Jan 28, 2016 at 12:26 PM, Xavier Combelle
wrote:
>
>
> 2016-01-28 12:23 GMT+01:00 Michael Markefka :
>>
>> I find it interesting that right until he ends his review, Antti only
>> praises White's moves, which are t
I think many amateurs would already benefit from a simple blunder
check and a short list of viable alternatives and short continuations
for every move.
If I could leave my PC running over night for a 30s/move analysis at
9d level and then walk through my game with that quality of analysis,
I'd be
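Such a blunder check is simple to sketch once you assume a win-rate oracle. Everything below is hypothetical: `evaluate(ply, move)` stands in for a strong engine's score, which in practice you would obtain by driving the engine over GTP; the move names and threshold are made up.

```python
def find_blunders(moves, evaluate, threshold=0.10):
    """Flag played moves whose win rate drops more than `threshold`
    below the engine's preferred alternative.

    moves: list of (played_move, candidate_moves) per ply.
    evaluate: hypothetical callable (ply, move) -> win rate in [0, 1].
    Returns a list of (ply, played, better_move, win_rate_drop)."""
    blunders = []
    for ply, (played, candidates) in enumerate(moves, start=1):
        best = max(candidates, key=lambda m: evaluate(ply, m))
        drop = evaluate(ply, best) - evaluate(ply, played)
        if drop > threshold:
            blunders.append((ply, played, best, round(drop, 3)))
    return blunders
```

With 30s/move that loop is an overnight job for a full game; the short list of `candidates` per ply is exactly the "viable alternatives" mentioned above.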
I find it interesting that right until he ends his review, Antti only
praises White's moves, which are the human ones. When he stops, he
even considers a win by White as basically inevitable.
Now Fan Hui either blundered badly afterwards, or, more promisingly, it
could be hard for humans to evaluate
Hi Xavier,
Really nice comments by Antti Törmänen, to the point and very clear
explanation. Thanks for the pointer.
best regards,
Jan van der Steen
On 28-01-16 11:45, Xavier Combelle wrote:
here a comment by Antti Törmänen
http://gooften.net/2016/01/28/the-future-is-here-a-professional-leve
here a comment by Antti Törmänen
http://gooften.net/2016/01/28/the-future-is-here-a-professional-level-go-ai/
2016-01-28 11:19 GMT+01:00 Darren Cook :
> > If you want to view them in the browser, I've also put them on my blog:
> >
> http://www.furidamu.org/blog/2016/01/26/mastering-the-game-of-go
> If you want to view them in the browser, I've also put them on my blog:
> http://www.furidamu.org/blog/2016/01/26/mastering-the-game-of-go-with-deep-neural-networks-and-tree-search/
> (scroll down)
Thanks. Has anyone (strong) made commented versions yet? I played
through the first game, but it j
Congratulations!
What I find most impressive is the engineering effort: combining so many
different parts, several of which would be strong programs even standalone.
I think the design philosophy of using 3 different sources of "go
playing" strength is great in itself (and if you read the paper there
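One concrete point of that combination from the paper: leaf positions in the tree search are scored by mixing the value network's estimate with the rollout outcome, V(s) = (1 − λ)·v(s) + λ·z, with λ = 0.5 in the matches. A one-line sketch:

```python
def mixed_leaf_eval(value_net_estimate, rollout_result, lam=0.5):
    """Blend the value network's win estimate with the rollout
    outcome, weighted by the mixing constant lambda."""
    return (1.0 - lam) * value_net_estimate + lam * rollout_result
```

Setting `lam` to 0 or 1 recovers the value-network-only and rollout-only variants the paper also measures.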
On 28.01.2016 04:57, Anders Kierulf wrote:
Please let me know if I misinterpreted anything.
You write "Position evaluation has not worked well for Go in the past"
but I think you should write "...Computer Go..." because applicable,
reasonably accurate theory for human players' positional eval