David, that's a fantastic and succinct summarization. Tysvm!

On Jan 9, 2017 12:19 AM, "David Ongaro" <david.ong...@hamburg.de> wrote:

> On Jan 5, 2017, at 10:49 PM, Robert Jasiek <jas...@snafu.de> wrote:
>
>
> On 06.01.2017 03:36, David Ongaro wrote:
>
> > Two amateur players were analyzing a game when a professional player
> > happened to come by. So they asked him how he would assess the
> > position. After a quick look he said “White is leading by two points”.
> > The two players were wondering: “You can count that quickly?”
>
> Usually, accurate positional judgement (not only territory but all
> aspects) takes between a few seconds and 3 minutes, depending on the
> position and provided one is familiar with the theory.
>
>
> Believe it or not, you also rely on “feelings”; otherwise you wouldn’t be
> able to survive.
>
> Some see DNNs as a kind of “cache” which holds knowledge of the world in
> compressed form. Because it’s compressed it can’t always reproduce learned
> facts with absolute accuracy, but on the other hand it has the much more
> desirable property of yielding reasonable results even for states it has
> never seen before.
>
> Mathematically (the approach you seem to constrain yourself to) there
> doesn’t seem to be a good reason why this should work. But if you take the
> physical structure of the world into account, things change. In fact there
> is a recent, pretty interesting paper (not only for you, but surely also
> for other readers on this list) about this topic:
> https://arxiv.org/abs/1608.08225
>
> I interpret the paper like this: the number of states we have to be
> prepared for with our neural networks (either electronic or biological) may
> be huge, but compared to all mathematically possible states it’s almost
> nothing. That is due to the fact that our observable universe is an
> emergent result of relatively simple physical laws. That is also the reason
> why deep networks (i.e. with many layers) work so well, even though
> mathematically a one-layer network is enough. If the emergent behaviours of
> our universe can be understood in layers of abstraction, we can scale our
> network linearly, with the number of layers matching the number of
> abstractions. That’s a huge win over the exponential growth required when
> we need a mathematically correct solution for all possible states.
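>
> To make the linear-versus-exponential point concrete, here is a toy
> sketch in Python (my own back-of-the-envelope illustration, not taken
> from the paper, and not a rigorous lower bound): n-bit parity computed
> “deep”, one XOR stage per bit, takes n steps, while a naive
> one-hidden-layer lookup-style representation needs one unit per
> odd-parity input pattern, i.e. 2^(n-1) of them.
>
>     import itertools
>
>     def deep_parity(bits):
>         """Layered computation: one simple XOR stage per input bit."""
>         acc = 0
>         for b in bits:          # n stages, linear in n
>             acc ^= b
>         return acc
>
>     def shallow_parity_units(n):
>         """Hidden units in a naive one-hidden-layer (lookup-style)
>         representation: one unit per odd-parity input pattern."""
>         return sum(1 for bits in itertools.product([0, 1], repeat=n)
>                    if sum(bits) % 2 == 1)
>
>     print(deep_parity([1, 0, 1, 1]))   # -> 1 (odd number of ones)
>     print(shallow_parity_units(10))    # -> 512, i.e. 2**9 units for n = 10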
>
> The “physical laws” of Go are also relatively simple, and the complexity
> of Go is an emergent result of them. That is also the reason why the DNNs
> are trained with real Go positions, not just with random positions, which
> make up the majority of all possible Go positions. Does that mean the DNNs
> won’t perform well when evaluating random positions, or even just the
> “arcane positions” you discussed with Jim? Absolutely! But they don’t have
> to. That’s not their flaw but their genius.
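>
> Just to put rough numbers on “the majority of all possible Go positions”
> (a back-of-the-envelope sketch of my own; the training-set size of ~30
> million positions is an assumed figure, roughly the order of magnitude
> reported for AlphaGo’s KGS data set):
>
>     BOARD_POINTS = 19 * 19
>
>     # every point empty, black or white (ignoring legality for this estimate)
>     all_colorings = 3 ** BOARD_POINTS
>
>     # assumed size of a realistic training set (hypothetical figure)
>     trained_positions = 30_000_000
>
>     print(f"arbitrary colorings: ~10^{len(str(all_colorings)) - 1}")
>     print(f"fraction seen in training: {trained_positions / all_colorings:.3e}")
>     # -> arbitrary colorings: ~10^172
>     # -> fraction seen in training: ~1.7e-165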
>
> David O.
>
>
_______________________________________________
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go
