On Mon, Feb 18, 2019 at 10:05 AM Stefan Reich via AGI <agi@agi.topicbox.com>
wrote:

> Nothing wrong with pushing your own results if you consider them
> worthwhile...
>

Well, I think on one level it's much the same as Pissanetzky.

Pissanetzky's is a meaningful way of relating elements which generates new
patterns. You get new patterns all the time, but they are nevertheless
meaningful, because the relationships generating them are meaningful. So it
takes us away from the idea of learning every pattern, which is what I
believe traps deep learning (and prevents Tesla from spotting firetrucks,
and from getting to that last mile of self-driving).

Similarly, I found new patterns, which were very much like Pissanetzky's
invariant permutations. But I did it for language. When I projected out
these new patterns of "invariants" for each new sentence, I found hierarchy.

You can think of this as a next stage in a progression from symbolism to
distributed representation, now to novel but meaningful rearrangements of
distributed elements.
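To make the idea concrete (this is only a toy sketch of the principle, not
the algorithm in the paper below, and the vectors here are hand-made
hypothetical stand-ins for learned embeddings): hierarchy can fall out of
distributed representations simply by repeatedly grouping the most similar
adjacent elements of a sentence.

```python
# Toy illustration: hierarchy emerging from distributed (vector)
# representations by greedily merging the most similar adjacent pair.
# NOT the method of arXiv:1403.2152; vectors below are hypothetical.
import math

def cosine(u, v):
    # Standard cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def merge_tree(words, vecs):
    """Repeatedly merge the most similar adjacent pair into a subtree,
    averaging their vectors, until one tree spans the sentence."""
    nodes = list(words)
    vectors = [list(vecs[w]) for w in words]
    while len(nodes) > 1:
        best = max(range(len(nodes) - 1),
                   key=lambda i: cosine(vectors[i], vectors[i + 1]))
        merged = [(a + b) / 2
                  for a, b in zip(vectors[best], vectors[best + 1])]
        nodes[best:best + 2] = [(nodes[best], nodes[best + 1])]
        vectors[best:best + 2] = [merged]
    return nodes[0]

# Hand-made toy "embeddings": "the" and "cat" share contexts, "sat" does not.
VECS = {"the": (1.0, 0.0, 0.0),
        "cat": (0.9, 0.1, 0.0),
        "sat": (0.0, 0.0, 1.0)}

print(merge_tree(["the", "cat", "sat"], VECS))
# → (('the', 'cat'), 'sat')
```

The point of the sketch is only that the bracketing, i.e. the hierarchy, is
never stored anywhere; it is generated fresh for each sentence from the
relationships between the distributed elements.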

Meanwhile deep learning just keeps pushing against a ceiling of what can be
learned.

FWIW you can see an old and simple demo of the principle of hierarchy
coming out of novel rearrangements (of embeddings) at:

demo.chaoticlanguage.com

Summary paper circa 2014 at:

Parsing using a grammar of word association vectors
http://arxiv.org/abs/1403.2152

-Rob

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T581199cf280badd7-M3c751400cb9d6996ef37d278
