Hi,

> > What I think is that the set of patterns in perceptual and motoric data
> > has radically different statistical properties than the set of patterns
> > in linguistic and mathematical data ... and that the properties of the
> > set of patterns in perceptual and motoric data is intrinsically better
> > suited to the needs of a young, ignorant, developing mind.
>
> Sure it is. Systems with different sensory channels will never "fully
> understand" each other. I'm not saying that one channel (verbal) can
> replace another (visual), but that both of them (and many others) can
> give symbol/representation/concept/pattern/whatever-you-call-it
> meaning. No one is more "real" than others.


True, but some channels may -- due to the statistical properties of the data
coming across them -- be more conducive to the development of AGI than
others...


> > All these different domains of pattern display what I've called a "dual
> > network" structure ... a collection of hierarchies (of progressively
> > more and more complex, hierarchically nested patterns) overlayed with a
> > heterarchy (of overlapping, interrelated patterns).  But the statistics
> > of the dual networks in the different domains is different.  I haven't
> > fully plumbed the difference yet ... but, among the many differences is
> > that in perceptual/motoric domains, you have a very richly connected
> > dual network at a very low level of the overall dual network hierarchy
> > -- i.e., there's a richly connected web of relatively simple stuff to
> > understand ... and then these simple things are related to (hence
> > useful for learning) the more complex things, etc.
>
> True, but can you say that the relations among words, or concepts, are
> simpler?

I think the set of relations among words (considered in isolation, without
their referents) is "less rich" than the set of relations among perceptions
of a complex world, and far less rich than the set of relations among
{perceptions of a complex world, plus words referring to these
perceptions}....

And I think that this lesser richness makes sequences of words a much worse
input stream for a developing AGI.

I realize that quantifying "less rich" in the above is a significant
challenge, but I'm presenting my intuition anyway...

Also, relatedly and just as critically, the set of perceptions regarding the
body and its interactions with the environment is well-structured to give
the mind a sense of its own self.  This primitive infantile sense of
body-self gives rise to the more sophisticated phenomenal self of the child
and adult mind, which gives rise to reflective consciousness, the feeling of
will, and other characteristic structures of humanlike general
intelligence.  A stream of words doesn't seem to give an AI the same kind of
opportunity for self-development....



>
> In this short paper, I make no attempt to settle all issues, but just
> to point out a simple fact --- a laptop has a body, and is not less
> embodied than Roomba or Mindstorms --- that seems to have been ignored in
> the previous discussion.


I agree with your point, but I wonder if it's partially a "straw man"
argument.  The proponents of embodiment as a key aspect of AGI don't, of
course, think that Cyc is disembodied in a maximally strong sense -- they
know it interacts with the world via physical means.  What they mean by
"embodied" is something different.

I don't have the details at my fingertips, but I know that Maturana, Varela,
and Eleanor Rosch took some serious pains to carefully specify the sense in
which they feel "embodiment" is critical to intelligence, and to distinguish
their sense of embodiment from the trivial sense of "communicating via
physical signals."

I suggest your paper should probably include a careful response to the
characterization of embodiment presented in

http://www.amazon.com/Embodied-Mind-Cognitive-Science-Experience/dp/0262720213

I note that I do not agree with the arguments of Varela, Rosch, Brooks,
etc.  I just think their characterization of embodiment is an interesting
and nontrivial one, and I'm not sure NARS with a text stream as input would
be embodied according to their definition...

-- Ben



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com
