On Thu, Sep 4, 2008 at 2:10 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
>> Sure it is. Systems with different sensory channels will never "fully
>> understand" each other. I'm not saying that one channel (verbal) can
>> replace another (visual), but that both of them (and many others) can
>> give symbol/representation/concept/pattern/whatever-you-call-it
>> meaning. No one is more "real" than the others.
>
> True, but some channels may -- due to the statistical properties of the data
> coming across them -- be more conducive to the development of AGI than
> others...

I haven't seen any evidence for that. For human intelligence, maybe,
but for intelligence in general, I doubt it.

> I think the set of relations among words (considered in isolation, without
> their referents) is "less rich" than the set of relations among perceptions
> of a complex world, and far less rich than the set of relations among
> {perceptions of a complex world, plus words referring to these
> perceptions}....

Not necessarily. Some people may even make the opposite argument:
relations among non-linguistic components of experience are basically
temporal or spatial, while relations among words and concepts come in
many more types. I won't go that far, but I suspect that in some
sense all channels have the same (potential) richness.

> And I think that this lesser richness makes sequences of words a much worse
> input stream for a developing AGI
>
> I realize that quantifying "less rich" in the above is a significant
> challenge, but I'm presenting my intuition anyway...

If your premise is true, then your conclusion follows, but the
problem lies in that "IF".

> Also, relatedly and just as critically, the set of perceptions regarding the
> body and its interactions with the environment is well-structured to give
> the mind a sense of its own self.

We can say the same for the input/output operation set of any
intelligent system: a "SELF" is defined by what the system can feel
and do.

> This primitive infantile sense of
> body-self gives rise to the more sophisticated phenomenal self of the child
> and adult mind, which gives rise to reflective consciousness, the feeling of
> will, and other characteristic structures of humanlike general
> intelligence.

Agreed.

> A stream of words doesn't seem to give an AI the same kind of
> opportunity for self-development....

If the system just sits there and passively accepts whatever words
come into it, then what you said is true. But if the incoming "words"
are causally related to its outgoing "words", will you still say
that? (See the sketch below.)
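
To make the contrast concrete, here is a minimal sketch in Python
(toy classes and names of my own invention, not anything from NARS)
of the two situations. In the first loop the input stream is fixed in
advance; in the second, what comes in is a function of what went out:

    class EchoSystem:
        """Toy 'system': its outgoing word depends on the incoming one."""
        def accept(self, word):
            pass                     # passive case: nothing it does matters
        def respond(self, word):
            return word.upper()      # outgoing "word"

    class CoupledEnvironment:
        """Toy environment whose next word depends on the last output."""
        def initial_word(self):
            return "hello"
        def react(self, action):
            return "loud" if action.isupper() else "quiet"

    def passive_run(system, fixed_stream):
        # Case 1: the stream is fixed in advance; output changes nothing.
        for word in fixed_stream:
            system.accept(word)

    def interactive_run(system, env, steps):
        # Case 2: incoming words are causally related to outgoing words,
        # so the system can learn what it can "feel and do".
        word = env.initial_word()
        for _ in range(steps):
            action = system.respond(word)
            word = env.react(action)

    passive_run(EchoSystem(), ["hello", "world"])
    interactive_run(EchoSystem(), CoupledEnvironment(), steps=3)

In the second loop the system's "experience" is partly of its own
making, which is all that the self-development argument requires.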

> I agree with your point, but I wonder if it's partially a "straw man"
> argument.

If you read Brooks or Pfeifer, you'll see that most of their arguments
are explicitly or implicitly based on the myth that only a robot "has
a body", "has real sensors", "lives in a real world", ...

> The proponents of embodiment as a key aspect of AGI don't of
> course think that Cyc is disembodied in a maximally strong sense -- they
> know it interacts with the world via physical means.  What they mean by
> "embodied" is something different.

Whether a system is "embodied" does not depend on its hardware, but
on its semantics.

> I don't have the details at my finger tips, but I know that Maturana, Varela
> and Eleanor Rosch took some serious pains to carefully specify the sense in
> which they feel "embodiment" is critical to intelligence, and to distinguish
> their sense of embodiment from the trivial sense of "communicating via
> physical signals."

That is different. The "embodiment" school in CogSci doesn't focus on
the body (they know every human already has one), but on experience.
However, they have their own misconceptions about AI. As I mentioned,
Barsalou and Lakoff both thought strong AI is unlikely because a
computer cannot have human experience. I agree with what they said,
except for their narrow conception of intelligence (CogSci people
tend to equate "intelligence" with "human intelligence").

> I suggest your paper should probably include a careful response to the
> characterization of embodiment presented in
>
> http://www.amazon.com/Embodied-Mind-Cognitive-Science-Experience/dp/0262720213
>
> I note that I do not agree with the arguments of Varela, Rosch, Brooks,
> etc.  I just think their characterization of embodiment is an interesting
> and nontrivial one, and I'm not sure NARS with a text stream as input would
> be embodied according to their definition...

If I get the time (and motivation) to extend the paper into a journal
paper, I'll double its length by discussing "embodiment in CogSci". In
the current version, as a short conference paper, I'd rather focus on
"embodiment in AI" and only attack the "robot myth".

Pei

