I have a paper
(http://www.cogsci.indiana.edu/farg/peiwang/PUBLICATION/#semantics) on this
topic, which is mostly in agreement with what Kevin said.

For an intelligent system, it is important for its concepts and beliefs to
be grounded in the system's experience, but such experience can be textual.
Of course, sensorimotor experience is richer, but it is not fundamentally
different from textual experience.
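To make this concrete, here is a minimal sketch, not from the paper: a toy
agent whose "grounding" is nothing more than co-occurrence statistics over
its experience stream.  The ToyAgent class and both event streams below are
invented for illustration; the point is only that textual events and sensor
events enter through the same interface.

    # A toy illustration (Python): grounding as experience statistics.
    from collections import Counter, defaultdict

    class ToyAgent:
        """Grounds each concept in statistics over its experience.

        The agent never sees the world directly; it only sees a stream
        of (channel, value) events.  Whether a channel carries text or
        sensor readings makes no structural difference to the mechanism.
        """

        def __init__(self):
            # concept -> counts of events experienced alongside it
            self.grounding = defaultdict(Counter)

        def observe(self, concept, events):
            """Associate a concept with the events co-occurring with it."""
            self.grounding[concept].update(events)

        def meaning_of(self, concept):
            """A concept's 'meaning' here is its experiential profile."""
            return self.grounding[concept].most_common(3)

    agent = ToyAgent()

    # Textual experience: the concept "fire" appears near other words.
    agent.observe("fire", [("text", "hot"), ("text", "smoke"), ("text", "hot")])

    # Sensorimotor experience: the same concept co-occurs with readings.
    agent.observe("fire", [("thermo", "high"), ("camera", "bright")])

    # Either way, "fire" is grounded in the agent's own experience.
    print(agent.meaning_of("fire"))

Nothing hinges on the details; richer sensors only make the event stream
richer, they do not change the kind of grounding involved.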

Pei

----- Original Message -----
From: "maitri" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, December 09, 2002 5:52 PM
Subject: Re: [agi] AI on TV


> I don't want to underestimate the value of embodiment for an AI system,
> especially for the development of consciousness.  But this is just my
> opinion...
>
> As far as a very useful AGI is concerned, I don't see the necessity of a
> body or sensory inputs beyond textual input.  Almost any form can be
> represented as a mathematical model that can easily be input to the system
> in that manner.
> I'm sure there are others on this list who have thought a lot more about
> this than I have...
>
> Kevin
>
> ----- Original Message -----
> From: "Shane Legg" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Monday, December 09, 2002 4:18 PM
> Subject: Re: [agi] AI on TV
>
>
> > Gary Miller wrote:
> > > On Dec. 9 Kevin said:
> > >
> > > "It seems to me that building a strictly "black box" AGI that only
uses
> > > text or graphical input\output can have tremendous implications for
our
> > > society, even without arms and eyes and ears, etc.  Almost anything
can
> > > be designed or contemplated within a computer, so the need for dealing
> > > with analog input seems unnecessary to me.  Eventually, these will be
> > > needed to have a complete, human like AI.  It may even be better that
> > > these first AGI systems will not have vision and hearing because it
will
> > > make it more palatable and less threatening to the masses...."
> >
> > My understanding is that this current trend came about as follows:
> >
> > Classical AI systems were either largely disconnected from the physical
> > world or lived strictly in artificial micro-worlds.  This led to a
> > number of problems, including the famous "symbol grounding problem",
> > where the agents' symbols lacked any grounding in an external reality.
> > As a reaction to these problems, many decided that AI agents needed to
> > be more grounded in the physical world, "embodiment" as they call it.
> >
> > Some now take this to an extreme and think that you should start with
> > robotics, sensors, and control, and forget about logic and what
> > thinking is and all that sort of thing.  This is what you see now in
> > many areas of AI research, Brooks and the Cog project at MIT being
> > one such example.
> >
> > Shane