Gary Miller wrote:
On Dec. 9 Kevin said:
"It seems to me that building a strictly "black box" AGI that only uses text or graphical input\output can have tremendous implications for our society, even without arms and eyes and ears, etc. Almost anything can be designed or contemplated within a computer, so the need for dealing with analog input seems unnecessary to me. Eventually, these will be needed to have a complete, human like AI. It may even be better that these first AGI systems will not have vision and hearing because it will make it more palatable and less threatening to the masses...."
My understanding is that this current trend came about as follows:

Classical AI systems were either largely disconnected from the physical
world or lived strictly in artificial micro worlds. This led to a
number of problems, including the famous "symbol grounding problem",
where the agent's symbols lacked any grounding in an external reality.
As a reaction to these problems, many decided that AI agents needed to
be more grounded in the physical world -- "embodiment" as they call it.

Some now take this to an extreme and think that you should start with
robotics, sensors, and motor control, and forget about logic and what
thinking is and all that sort of thing. This is what you see now in
many areas of AI research, Brooks and the Cog project at MIT being
one such example.

Shane

