Dead right (in an ambiguous way :) )

Basically, an AGI without open-ended concepts will never live in the real world. 
I should add that I don't believe early, true AGIs *will* be anywhere near 
capable of natural language. All they will need is one or more systems of 
open-ended concepts.

Emotions are one such system. A drive/urge of hunger for food is an open-ended 
concept that allows an animal to seek and eat any of a whole range of foods 
(unless you're pregnant and it's 1 a.m. and only one particular form of 
chocolate will do - but even then it could be any of many brands).

A body can itself be regarded as a system of open-ended concepts - a means of 
effecting concepts of how to seek goals. Hands, other limbs, and indeed a 
torso offer a potentially infinite range of ways to effect commands to "handle" 
objects. They offer a roboticist many "degrees of freedom" - true "mobility."

(Do you think about the body, natural or robotic, from this POV, Bob?)
  Bob M: Mark Waser <[EMAIL PROTECTED]> wrote:
    >> But the question is whether the internal knowledge representation of the 
AGI needs to allow ambiguities, or should we use an ambiguity-free 
representation.  It seems that the latter choice is better. 

    An excellent point.  But what if the representation is natural language 
with pointers to the specific intended meaning of any words that are possibly 
ambiguous?  That would seem to be the best of both worlds.


  This is fine provided that the AGI lives inside a chess-like, ambiguity-free 
world, which could be a simulation or perhaps some abstract data-mining 
environment.
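Mark's "natural language with pointers" suggestion above can be sketched as a 
token-plus-sense-pointer structure. This is only a minimal illustration; the 
class names, the sense-ID scheme, and the tiny sense inventory are my own 
inventions, not anything proposed in the thread:

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical sense inventory; the IDs and glosses are invented for illustration.
SENSES = {
    "bank.n.01": "sloping land beside a body of water",
    "bank.n.02": "a financial institution",
}

@dataclass
class Token:
    """A natural-language word plus an optional pointer to its intended sense."""
    word: str
    sense: Optional[str] = None  # None means the word needed no disambiguation

def gloss(tokens: List[Token]) -> str:
    """Render a sentence, exposing any sense pointers for inspection."""
    parts = []
    for t in tokens:
        parts.append(f"{t.word}[{t.sense}]" if t.sense else t.word)
    return " ".join(parts)

sentence = [
    Token("deposit"),
    Token("the"),
    Token("cheque"),
    Token("at"),
    Token("the"),
    Token("bank", sense="bank.n.02"),  # the pointer resolves the ambiguity
]
print(gloss(sentence))  # deposit the cheque at the bank[bank.n.02]
```

The point of the sketch is that the surface form stays ordinary natural 
language, while ambiguity is removed only where it actually arises - which is 
the "best of both worlds" Mark describes.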

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com
