Stathis: Are you simply arguing that an embodied AI that can interact with the
real world will find it easier to learn and develop, or are you arguing that there is a fundamental reason why an AI can't develop in a purely virtual environment?
The latter. I'm arguing that a disembodied AGI has as much chance of getting to know, understand and be intelligent about the world as Tommy - a deaf, dumb, blind and generally sense-less kid who is totally autistic, can't play any physical game (let alone a mean pinball), and has a seriously impaired sense of self (what's the name for that condition?) - and all that even if the AGI *has* sensors. Think of a disembodied AGI as very severely mentally and physically disabled from birth - you wouldn't do that to a child, so why do it to a computer? It might be able to spout an encyclopaedia, show you a zillion photographs, and calculate a storm, but it wouldn't understand, or be able to imagine/reimagine, anything.

As I indicated, a proper, formal argument for this needs to be made - I and many others are thinking about it - and it shouldn't be long in coming, backed with solid scientific evidence. There is already a lot of evidence from mirror neurons that you do think with your body, and it just keeps mounting.
