On Jan 28, 2008 6:43 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:

> > Stathis: Are you simply arguing that an embodied AI that can interact
> > with the real world will find it easier to learn and develop, or are you
> > arguing that there is a fundamental reason why an AI can't develop in
> > a purely virtual environment?
>
> The latter. I'm arguing that a disembodied AGI has as much chance of
> getting to know, understand and be intelligent about the world as Tommy -
> a deaf, dumb and blind and generally sense-less kid, who's totally
> autistic, can't play any physical game let alone a mean pinball, and has
> a seriously impaired sense of self (what's the name for that condition?) -
> and all that is even if the AGI *has* sensors. Think of a disembodied AGI
> as very severely mentally and physically disabled from birth - you
> wouldn't do that to a child, so why do it to a computer? It might be able
> to spout an encyclopaedia, show you a zillion photographs, and calculate
> a storm, but it wouldn't understand, or be able to imagine/reimagine,
> anything. As I indicated, a proper, formal argument for this needs to be
> made - I and many others are thinking about it - and it shouldn't be long
> in forthcoming, backed with solid scientific evidence. There is already a
> lot of evidence via mirror neurons that you do think with your body, and
> it just keeps mounting.
Of course this is a variation on "the grounding problem" in AI. But do you
think some sort of *absolute* grounding is relevant to effective interaction
between individual agents (assuming you think any such ultimate grounding
could even perform a function within a limited system), or might it be that
systems interact effectively to the extent their dynamics are based on
*relevant* models, regardless of even proximate grounding in any functional
sense?

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=90569120-23ee79