Re: [agi] Re: [singularity] Motivational Systems that are stable

2006-10-29 Thread Ben Goertzel
Hi, There is something about the gist of your response that seemed strange to me, but I think I have put my finger on it: I am proposing a general *class* of architectures for an AI-with-motivational-system. I am not saying that this is a specific instance (with all the details nailed down) of …

[agi] Re: [singularity] Motivational Systems that are stable

2006-10-28 Thread Richard Loosemore
Ben, There is something about the gist of your response that seemed strange to me, but I think I have put my finger on it: I am proposing a general *class* of architectures for an AI-with-motivational-system. I am not saying that this is a specific instance (with all the details nailed dow…

[agi] Re: [singularity] Motivational Systems that are stable

2006-10-27 Thread Richard Loosemore
Curious. A couple of days ago, I responded to demands that I produce arguments to justify the conclusion that there were ways to build a friendly AI that would be extremely stable and trustworthy, but without having to give a mathematical proof of its friendliness. Now, granted, the text was com…