Re: Safe forms of AGI [WAS Re: [singularity] The humans are dead...]

2007-05-31 Thread Roland Pihlakas
On 5/31/07, Chuck Esterbrook <[EMAIL PROTECTED]> wrote: On 5/29/07, Richard Loosemore <[EMAIL PROTECTED]> wrote: > Instead, what you do is build the motivational system in such a way that it must always operate from a massive base of thousands of small constraints. A system that is constrained ...

Re: Safe forms of AGI [WAS Re: [singularity] The humans are dead...]

2007-05-31 Thread Chuck Esterbrook
On 5/29/07, Richard Loosemore <[EMAIL PROTECTED]> wrote: Instead, what you do is build the motivational system in such a way that it must always operate from a massive base of thousands of small constraints. A system that is constrained in a thousand different directions simply cannot fail in a ...
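A minimal Python sketch of the idea quoted above, assuming the "thousands of small constraints" act as many independent checks on a candidate action rather than a single utility score. The function names, thresholds, and stub constraints here are illustrative assumptions only, not Loosemore's actual design:

    # Sketch only: every name, weight, and threshold below is an assumption.
    # A candidate action is described by its impact on many "facets" of the
    # world; each small constraint watches one facet. The action is permitted
    # only if no single constraint is badly violated AND average satisfaction
    # across the whole base stays high, so failure would require the action
    # to look acceptable in thousands of directions at once.

    def make_constraints(n=1000):
        """Build n small constraint checks. Each returns a satisfaction score
        in [0, 1]: 1.0 = its facet is untouched, 0.0 = severely disrupted.
        Real constraints would be learned; these are deterministic stubs."""
        def constraint(facet):
            def check(action_impacts):
                return 1.0 - action_impacts.get(facet, 0.0)
            return check
        return [constraint(i) for i in range(n)]

    def permitted(action_impacts, constraints, veto=0.2, mean_floor=0.95):
        scores = [c(action_impacts) for c in constraints]
        if min(scores) < veto:              # any near-total violation vetoes
            return False
        return sum(scores) / len(scores) >= mean_floor

    constraints = make_constraints()
    benign  = {3: 0.1}                      # mild impact on one facet
    drastic = {i: 0.9 for i in range(50)}   # severe impact on many facets
    print(permitted(benign, constraints))   # True  (allowed)
    print(permitted(drastic, constraints))  # False (blocked)

The point mirrored in the thread is that the decision never hinges on one goal or one check: weakening any single constraint leaves thousands of others still binding.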

RE: Safe forms of AGI [WAS Re: [singularity] The humans are dead...]

2007-05-30 Thread Keith Elis
Thanks for your response, Richard. I'm not on equal footing when it comes to cognitive science, but I do want to comment on one idea. Richard Loosemore wrote: > Instead, what you do is build the motivational system in such a way that it must always operate from a massive base of thousands of small constraints ...

Safe forms of AGI [WAS Re: [singularity] The humans are dead...]

2007-05-29 Thread Richard Loosemore
Keith Elis wrote: Answer me this, if you dare: Do you believe it's possible to design an artificial intelligence that won't wipe out humanity? Yes, most certainly I do. I can hardly stress this enough. Did you read my previous post on the subject of motivation systems? This contained ...