On 5/31/07, Chuck Esterbrook <[EMAIL PROTECTED]> wrote:
On 5/29/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Instead, what you do is build the motivational system in such a way that
> it must always operate from a massive base of thousands of small
> constraints. A system that is constrained in a thousand different
> directions simply cannot fail in a
Thanks for your response, Richard. I'm not on equal footing when it
comes to cognitive science, but I do want to comment on one idea.
Keith Elis wrote:
Answer me this, if you dare: Do you believe it's possible to design an
artificial intelligence that won't wipe out humanity?
Yes, most certainly I do.
I can hardly stress this enough.
Did you read my previous post on the subject of motivation systems?
This contained m