On May 28, 2007, at 5:44 PM, Keith Elis wrote:

Richard Loosemore wrote:

Your email could be taken as threatening to set up a website to promote violence against AI researchers who speculate on ideas that, in your judgment, could be considered "scary".

I'm on your side, too, Richard.

Answer me this, if you dare: Do you believe it's possible to design an
artificial intelligence that won't wipe out humanity?


Speaking for myself, I believe it is possible that an AI will not wipe out humanity. I don't believe it is possible to design one that provably will not wipe out humanity. Does this mean we should not proceed? I don't think so.

- samantha


