Technologically, AI is far, far easier than uploading.  So AI will
come first, and we will have to build AI that is reliably nice to us,
or suffer the consequences.

It's not "controlling an entity smarter than you" if it was built from
scratch to be nice, and continues to be nice of its own accord.

When you imagine scenarios such as "the AI is so nice to us that it
decides to kill us," you're relying on your own cognitive hardware to
see that the AI would, in fact, be wrong.  For a truly nice AI, we'd
want to include the cognitive hardware that underlies human moral
reasoning, so that errors like these would be far less likely to
occur.

I suggest that people in this thread read the comments on the
"Friendly AI is bunk" post, particularly Starglider and Nick Hay's.
They essentially answer every point Shane has made, and he has yet to
address them.

Nice to see people finally talking about this, though.

--
Michael Anissimov
Lifeboat Foundation      http://lifeboat.com
http://acceleratingfuture.com/michael/blog

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
