Keith Elis wrote:

> Shane Legg wrote:
> --------------------
> If a machine was more intelligent/complex/conscious/...etc... than
> all of humanity combined, would killing it be worse than killing all
> of humanity?
> --------------------
>
> You're asking a rhetorical question, but let's get the correct answer
> out there first: if it comes down to killing me or a machine, I want
> that machine dead. If you're going to navel-gaze over some
> hair-splitting ethical conundrum about whom it makes more objective
> sense to terminate, I'll kill it myself while you're pondering. And
> since you're not sure whether killing machines is worse or better
> than killing me and the people I care about, I'm probably going to
> have to do something about you, too, since you're the guy trying to
> build the damn things.

I have been of a mind for years to start a public website on 'Scary
AI Researchers,' where people could look up the scariest things said
by various AI researchers and learn more about them. I haven't done it
because I don't want to put anyone at risk. But someone will build
such a website eventually, and then everything you ever wrote on the
topic *anywhere* will be taken completely out of context, and it will
take an Act of Congress to set the record straight.
Perhaps "something needs to be done" about you too, eh?