Unfortunately, I have come to agree with Keith on this issue.

Discussing issues like this [comparative moral value of
humans versus superhuman AGIs] on public
mailing lists seems fraught with peril for anyone who feels they
have a serious chance of actually creating AGI.

Words are slippery, and anything said in natural language is
open to multiple misinterpretations.  I find myself very
reluctant to express my actual opinions on these topics publicly,
because of the risk that ignorant people will misinterpret them
later on.

Right now, no one cares what a bunch of geeks and freaks
say about AGI and the future of humanity.

But once a powerful AGI is actually created by some person X, X's
prior mailing-list posts are likely to be scrutinized and
interpreted by people whose points of view are about as far from
transhumanism as you can possibly imagine ... but who
may have plenty of power in the world...

Disgusting, perhaps, but that seems to be the nature of
human reality.  And human reality is where we live, for now...

I am by no means trying to squelch discussion by others
on this list.  Hell no!!  I am happy to see these issues
discussed openly and interestingly by others!  I'm just
explaining why I personally choose not to enter into such
discussions anymore, now that I've become convinced I have
a palpable chance of creating a powerful AGI with
superhuman capability, if my project goes well for a
while...

-- Ben G


On 5/28/07, Keith Elis <[EMAIL PROTECTED]> wrote:

Shane Legg wrote:

> Are you suggesting that I avoid asking questions that might entail
> unpleasant answers?  Maybe, if we all go around not discussing scary
> stuff, when super intelligence arrives everything will be just fine?
>
> Rather than setting up a website to intimidate people who try to ask
> difficult questions, maybe you should try to encourage more debate so
> that we can work out some good answers before we need them.


Shane, you might not believe this, but I'm on your side.

Your original question was 'So, would killing a super intelligent
machine (assuming it was possible) be worse than killing a human?'

There are many ways to answer this question. Really smart people like
you, Samantha, and Richard, along with the other geniuses two or more
standard deviations above the mean, will certainly have very interesting
and persuasive responses. However, I believe the rest of humanity, which
is to say nearly everyone on the planet, will answer this question in a
manner similar to the viewpoint I laid out for you, perhaps in a milder
form, perhaps in an even stronger one.

If you think this is a disturbing viewpoint, I agree. If you think it's
counterproductive, I agree. If you think it's irrationally neo-Luddite,
I agree. If you think this viewpoint isn't common, then step away from
whatever it is you're doing and talk to 10 random strangers per day for
a month. You don't have to talk about AI, just talk about anything, and
really try to get a sense of what's important to them.

Then parse your question through their eyes.

Actually, you don't have to go to the trouble, because I've done
the work for you. I post this viewpoint occasionally, here and elsewhere,
when the opportunity presents itself, because the vast majority of humans
are not subscribed to this list, and their perspective is the one that
will probably win the day from a political, regulatory, and financial
standpoint.

In the end, my advice is pragmatic: any time you post publicly on topics
such as these, where the stakes are very, very high, ask yourself: Can I
be taken out of context here? Is this position, whether devil's advocate
or not, going to come back and haunt me? If it can come back and haunt
you, assume it will.

I'm on your side. Really.

Keith


