On May 28, 2007, at 6:23 PM, Keith Elis wrote:
Samantha Atkins wrote:
On what basis is that answer correct? Do you mean factual in that it is the choice that you would make and that you believe proper? Or are you saying it is more objectively correct? If so, on what basis? Mere assertion and braggadocio will not do for an objective response.
I'm not really making an argument, nor treating the question fairly. I'm posting a reaction that I believe would be a common one among our fellow humans.
Yeah, I understand where you are coming from now. Of course I agree
that would be the standard reaction.
The question is not one of which to kill but of which, if either, it is morally worse to kill and why.
I fully understood the question. And I have many answers at many levels I could post. The level at which I chose to respond was far beneath the level you might have been expecting, but no less important a response.
Now that we have that level out of the way, are there other levels
you would care to comment on?
If humans can generally be expected to prefer themselves and their own continuance much more than even the most unimaginably powerful and wonderful non-human intelligence or species, and if it is generally certain they would consider a more powerful intelligence a very real threat, then it would seem they become a problem to such an intelligence. Unless, of course, they are so hopelessly outclassed that the intelligence can let them stew to their heart's content.
Yes, I know, you're really smart. But do you mind if humanity gets wiped out or not?
I very, very much mind. But would I sacrifice such a vast intelligence to protect humanity? That is a highly rhetorical question I hope never to need to answer in reality. Whatever my answer might be, it would not be automatic. If I knew beyond a shadow of a doubt that only one of A and B could survive going forward, that A exemplified far more of everything I value by a very considerable margin, that it was somehow my own choice which survived, and that I am a member of B, what would I do? That is a different question from the original, but it seems to be what the question is taken to "really" be. Which is fascinating.
The question in this form is much too rhetorical and unlikely. It is a classic "lifeboat problem". Those are notoriously difficult to answer without appearing monstrous to someone. In the original form of the question, I will answer that yes, I would consider destroying a being vastly more intelligent and capable than any human, or even than all humans, as more heinous than destroying a human being or even all of humanity. Although it is pretty meaningless to compare or grade such horrors as the destruction of humanity. Does that make me monstrous somehow? Can any answer to a grossly unlikely hypothetical like this really say anything important about the answerer?
A more immediate form of the question is whether it is monstrous to work on AI, given that AI might destroy humanity and given that we cannot provably create an AI that will not do so. Of course, that question could be asked about a lot of technology, and is asked by some, such as Bill Joy. In answer to that, I believe that humanity without massively greater intelligence is not sustainable. I also believe that the value of significantly greater intelligence is so high as to justify substantial risks. So I believe the risks of AI to humanity, while not inconsiderable, are less than the risks to humanity of not developing AI, and I believe the potential benefits more than justify the risks.
If I knew with certainty somehow that I and others were building
humanity's successor would even this be enough to deter me? That
is an even more interesting rhetorical question.
I have been of a mind for years to start a public website about 'Scary AI Researchers' where people can look up the scariest things said by the various AI researchers and learn more about them. I haven't done this because I don't want to put anyone at risk.
So instead you put out an implied threat that might tend to suppress
open exploration of such questions? Would you prefer conclusions
to be reached privately in this area?
No, I would prefer that our best AI researchers not opine on these high-stakes matters in unstructured forums. This area is so important, and comments so easily misconstrued, that only sustained argumentation from the best available evidence is advisable. Write a technical, publication-quality paper. These are harder to take out of context. If you're not an AI researcher then this doesn't apply, but Shane is, hence my response.
That sounds reasonable. However, it doesn't square with declaring an actual intent to create a website that would give ammunition to such possibly dangerous detractors. But perhaps that was a rhetorical flourish designed only to get us to take such things seriously.
- samantha
-----
This list is sponsored by AGIRI: http://www.agiri.org/email