Hi Pei / Colin,
> Pei: This is the conclusion that I have been most afraid of from this
> "Friendly AI" discussion. Yes, AGI can be very dangerous, and I don't
> think any of the solutions proposed so far can eliminate the danger
> completely. However I don't think this is a valid reason to slow down
> the research.
Wow! This is an interesting statement. "Yes, new development X
could be very dangerous, but since we can't get 100% certainty of
safety, we should press ahead with an implementation that is very
significantly less than 100% guaranteed safe, because we might need
this technology to ensure that we are safe! And we can't afford to slow
down the development of this technology X even if the purpose is to
make technology X safer."
So far, no one on this list has suggested stopping AGI research or
development. What has been suggested is that, if it is necessary to
free resources to work on the means to make AGIs safe/friendly,
then the work on building the basic AGI mentation architecture should
be slowed to free those resources and to allow the work on friendliness
implementation to catch up.
No one on the list has suggested any reason for all the haste. Why is
the haste important or necessary?
You might like to compare the AGI development issue to the
Manhattan Project. There was an argument that having the A-bomb,
while dangerous, was going to be a net benefit - in terms of ensuring
that the Germans didn't get it first, and later in terms of bringing the
Pacific war to a faster close.
But safety was always a consideration. Firstly, at the obvious level,
the bomb had to be safe enough for the US to handle and deliver. It
was all pretty pointless building a bomb that was likely to blow up
before it left the US! Secondly, Oppenheimer was concerned that setting
off an A-bomb could cause a run-away fire in the atmosphere - I've
forgotten what he and others thought might combust (I guess it was
oxygen and nitrogen). If such a run-away conflagration could be
triggered, then there was clearly no point in having the bomb, since it
would kill everyone. But the crucial point was that this issue of
run-away conflagration was (a) identified as a legitimate concern,
(b) investigated, and (c) the bomb was not used until the issue had
been shown not to be a problem.
> Pei: I don't think any of the solutions proposed so far can eliminate
> the danger completely.
Maybe so, but reducing it at least somewhat seems to me to be worth
the effort.
> Pei: So my position is: let's go ahead, but carefully.
So far at least, that's my own position too. But what do you mean by
being careful, if it doesn't include using multiple strategies to try to
significantly improve the odds that AGIs will be safe and friendly?
You said:
> Pei: (2) Don't have AGI developed in time may be even more dangerous.
> We may encounter a situation where AGI is the only hope for the
> survival of the human species. I haven't seen a proof that AGI is
> more likely to be evil than otherwise.
I haven't seen the case for why we actually are urgently and critically
dependent on having AGIs to solve humanity's big problems. (Safe &
friendly AGIs could be useful in lots of areas, but that's totally different
from being something that we cannot survive without.)
I personally think humans as a society are capable of saving
themselves from their own individual and collective stupidity. I've
worked explicitly on this issue for 30 years and still retain some
optimism on the subject.
> Colin: I'm with Pei Wang. Let's explore and deal with it.
OK, if you're with Pei, what exactly is the position that you are not with?
Cheers, Philip