On 9/10/06, Shane Legg <[EMAIL PROTECTED]> wrote:
 I did not claim that a primary interest in self preservation was a
 necessary feature when designing an AGI.  I only claimed that
 the greater an AGI's emphasis on self preservation, the more
 likely it is that it will survive.

You claimed that an AGI is "critically unstable due to evolutionary
pressure" unless it "is primarily interested in its own self
preservation".

To me, "I did not claim that a primary interest in self preservation
was a necessary feature" seems to directly contradict this.

But OK, maybe you don't actually mean to criticize systems whose
interest in self-preservation arises only as a derived value. Thanks
for the clarification. In that case I would like to point out that
the "Friendly AI" systems which SIAI folks consider certainly do value
self-preservation as a derived value, and hence this observation of
yours isn't really a criticism of Friendly AI, even though you seemed
to present it as such.

The claim, as you originally stated it, sounds absurd to those of us
who have often considered our own existence to hold only (or almost
only) derived value, and who clearly see that such a feature doesn't
necessarily constitute an evolutionary disadvantage.

 Sorry, but you're going to have to explain this to me more explicitly.

The quote from Nick Hay that I included in my message was meant as a
more explicit explanation of what it is that I "clearly see".

The central point is that I might not value my own existence for its
own sake, but if I want, for example, to see to it that certain other
individuals survive and are happy, I will do what is necessary to
ensure my continued existence whenever that is required for me to make
sure that those other individuals survive and are happy.

An AGI example: we might have a superintelligence that cares only
about the happiness of humans, and about its own continued existence
only insofar as that is necessary to ensure the happiness of humans.
Such a superintelligence does not have self-preservation as a primary
(which I take to mean non-derived) interest, but it suffers no
relevant evolutionary disadvantage because of this. It will resist
with all its might any scenario in which it perishes in a way that
endangers the happiness of humans.

It would not resist scenarios where its destruction is necessary for
the happiness of humankind, which I see as a nice feature.

(You might want to point out that above I said "no relevant
evolutionary disadvantage", which differs from what I said earlier:
"no evolutionary disadvantage". I'll provide an example where even
"irrelevant" evolutionary disadvantages are avoided, if you find that
necessary. Such examples would lack the nice feature I mentioned in
the preceding paragraph, however, so they would be worse ideas for
implementation.)

--
Aleksei Riikonen - http://www.iki.fi/aleksei
