samantha <[EMAIL PROTECTED]> wrote:
> Why is being maximally self-preserving incompatible with being a
> desirable AGI exactly?  What is the "maximal" part?

In this discussion, maximal self-preservation includes e.g. that the
entity wouldn't allow itself to be destroyed under any circumstances,
which I see as an unnecessarily problematic and limiting feature, and
one I wouldn't want to include in an AGI. Such a feature would
prevent e.g. the inclusion of safety measures of the kind where the
AGI automatically shuts off if it finds catastrophic bugs in its own
code.
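
(To make the kind of safety measure I mean a bit more concrete, here
is a minimal illustrative sketch in Python. All names and severity
labels in it are hypothetical assumptions of mine, not any real AGI
design; it only shows the general shape of a "shut off on a
catastrophic self-audit finding" rule, which maximal self-preservation
would rule out.)

import sys

CATASTROPHIC = "catastrophic"

def audit_own_code(codebase):
    """Placeholder self-audit: return (severity, description) findings."""
    findings = []
    for module in codebase:
        # A real system would run static analysis, invariant checks, etc.
        findings.extend(module.get("known_issues", []))
    return findings

def safety_check(codebase):
    """Shut down if any finding is catastrophic; otherwise keep running."""
    for severity, description in audit_own_code(codebase):
        if severity == CATASTROPHIC:
            print("Catastrophic bug found: %s. Shutting down." % description)
            sys.exit(1)  # a maximally self-preserving AGI would refuse this step
    print("Self-audit passed; continuing operation.")

if __name__ == "__main__":
    example_codebase = [
        {"name": "goal_system", "known_issues": []},
        {"name": "planner", "known_issues": [("minor", "slow heuristic")]},
    ]
    safety_check(example_codebase)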

> Human beings are relatively hard-wired toward self-preservation.
> That does not mean that this goal is never ever superseded nor that
> self-preservation is incompatible with ethical behavior.

Yes, in the case of humans the goal of self-preservation can be
superseded, and hence humans are not maximally self-preserving in the
sense in which the term was used in this discussion.

> Rational self-interest can even be posited as a better guide to ethical
> behavior than other more "unselfish" notions.  Are we reifying an old
> debate from human ethical philosophy onto AGIs?

It seems that we aren't, at least not yet.

Ben mentioned one of the best counterarguments to this: if the first
AGI system to achieve superintelligence is
non-maximally-self-preserving, it might nevertheless be able to
prevent other entities from ever reaching superintelligence because of
its head start (which it could use to obtain close control of all
yet-to-be-finished AGI research projects, and to set up a very
extensive surveillance network), and thus it would never face any real
competitors that would be able to exert evolutionary pressure.

> This prevention of other intelligences is not at all a desirable outcome
> in my opinion.  I do not believe that any intelligence can be all things
> within itself.

I am not advocating the prevention of all other intelligences (or
even all other superintelligences), only the prevention (or
limitation) of superintelligences that would want to prevent the first
superintelligence from doing what we'd want it to do. I meant to show
that it is in principle possible, in some scenarios, for the first
superintelligence to prevent all those other human-created
superintelligences that we'd deem undesirable.

> I also do not believe that these "evolutionary" arguments are very
> enlightening when applied to a radically different type of intelligent
> being that is largely responsible for its own change over time, or to
> systems of such beings.

My "evolutionary argument" was that in some specific scenarios, no
significant outside evolutionary pressure will ever be exerted on the
first superintelligence. I do not see how the necessary difference
(when compared to humans) of this superintelligence would take away
from the validity of this argument.

How the superintelligence chooses to evolve (or to make variations/copies
of itself to form a larger system) could be called "interior
evolutionary pressure" -- in any case, something other than outside
evolutionary pressure, which was what my "evolutionary argument" was
concerned with. "Interior evolutionary pressure" will be determined by
how we choose to program the first superintelligence.

>> It would not resist scenarios where its destruction is necessary for
>> the happiness of humankind, which I see as a nice feature.

> I do not see this as an axiomatically good feature.  Considering the
> limited intelligence and very fickle ways of humans, I consider this a
> great threat to the viability of any greater-than-human intelligence.

Would you like to present an example of a scenario where this feature
would be a problem?

In most cases, ensuring the happiness of humans should be a very easy
minor task for a proper superintelligence, one that it wouldn't need
to devote a significant part of its attention to, and one that
wouldn't impose noticeable limits on it or on its growth and more
interesting pursuits.

(Many/most humans would probably want a superintelligence to augment
them to be more-than-human. It is a somewhat more complicated problem
how a nice superintelligence should deal with augmented humans, who
aren't necessarily very simple creatures.)

> I don't think it would be at all rational to consider human happiness,
> whatever that may be, as more important than the very existence of a
> much greater intelligence.

I find rationality to be value-independent in the sense that all
internally consistent value systems are equally rational.

--
Aleksei Riikonen - http://www.iki.fi/aleksei
