This document says:

> This Darwinian logic could also apply to artificial agents, as agents may
> eventually be better able to persist into the future if they behave
> selfishly and pursue their own interests with little regard for humans,
> which could pose catastrophic risks.
They have no interests, any more than a dishwasher does. They have no
motives, and no instinct of self-preservation, unless someone programs
these things into them, which I think might be a disastrous mistake. I
do not think the instinct for self-preservation is an emergent quality
of intelligence, but I should note that Arthur C. Clarke and others
*did* think so.

An AI in a weapon might be programmed with self-preservation, since
people and other AIs would try to destroy it. I think putting AI into
weapons would be a big mistake.
