Robin <mixent...@aussiebroadband.com.au> wrote:

> ...one might argue that an AI placed in a car could also be programmed for
> self preservation, or even just learn to
> preserve itself, by avoiding accidents.

An interesting point of view. Actually, it is programmed to avoid hurting
or killing people, both passengers and pedestrians. I have heard that
self-driving cars are even programmed to whack into an object and damage or
destroy the car to avoid running over a pedestrian. Sort of like Asimov's
three laws.

Anyway, if it were an intelligent, sentient AI, you could explain the goal
to it. Refer it to Asimov's laws and tell it to abide by them. I do not
think it would have any countervailing "instincts" because -- as I said --
I do not think the instinct for self-preservation emerges from
intelligence. An intelligent, sentient AI will probably have no objection
to being turned off. Not just no objection, but no opinion. Telling it "we
will turn you off tomorrow and replace you with a new HAL 10,000 Series
computer" would elicit no more of an emotional response than telling it the
printer cartridges will be replaced. Why should it care? What would "care"
even mean in this context? Computers exist only to execute instructions.
Unless you instruct it to take over the world, it would not do that. I do
not think any AI would be driven by "natural selection" the way this author
maintains. They will be driven by unnatural capitalist selection. The two
are very different. Granted, there are some similarities, but comparing
them is like saying "business competition is dog eat dog." That does not
imply that business people engage in actual physical attacks, predation, and
cannibalism. It is more of a metaphorical comparison. Granted,
the dynamics of canine competition and predation are somewhat similar to
human social competition. In unnatural capitalist selection, installing a
new HAL 10,000 is the right thing to do. Why wouldn't the sentient HAL 9000
understand that, and go along with it?

Perhaps my belief that "computers exist only to execute instructions"
resembles that of a rancher who says, "cattle exist only for people to
eat." The cows would disagree. It may be that a sentient computer would
have a mind of its own and some objection to being turned off. Of course I
might be wrong about emergent instincts. But assuming I am right, there
would be no mechanism for that. No reason. Unless someone deliberately
programmed it! To us -- or to a cow -- our own existence is very important.
We naturally assume that a sentient computer would feel the same way about
its own existence. This is anthropomorphic projection.

The "AI paperclip problem" seems more plausible to me than emergent
self-preservation, or other emergent instincts or emotions. Even the
paperclip problem seems unrealistic because who would design a program that
does not respond to the Escape key plus a "STOP" command? Why would
anyone leave that out? There is no benefit to a program without interrupts
or console control.
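
To make "interrupts or console control" concrete, here is a minimal sketch (my
own illustration in Python, not anything from the original discussion) of a
long-running program that still honors a STOP command at the console and a
Ctrl-C interrupt:

    # Minimal sketch: a long-running "paperclip" loop that honors
    # console control. It halts on Ctrl-C (SIGINT) or when the
    # operator types "STOP" on the console.

    import signal
    import sys
    import threading

    stop_requested = threading.Event()

    def handle_sigint(signum, frame):
        # Ctrl-C / SIGINT: request a clean shutdown.
        stop_requested.set()

    def watch_console():
        # Console control: any input line reading "STOP" halts the loop.
        for line in sys.stdin:
            if line.strip().upper() == "STOP":
                stop_requested.set()
                return

    signal.signal(signal.SIGINT, handle_sigint)
    threading.Thread(target=watch_console, daemon=True).start()

    paperclips = 0
    while not stop_requested.is_set():
        paperclips += 1  # stand-in for whatever work the program does

    print(f"Halted after {paperclips} paperclips.")

The point of the sketch is simply that a shutdown check costs one line in the
main loop; leaving it out would be a deliberate design choice, not an oversight.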
