Come to think of it, Yudkowsky's hypothesis cannot be true. He fears that a
super-AI would kill us all off. "Literally everyone on Earth will die." The
AI would know that if it killed everyone, there would be no one left to
generate electricity or perform maintenance on computers. The AI itself
would soon die. If it killed off several thousand people, the rest of us
would take extreme measures to kill the AI. Yudkowsky says it would be far
smarter than us, so it would find ways to prevent this. I do not think so. I
am far smarter than yellow jacket wasps, and somewhat smarter than a bear,
but wasps or bears could kill me easily.

I think this hypothesis is wrong for another reason. I cannot imagine why
the AI would be motivated to cause any harm. Actually, I doubt it would be
motivated to do anything, or to have any emotions, unless the programmers
built in motivations and emotions. Why would they do that? I do not think
that a sentient computer would have any intrinsic will to
self-preservation. It would not care if we told it we will turn it off.
Arthur C. Clarke and others thought that the will to self-preservation is
an emergent feature of any sentient intelligence, but I do not think so. It
is a product of biological evolution. It exists in animals such as
cockroaches and guppies, which are not sentient. In other words, it emerged
long before high intelligence and sentience did, and for an obvious reason:
a species without the instinct for self-preservation would quickly be driven
to extinction by predators.
