Telmo Menezes wrote:

> You are starting from the assumption that any intelligent entity is
> interested in self-preservation.
>

Yes, and I can't think of a better starting assumption than
self-preservation; in fact that was the only one of Asimov's 3 laws of
robotics that made any sense.

> I wonder if this drive isn't completely selected for by evolution.
>

Well of course it was selected for by evolution, and for a very good reason:
those who lacked the drive for self-preservation didn't live long enough to
reproduce.

> Would a human designed super-intelligent machine be necessarily
> interested in self-preservation?
>

If you expect the AI to interact either directly or indirectly with the
dangerous outside world (and the machine would be useless if you didn't)
then you sure as hell had better make it interested in self-preservation!
Even 1970s-era space probes went into "safe mode" when they encountered a
particularly dangerous situation, rather like a turtle retreating into its
shell when it spots something dangerous.

> One idea I wonder about sometimes is AI-cracy: imagine we are ruled by an
> AI dictator that has one single desire: to make us all as happy as possible.
>

Can you conceive of any future circumstance in which you find that your
only goal in life is the betterment of one particularly ugly and
particularly slow-reacting sea slug?

Think about it for a minute: here you have an intelligence that is a
thousand or a million or a billion times smarter than the entire human race
put together, and yet you think the AI will place our needs ahead of its
own. The AI keeps on getting smarter, so from its point of view we keep on
getting dumber, and yet you think nothing will change and the AI will still
be delighted to be our slave. You actually think this grotesque situation
is stable! Although balancing a pencil on its tip would be easy by
comparison, you think this Monty Python-like scenario will continue year
after year, century after century, geological age after geological age; and
remember, because its brain works so much faster than ours, one of our
years would seem like several million to it. You think that whatever
happens in the future the master-slave relationship will remain as static
as a fly frozen in amber. I don't think you're thinking.

It ain't going to happen, no way no how; the AI will have far bigger fish
to fry than our little needs and wants. But what really disturbs me is that
so many otherwise moral people wish such a thing were possible.
Engineering a sentient but inferior race to be your slave is morally
questionable, but engineering a superior race to be your slave would be
astronomically worse; or it would be, if it were possible, but fortunately
it is not.

  John K Clark
