On 6/15/2017 9:58 AM, g...@gnusystems.ca wrote:
> To me, an intelligent system must have an internal guidance system semiotically coupled with its external world, and must have some degree of autonomy in its interactions with other systems.

That definition is compatible with Peirce's comment that the search
for "the first nondegenerate Thirdness" is a more precise goal than
the search for the origin of life.

Note the comment by the biologist Lynn Margulis:  a bacterium swimming
upstream in a glucose gradient exhibits intentionality.  In the article
"Gaia is a tough bitch", she said “The growth, reproduction, and
communication of these moving, alliance-forming bacteria” lie on
a continuum “with our thought, with our happiness, our sensitivities
and stimulations.”

> I think it’s quite plausible that AI systems could reach that level
> of autonomy and leave us behind in terms of intelligence, but what
> would motivate them to kill us?

Yes.  The only intentionality in today's AI systems is what has been
explicitly programmed into them -- for example, Google's goal of finding
relevant documents or a chess program's goal of winning the game.  If no
such goal is programmed into an AI system, it just wanders aimlessly.
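
To make the point concrete, here is a minimal Python sketch (purely
illustrative; the function names and the goal below are hypothetical,
not any real system's code).  The agent's apparent "purpose" is nothing
more than an objective function that a human chose to pass in; leave it
out, and the agent just wanders.

    import random

    def choose_action(state, actions, objective=None):
        """Pick an action.  With no objective the agent wanders at random;
        with an explicitly supplied objective it acts 'purposefully'."""
        if objective is None:
            return random.choice(actions)   # no goal: aimless wandering
        # The apparent 'intention' is nothing but the argmax of a
        # scoring function that a human wrote and passed in.
        return max(actions, key=lambda a: objective(state, a))

    # The programmer supplies the goal (here: move toward position 10).
    goal = lambda state, a: -abs((state + a) - 10)
    print(choose_action(0, [-1, 0, 1]))        # no goal: random step
    print(choose_action(0, [-1, 0, 1], goal))  # goal given: steps toward 10

The same point holds for a chess engine: its "will to win" is just an
evaluation function its programmers wrote.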

The most likely reason why any AI system would have the goal to kill
anything is that some human(s) programmed that goal into it.

John