 I'd suggest that an AI system without a goal is not an AI system;
it's pure randomness. The question emerges: can a goal, or even the
Will to Intentionality, or 'Final Causation', emerge from randomness?
After all, Peirce's account of the emergence of such habits, and thus
of intentionality, from randomness is clear:

        "Out of the womb of indeterminacy we must say that there would have
come something, by the principle of Firstness, which we may call a
flash. Then by the principle of habit there would have been a second
flash.....then there would have come other successions ever more and
more closely connected, the habits and the tendency to take them ever
strengthening themselves'... 1.412

        Organic systems are not the same as inorganic ones. Can a non-organic
system actually, as a system, develop its own habits? According to
Peirce, 'Mind' exists within non-organic matter, and if Mind is
understood as the capacity to act within the Three Categories, then
can a machine made by man, with only basic programming, move into
self-development? I don't see this, as a machine is like a physical
molecule: its 'programming' lies outside of itself.

        Edwina
 On Thu 15/06/17 11:42 AM, John F Sowa s...@bestweb.net sent:
 On 6/15/2017 9:58 AM, g...@gnusystems.ca wrote:
 > To me, an intelligent system must have an internal guidance system
 > semiotically coupled with its external world, and must have some
 > degree of autonomy in its interactions with other systems.
 That definition is compatible with Peirce's comment that the search 
 for "the first nondegenerate Thirdness" is a more precise goal than 
 the search for the origin of life. 
 Note the comment by the biologist Lynn Margulis: a bacterium swimming
 upstream in a glucose gradient exhibits intentionality.  In the article
 "Gaia is a tough bitch", she said “The growth, reproduction, and
 communication of these moving, alliance-forming bacteria” lie on
 a continuum “with our thought, with our happiness, our sensitivities
 and stimulations.”
 > I think it’s quite plausible that AI systems could reach that level
 > of autonomy and leave us behind in terms of intelligence, but what
 > would motivate them to kill us?
 Yes.  The only intentionality in today's AI systems is explicitly
 programmed in them -- for example, Google's goal of finding documents
 or the goal of a chess program to win a game.  If no such goal is
 programmed in an AI system, it just wanders aimlessly.
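 To make the contrast concrete, here is a minimal sketch in Python
 (my own illustration, not from either post; the names and numbers
 are hypothetical): the same agent loop run once with no objective
 and once with a programmed "glucose" objective, like Margulis's
 bacterium.

    import random

    def step(position, goal=None):
        # With no programmed goal, the agent wanders aimlessly.
        if goal is None:
            return position + random.choice([-1, 1])
        # With a goal, it takes whichever step best improves the
        # objective it was given -- goal-directed by construction.
        return max((position + m for m in (-1, 1)), key=goal)

    glucose = lambda x: -abs(x - 100)  # concentration peaks at x = 100

    x = y = 0
    for _ in range(200):
        x = step(x)            # random walk: no goal programmed in
        y = step(y, glucose)   # climbs the gradient toward x = 100
    print(x, y)                # x drifts at random; y reaches 100

 Whether that counts as intentionality in Peirce's sense is exactly
 the point at issue: the objective lies outside the loop, supplied by
 the programmer.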
 The most likely reason why any AI system would have the goal to kill
 anything is that some human(s) programmed that goal into it.
 John 

