I would agree with Gary R - I think that the definitions have to be
clear.

        Intelligence does not mean 'conscious'; nor does it mean
'living'. After all, 'matter is effete Mind'. A crystal operates as an
intelligent organization of matter, i.e., a semiosic form. But is it
'living'?

        I would consider that a vital aspect of 'life' is that the
individual instantiation, the particular morphology or Token, 
enables the continuity of its Type [Thirdness] by self-replication.
So, the Thirdness of a bacterium or rabbit is expressed within the
particular morphological Form [Secondness] of that bacterium or rabbit
- which reproduces itself [in Secondness] in another version of
Thirdness ... while the first Form dies off.

        AI doesn't seem to function this way; i.e., within the Categories
defining its material existence. Do the Categories operate within its
'intelligent operations'? People are always questioning whether an AI
can 'feel'; or can develop logical habits [Thirdness]...or is it
doomed to operate forever in 'bits' [Secondness]. Science Fiction
assumes that AI can function within the Categories in both its
material and intelligent actions. I don't know...

        Edwina
 On Wed 14/06/17 11:36 PM , Gary Richmond gary.richm...@gmail.com
sent:
 Helmut, Gary F, list,
 Helmut wrote: I hope that there still is a big step from
intelligence to life.
 Gary F wrote:
 If you have something better than a pooh-pooh argument that
artificial  intelligence is inherently impossible, or that inorganic
systems are inherently incapable of living (and sign-using), I would
like to hear it. I haven’t heard a good one yet.
 I don't know whether anyone is arguing that "artificial intelligence
is inherently impossible"--far from it. And inorganic systems and AI
are certainly capable of "sign-using," every laptop computer or smart
phone demonstrates that. 
 But as Helmut "hopes" and I suppose that I would more or less insist
upon, there is "a big step from intelligence to life." 
 So, in my critical pooh-poohing logic, I do not see, contra Gary F,
how inorganic systems are capable of really living. Granted,
intelligence is evident even in the growth of crystals. But I would
not claim--and I do not think that Peirce ever claimed--that crystals
were living, let alone "life forms."
 Best, 
 Gary R
 Gary Richmond
 Philosophy and Critical Thinking
 Communication Studies
 LaGuardia College of the City University of New York
 C 745
 718 482-5690
 On Wed, Jun 14, 2017 at 3:08 PM, Helmut Raulien  wrote:
  List,  I hope that there still is a big step from intelligence to
life. I hope that there will never be living, breeding robots without
"off"-switches; they would kill us as fast as they could. Best, Helmut
 14 June 2017 at 20:18
 g...@gnusystems.ca wrote:
        Gary R, Jon et al., 
        Logic, according to Peirce, is “only another name for semiotic
(σημειωτικη), the quasi-necessary, or formal, doctrine of
signs … [ascertaining] what must be the characters of all signs
used by a ‘scientific’ intelligence, that is to say, by an
intelligence capable of learning by experience” (CP 2.227).  
        Nobody, including humans, learns by experiences they don’t have.
Scientific inquirers “discover the rules” (as Bateson put it) of
nature and culture, by making inferences — abductive, deductive and
inductive. But what they can learn is constrained by what observations
they are physically equipped to make, as well as their semiotic
ability to make inferences from them. 
        You seem to be saying that a non-human system which has apparently
not made inferences before will never be able to make them. But this
is what Peirce called a pooh-pooh argument. Besides, my Go-playing
example was only that, a single example of an AI system that clearly
has learned from experience and is capable of making an original move
that proves to be effective on the Go board. Of course the Go universe
is very small compared to the universe of scientific inquiry, but
until an AI is equipped to make observations in much larger fields,
how can we be so sure that it will not be able to make inferences
from them as well as humans do, just as it can match human experts in
the field of Go?  
        Yes, the rules of Go are given — given for human players as well
as any other players. Likewise, the grammar of the language we are
using is given for both of us. Does that mean that we can never use
it to say something original, or to formulate new inferences? Why
should it be different for non-human language users? It strikes me as
a very dubious assumption that learning to learn in any field is
necessarily non-transferable to other fields of learning. And the
fields of learning opening up to AI systems are expanding very
rapidly. 
        You can say “that Gobot is hardly a life form,” but then you can
just as easily say that the first organisms on Earth were “hardly
life forms,” or — contra Peirce — that a symbol is “hardly a
life form.” But somebody might ask, How do you define “life”? 
        If you have something better than a pooh-pooh argument that
artificial intelligence is inherently impossible, or that inorganic
systems are inherently incapable of living (and sign-using), I would
like to hear it. I haven’t heard a good one yet. 
        Gary f. 
        From: Gary Richmond [mailto:gary.richm...@gmail.com]
 Sent: 14-Jun-17 12:41
 To: Peirce-L 
 Subject: Re: [PEIRCE-L] RE: Rheme and Reason 
        Gary F, Jon A, list,  
        Gary F wrote:  
        The question is whether silicon-based life forms are evolving, i.e.
whether AI systems are potential players in what Gregory Bateson
called “life—a game whose purpose is to discover the rules, which
rules are always changing and always undiscoverable.”      
        And in an earlier post wrote:   
        I see some of these developments as evidence that abduction (as
Peirce called it) and “insight” are probably not beyond the
capabilities of AI systems that can learn inductively.      
        But the rules of Go (and chess, etc.) do not need to be
discovered--they are given (of course, the playing of the game--the
strategy--is not). Even if life is defined as "a game whose purpose is
to discover the rules, which rules are always changing and always
undiscoverable" (and I'm not sure that I find that definition
satisfactory), to extrapolate from a robot's ability to learn to play,
and get better at, games with given rules ("can learn inductively" in
such situations) to this being "evidence that abduction. . . and
'insight' are probably not beyond the capabilities of AI systems"
seems to me to go way too far.
        So I, like Jon A, haven't seen any real intelligence shown in
Artificial Intelligence systems, even those that can beat a master Go
player at such a game (hardly "the game of life").    
        Furthermore, Gary F's question as to "whether silicon-based life
forms are evolving" begs the question (although there may be
silicon-based life forms on some distant planet for all we know)
since that Gobot is hardly a life form.   
        Best,   
        Gary R   
            ----------------------------- PEIRCE-L subscribers: Click on
"Reply List" or "Reply All" to REPLY ON PEIRCE-L to this message.
PEIRCE-L posts should go to peirce-L@list.iupui.edu . To UNSUBSCRIBE,
send a message not to PEIRCE-L but to l...@list.iupui.edu with the
line "UNSubscribe PEIRCE-L" in the BODY of the message. More at
http://www.cspeirce.com/peirce-l/peirce-l.htm .

