The suspicious-sounding suggestion that information "transfer" between human 
minds never actually occurs, only cross-fertilization between 
private mental ecologies of original design, is partly offered just to 
be 'far out', of course.  It's also entirely consistent with Pam's 
sense that we'll use computers for what they do best and be very glad 
for it.  The real point was to suggest what the correct structure 
of nature to model is.  Perhaps the idea has a long way to go, but I 
wouldn't entirely rule out the productive use of virtual or artificial 
ecologies, IF we are clear enough in our own thoughts to observe how 
nature actually works, and to be guided by it.



Phil Henshaw                       ¸¸¸¸.·´ ¯ `·.¸¸¸¸
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
680 Ft. Washington Ave 
NY NY 10040                       
tel: 212-795-4844                 
e-mail: [EMAIL PROTECTED]          
explorations: www.synapse9.com    
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On 
Behalf Of Bruce Abell
Sent: Friday, July 21, 2006 6:46 PM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] real tinking


Pamela--

That's a nice clarification of discussions that were getting waaaay 
out there.

--Bruce
----- Original Message ----- 
From: Pamela McCorduck 
To: The Friday Morning Applied Complexity Coffee Group 
Sent: Friday, July 21, 2006 1:49 PM
Subject: Re: [FRIAM] real tinking


It's hard for me to imagine what's meant by the phrase "a real 
thinking machine."  Human-level thinking and human versatility?  We can get 
those the old-fashioned way.


What we already have are programs that think better (deeper, faster, 
more imaginatively--whatever that means) in certain narrow domains.  
One of those is the far from negligible domain of molecular biology.  
Such programs cannot get themselves to the airport, or enjoy 
strawberries, but they really don't need to, do they?  Contemporary 
molecular biology would be unthinkable (ahem) without such programs.


Likewise, chess is now something machines do better than humans, and 
Kasparov, at least, says he is learning a great deal from how programs 
play chess.


Some confusion has arisen because, historically, the field of 
artificial intelligence both tried to model human thought and tried 
to solve certain problems by hook or by crook (without reference to 
how humans do it).  They were two distinct efforts.  Cognitive 
psychologists were grateful to have in the computer a laboratory 
instrument that would allow them to move beyond rats running mazes 
(yes, folks, this is where cognitive psychology was in the 1950s).  
People interested in solving problems that humans are inept at solving 
were glad to have a machine that could process symbols.


I'm just now reading Eric Kandel's graceful memoir, "In Search of 
Memory."  Kandel, a Nobel laureate and biologist, has devoted his life 
to understanding human memory, which he believes is one of the great 
puzzles whose solution would lead directly to understanding human 
thought.  He hasn't the least doubt that these seemingly intractable 
problems will someday be cracked.  I don't either.  And we won't go 
crazy doing it.  


Pamela McCorduck




On Jul 21, 2006, at 11:23 AM, James Steiner wrote:


I suspect that we won't ever get a real thinking machine by
deliberately trying to model thought. I suspect that the approach that
will ultimately work is one of two. One: a "sufficiently complex"
evolutionary simulation system, or rather a set of competing systems,
will create a conscious-seeming intelligence all by itself (though that
intelligence will be non-human, not modeled after human thought,
and we might not understand each other well--how do you instill an AI
with human concepts of morality?). Or two: someone will create a
super-complex physics simulation that can take hyper-detailed 3D brain
CAT/PET/etc. scan data as input and then simply simulate the goings-on
at the atomic level, the "mind" being an emergent property of the
"matter." Of course, the mind will probably instantly go insane, even
if provided with a sufficient quantity and variety of virtual senses
and a body.


And we *still* won't know how the mind happens.


;)
~~James


"The amount of money one needs is terrifying .." 

-Ludwig van Beethoven

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
