>> My point, in that essay, is that the nature of human emotions is rooted in 
>> the human brain architecture, 

    I'll agree that human emotions are rooted in human brain architecture, but 
there is also the question: is there something analogous to emotion that is 
generally necessary for *effective* intelligence?  My answer is a qualified but 
definite yes, since emotion clearly serves a number of purposes that apparently 
aren't otherwise served (in our brains) by our pure logical reasoning 
mechanisms (although, potentially, something else could serve those purposes 
equally well).  In particular, emotions seem necessary (in humans) to a) 
provide goals, b) provide pre-programmed constraints (for when logical 
reasoning doesn't have enough information), and c) enforce urgency.

    Without attending to these things that emotions provide, I'm not sure that 
you can create an *effective* general intelligence (since these roles need to 
be filled by *something*).
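
    To make those three roles concrete, here's a minimal sketch (Python, with 
invented names -- my own illustration, not a claim about any particular AGI 
design) of an agent loop in which an emotion-like subsystem supplies the 
goals, the hard constraints, and the urgency, while ordinary deliberation does 
the planning:

import heapq

# Hypothetical sketch: emotion-like signals filling the three roles above.
# All names are illustrative, not from any real AGI system.

class AffectSystem:
    """Supplies what deliberation alone can't: goals, vetoes, urgency."""

    def goals(self, world_state):
        # (a) Provide goals: deliberation optimizes, but something
        # pre-rational has to say *what* is worth optimizing.
        return [("maintain_energy", 0.9), ("explore", 0.3)]

    def constraint_violated(self, action, world_state):
        # (b) Pre-programmed constraints: hard vetoes for cases where
        # reasoning lacks the information (or the time) to decide.
        return action in {"self_destruct", "ignore_low_power_warning"}

    def urgency(self, goal, world_state):
        # (c) Enforce urgency: scale how soon a goal must be serviced.
        return 1.0 if world_state.get("power", 1.0) < 0.1 else 0.1

def plan_for(goal):
    # Stand-in for an ordinary logical planner.
    return {"maintain_energy": "recharge", "explore": "wander"}.get(goal, "idle")

def deliberate(world_state, affect):
    """Pick the most urgent affect-supplied goal that passes every veto."""
    queue = []
    for goal, weight in affect.goals(world_state):
        priority = -weight * affect.urgency(goal, world_state)
        heapq.heappush(queue, (priority, goal))
    while queue:
        _, goal = heapq.heappop(queue)
        action = plan_for(goal)
        if not affect.constraint_violated(action, world_state):
            return action
    return "idle"

print(deliberate({"power": 0.05}, AffectSystem()))  # -> recharge

    Note that the planner never decides *what* matters or *when* it must act; 
those signals come from outside the deliberative loop, which is exactly the 
gap I'm claiming *something* has to fill.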

>> Because of this difference, the rigid distinction between emotion and 
>> reason that exists in the human brain will not exist in a well-designed 
>> AI.

    Which is exactly why I was arguing that emotions and reason (or feeling and 
thinking) form a spectrum rather than a dichotomy.


  ----- Original Message ----- 
  From: Benjamin Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, May 01, 2007 1:05 PM
  Subject: Re: [agi] Pure reason is a disease.

  On 5/1/07, Mark Waser <[EMAIL PROTECTED]> wrote:
    >> Well, this tells you something interesting about the human cognitive 
architecture, but not too much about intelligence in general...

    How do you know that it doesn't tell you much about intelligence in 
general?  That was an incredibly dismissive statement.  Can you justify it?


  Well, I tried to in the essay that I pointed to in my response.

  My point, in that essay, is that the nature of human emotions is rooted in 
the human brain architecture: our systemic physiological responses to 
cognitive phenomena ("emotions") arise in primitive parts of the brain that we 
have little conscious introspection into.  So we actually can't reason very 
easily about the intermediate conclusions that go into our emotional 
reactions, because the "conscious, reasoning" parts of our brains can't look 
into the intermediate results stored and manipulated within the more primitive 
"emotionally reacting" parts of the brain.  So our deliberative consciousness 
has the choice of either 

  -- accepting not-very-thoroughly-analyzable outputs from the emotional parts 
of the brain

  or

  -- rejecting them

  and doesn't have the choice to focus deliberative attention on the 
intermediate steps used by the emotional brain to arrive at its conclusions. 

  Of course, through years of practice one can learn to bring more and more of 
the emotional brain's operations into the scope of conscious deliberation, but 
one can never do this completely due to the structure of the human brain. 

  On the other hand, an AI need not have the same restrictions.  An AI should 
be able to introspect into the intermediate conclusions and manipulations used 
to arrive at its "feeling responses".  Yes, there are restrictions on the 
amount of introspection possible, imposed by computational resource 
limitations; but this is different from the blatant and severe architectural 
restrictions imposed by the design of the human brain. 
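
  As a toy illustration of the contrast (again Python, invented names -- a 
sketch of the idea, not anyone's actual architecture), an AI's "feeling 
response" can carry its intermediate conclusions along with it, whereas the 
human analogue delivers only the verdict:

from dataclasses import dataclass, field

@dataclass
class Verdict:
    label: str                                 # e.g. "threat" or "safe"
    trace: list = field(default_factory=list)  # intermediate conclusions

class HumanlikeAffect:
    def react(self, stimulus):
        # Deliberation sees only the output; it can accept or reject it.
        return Verdict(label="threat" if "snake" in stimulus else "safe")

class IntrospectableAffect:
    def react(self, stimulus):
        trace = [("matched_feature", "snake" in stimulus),
                 ("prior_bad_experience", True)]  # assumed memory lookup
        label = "threat" if all(v for _, v in trace) else "safe"
        # Deliberation can focus attention on each intermediate step.
        return Verdict(label=label, trace=trace)

v = IntrospectableAffect().react("snake on the path")
print(v.label)               # threat
for step, value in v.trace:  # the steps a human cannot inspect
    print(step, value)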

  Because of this difference, the rigid distinction between emotion and reason 
that exists in the human brain will not exist in a well-designed AI.

  Sorry for not giving references regarding my analysis of the human 
cognitive/neural system -- I have read them but don't have the reference list 
at hand. Some (but not a thorough list) are given in the article I referenced 
before. 

  -- Ben G
