Hank,

    Do you have a personal "understanding/design of AGI and intelligence in 
general" that predicts a soon-to-come singularity?  Do you have theories or a 
design for an AGI?

John



Hank Conn wrote:

  It has been my experience that one's expectations about the future of 
AI and the Singularity are directly dependent upon one's understanding/design 
of AGI and intelligence in general.
   
  On 12/5/06, Ben Goertzel <[EMAIL PROTECTED]> wrote: 
    John,

    On 12/5/06, John Scanlon <[EMAIL PROTECTED]> wrote: 
    >
    > I don't believe that the singularity is near, or that it will even
    > occur.  I am working very hard at developing real artificial general
    > intelligence, but from what I know, it will not come quickly.  It will
    > be slow and incremental.  The idea that very soon we can create a
    > system that can understand its own code and start programming itself
    > is ludicrous.

    First, since my birthday is just a few days off, I'll permit myself an 
    obnoxious reply:
    <grin>
    Ummm... perhaps your skepticism has more to do with the inadequacies
    of **your own** AGI design than with the limitations of AGI designs in
    general?
    </grin>

    Seriously: I agree that progress toward AGI will be incremental, but 
    the question is how long each increment will take.  My bet is that
    progress will seem slow for a while -- and then, all of a sudden,
    it'll seem shockingly fast.  Not necessarily "hard takeoff in 5
    minutes" fast, but at least "Wow, this system is getting a lot smarter 
    every single week -- I've lost my urge to go on vacation" fast ...
    leading up to the phase of "Suddenly the hard takeoff is a topic for
    discussion **with the AI system itself** ..."

    According to my understanding of the Novamente design and artificial 
    developmental psychology, the breakthrough from slow to fast
    incremental progress will occur when the AGI system reaches Piaget's
    "formal stage" of development:

    http://www.agiri.org/wiki/index.php/Formal_Stage

    At this point, the "human child like" intuition of the AGI system will
    be able to synergize with its "computer like" ability to do formal
    syntactic analysis, and some really interesting stuff will start to
    happen (deviating pretty far from our experience with human cognitive
    development).

    -- Ben

