Well, we agree where we disagree.

I'm very confident that AGI can only be achieved by following crudely 
evolutionary and developmental paths. The broad reason is that the brain, body, 
intelligence and the set, or psychoeconomy, of an animal's activities all 
evolve in interrelationship with each other. All the activities that animals 
undertake are extremely problematic, and became ever more complex and 
problematic as they evolved - requiring ever more complex physical and mental 
structures to tackle them.

You seem to be making a more sophisticated version of the GOFAI mistake of 
thinking that intelligence could be just symbolic and rational - and that you 
can jump straight to the top of evolved intelligence.

A sense of history - of the truth that we are now moving into a new stage of 
civilisation, one that represents an even more drastic change than the end of 
feudalism and the beginning of the print era - should warn you. Now it's the 
internet era, and the beginning of a multimedia as opposed to a literate 
society. And right through our culture you can see the marks of that change, 
which involve an end to the old splits - the reuniting of mind and body, 
rationality and imagination, symbols and images, reason and emotion, 
intelligence and creativity, print, photo and video - recognizing their 
multi-levelled interdependence and rejecting the illusions of their 
independence. The new age of flight is not the age of AGI on a computer - it 
was symbolised neatly, and right on time, by the new age of the autonomous 
mobile robot in the DARPA race. Embodied intelligence, however primitive. You 
can't cut corners. There are too many of them.

But I take away from this one personal challenge, which is that it clearly 
needs to be properly explained that a) language rests at the top of a giant 
picture tree of sign systems in the mind - without the rest of which language 
does not "make sense" and you "can't see what you are talking about" (and 
there's no choice about that - that's the way the human mind works, and the 
way any equally successful mind will have to work), and b) language also rests 
on a complex set of physical motor and manipulative systems - you can't grasp 
the sense of language if you can't physically grasp the world. Does this last 
area - the multi-levelled nature of language - interest you?
  ----- Original Message ----- 
  From: Benjamin Goertzel 
  To: singularity@v2.listbox.com 
  Sent: Wednesday, April 25, 2007 2:18 AM
  Subject: Re: [singularity] Why do you think your AGI design will work?



  You seem to be mixing two things up...

  1) the definition of the goal of "human level AGI"

  2) the right incremental path to get there

  I consider these as rather different, separate issues...

  In my prior reply to you I was discussing only Point 1, not Point 2.

  I don't really accept your distinction between "achieving goals" and
  "seeking goals."
  Even a system that is able to reprogram its own top-level goals can still be
  judged according to how effectively it can achieve goals...

  Of course I agree that to achieve powerful AGI a system will need to be able
  to formulate lots of its own rules rather than just following explicit
  high-level cognitive rules.  (Whether that AGI system is still "following
  rules" at some low level, in the manner that humans follow the rules of
  physics or neurology, is another question.)

  I don't agree that the only viable path to human-level AGI is to recapitulate
  evolution and work on animal-level intelligence first.  That is **a** viable
  path, but IMO not the only one.

  -- Ben G


  On 4/24/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
    But there is a difference & I think it's crucial re the goals being set
    for AGI.

    There is a difference between your version - "achieving goals", which can
    be done, if I understand you, by algorithms - and my goal-SEEKING, which
    is done by all animals, and can't be done by algorithms alone. It involves
    finding your way, as distinct from just following the way set by
    programmed rules.

    As I'm defining AGI, one of the central goals will be to provide a set of
    rules and principles that allow themselves to be radically changed and
    broken, so that the AGI machine can find its way. Such a set of rules
    would allow birds, as they did recently in the UK, to switch from flying
    magnetically north to their ultimate destination (or whatever they did) to
    flying along the central road highways instead (obviously an easier way to
    fly). Such rules would, among other things, allow our agent, whatever it
    is, to freely experiment.

    Now birds clearly must have such rule-breaking rules - but it strikes me
    that they still present a challenge to modern programmers, no? (And
    perhaps travel by flight might be a good test activity for AGI, because
    it's not that complicated.)

    I absolutely agree that the general definition must be accompanied by
    specific examples of the activities the AGI machine will tackle. A
    sports-playing robot or a multiple-maze-running robot were my first
    attempts.

    I disagree with yours, though. Passing human exams of most if not all
    kinds would certainly qualify as a proof of AGI. I just think that's like
    trying to fly at intergalactic speed before you can even move a finger or
    a foot. Language is an embodied skill - the brain can't understand words
    it can't literally make sense of. It's based on whole sets of physical,
    manipulative and navigational skills, as well as a highly evolved visual
    intelligence with awesome CGI powers. (Remember - the unconscious mind
    doesn't think things over in words alone, which might seem most efficient,
    but in cinematic dreams. And so, almost certainly, do animal minds.)

    I reckon an AGI whose skills were in various ways navigational, like those 
of the earliest animals, would be a far more realistic target.



      ----- Original Message ----- 
      From: Benjamin Goertzel 
      To: singularity@v2.listbox.com 
      Sent: Tuesday, April 24, 2007 11:58 PM
      Subject: Re: [singularity] Why do you think your AGI design will work?



      Well, in my 1993 book "The Structure of Intelligence" I defined 
intelligence as 

      "The ability to achieve complex goals in complex environments."

      I followed this up with a mathematical definition of complexity grounded
      in algorithmic information theory (roughly: the complexity of X is the
      amount of pattern immanent in X, or emergent between X and other Y's in
      its environment).

      This was closely related to what Hutter and Legg did last year, in a
      more rigorous paper that gave an algorithmic-information-theory-based
      definition of intelligence.
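      For the curious, the core of their measure looks roughly like this (my
      sketch from memory; the notation in their paper may differ):

          \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

      where E is a space of computable environments, K(\mu) is the Kolmogorov
      complexity of environment \mu, and V_\mu^\pi is the expected cumulative
      reward agent \pi achieves in \mu. Intelligence is goal-achieving
      performance averaged over all environments, weighted towards the simpler
      (lower-complexity) ones.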

      Having put some time into this sort of definitional work, I then moved
      on to more interesting things, like figuring out how to actually make an
      intelligent software system given feasible computational resources.

      The catch with the above definition is that a truly general intelligence
      is possible only with infinite computational resources.  So, different
      AGIs may be able to achieve different sorts of complex goals in
      different sorts of complex environments.  And if an AGI is sufficiently
      different from us humans, we may not even be able to comprehend the
      complexity of the goals or environments that are most relevant to it.

      So, there is a general theory of what AGI is, it's just not very useful.

      To make it pragmatic one has to specify some particular classes of goals
      and environments.  For example:

      goal = getting good grades 
      environment = online universities

      Then, to connect this kind of pragmatic definition with the mathematical
      definition, one would have to prove the complexity of the goal (getting
      good grades) and the environment (online universities) based on some
      relevant computational model.  But the latter seems very tedious and
      boring work...

      And IMO, all this does not move us very far toward AGI, though it may
      help avoid some conceptual pitfalls that might otherwise be fallen
      into...

      -- Ben G

      On 4/24/07, Mike Tintner <[EMAIL PROTECTED]> wrote: 
        Hi,

        I strongly disagree - there is a need to provide a definition of AGI -
        not necessarily the right or optimal definition, but one that poses
        concrete challenges and focusses the mind - even if it's only a
        starting-point. The reason the Turing Test has been such a
        successful/popular idea is that it focusses the mind.

        (BTW I immediately noticed your lack of a good definition on going
        through your site and papers, and it immediately raised doubts in my
        mind. In general, the more (or less) focussed your definition/mission
        statement, I would argue, the more (or less) seriously people will
        tend to take you.)

        Ironically, I was just trying to take Marvin Minsky to task for this
        on another forum. I suddenly realised that although he has been
        talking about the problem of AGI for decades, he has only waved at it,
        and not really engaged with it. He talks about how having different
        ways of thinking about a problem, as the human mind does, is important
        for AGI - and that's certainly one central problem/goal - but he
        doesn't really focus it.

        Here's my first crack at a definition - very crude - offered strictly 
in brainstorming mode - but I think it does focus a couple of AGI challenges at 
least - and fits with some of the stuff you say.

        AN AGI MACHINE - a truly adaptive, truly learning machine - is one that 
will be able to:

        1) conduct a set of goal-seeking activities

        - where it starts with only a rough, incomplete idea of how to reach 
its goals,

        - i.e. knows only some of the steps it must take, & some of the rules 
that govern those steps

        - and can find its way to its goals "making it up as it goes along" 

        - by finding new ways round more or less unfamiliar obstacles.

        To do this it must be able to:

        2) change its steps and rules -

        - not just revising them according to predetermined formulae, but

        - adding new steps and rules, & even

        - creating new rules that break existing ones.

        3) learn new related activities.


        [The key thing in this definition, for me, is that it focusses on the
        need for AGI to be able to radically change the steps and rules of any
        activity it undertakes.]

        EXAMPLE [again a very crude one - the first that came to mind]:

        An AGI machine would be a SPORTING ROBOT that could first learn to
        play soccer, as we do, by being taught a few basic principles [like
        "try to score a goal by running towards the goal with the ball, or
        passing it to other team members, ...."] and shown a few soccer games.

        It would then be able to learn the game as it goes along, by playing,
        and should be able to find and learn new routes to goal, new passes,
        new kicks (with perhaps new spins and backswings). It should even be
        able to adapt its rules - adding new ones like "you can move back
        towards your own goal when you have the ball, as well as forwards
        towards the opponent's".

        And having learned soccer, it should be able to learn OTHER
        FIELD/COURT SPORTS in similar fashion - like Gaelic football, hockey,
        basketball, etc.

        [Comment: Perhaps this is much too extravagant a starting-goal - maybe
        better to have a maze-running robot that can learn to run radically
        different and surprising kinds of mazes - but once objections are
        considered, more realistic goals can be set.]
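        To make the rule-breaking requirement concrete, here is a minimal
        Python sketch - names and structure entirely hypothetical, offered in
        the same brainstorming spirit as the definition itself - of an agent
        whose rule set is itself open to revision:

        import random

        class Rule:
            def __init__(self, name, condition, action):
                self.name, self.condition, self.action = name, condition, action
                self.score = 0.0  # running estimate of how well this rule works

        class GoalSeekingAgent:
            """Follows rules, but treats the rule set itself as revisable."""

            def __init__(self, rules):
                self.rules = list(rules)

            def act(self, state):
                candidates = [r for r in self.rules if r.condition(state)]
                if not candidates:
                    return None, None  # no rule applies: the agent must improvise
                # Mostly exploit the best-scoring rule, sometimes experiment.
                rule = (random.choice(candidates) if random.random() < 0.1
                        else max(candidates, key=lambda r: r.score))
                return rule, rule.action(state)

            def learn(self, rule, reward):
                # Nudge the chosen rule's score towards the reward it earned.
                if rule is not None:
                    rule.score += 0.1 * (reward - rule.score)

            def revise(self, new_rule, replaces=None):
                # The crucial step: a new rule may break/replace an existing one.
                if replaces is not None:
                    self.rules = [r for r in self.rules if r.name != replaces]
                self.rules.append(new_rule)

        So a bird-agent might start with a "fly magnetically north" rule and,
        once that rule's score sags, revise() in a "follow the highway" rule
        that replaces it. The point is only that revising the rules is part of
        the agent's ordinary operation, not an external reprogramming step.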


        ----- Original Message ----- 
          From: Benjamin Goertzel 
          To: singularity@v2.listbox.com 
          Sent: Tuesday, April 24, 2007 9:50 PM
          Subject: Re: [singularity] Why do you think your AGI design will work?



          Hi,

          We don't have any solid **proof** that Novamente will "work" in the 
sense of leading to powerful AGI.

          We do have a set of mathematical conjectures that look highly 
plausible and that, if true, would imply that Novamente will work (if properly 
implemented and a bunch of details are gotten right, etc.).   But we have not 
proved these conjectures and are not currently focusing on proving them, as 
that is a big hard job in itself....  We have decided to seek proof via 
practical construction and experimentation rather than proof via formal 
mathematics. 

          Wright Bros. did not prove their airplane would work before building 
it.  But they were confident based on their intuitive theoretical model of 
aerodynamics, which turned out to be correct.  The case with Novamente is a bit 
more rigorous than this because we have gotten to the point of stating but not 
proving mathematical conjectures that would imply the workability of the 
system... 

          As for Matt Mahoney's point about "defining AGI" being the
          bottleneck, I really think that is a red herring.  Rigorously
          defining any natural language term is a pain.  You can play for
          hours with the definition of "cup" versus "bowl", or the definition
          of "flight" versus "leaping" versus "floating in space", etc.  Big
          deal!

          -- Ben G








          On 4/24/07, Joshua Fox <[EMAIL PROTECTED]> wrote: 
            Ben has confidently stated that he believes Novamente will work
            (http://www.kurzweilai.net/meme/frame.html?m=3 and others).

            AGI builders, what evidence do you have that your design will work? 

            This is an oft-repeated question, but I'd like to focus on two 
possible bases for saying that an invention will work before it does. 
            1. A clear, simple, mathematical theory, verified by experiment. 
The experiments can be "pure science" rather than technology tests.
            2. Functional tests of component parts or of crude prototypes.

            Maybe I am missing something in the articles I have read, but do 
contemporary AGI builders have a verified theory and/or verified components and 
prototypes?

            Joshua


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&user_secret=8eb45b07
