Hi Ben,

Thanks for all the replies. Just a quick wrap-up of my main point (no doubt we 
can and will re-engage with these ideas on other threads). I think it's simply that 
AGI must be "flexi-principled": that it can and does, in effect, say, "well, 
those are my assumptions for this activity, but I could be wrong", rather as 
this man does:

"We don't have any solid **proof** that Novamente will "work" in the sense of 
leading to powerful AGI.

We do have a set of mathematical conjectures that look highly plausible and 
that, if true, would imply that Novamente will work (if properly implemented 
and a bunch of details are gotten right, etc.).   But we have not proved these 
conjectures and are not currently focusing on proving them, as that is a big 
hard job in itself....  We have decided to seek proof via practical 
construction and experimentation rather than proof via formal mathematics."

Rather as even the simplest animals do: extensive research, often linked to 
biorobotics, has now shown that they all use flexible navigational strategies.

All forms of life are scientists/technologists.
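
To make the "flexi-principle" idea concrete, here is a minimal Python sketch - 
purely illustrative, with every name (FlexiAgent, Assumption, the confidence 
numbers) invented for the example rather than taken from any real system. The 
shape is the point: assumptions are explicit, graded, and permanently open to 
revision.

from dataclasses import dataclass

@dataclass
class Assumption:
    """A working hypothesis: acted on, but never held as certain."""
    claim: str
    confidence: float  # kept strictly below 1.0 - "I could be wrong"

class FlexiAgent:
    """Acts on its current assumptions; demotes them when evidence disagrees."""
    def __init__(self):
        self.assumptions = {}

    def adopt(self, claim, confidence=0.8):
        # Cap confidence so no assumption ever becomes unrevisable.
        self.assumptions[claim] = Assumption(claim, min(confidence, 0.95))

    def observe(self, claim, supported):
        # Revise rather than cling: nudge confidence toward the evidence.
        a = self.assumptions.get(claim)
        if a:
            a.confidence += 0.05 if supported else -0.3
            a.confidence = max(0.05, min(a.confidence, 0.95))

agent = FlexiAgent()
agent.adopt("landmark navigation works in this terrain")
agent.observe("landmark navigation works in this terrain", supported=False)
print(agent.assumptions)  # confidence lowered; the strategy stays available

Note that the failed observation lowers confidence rather than deleting the 
strategy - the agent can say "I could be wrong" without abandoning the 
assumption outright.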

P.S. My point re language, extremely succinctly, is that the brain processes 
all information simultaneously on at least three levels - as a 'picture tree' - as 
symbols, as 'outline' graphics AND as detailed images, supplying and checking on 
all three levels right now in your brain, even as you are apparently processing 
on just the one level of symbols/words. And that picture tree, I believe, will 
also be essential for AGI. No need to develop this here beyond the rough sketch 
below - but do you also understand something like that?
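
For concreteness, here is one crude way the picture tree could be rendered in 
code - a hypothetical sketch, not a claim about how the brain or any AGI system 
actually implements it; all the names and the string stand-ins for images are 
my own assumptions:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PictureNode:
    """One concept held simultaneously on three levels."""
    symbol: str                   # the word/symbol level
    outline: str                  # the schematic 'outline graphic' level
    detail: Optional[str] = None  # the detailed-image level, filled on demand
    children: List["PictureNode"] = field(default_factory=list)

def makes_sense(node):
    # Cross-check the levels: a symbol with no picture beneath it is
    # exactly the case where you "can't see what you are talking about".
    return bool(node.outline) and all(makes_sense(c) for c in node.children)

# "The cat sat": each word grounded in at least an outline sketch.
tree = PictureNode(
    symbol="the cat sat",
    outline="quadruped lowering itself onto a surface",
    children=[
        PictureNode("cat", "small quadruped silhouette"),
        PictureNode("sat", "body settled on haunches"),
    ],
)
print(makes_sense(tree))  # True only when every symbol is backed by a picture

The one design choice worth noting: the outline level is mandatory while detail 
is optional, mirroring the claim that a symbol must at least be backed by an 
'outline' graphic to make sense.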

Best


  ----- Original Message ----- 
  From: Benjamin Goertzel 
  To: singularity@v2.listbox.com 
  Sent: Wednesday, April 25, 2007 3:38 AM
  Subject: Re: [singularity] Why do you think your AGI design will work?





  On 4/24/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
    Well, we agree where we disagree.

    I'm very confident that AGI can't be achieved except by following crudely 
evolutionary and developmental paths. The broad reason is that the brain, body, 
intelligence and the set, or psychoeconomy, of activities of the animal evolve 
in interrelationship with each other. All the activities that animals undertake 
are extremely problematic, and became ever more complex and problematic as 
they evolved - and they require ever more complex physical and mental structures 
to tackle them.


  Yes, that is how things evolved in nature.  That doesn't mean it's the only 
way things can be.

  Airplanes don't fly like birds, etc. etc. 



    You seem to be making a more sophisticated version of the GOFAI mistake of 
thinking intelligence could be just symbolic and rational - and that you can jump 
straight to the top of evolved intelligence.


  No, I absolutely don't think that intelligence can be "just symbolic" -- and 
I don't think that given plausible computational resources, intelligence can be 
"just rational." 

  "Purely symbolic/rational" versus "animal-like" are not the only ways to 
approach AGI...

   



    But I take away from this one personal challenge, which is that it clearly 
needs to be properly explained that a) language rests at the top of a giant 
picture tree of sign systems in the mind - without the rest of which language 
does not "make sense" and you "can't see what you are talking about" (and 
there's no choice about that - that's the way the human mind works, and the way 
any equally successful mind will have to work), and b) language also rests on a 
complex set of physical motor and manipulative systems - you can't grasp the 
sense of language if you can't physically grasp the world. Does this last 
area - the multilevelled nature of language - interest you?


  I already understand all those points and have done so for a long time.  They 
are statements about human psychology.  Why do you think that closely humanlike 
intelligence is the only kind? 

  As it happens my own AGI project does include embodiment (albeit, at the 
moment, simulated embodiment in a 3D sim world) and aims to ground language in 
perceptions and actions.  However, it doesn't aim to do so in a slavishly 
humanlike way, and also has room for more explicit logic-like representations. 

  "There are more approaches to AGI than are dreamt of in your philosophy"  ;-)

  -- Ben G




