Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-14 Thread Mark Waser
Models that are simple enough to debug are too simple to scale. The contents of a knowledge base for AGI will be beyond our ability to comprehend. Given sufficient time, anything should be understandable and debuggable. Size alone does not make something incomprehensible and I

Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-14 Thread BillK
On 11/14/06, James Ratcliff wrote: If the contents of a knowledge base for AGI will be beyond our ability to comprehend then it is probably not human level AGI, it is something entirely new, and it will be alien and completely foreign and unable to interact with us at all, correct? If you

Re: [agi] A question on the symbol-system hypothesis

2006-11-14 Thread William Pearson
Richard Loosemore As for your suggestion about the problem being centered on the use of model-theoretic semantics, I have a couple of remarks. One is that YES! this is a crucial issue, and I am so glad to see you mention it. I am going to have to read your paper and discuss with you

Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-14 Thread Ben Goertzel
Hi, I would also argue that a large number of weak pieces of evidence also means that Novamente does not *understand* the domain that it is making a judgment in. It is merely totaling up the weight of evidence. I would say that intuition often consists, internally, in large part, of summing
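
[Editorial note: a minimal sketch of what "summing a large number of weak pieces of evidence" can look like, assuming each piece is independent and only slightly better than chance. This is an illustration only, not Novamente's actual inference mechanism, which the post does not describe.]

# Illustrative only: combine many weak, independent pieces of evidence
# by summing their log-odds (a naive-Bayes-style aggregation).
import math

def log_odds(p):
    # Convert a probability into log-odds.
    return math.log(p / (1.0 - p))

def combine(prior, evidence_probs):
    # Add the log-odds of each independent evidence item to the prior,
    # then map the total back to a probability.
    total = log_odds(prior) + sum(log_odds(p) for p in evidence_probs)
    return 1.0 / (1.0 + math.exp(-total))

# Fifty hints, each only slightly better than chance (p = 0.55),
# combine into a near-certain judgment (about 0.99996).
print(combine(0.5, [0.55] * 50))
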

Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-14 Thread James Ratcliff
I concur here; also, it was quoted earlier that an AGI couldn't be understood because humans can't understand the brain. So when we become able to understand the brain, will this view be reversed? Or is the thought that we will NEVER be able to understand the brain? Because while I believe it to

Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-14 Thread James Ratcliff
If the "contents of a knowledge base for AGI will be beyond our ability to comprehend" then it is probably not human level AGI, it is something entirely new, and it will be alien and completely foriegn and unable to interact with us at all, correct? If you mean it will have more knowledge than we

Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-14 Thread James Ratcliff
No, presumably you would have the ability to take a snapshot of what it's doing, or, as it's doing it, it should be able to explain what it is doing. James Ratcliff BillK [EMAIL PROTECTED] wrote: On 11/14/06, James Ratcliff wrote: If the "contents of a knowledge base for AGI will be beyond our ability to

Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-14 Thread James Ratcliff
Does it generate any kind of overview reasoning of why it does something? If in the VR you tell the bot to go pick up something, and it hides in the corner instead, does it have any kind of useful feedback or 'insight' into its thoughts? I intend to have different levels of thought processes and

Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-14 Thread James Ratcliff
Very good, I agree, and this is one of the requirements for the Project Halo contest (took and passed the AP chemistry exam): http://www.projecthalo.com/halotempl.asp?cid=30 Also, it is a critical task for expert systems to explain why they are doing what they are doing, and for business applications, I
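
[Editorial note: a minimal sketch of the kind of "explain why" trace discussed here: a toy forward-chaining rule engine that records the premises behind each conclusion. The rules and facts are invented for illustration and are not taken from the Halo systems or any expert system mentioned in the thread.]

# Toy forward-chaining rule engine that keeps a trace of *why*
# each conclusion was reached. Rules and facts are invented examples.
rules = [
    ({"reactive_metal", "contact_with_water"}, "produces_hydrogen"),
    ({"produces_hydrogen", "open_flame"}, "fire_hazard"),
]

def infer(initial_facts):
    # Apply rules until no new conclusions appear, recording each derivation.
    facts = set(initial_facts)
    trace = []
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(conclusion + " because " + " and ".join(sorted(premises)))
                changed = True
    return facts, trace

facts, trace = infer({"reactive_metal", "contact_with_water", "open_flame"})
for line in trace:
    print(line)
# produces_hydrogen because contact_with_water and reactive_metal
# fire_hazard because open_flame and produces_hydrogen
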

Re: [agi] A question on the symbol-system hypothesis

2006-11-14 Thread Matt Mahoney
I will try to answer several posts here. I said that the knowledge base of an AGI must be opaque because it has 10^9 bits of information, which is more than a person can comprehend. By opaque, I mean that you can't do any better by examining or modifying the internal representation than you
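
[Editorial note: for scale, a rough back-of-envelope calculation on the 10^9-bit figure cited above. The inspection rate is an assumed number chosen only for illustration, not something from the post.]

# Back-of-envelope scale check on a 10^9-bit knowledge base.
kb_bits = 10**9          # knowledge-base size cited in the post
bits_per_second = 10     # assumed rate at which a person could inspect and verify content
hours_per_day = 8        # full-time inspection

seconds = kb_bits / bits_per_second
years = seconds / (3600 * hours_per_day * 365)
print(f"about {years:.1f} years of full-time inspection")   # roughly 9.5 years at these assumptions
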