On 10/22/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> Also to Novamente, if I understand correctly.  Terms are linked by a
> probability and confidence.  This seems to me to be an optimization of a
> neural network or connectionist model, which is restricted to one number
> per link, representing probability.

I'm afraid the difference between these two types of systems is too large
for them to be compared in this way. In general, the weight of a link in a
neural network is not a probability.

> To model confidence you would have to make redundant copies of the input
> and output units and their connections.  This would be inefficient, of
> course.

I guess we use the word "confidence" differently. For what I mean, see
http://nars.wang.googlepages.com/wang.confidence.pdf
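
In a nutshell: a NARS judgment carries a (frequency, confidence) pair
computed from evidence counts, roughly frequency = w+/w and
confidence = w/(w+k), where w+ is the positive evidence, w the total
evidence, and k a global constant (the "evidential horizon"). A minimal
Python sketch of that reading (the function and variable names are
illustrative, not from the paper):

def truth_value(w_plus, w, k=1.0):
    """NARS-style (frequency, confidence) from evidence counts.

    w_plus -- amount of positive evidence
    w      -- total evidence (positive + negative); w >= w_plus > 0 assumed
    k      -- the "evidential horizon", a global constant (k = 1 is typical)
    """
    frequency = w_plus / w      # proportion of the evidence that is positive
    confidence = w / (w + k)    # how stable that proportion is under new evidence
    return frequency, confidence

# One observation vs. a hundred: same frequency, very different confidence.
print(truth_value(1, 1))      # (1.0, 0.5)
print(truth_value(100, 100))  # (1.0, 0.990...)

This is also why one number per link is not enough: one observation and a
hundred observations can yield the same frequency while deserving very
different confidence.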

> One aspect of NARS and many other structured or semi-structured knowledge
> representations that concerns me is the direct representation of concepts
> such as "is-a", equivalence, logic ("if-then", "and", "or", "not"),
> quantifiers ("all", "some"), time ("before" and "after"), etc.  These
> things seem fundamental to knowledge but are very hard to represent in a
> neural network, so it seems expedient to add them directly.  My concern is
> that the direct encoding of such knowledge greatly complicates attempts to
> use natural language, which is still an unsolved problem.  Language is the
> only aspect of intelligence that separates humans from other animals.
> Without language, you do not have AGI (IMHO).

I agree that the distinction between "innate knowledge" and "acquired
knowledge" is a major design decision. However, I believe it is
necessary to make the notions you mentioned innate, though in
different forms from how they are usually handled in symbolic AI.

> My concern is that structured knowledge is inconsistent with the
> development of language in children.

First, I'm not so sure about the above conclusion. For example, to me,
"is-a" (which is called "inheritance" in NARS) is nothing but the
relation between special patterns and general patterns, which needs to
be there for many types of learning to happen.

Second, if it is indeed the case in children, it still doesn't mean
that AGI must be developed in the same way.

If these notions could easily be developed from more basic ones, we could
let the system learn them. However, so far that has not been the case.

> As I mentioned earlier, natural language has a structure that allows
> direct training in neural networks using fast, online algorithms such as
> perceptron learning, rather than slow algorithms with hidden units such as
> back propagation.  Each feature is a linear combination of previously
> learned features followed by a nonlinear clamping or threshold operation.
> Working in this fashion, we can represent arbitrarily complex concepts.
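
For concreteness, a minimal sketch of the kind of unit described above: a
thresholded linear combination of earlier features, trained by the online
perceptron rule with no hidden layer (the feature layout and training data
are invented for illustration):

def threshold(x):
    """Nonlinear clamping: fire iff the weighted sum is positive."""
    return 1 if x > 0 else 0

class PerceptronUnit:
    """One learned feature: a thresholded linear combination of
    previously learned features."""

    def __init__(self, n_inputs, lr=0.1):
        self.w = [0.0] * n_inputs   # one weight per input feature
        self.b = 0.0                # bias
        self.lr = lr                # learning rate

    def output(self, x):
        s = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return threshold(s)

    def train_online(self, x, target):
        """Perceptron rule: adjust weights only on error; no hidden
        units and no backpropagated error signal."""
        err = target - self.output(x)
        if err:
            self.w = [wi + self.lr * err * xi for wi, xi in zip(self.w, x)]
            self.b += self.lr * err

# Learn "A and B" from two previously learned features (linearly separable).
unit = PerceptronUnit(2)
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
for _ in range(20):
    for x, y in data:
        unit.train_online(x, y)
print([unit.output(x) for x, _ in data])  # [0, 0, 0, 1]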

It depends on your model of concepts. For mine, the NN mechanism is not
enough to learn a concept. See
http://nars.wang.googlepages.com/wang.categorization.pdf

> Children also learn language as a progression toward increasingly complex
> patterns.

Sure, I have no problem with that.

> - phonemes beginning at 2-4 weeks
> - phonological rules for segmenting continuous speech at 7-10 months [1]
> - words (semantics) beginning at 12 months
> - simple sentences (syntax) at 2-3 years
> - compound sentences around 5-6 years

Since I don't think AGI should accurately duplicate human
intelligence, I make no attempt to follow the same process.

> Attempts to change the modeling order are generally unsuccessful.

It depends. For example, of course an AGI also needs to learn simple
sentences before compound sentences, but I don't think it is necessary
for it to start at phonemes.

> For example, attempting to parse a sentence first and then extract its
> meaning does not work.  You cannot parse a sentence without semantics.
> For example, the correct parse of "I ate pizza with NP" depends on whether
> NP is "pepperoni", "a fork", or "Sam".

Fully agree. See http://nars.wang.googlepages.com/wang.roadmap.pdf ,
Section 3(2).
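
To make the ambiguity concrete: "with NP" attaches to the noun when NP is
a topping, but to the verb when NP is an instrument or a co-eater, so the
parse cannot be chosen before the semantics is known. A toy disambiguator
(the semantic lexicon here is invented for illustration; a real system
would have to learn such categories):

# Toy disambiguation of "I ate pizza with <NP>".
SEMANTIC_CLASS = {
    "pepperoni": "topping",    # part of the pizza
    "a fork": "instrument",    # tool used for eating
    "Sam": "person",           # co-eater
}

def parse(np):
    """Pick the attachment of 'with <np>' from semantics, not syntax."""
    cls = SEMANTIC_CLASS.get(np)
    if cls == "topping":
        return f"[I [ate [pizza with {np}]]]"    # PP modifies the noun
    if cls in ("instrument", "person"):
        return f"[I [[ate pizza] with {np}]]"    # PP modifies the verb
    return f"ambiguous: no semantics for {np!r}" # syntax alone cannot decide

for np in ("pepperoni", "a fork", "Sam"):
    print(parse(np))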

> Now when we hard code knowledge about logic, quantifiers, time, and other
> concepts and then try to retrofit NLP to it, we are modeling language in
> the worst possible order.  Such concepts, needed to form compound
> sentences, are learned at the last stage of language development.  In
> fact, some tribal languages such as Piraha [2] do not ever reach this
> stage, even for adults.

It depends on what you mean by "logic" and so on. Of course things
like propositional logic and predicate logic are not innate, but
learned at a very late age. However, I believe there is an "innate
logic", a general-purpose reasoning-learning mechanism, which must be
coded in the initial structure of the system. See
http://nars.wang.googlepages.com/wang.roadmap.pdf , Section 4(3).

I don't think anyone is arguing that learning can come from nowhere.
The difference is in what should be included in this innate logic. For
example, I argued that "inheritance" should be included in it in my
book, Section 10.2 (sorry, no on-line material).

> My caution is that any language model we develop has to be trainable in
> order from simple to complex.

Again, no problem here.

> The model has to be able to first learn simple sentences in the absence
> of any knowledge of logical relations, and then there must be a mechanism
> for learning such relations.

As mentioned above, to me "inheritance" is the basic general-special
relation, "and" is the concurrence of two sub-patterns, and
"before-after" is basic temporal order in experience. I don't think
any of them is "learned", except their accurate and declarative
definitions.
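
Read concretely, the claim is that what is innate is a small set of
relation types, not any learned content. A hypothetical mini-encoding of
those three relations (loosely following Narsese notation; not actual NARS
code):

from dataclasses import dataclass

@dataclass(frozen=True)
class Term:
    name: str

@dataclass(frozen=True)
class Inheritance:
    """<S --> P>: S is a special case of P -- the innate "is-a"."""
    subject: Term
    predicate: Term

@dataclass(frozen=True)
class Conjunction:
    """(A && B): two sub-patterns occurring together -- the innate "and"."""
    a: Term
    b: Term

@dataclass(frozen=True)
class Sequence:
    """(A &/ B): A experienced before B -- the innate "before-after"."""
    first: Term
    second: Term

print(Inheritance(Term("robin"), Term("bird")))      # robins are birds
print(Conjunction(Term("red"), Term("round")))       # red and round together
print(Sequence(Term("lightning"), Term("thunder")))  # lightning, then thunder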

> I realize that human models of logical relations must be horribly
> inefficient, given how long it takes children to learn them.  I think to
> solve AGI, we need to develop a better understanding of such models.  I do
> not hold out too much hope for a computationally efficient solution, given
> our long past record of failure.

Again, it depends on what you mean by "logical relations".

Pei
