If the "contents of a knowledge base for AGI will be beyond our ability to comprehend," then it is probably not human-level AGI; it is something entirely new, alien, completely foreign, and unable to interact with us at all, correct?
  If you mean it will have more knowledge than we do, and do things somewhat differently, then I agree on that point.
  "You can't look inside the box because it's 10^9 bits."
Size is not an acceptable barrier to looking inside.  Wikipedia is huge and will keep growing indefinitely, yet I can look inside it and see that "poison ivy causes rashes" or the like.
The AGI will have enormous complexity, I agree, but you should ALWAYS be able to look inside it.  Maybe not in the traditional sense of pages of code or a simple set of rules, but the AGI itself HAS to be able to generalize and explain what it is doing.
  So a query like "I see leaves that look like this (supply picture); can I pick them up safely?" would generate a human-readable answer that can itself be debugged. Or asking about the process of doing something would generate a possible plan the AI would follow, and a human could say "no, that's not right," causing the AI to go back and reconsider with the new information.
  We can always look inside the 'logic' of what the AGI is doing, even if we cannot easily change it directly ourselves.

The four-step process doesn't really seem applicable to me for a general AI:
>1. Develop a quantifiable criterion for success, a test score.
>2. Develop a theory of learning.
>3. Develop a training and test set (about 10^9 bits compressed).
>4. Tune the learning model to improve the score.

This opaque test-and-tune approach works only for easily defined, reduced-domain tasks such as sound processing, vision, or character recognition.
1. The criterion itself is success, as defined by each individual and by grouped sets of tasks.
2. We can and need to develop a theory.
3. How do we realistically define a training or test set of this size?
Each piece of the training set is intertwined, as you said, with too many others; you can't have one part without the other, so which would come first?
The test set would have to be actual 'experience' on an ongoing basis, and it would be continually reorganizing itself.

The theory will be the key factor here, and will determine how the AI is created.
  I have become more and more of the mind that the AI must be grown up from the ground: given a few small abilities and access to many specialized low-level NN methods such as pattern recognition and motor control, then iteratively improved alongside humans.
  It will always have to reason and plan with the limited information and structures it has, KNOW this internally, and seek out new information and structures as needed.
  Having looked at the neural-network type AI algorithms, I don't see any fathomable way that that kind of architecture could create a full AGI by itself.  It works well on the vision task because it can be given full examples and has a definite, easily graded outcome.  But a common-sense world-knowledge training set with thousands of examples of each thing is not something we can create.
  Modeling against human intelligence (which I'm usually not a stickler for), we see that given a few examples of a behavior, we are able to learn.  (Though my daughter still cannot get that the puppy is a 'she,' not a 'he'.)  We need to do this with the common-sense type of AI as well.

James Ratcliff

Matt Mahoney <[EMAIL PROTECTED]> wrote:
James Ratcliff <[EMAIL PROTECTED]> wrote:
>Well, words and language-based ideas/terms adequately describe much of the upper levels of human interaction and seem
>appropriate in that case.
>
>It fails of course when it devolves down to the physical level, i.e. vision or motor-cortex skills, but other than that, using
>language internally would seem natural, and it would be much easier to look inside the box, see what is going on, and correct the
>system's behaviour.

No, no, no, that is why AI failed.  You can't look inside the box because it's 10^9 bits.  Models that are simple enough to debug are too simple to scale.  How many times will we repeat this mistake?  The contents of a knowledge base for AGI will be beyond our ability to comprehend.  Get over it.  It will require a different approach.

1. Develop a quantifiable criterion for success, a test score.
2. Develop a theory of learning.
3. Develop a training and test set (about 10^9 bits compressed).
4. Tune the learning model to improve the score.

Example:

1. Criteria: SAT analogy test score.
2. Theory: a word association matrix reduced by singular value decomposition (SVD).
3. Data: 50M word corpus of news articles.
4. Results: http://iit-iti.nrc-cnrc.gc.ca/iit-publications-iti/docs/NRC-48255.pdf

An SVD factored word association matrix seems pretty opaque to me.  You can't point to which matrix elements represent associations like cat-dog, moon-star, etc, nor will you be inserting such knowledge for testing.  If you want to understand it, you have to look at the learning algorithm.  It turns out that there is an efficient neural model for SVD.  http://gen.gorrellville.com/gorrell06.pdf
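As a rough illustration of steps 2-4 at toy scale, one could factor a small word co-occurrence matrix like this (the corpus, window size, and rank below are my own illustrative choices, not the setup from the cited paper):

```python
# Toy sketch: learn word associations by factoring a co-occurrence
# matrix with SVD. Corpus, window, and rank are illustrative only.
import numpy as np

corpus = [
    "the cat is a pet",
    "the dog is a pet",
    "the moon is in the sky",
    "a star is in the sky",
]

# Build a symmetric word-by-word co-occurrence matrix: counts of
# words appearing within a +/-2 word window of each other.
vocab = sorted({w for line in corpus for w in line.split()})
index = {w: i for i, w in enumerate(vocab)}
M = np.zeros((len(vocab), len(vocab)))
for line in corpus:
    words = line.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - 2), min(len(words), i + 3)):
            if j != i:
                M[index[w], index[words[j]]] += 1

# Factor and truncate to rank k: each word becomes a dense k-dim
# vector, and associations emerge as geometric similarity.
U, S, Vt = np.linalg.svd(M)
k = 2
embed = U[:, :k] * S[:k]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embed[index["cat"]], embed[index["dog"]]))   # near 1.0
print(cosine(embed[index["cat"]], embed[index["moon"]]))
```

The point of the sketch is that "cat" and "dog" end up close in the reduced space because they share contexts, yet no single matrix element anywhere stores the fact "cat is like dog" - the knowledge is distributed across the factorization, which is exactly the opacity being discussed.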

It should not take decades to develop a knowledge base like Cyc.  Statistical approaches can do this in a matter of minutes or hours.
 
-- Matt Mahoney, [EMAIL PROTECTED]


This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe or change your options, please go to: http://v2.listbox.com/member/?list_id=303



_______________________________________
James Ratcliff - http://falazar.com


