--- On Sun, 9/21/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:

>On Sun, Sep 21, 2008 at 8:08 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

>>--- On Sun, 9/21/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:

>>>Text compression is IMHO a terrible way of measuring incremental progress 
>>>toward AGI.  Of course it may be very valuable for other purposes...

>>It is a way to measure progress in language modeling, which is an important 
>>component of AGI.

>That is true, but I think that measuring progress in AGI **components** is a 
>very poor approach to measuring progress toward AGI....

>Focusing on testing individual system components tends to lead AI developers 
>down a path of refining system components for optimum functionality on 
>isolated, easily-defined test problems that may not have much to do with 
>general intelligence.  

A language model by itself can pass the Turing test, because it knows P(A|Q) 
for any question Q and answer A. However, to model a single person, the 
training text would have to be a transcript of everything that person has 
communicated since birth. We don't have that kind of training data, and the 
result would not be very useful anyway. I would rather use a system trained on 
Wikipedia; the choice of corpus doesn't change the learning algorithms.
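
To make that concrete, here is a toy Python sketch (my own illustration, not 
code from any actual compressor): a single character-level model whose summed 
-log2 P(c|context) over a text is the size an ideal arithmetic coder would 
compress it to, and whose same probabilities give P(A|Q) = P(QA)/P(Q) for 
scoring candidate answers. The bigram context, add-one smoothing, and 
256-symbol alphabet are arbitrary simplifications.

import math
from collections import defaultdict

class BigramModel:
    def __init__(self, text):
        # counts[prev][c] = how many times character c followed character prev
        self.counts = defaultdict(lambda: defaultdict(int))
        prev = ''
        for c in text:
            self.counts[prev][c] += 1
            prev = c

    def prob(self, prev, c):
        # add-one smoothing over a crude 256-symbol alphabet
        row = self.counts[prev]
        return (row.get(c, 0) + 1) / (sum(row.values()) + 256)

    def bits(self, text):
        # ideal compressed size of text in bits: -sum log2 P(c | prev)
        prev, total = '', 0.0
        for c in text:
            total += -math.log2(self.prob(prev, c))
            prev = c
        return total

    def log_p_answer(self, question, answer):
        # log2 P(answer | question) = log2 P(question+answer) - log2 P(question)
        #                           = bits(question) - bits(question+answer)
        return self.bits(question) - self.bits(question + answer)

# In practice the training text would be something like a Wikipedia dump;
# a short literal string just keeps the sketch self-contained.
model = BigramModel("the sky is blue. the grass is green. the sun is bright.")
print(model.bits("the sky is blue."))              # fewer bits = better model
print(model.log_p_answer("the sky is ", "blue."))  # compare candidate answers
print(model.log_p_answer("the sky is ", "green."))

The same numbers serve both purposes: lower bits() on held-out text means a 
better compressor, and ranking or sampling answers by P(A|Q) is what a chatbot 
built on the model would do.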

One can argue that a system isn't AGI if it can't see, walk, experience human 
emotions, etc. There isn't a compression test for these other aspects of 
intelligence, but so what?

-- Matt Mahoney, [EMAIL PROTECTED]



