--- On Fri, 10/31/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> The question that worries me is: **What does it matter if AIXI __is__ 
> optimal, given that it uses infinitely many resources**??

Because it puts machine learning research on a firmer theoretical foundation. 
For example, we know from experimental results that the longer you train a 
neural network on a data set, the lower the training error gets. But when you 
test it on a held-out set, there is an optimal amount of training, after which 
results get worse. AIXI explains this observation. As the network is trained, 
it grows in algorithmic complexity. The proper stopping point is when it is 
just complex enough to be consistent with the training data, and no more.

This principle applies not just to neural networks, but to all fields of 
machine learning, such as clustering, genetic algorithms, SVMs, decision 
trees, and polynomial regression.
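The stopping-point effect described above can be sketched with polynomial 
regression, where polynomial degree plays the role of model complexity. This 
is only an illustration (toy data: a noisy sine curve, made-up sizes and noise 
level, numpy only): training error keeps falling as the degree grows, while 
error on held-out data bottoms out at an intermediate degree and then climbs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (assumed for illustration): noisy samples of a smooth function.
x_train = np.linspace(-1, 1, 20)
y_train = np.sin(np.pi * x_train) + rng.normal(0, 0.3, x_train.size)
x_test = np.linspace(-1, 1, 200)
y_test = np.sin(np.pi * x_test)

def fit_errors(degree):
    # Fit a least-squares polynomial of the given degree (the "complexity"),
    # then measure mean squared error on training and held-out data.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

degrees = range(1, 16)
errors = [fit_errors(d) for d in degrees]
train_errs = [e[0] for e in errors]
test_errs = [e[1] for e in errors]

# Training error only improves with more complexity, but held-out error
# is minimized at some intermediate degree -- the proper stopping point.
best = degrees[int(np.argmin(test_errs))]
```

The degree minimizing held-out error is the analogue of stopping training 
when the model is "just complex enough" for the data, and no more.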

> And, what does it matter if AIXI-tl is near-optimal, given that it uses 
> infeasibly much resources?

AIXI-tl was not proven optimal. There are many better solutions for specific 
cases. The fact that AIXI is not computable suggests that an ad hoc approach 
is in fact necessary. There is no "neat" solution to AGI.

-- Matt Mahoney, [EMAIL PROTECTED]



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/