--- On Thu, 10/30/08, Pei Wang <[EMAIL PROTECTED]> wrote:

> Even if that is the case, I don't accept it as a reason
> to tolerate this opinion in AGI research.

The point is not that AGI should model things at the level of atoms. The point 
is that we should apply the principle of Occam's Razor to machine learning and 
AGI. We already do that in all practical learning algorithms. For example, in 
NARS, a link between two concepts like (if X then Y) has a probability and a 
confidence that depend on the counts of (X, Y) and (X, not Y). This model is a 
simplification from a sequence of n events (with algorithmic complexity 2n) to 
two small integers (with algorithmic complexity 2 log n). The reason this often 
works in practice is Occam's Razor. That might not be the case if physics were 
not computable.
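
To make the bookkeeping concrete, here is a minimal sketch in Python. It 
assumes the standard NARS truth-value formulas (frequency = w+/w, 
confidence = w/(w+k), with evidential horizon k = 1); the function and 
variable names are illustrative, not part of NARS itself.

    # Minimal sketch: summarize a sequence of (X, Y) observations by two counts,
    # then derive a NARS-style truth value from them. Assumes f = w+/w and
    # c = w/(w+k); names and example data are illustrative only.

    K = 1  # evidential horizon (assumed default)

    def truth_value(pos, neg, k=K):
        """Return (frequency, confidence) from counts of (X,Y) and (X, not Y)."""
        w = pos + neg
        frequency = pos / w if w > 0 else 0.5
        confidence = w / (w + k)
        return frequency, confidence

    # A sequence of n observations of X is reduced to two small counts.
    events = [True, True, False, True, True, True, False, True]  # was Y observed?
    pos = sum(events)          # count of (X, Y)
    neg = len(events) - pos    # count of (X, not Y)

    f, c = truth_value(pos, neg)
    print(f"n={len(events)}  w+={pos}  w-={neg}  f={f:.3f}  c={c:.3f}")
    # Storing the two counts takes on the order of 2 log n bits, versus roughly
    # n bits for the raw sequence -- the simplification described above.
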

-- Matt Mahoney, [EMAIL PROTECTED]




