On Thu, Oct 30, 2008 at 5:36 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> The point is not that AGI should model things at the level of atoms.

I didn't accuse anyone of doing that. What I said is: predicting the
environment as a Turing machine (symbol by symbol) is like constructing
a building atom by atom. The problem is not merely the complexity, but
the level of description.

> The point is that we should apply the principle of Occam's Razor to machine 
> learning and AGI.

If by "Occam's Razor" you mean "the learning mechanism should prefer
simpler result", I don't think anyone has disagreed (though people may
not use that term, or may justify it differently), but if by "Occam's
Razor" you mean "learning should start by giving simpler hypotheses
higher prior probability", I still don't see why.
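
To make the contrast concrete, here is a sketch of the second reading
(my own illustration, not from any particular system): a
Solomonoff-style prior that weights each hypothesis by
2^-(description length) before any data is seen.

    def simplicity_prior(hypotheses):
        # Solomonoff-style prior over a finite set of hypotheses,
        # each given as a description string: weight each one by
        # 2^-(description length), then normalize.
        weights = {h: 2.0 ** -len(h) for h in hypotheses}
        total = sum(weights.values())
        return {h: w / total for h, w in weights.items()}

    # e.g. simplicity_prior(['0', '10', '110']) gives the shortest
    # description the highest prior:
    # {'0': 0.571..., '10': 0.285..., '110': 0.142...}

The first reading, by contrast, only prefers the simpler of two
hypotheses that already fit the data equally well, and needs no prior
at all.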

> We already do that in all practical learning algorithms. For example in NARS, 
> a link between two concepts like (if X then Y) has a probability and a 
> confidence that depends on the counts of (X,Y) and (X, not Y).

Yes, except it is not a "probability" in the sense of "limit of frequency".
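
A minimal sketch of that counts-to-truth-values mapping, assuming the
standard NARS formulas f = w+/w and c = w/(w+k), with the evidential
parameter k set to 1 (my own illustration, not code from any NARS
release):

    def nars_truth(pos, neg, k=1.0):
        # Map counts of (X, Y) and (X, not Y) to a NARS-style
        # (frequency, confidence) pair.
        w = pos + neg                # total evidence
        f = pos / w if w else 0.5    # frequency: share of positive
                                     # evidence (0.5 is a neutral
                                     # default assumed for w == 0)
        c = w / (w + k)              # confidence: approaches 1 as
                                     # evidence accumulates
        return f, c

    # e.g. 8 observations of (X, Y) and 2 of (X, not Y):
    # nars_truth(8, 2) -> (0.8, 0.909...)

Note that f here is a ratio over finite evidence, not a limit of
frequency, which is exactly the distinction being made above.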

> This model is a simplification from a sequence of n events (with algorithmic 
> complexity 2n) to two small integers (with algorithmic complexity 2 log n).
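
A rough check of that arithmetic, with illustrative numbers of my own
(reading "2n" as roughly 2 bits per event is my assumption):

    import math
    n = 1000                       # number of observed events
    raw_bits = 2 * n               # ~2 bits per event, stored verbatim
    count_bits = 2 * math.log2(n)  # two counters, each at most n
    # raw_bits = 2000, count_bits = 19.93...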

This is your interpretation, which is fine, but I don't see why I must
see it the same way. I do agree that it is a summary of experience.

> The reason this often works in practice is Occam's Razor. That might not be 
> the case if physics were not computable.

Again, this is a statement of your belief, not a justification for it.

Pei

