What Hutter proved is (very roughly) that given massive computational resources, following Occam's Razor will be -- within some possibly quite large constant -- the best way to achieve goals in a computable environment...
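(To make the flavor of that result concrete: below is a minimal toy sketch, in Python, of the Solomonoff-style weighting that AIXI builds on. It is emphatically not Hutter's construction -- real Solomonoff induction sums over all programs for a universal machine and is incomputable -- so this swaps in a tiny hypothesis class of periodic bit patterns, and the function names are made up for illustration. The point is just that every hypothesis consistent with the data gets weight 2^-length, so the shortest consistent ones dominate the prediction, and that even this toy version works by brute-force enumeration of the hypothesis class, which is exactly where the "massive computational resources" come in.)

# Toy sketch (NOT real Solomonoff induction, which is incomputable):
# hypotheses are all periodic bit patterns up to a maximum period, each
# weighted 2^(-length) as a crude stand-in for 2^(-K(h)).  The prediction
# is a weighted vote of the hypotheses still consistent with the data.

from itertools import product

def hypotheses(max_period=8):
    """Enumerate (pattern, weight) pairs; shorter patterns get more prior weight."""
    for period in range(1, max_period + 1):
        for bits in product("01", repeat=period):
            pattern = "".join(bits)
            yield pattern, 2.0 ** (-len(pattern))

def predict_next(observed, max_period=8):
    """Posterior-weighted prediction of the next bit after the observed string."""
    votes = {"0": 0.0, "1": 0.0}
    for pattern, weight in hypotheses(max_period):
        generated = pattern * (len(observed) // len(pattern) + 2)
        if generated.startswith(observed):          # hypothesis consistent with data
            votes[generated[len(observed)]] += weight
    total = sum(votes.values())
    return {bit: v / total for bit, v in votes.items()}

print(predict_next("010101"))   # the shortest consistent pattern, "01", dominates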
That's not exactly "proving Occam's Razor", though it is a proof related to Occam's Razor... One could easily argue it is totally irrelevant to AI due to its assumption of massive computational resources.

ben g

On Tue, Oct 28, 2008 at 2:23 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> Hutter proved Occam's Razor (AIXI) for the case of any environment with a computable probability distribution. It applies to us because the observable universe is Turing computable according to currently known laws of physics. Specifically, the observable universe has a finite description length (approximately 2.91 x 10^122 bits, the Bekenstein bound of the Hubble radius).
>
> AIXI has nothing to do with insufficiency of resources. Given unlimited resources we would still prefer the (algorithmically) simplest explanation because it is the most likely under a Solomonoff distribution of possible environments.
>
> Also, AIXI does not state "the simplest answer is the best answer". It says that the simplest answer consistent with observation so far is the best answer. When we are short on resources (and we always are, because AIXI is not computable), we may choose a different explanation than the simplest one. However, this does not make the alternative correct.
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
> --- On Tue, 10/28/08, Pei Wang <[EMAIL PROTECTED]> wrote:
>
> > From: Pei Wang <[EMAIL PROTECTED]>
> > Subject: [agi] Occam's Razor and its abuse
> > To: agi@v2.listbox.com
> > Date: Tuesday, October 28, 2008, 11:58 AM
> >
> > Triggered by several recent discussions, I'd like to make the following position statement, though I won't commit myself to a long debate on it. ;-)
> >
> > Occam's Razor, in its original form, goes "entities must not be multiplied beyond necessity", and it is often stated as "All other things being equal, the simplest solution is the best" or "when multiple competing theories are equal in other respects, the principle recommends selecting the theory that introduces the fewest assumptions and postulates the fewest entities" --- all from http://en.wikipedia.org/wiki/Occam%27s_razor
> >
> > I fully agree with all of the above statements.
> >
> > However, to me, there are two common misunderstandings associated with it in the context of AGI and the philosophy of science.
> >
> > (1) To take this statement as self-evident or as a stand-alone postulate.
> >
> > To me, it is derived from, or implied by, the insufficiency of resources. If a system has sufficient resources, it has no good reason to prefer a simpler theory.
> >
> > (2) To take it to mean "The simplest answer is usually the correct answer."
> >
> > This is a very different statement, which cannot be justified either analytically or empirically. When theory A is an approximation of theory B, usually the former is simpler than the latter, but less "correct" or "accurate" in terms of its relation with all available evidence. When we are short on resources and have a low demand for accuracy, we often prefer A over B, but it does not mean that by doing so we judge A as more correct than B.
> >
> > In summary, in choosing among alternative theories or conclusions, the preference for simplicity comes from a shortage of resources, though simplicity and correctness are logically independent of each other.
> >
> > Pei
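(A concrete toy version of Pei's point (2), as a sketch: it assumes numpy and a made-up dataset in which the evidence actually follows a quadratic "theory B", so a linear "theory A" is a simpler approximation that needs fewer parameters but fits the same evidence less accurately. Nothing here is from Pei's post; it only illustrates that preferring A can be a resource/accuracy trade-off rather than a judgment that A is more correct.)

# Theory A (a straight line) as a simpler approximation of theory B (a quadratic).
# A is cheaper to state and to use; B matches the available evidence better.

import numpy as np

x = np.linspace(0, 10, 21)
y = 2.0 * x + 0.1 * x**2             # the "evidence" actually follows theory B

theory_a = np.polyfit(x, y, deg=1)    # 2 parameters: slope, intercept
theory_b = np.polyfit(x, y, deg=2)    # 3 parameters: quadratic, slope, intercept

def sse(coeffs):
    """Sum of squared errors of a polynomial model against the data."""
    return float(np.sum((np.polyval(coeffs, x) - y) ** 2))

print("theory A:", len(theory_a), "parameters, SSE =", round(sse(theory_a), 3))
print("theory B:", len(theory_b), "parameters, SSE =", round(sse(theory_b), 3))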
-- Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects." -- Robert Heinlein