If not verify, what about falsify? I've always seen Occam's Razor as a tool for selecting the first hypothesis to attempt to falsify. If you can't, or haven't, falsified it, then it's usually the best assumption to go on (presuming that the costs of failure are evenly distributed).

OTOH, Occam's Razor clearly isn't quantitative, and it doesn't always pick the right answer, just one that's "good enough based on what we know at the moment". (Again presuming evenly distributed costs of failure.)

(And actually that's an oversimplification. I've been treating the cost of presuming the theory chosen by Occam's Razor as equal to or lower than the costs of the alternatives. Whoops! The simplest workable approach isn't always the cheapest, and given that all approaches not yet falsified have roughly equal plausibility... perhaps one should instead presume the cheapest of all theories that have been vetted against current knowledge.)
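
To make that concrete, here is a toy version of the decision rule in Python. All the theory names, plausibilities, and costs are invented for illustration; the only point is that when every surviving theory is about equally plausible, the cheapest-to-presume one wins:

# Toy decision rule: among theories not yet falsified (so roughly equal
# plausibility), pick the one with the lowest expected total cost.
# All numbers below are made up for illustration.

theories = [
    # (name, cost_to_presume, probability_of_failure, cost_of_failure)
    ("simplest",  1.0, 0.10, 50.0),
    ("cheapest",  0.2, 0.10, 50.0),
    ("elaborate", 5.0, 0.10, 50.0),
]

def expected_cost(presume, p_fail, c_fail):
    # What it costs to act on the theory, plus the chance it fails
    # times what a failure would cost us.
    return presume + p_fail * c_fail

best = min(theories, key=lambda t: expected_cost(*t[1:]))
print(best[0])  # -> "cheapest": with equal plausibility, presumption cost decides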

Occam's Razor is fine for its original purpose, but when you try to apply it to practical rather than purely logical problems, you start needing to weigh relative costs: both the cost of presuming a theory and the cost of its failing. Often a solution based on a theory known to be incorrect (e.g., Newton's laws) is "good enough", so you never need to settle which answer is correct. NASA uses Newton, not Einstein, even though Einstein may be right and Newton is known to be wrong.
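
A quick back-of-envelope check of the NASA point (my numbers, not anyone's in this thread): at spacecraft speeds, the relativistic correction to Newtonian mechanics is around one part in a billion, far below typical engineering tolerances:

# Size of the relativistic correction (gamma - 1) at ~Earth escape velocity.
c = 299_792_458.0   # speed of light, m/s
v = 11_200.0        # rough Earth escape velocity, m/s

gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5
print(f"relative correction: {gamma - 1:.1e}")  # ~7e-10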

Pei Wang wrote:
Ben,

It seems that you agree the issue I pointed out really exists, but
just take it as a necessary evil. Furthermore, you think I have also
assumed the same thing, though I failed to see it. I won't argue
against the "necessary evil" part --- so long as you agree that those
"postulates" (such as "the universe is computable") are not
convincingly justified, I won't try to disprove them.

As for the latter part, I don't think you can convince me that you
know me better than I know myself. ;-)

The following is from
http://nars.wang.googlepages.com/wang.semantics.pdf , page 28:

If the answers provided by NARS are fallible, in what sense are these answers
"better" than arbitrary guesses? This leads us to the concept of "rationality".
When infallible predictions cannot be obtained (due to insufficient knowledge
and resources), answers based on past experience are better than arbitrary
guesses, if the environment is relatively stable. To say an answer is only a
summary of past experience (thus no future confirmation guaranteed) does
not make it equal to an arbitrary conclusion — it is what "adaptation" means.
Adaptation is the process in which a system changes its behaviors as if the
future is similar to the past. It is a rational process, even though individual
conclusions it produces are often wrong. For this reason, valid inference rules
(deduction, induction, abduction, and so on) are the ones whose conclusions
correctly (according to the semantics) summarize the evidence in the premises.
They are "truth-preserving" in this sense, not in the model-theoretic sense that
they always generate conclusions which are immune from future revision.

--- so you see, I don't assume adaptation will always be successful,
or even successful with some fixed probability. You can dislike this
conclusion, but you cannot say it is the same as what is assumed by
Novamente and AIXI.

Pei

On Tue, Oct 28, 2008 at 2:12 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
On Tue, Oct 28, 2008 at 10:00 AM, Pei Wang <[EMAIL PROTECTED]> wrote:
Ben,

Thanks. So the other people now see that I'm not attacking a straw man.

My solution to Hume's problem, as embedded in the experience-grounded
semantics, is to assume no predictability, but to justify induction as
adaptation. However, it is a separate topic which I've explained in my
other publications.

Right, but justifying induction as adaptation only works if the environment
is assumed to have certain regularities that can be adapted to. In a
random environment, adaptation won't work. So, to justify induction as
adaptation, you still have to make *some* assumptions about the world.
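
A tiny simulation makes this point vivid (my toy example, not Ben's): an adapter that predicts the majority bit seen so far beats chance on a biased source, but on a genuinely random one it can do no better than guessing:

import random

def adapter_accuracy(p_one, n=100_000, seed=0):
    # Frequency-counting "adapter": guess the majority bit observed so far.
    rng = random.Random(seed)
    ones = correct = 0
    for t in range(n):
        guess = 1 if ones * 2 > t else 0
        bit = 1 if rng.random() < p_one else 0
        correct += (guess == bit)
        ones += bit
    return correct / n

print(adapter_accuracy(0.7))  # ~0.70: regularity present, adaptation pays
print(adapter_accuracy(0.5))  # ~0.50: pure noise, adaptation buys nothing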

The Occam prior gives one such assumption: that (to give just one form) sets
of observations in the world tend to be producible by short computer
programs.
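
One way to render that assumption concrete (a sketch of the idea only, not a universal prior; the hypothesis names and bit-lengths are invented stand-ins): weight each hypothesis by two to the minus its description length, so shorter programs get exponentially more prior mass:

# Occam-style prior: prior mass proportional to 2**(-description_length).
hypotheses = {
    # name: description length in bits (illustrative values)
    "constant-zero": 3,
    "alternating":   5,
    "lookup-table":  40,
}

weights = {h: 2.0 ** -bits for h, bits in hypotheses.items()}
z = sum(weights.values())
for h, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{h:14s} prior = {w / z:.3g}")
# The 40-bit hypothesis gets 2**-37 the mass of the 3-bit one: that
# exponential penalty on length is the Occam prior assumption itself.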

For adaptation to successfully carry out induction, *some* property at
least vaguely comparable to the Occam prior must hold, and I'm not sure
whether you have articulated which one you assume, or whether you leave
this open.

In effect, you implicitly assume something like an Occam prior, because
you're saying that a system with finite resources can successfully adapt to
the world ... which means that sets of observations in the world *must* be
approximately summarizable via subprograms that can be executed within this
system.

So I argue that, even though it's not your preferred way to think about it,
your own approach to AI theory and practice implicitly assumes some variant
of the Occam prior holds in the real world.
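
"Approximately summarizable" can be illustrated crudely with off-the-shelf compression (my proxy, not Ben's): treat zlib's compressed size as a stand-in for the length of the shortest subprogram that reproduces the observations. Regular streams compress; random ones don't:

import os, zlib

regular = b"01" * 5_000        # highly regular observation stream
noise = os.urandom(10_000)     # incompressible stream

for name, data in [("regular", regular), ("random", noise)]:
    ratio = len(zlib.compress(data)) / len(data)
    print(f"{name}: compressed to {ratio:.1%} of original size")
# regular: well under 1%; random: ~100% -- no short summary exists,
# so a finite system has nothing to adapt to.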

Here I just want to point out that the original and basic meaning of
Occam's Razor and those two common (mis)usages of it are not
necessarily the same. I fully agree with the former, but not the
latter, and I haven't seen any convincing justification of the latter.
Instead, they are often taken for granted under the name of Occam's
Razor.

I agree that the notion of an Occam prior is a significant conceptual step
beyond the original "Occam's Razor" precept enunciated long ago.

Also, I note that, for those who posit the Occam prior as a **prior
assumption**, there is not supposed to be any convincing justification for
it. The idea is simply this: one must make *some* assumption (explicitly or
implicitly) if one wants to do induction, and this is the assumption that
some people choose to make.

-- Ben G


