>I formalize this argument as a model P(H)P(E|H), perform deductive
>inference to compute P(H|E),  and then apply the result back to the
>world.  I claim that my result P(H|E) models my belief about H, updated
>by the evidence E.   That step, applying my model to make a claim about
>the world (or at least what I believe about the world), is not
>deduction.  I never said it was.  I can't prove to anyone, including
>myself, that this belief is "right."  
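
(For reference, the deductive computation described above is just Bayes
rule: P(H|E) = P(H)P(E|H) / P(E), where P(E) is obtained by summing
P(H)P(E|H) over H.  That step is not in dispute.)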

The step to which you refer is induction.  H is the hypothesis that the
world is such that it makes sense to use induction.  You are using
induction to justify induction, which is circular.

>The applicability of this conclusion to the world assumes that the
>connection between E and H postulated in my model actually obtains in
>the world, and that my observations on E are accurate, and that my
>prior beliefs about H are not too dogmatic.  Any time we apply a
>mathematical model to the world, the conclusions we draw are subject to
>the assumption that the model is a sufficiently good representation for
>the purpose to which we are putting it.  

When you postulate a connection between E and H in your model, you
assume that induction is true.  This forbids you from using your model
to draw any conclusions about the validity of induction.

>If the joint distribution P(H,E) is a bad model, then the answer I get
>will be bad.   If it's an accurate model of my beliefs about the world,
>but a bad model of the world, then the answer will be a correct
>reflection of my beliefs, which will be wrong.  You're right that I
>can't prove the applicability of my model to the world.  All I can say
>about this analysis is that in my opinion, our living in a "learnable"
>universe is justified by the evidence.  I could, of course, be wrong.

I am not sure what you mean by "justified."  There is no valid chain of
reasoning by which you can update a posterior for this belief, so you
must have some other notion of justification.

>The latter model has much higher probability to me than the former, for
>reasons I've discussed.  Therefore, I make confident predictions about
>the future.  Of course, if I'm wrong (and I can't prove I'm not) then
>all my predictions will go haywire tomorrow.  However, I happen to
>believe (of course I could be wrong) that I have strong grounds for
>being confident in my predictions.

The point of the discussion is that there is no non-circular chain of
reasoning by which you can arrive at a higher probability that your
previous observations are germane to future prediction.

>In one of my models ("learnable universe") I assume at least some form
>of stationarity, and that the past is germane to the present and
>future.  Conditional on that model, what I see *right now* (including
>my memories of the past -- which are encoded as memory traces in the
>present state of the world) is the kind of thing I'd expect to see. 
>This is not circular.  I am postulating a model and looking at what
>kind of data I would expect under the model.

This tells you something about the world at the moment you made your
last observation.  Unless you assume your premise, it tells you nothing
about the world at this instant or in the future.

>Assuming a non-dogmatic prior, my posterior odds ratio for "learnable
>universe" against "monkeys typing" is  enormous.  Of course, there are
>lesser degrees of "unlearnability" that are not so easily refuted, and
>those have non-negligible posterior probability.
>
>This is not circular reasoning.
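
To make the quoted computation concrete: the posterior odds are the
prior odds times the likelihood ratio.  A minimal sketch in Python,
with made-up numbers purely for illustration:

    # Posterior odds for H1 = "learnable universe" vs H2 = "monkeys typing",
    # given some body of structured evidence E.  All figures are hypothetical.
    prior_h1, prior_h2 = 0.5, 0.5   # a non-dogmatic prior
    lik_e_h1 = 1e-3                 # P(E | learnable): structured data is unsurprising
    lik_e_h2 = 1e-30                # P(E | monkeys typing): structured data is wildly improbable
    posterior_odds = (prior_h1 * lik_e_h1) / (prior_h2 * lik_e_h2)
    print(posterior_odds)           # ~1e+27 -- "enormous", as claimed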

If all of this is done while you are being perfectly clear about the
distribution in question, then there is no problem.  Once you start
talking about the future, you need to assume that the distribution is
the same and the reasoning becomes quite blatantly circular.

>I don't understand what you mean.  Within the "monkeys typing" universe
>model, evidence about today does not influence my prediction for
>tomorrow, because tomorrow has nothing to do with today.  However,
>evidence about today is quite germane to the question of whether I'm in
>a "learnable" or "monkeys typing" universe.  Hence, it's germane to the
>question of whether what happens today has anything to do with what's
>going to happen tomorrow.

The problem is that this is a biconditional.  Evidence is germane iff
the world is learnable.  Therefore, using evidence to determine if the
world is learnable is circular.

>>The template for the particular form of circular reasoning in question here
>>requires a premise about an inference rule (or precondition thereof) and
>>then the use of that rule to establish the premise.  
>
>That is NOT what I am doing.

This is *precisely* what you are doing.

>I'm not using Bayes Rule to establish Bayes Rule.  Bayes Rule is a
>valid form of logical deduction, and stands on that merit.

Correct.  As I said, you are using Bayes rule, under the assumption that
the precondition for its applicability is valid, with the aim of
establishing the precondition.  Specifically, you are using Bayes rule,
a precondition of which is that the evidence was drawn from the correct
distribution.  You are then using this to increase your posterior on the
hypothesis that indeed the distribution does not change.

To fill in the template:  Premise = induction = stationarity.  Inference
rule = Bayes rule.

>To say this another way -- I do not attempt to establish by induction
>that induction is valid in general.  I aim to argue by induction that
>ours is a universe in which induction "works."  I also argue (and I
>hope to formalize this some day) that it is probable that I would be
>able to do this in a universe in which induction works, and improbable
>in any other kind of universe.  This provides strong evidence that ours
>is a universe in which induction works.  This is a perfectly reasonable
>and non-circular argument.

Unless you mean something non-standard by "works", this is a completely
circular argument.  When you assume X to increase your posterior on X,
this is circular.

>Sure.  This reinforces Clark's point that if you have a bad model
>you'll make bad inferences.  No one can prove any of us doesn't have a
>horribly bad model.  If the model you just cited is your model, then
>you'll go on making abysmal predictions quite confidently.  But it's
>not circular reasoning, it's just a bad model.

(regarding the counter-inductionist argument)

The argument is circular because it assumes the premise of
counter-inductionism to establish counter-inductionism.  It's a basic
principle of formal reasoning that such tactics form circular arguments
and that circular arguments are not valid.

The problem with accepting circular reasoning is precisely that it
destroys your ability to distinguish good hypotheses from bad ones.
The counter-inductionist view is not a model; it is a hypothesis.  The
fact that the same line of reasoning supports two completely
contradictory hypotheses should be a huge warning that one cannot accept
such circular reasoning.

>But note that you are missing the second part of my argument above. 
>You argue by counter-induction that ours is a universe in which
>counter-induction works.  But you cannot establish that it is probable
>that you should be able to do this in only those universes in which
>counter-induction works.  In fact, you've shown that you can argue by
>counter-induction that counter-induction works, in a universe in which
>it does *not* work.  Therefore, the likelihood ratio does not favor
>counter-induction as a valid general principle.

The point of the argument is that the probabilities are bogus because
they are arrived at by circular reasoning.  Arguing about which of two
bogus probabilities is larger is a pointless endeavor.

-- 
Ron Parr                                       email: [EMAIL PROTECTED]   
--------------------------------------------------------------------------
          Home Page: http://robotics.stanford.edu/~parr
