At 09:53 29/07/04 -0700, Hal Finney wrote:

Tell me again where I am going wrong.



OK.




Consider each of these examples:

117. q
...
191. Bp
...
207. p -> q

Now, we will say that the machine "believes" something if it is one of
its theorems, right?  So we can say that the machine "believes q", it
"believes Bp", and it "believes p->q", right?  We could equivalently say
it "believes q is true", etc., but that is redundant.  If it writes x
down as a theorem, we will say it believes x, which is shorthand for
saying that it believes x is true.  The "is true" part has no real
meaning and does not seem helpful.

We also have this shorthand Bx to mean "the machine believes x".  So we
(not the machine, but us, you and I!) can also write, Bq, BBp and B(p->q),
and all of these are true statements, right?




Up to here, you are right.







The problem arises when we start to use this same letter B in the
machine's theorems.  It is easy to slide back and forth between the
machine's B and our B.  But there is no a priori reason to assume that
they are the same.  That is something that has to be justified.



The problem arises here, indeed. And I disagree with what you are saying. We take, by personal choice, the totally NAIVE STANCE toward the machine. That means that, by definition, Bx means, for us and for the machine, that the machine believes x. Exactly as when the machine believes (p -> q): it means, for the machine as for us, (-p v q), that is, that p is false or q is true. If the machine believes Bp, it just means that the machine believes it will believe p. In case the machine prints Bp and then never prints p, it will mean (for us) that the machine has a false belief.






Focus on 207. p->q for a moment.  We know that, according to the machine's
rules, this theorem means that if it ever writes down p as a theorem,
it will write down q.



Take care: "p->q" is also true in case p is false (by propositional logic), so it could mean that the machine believes -p. Let us look at the following example, with f denoting any contradiction (that is, f can be seen as an abbreviation of (p & -p)). The machine obeys classical propositional calculus (CPC). Thus the machine believes every proposition of the form f -> p (for any p). And what you said is correct: that entails that if the machine ever believes f, it will believe p; and then, if we add the assumption that the machine is consistent, it will never believe f. That is, Bf is false about the machine, and so we know that [Bf -> <what-you-want>] will always be true (because we also know CPC).
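
As a small side illustration (a sketch in Python of my own, not part of the argument itself), one can check by brute-force truth table that f -> p is indeed a tautology of CPC when f abbreviates the contradiction (q & -q); a machine believing all CPC tautologies therefore believes every such implication:

    # Truth-table check that f -> p is a CPC tautology, with f the contradiction (q & -q).
    from itertools import product

    def implies(a, b):
        return (not a) or b

    for p, q in product([True, False], repeat=2):
        f = q and (not q)          # f is false under every valuation
        assert implies(f, p)       # so f -> p holds under every valuation

    print("f -> p is a classical tautology")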




Therefore it is true that Bp->Bq.



Yes. Because the machine obeys CPC.




This is simply
another way of saying the same thing!  Bp means that p is a theorem,
by definition of the letter B, in the real world.  And similarly Bq
means that q is a theorem.  Given that p->q is a theorem, then if p is
a theorem, so is q.  Therefore it is true that B(p->q) -> (Bp -> Bq).
This is not a theorem of the machine, it is a truth in the real world.


Right.
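
(A toy sketch in Python, just to picture this at our meta-level: a belief set closed under modus ponens. If "p->q" and "p" are both in it, then "q" ends up in it too, which is exactly the real-world truth B(p->q) -> (Bp -> Bq). The string handling is deliberately naive and only meant for such toy formulas.)

    # Close a set of believed formulas under modus ponens (toy string version).
    def close_under_modus_ponens(beliefs):
        beliefs = set(beliefs)
        changed = True
        while changed:
            changed = False
            for x in list(beliefs):
                if "->" in x:
                    antecedent, consequent = x.split("->", 1)
                    if antecedent in beliefs and consequent not in beliefs:
                        beliefs.add(consequent)
                        changed = True
        return beliefs

    # B(p->q) and Bp hold, hence Bq holds: q is derived.
    assert "q" in close_under_modus_ponens({"p", "p->q"})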




What I want to say is that 207. p->q "means" Bp -> Bq.  It means that if
the machine ever derives p, it will derive q.  This is a true statement
about the operations of the machine.  It is not a theorem of the machine.


OK, but what is a theorem of the machine is "p->q", independently of the fact
that *we* know this entails Bp -> Bq. So your expression
""p->q" means Bp -> Bq" is misleading. The naive stance is that the
machine believes p->q (or, if we want to insist, that the machine
believes "p->q" is true, but as you said this does not add anything).
Actually p->q could be false, and the machine could have false beliefs,
in which case both "p->q" and ""p->q" is true" are false.




When we talk about what something "means", I think it has to be what it
means to us, not what it means to the machine.



Why? If you talk with any platonist, you had better keep the naive stance. If not, it is as if you suspect some problem with the platonist's brain, and you will no longer be talking about the same thing with him. It will be much harder for you to show him that he is making a mistake, for example.




When the machine writes
117. q, it doesn't mean anything to the machine.



Why? With the naive stance, it means the machine believes q. For reasons perhaps unknown to us, the machine believes q. Perhaps the machine has made a visit to the KK island, and some native told her "if I am a knight then q". Or, more simply, the native said "1+1 = 2" and then later said q. I have said that the machine believes all classical tautologies, and that if the machine believes X and believes X->Y, then the machine believes Y. But I NEVER said that the machine believes ONLY the tautologies. The machine can have its personal life and acquire some personal non-logical beliefs (non-tautological beliefs) of its own, as in some of the problems I gave, where the machine develops beliefs about the knight/knave nature of the natives.



To us it means that the
machine believes q, or that the machine believes q is true.


This is right, but keep in mind that it means something for the machine.
It means q, or "q is true".




Given this approach, I am very hesitant to say that 191. Bp means that
the machine believes that it believes p.


You are hesitant because you hesitate to keep the naive stance.
But with it, things are very clear and simple. By definition (so to say),
if the machine ever believes Bp (perhaps on the 191st day), it means
the machine believes it believes p. If later we get a reason to change our mind,
we will unhesitatingly change our interpretation of B. Perhaps when the
machine said Bp she really meant that her friend <another machine>
believes p. Such a circumstance will happen later!
But until we meet such a machine, Bp is supposed to have exactly
the same meaning for us and for the machine, just as for the theorems
of CPC.



I have no problem saying that it
means that the machine believes Bp.  But to say that the machine "believes
that it believes p" uses the word "believes" in two very different and
confusing ways.  The first "believes" is just a statement about what the
machine has derived as a theorem.  We choose to say that the machine's
theorems are what it "believes".  I am OK with that.  But the second
"believes" refers to the letter B, which we are arbitrarily choosing to
identify with this word "belief".



By decision. We are interested in self-referentially correct machines. Those will be the machines which handle the B correctly with respect to what they believe.





To the machine, B is just a letter.  I still say that I need to know what
the rules are that the machine will apply to that letter.  I see that I
was wrong to think that p -> Bp was a rule the machine would have if it
were "normal".  You said that for a normal machine, if it ever proved p,
it would also prove Bp.  Okay, but how could it possibly do this without
ANY rules to deal with the letter B?



Good question. But it would be misleading to think we need to answer it now. Later we will interview a self-referentially correct universal machine and justify why that machine is normal. But recently I have decided to follow Smullyan's idea of working a little bit axiomatically before going to the arithmetical machine. B will get a purely arithmetical interpretation, and the normality of the SRC universal machine will be a theorem of arithmetic.





Normality is not something one can just assert about a machine.


Here I disagree, but of course this is due to the fact that I'm
a classical mathematician. I can just say: let us consider a normal
machine, let us suppose that machine makes a visit to the KK island, ...
and then study the consequences of its normality when, for example, the
machine meets some knights, knaves, whatever ...
Later we will interrogate a Universal Machine, and we will know (like her!)
why she is normal. But I prefer to modularize the difficulties in the
manner of Smullyan. At least for now.



You would
need to give it a rule for dealing with the letter B, then you could prove
(not the machine proving, you would prove it) that if the machine ever
derived p, its rules for dealing with the letter B would then cause it
to derive Bp.  In this way you would show that the machine was normal.



Right, at least when we choose some particular (universal) machine to deal with. But that's not so easy (it needs some amount of computer science), and the very reasoning leading to G and G* can be done easily without it. I could even say: look, any SRC universal machine is normal, see Boolos 93 for a proof. But of course your question was genuine.
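
(Here is a very rough Python sketch of the kind of thing you describe, with an invented toy rule: whenever the machine has derived x, it may also derive B(x). One can then check, from the outside, that such a machine is normal in the sense used here. This is only an illustration of the shape of the argument, not the universal machine we will interview later.)

    # A toy machine whose only rule for the letter B is: from x, derive B(x).
    def derive(initial_theorems, rounds=3):
        theorems = set(initial_theorems)
        for _ in range(rounds):                     # a few passes suffice for the demo
            theorems |= {"B(" + x + ")" for x in theorems}
        return theorems

    beliefs = derive({"q", "p->q"})
    # Every believed x is accompanied by a believed B(x): the machine is normal.
    assert "B(q)" in beliefs and "B(p->q)" in beliefs and "B(B(q))" in beliefs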




My axiom, which I should have written as, "0a. for all x, x->Bx", would
in fact be sufficient to show that a machine which had that axiom would
be normal.



I will avoid quantification on propositions (and your notation will prompt some future discussion), but it could be confusing to focus on it now. So okay: p -> Bp (as an axiom scheme, taken as true for any p) makes the machine normal: instantiating it with Bp already gives Bp -> BBp. But it makes it complete too, and that is *quite* demanding. (See my last post to Jesse for a future capital role for the formula p->Bp, though, but that's anticipation.)




A machine which had this rule for dealing with the letter
B would be normal, because any time it derived p, it could immediately
derive Bp using this axiom.

However, this machine may be too powerful.  Although it is normal, it
is much more.



Agreed.




So my question to you is, what is an example of an axiom for dealing
with the letter B that would make a machine be "just" normal, but no more?



There could exist ten thousand reasons making a machine normal but
not complete (that is, making Bp -> BBp true without making p -> Bp true).
My favorite is that a machine needs only to believe the Peano axioms
of Arithmetic. Another: to believe the axioms of any theory recursively
extending PA, or ZF, .... Another is to believe classical logic
and any programming language, etc.

But we can, like any mathematician, just define a machine to be normal
by saying that Bp -> BBp is true about the machine, that is, by saying
that a machine is normal if she believes that she believes p whenever
she believes p. You can also say that a machine is normal when she has
enough introspective power to BBp any time she Bp.
Oh, and any accurate machine (for which Bp -> p is true) is
obviously normal.

Last but not least (but here I anticipate): in case you remember
what I said about the Kripke possible-world semantics of modal logic,
normality can be defined in that semantics: a machine is normal when
it just *has* a Kripke semantics, i.e. a semantics in terms of possible worlds.
(The machine had better also know that B(p->q) -> (Bp -> Bq).)
(Note that some logicians, like Krister Segerberg, use "normality" in a
slightly more general sense.) But you can remember, as a heuristic
(it could give a good motivation): a machine is normal if she lives
in a multiverse ... G will be normal. G* will NOT be normal ...
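
(Since I mention the Kripke semantics, here is a tiny Python sketch, only a hand-picked illustration and nothing more, checking that on a transitive frame the normality formula Bp -> BBp holds at every world, whatever truth value p receives in each world. That frame condition is the semantic counterpart of the heuristic above.)

    from itertools import product

    worlds = [0, 1, 2]
    R = {(0, 1), (1, 2), (0, 2)}               # a transitive accessibility relation

    def box(holds, w):                         # Bp at w: p holds in every world accessible from w
        return all(holds(v) for (u, v) in R if u == w)

    for bits in product([True, False], repeat=len(worlds)):
        p = lambda w, bits=bits: bits[w]       # an arbitrary valuation of p
        for w in worlds:
            assert (not box(p, w)) or box(lambda v: box(p, v), w)   # Bp -> BBp at w

    print("Bp -> BBp holds at every world of this transitive frame")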


Bruno

http://iridia.ulb.ac.be/~marchal/


