Hi Bruno,

>> I follow your reasoning, from one of your recent articles. This leaves
>> me dissatisfied, but if I try to verbalize this dissatisfaction I feel
>> stuck in a loop. Perhaps this illustrates your point.
>
>
> We might need to take a detour about what it would mean to explain 
> consciousness, or matter.
> I might ask whether you are not asking too much, perhaps. Eventually, 
> something has to remain unexplainable for reasons of self-consistency. I 
> suspect it will be just where our intuition of numbers or combinators, or of 
> the distinction finite/infinite, comes from (assuming mechanism), or just why 
> we trust the doctor!

I thought about it for some time. It seems that at a meta level, we
are always stuck in this situation of "give me one miracle for free
and everything else becomes explainable". The miracle can be matter,
or consciousness, or arithmetic. I believe I have to accept this state
of affairs for the reason of self-consistency that you express above,
but I'm human and I still feel the curiosity. Epistemic limits are
hard to accept.

Could it even be that it doesn't make sense to say that materialism is
true or false, or that idealism is true or false and so on? I mean in
the same sense that the sun is not really the center of the solar
system (the center is just a human mental model), but assuming it is
makes the orbits simpler to describe. Perhaps assuming materialism
makes it easier to describe certain aspects of nature, while assuming
comp makes it easier to describe others, but in the end we always have
to sacrifice something. Model realism at the meta level...

>>> It goes from the rough dissociated universal consciousness of Q to the 
>>> elaborate self-consciousness of PA or ZF, or us.
>>>
>>>
>>>
>>>
>>>
>>>> Darwinism does not seem to require it.
>>>
>>> It does. When the machine opts for <>p when in doubt between p and <>p, if 
>>> it lets it go, in some sense it transforms itself into a speedier and more 
>>> efficacious machine, with respect to its most probable history.
>>> So, consciousness brings a self-speedable ability, which is quite handy for 
>>> self-moving beings living between prey and predators.
>>
>> I'm not convinced. Consider a simple computer simulation where agents
>> are controlled by evolving rules. Agents can eat blue or red pills.
>> 90% of the time blue pills give them energy and red pills cause
>> damage. 10% of the time the opposite happens. It is not possible to
>> know before eating a pill. Let's say the rule system evolves to make
>> the agents always eat blue pills and never red pills. Most of the time
>> this helps the agents, precisely because it assumes the most probable
>> histories. This is a simplified version of the sort of "decisions"
>> that evolution makes, and I would say that it is reasonable to assume
>> that our own evolutionary story consists of the accumulation of a
>> great number of such decisions. I still don't see how consciousness
>> makes a difference in such a mechanism.
>
> The reason why consciousness makes the difference is not related to the 
> environment, but is intrinsic to the machine itself.
>
> I am aware that I am being quick on this, but the reason is a bit mathematically 
> involved, and again, depends crucially on a discovery made by Gödel, presented 
> in his paper “On the length of proofs”.
>
> Gödel discovered that if you have some essentially undecidable theory, like 
> RA, PA, or ZF, there are always undecidable sentences, like <>RA in RA, or 
> <>ZF in ZF, etc. Then, if you add such an undecidable sentence (in the theory 
> T, say) to T, you get a theory which not only proves infinitely more sentences 
> than T, but in which infinitely many theorems have proofs arbitrarily shorter 
> than their proofs in T, making T + the undecidable sentence “somehow” much 
> faster than T.
>
> Even if the added sentence is false, we get that speed-up (even for 
> interesting sentences, as Eric Vandenbussche convinced me: he thought that 
> this was false, but eventually he proved the statement true).
>
> Blum got a similar result in computer science, and eventually Blum & 
> Marques characterised the speedable machines/sets (using the W_i instead of 
> the phi_i), and obtained the class of subcreative sets, which generalises 
> the creative sets (which correspond to the universal machines).

I am very interested in this but cannot find the reference... Can you give it?
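
For what it's worth, my rough reading of the speed-up phenomenon you describe
(my own paraphrase of the modern form, with Con(T) playing the role of the
added undecidable sentence; please correct me if this is not the statement you
mean) is:

    for every total computable f there are infinitely many sentences s,
    provable in both T and T + Con(T), such that

        (length of the shortest proof of s in T)
            >  f( length of the shortest proof of s in T + Con(T) )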

> This means that if you take a slow universal machine, like the Babbage 
> machine, and a very efficacious machine, like a super-quantum computer, then 
> you can make the Babbage machine more rapid than the quantum computer on 
> *almost* all inputs (= all except a finite number of exceptions), and even 
> arbitrarily more rapid. Of course the “almost” seriously limits the 
> applicability of that theorem, but in arithmetic, and for the FPI, it can 
> play a rôle.

Very interesting, and I think related to my AGI obsessions. I have
thought for a long time that AI is not intrinsically hard; what
makes it seem hard is that the problem itself is ill-defined, and
rests on an assumption about the generality of human intelligence that
does not really hold.

> In particular, take a machine which observes itself, and has some 
> inductive-inference ability. By Gödel, or G, the machine can prove that if 
> she is consistent, then her consistency is not provable. The machine can also 
> see that she never succeeds in proving her consistency, and eventually link 
> this with the fact that her consistency (<>t) is not provable. Then, the 
> machine can guess that she is consistent, by that abductive 
> inductive-inference ability, and she can transform herself into a new machine 
> with “<>t” added as a new axiom. That machine will be (theoretically) more 
> efficacious (with some practical drawbacks). She can easily prove that her 
> “ancestor” is consistent (in one line: “see the new axiom!”), and can prove 
> infinitely more theorems, and can prove old theorems with shorter proofs. And 
> she can continue on the (constructive, and then non-constructive) transfinite.

This <>t axiom (~[]f) is part of your observable / sensible hypostases, correct?
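
If I parse the construction correctly, the picture is the familiar
consistency progression (my own rough sketch, reading <>t arithmetically as
Con(T); tell me if this is not what you intend):

    T_0       = PA (or any essentially undecidable theory)
    T_(n+1)   = T_n + Con(T_n)
    T_lambda  = the union of the T_alpha, for alpha < lambda, at limit ordinals

continued along the constructive (and then non-constructive) transfinite,
with each step proving the consistency of its predecessor in one line and
shortening infinitely many proofs.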

> This does not mean that a conscious machine is necessarily more efficacious 
> on all tasks, due notably to that finite number of exceptions, but it can be 
> used to argue that, in the long run, it makes the machine more efficacious.

I have no problem with that. I think that life is always a local
adaptation, and biological fitness is always relative to some
environment (that includes other types of evolving life). It's an
endless game with no predefined direction.

> Your example above is a sort of particular counter-example, but it takes into 
> account a changing social environment. Here I suppose the environment fixed. 
> But if the environment changes, it will be even more beneficial to compute 
> more rapidly, if only to find out more quickly that she is wrong in her 
> theory about her environment.

Not sure I agree. I would say that my environment can be fixed, but
there is a latent variable: blue pills are usually good, food is
necessary, and it is not possible to inspect the latent variable, so
the belief "blue pills are always good" is both false and a good
adaptation.
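
To make this concrete, here is a toy sketch of the simulation I had in mind
(my own throwaway code, nothing more): the environment's rule is fixed, the
latent variable is hidden, and the evolved "always eat blue" belief is false
in general yet beats an agent that ignores the most probable history.

import random

def pill_outcome(colour):
    # Hidden latent variable: 90% of the time blue pills give energy and
    # red pills cause damage; 10% of the time the opposite happens.
    usual_case = random.random() < 0.9
    if colour == "blue":
        return +1 if usual_case else -1
    return -1 if usual_case else +1

def total_energy(choose_pill, steps=10000):
    # Energy accumulated by an agent that picks its pills with choose_pill().
    return sum(pill_outcome(choose_pill()) for _ in range(steps))

always_blue = lambda: "blue"                        # the evolved, false-but-adaptive rule
coin_flip = lambda: random.choice(["blue", "red"])  # ignores the most probable history

print("always blue:", total_energy(always_blue))    # ~ +8000 on average
print("coin flip:  ", total_energy(coin_flip))      # ~ 0 on average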

Telmo.

>
>
>
>
>>
>>
>>>> - What is the relationship between consciousness and matter?
>>>
>>> The first is true, the second is consistent.
>>
>> Ok. It's hard to disagree.
>
> That is one of the reasons why the logic of consciousness/soul/first person 
> will be given by []p & p, and the logic of matter will be given by []p & <>p.
> Another reason is provided by the Kripke semantics, where <>t entails that we 
> are not in a cul-de-sac world, in which probabilities by default do not make 
> sense.
> There are other reasons, but they are more technical, so I keep them for 
> later discussions.
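
Just to check that I read the Kripke point correctly (my paraphrase, so
correct me if I am off): a world w satisfies <>t exactly when it has at least
one accessible world, so taking <>t as an axiom rules out cul-de-sac
(dead-end) worlds; and in a dead-end world []p holds vacuously for every p,
so reading []p as "probability one" makes no sense there.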
>
>
>
>>
>>> (And I hope that the first is first person and the second is first person 
>>> plural, which is exactly what Everett or QM confirms, but which is still 
>>> unclear in arithmetic.)
>>>
>>>
>>>
>>>
>>>> - Is there a reality that is external to conscious perception?
>>>
>>>
>>> The arithmetical reality, from which conscious perception builds up the 
>>> histories. Some have long and deep reasons above the substitution level; 
>>> by the delay invariance in the first person perspective, below our 
>>> substitution level we have only a statistics on many histories, obeying 
>>> some quantum (like) logic. The apparent primary physical reality is really 
>>> a sum over all “fictions”.
>>>
>>> As long as nature continues to verify this, I think that explains a lot. Note 
>>> that the soul ([]p & p) is not a machine, in its own perspective. It is one 
>>> only in God's eyes, but even that is an open question for the completed 
>>> quantified theory of the soul, where evidence exists that even God is limited 
>>> in that respect, which might explain why even God cannot predict to you where 
>>> you will feel to be after a duplication.
>>
>> My intuitive understanding of FPI is that both branches occur, they
>> are both equally real and both are experienced in the first person,
>
> OK.
>
>
>
>> but from within one branch one cannot perceive the other, so the
>> indeterminacy is, in a sense, an illusion created by the limitations
>> of our own awareness -- the same limitations, of course, that make the
>> human experience possible.
>
> Exactly. To be sure, the word “illusion” is perhaps too strong, as in M, and 
> in W, you do “really” feel yourself to be in one city, when we assume 
> computationalism. But I rather agree. From God's eyes the personal identity 
> is an illusion, but then so is everything observable, which is coherent with 
> the fact that in God's eyes, only the numbers, addition and multiplication are 
> not an illusion. Science becomes a study of the laws of universal machine 
> illusions, though as you know I prefer to call them dreams. (Computations as 
> seen by a Löbian machine supported by that computation. The physical becomes 
> the invariant in the statistics on all computations, seen from the first 
> person point of view.)
>
> Don’t hesitate to tell me you are still unsatisfied, but maybe you could try 
> to formulate what is missing. As you know: the theory will explain 99.9% of 
> consciousness and will explain why something (“0.1%”) must necessarily feel 
> unexplainable (to avoid inconsistency).
>
> Best,
>
> Bruno
>
>
>
>>
>> Cheers,
>> Telmo.
>>
>>> Please, demolish me now. What do I miss? (Of course, I will be unable to 
>>> explain where the numbers come from, but this, up to recursive 
>>> equivalence, the universal machine (Löbian, like PA) can already explain to 
>>> be unexplainable.)
>>>
>>> Bruno
>>>
>>>
>>>
>>>
>>>
>>>>
>>>>> My view is that, scientifically speaking, we never know anything 
>>>>> "fundamental", and the search for it is like the hunting of the snark. 
>>>>> We seek theories with more scope and more accuracy, but being "more 
>>>>> fundamental" doesn't entail that something is most fundamental. Mystics 
>>>>> like Bruno postulate something and then build structures on it which, by 
>>>>> some (often small) agreement with experience, PROVE their postulates. 
>>>>> But as Feynman used to point out, this is Greek mathematics. Science is 
>>>>> like Persian mathematics, in which the mathematician seeks to identify 
>>>>> all the possible axiom sets that entail the observations.
>>>>
>>>> I tend to agree that scientifically we never know anything
>>>> fundamental. I do believe that it is possible to use reason to acquire
>>>> knowledge by means that are not the scientific method. I am certain
>>>> that I possess knowledge that was not acquired by scientific means,
>>>> for example I know how it feels to be me. Even if my metaphysical
>>>> obsessions are a fool's errand, I do think it is valuable to know
>>>> where the boundaries of scientific knowledge are, and be humble enough
>>>> to recognize them.
>>>>
>>>> I feel that a lot of resistance to this stuff comes from a fear that
>>>> one is trying to slide religion or the supernatural through the back
>>>> door, so to speak. I trust that you believe that I am not trying to
>>>> sell anything like that. I only proclaim my ignorance, and the
>>>> ignorance of everyone else.
>>>>
>>>> Telmo.
>>>>
>>>>> Brent
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>> Telmo.
>>>>>>
>>>>>>> We work with reasonable hypotheses that are not contradicted by the 
>>>>>>> evidence and have predictive power.  So the anesthesiologist will be 
>>>>>>> able to predict that you will be inert and unresponsive during the 
>>>>>>> operation and you will not remember any of it and will not even feel 
>>>>>>> that time has passed.  He will also be able to predict that this can 
>>>>>>> also be achieved by a strong blow to the head... but not to the foot.
>>>>>>>
>>>>>>> Brent
>>>>>
>>>>>