On 06 Sep 2012, at 13:16, benjayk wrote:



Bruno Marchal wrote:

Put it differently, it is what the variables used in the theory represent. ExP(x) means that there is some number verifying P.
But this makes no sense if you only consider the natural numbers. They just contain "123456789, + * and =". There is no notion of "verifying" or "ExP(x)", or even of a function, in the numbers.

ExP(x) is a proposition of RA, so that is the kind of thing PA manages all the time. And RA can also handle the sentence "ExP(x)" in its language, of course, by representing it with some number, written s(s(s(...s(0)...))).
You might study Gödel 1931, or some books.
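
(To make the coding idea concrete, here is a toy illustration in Python. The symbol table and the prime-power encoding are arbitrary choices of mine, not Gödel's 1931 scheme; the point is only that any sentence of the language becomes a single natural number, which RA can then denote by the numeral s(s(...s(0)...)).)

# Toy Goedel numbering: a formula becomes the product of
# prime_i ** code(symbol_i) over its symbols.
SYMBOLS = {'E': 1, 'x': 2, 'P': 3, '(': 4, ')': 5, '0': 6, 's': 7, '=': 8}

def primes(n):
    """First n primes, by trial division (fine for short formulas)."""
    ps, k = [], 2
    while len(ps) < n:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

def goedel_number(formula):
    """Encode a formula as a single natural number."""
    result = 1
    for p, ch in zip(primes(len(formula)), formula):
        result *= p ** SYMBOLS[ch]
    return result

def numeral(n):
    """The RA numeral for n: s(s(...s(0)...))."""
    return 's(' * n + '0' + ')' * n

print(goedel_number('ExP(x)'))  # the sentence, now an ordinary number
print(numeral(4))               # s(s(s(s(0))))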





Bruno Marchal wrote:

Epistemological existence is about the memory contents of such numbers, resulting from their complex interactions with other numbers. In the mathematical part, they are handled by prefixing modalities, and have shapes like

[]Ex[]P(x), or

[]<>Ex []<>P(x)

and more complex ones. Note that those are still arithmetical sentences, as all the modalities used here admit purely arithmetical interpretations.
No, they don't. You are severely confusing level and meta-level. Even the notion of an arithmetical interpretation doesn't make sense with regard to numbers. They don't formulate anything regarding interpretations. They simply contain number relations.

At one level. But my computer already understands that it has to send this post, even though at some level it is only number crunching.



You may invoke Gödel at this point, saying that they are more than that. But Gödel only proves that we can formulate higher-level concepts using numbers. He proves nothing about numbers as a separate axiomatic system.

As I said. Please take the time to study it.


The proof only makes sense with regard to more powerful systems that use the numbers.

For G* minus G, yes.
For G, no: G is precisely what the number system can prove about itself. Still, the machine can *guess* its G*.
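
(For concreteness, a standard summary from provability logic, not specific to this thread: reading []p as "the machine proves p", G is the modal logic of what the machine can prove about its own provability, and G* is the modal logic of what is true about it. Consistency is the classic sentence in the gap:

  []([]p -> p) -> []p      (Löb's formula: a theorem of G, hence of G*)
  <>t, i.e. ~[]f           (consistency: a theorem of G*, but not of G)

So the machine cannot prove its own consistency, although its consistency is true; that truth is the kind of thing it can only guess.)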






Bruno Marchal wrote:

To me it seems that it is exactly backwards. We need the 1-p as the ontology, because it is what necessarily and primitively exists from the 1-p view.

... from the 1p views.

But when we search for a "scientific theory" we bet on some sharable reality beyond the 1p view, be it a physical universe or an arithmetical one.
If that is what science means, then science is obviously nonsense. There is nothing "beyond" the 1p view, since everything we have is the 1p view, and a 3p view is only an abstraction within it.

If I thought you were an abstraction within my 1-view, I would not reply.


Yes, science can allow us to find sharable things beyond our *local
personal* viewpoint. But in your theory 1p describes all the viewpoints, not
one particular viewpoint.

Unclear.
I don't have a theory; I only borrow the comp hypothesis. There are 8 (and more) types of viewpoints (first person, third person, first person plural, observable, sensible, etc.), and they can have infinitely many different particular contents.






Bruno Marchal wrote:

How is any of it more meaningful than any other arbitrary string of symbols?

T#gtti Hyz# uuuu8P^ii ?
Exactly, this is as meaningful as your statements, in a vacuum.
The point is simply that axioms by themselves are meaningless.

OK. But axioms are always accompanied by rules. Always.
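
(A minimal illustration, my own toy example rather than anything from this thread: two of the axioms of RA (Robinson arithmetic), together with the one rule that animates them.

  Ax1:  ~(s(x) = 0)                  (0 is not a successor)
  Ax2:  s(x) = s(y) -> x = y         (the successor function is injective)
  Rule: from A and A -> B, infer B   (modus ponens)

Without the rule, the axioms are inert strings; with it, they generate infinitely many theorems.)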




We need to
make sense of them, and this itself needs something fundamentally beyond
them.

Yes. With comp, a good notion of truth. But you would beg the question if you used this to claim that we are more than machines.




Bruno Marchal wrote:



Bruno Marchal wrote:

Strangely you agree for the 1-p viewpoint. But given that this is what you *actually* live, I don't see how it makes sense to then claim that there is a meaningful 3-p point of view where this isn't true. This "point of view" is really just an abstraction occurring in the 1-p view.

Yes.
If this is true, how does it make sense to think of the abstraction as ontologically real and the non-abstraction as mere epistemology? It seems like total nonsense to me (sorry).

Because the abstraction provides a way to make sense of how 3p numbers
get 1p views and abstract their own idea of what numbers are.

NUMBERS ====> CONSCIOUSNESS ====> PHYSICAL REALM ====> HUMAN ====> HUMAN'S CONCEPTION OF NUMBERS

Unfortunately this just doesn't work. You never show how numbers can actually have 1p views in the first place.

(Sigh.) Read the sane04 paper.



The notion is completely
meaningless. It is like saying that a word has a point of view.

All you do is reflect in the numbers what is already completely beyond the numbers. But this doesn't make sense of how 3p numbers get 1p views at all. It just shows that you can interpret pretty much anything into numbers.

I just listen to what ideally correct machines can prove about themselves, and to the logic of their possible observations, using the most standard definitions in the field. Please study the work if you want to understand it. (I don't say: believe it.)






Bruno Marchal wrote:



Bruno Marchal wrote:



Bruno Marchal wrote:

With comp, to make things simple, we are high-level programs. Their doings are 100% emulable by any computer, by definition of programs and computers.
OK, but in this discussion we can't assume COMP. I understand that you take it for granted when discussing your paper (because it only makes sense in that context), but I don't take it for granted, and I don't consider it plausible, or honestly even meaningful.

Then you have to tell me what is not Turing emulable in the
functioning of the brain.
*everything*!

Here you point to their material constitution. That begs the question.

Brains are material objects, but appealing to their material constitution
begs the question?
Just to remind you, even according to COMP brains *are* material,
non-emulable objects.

The matter in comp is defined and explained (in a way we can test), and yes, comp makes us independent of it. Your remark does not take the reasoning into account.




Given that they are material objects, why would that not matter?

Because if it matters then, by comp, it is part of the program, and that means you have to lower the level.



I'd say it
is *bound* to matter, because it is what is fundamental about them.


Bruno Marchal wrote:

Rather, show me *what is* Turing emulable in the brain.

The chemical reactions, the neuronal processing, etc. Anything described in any book on the brain.
There has never been a single chemical reaction in a computer. Just simulated chemical reactions, which don't do the same things as real chemical reactions (like transforming a certain amount of energy from one form to another), so they don't perform the same function in the real world.

Chemical reactions are abstract things; I was not referring to primitive matter.
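
(To make "chemical reactions are abstract things" concrete: what a computer emulates is the rate law, not the substance. A minimal Python sketch, with arbitrary illustrative parameters, for a first-order reaction A -> B:)

# Explicit Euler integration of d[A]/dt = -k*[A] (first-order decay).
def simulate_decay(a0, k, dt, steps):
    """Return the concentration of A after each time step."""
    a, trajectory = a0, [a0]
    for _ in range(steps):
        a -= k * a * dt          # one Euler step of the rate law
        trajectory.append(a)
    return trajectory

print(simulate_decay(a0=1.0, k=0.5, dt=0.1, steps=10))

(The program reproduces the numbers that describe the reaction; benjayk's objection is that it transforms no actual energy, which is precisely the material level Bruno says he is not referring to.)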



Emulation would mean functional equivalence. But we can't use a simulated chemical reaction in the place of a real chemical reaction, thus they are
not functionally equivalent.

This has already been answered. Reread the preceding posts.




Bruno Marchal wrote:

Even
according to COMP, nothing is, since the brain is material and
matter is not
emulable.

Right. But that matter exists only in the 1p plural view, not in the
ontology.
Your distinction between ontology and epistemology is totally arbitrary. You have never said what distinguishes the assumption that there are numbers from the fact that we are here to observe anything (regarding ontological status). If anything, the latter is more primitively real, because we can't get rid of it. We can forget about numbers (as in meditation), but we can't forget that we are experiencing (there isn't even anything about it to forget).

OK. But that's not a problem for comp. Or perhaps you have more to say.




Bruno Marchal wrote:


As I see it, the brain as such has nothing to do with emulability.
We can do
simulations, sure, but these have little to do with an actual brain,
except
that they mirror what we know about it.

It seems to me you are simply presuming that everything that's relevant in the brain is Turing emulable, despite the fact that according to your own assumption nothing about the brain really is Turing emulable.

... about the physical constitution of the brain. OK.
So how does it make sense to arbitrarily postulate that some aspect of reality doesn't matter with regard to our brains?

To say "yes" to a doctor, or to move at the speed of light in the local neighborhood, or simply to see the next soccer cup when very old, or whatever. This is a different topic, why some people will bet on comp, and others will not?







Bruno Marchal wrote:



Bruno Marchal wrote:

Also, I don't take comp for granted, I assume it. It is quite
different.

I am mute on my personal beliefs, except that they change all the time.

But you seem to believe that comp is inconsistent or meaningless, and you don't make your point.
I don't know how to make it clearer. COMP itself leads to the conclusion that our brains fundamentally can't be emulated, yet it starts with the assumption that they can be emulated.

At some level. You forget the key point.
So, at some level. I don't see how that changes anything. I just mean
digitally substituted in any way.

Yes, but as matter itself cannot be emulated, you can understand that comp is a bet on a sort of truncation of ourselves. Here too, many different forms of comp practice are possible. But the consequences I derive don't depend on the choice of level, only on its existence (which is assumed).




Bruno Marchal wrote:


We can only somehow try to rescue COMP's consistency by postulating that what the brain is doesn't matter at all, only what an emulation of it would be like.

Yes. The brain's constitution does not matter. Comp is functionalism at some level.
But why wouldn't it? It is there, why would it not matter? Why would it even be there if it didn't matter?

Indeed, step 8 explains that it is not there "ontologically", only epistemologically. I like to say that the brain is in our head (with a pun :)





Bruno Marchal wrote:

I genuinely can't see the logic behind this at all.

I think this is due to the identification of mind and brain that you are making, but with comp the brain is a mental commodity, like the body. It does not create consciousness; it only relativizes it in complex contexts.
I actually agree (as I have said many times). I don't identify brain and mind *at all*.
The brain is "just" an object within mind, but that doesn't mean that it somehow doesn't matter to what we experience.

So we do agree.




Bruno Marchal wrote:




Bruno Marchal wrote:


In which way does one thing substitute for another thing if the correct interpretation of the substitution actually requires the original? It is like saying "No, you don't need the calculator to calculate 24.3^12. You can substitute pen and paper for it: write down 24.3^12 = X, and then insert the result of the calculation (done with your calculator) as X."
If COMP implies that interpreting a digital Einstein needs a real Einstein (or more), then it contradicts itself (because in that case we can't *always* say YES to the doctor, since then there would be no original left to interpret the emulation).
Really it is quite a simple point. If you substitute the whole universe with an emulation (which is possible according to COMP)

It is not.
You are right, it is not, if we take the conclusions of your
reasoning into
account. Yet COMP itself strongly seems to suggest it. That's the
contradiction.

? Comp is "there exists a level such that I survive an emulation of it". It then makes the whole of observable reality, including consciousness, not Turing emulable. It might seem weird, but I don't see a contradiction yet.
If observable reality as a whole is not emulable, there can't be a level at which there is a correct emulation, because we can't even instantiate an abstract digital emulation in reality (because observable reality is not digital).



Contradiction: "... abstract DIGITAL emulation into reality (because observable reality is not DIGITAL)."
We can emulate digital features in a non-digital reality. And we can explain the appearances of a non-digital reality in the mind of a digital entity.
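
(A small sketch of "emulating digital features in a non-digital reality", assuming nothing beyond standard Python: a bit carried by a continuous, noisy voltage is restored to a clean 0 or 1 at every stage, so the computation stays exactly digital although its carrier never is. The noise level and threshold are arbitrary illustrative choices.)

import random

def analog_stage(bit, noise=0.1):
    """Carry a bit as a continuous voltage plus Gaussian noise."""
    return float(bit) + random.gauss(0.0, noise)

def restore(voltage):
    """Threshold the continuous value back to a discrete bit."""
    return 1 if voltage > 0.5 else 0

bit = 1
for _ in range(1000):                # a long chain of analog stages
    bit = restore(analog_stage(bit))
print(bit)  # still 1: a flip needs noise beyond 5 sigma, which is negligible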


Bruno Marchal wrote:



Bruno Marchal wrote:

If there were something outside the universe to interpret the simulation, then this would be the level at which we can't be substituted (and if this were substituted, then the level used to interpret this substitution couldn't be substituted, etc.).
In any case, there is always a non-computational level at which no digital substitution is possible, and we would be wrong to say YES with regard to that part of us, unless we consider that level "not-me" (and this doesn't make any sense to me).


Indeed we are not our material body. We are the result of the
activity
of the program supported by that body. That's comp.

I don't have a clue why you believe this is senseless or
inconsistent.
For one thing, with COMP we postulate that we can substitute a brain
with a
digital emulation ("yes doctor"),

At some level.



yet the brain

The material brain.


and no possible substitution can be purely digital, according to your reasoning (since matter is not digital).

A change of matter is not important if it preserves the right functionality at some level.
How does that relate to the issue? We have no way of making statements about the computational functionality of matter (and thus the right level) if
matter is non-digital.
It is ill-defined.

No. The doctor can choose the level, and there is no need to understand the details of the processing below the level, by definition of the substitution level.
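
(A toy illustration of "level", my own example: a two-instruction machine emulated in Python. The emulation is exact at the instruction level, and nothing about it requires knowing what sits below that level, either in the emulated machine or in Python itself.)

def run(program, acc=0):
    """Emulate a toy machine; program is a list of ('ADD', n) or ('NEG',)."""
    for instr in program:
        if instr[0] == 'ADD':
            acc += instr[1]
        elif instr[0] == 'NEG':
            acc = -acc
    return acc

print(run([('ADD', 3), ('NEG',), ('ADD', 10)]))  # (0+3) negated, plus 10 -> 7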





You even say yourself that the correct substitution level is unknowable.

Yes. Unknowable for sure, as it is not the exact Theaetetus notion used elsewhere.



But
not only that, it can't exist, because the notion of digital substitution is
meaningless in a non-digital universe.

I see no reason for that.



Sure, we can have *relatively* digital substitutions (like a physical computer). But you can't derive anything from that, because your reasoning assumes that the substitution is digital (in a very strict sense, allowing precise copying, etc.).

Only at the level where I bet a computer can emulate me.





Bruno Marchal wrote:


Of course we could engage in stretching the meaning of words and argue that COMP says "functionally correct substitution", meaning that it also has to be correctly materially implemented. But in this case we can't derive anything from it, because a "correct implementation" may actually require a biological brain or even something more.

The consequences will go through as long as a level of substitution exists.
But there can't be one, unless your assumption is taken as a vague statement meaning "a kind of digital substitution".

? If I have a MAC in my head, I am 100% digital. If I survive in a virtual environment with it, I am 100% digital.


In this case the brain substitution might not be digital at all, except in a very weak sense, by using anything that is, practically speaking, digital (we can already do that), so your reasoning doesn't work.

You lost me here.

Bruno



http://iridia.ulb.ac.be/~marchal/


