Bruno Marchal wrote:
> 
>>> Right. It makes only first person sense to PA. But then RA has
>>> succeeded in making PA alive, and PA could a posteriori realize that
>>> the RA level was enough.
>> Sorry, but it can't. It can't even abstract itself out to see that
>> the RA level "would be" enough.
> 
> Why?
No system can reason as if it did not exist, because to be coherent it would
then have to cease to reason.
If PA realizes that RA is enough, then this can only mean that RA + its own
realization about RA is enough.


Bruno Marchal wrote:
> 
>> I see you doing this all the time; you take some low level that can
>> be made sense of by something transcendent of it and then claim that
>> the low level is enough.
> 
> For the ontology. Yes.
I honestly never understood what you mean by ontology and epistemology. To me
it seems exactly backwards: we need the 1-p as the ontology, because it is
what necessarily primitively exists from the 1-p view. Arithmetic is one
possible epistemology.

I don't even get what it could mean that numbers are ontologically real, since
we know them only as abstractions (so they are epistemology). If we try to
talk as if numbers are fundamentally real - independent of things - we can't
even make sense of numbers.
What is the abstract difference between 1 and 2, for example? What is the
difference between 0s and 0ss? What is the difference between the true
statement that 1+1=2 and the false statement that 1+2=2? How is any of it
more meaningful than any other arbitrary string of symbols?

We can only make sense of them once we see that they refer to numbers *of
objects* (like, for example, the string "s").
If we don't do that, we might as well embrace axioms like 1=2 or 1+1+1=1 or
1+9=2343-23 or 1+3=*?ABC or whatever else.
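To make the point concrete, here is a toy sketch (the encoding and function names are my own illustration, not anyone's official formalism) of successor arithmetic as pure string rewriting, where verifying "1+1=2" is nothing but symbol shuffling:

```python
# Successor-style numerals as strings: 0, s0, ss0, ... (so 2 is "ss0").
def numeral(n: int) -> str:
    """Encode a natural number as a successor string."""
    return "s" * n + "0"

def add(m: str, n: str) -> str:
    """Addition as pure rewriting: drop m's trailing '0' and
    prepend its 's' symbols to n. No reference to objects anywhere."""
    return m[:-1] + n

# The system happily confirms "1+1=2" without knowing what 2 *is*:
assert add(numeral(1), numeral(1)) == numeral(2)   # "s0" + "s0" -> "ss0"
# ...and just as mechanically refutes "1+2=2":
assert add(numeral(1), numeral(2)) != numeral(2)
```

The machinery never "knows" what a number is; it only moves marks around, which is exactly why its meaning seems to depend on our reading the marks as counting something.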


Bruno Marchal wrote:
> 
>> Strangely you agree
>> for the 1-p viewpoint. But given that's what you *actually* live, I
>> don't see how it makes sense to then proceed as if there is a
>> meaningful 3-p point of view where this isn't true. This "point of
>> view" is really just an abstraction occurring in the 1-p view.
> 
> Yes.
If this is true, how does it make sense to think of the abstraction as
ontologically real and the non-abstraction as mere epistemology? It seems
like total nonsense to me (sorry).


Bruno Marchal wrote:
> 
>>
>>
>> Bruno Marchal wrote:
>>>
>>> With comp, to make things simple, we are high level programs. Their
>>> doing is 100% emulable by any computer, by definition of programs
>>> and computers.
>> OK, but in this discussion we can't assume COMP. I understand that
>> you take it for granted when discussing your paper (because it only
>> makes sense in that context), but I don't take it for granted, and I
>> don't consider it plausible, or honestly even meaningful.
> 
> Then you have to tell me what is not Turing emulable in the  
> functioning of the brain.
*everything*! Rather, show me what *is* Turing emulable in the brain. Even
according to COMP, nothing is, since the brain is material and matter is not
emulable.

As I see it, the brain as such has nothing to do with emulability. We can do
simulations, sure, but these have little to do with an actual brain, except
that they mirror what we know about it.

It seems to me you are simply presuming that everything relevant in the
brain is Turing emulable, despite the fact that according to your own
assumption nothing about the brain really is Turing emulable.


Bruno Marchal wrote:
> 
> Also, I don't take comp for granted, I assume it. It is quite different.
> 
> I am mute on my personal beliefs, except they change all the time.
> 
> But you seem to believe that comp is inconsistent or meaningless, but
> you don't make your point.
I don't know how to make it clearer. COMP itself leads to the conclusion
that our brains fundamentally can't be emulated, yet it starts from the
assumption that they can be emulated.

We can only somehow rescue COMP's consistency by postulating that what the
brain is doesn't matter at all, only what an emulation of it would be
like.
I genuinely can't see the logic behind this at all.



Bruno Marchal wrote:
> 
>>
>> In which way does one thing substitute another thing if actually the
>> correct interpretation of the substitution requires the original? It
>> is like saying "No, you don't need the calculator to calculate
>> 24,3^12. You can substitute it with pen and paper, where you write
>> down 24,3^12=X and then insert the result of the calculation (using
>> your calculator) as X."
>> If COMP does imply that interpreting a digital Einstein needs a real
>> Einstein (or more), then it contradicts itself (because in this case
>> we can't *always* say YES doctor, because then there would be no
>> original left to interpret the emulation).
>> Really it is quite a simple point. If you substitute the whole
>> universe with an emulation (which is possible according to COMP)
> 
> It is not.
You are right, it is not, if we take the conclusions of your reasoning into
account. Yet COMP itself strongly seems to suggest that it is. That's the
contradiction.


Bruno Marchal wrote:
> 
>> If there was something outside the universe
>> to interpret the simulation, then this would be the level on which we
>> can't be substituted (and if this were substituted, then the level
>> used to interpret this substitution couldn't be substituted, etc.).
>> In any case, there is always a non-computational level at which no
>> digital substitution is possible - and we would be wrong to say YES
>> with regard to that part of us, unless we consider that level
>> "not-me" (and this doesn't make any sense to me).
> 
> 
> Indeed we are not our material body. We are the result of the activity  
> of the program supported by that body. That's comp.
> 
> I don't have a clue why you believe this is senseless or inconsistent.
For one thing, with COMP we postulate that we can substitute a brain with a
digital emulation ("yes doctor"), yet according to your own reasoning the
brain and every possible substitution can't be purely digital (since matter
is not digital).
So any substitution could only be semi-digital or non-digital, but then your
whole reasoning falls apart (the steps assume you are solely digital).

COMP is simply contradictory, unless we take your result for granted (we are
already only arithmetical, so no substitution really takes place - "yes
doctor" is just a metaphor for "I am digital"), but then it is tautological,
and your reasoning is merely an explanation of what it means if we are
digital.

Of course we could stretch the meaning of words and argue that COMP says
"functionally correct substitution", meaning that it also has to be correctly
materially implemented. But in that case we can't derive anything from it,
because a "correct implementation" may actually require a biological brain
or even something more.

Actually, I don't think you have any problem understanding this on an
intellectual level. More probably you just don't want to lose your "proof",
because it seems to be very important to you (you have defended it in
thousands of posts). But honestly, this is just ego and has nothing to do
with a genuine search for truth.

benjayk

-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34389259.html
Sent from the Everything List mailing list archive at Nabble.com.

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.