> On 24 Jul 2019, at 01:02, Bruce Kellett <bhkellet...@gmail.com> wrote:
> 
> On Wed, Jul 24, 2019 at 1:06 AM Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 23 Jul 2019, at 06:45, Bruce Kellett <bhkellet...@gmail.com> wrote:
>> On Tue, Jul 23, 2019 at 2:30 PM Stathis Papaioannou <stath...@gmail.com> wrote:
>> 
>> The inputs serve to put the brain in a particular state, but the brain could 
>> go into the same state without the inputs. This can be a practical problem 
>> in patients with schizophrenia: they may hear voices and are convinced that 
>> the voices are real, to the point where they might assault someone because 
>> of what they believe he said. 
>> 
>> And I believe that if a particular small area of the brain is stimulated, 
>> the subject experiences the colour red. Similarly, if the colour red is 
>> shown, that same area of the brain shows activity. So qualia are nothing 
>> but particular brain activity. There is no additional "magic sauce" in 
>> consciousness.
>> 
>> These same areas of the brain could be excited at random, as in your 
>> schizophrenic example. All that goes to show is that consciousness is 
>> nothing more than brain activity. Absent brain activity, there is no 
>> consciousness.
> 
> But absence of consciousness does not entail absence of brain activity.
> 
> It is not claimed that consciousness and brain activity are coextensive. So 
> you can have brain activity without consciousness (as in a vegetative state), 
> but there is no consciousness without brain activity.


There is no human consciousness without brain activity. But with mechanism 
things are like this:

NUMBER => CONSCIOUSNESS => PHYSICAL REALITY => BRAINS => HUMAN CONSCIOUSNESS


We humans need brains, but that does not make brains exist in a primitive 
sense. They are appearances in the mind of the universal numbers, which does not 
require “a physical reality”, only a Turing-universal machinery, such as 
elementary arithmetic already provides.

(I assume digital mechanism all along).
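
A side illustration, not a claim made in the posts above: how spare a 
"Turing-universal machinery" can be is easy to make concrete. A Minsky counter 
machine, built from nothing but natural-number counters with increment and 
decrement-or-jump, is already Turing universal. The Python sketch below is only 
an illustrative interpreter for such a machine; the addition program and all 
names are made up for the example.

# Illustrative sketch only: an interpreter for a Minsky counter machine,
# i.e. a Turing-universal system built from bare natural-number arithmetic.
def run_counter_machine(program, counters):
    """Run a counter-machine program until it halts.

    Instructions:
      ("inc", r, j)     add 1 to counter r, then go to instruction j
      ("dec", r, j, k)  if counter r > 0, subtract 1 and go to j, else go to k
      ("halt",)         stop
    """
    pc = 0
    while program[pc][0] != "halt":
        op = program[pc]
        if op[0] == "inc":
            _, r, j = op
            counters[r] += 1
            pc = j
        else:  # "dec"
            _, r, j, k = op
            if counters[r] > 0:
                counters[r] -= 1
                pc = j
            else:
                pc = k
    return counters

# Hypothetical example program: move counter 1 into counter 0 (addition).
addition = [
    ("dec", 1, 1, 2),   # 0: take one from c1 if possible, else halt
    ("inc", 0, 0),      # 1: give it to c0, then loop
    ("halt",),          # 2: done
]
print(run_counter_machine(addition, [3, 4]))  # -> [7, 0]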



> 
> 
> With mechanism, personal identity can be defined by personal memory, and a 
> person cannot be identified with his or her brain or body, because that 
> person can in principle make a backup of herself and reload herself with a 
> different body and brain. Changing our bodies illustrates that the body is 
> more like a means of transport and a way to interact with pals.
> 
> The trouble with this, as has already been pointed out, is that our bodies 
> have a significant role to play in our personal identities. If you replace 
> the brain/consciousness with a computer, without some sort of body capable of 
> independent locomotion and manipulation of the environment, then even in the 
> presence of input from the environment, my prediction is that the person 
> would go mad within a few hours or days. In other words, replacing a physical 
> brain/body with a computer will generally destroy the person.

That is arguing against mechanism, but I prefer not to do that type of 
philosophy. 

I just don’t know if mechanism is true, so I collect its concrete consequences 
and compare them with the empirical observations, and it works well, notably 
where physicalists feel the need to dismiss consciousness and persons, when not 
eliminating them completely.



>  
> Note that personal identity is not a transitive notion. Step 3 actually 
> illustrates this well. Recall that the person is cut and copied from Helsinki 
> (H) into both Washington (W) and Moscow (M). With the definition of personal 
> identity above, both the HW and the HM guy are, from that personal-identity 
> view, the same person as the H person.
> 
> With a more sensible notion of personal identity, the copies are different 
> persons, and different persons from the original.


But that would entail that you die in step 1, which would again just be your 
opinion that mechanism is false.

What in the brain is NOT Turing emulable?

Without any evidence for non-mechanism, this seems like speculation made just to 
prevent the continuation of research and testing. 




>  
> But from the indexical first-person “lived” view, the HW guy knows that he 
> is not the same (first) person as the HM person, and vice versa, but both can 
> agree (and could have decided beforehand) that they are legally the same 
> person, and a “rightful descendant” of the H person.
> 
> No, they have likely gone mad by this time.

In your theory. But you cannot use your theory to invalidate a reasoning done 
in another theory.
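
To spell out the non-transitivity point with a toy model (my own illustration; 
the memory-inclusion criterion below is an assumption made for the example, not 
a formal definition from the posts): treat a person-stage as the set of memories 
it carries, and count two stages as "the same person" when one's memories extend 
the other's.

# Toy sketch of the non-transitive "same person" relation (illustration only).
def same_person(a: set, b: set) -> bool:
    """Memory-continuity criterion: one stage's memories extend the other's."""
    return a <= b or b <= a

H  = {"life in Helsinki"}
HW = H | {"waking up in Washington"}
HM = H | {"waking up in Moscow"}

print(same_person(H, HW))   # True:  the HW guy remembers being the H guy
print(same_person(H, HM))   # True:  the HM guy remembers being the H guy
print(same_person(HW, HM))  # False: neither remembers being the other

Under that criterion both copies count as the H person, yet not as each other, 
which is all that "not transitive" means here.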




> Thought experiments are all very well, but they have to conform to the basic 
> laws of existence -- physics, consciousness, and neuropsychology in this case.
>  
> If the duplication is iterated, all histories are realised (in this particular 
> protocol), and it is a simple exercise to show that the majority of the first 
> persons obtained have no possible algorithm for either their pasts or their 
> futures. The non-random histories become rare (meagre) when the number of 
> iterations gets high (in the limit).
> 
> That just means that there is no meaning that can be assigned to the 
> probability of any particular outcome -- in your scenario, all outcomes are 
> realized in practice. 

In the 3p description, yes. The point is that this is not the case for the 
majority of the first persons obtained, which is the target of the reasoning 
in step 3.
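
To make the counting behind that claim concrete (a sketch of my own, using the 
standard incompressibility argument rather than anything stated in the posts): 
after n duplications there are 2^n possible W/M histories, but fewer than 
2^(n-c) of them can have any description shorter than n-c bits, so the 
algorithmically compressible, non-random histories form a fraction below 2^-c.

# Counting sketch (illustration only): bound on the fraction of n-step W/M
# histories that admit a description shorter than n - c bits.
def short_description_fraction_bound(n: int, c: int) -> float:
    """Upper bound on the fraction of the 2**n histories that have some
    binary description (program) shorter than n - c bits."""
    # There are exactly 2**(n - c) - 1 binary strings of length below n - c,
    # so at most that many distinct histories can have such a description.
    return (2 ** (n - c) - 1) / 2 ** n

n = 100  # duplication steps: one W-or-M bit per step
for c in (1, 10, 20, 50):
    print(f"compressible by more than {c:2d} bits: fraction < "
          f"{short_description_fraction_bound(n, c):.3e}")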




> 
> The first-person I is 3p-relative, but 1p-absolute (the HM person knows that 
> he is not the HW person, even if he knows that this is absolutely 
> non-communicable, like “I am conscious”).
> 
> The fact that you are conscious is definitely communicable.

I meant rationally communicable. An entity cannot prove to another entity that 
she is conscious. 



> How else do you know that other people are conscious? And by "know" I do not 
> mean "prove", I mean "beyond all reasonable doubt", which is the standard of 
> "knowing" that we apply in everyday life, and hence, the only useful meaning 
> of the term in this context.

When doing metaphysics with the scientific method, we need a larger and less 
vague context, and we need to doubt all ontological commitments.

I certainly believe that you are conscious. But I cannot know that for sure; I 
can only know it in the weak Theaetetus sense, in case God (Truth) shares that 
belief.





> 
> The outsider knows (modulo the assumptions, of course) that both copies will 
> feel as if they were the original, and rightly so.
> 
> Bruce, I know that you are not a fan of mechanism, but are you OK with all 
> this when we assume Mechanism, if only for the sake of the argument?
> 
> If we assume mechanism in the form you describe it, then we leave contact 
> with the real world as we experience it.

OK. (That’s the main point).



> So in your imaginary world, I imagine that anything you like to dream up 
> goes, so maybe these scenarios will be realized there - who knows?


The “imaginary” world is arithmetic here. It kicks back a lot, and defeats all 
effective theories. 

It is full of open problems. The notion of an open problem does not make sense 
in fiction and imaginary tales.




> 
> 
> One advantage of Digital Mechanism (aka computationalism) is that it allows 
> simple thought experiments which quickly show what we are getting at (a 
> many-histories interpretation of elementary arithmetic), but we also get the 
> possibility of using the mathematical theory of computation and computability, 
> so we can refine the argument, and already derive a bit of the mathematical 
> physics that the universal machine deduces by itself from the computationalist 
> hypothesis.
> 
> Things should be as simple as possible, but no simpler. Your thought 
> experiments violate this basic law of science.

Where?



>  
> That is not evidence for the truth of mechanism, of course, but I like to 
> search for the key under the streetlamp of mathematics, where ideas/keys can 
> be found and tested.
> 
> Simplistic models are often used in this way in science -- as a test bed for 
> ideas. It is only when you confuse the simplistic models with reality that 
> you run into trouble.


When you say yes to the doctor, the “simplistic model” is the artificial brain. 
Some people have already had minute parts of their brains replaced by digital 
devices, and the mechanist hypothesis has been with us for a long time, 
definitely since Descartes. It is used by Darwin, and is the basis of molecular 
biology.

Are you saying that John Clark is “simplistic”? After all, he has already said 
yes to a doctor.

I don’t see any arguments here other than a dismissive stance toward the premise.

Bruno




> 
> Bruce 
> 

