On Wed, Jul 24, 2019 at 1:06 AM Bruno Marchal <marc...@ulb.ac.be> wrote:

> On 23 Jul 2019, at 06:45, Bruce Kellett <bhkellet...@gmail.com> wrote:
>
> On Tue, Jul 23, 2019 at 2:30 PM Stathis Papaioannou <stath...@gmail.com>
> wrote:
>
>>
>> The inputs serve to put the brain in a particular state, but the brain
>> could go into the same state without the inputs. This can be a practical
>> problem in patients with schizophrenia: they may hear voices and are
>> convinced that the voices are real, to the point where they might assault
>> someone because of what they believe he said.
>>
>
> And I believe that if a particular small area of the brain is stimulated,
> the subject experiences the colour red. Similarly, if the colour red is
> shown, that same area of the brain shows activity. So qualia are nothing
> but particular brain activity. There is no additional "magic sauce" in
> consciousness.
>
> These same areas of the brain could be excited at random, as in your
> schizophrenic example. All that goes to show is that consciousness is
> nothing more than brain activity. Absent brain activity, there is no
> consciousness.
>
>
> But absence of consciousness does not entail absence of brain activity.
>

It is not claimed that consciousness and brain activity are coextensive. So
you can have brain activity without consciousness (as in a vegetative
state), but there is no consciousness without brain activity.


> With mechanism, the personal identity can be defined by the personal
> memory, and a person cannot be identified with its brain or its body,
> because that person can in principle do a backup of itself, and reload
> herself with a different body and brain. Changing our bodies illustrates
> that the body is more like a means of transport and a way to interact with
> pals.
>

The trouble with this, as has already been pointed out, is that our bodies
have a significant role to play in our personal identities. If you replace
the brain/consciousness with a computer -- without some sort of body capable
of independent locomotion and manipulation of the environment -- then even
in the presence of input from the environment, my prediction is that the
person would go mad within a few hours or days. In other words, replacing a
physical brain/body with a computer will generally destroy the person.


> Note that the personal identity is not a transitive notion. Step 3
> actually illustrates this well. Recall that the person in Helsinki (H) is
> cut and copied into both Washington (W) and Moscow (M). With the definition
> of personal identity above, both the HW and the HM guy are, from that
> personal identity view, the same person as the H person.
>

With a more sensible notion of personal identity, the copies are different
persons, and different persons from the original.


> But from the indexical first personal “lived” view, the HW guy knows that
> he is not the same (first) person as the HM person, and vice versa, but
> both can agree (and could have decided beforehand) that they are legally
> the same person, and “right descendant” of the H person.
>

No, they have likely gone mad by this time. Thought experiments are all very
well, but they have to conform to the basic laws of existence -- physics,
consciousness, and neuropsychology in this case.


> If the duplication is iterated, all histories are realised (in this
> particular protocol), and it is a simple exercise to show that the majority
> of the first persons obtained have no possible algorithm for both their
> pasts and futures. The non-random histories become rare (meagre) as the
> number of iterations gets high (in the limit).
>

That just means that there is no meaning that can be assigned to the
probability of any particular outcome -- in your scenario, all possible
outcomes are realized in practice.
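
For what it is worth, the counting behind the "non-random histories become
rare" claim is simple enough to illustrate. After n binary (W/M) duplications
there are 2^n possible histories, but there are fewer than 2^(n-k) programs
shorter than n-k bits, so at most that many histories can be compressed by k
or more bits. A minimal sketch in Python (my own illustration of that
counting argument, not anything drawn from your protocol):

  # Counting sketch (illustrative assumption: each duplication adds one W/M bit).
  # After n duplications there are 2**n histories; fewer than 2**(n - k)
  # programs are shorter than n - k bits, so at most that many histories
  # can be compressed by k or more bits.
  n, k = 30, 10
  total_histories = 2 ** n
  compressible_bound = 2 ** (n - k)   # crude upper bound on "non-random" histories
  fraction = compressible_bound / total_histories
  print(f"histories after {n} duplications: {total_histories}")
  print(f"bound on histories compressible by {k} bits: {compressible_bound}")
  print(f"fraction: {fraction:.4%}")  # about 0.0977%; shrinks as k grows

But that only quantifies the claim; it does not make the scenario any more
physical.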

> The first person I is 3p-relative, but 1p absolute (the HM person knows
> that he is not the HW person, even if he knows that this is absolutely
> non-communicable, like “I am conscious”).
>

The fact that you are conscious is definitely communicable. How else do
you know that other people are conscious? And by "know" I do not mean
"prove", I mean "beyond all reasonable doubt", which is the standard of
"knowing" that we apply in everyday life, and hence, the only useful
meaning of the term in this context.

> The outsider knows (modulo the assumptions of course) that both copies will
> feel as if they were the original, and rightly so.
>
> Bruce, I know that you are not a fan of mechanism, but are you OK with all
> this when we assume Mechanism, if only for the sake of the argument?
>

If we assume mechanism in the form in which you describe it, then we leave
contact with the real world as we experience it. So in your imaginary world,
I imagine that anything you care to dream up goes, so maybe these scenarios
will be realized there -- who knows?


> One advantage of Digital Mechanism (aka computationalism) is that it allows
> simple thought experiments which show quickly what we are getting at (a
> many-histories interpretation of elementary arithmetic), but we also get
> the possibility of using the mathematical theory of computation and
> computability, so we can refine the argument, and already derive a bit of
> the mathematical physics that the universal machine deduces for itself from
> the computationalist hypothesis.
>

Things should be as simple as possible, but no simpler. Your thought
experiments violate this basic law of science.


> That is not evidence for the truth of mechanism, of course, but I like
> to search for the key under the lamppost of mathematics, where ideas/keys
> can be found and tested.
>

Simplistic models are often used in this way in science -- as a test bed
for ideas. It is only when you confuse the simplistic models with reality
that you run into trouble.

Bruce
