On 5/08/2016 6:12 pm, Bruno Marchal wrote:
On 05 Aug 2016, at 04:13, Bruce Kellett wrote:
On 5/08/2016 3:41 am, Bruno Marchal wrote:
On 04 Aug 2016, at 04:37, Bruce Kellett wrote:
On 4/08/2016 1:04 am, Bruno Marchal wrote:
On 03 Aug 2016, at 07:16, Bruce Kellett wrote:
You use the assumption that the duplicated consciousnesses
automatically differentiate when receiving different inputs.
It is not an assumption.
Of course it is an assumption. You have not derived it from
anything previously in evidence.
See my answer to Brent. It is just obvious that the first-person
experience differentiates when it gets different experiences, leading
to different memories. We *assume* computationalism. How could the
diaries not differentiate? What you say does not make any sense.
I have been at pains to argue (in several different ways) that the
differentiation of consciousness is not automatic. It is very easy to
conceive of a situation in which a single consciousness continues in
two bodies, with the streams of consciousness arising from both
easily identifiable, but still unified in the consciousness of a
single person. (I copy below my recent argument for this in a post
replying to Russell.) So the differentiation you require is not
necessary or automatic -- it has to be justified separately because
it is not "just obvious".
Your recent expansion of the argument of step 3 in discussions with
John Clark does not alter the situation in any way -- you still just
assert that the differentiation takes place on the receipt of
different input data.
I had thought that the argument for such differentiation of
consciousness in different physical bodies was a consequence of some
mind-brain identity thesis. But I am no longer sure that even that is
sufficient -- the differentiation clearly requires separate
bodies/brains (separate input data streams), but separate bodies are
not sufficient for differentiation, as I have shown.
That was shown and explained before and is not contested here.
I thought I was contesting it.
Please read the posts.
That is why I introduce a painting in question 2.
That still just gives differentiation on different data inputs -- it
changes nothing.
But let us first see if you agree with question 1.
Do you agree that if the H-guy is told that a hot drink will be
offered to both reconstitutions, in W and in M, he is entitled to expect
a hot drink with probability one (assuming computationalism and the
default hypothesis)?
I do not assume computationalism; I am questioning its validity.
Do you agree that P(X) = 1 in Helsinki, if X will occur in both cities?
I think that it is entirely possible that the H-guy will, after the
duplication, experience drinking two coffees.
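To spell out the arithmetic I take to lie behind your question: if the W
and M continuations are treated as exclusive alternatives with weights p
and 1 - p, then, since the drink is served in both, P(hot drink) =
p*1 + (1 - p)*1 = 1, whatever value p takes. But that calculation already
presupposes that the W-experience and the M-experience are exclusive
alternatives for a single experiencer, and that presupposition is exactly
what I am questioning.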
What is required is a much stronger additional assumption, namely an
association between minds and brains such that a mind can occupy only
one brain.
Not at all. We can say that one mind occupies both brains in the
WM-duplication, before the opening of the door, assuming the
reconstitution boxes are identical. Mind-brain identity fails right at
step 3.
Mind-brain identity need not fail: what fails in my interpretation of
duplication is the one-to-one correspondence of one mind with one body.
One needs something stronger than mind-brain identity to justify the
differentiation on different data inputs, because we can have one-many
and many-one mind-body relationships.
We can associate a mind to a body, but the mind itself (the 1p) can be
(and must be) associated with many different bodies, in the physical
universe and later in arithmetic.
You seem to accept my point -- there is still only one mind even after
different data are fed to the copies: one mind in two bodies in this
case (a one-many relationship).
(Whether a single brain can host only one mind is a separate matter,
involving one's attitude to the results of split-brain studies and
the psychological issues surrounding multiple personalities/minds.)
In other words, the differentiation assumption is an additional
assumption that does not appear to follow from either physicalism or
YD+CT.
It follows from very elementary computer science, and in our case it
follows necessarily, as the 1p is identified, in this setting, with the
content of the personal diary, which obviously differentiates on the
self-localization result made by the reconstitutions.
I think the diaries are just confusing you. The copy in M can write M in
the diary in Moscow, and the copy in W can write W in the diary in
Washington. That is not necessarily different from me writing M in one
diary with my left hand while writing W in a separate diary with my
right hand. No differentiation into two separate persons is necessary in
either case. There is no "self-localization" if there is only ever one
consciousness -- the person experiences both W and M simultaneously.
As I have further pointed out, one cannot just make this an
additional assumption to YD+CT because it is clearly an empirical
matter: until we have a working person duplicator, we cannot know
whether differentiation is automatic or not. Science is, after all,
empirical, not just a matter of definitions.
Once you agree with P(Mars) = 1 in a simple classical teleportation
experiment (step 1), then how could the diary not differentiate when
the reconstituted guy writes the result of the self-localization?
No self-localization: the diaries in the two cities may contain
records of the correct cities, but that does not mean that there are two
separate people (consciousnesses) involved. The diaries are multiple --
the person is not.
No empirical test needs to be done, as the differentiation is obvious:
one copy experiences the city of Moscow, as his diary confirms, and
the other experiences the city of Washington, as his diary confirms
too. If they did not differentiate, what would they write in the diary?
He writes in the diaries what he sees: it is just a matter of the
protocol whether he writes the name of the city in which each diary is
located in that particular diary, or whether he writes in both diaries what
he sees in total, in which case he writes W&M in both diaries. It need
be no different from my seeing one thing with my right eye and writing
that down with my right hand, and seeing something different with my
left eye and writing that down with my left hand, or writing down both
things with both hands. (This is not a split-brain experiment.)
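Just to make the bookkeeping explicit, here is a toy sketch (Python,
purely illustrative -- the protocol names are my own) of the two
recording conventions I have in mind:

    def local_protocol(cities):
        # Each reconstitution records only the city its own body is in.
        return {city: [city] for city in cities}

    def union_protocol(cities):
        # A single experiencer records everything he sees in every diary.
        seen = "&".join(cities)          # e.g. "W&M"
        return {city: [seen] for city in cities}

    cities = ["W", "M"]
    print(local_protocol(cities))   # {'W': ['W'], 'M': ['M']}     -- the diaries differ
    print(union_protocol(cities))   # {'W': ['W&M'], 'M': ['W&M']} -- the diaries agree

In the first convention the diaries differ and in the second they do not;
either way, what ends up in the diaries is fixed by the recording
protocol, not by any differentiation of the consciousness doing the
recording.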
All the things that you bring up could easily happen without any
differentiation into two separate consciousnesses. You might find the
non-locality of the unified experience a little surprising, but that is
only because you are not used to the concept of non-locality.
I say again, even though it seems obvious to you that the
differentiation must occur, that is just a failure of imagination on
your part. Try to put yourself in the situation in which some of the
many strands of your conscious thoughts relate to bodies in different
cities. There is no logical impossibility in this. You seem to accept
that a single mind can be associated with more than one body: "We can
associate a mind to a body, but the mind itself (the 1p) can be (and
must be) associated with many different bodies, in the physical universe
and later in arithmetic." (quoted from your comment above.) Hold on to
this notion, and consider the possibility that there is no
differentiation into separate conscious persons in such a case (the 1p
is singular -- there is only ever just one person).
Bruce