On Tue, Oct 29, 2013 at 2:06 PM, meekerdb <meeke...@verizon.net> wrote:

>  On 10/29/2013 8:19 AM, Jason Resch wrote:
>
> Chris,
>
>  Perhaps it is simpler to think about first-person indeterminacy like
> this (it requires some familiarity with programming, but I will try to
> elaborate on those details):
>
>  Imagine there is a conscious AI inside a virtual environment (an open
> field).  Inside that virtual environment is a ball, which the AI is
> looking at, and next to the ball is a note which reads:
>
> "At noon (when the virtual sun is directly overhead) the protocol will
> begin.  In the protocol, the process containing this simulation will fork
> (split in two), after the fork, the color of the ball will change to red
> for the parent process and it will change to blue in the child process
> (forking duplicates a process into two identical copies, with one called
> the parent and the other the child). A second after the color of the ball
> is set, another fork will happen.  This will happen 8 times leading to 256
> processes, after which the simulation will end."
>
> It is 11:59 in the simulation.  What can the AI expect to see during the
> next 1 minute and 8 seconds?
>
>
> I don't see that as any different.
>

It is similar, but it never hurts to look at the same problem from
different angles.  What is a little more evident in this case is that
among the 256 possible memories of the AI about to meet its doom, none
contains the memory of seeing all 256 possibilities; in fact, the majority
of them remember the ball changing color back and forth at random.  Only 2
see it stay all red or all blue for the final 8 seconds.  None of them can
predict, from the view inside the simulation, whether the ball will stay
the same color or change after the next fork occurs.
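
To make the counting concrete, here is a minimal Python sketch (my own
illustration, not anything running inside the thought experiment) that
enumerates the 2^8 = 256 possible first-person color histories and counts
the monochrome ones:

    import itertools

    FORKS = 8  # the note specifies eight forks

    # A history is the sequence of colors seen after each fork:
    # 'R' in the parent process, 'B' in the child process.
    histories = list(itertools.product('RB', repeat=FORKS))

    print(len(histories))  # 256 distinct first-person histories

    # Histories in which the ball keeps one color throughout:
    monochrome = [h for h in histories if len(set(h)) == 1]
    print(len(monochrome))  # 2 (all red and all blue); in the other 254
                            # the color appears to flip at random

Each of the 256 processes remembers exactly one such sequence, which is
the point: no history contains the other 255.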


> The problem is still what is the referent of "the AI".  As John Clark
> points out, "the AI" is ambiguous when there are duplicates.
>

Personal identity is less of an issue in this case, because the
indeterminacy applies to the AI and to anyone or anything else inside the
simulation who might also be viewing the ball.  In this way, it is
slightly more analogous to the MWI, since it is the environment that is
duplicated, not just the person, and so the apparently random changing of
the ball's color is something the group of observers within the simulation
can agree upon.
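
A toy one-fork version of this, as a rough sketch (assuming a Unix system,
since os.fork is unavailable on Windows): the whole process, including the
ball and every observer in it, is duplicated together, so observers in the
same branch always agree on the color they see.

    import os

    # The "environment": a ball and two observers, all in one process.
    ball = {'color': None}
    observers = ['AI', 'bystander']

    pid = os.fork()  # duplicate the entire environment
    ball['color'] = 'red' if pid != 0 else 'blue'  # parent: red, child: blue

    # Within each branch, every observer sees the same color:
    for name in observers:
        print(f"{name} sees a {ball['color']} ball (process {os.getpid()})")

Run once, this prints four lines: two observers agreeing on red in the
parent process, and two agreeing on blue in the child.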


>   Sometimes Bruno talks about "the universal person" who is merely
> embodied as particular persons.  So on that view it would be right to say
> *the* universal person sees Washington and Moscow.
>

But not "at the same time" or as "an integrated experience", so the
appearance of randomness still arises from the first person perspective(s).


> But then that's contrary to identifying a person by their memories.  My
> view is that "a person" is just a useful model when there is no
> duplication - and that's true whether the duplication is via Everett or
> Bruno's teleporter.
>
>
What model should be used in a world with duplication, fission machines,
mind uploading, split brains, biological clones, amnesia, etc.? Or does
personhood no longer make sense at all in the face of such situations?

Personally, I believe no theory that ties personhood to a single
psychological or physiological continuity can succeed.

Jason
