--- Vladimir Nesov <[EMAIL PROTECTED]> wrote:

> Sunday, September 9, 2007, Matt Mahoney wrote:
> 
> MM> Also, Chalmers argues that a machine copy of your brain must be
> MM> conscious.  But he has the same instinct to believe in consciousness
> MM> as everyone else.  My claim is broader: that either a machine can be
> MM> conscious or that consciousness does not exist.
> 
> While I'm not yet ready to continue my discussion on essentially the
> same topic with Stathis on SL4, let me define this problem here.
> 
> Let's replace the discussion of consciousness with the simpler notion of
> 'subjective experience'. So, there is a host universe containing an
> implementation of a mind (a brain or any other such thing) which we
> assume, as a starting point, to have subjective experience.
> 
> Subjective experience exists as a system of relations in the mind's
> implementation in the host universe (or as the process of their
> modification over time). From this it supposedly follows that subjective
> experience exists only as that system of relations, and that if the same
> relations are instantiated in a different implementation, the same
> subjective experience should also exist.
> 
> Let X be the original implementation of the mind (X denotes the state of
> the matter in the host universe that comprises the 'brain'), and let S be
> the system of relations implemented by X (the mind). There is a simple
> correspondence between X and S; say S = F(X). Since a brain can be
> slightly modified without significantly affecting the mind (an additional
> assumption), F can also be modification-tolerant: for example, if you
> replace some components of X's neurons with constructs of different
> chemistry which still implement the same functions, F(X) will not change
> significantly.
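> 
> A minimal sketch of this setup in Python, with toy stand-ins for X, S,
> and F (everything below is illustrative, not part of the argument):
> 
>     def X(w):                    # original implementation ("brain")
>         return sum(ord(c) for c in w) % 7
> 
>     def X_mod(w):                # slightly modified implementation:
>         total = 0                # different "chemistry", same function
>         for c in w:
>             total += ord(c)
>         return total % 7
> 
>     def F(impl):                 # correspondence: implementation -> mind
>         # Toy stand-in: identify the mind S with the implementation's
>         # behaviour on a fixed set of probe inputs.
>         return tuple(impl(w) for w in ["a", "ab", "abc", "hello"])
> 
>     assert F(X) == F(X_mod)      # modification-tolerant: F(X) unchanged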
> 
> Now let Z be an implementation of an upload of X; that is, Z might be a
> network of future PCs plus the required software and data extracted from
> X. Now, how does Z correspond to S? There is clearly some correspondence
> that was used in the construction of Z. For example, let there be a
> certain feature D of S that can be observed on X and extracted by a
> procedure R, so that D = R(S) = R(F(X)) = (RF)(X); D can be, for example,
> a certain word that S is saying right now. Implementation Z comes with a
> function L that allows D to be extracted, that is, D = L(Z), or
> L(Z) = R(S).
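> 
> Continuing the toy sketch (R, L, and the feature D below are again
> illustrative assumptions):
> 
>     def X(w):                    # original implementation, as above
>         return sum(ord(c) for c in w) % 7
> 
>     def F(impl):                 # implementation -> mind (toy stand-in)
>         return tuple(impl(w) for w in ["a", "ab", "abc", "hello"])
> 
>     def R(S):                    # procedure R extracting feature D from S
>         return S[-1]             # toy feature: "what S is saying now"
> 
>     Z = F(X)                     # upload: data extracted from X
> 
>     def L(Z):                    # feature extractor shipped with Z
>         return Z[-1]
> 
>     assert L(Z) == R(F(X))       # D = L(Z) = R(S)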
> 
> The presence of implementation Z and feature-extractor L only allows the
> observation of features of S. But to say that Z implements S in the sense
> defined above for X, there should be a correspondence S = F'(Z). This
> correspondence F' supposedly exists, but it is not implemented in any
> way, so there is nothing that makes it more appropriate for Z than some
> other arbitrary correspondence F'' which yields a different mind,
> F''(Z) = S' != S. F' is not a near-equivalence as F was. One can't say
> that the implementation of the uploaded mind simulates the same mind, or
> even a mind that is similar in any way. It observes the behaviour of the
> original mind using feature-extractors and so is functionally equivalent,
> but it does not exclusively provide an implementation of the same
> subjective experience.
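> 
> A toy illustration of why an unimplemented correspondence is arbitrary
> (S, S', and F2, standing in for F'', are illustrative): nothing stops F''
> from ignoring Z entirely and returning any mind we please.
> 
>     S       = ("the", "original", "mind")   # toy stand-in for S
>     S_prime = ("some", "other", "mind")     # a different mind, S' != S
> 
>     def F2(Z):                   # "correspondence" F'': never implemented,
>         return S_prime           # so it may ignore Z altogether
> 
>     Z = object()                 # any upload implementation whatsoever
>     assert F2(Z) == S_prime      # F''(Z) = S', yet Z is unchanged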
> 
> So here is the difference: the simplicity of the correspondence F between
> the implementation and the mind. We know from experience that
> modifications which leave F a simple correspondence don't destroy
> subjective experience. But complex correspondences make it impossible to
> distinguish between the possible subjective experiences an implementation
> simulates, as the correspondence function itself isn't implemented along
> with the simulation.
> 
> As a final paradoxical example: if implementation Z is nothing, that is,
> it comprises no matter and no information at all, there is still a
> correspondence function F(Z) = S which supposedly asserts that Z is X's
> upload. There can even be a feature extractor (which will have to
> implement a functional simulation of S) that works on an empty Z. From
> the point of view of simulating subjective experience, what is the
> difference between this empty Z and a proper upload implementation?
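> 
> A toy version of the empty upload (again illustrative): the feature
> extractor must carry the whole simulation itself, since Z contributes
> nothing.
> 
>     def X(w):                    # the original mind's behaviour
>         return sum(ord(c) for c in w) % 7
> 
>     def L_empty(Z, w):           # feature extractor for an empty upload:
>         return X(w)              # it must itself simulate S to produce D
> 
>     assert L_empty(None, "hi") == X("hi")   # works with Z = nothing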
> 
> -- 
>  Vladimir Nesov                            mailto:[EMAIL PROTECTED]

Perhaps I misunderstand, but to make your argument more precise:

X is an implementation of a mind, a Turing machine.

S is the function computed by X, i.e. a canonical form of X, the smallest or
first Turing machine in an enumeration of all machines equivalent to X.  By
equivalent, I mean that X(w) = S(w) for all input strings w in A* over some
alphabet A.

Define F: F(X) = S (canonical form of X), for all X.  F is not computable, but
that is not important for this discussion.

An upload, Z, of X is defined as any Turing machine such that F(Z) = F(X) = S,
i.e. Z and X are equivalent.
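
Equivalence over all of A* is undecidable in general, so any concrete
check can only sample it; a minimal sketch with toy stand-ins for X and Z
(the machines and the test bound are illustrative):

    from itertools import product

    def equivalent_on_sample(X, Z, alphabet="01", max_len=8):
        # Test X(w) == Z(w) for every string w over `alphabet` up to
        # length `max_len`.  Agreement does not prove F(Z) = F(X);
        # a single disagreement disproves it.
        for n in range(max_len + 1):
            for chars in product(alphabet, repeat=n):
                w = "".join(chars)
                if X(w) != Z(w):
                    return False
        return True

    # Toy machines: X computes the parity of 1s; Z claims to be its upload.
    X = lambda w: w.count("1") % 2
    Z = lambda w: sum(c == "1" for c in w) & 1
    print(equivalent_on_sample(X, Z))   # True on this finite sample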

Then the paradox in your last example cannot arise: F(nothing) != S,
because S is by definition the shortest program that implements X, and
|nothing| < |S|, so an empty machine cannot be equivalent to X.

The other problem is that you have not defined "subjective experience". 
Presumably this is the input to a consciousness?  If consciousness does not
exist, then how can subjective experience exist?  There is only input to the
Turing machine that may or may not affect the output.  A reasonable definition
of subjective experience would be the subset of inputs that affect the output.
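
A minimal sketch of that last definition (the machine M and its inputs
are illustrative): an input belongs to the "subjective experience" subset
iff replacing it changes the output.

    def affects_output(M, inputs, i, replacement=""):
        # True if input i affects M's output: substituting
        # inputs[i] changes what M computes.
        altered = list(inputs)
        altered[i] = replacement
        return M(inputs) != M(altered)

    # Toy machine: the output depends only on the first two inputs.
    M = lambda xs: xs[0] + xs[1]
    inputs = ["red", "loud", "ignored", "also ignored"]

    experienced = [w for i, w in enumerate(inputs)
                   if affects_output(M, inputs, i)]
    print(experienced)   # ['red', 'loud']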


-- Matt Mahoney, [EMAIL PROTECTED]
