Bruno Marchal wrote:


On 09-Oct-06, at 21:54, George Levy wrote:

To observe a split consciousness, you need an observer who is also split,

?
This is simple. The time/space/substrate/level of the observer must match the time/space/substrate/level of what he observes. The Leibniz analogy is good. In your example, if one observes just the recording, without observing the earlier creation of the recording and its later use, then one may rightfully conclude that the recording is not conscious.

in sync with the split consciousness, across time, space, substrate and level (à la Zelazny, the science-fiction writer). In your example, for an observer to see consciousness in the machine, he must be willing to exist during the earlier interval, skip over the time delay while the recording is carried forward, and resume his existence during the later interval. If he observes only a part of the whole thing, say the recording, he may conclude that the machine is not conscious.

This is unclear to me. Unless you are just saying, like Leibniz, that you will not "see" consciousness in a brain by examining it under a microscope.

Note also that I could attribute consciousness to a recording, but this makes sense only if the recording is precise enough that I could add the "Klaras", or anything else which would make it possible to continue some conversation with the system. And then I do not attribute consciousness to the physical appearance of the system, but to some person who manifests him/her/itself through it.
Adding Klaras complicates the problem, but the result is the same. Klaras must be programmed. Programming is like recording: a means of inserting oneself at programming time for later playback at execution time. I have already shown that Maudlin was cheating by rearranging his tape, in effect programming the tape. So I agree with you, if you agree that programming the tape sequence is just a means of connecting different pieces of a conscious process, where each piece operates at a different time.
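
To make this concrete, here is a minimal toy sketch (the transition function, the tape contents and the state numbers are all invented purely for illustration, not anything from Maudlin's paper). A tape written at "programming time" and replayed at "execution time" drives a machine through exactly the same state trajectory as an uninterrupted live run, so the two phases can be read as pieces of one process operating at different times:

def step(state, symbol):
    # Toy deterministic transition: the next state depends only on the
    # current state and the input symbol.
    return (state * 31 + symbol) % 101

def run(initial_state, inputs):
    # Drive the machine through a sequence of inputs; return its states.
    state, trace = initial_state, [initial_state]
    for s in inputs:
        state = step(state, s)
        trace.append(state)
    return trace

# Phase 1, "programming time": the inputs are produced and written to a tape.
tape = [3, 1, 4, 1, 5, 9, 2, 6]

# (arbitrary delay: the tape is simply carried forward in time, unchanged)

# Phase 2, "execution time": the tape is played back into an identical machine.
replayed_trace = run(0, tape)

# An uninterrupted live run on the same inputs yields the same trajectory, so
# the record/delay/replay pipeline merely splits one process across two times.
live_trace = run(0, [3, 1, 4, 1, 5, 9, 2, 6])
assert replayed_trace == live_trace

# An observer who watches only phase 2 sees a machine marching through a fixed
# sequence of states and may rightly say: "this is just a recording".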
In addition, if we are going to split consciousness maximally in this fashion, the concept of observer becomes important, something you do not include in your example.
Could you elaborate? I don't understand. As a consequence of the reasoning, the observer (like the knower and the feeler) will indeed be very important, and will correspond to the hypostases (n-person points of view) in the AUDA. But in the reasoning itself, either the passage from one step to the next is valid or it is not, and I don't see the relevance of your point here. I guess I am missing something.

I do not understand the connection with the hypostases in the AUDA. However, it is true that the conscious machine is its own observer, no matter how split its operation is (e.g., time-sharing, operation at different levels, etc.). Still, the examples will be more striking if a separate observer is introduced. Of course, the separate observer will have to track the time/space/substrate/level of the machine in order to observe the machine as conscious (possibly with a Turing test). Forgive me for insisting on a separate observer, but I think that a relativity approach could bear fruit.

You could even get rid of the recording and replace it with random inputs (happy rays in your paper).

As you can see, with random inputs the machine is not conscious to an observer anchored in the physical. The machine just appears to follow a random series of states.

But the machine can be observed to be conscious if it is observed precisely at those times when the random inputs match the counterfactual recording. So the observer needs to "open his eyes" only at precisely those times, and he therefore needs to be linked in some way to the machine's being conscious.

If the observer is the (self-reflecting) machine itself, there is no problem: the observer will automatically be conscious at those times.

If the observer is not the machine, we need to invoke a mechanism that will force him to be conscious at those times. He will have to be almost identical to the machine and will have to accept the same random data. So in a sense the observer will have to be a parallel machine, with some possible variations, as long as those variations are not large enough to put the observer and the machine on different time/space/substrate/levels.

Therefore, from the point of view of the second machine, the first machine appears conscious. Note that for the purpose of the argument WE don't have to assume initially that the second machine IS conscious, only that it can detect whether the first machine is conscious. Once we establish that the first machine is conscious, we can infer that the second machine is also conscious, simply because it is identical.
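
Here is a minimal toy sketch of this sampling observer (again invented purely for illustration; to keep it simple the machine is modeled as memoryless, so its visible activity at each instant depends only on the input it receives at that instant). An observer anchored in the physical, who watches every instant, sees only random-looking activity; an observer linked to the machine, who opens his eyes only at the instants where the random input coincides with the counterfactual recording, sees exactly the activity the recording-driven machine would have shown:

import random

def activity(symbol):
    # Toy stand-in for the machine's visible activity while it processes
    # one input symbol.
    return (symbol * 31 + 7) % 101

counterfactual_recording = [3, 1, 4, 1, 5, 9, 2, 6] * 50   # the "correct" inputs
random.seed(0)
random_inputs = [random.randint(0, 9) for _ in counterfactual_recording]

# The observer anchored in the physical watches every instant and sees only
# a random-looking sequence of activities.
physical_view = [activity(s) for s in random_inputs]

# The linked observer opens his eyes only at the instants where the random
# input happens to coincide with the counterfactual recording.
linked_view = [(t, activity(s))
               for t, (s, wanted) in enumerate(zip(random_inputs, counterfactual_recording))
               if s == wanted]

# At every instant the linked observer samples, the activity is exactly what
# the recording-driven machine would have shown at that instant.
for t, seen in linked_view:
    assert seen == activity(counterfactual_recording[t])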

The example is, of course, a representation of our own (many-)world.



(**) I am open to discussing this thoroughly, for example in November.
Right now I am a bit over-busy (until the end of October).

I'll be traveling to France in early November. We'll leave the detailed discussion for later in November.


OK. Take your time.


I will, thanks. In the meantime, I would appreciate it if you could elaborate on your point.



George
