Quentin Anciaux wrote:


2010/1/11 Brent Meeker <meeke...@dslextreme.com>

    Stathis Papaioannou wrote:

        2010/1/11 Brent Meeker <meeke...@dslextreme.com>:

            But aren't you assuming that consciousness is produced by the
            abstract Platonic computation - rather than by the actual
            physical process (which is not the same) - in other words
            assuming the thing being argued?

        No, I'm at this point assuming only that consciousness is produced
        by the physical process. We can assume for simplicity that the two
        machines M1 and M2 have similar architecture and similar operating
        systems. Once the program is loaded into M2 from the disc, S2
        proceeds exactly the same as it would have had the computation
        been allowed to continue running on M1. Therefore, at least after
        the first few milliseconds, the subjective content of S2 must be
        the same as it would have been on the one machine. Could the
        subjective content be different at the transition between S1 and
        S2 if the computation is split up? If there is a subjective
        difference it won't be something the subject can notice because,
        later in the course of S2, he can have no memory of it.
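
        Concretely, here is a toy sketch of the scenario in Python (the
        step function, the constants, and the pickle-based "disc" are all
        illustrative stand-ins of mine, not real process migration):

        import pickle

        def step(state):
            # toy deterministic update standing in for one machine cycle
            return (state * 1103515245 + 12345) % (2 ** 31)

        def run(state, n):
            # advance n cycles, recording every intermediate state
            trace = []
            for _ in range(n):
                state = step(state)
                trace.append(state)
            return state, trace

        # uninterrupted run on "M1"
        _, full_trace = run(42, 100)

        # run 50 steps on "M1", then freeze the microstate to "disc"...
        s1, t1 = run(42, 50)
        disc = pickle.dumps(s1)

        # ...load it into "M2" and resume
        s2 = pickle.loads(disc)
        _, t2 = run(s2, 50)

        # the resumed run proceeds exactly as it would have on one machine
        assert t1 + t2 == full_trace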


    But if you're only assuming that consciousness is produced by the
    physical process then the process of downloading and uploading the
    microstates and shifting the data into registers in the CPU and
    memory could produce a difference in consciousness.  These are all
    computations too, done by the operating system.  And why can't
    there be memory of it in the sense that it affects some later
    conscious state?  There are traces of the transfer process left on
    the original computer, the disc, and the second computer. Some
    subsequent program could retrieve these traces, as is done in
    forensic cases.  If physical processes instantiate consciousness,
    why shouldn't these make a difference?


Because those states are not part of the "computation" you split across the two computers.

They are not part of the abstract Platonic computation - but they are part of the physical computation. So the question is, on which does consciousness depend? My point is not to argue against Bruno's theory, but only to point out that saying "yes" to the doctor may not be the same as betting that consciousness = (Platonic) computation. If the doctor proposed to replace your brain with an abstract computation, you'd probably say "no". If he proposed to replace you, your brain, and the whole world with which you will ever interact, i.e. a virtual you in a virtual world, would you say "yes"? You'd probably wonder how he was going to compute that whole world.

And also, assuming computationalism... any implementation that does the job effectively does the job. That means that while it's true there are additional steps in the two-computer case, it's just another *valid* implementation of the same computation as on one computer; assuming computationalism, that changes *nothing*. Arguing otherwise is denying computationalism (maybe that's right, and computationalism is false). A sketch of this point follows.
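
To make "another valid implementation of the same computation" concrete, here is a toy Python sketch, assuming we identify the computation with the state sequence of the emulated machine (step, run_plain and run_with_bookkeeping are my own illustrative names, not anything from the thought experiment itself):

def step(state):
    # toy deterministic update standing in for the emulated machine
    return (state * 3 + 1) % 1000

def run_plain(state, n):
    # the one-computer case: just the computation's own states
    trace = []
    for _ in range(n):
        state = step(state)
        trace.append(state)
    return trace

def run_with_bookkeeping(state, n):
    # the two-computer case: the same states, plus the OS's extra
    # physical activity (saving, loading, traces left on the disc)
    trace, log = [], []
    for i in range(n):
        log.append(("save", i, state))
        state = step(state)
        log.append(("load", i, state))
        trace.append(state)
    return trace, log

# the extra bookkeeping steps leave the emulated state sequence untouched
assert run_plain(7, 20) == run_with_bookkeeping(7, 20)[0]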



        It also can't be a difference that would disrupt the completion of
        a task or thought that requires continuity of consciousness
        spanning S1-S2, since again the subject cannot have any evidence
        that such a disruption occurred.

    Unless we have a theory of how consciousness is related to the
    physical computation, I don't think we can conclude that.  We
    already know that subliminal perceptions can affect conscious
    thoughts - so why not subliminal memories?

We don't, but what Bruno is showing is the consequences *if* we are Turing emulable.

Bruno gives himself the luxury of considering Turing emulability at arbitrarily low levels, including emulating your whole world. In fact he sidesteps the doctor's problem above by simply emulating all (arithmetically) possible worlds.

But this thread started with my questioning the idea of discrete computational states, which are inherent in a Turing emulation, as being "thoughts" or "observer moments", and of such moments having no order except something inherent in their content. I think that thoughts have duration, in time and in computational steps, and can therefore overlap with other thoughts; this overlap can provide an ordering not dependent on the content of single computational states (see the sketch below). I find this more convincing because it doesn't rely on memories, which in general are not part of consciousness.
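
A toy Python sketch of that ordering-from-overlap idea, assuming thoughts can be modelled as windows of consecutive computational steps (the window size, the half-window offset, and the greedy reconstruction are all my own illustrative choices):

import random

# computational steps 0..94; each "thought" spans 10 steps and starts
# 5 steps after the previous one, so neighbours share half their steps
thoughts = [set(range(i, i + 10)) for i in range(0, 90, 5)]

shuffled = thoughts[:]
random.shuffle(shuffled)  # throw away the given order

# recover the order from overlap alone: anchor at the window containing
# step 0, then repeatedly take the unused window sharing the most steps
# (which end we start from is a convention; overlap fixes the order only
# up to reversal)
current = next(t for t in shuffled if 0 in t)
ordered, remaining = [current], [t for t in shuffled if t is not current]
while remaining:
    nxt = max(remaining, key=lambda t: len(t & ordered[-1]))
    ordered.append(nxt)
    remaining.remove(nxt)

assert ordered == thoughts  # the overlaps, not the contents, fix the order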

As I said, I'm interested in what it takes to make a conscious AI. In terms of pure computational capacity, I expect that producing human-level consciousness may be within the reach of the fastest computers within the next 50 years. But what should we have them compute? I don't think a UD (universal dovetailer) is the way to go, and even if it were, it would give us no insight into consciousness.
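
For reference, the core trick of a UD is dovetailing: interleaving the execution of every program so that each one gets unboundedly many steps. A minimal Python sketch, assuming programs come from some fixed enumeration (the generator program(i) below is an illustrative stand-in, not a real enumeration of Turing machines):

from itertools import count

def program(i):
    # stand-in for the i-th program of a fixed enumeration; a real UD
    # would enumerate all programs (e.g. all Turing machines)
    n = 0
    while True:
        n += i
        yield (i, n)

def dovetail(max_stage=None):
    # stage k: start program k, then advance every started program one
    # step, so no program is ever starved however many are running
    running = []
    for stage in count():
        if max_stage is not None and stage >= max_stage:
            return
        running.append(program(stage))
        for p in running:
            yield next(p)

for out in dovetail(max_stage=4):
    print(out)  # interleaved outputs of programs 0..3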

Brent

If we are Turing emulable, none of your above objections holds, because they are pitched at much too high a level (they are completely valid objections at the level you describe, but assuming comp, those processes are *still* computed at a lower level and hence are *part* of a computation that generates consciousness; see Bruno's generalized-brain argument).

Quentin

    Brent


--
All those moments will be lost in time, like tears in rain.
