On 5/24/2015 4:09 AM, Pierz wrote:


On Sunday, May 24, 2015 at 4:47:12 PM UTC+10, Jason wrote:



    On Sun, May 24, 2015 at 12:40 AM, Pierz <pie...@gmail.com> wrote:



        On Sunday, May 24, 2015 at 1:07:15 AM UTC+10, Jason wrote:



            On Tue, May 19, 2015 at 12:44 PM, Bruno Marchal <mar...@ulb.ac.be> wrote:


                On 19 May 2015, at 15:53, Jason Resch wrote:



                On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou
                <stat...@gmail.com> wrote:

                    On 19 May 2015 at 14:45, Jason Resch <jason...@gmail.com> wrote:
                    >
                    >
                    > On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou
                    <stat...@gmail.com>
                    > wrote:

                    >>
                    >> On 19 May 2015 at 11:02, Jason Resch <jason...@gmail.com> wrote:
                    >>
                    >> > I think you're not taking into account the level of
                    >> > the functional substitution. Of course functionally
                    >> > equivalent silicon and functionally equivalent neurons
                    >> > can (under functionalism) both instantiate the same
                    >> > consciousness. But a calculator computing 2+3 cannot
                    >> > substitute for a human brain computing 2+3 and produce
                    >> > the same consciousness.
                    >>
                    >> In a gradual replacement the substitution must obviously
                    >> be at a level sufficient to maintain the function of the
                    >> whole brain. Sticking a calculator in it won't work.
                    >>
                    >> > Do you think a "Blockhead" that was functionally
                    >> > equivalent to you (it could fool all your friends and
                    >> > family in a Turing test scenario into thinking it was
                    >> > in fact you) would be conscious in the same way as you?
                    >>
                    >> Not necessarily, just as an actor may not be conscious
                    >> in the same way as me. But I suspect the Blockhead would
                    >> be conscious; the intuition that a lookup table can't be
                    >> conscious is like the intuition that an electric circuit
                    >> can't be conscious.
                    >>
                    >
                    > I don't see an equivalency between those intuitions. A
                    > lookup table has a bounded and very low degree of
                    > computational complexity: all queries are answered in
                    > constant time.
                    >
                    > While the table itself may have an arbitrarily high
                    > information content, what in the software of the lookup
                    > table program is there to appreciate/understand/know
                    > that information?

                    Understanding emerges from the fact that the lookup table
                    is immensely large. It could be wrong, but I don't think
                    it is obviously less plausible than understanding emerging
                    from a Turing machine made of tin cans.



                The lookup table is intelligent, or at least offers the
                appearance of intelligence, but it takes maximum advantage of
                the space-time tradeoff:
                http://en.wikipedia.org/wiki/Space%E2%80%93time_tradeoff

                The tin-can Turing machine is unbounded in its potential
                computational complexity; there's no reason to be a bio- or
                silico-chauvinist against it. A lookup table, however, by
                definition has near-zero computational complexity and no
                retained state.
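
                A minimal sketch of that tradeoff, in Python (illustrative
                only; none of this code is from the thread). The table
                answers in constant time precisely because every answer was
                computed and stored in advance:

                    import math

                    # Compute on demand: almost no memory, work done per call.
                    def sine_on_demand(deg):
                        return math.sin(math.radians(deg))

                    # Precompute: all the work up front, then every answer is
                    # a constant-time retrieval, at the cost of storing the
                    # whole table.
                    SINE_TABLE = {d: math.sin(math.radians(d)) for d in range(360)}

                    def sine_from_table(deg):
                        return SINE_TABLE[deg]  # pure retrieval, no computation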

                But it is counterfactually correct over a large range of
                inputs. Of course, it would have to be infinite to be
                genuinely counterfactually correct.


            But the structure of the counterfactuals is identical regardless
            of the inputs and outputs in its lookup table. If you replaced
            all of its outputs with random strings, would that change its
            consciousness? What if there existed a special decoding book, a
            one-time pad that could decode its random answers? Would the
            existence of this book make it more conscious than if the book
            did not exist? If there is zero information content in the
            outputs returned by the lookup table, it might as well return
            all "X" characters in response to any query; but then, would any
            program that just returns a string of "X"s be conscious?
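
            To make the point concrete, a toy sketch in Python (hypothetical
            code, not from the thread): the lookup program's control flow is
            identical whether its table holds sensible answers, random
            strings, or all "X"s; only the stored data differs.

                def blockhead(query, table):
                    # One retrieval, constant time, regardless of what the
                    # table contains.
                    return table.get(query, "")

                sensible  = {"what is 2+3?": "5"}
                scrambled = {"what is 2+3?": "XXXXX"}
                # blockhead("what is 2+3?", sensible) and
                # blockhead("what is 2+3?", scrambled) execute exactly the
                # same instructions; only the data differs.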

        I really like this argument, even though I once came up with a (bad)
        attempt to refute it. I wish it received more attention because it
        does cast quite a penetrating light on the issue. What you're
        suggesting is effectively the cache pattern in computer programming,
        where we trade memory resources for computational resources. Instead
        of repeating a resource-intensive computation, we store the inputs
        and outputs for later regurgitation.
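
        The cache pattern, as a minimal Python sketch (illustrative only):
        the first call pays the computational cost; repeat calls are
        answered from stored results, trading memory for time.

            from functools import lru_cache

            @lru_cache(maxsize=None)  # remember every (input -> output) pair
            def expensive(n):
                # Stand-in for a resource-intensive computation.
                return sum(i * i for i in range(n))

            expensive(10**6)  # computed once...
            expensive(10**6)  # ...then regurgitated from the cache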


    How is this different from a movie recording of brain activity (which
    most on the list seem to agree is not conscious)? The lookup table is
    just a really long recording, only we use the input to determine which
    section of the recording to fast-forward/rewind to.
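
    In code, that "recording with seek" might look like this (a hypothetical
    sketch; the tape and index are invented for illustration). The input does
    no computation at all; it only selects where playback resumes:

        TAPE = ["frame-0", "frame-1", "frame-2", "frame-3"]
        INDEX = {"what is 2+3?": 1, "hello": 3}  # input -> tape position

        def play(query):
            offset = INDEX.get(query, 0)  # "fast-forward" to the section
            return TAPE[offset]           # replay it verbatim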

It isn't different to a recording. But here's the thing: when we ask if the lookup machine is conscious, we are implicitly asking: is it having an experience *now*, while I ask the question and see a response? But what does such a question actually even mean? If a computation is underway in time when the machine responds, then I assume it is having a co-temporal experience. But the lookup machine idea forces us to the realization that different observers' subjective experiences (the pure qualia) can't be mapped to one another in objective time. The experiences themselves are pure abstractions and don't occur in time and space. How could we ever measure the time at which a quale occurs?

By having the quale of "looking at my watch" before and after the quale in 
question.

Sure, we could measure brain waves and map them to reported experiences, and so conclude that the brain waves and experiences occurred "at the same time", but the experience itself might have occurred at any time and just happen to correlate with those neuronal firing patterns.

Isn't this another one of those "suppose the extremely improbable" arguments? I'd say the way you relate these things (time, qualia, brain activity) is by a theory, the same way you relate other things. One such theory is that the quale is part of the brain's physical activity. Another is Bruno's, in which qualia are a proof relation between numbers.

Perhaps I experience the moment I think of as "now" exactly 100 years after it actually happened - except of course such an assertion is meaningless, because the subjective and the objective can't be mapped to one another at all. I've said before that a recording /is/ conscious to the extent that it is a representation of a conscious moment, just like the original "event" was (as seen perhaps by those who were there). I mean to say, how is a recording different from an observation? It's just a delayed or echoed observation. Again, /when/ is an experience? Is it happening as the neurones fire? Even Dennett - hardly a Platonist - has critiqued this naive idea, pointing out how the sequence and timing of experience are really a construction. Qualia are not /in/ time and space.

Time and space are constructions too. We use "constructions" to remind ourselves that they are theory-laden and might be different under another theory. But that doesn't necessarily mean a construction is wrong, that it is *only* a construction. Science generally advances by taking its best theories seriously and pushing them to find their limits.

Brent
