On 6/9/2020 4:14 PM, Jason Resch wrote:


On Tue, Jun 9, 2020 at 6:03 PM Stathis Papaioannou <stath...@gmail.com> wrote:



    On Wed, 10 Jun 2020 at 03:08, Jason Resch <jasonre...@gmail.com> wrote:

        For the present discussion/question, I want to ignore the
        testable implications of computationalism on physical law, and
        instead focus on the following idea:

        "How can we know if a robot is conscious?"

        Let's say there are two brains, one biological and one an
        exact computational emulation, meaning exact functional
        equivalence. Then let's say we can exactly control sensory
        input and perfectly monitor motor control outputs between the
        two brains.

        Given that computationalism implies functional equivalence,
        identical inputs yield identical internal behavior (nerve
        activations, etc.) and identical outputs: muscle movements,
        facial expressions, and speech.

        If we stimulate nerves in each one's back to cause pain and ask
        both to describe it, both will speak identical sentences. Both
        will say it hurts when asked, and if asked to write a paragraph
        describing the pain, both will provide identical accounts.

        Does the definition of functional equivalence mean that any
        objective, third-person scientific analysis or test is doomed
        to find no distinction in behavior, and thus necessarily fails
        to disprove consciousness in the functionally equivalent robot
        mind?

        Is computationalism as far as science can go on a theory of
        mind before it reaches this testing roadblock?


    We can’t know if a particular entity is conscious, but we can know
    that if it is conscious, then a functional equivalent of the kind
    you describe is also conscious. This is the subject of David
    Chalmers’ paper:

    http://consc.net/papers/qualia.html


Chalmers' argument is that if the functionally equivalent brain is not conscious, then somewhere along the gradual replacement of neurons with functional equivalents we get either suddenly disappearing or fading qualia, both of which I agree are philosophically distasteful.

But what if someone is fine with philosophical zombies and suddenly disappearing qualia? Is there any proof that such things are impossible?

There's an implicit assumption that "qualia" are well-defined things. I think it very plausible that qualia differ depending on sensors, values, and memory. So we may create AI that has something like qualia, but qualia different from ours, just as people with synesthesia have somewhat different qualia.

Brent
