On 4/29/2021 6:34 AM, Terren Suydam wrote:

On Thu, Apr 29, 2021 at 1:57 AM 'Brent Meeker' via Everything List <everything-list@googlegroups.com> wrote:



    On 4/28/2021 9:42 PM, Terren Suydam wrote:


    On Wed, Apr 28, 2021 at 8:15 PM 'Brent Meeker' via Everything
    List <everything-list@googlegroups.com> wrote:



        On 4/28/2021 4:40 PM, Terren Suydam wrote:

        I agree with everything you said there, but all you're
        saying is that intersubjective reality must be consistent to
        make sense of other people's utterances. OK, but if it
        weren't, we wouldn't be here talking about anything. None of
        this would be possible.

        Which is why it's a fool's errand to say we need to explain
        qualia.  If we can make an AI that responds to the world the
        way we do, that's all there is to saying it has the same qualia.


    I don't think either of those claims follows. We need to explain
    suffering if we hope to make sense of how to treat AIs. If it
    were only about redness I'd agree. But creating entities whose
    existence is akin to being in hell is immoral. And we should know
    if we're doing that.

    John McCarthy wrote a paper in the '50s warning about the
    possibility of accidentally making a conscious AI and unknowingly
    treating it unethically.  But I don't see how suffering differs
    from any other qualia: we can only judge by behavior.  In fact this
    whole thread started with JKC considering AI pain, which he defined
    in terms of behavior.


A theory would give you a way to predict what kinds of beings are capable of feeling pain. We wouldn't have to wait to observe their behavior; we'd say "given theory X, we know that if we create an AI with these characteristics, it will be the kind of entity that is capable of suffering".

Right.  And the theory is that the AI is feeling pain if it is exerting all available effort to change its state.
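
To make that criterion concrete, here is a minimal, hypothetical sketch in Python: an agent counts as showing pain behavior when the effort it is applying to escape its current state is at, or essentially at, its maximum.  The function name, the effort readings, and the 1% tolerance are all invented for illustration; nothing here is from the thread.

# Hypothetical sketch of the criterion above: pain behavior is read off
# from control effort saturating at its limit.
def appears_to_be_in_pain(effort, max_effort, tolerance=0.01):
    """True if the agent is exerting essentially all available effort
    to change its current state."""
    return effort >= (1.0 - tolerance) * max_effort

# e.g. a robot driving its actuators at 99.5% of their limit to get
# away from a stimulus would count as pain behavior on this reading.
print(appears_to_be_in_pain(effort=9.95, max_effort=10.0))   # True
print(appears_to_be_in_pain(effort=4.0, max_effort=10.0))    # False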


    To your second point, I think you're too quick to make an
    equivalence between an AI's responses and its subjective
    experience. You sound like John Clark - the only thing that
    matters is behavior.

    Behavior includes reports. What else would you suggest we go on?


Again, with a theory of consciousness that explains how qualia come to be within a system, you could make claims about that system's experience that go beyond observing behavior. I know John Clark's head just exploded, but that's the point of having a theory of consciousness.

Of course you can have such a theory.  But the question is how you can have evidence for or against it.  How can it be anything but speculation?

Brent
