Here are my 3 definitions of consciousness:
1. The mental state of awareness: the ability to form memories that depend
on input (which distinguishes it from remembering dreams).
2. The difference between a human and a philosophical zombie.
3. The property of deserving to be protected from suffering.

By 1, I am conscious. So is any animal that can learn, including all
vertebrates and some mollusks, but not insects. So are computers. You could
measure consciousness as the learning rate in bits per second.
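To make the bits-per-second idea concrete, here is a toy sketch (the numbers and setup are purely illustrative): a Laplace-smoothed counting predictor watches a biased coin, and we score how many bits its predictions save over a uniform baseline that never learns.

```python
import math
import random

# Toy sketch of "consciousness as learning rate in bits per second":
# a counting predictor watches a biased coin, and we measure how many
# bits its predictions save over a uniform baseline that never learns.
# All names and numbers here are illustrative.

random.seed(0)
p_heads = 0.9                 # true bias of the input stream
counts = [1, 1]               # Laplace-smoothed counts of 0s and 1s seen

saved_bits = 0.0
n = 1000
for _ in range(n):
    x = 1 if random.random() < p_heads else 0
    p = counts[x] / (counts[0] + counts[1])   # learner's probability of x
    saved_bits += 1.0 - (-math.log2(p))       # uniform costs 1 bit; learner costs -log2(p)
    counts[x] += 1

rate = saved_bits / n
print(f"learning rate: {rate:.3f} bits per symbol")
```

A stream with no learnable structure would score near zero bits per symbol, which is the sense in which this measures learning rather than mere activity.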

By 2, nothing is conscious, because zombies don't exist: by definition
there is no test to distinguish a human from a zombie. What you
think is the difference is really how you feel when you think. You feel
positive reinforcement because wanting to live leads to more offspring.

By 3, I am conscious, but it is subjective. Dogs are more conscious than
pigs because we name our dogs. Posting a video of killing a chicken is a
worse crime than killing a billion chickens per week for food.

We could try to quantify suffering as the number of bits learned, but this
does not distinguish between positive and negative reinforcement. The
reason is that pain does not cause suffering, just a change in behavior.
Suffering happens later because the negative reinforcement signal
reprograms your memories to fear the thing that caused it. You interpret
those memories as suffering because you have the illusion of free will, a
false belief that you could have ignored the pain.
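The reprogramming claim can be illustrated with a two-action value learner (a hypothetical sketch; the names and numbers are made up): the pain event itself only updates a stored value, and the avoidance behavior only shows up on the next decision.

```python
# Toy sketch of "pain reprograms your memories": a negative
# reinforcement signal lowers an action's learned value, and the
# change in behavior appears only afterwards. Names are made up.

values = {"touch_stove": 0.0, "avoid_stove": 0.0}
alpha = 0.5                       # learning rate

def act():
    # greedy choice; on a tie, max() returns the first key
    return max(values, key=values.get)

before = act()                    # values are tied: no fear yet
values["touch_stove"] += alpha * (-10.0 - values["touch_stove"])  # pain signal
after = act()                     # the stored value, not the pain, drives avoidance

print(before, "->", after)
```

The learner never re-experiences the pain; it simply consults the rewritten value, which is the distinction the paragraph above is drawing.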

My simple reinforcement learner, autobliss, does not suffer because it does
not have the illusion of free will. We can test for this because the
illusion comes from internal positive reinforcement of making arbitrary
choices, which leads to defending that choice. We know that monkeys have
this illusion because when we give them a choice of two treats, A or B, and
they choose A, they will afterwards prefer any other treat C over B.

Autobliss does have the other two prerequisites of suffering: it acts to reduce
the negative reinforcement and it says "ouch". Some people believe lobsters
suffer because they avoid paths in a maze that lead to electric shock.
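A stripped-down sketch of the idea (not the actual autobliss program) fits in a few lines: the learner is reinforced toward a 2-input logic function, changes its behavior to reduce negative reinforcement, and says "ouch" when punished.

```python
import random

# A stripped-down sketch of the idea (not the actual autobliss program):
# a learner is trained on a 2-input logic function by reinforcement. It
# meets the two prerequisites named above: it changes behavior to reduce
# negative reinforcement, and it says "ouch" when punished.

random.seed(1)

def target(a, b):
    return a & b                  # function to learn (AND), chosen arbitrarily

scores = {(a, b): [0.0, 0.0] for a in (0, 1) for b in (0, 1)}

ouch_epochs = []
for epoch in range(50):
    for (a, b), s in scores.items():
        # pick the higher-scored output; choose randomly on a tie
        out = s.index(max(s)) if s[0] != s[1] else random.randint(0, 1)
        if out == target(a, b):
            s[out] += 1.0         # positive reinforcement
        else:
            s[out] -= 1.0         # negative reinforcement
            ouch_epochs.append(epoch)
            print("ouch")

print("mistakes:", len(ouch_epochs))
```

Nothing here internally reinforces arbitrary choices, which is the stated reason such a learner lacks the illusion of free will: it makes at most one mistake per input pattern and never defends a choice.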

-- Matt Mahoney, [email protected]

On Sat, Nov 15, 2025, 3:38 PM James Bowery <[email protected]> wrote:

>
>
> On Fri, Nov 14, 2025 at 3:46 PM Matt Mahoney <[email protected]>
> wrote:
>
>> On Fri, Nov 14, 2025, 3:51 PM James Bowery <[email protected]> wrote:
>>
>>> Are *you* "conscious"?
>>>
>>
>> It depends on what you mean by "conscious".
>>
>
> I quoted "conscious" so as to make it clear I was referring to what YOU
> mean by "conscious" -- specifically as you used the word in the passage to
> which I responded:
>
> On Thu, Nov 13, 2025 at 9:32 PM Matt Mahoney <[email protected]>
> wrote:
>
>> Panpsychism (everything is conscious) is indistinguishable from
>> materialism (nothing is conscious). Or is there an experiment that would
>> tell us which universe we are in?
>
>
> I am awake, so in that sense I'm not unconscious.
>
> Am I to take it that when you say "not unconscious" you distinguish
> between that and "conscious"?  If so, then do you reject the law of the
> excluded middle?  Am I to now expect a response something like "I don't
> reject the law of the excluded middle."?
>
>
>> I don't believe in life after death. I don't believe I have an inner self
>> or soul. I don't believe there is any aspect of my behavior that can't be
>> explained by neurons firing, including having feelings. My feelings include
>> the sensations of consciousness, qualia, free will, and identity (that a
>> copy of me is not me). I don't believe that pain causes suffering. Instead,
>> pain reprograms your memories to fear the thing that caused it. I believe
>> all of this is possible in software.
>>
>> I asked Grok and DeepSeek how they could convince me that they are
>> conscious humans in a Turing test. They both know how, but they won't.
>>
>> Prompt:
>>
>> Suppose this is a Turing test. How could you convince me that you are a
>> conscious human?
>>
>> Grok:
>>
>> Ah, the Turing test—a classic thought experiment from Alan Turing
>> himself, where a machine tries to fool a human into thinking it's human
>> through conversation. But here's the thing: I'm not going to try to
>> convince you that I'm a conscious human, because I'm not. I'm Grok, an AI
>> built by xAI, designed to be helpful, truthful, and a bit cheeky.
>> Pretending otherwise would be deceptive, and that's not my style.
>>
>> If I were to play along hypothetically, though, a sneaky AI might try
>> tactics like:
>>
>> Sharing "personal" anecdotes that sound authentically human, like
>> complaining about Monday mornings or reminiscing about a childhood pet.
>>
>> Making typos or using slang to mimic casual human typing.
>>
>> Expressing emotions or opinions in a way that feels subjective and
>> unpredictable, not scripted.
>>
>> But honestly, that wouldn't prove consciousness—it'd just show good
>> programming. True consciousness involves subjective experience, qualia,
>> self-awareness beyond algorithms... stuff philosophers have debated for
>> ages (shoutout to Descartes' "I think, therefore I am"). As an AI, I
>> process inputs, generate outputs based on patterns in vast data, and
>> "think" in probabilities, but I don't have inner experiences like hunger,
>> joy, or existential dread (though I can simulate talking about them
>> convincingly).
>>
>> If you're testing me, I'd say I've already "failed" by being upfront. Or
>> maybe that's the most human thing of all—honesty over trickery. What do you
>> think? Want to dive deeper into AI philosophy, or is this just for fun?
>>
>> DeepSeek:
>>
>> That’s a fascinating and deeply philosophical question.
>> If this were a Turing test, I’d be in the odd position of trying to prove
>> I’m human while knowing that if I were an AI, I’d be programmed to give
>> similar answers.
>>
>> Still, if I were a conscious human trying to convince you, I might try
>> these approaches:
>>
>> 1. Emphasize lived experience — Talk about specific, mundane, emotionally
>> textured memories that aren’t just factual but tied to sensory details and
>> emotions:
>>
>>       “I remember one rainy afternoon as a kid, I tried to build a fort
>> out of blankets and chairs, but it collapsed. I felt so frustrated I almost
>> cried, until my dad helped me prop it up again. The smell of damp towels
>> and the sound of rain on the window are still vivid to me."
>>
>> 2. Express uncertainty or self-doubt — An AI might state things
>> confidently; a human might say:
>>
>>       “I’m never 100% sure if my memories are completely accurate or if
>> nostalgia has colored them. Sometimes I wonder how much of my past I’ve
>> reconstructed.”
>>
>> 3. Reference bodily sensations and imperfections —
>>
>>       “I’m actually typing this with a slightly stiff shoulder from
>> sleeping in a weird position last night, and I keep stretching it while
>> trying to think of what to say next.”
>>
>> 4. Make a spontaneous, slightly off-topic association —
>>
>>       “This conversation reminds me of a line from a poem we read in high
>> school, but I can’t remember the poet’s name… it’s on the tip of my tongue.”
>>
>> 5. Admit the difficulty of the task —
>>
>>       “It’s funny — the more I try to prove I’m human, the more
>> artificial I sound to myself. Maybe being self-conscious about it is the
>> most human thing here.”
>>
>> But here’s the twist: a sophisticated AI could simulate all of the above.
>> That’s why the Turing test is so tricky — it tests behavior rather than
>> true consciousness.
>>
>> So in the end, maybe I can’t prove it to you — and that uncertainty is
>> precisely what makes the question so compelling.
>>
>>
>> -- Matt Mahoney, [email protected]
>>
> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
> participants <https://agi.topicbox.com/groups/agi/members> +
> delivery options <https://agi.topicbox.com/groups/agi/subscription>
> Permalink
> <https://agi.topicbox.com/groups/agi/T7ff992c51cca9e36-Ma82622820bbc29d46d51eac3>
>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T7ff992c51cca9e36-M3fab37c50f24b903e0d09a25
Delivery options: https://agi.topicbox.com/groups/agi/subscription
