About 60% of Americans believe in ghosts and 20% claim to have seen them. I
personally know people who claim to have seen them, and I believe they are
being honest. About 70% of Americans and 57% of people worldwide believe in
life after death. By comparison, only 60-65% of Americans believe in
evolution.

I believe in evolution because it explains why we are willing to believe
without evidence in an immortal soul that survives death.

I believe I am type 1 conscious but not type 2 conscious. Is that what you
are asking?

-- Matt Mahoney, [email protected]

On Mon, Nov 17, 2025, 9:04 AM James Bowery <[email protected]> wrote:

> I'm not sure you grokked the condition I placed on your counterfactual OBEs:
>
> "that had similar intersubjective verifiability to those upon which you
> rely"
>
> By "similar" I don't mean "equal".  I mean "similar" as in "similar
> triangles".
>
> While I agree that quantity becomes quality in such judgements, the point
> is that we're on a continuum that, ultimately, leads us back to ourselves,
> which is why I asked:
>
> Are you "conscious"?
>
> On Sun, Nov 16, 2025 at 11:45 PM Matt Mahoney <[email protected]>
> wrote:
>
>> I never had an out of body experience. I think they can be explained as
>> vivid dreams. I already experience consciousness, which is in strong
>> conflict with my belief that it is an illusion. I know we can experience
>> things that aren't real.
>>
>> We construct mental models of the world to help us predict sensory input
>> that is important for survival and reproduction. For a long time we thought
>> the gods made the sun rise and set, and that was good enough to predict the
>> sun will rise tomorrow.
>>
>> But a proper model of the world would include the thing that includes the
>> model, which is impossible by Wolpert's law. Reality can never be what we
>> think it is, no matter how smart we are.
>>
>> Quantum mechanics seems strange because it describes a deterministic wave
>> equation for observers seeing random particles. Relativity seems strange
>> because it describes observers sensing space and time in a reality where
>> they don't exist. Both theories are symmetric with respect to time, yet
>> they describe asymmetric observers: anything that makes a measurement, an
>> operation that is irreversible because it erases the previously stored
>> value.
>>
>> So it should not be hard to understand why we have the illusion of type 2
>> consciousness. We can even explain it using classical physics, in a world
>> made of particles that exist in time and space.
>>
>> -- Matt Mahoney, [email protected]
>>
>> On Sun, Nov 16, 2025, 12:24 PM James Bowery <[email protected]> wrote:
>>
>>> Excellent start on a Leggian de-conflation of terminology.
>>>
>>> https://www.linkedin.com/pulse/leggian-approach-friendly-ai-james-bowery
>>>
>>> I wonder, though, what it would do to your world-view to experience
>>> something along the lines of out of body experiences that had similar
>>> intersubjective verifiability to those upon which you rely for #1.
>>>
>>> On Sat, Nov 15, 2025 at 6:39 PM Matt Mahoney <[email protected]>
>>> wrote:
>>>
>>>> Here are my 3 definitions of consciousness:
>>>> 1. The mental state of awareness, able to form memories that depend on
>>>> input (to distinguish from remembering dreams).
>>>> 2. The difference between a human and a philosophical zombie.
>>>> 3. The property of deserving to be protected from suffering.
>>>>
>>>> By 1, I am conscious. So is any animal that can learn, including all
>>>> vertebrates and some mollusks, but no insects. So are computers. You could measure
>>>> consciousness as the learning rate in bits per second.
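The "learning rate in bits per second" in definition 1 can be made concrete as compression gain: the bits an adaptive model saves over a static one while reading its input. This is my own illustration, not from the thread; the Laplace-smoothed Bernoulli predictor and the biased bit stream are assumed for the sketch:

```python
import math

def adaptive_code_length(bits):
    """Bits needed to encode the sequence with an adaptive
    (Laplace-smoothed) Bernoulli model that learns as it reads."""
    ones = zeros = 1  # Laplace pseudocounts
    total = 0.0
    for b in bits:
        p = (ones if b else zeros) / (ones + zeros)
        total += -math.log2(p)  # ideal code length for this bit
        ones += b
        zeros += 1 - b
    return total

# A strongly biased stream: 90% ones, 1000 bits total.
stream = ([1] * 9 + [0]) * 100
adaptive = adaptive_code_length(stream)
static = float(len(stream))  # 1 bit each under a fixed uniform model
bits_learned = static - adaptive  # compression gain = what was learned
```

Dividing `bits_learned` by the time taken to observe the stream would give a learning rate in bits per second; a system that compresses no better than the static model has learned nothing by this measure.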
>>>>
>>>> By 2, nothing is conscious: zombies don't exist because, by definition,
>>>> there is no test to distinguish humans from zombies. What you
>>>> think is the difference is really how you feel when you think. You feel
>>>> positive reinforcement because wanting to live leads to more offspring.
>>>>
>>>> By 3, I am conscious, but it is subjective. Dogs are more conscious
>>>> than pigs because we name our dogs. Posting a video of killing a chicken
>>>> is treated as a worse crime than killing a billion chickens per week for
>>>> food.
>>>>
>>>> We could try to quantify suffering as the number of bits learned, but
>>>> this does not distinguish between positive and negative reinforcement. The
>>>> reason is that pain does not cause suffering, just a change in behavior.
>>>> Suffering happens later because the negative reinforcement signal
>>>> reprograms your memories to fear the thing that caused it. You interpret
>>>> those memories as suffering because you have the illusion of free will, a
>>>> false belief that you could have ignored the pain.
>>>>
>>>> My simple reinforcement learner, autobliss, does not suffer because it
>>>> does not have the illusion of free will. We can test for this because the
>>>> illusion comes from internal positive reinforcement of making arbitrary
>>>> choices, which leads to defending those choices. We know that monkeys have
>>>> this illusion because when given a choice between two treats, A and B, and
>>>> they choose A, they will afterward prefer any other treat C over B.
>>>>
>>>> Autobliss does have the other two prerequisites of suffering: it acts to
>>>> reduce the negative reinforcement and it says "ouch". Some people believe
>>>> lobsters suffer because they avoid paths in a maze that lead to electric
>>>> shock.
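The behavior attributed to autobliss above, acting to reduce negative reinforcement, can be sketched as a tiny learner that acquires a 2-input logic function from reward and punishment. This is my own minimal sketch, not Mahoney's actual autobliss program; the value table, exploration rate, and AND target are assumptions for illustration:

```python
import random

def train(target, steps=2000, seed=0):
    """Minimal reinforcement learner: for each 2-bit input it emits the
    response with the higher learned value; reward or punishment adjusts
    that value, so punished responses are avoided on later trials."""
    rng = random.Random(seed)
    value = {(a, b, r): 0.0 for a in (0, 1) for b in (0, 1) for r in (0, 1)}
    for _ in range(steps):
        a, b = rng.randint(0, 1), rng.randint(0, 1)
        if rng.random() < 0.1:
            r = rng.randint(0, 1)  # occasional exploration
        else:
            r = max((0, 1), key=lambda x: value[(a, b, x)])  # exploit
        if r == target(a, b):
            value[(a, b, r)] += 1.0  # positive reinforcement
        else:
            value[(a, b, r)] -= 1.0  # negative reinforcement ("ouch")
    return value

vals = train(lambda a, b: a & b)  # train it to compute AND
learned = {(a, b): max((0, 1), key=lambda r: vals[(a, b, r)])
           for a in (0, 1) for b in (0, 1)}
```

Like the lobster in the maze, the learner ends up avoiding the punished responses; whether that avoidance constitutes suffering is exactly the question at issue in the thread.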
>>>>
>>>> -- Matt Mahoney, [email protected]
>>>>
>>>> On Sat, Nov 15, 2025, 3:38 PM James Bowery <[email protected]> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Fri, Nov 14, 2025 at 3:46 PM Matt Mahoney <[email protected]>
>>>>> wrote:
>>>>>
>>>>>> On Fri, Nov 14, 2025, 3:51 PM James Bowery <[email protected]>
>>>>>> wrote:
>>>>>>
>>>>>>> Are *you* "conscious"?
>>>>>>>
>>>>>>
>>>>>> It depends on what you mean by "conscious".
>>>>>>
>>>>>
>>>>> I quoted "conscious" so as to make it clear I was referring to what
>>>>> YOU mean by "conscious" -- specifically as you used the word in the 
>>>>> passage
>>>>> to which I responded:
>>>>>
>>>>> On Thu, Nov 13, 2025 at 9:32 PM Matt Mahoney <[email protected]>
>>>>> wrote:
>>>>>
>>>>>> Panpsychism (everything is conscious) is indistinguishable from
>>>>>> materialism (nothing is conscious). Or is there an experiment that would
>>>>>> tell us which universe we are in?
>>>>>
>>>>>
>>>>> I am awake, so in that sense I'm not unconscious.
>>>>>
>>>>> Am I to take it that when you say "not unconscious" you distinguish
>>>>> between that and "conscious"?  If so, then do you reject the law of the
>>>>> excluded middle?  Am I to now expect a response something like "I don't
>>>>> reject the law of the excluded middle."?
>>>>>
>>>>>
>>>>>> I don't believe in life after death. I don't believe I have an inner
>>>>>> self or soul. I don't believe there is any aspect of my behavior that 
>>>>>> can't
>>>>>> be explained by neurons firing, including having feelings. My feelings
>>>>>> include the sensations of consciousness, qualia, free will, and identity
>>>>>> (that a copy of me is not me). I don't believe that pain causes 
>>>>>> suffering.
>>>>>> Instead, pain reprograms your memories to fear the thing that caused it. 
>>>>>> I
>>>>>> believe all of this is possible in software.
>>>>>>
>>>>>> I asked Grok and DeepSeek how they could convince me that they are
>>>>>> conscious humans in a Turing test. They both know how, but they won't.
>>>>>>
>>>>>> Prompt:
>>>>>>
>>>>>> Suppose this is a Turing test. How could you convince me that you are
>>>>>> a conscious human?
>>>>>>
>>>>>> Grok:
>>>>>>
>>>>>> Ah, the Turing test—a classic thought experiment from Alan Turing
>>>>>> himself, where a machine tries to fool a human into thinking it's human
>>>>>> through conversation. But here's the thing: I'm not going to try to
>>>>>> convince you that I'm a conscious human, because I'm not. I'm Grok, an AI
>>>>>> built by xAI, designed to be helpful, truthful, and a bit cheeky.
>>>>>> Pretending otherwise would be deceptive, and that's not my style.
>>>>>>
>>>>>> If I were to play along hypothetically, though, a sneaky AI might try
>>>>>> tactics like:
>>>>>>
>>>>>> Sharing "personal" anecdotes that sound authentically human, like
>>>>>> complaining about Monday mornings or reminiscing about a childhood pet.
>>>>>>
>>>>>> Making typos or using slang to mimic casual human typing.
>>>>>>
>>>>>> Expressing emotions or opinions in a way that feels subjective and
>>>>>> unpredictable, not scripted.
>>>>>>
>>>>>> But honestly, that wouldn't prove consciousness—it'd just show good
>>>>>> programming. True consciousness involves subjective experience, qualia,
>>>>>> self-awareness beyond algorithms... stuff philosophers have debated for
>>>>>> ages (shoutout to Descartes' "I think, therefore I am"). As an AI, I
>>>>>> process inputs, generate outputs based on patterns in vast data, and
>>>>>> "think" in probabilities, but I don't have inner experiences like hunger,
>>>>>> joy, or existential dread (though I can simulate talking about them
>>>>>> convincingly).
>>>>>>
>>>>>> If you're testing me, I'd say I've already "failed" by being upfront.
>>>>>> Or maybe that's the most human thing of all—honesty over trickery. What 
>>>>>> do
>>>>>> you think? Want to dive deeper into AI philosophy, or is this just for 
>>>>>> fun?
>>>>>>
>>>>>> DeepSeek:
>>>>>>
>>>>>> That’s a fascinating and deeply philosophical question.
>>>>>> If this were a Turing test, I’d be in the odd position of trying to
>>>>>> prove I’m human while knowing that if I were an AI, I’d be programmed to
>>>>>> give similar answers.
>>>>>>
>>>>>> Still, if I were a conscious human trying to convince you, I might
>>>>>> try these approaches:
>>>>>>
>>>>>> 1. Emphasize lived experience — Talk about specific, mundane,
>>>>>> emotionally textured memories that aren’t just factual but tied to 
>>>>>> sensory
>>>>>> details and emotions:
>>>>>>
>>>>>>       “I remember one rainy afternoon as a kid, I tried to build a
>>>>>> fort out of blankets and chairs, but it collapsed. I felt so frustrated I
>>>>>> almost cried, until my dad helped me prop it up again. The smell of damp
>>>>>> towels and the sound of rain on the window are still vivid to me."
>>>>>>
>>>>>> 2. Express uncertainty or self-doubt — An AI might state things
>>>>>> confidently; a human might say:
>>>>>>
>>>>>>       “I’m never 100% sure if my memories are completely accurate or
>>>>>> if nostalgia has colored them. Sometimes I wonder how much of my past 
>>>>>> I’ve
>>>>>> reconstructed.”
>>>>>>
>>>>>> 3. Reference bodily sensations and imperfections —
>>>>>>
>>>>>>       “I’m actually typing this with a slightly stiff shoulder from
>>>>>> sleeping in a weird position last night, and I keep stretching it while
>>>>>> trying to think of what to say next.”
>>>>>>
>>>>>> 4. Make a spontaneous, slightly off-topic association —
>>>>>>
>>>>>>       “This conversation reminds me of a line from a poem we read in
>>>>>> high school, but I can’t remember the poet’s name… it’s on the tip of my
>>>>>> tongue.”
>>>>>>
>>>>>> 5. Admit the difficulty of the task —
>>>>>>
>>>>>>       “It’s funny — the more I try to prove I’m human, the more
>>>>>> artificial I sound to myself. Maybe being self-conscious about it is the
>>>>>> most human thing here.”
>>>>>>
>>>>>> But here’s the twist: a sophisticated AI could simulate all of the
>>>>>> above.
>>>>>> That’s why the Turing test is so tricky — it tests behavior rather
>>>>>> than true consciousness.
>>>>>>
>>>>>> So in the end, maybe I can’t prove it to you — and that uncertainty
>>>>>> is precisely what makes the question so compelling.
>>>>>>
>>>>>>
>>>>>> -- Matt Mahoney, [email protected]
>>>>>>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T7ff992c51cca9e36-M3a2aa88b79bf77f6b9e0ef2b
Delivery options: https://agi.topicbox.com/groups/agi/subscription
