> -----Original Message-----
> From: Matt Mahoney via AGI <agi@agi.topicbox.com>
> 
> We could say that everything is conscious. That has the same meaning as
> nothing is conscious. But all we are doing is avoiding defining something
> that is really hard to define. Likewise with free will.


I disagree. Some things are more conscious than others. A thermostat might be
negligibly conscious, unless consciousness has thresholds below which it is
absent entirely.


> We will know we have properly modeled human minds in AGI if it claims to be
> conscious and have free will but is unable to tell you what that means. You
> can train it as follows:
> 
> Positive reinforcement of perception trains belief in quality.
> Positive reinforcement of episodic memory recall trains belief in
> consciousness.
> Positive reinforcement of actions trains belief in free will.


I agree. This training will ultimately produce a p-zombie, which is fine for
many situations.
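
For concreteness, here is a minimal Python sketch of the training scheme Matt
describes. Everything in it (the class name, the behavior labels, the learning
rate, the threshold) is my illustration, not anyone's actual implementation:
positive reward after a behavior simply strengthens the agent's tendency to
assert the associated belief, without that belief ever being defined.

    class ZombieAgent:
        """Toy agent whose 'beliefs' are just trained assertion strengths."""

        # map each reinforced behavior to the belief it trains
        BEHAVIOR_TO_BELIEF = {
            "perception": "quality",
            "episodic_recall": "consciousness",
            "action": "free_will",
        }

        def __init__(self, lr=0.1):
            self.lr = lr
            # strength of each trained assertion, in [0, 1]
            self.beliefs = {b: 0.0 for b in self.BEHAVIOR_TO_BELIEF.values()}

        def reinforce(self, behavior, reward):
            """Positive reward nudges the associated belief toward 1."""
            belief = self.BEHAVIOR_TO_BELIEF[behavior]
            self.beliefs[belief] += self.lr * reward * (1.0 - self.beliefs[belief])

        def claims(self, belief):
            """The agent asserts the belief once it is sufficiently trained."""
            return self.beliefs[belief] > 0.5

    agent = ZombieAgent()
    for _ in range(20):
        agent.reinforce("episodic_recall", reward=1.0)
    print(agent.claims("consciousness"))  # True, yet "consciousness" was never defined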

The problem remains: how do we distinguish a p-zombie from a conscious being?

Solution: protocolize qualia. One reason for a Universal Communication Protocol
(UCP) is that it scales up.
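
To make "protocolize qualia" concrete, here is a hypothetical sketch of what a
UCP qualia report might look like. Nothing here specifies UCP's actual wire
format; every field name below is my assumption, purely for illustration:

    import json
    import time

    def qualia_report(sender_id, modality, intensity, valence, referent):
        """Pack a subjective state into a structured, machine-readable message."""
        return json.dumps({
            "protocol": "UCP/0.1",      # hypothetical version tag
            "type": "qualia_report",
            "sender": sender_id,
            "timestamp": time.time(),
            "modality": modality,       # e.g. "visual", "pain", "mood"
            "intensity": intensity,     # normalized to [0, 1]
            "valence": valence,         # -1 (unpleasant) .. +1 (pleasant)
            "referent": referent,       # what the experience is about
        })

    print(qualia_report("agent-7", "visual", 0.8, 0.6, "red square"))

The point of structuring reports this way is that they can be composed and
compared across many agents, which is what lets the protocol scale up.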

Then you might object that p-zombies could use machine learning to mimic such
protocolized qualia reports in order to deceive. And they can, from past
communications.

But what they cannot do is predict qualia in general, and you should agree with
that, à la Legg's proof: a predictor can only predict computable sequences up
to roughly its own complexity, so a mimic of bounded complexity must eventually
fail on qualia streams more complex than itself.

John
