--- Mark Waser <[EMAIL PROTECTED]> wrote:

> OK.  I'm confused.  You said both
> > let's say we don't program beliefs in consciousness or free will . . . .
> > The AGI will look at these concepts rationally.  It will conclude that 
> > they do not exist because human behavior can be explained without their 
> > existence.
>     AND
> > I do believe in consciousness.  It would be impossible for me to believe 
> > otherwise.
> 
> What are you arguing?  That consciousness does not exist (since a rational 
> being will conclude that it does not) but that you can't believe that it 
> doesn't?

The human brain handles conflicts between hardcoded (or emotional) beliefs and
beliefs learned from experience by giving priority to the hardcoded belief. 
But there are different ways of doing this.  A common approach is to resolve
the conflict by finding alternative explanations for the observations.  So
when presented with a computational model of the human mind that requires no
consciousness or free will, we try to find fault with the model.  This is like
the creationist who, when presented with overwhelming evidence for evolution,
attacks Darwin's theory, e.g. by claiming that God created the fossils or by
asking how one could explain the co-evolution of bees and flowers.

My approach is to accept the conflicting evidence and not attempt to resolve
it.  It is like the rational person who has seen a ghost, but who also
believes that ghosts do not exist.  I know at least two such people, and this
conflict does not interfere with other aspects of their lives.
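
To make the contrast concrete, here is a toy sketch in Python (the names and
numbers are hypothetical illustrations, not an actual AGI design): one
resolver gives priority to hardcoded beliefs, as the brain seems to, while
the other simply retains both sides of the conflict, as with the ghost
example.

class Belief:
    def __init__(self, statement, hardcoded=False, confidence=0.5):
        self.statement = statement
        self.hardcoded = hardcoded    # emotional / built-in belief?
        self.confidence = confidence  # strength of a learned belief

def resolve_by_priority(a, b):
    """Human-style resolution: a hardcoded belief always wins; between
    two learned beliefs, the one with higher confidence wins."""
    if a.hardcoded != b.hardcoded:
        return [a] if a.hardcoded else [b]
    return [a] if a.confidence >= b.confidence else [b]

def hold_both(a, b):
    """Accept the conflicting evidence and do not resolve it."""
    return [a, b]

consciousness = Belief("I am conscious", hardcoded=True)
model = Belief("human behavior needs no consciousness", confidence=0.9)

print([x.statement for x in resolve_by_priority(consciousness, model)])
print([x.statement for x in hold_both(consciousness, model)])

Run as-is, the first call keeps only the hardcoded belief; the second keeps
both beliefs side by side.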

> > The answer depends on how your AGI is programmed to resolve conflicting
> > evidence.  On the one hand, humans believe in consciousness and have
> > communicated this to the AGI.  On the other, the AGI observes that it can 
> > do
> > everything a human mind can do, and yet it is running a deterministic 
> > program
> > whose outputs depend only on the program and its inputs.
> 
> I see no conflicting evidence here.  The AI should believe that it is 
> conscious. 

But the AI will see conflicting evidence.  This is important because if its
belief in consciousness comes from human communication rather than from being
hardcoded, it may resolve the conflict differently.  If the AI rejects a
belief in consciousness, it may not function properly, i.e. it may fail to
learn through creative experimentation, which seems to require a sense of free
will.  On the other hand, if it accepts consciousness by rejecting the
computational model of the mind, then it may fail to recursively self-improve
because its model is faulty.


-- Matt Mahoney, [EMAIL PROTECTED]
