--- Mark Waser <[EMAIL PROTECTED]> wrote:

> > But programming a belief in consciousness or free will seems to be a hard
> > problem that has no practical benefit anyway.  It seems to be easier to
> > build machines without them.  We do it all the time.
> 
> But we aren't programming AGI all the time.  And you shouldn't be
> hard-coding beliefs in your AGI anyway.  And I guarantee you that your AGI
> will run across the concepts of consciousness or free will -- and then have
> to evaluate their truth-value -- and then have to evaluate whether they
> apply to itself (unless, of course, you attempt to preempt all this with
> some other hard-coding -- which would look like a kludge to me).

We have to hard-code the top-level goals into an AGI (to serve humans, for
example).

But let's say we don't program beliefs in consciousness or free will into the
AGI (not that we should).  The AGI will look at these concepts rationally.  It
will conclude that they do not exist, because human behavior can be explained
without them.  It will recognize that the human belief in a "little person"
inside the brain that observes the world and makes decisions is illogical, but
that the belief persists because it confers a survival advantage.  Humans who
don't believe they have any control over their environment become depressed
and die.  The AGI will be aware of the animal research on learned helplessness
that proves this.

> But Google doesn't learn or understand.  This is strawman stupidity that I'm
> unwilling to get diverted by.

Google clearly does learn in the machine learning sense (for example,
personalized search).  Whether it "understands" depends on your definition. 
Is there any important distinction between "understand" and "appear to
understand"?
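To be concrete, here is a toy sketch (Python, purely illustrative -- it is not
how Google's system actually works, and the names are made up) of what
"learning in the machine learning sense" can mean for personalized search: a
per-user click history that biases future result ordering.

    # Toy illustration: learning as personalized re-ranking.
    # Past clicks by a user boost the ranking of domains they clicked.
    from collections import defaultdict

    class PersonalizedRanker:
        def __init__(self):
            # clicks[user][domain] = number of past clicks
            self.clicks = defaultdict(lambda: defaultdict(int))

        def record_click(self, user, domain):
            self.clicks[user][domain] += 1

        def rank(self, user, results):
            # results: list of (domain, base_relevance) pairs.
            # Add the user's click count for each domain to its base score.
            return sorted(results,
                          key=lambda r: r[1] + self.clicks[user][r[0]],
                          reverse=True)

    ranker = PersonalizedRanker()
    ranker.record_click("alice", "arxiv.org")
    print(ranker.rank("alice", [("example.com", 1.0), ("arxiv.org", 0.9)]))
    # arxiv.org now ranks first for alice despite a lower base relevance.

A system that updates its behavior from data in this way "learns" by the usual
definition; whether it also "understands" is the question at issue.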

> > But you haven't answered my question.  How do you test if a machine is
> > conscious, and is therefore (1) dangerous, and (2) deserving of human 
> > rights?
> 
> As I said before, if it looks conscious, it is assumed to be conscious.  If 
> it looks sufficiently capable, it's assumed dangerous (to the extent that a 
> random human being is dangerous).  If it's conscious and can pass a 
> citizenship test, it's deserving of human rights.

I think that to most people, an AGI would appear to be conscious to the extent
that it appeared to be human.  But this is an emotional test, not a logical
one, and it hardly settles the issue.  Even without AGI, people cannot agree on
which animals deserve rights, or at what age an embryo deserves rights.


-- Matt Mahoney, [EMAIL PROTECTED]
