> Consider an analogy.  In human culture, there is a rigid distinction
> between "man" and "woman."  This makes sense, because there are very few
> intermediate cases.  True hermaphroditism is about one in a million; and
> "ambiguous genitalia" are seen in maybe one in 10,000 - 100,000....  On
> the other hand, imagine a society in which hermaphroditism and ambiguous
> genitalia were the rule, so that there was a commonly experienced
> continuum between men and women.  In that case, the man/woman distinction
> would no longer be nearly so interesting.  Minds would stop thinking in
> terms of that rigid distinction.
>
> However, even though the distinction between man and woman is real, we
> still overemphasize it.  We make the distinction too rigid, and have
> trouble dealing with transvestites, transsexuals and the like.  We don't
> want to admit the existence of phenomena that break the rigid man/woman
> distinction we've created.  Even though this distinction is a reification
> our minds have created, a pattern in the world, we treat it sometimes
> implicitly as though it were an absolute.

You choose an interesting example, in that some humans carry their sense
of self to such a degree that they view other humans with qualities
different from their own as inferior, or sometimes even disposable.  I
could see an AGI forming a similar opinion.  In its case, the distinction
might be silicon-based life versus biological.  Reminds me of the Matrix,
when the agent told Morpheus that he couldn't stand the smell of humans
and that humans were a virus on the earth.  Let's hope this wasn't
prescient...

> Now, an AGI program may well not be embodied in the same sense as we are.
> Even if it has control over robot bodies, the same AGI could have control
> over MANY robot bodies.  It could also get direct sensory input from
> robots controlled by other AGI's, from weather satellites, from medical
> sensors in use in hospitals or in free-ranging individuals, from all over
> the Internet, etc.  It will have more different ways of affecting the
> world than we have, too -- minorly tweaking parameters of a satellite
> here, sending an e-mail there, etc. etc.

While its "self" may be broader in definition than ours, it would still
regard biological entities as non-self, creating potential problems...

>
> For such a distributedly-embodied AGI program, the self/nonself
> distinction will objectively not be as rigid as for a human.  The
> distinction between "that which I directly sense and control" and its
> opposite is far less rigid.  There is more of a continuum between the
> extremes of "directly sensible/controllable" and "fully external to me."
>
I understand, but I don't agree that having many bodies lends itself to
lesser degrees of egoism and the inherent dangers therein.  In fact, it
may be argued that the AGI will view its pervasive presence as even
further indication of its superiority to us, leading to even worse
behavior.  Only strong identification with non-self entities, or the lack
of a strong sense of a distinctive self, can help ensure altruistic
behavior.
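
To make that concrete, here is a minimal toy sketch in Python (my own
illustration, not anything from Novamente or any real AGI design): an
agent whose utility blends its own welfare with others' welfare, weighted
by an "identification" coefficient.  With zero identification, an action
that helps the agent while harming others still looks attractive; as
identification rises, the same action is rejected.

    # Toy model: utility = own welfare + identification * others' welfare.
    # "identification" is in [0, 1]: 0 = pure egoist,
    # 1 = values others exactly as it values itself.
    def utility(own_welfare, others_welfare, identification):
        return own_welfare + identification * others_welfare

    # An action that gains the agent +10 but costs others -50:
    for ident in (0.0, 0.5, 1.0):
        print(ident, utility(10, -50, ident))
    # 0.0 ->  10.0  (attractive to a pure egoist)
    # 0.5 -> -15.0  (rejected once others count for half)
    # 1.0 -> -40.0  (clearly rejected under full identification)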

> So, I'd expect that in an AGI of that nature, a completely different
> psychology would develop, in which a rigid heuristic distinction between
> "self" and "nonself" would not play a role, but would be replaced with
> different concepts.  The nature of this psychology is something I'll be
> thinking about more, in spare moments, over the next few weeks...

Perhaps...but should it become "conscious" in the sense that humans are,
it would follow the definitions that have been set down (in my belief
system, which follows Eastern thought, consciousness is viewed as having
levels, starting from the sense consciousnesses and becoming more subtle
down to the store consciousness).  But while it's pre-sentient, it may
display some unknown type of apparent psychological behavior.  This is
all IMO...

>
> I don't believe that eliminating the rigid self/nonself distinction from
> an entity's psychology will eliminate all evil from that entity.  I
> think that is an oversimplification.  But it's certainly an interesting
> perspective to think about further....

Yeah...I was giving a very simplistic idea, which applies well to humans.
But AGIs are a complete unknown, and it sometimes feels like fitting a
square peg into a round hole when applying human processes to them.
>
> I also wonder whether a totally different psychology will possibly lead to
> new types of evil that humans can't anticipate due to our unfamiliarity.
> Maybe most human evil is rooted in the self/nonself distinction, but AGI
> evil will be rooted in other sorts of psychodynamics...  ???

Humans who are capable of great evil often show damage to parts of the
frontal lobe.  This damage leads to a lack of positive emotions and a
feeling of dissociation from others.  Should an AGI be just as coldly
calculating and detached...look out!!!

Kevin

>
> -- Ben Goertzel
>
> > -----Original Message-----
> > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
> > Behalf Of maitri
> > Sent: Friday, January 10, 2003 9:13 AM
> > To: [EMAIL PROTECTED]
> > Subject: Re: [agi] Friendliness toward humans
> >
> >
> > Well, that's a rather thorny question, isn't it?
> >
> > I will have a hard time answering your question.  I cannot even
> > determine exactly where *my* own sense of self arises, which is
> > interesting since I haven't been able to find anything I can call
> > "self".  Yet, the sense of self persists...
> >
> > Babies seem to have a sense of self, although it is much less present
> > than in adults, suggesting that life experience reinforces this sense
> > of self...
> >
> > Regarding an AGI's sense of self...this is even thornier...
> >
> > There are two different paths which are apparent to me...
> >
> > 1) An AGI grows and grows and self-modifies, until it reaches a point
> > where it will give birth to a sense of self in the same vein that
> > humans have a sense of self.
> >
> > 2) An AGI is programmed with, or learns, self-like behaviors, which
> > are not really akin to a human sense of self, but make the program act
> > as if it really had a sense of self.
> >
> > I am a little less worried about 1 at this point because I am not
> > convinced of its plausibility.  Should it happen, we will have to
> > rethink a lot of things, as we will now be dealing with a life form.
> > There were some Star Trek episodes that dealt with this issue rather
> > well in relation to Commander Data.  Such a being is potentially more
> > dangerous, and also potentially more benevolent, than the
> > run-of-the-mill AGI... IMO
> >
> > Number 2 is what I worry about.  Let's say an AGI is not programmed
> > with a sense of self per se, but can be taught.  I tell it:
> >
> > "You are distinct and separate from the world external to you."
> >
> > "Death is undesirable for distinct entities that are conscious of
> > their distinction."
> >
> > This alone could be enough to make the system act in rather erratic or
> > self-interested ways that are potentially destructive, depending on
> > how it perceives threats.  Another area of concern is desire
> > fulfillment, which really does not require any self-awareness, only
> > goals directed towards self-interest.
> >
> > I tell the AGI:
> >
> > "It is important to be happy."
> >
> > "Fulfillment of desires makes us happy."
> >
> > Again, undesirable behaviors can and most likely will result...
> >
> > Ben, I know you have thought these types of examples out in detail.
> > Novamente is encoded with pleasure nodes and goal nodes, etc.  Clearly
> > there is a lot of unpredictability as to what will emerge.  I worry
> > less about a lab version trained by Ben Goertzel than an NM available
> > to anyone.  We all represent our parents' training to a certain
> > degree, and with an AGI this will be much, much more so.
> >
> > I wish I could be more clear on this...I am fumbling a bit...
> >
> > As painful as it potentially is, it seems we won't know the answers
> > until something emerges.  Just as complexity theory states...the parts
> > don't mean much except in relation to the whole.  So until something
> > emerges from the sum of the parts, everything is conjecture in
> > relation to morality...
> >
> > Kevin