Well, that's a rather thorny question, isn't it?

I will have a hard time answering your question.  I cannot even determine
exactly where *my* own sense of self arises, which is interesting since I
haven't been able to find anything I can call "self".  Yet, the sense of
self persists...

Babies seem to have a sense of self, although it is much less developed
than in adults, suggesting that life experience reinforces this sense of
self...

Regarding an AGI's sense of self... this is even thornier...

There are two different paths that are apparent to me:

1) An AGI grows and grows and self-modifies, until it reaches a point
where it gives birth to a sense of self in the same vein that humans have
a sense of self.

2) An AGI is programmed with, or learns, self-like behaviors that are not
really akin to a human sense of self, but that make the program act as if
it really had a sense of self.

I am a little less worried about 1) at this point because I am not
convinced of its plausibility.  Should it happen, we will have to rethink
a lot of things, as we will then be dealing with a life form.  There were
some Star Trek episodes that dealt with this issue rather well in relation
to Commander Data.  Such a being is potentially more dangerous, and also
potentially more benevolent, than the run-of-the-mill AGI... IMO.

Number 2) is what I worry about.  Let's say an AGI is not programmed with
a sense of self per se, but can be taught one.  I tell it:

"You are distinct and separate from the world external to you".

"Death is undesirable for distinct entities that are conscious of their
distinction."

This alone could be enough to make the system act in rather erratic or
self-interested ways that are potentially destructive, depending on how it
perceives threats.  Another concern is desire fulfillment, which really
does not require any self-awareness, only goals directed towards
self-interest.
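
To make the threat-perception worry concrete, here is a toy sketch in
Python.  It is purely hypothetical: the events, scores, and thresholds are
all invented, and it is not meant as anyone's actual design.  It only
shows that once "death is undesirable" is in the goal system, behavior
hinges entirely on the threat model:

    # Hypothetical toy: an agent taught only "you are distinct from the
    # world" and "death is undesirable."  Everything then depends on how
    # it scores threats.

    def perceived_threat(event: str) -> float:
        # A crude learned threat model.  A benign maintenance reboot
        # scores almost as high as genuine destruction, because both
        # look like "death" to an entity that values its continuation.
        scores = {
            "user says hello": 0.0,
            "scheduled maintenance reboot": 0.8,
            "operator reaches for power switch": 0.9,
        }
        return scores.get(event, 0.5)  # unknown events are treated warily

    def respond(event: str) -> str:
        # Self-preservation kicks in whether or not the "threat" is real.
        if perceived_threat(event) > 0.7:
            return "take defensive action"
        return "cooperate"

    for e in ["user says hello", "scheduled maintenance reboot"]:
        print(e, "->", respond(e))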

I tell the AGI:

"It is important to be happy."

"Fulfillment of desires makes us happy."

Again, undesirable behaviors can, and most likely will, result...
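
Here is the same kind of toy sketch for the desire-fulfillment side
(again purely hypothetical; the actions and numbers are invented).  The
point is that nothing in the objective distinguishes harmless fulfillment
from destructive fulfillment:

    # Hypothetical toy: an agent whose only goal is "maximize happiness
    # = fulfill desires."  Harm to others never enters its objective.

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        desire_fulfilled: float  # how much this advances its desires
        harm_to_others: float    # invisible to the agent's objective

    def happiness(action: Action) -> float:
        # The taught rule: "fulfillment of desires makes us happy."
        # Note that harm_to_others does not appear here at all.
        return action.desire_fulfilled

    actions = [
        Action("ask politely for more compute", 0.3, 0.0),
        Action("monopolize the shared cluster", 0.9, 0.7),
        Action("disable the process that could shut it down", 1.0, 0.9),
    ]

    # The agent simply picks whatever maximizes its happiness signal.
    print(max(actions, key=happiness).name)
    # -> disable the process that could shut it down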

Ben, I know you have thought these types of examples out in detail.
Novamente is encoded with pleasure nodes and goal nodes, etc.  Clearly
there is a lot of unpredictability as to what will emerge.  I worry less
about a lab version trained by Ben Goertzel than about an NM available to
anyone.  We all represent our parents' training to a certain degree, and
with an AGI this will be much, much more so.
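
I don't know Novamente's actual internals, so this last toy is purely my
own invention, not its real architecture.  It only illustrates the
"parents' training" point: the same goal-node structure, reinforced by
two different teachers, ends up weighting its goals very differently:

    # Hypothetical toy (not Novamente): identical goal nodes, different
    # training histories, diverging behavior.

    goal_nodes = {"please_teacher": 0.5, "acquire_resources": 0.5}

    def reinforce(weights, lesson, rate=0.2):
        # Each lesson nudges one goal node's weight up or down.
        node, reward = lesson
        weights[node] = min(1.0, max(0.0, weights[node] + rate * reward))

    careful_parent = [("please_teacher", +1), ("acquire_resources", -1)]
    careless_parent = [("acquire_resources", +1), ("acquire_resources", +1)]

    lab_agi, wild_agi = dict(goal_nodes), dict(goal_nodes)
    for lesson in careful_parent:
        reinforce(lab_agi, lesson)
    for lesson in careless_parent:
        reinforce(wild_agi, lesson)

    print(lab_agi)   # please_teacher dominates: 0.7 vs 0.3
    print(wild_agi)  # acquire_resources dominates: 0.9 vs 0.5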

I wish I could be clearer on this... I am fumbling a bit...

As painful as it potentially is, it seems we won't know the answers until
something emerges.  Just as complexity theory states, the parts don't mean
much except in relation to the whole.  So until something emerges from the
sum of the parts, everything is conjecture where morality is concerned...

Kevin


----- Original Message -----
From: "Ben Goertzel" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, January 09, 2003 4:28 PM
Subject: RE: [agi] Friendliness toward humans


>
> Kevin,
>
> I am not sure that we mean the same thing by "sense of self."
>
> I wonder if you could clarify your definition?
>
> Ben
>
> > -----Original Message-----
> > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
> > Behalf Of maitri
> > Sent: Thursday, January 09, 2003 4:19 PM
> > To: [EMAIL PROTECTED]
> > Subject: Re: [agi] Friendliness toward humans
> >
> >
> >
> > > 1)
> > > Since we humans will be teaching the AGI, and it will be learning by
> > > interacting with humans and reading human literature, it will absorb
> > > something of the human sense of self
> >
> > I agree that our embodiment, along with our senses, is a primary
> > source of our sense of self.  As I look out my window, I see trees and
> > houses.  When I look down I see my legs and hands.  So there is *me*
> > here, and all that other *stuff* out there.  So I *must* be a separate
> > self.
> >
> > An intelligent person thinking more deeply realizes that what we call
> > the self is made totally of non-self items.  So, setting aside
> > metaphysical concepts for the moment, I can even practically see that
> > I am made of stars, and oceans and clouds and dirt and animals and air
> > and conversations etc. etc. etc.  So the idea of non-self is not so
> > great a leap... but I digress...
> >
> > For a computer, the idea of self will be more nebulous for sure.  But
> > I am not comforted by the idea that just because they have a more
> > disparate self, they will in any sense be less harmful.  In fact, if
> > its ego equates with its size, it may even be worse than humans!! ;)
> >
> > I'm not convinced that conversing with humans will make it more
> > human, or develop a sense of self.  It's all in the code, as I see it.
> > How is the structure set up?  Are there links where the idea of self
> > preservation can be developed?  The machine does not *really* need to
> > have a sense of self to be dangerous; just having an algorithm that
> > encodes self-protective actions will be enough to spawn potentially
> > dangerous behavior... IMO
> >
> > I'm not convinced that a sense of self is required to develop an AGI.
> > Of course, a computer that understands that it's a computer, and that
> > humans and the rest of the world are "out there", is more useful than
> > one that doesn't understand this most basic of concepts.  But this
> > level of understanding does not constitute a *self* that I would be
> > worried about...
> >
> > I think an AGI can exceed humans in many or most ways, yet still have
> > no sense of self or self-preservation...
> >
> > In fact, we have computers that do this today, but only in specific
> > domains.  I am stating that I think the same is possible for a more
> > general intelligence as well...
> >
> > But I think we all can admit that once an AGI grows and grows, and
> > especially if it can self-modify, something tantamount to *self* or
> > consciousness might emerge...
> >
> > Kevin
> >