"My point was that an AGI that was very rapidly
undergoing a series of profound changes might never
develop a stable self-model at all"

Exactly. 

"a human hooked into the Net with VR technology and
able to sense and act remotely via sensors and
actuators all over the world, might also develop a
different flavor of self"

Yes. I think this is an important point that I have
not seen discussed very much. It could be that I am
not in the right circles to hear the discussions.
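
To make the "no time to catch up" point concrete, here is a
toy numerical sketch (my own illustration; the update rule
and numbers are invented, not anyone's actual design). A
self-model that adapts at a fixed rate can track a slowly
changing system, but once the system changes faster than the
model updates, the error settles at a floor proportional to
the rate of change; there is never a stable "you" to
converge on:

def track(drift_per_step, update_rate=0.1, steps=200):
    true_state, model = 0.0, 0.0
    for _ in range(steps):
        true_state += drift_per_step  # the system keeps changing
        # the self-model adapts partway toward the current truth
        model += update_rate * (true_state - model)
    return abs(true_state - model)    # residual self-model error

print(track(drift_per_step=0.01))  # slow change: lag settles near 0.09
print(track(drift_per_step=1.0))   # whole lifetime in days: lag near 9.0

The steady-state lag works out to drift * (1 - rate) / rate,
so compressing a lifetime of change into days just scales up
how far behind the self-model stays, no matter how long the
loop runs.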

--- Ben Goertzel <[EMAIL PROTECTED]> wrote:

> Hi,
> 
> Well, your point is a good one, and a different one.
> 
> The specific qualities of an AGI's self will doubtless
> be very different from that of a human being's.  This
> will depend not only on its emotional makeup but also
> on the nature of its embodiment, for example.  Much of
> the nature of human self is tied to the localized
> nature of our physical embodiment.  An AGI with a
> distributed embodiment, with sensors and actuators all
> around the world or beyond, would have a very different
> kind of self-model than any human....  And a human
> hooked into the Net with VR technology and able to
> sense and act remotely via sensors and actuators all
> over the world, might also develop a different flavor
> of self not so closely tied to localized physical
> embodiment.
>
> But all that is a different sort of point....  My point
> was that an AGI that was very rapidly undergoing a
> series of profound changes might never develop a stable
> self-model at all, because as soon as the model came
> about, it would be rendered irrelevant.
>
> Imagine going through the amount of change in the human
> life course (infant --> child --> teen --> young adult
> --> middle aged adult --> old person) within, say, a
> couple days.  Your self model wouldn't really have time
> to catch up.  You'd have no time to be a stable "you,"
> even if there were (as intended, e.g., in Friendly AI
> designs) a stable core of supergoals throughout all the
> changes.
> 
> -- Ben G
> 
> 
> On 10/11/06, Chris Norwood <[EMAIL PROTECTED]> wrote:
> > How much of our "selves" are driven by biological
> > processes that an AI would not have to begin with, for
> > example...fear? I would think that the AI's self would
> > be fundamentally different to begin with due to this.
> > It may never have to modify itself to achieve the new
> > type of self that you are describing.
> >
> > --- Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >
> > > In something I was writing today, for a
> > > semi-academic publication, I found myself inserting
> > > a paragraph about how unlikely it is that superhuman
> > > AIs after the Singularity will possess "selves" in
> > > anything like the sense that we humans do.
> > >
> > > It's a bit long and out of context, but the passage
> > > in which this paragraph occurred may be of some
> > > interest to some folks here....  The last paragraph
> > > cited here is the one that mentions future AIs...
> > >
> > > -- Ben
> > >
> > > ******
> > >
> > >
> > > "
> > > The "self" in the present context refers to the
> > > "phenomenal self" (Metzinger, XX) or "self-model"
> > > (Epstein, XX).  That is, the self is the model that
> > > a system builds internally, reflecting the patterns
> > > observed in the (external and internal) world that
> > > directly pertain to the system itself.  As is well
> > > known in everyday human life, self-models need not
> > > be completely accurate to be useful; and in the
> > > presence of certain psychological factors, a more
> > > accurate self-model may not necessarily be
> > > advantageous.  But a self-model that is too badly
> > > inaccurate will lead to a badly-functioning system
> > > that is unable to effectively act toward the
> > > achievement of its own goals.
> > >
> > > "
> > > The value of a self-model for any intelligent system
> > > carrying out embodied agentive cognition is obvious.
> > > And beyond this, another primary use of the self is
> > > as a foundation for metaphors and analogies in
> > > various domains.  Patterns recognized pertaining to
> > > the self are analogically extended to other
> > > entities.  In some cases this leads to conceptual
> > > pathologies, such as the anthropomorphization of
> > > trees, rocks and other such objects that one sees in
> > > some precivilized cultures.  But in other cases this
> > > kind of analogy leads to robust sorts of reasoning –
> > > for instance, in reading Lakoff and Nunez's (XX)
> > > intriguing explorations of the cognitive foundations
> > > of mathematics, it is pretty easy to see that most
> > > of the metaphors on which they hypothesize
> > > mathematics to be based are grounded in the mind's
> > > conceptualization of itself as a spatiotemporally
> > > embedded entity, which in turn is predicated on the
> > > mind's having a conceptualization of itself (a self)
> > > in the first place.
> > >
> > > "
> > > A self-model can in many cases form a
> > > self-fulfilling prophecy (to make an obvious
> > > double-entendre!).  Actions are generated based on
> > > one's model of what sorts of actions one can and/or
> > > should take; and the results of these actions are
> > > then incorporated into one's self-model.  If a
> > > self-model proves a generally bad guide to action
> > > selection, this may never be discovered, unless said
> > > self-model includes the knowledge that semi-random
> > > experimentation is often useful.
> > >
> > > "
> > > In what sense, then, may it be said that self is an
> > > attractor of iterated forward-backward inference?
> > > Backward inference infers the self from observations
> > > of system behavior.  The system asks: What kind of
> > > system might I be, in order to give rise to these
> > > behaviors that I observe myself carrying out?  Based
> > > on asking itself this question, it constructs a
> > > model of itself, i.e. it constructs a self.  Then,
> > > this self guides the system's behavior: it builds
> > > new logical relationships between its self-model and
> > > various
=== message truncated ===
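
The loop described in the last two quoted paragraphs (act on
the self-model, then re-infer the self-model from the observed
results) can also be sketched in a few lines. This is purely
my own toy illustration; the skill numbers, update rate, and
epsilon-style exploration are invented, not taken from
anyone's design. With no experimentation, a wrong initial
self-model is self-fulfilling: the undervalued ability is
never tried, so its bad estimate is never corrected. With a
little semi-random exploration, the forward-backward loop
settles on an accurate self-model, i.e. the "attractor":

import random

def run(explore, steps=500):
    true_skill = {"a": 0.5, "b": 0.9}  # what the agent can really do
    model = {"a": 0.5, "b": 0.1}       # initial self-model undervalues "b"
    for _ in range(steps):
        if random.random() < explore:  # semi-random experimentation
            act = random.choice(list(model))
        else:                          # forward: act on the self-model
            act = max(model, key=model.get)
        result = 1.0 if random.random() < true_skill[act] else 0.0
        # backward: re-infer the self-model from the observed result
        model[act] += 0.05 * (result - model[act])
    return model

random.seed(0)
print(run(explore=0.0))  # "b" is never tried, so its low estimate persists
print(run(explore=0.1))  # exploration finds "b"; model approaches true_skill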


