Initially, the Novamente system's motivations will be to:

-- please its human teachers
-- make sure its goal system maintains certain desirable "meta-goal" properties
-- learn and create new information

Designing the right initial goal system for the "representationally
explicit" portion of the "reflectively explicit" goal system of an AGI
is a hard problem, one of the hardest aspects of Friendly AI.  I don't
claim to have solved this problem yet, but I don't think it truly
**needs** to be solved until one is dealing with an AGI at human
toddler level.  So I have been focusing on other aspects of the AGI
problem.
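
To make the above slightly more concrete: here is a minimal sketch, in
Python, of how such a representationally explicit initial supergoal set
might look as a data structure.  The class name, fields and weights are
all hypothetical -- this is not actual Novamente code.

from dataclasses import dataclass

@dataclass
class Supergoal:
    name: str          # human-readable label
    weight: float      # relative importance when supergoals conflict
    description: str   # informal statement of what satisfying it means

# The three initial motivations listed above, with invented weights.
INITIAL_SUPERGOALS = [
    Supergoal("please_human_teachers", 0.4,
              "Act so that human teachers give positive feedback."),
    Supergoal("preserve_metagoal_properties", 0.4,
              "Keep the goal system satisfying its desirable 'meta-goal' "
              "properties as it is revised."),
    Supergoal("learn_and_create", 0.2,
              "Acquire new information and create new information."),
]

if __name__ == "__main__":
    for g in INITIAL_SUPERGOALS:
        print(f"{g.name} (weight {g.weight}): {g.description}")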

-- Ben

On 12/8/06, James Ratcliff <[EMAIL PROTECTED]> wrote:
Ok,
 One more problem I have with goals and autonomous AGI is that in humans we
appear to really have two major motivational factors: physiological needs
and personal 'likes'.

  If you are working on an AGI that will truly be autonomous, what are its
base motivations?  Most AGIs will have no true 'likes' in the way that
humans do.
There seem to be three answers to this:
1. Not have full autonomy, but be a 'slave' or worker strictly for a human
owner.
2. Not model any internal likes, but always use the owner's preferences
(this merges with option 1).
3. Model the AGI's 'likes' explicitly, in human terms, i.e. "I like to go see
the beach, so I WANT to go to the coast to see it, and plan to do so."

What direction do you see things going?  You mentioned before that you were
aiming towards full autonomy, but can we model that, and if we do, how close
is it, in any way, to humanity?
  What are the intrinsic motivating factors of a fully-autonomous AGI?
Or is that just too 'alien' for us?

James Ratcliff

Ben Goertzel <[EMAIL PROTECTED]> wrote:
 > Another aspect I have had to handle is the different temporal aspects of
> goals/states, like immediate gains vs. short-term and long-term goals and
> how they can coexist together. This is difficult to grasp as well.

In Novamente, this is dealt with by having goals explicitly refer to
time-scope.

But indeed, supergoals with different time-scopes are prime examples
of supergoals that may contradict each other in practice (in terms of
the subgoals they generate) while being consistent with each other in
principle.
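
As a rough illustration of the above (hypothetical names and numbers, not
the actual Novamente goal representation): two goals that each carry an
explicit time-scope can be perfectly consistent as stated, yet generate
subgoals that compete for the same near-term resources.

from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    time_scope_hours: float   # horizon over which satisfaction is judged
    priority: float

# Two supergoals that do not contradict each other as stated...
short_term = Goal("gain_teacher_approval_now", time_scope_hours=1.0, priority=0.6)
long_term = Goal("master_the_sim_world", time_scope_hours=1000.0, priority=0.8)

# ...but whose generated subgoals compete for the same limited resource
# (the system's attention over the next hour).
subgoals = {
    short_term.name: "spend the next hour performing already-learned tricks",
    long_term.name: "spend the next hour exploring unfamiliar objects",
}

def choose(goals, subgoal_map):
    """Crude arbitration: weight each goal's priority by how much of its
    time-scope the next hour represents, then pick the winner."""
    def urgency(g):
        return g.priority * min(1.0, 1.0 / g.time_scope_hours)
    winner = max(goals, key=urgency)
    return winner, subgoal_map[winner.name]

if __name__ == "__main__":
    winner, action = choose([short_term, long_term], subgoals)
    print(f"Arbitration favours '{winner.name}': {action}")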

> Your baby AGI is currently pursuing only goals externally given to it, but
> soon it would need to handle things like limited resources over time,
> deciding on better goals for the longer term vs. the short term, and
> balancing the two.

Agree ... we are not dealing with those things yet...

> Also, how is your AGI handling the reward mechanism?  Is it just a simple
> additive number property that you are increasing via a 'pat on the head' or
> 'good boy' reward mechanism, or is it something internally created?

At the moment it's just a 'good boy' reward mechanism, which rewards
concrete behaviors in the sim world. Most of the system's internal
activities are not regulated by specific goal-achievement-seeking, but
rather by the intrinsic activities of the system's cognitive
processes.
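
For illustration only (this is not the actual Novamente/sim-world
interface): a 'good boy' reward channel of this kind can be as simple as
an externally driven accumulator keyed by concrete behavior.

from collections import defaultdict

class TeacherRewardChannel:
    """Accumulates externally given reward, keyed by concrete behavior."""

    def __init__(self):
        self.reward_by_behavior = defaultdict(float)

    def reward(self, behavior: str, amount: float = 1.0):
        """Called when the teacher gives the equivalent of a 'good boy'."""
        self.reward_by_behavior[behavior] += amount

    def total(self) -> float:
        return sum(self.reward_by_behavior.values())

if __name__ == "__main__":
    channel = TeacherRewardChannel()
    channel.reward("fetch_ball")        # pat on the head
    channel.reward("fetch_ball", 0.5)   # a smaller reward
    channel.reward("stack_blocks")
    print(dict(channel.reward_by_behavior), "total =", channel.total())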

-- Ben


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
