On 12/06/06, James Ratcliff <[EMAIL PROTECTED]> wrote:
Will,
  Right now I would think that a negative reward would be usable for this
aspect.

I agree it is usable, but I am not sure it is necessary: you can just
normalise the reward values.

Let's say that for most states you normally give 0 for a satiated
entity, with the best state at 100 and the worst at -100. You can
transform that to 0 for the worst state, 100 for the everyday satiated
state, and 200 for the best state, without affecting the choices that
most reinforcement systems would make.

So pain would just be a below-baseline reward.
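
A quick sketch of what I mean (the names are just illustrative, and
this only covers the simple case of a greedy choice over state values):

# Shifting every reward by the same constant leaves the ranking of
# states, and therefore a greedy choice, unchanged.

def greedy_choice(values):
    """Return the state with the highest value."""
    return max(values, key=values.get)

# Original scheme: worst = -100, everyday satiated baseline = 0, best = +100.
original = {"worst": -100, "satiated": 0, "best": 100}

# Shifted scheme: add 100 to everything, so pain is merely below baseline.
shifted = {state: r + 100 for state, r in original.items()}

assert greedy_choice(original) == greedy_choice(shifted) == "best"
print(shifted)  # {'worst': 0, 'satiated': 100, 'best': 200}

For an infinite-horizon discounted return the same shift just adds a
constant c/(1-gamma) to every return, so the ordering of policies is
untouched there as well.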

I am using the positive negative reward system right now for
motivational/planning aspects for the AGI.
So if sitting at a desk considering a plan of action that might hurt himself
or another, the plan would have a negative rating, where another safer plan
may have a higher rating.

Heh.  Well I expect an AI system that worked like a human would have a
very tenuous link between the motivation and planning systems.

That tenuous link is amply illustrated by my own actions. I have stated
that I think the plausible genetically specified positive motivations
are to do with food, sex and positive social interaction. Yet I tend to
plan how to create interesting computer systems, which isn't the best
route to any of the above....

More later...

 Will Pearson
