On 5/31/07, Chuck Esterbrook <[EMAIL PROTECTED]> wrote:
On 5/29/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Instead, what you do is build the motivational system in such a way that
> it must always operate from a massive base of thousands of small
> constraints.  A system that is constrained in a thousand different
> directions simply cannot fail in the way that one constrained by a
> single supergoal is almost guaranteed to fail.

Richard, these ideas (including your Oct 2006 post) are interesting.
No angst or boredom here.

But my next question is: How does one go about building the "massive
base of thousands of small constraints"? Does each constraint need to
affect the system in a slightly different manner? If so, are these
hand coded, generated, ...?

You might want to take a look at this:
http://homepage.mac.com/a.eppendahl/work/robotics.html

See the "Reversibility" section.

With reversibility, the "goals" or "needs" of the AI are more tightly
constrained by permissions. The constraints themselves need no
hand-coding; only the permissions do, somewhat as goals would.
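To make the idea concrete, here is a minimal sketch (my own illustration, not code from Eppendahl's page) of a reversibility constraint: an action is allowed only if it can be undone, unless a hand-coded permission explicitly licenses the irreversible change. All names here are hypothetical.

```python
# Sketch of a reversibility guard: actions are blocked unless they are
# reversible, or unless an explicit permission covers them. This is an
# assumption-laden toy model, not an implementation from the linked page.

class Action:
    def __init__(self, name, apply, inverse=None):
        self.name = name        # label used to match against permissions
        self.apply = apply      # state -> new state
        self.inverse = inverse  # new state -> original state, if reversible


class ReversibilityGuard:
    def __init__(self, permissions=None):
        # Only the permissions are hand-coded; every other action is
        # constrained automatically by the reversibility test below.
        self.permissions = set(permissions or [])

    def allowed(self, action, state):
        if action.name in self.permissions:
            return True
        if action.inverse is None:
            return False  # no way to undo it, and no permission
        # Check that applying then inverting restores the state.
        return action.inverse(action.apply(state)) == state


guard = ReversibilityGuard(permissions=["log_message"])
move = Action("move_right", apply=lambda s: s + 1, inverse=lambda s: s - 1)
delete = Action("delete_file", apply=lambda s: 0)  # no inverse supplied

print(guard.allowed(move, 5))    # reversible -> True
print(guard.allowed(delete, 5))  # irreversible, unpermitted -> False
```

The point of the sketch is that thousands of constraints fall out of one structural rule, while the hand-coded part (the permission list) stays small.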

As I see it, this is more appropriate to implement using control theory
(or operant learning and behaviour in models of natural thinking) than
reinforcement learning.

It's somewhat unrefined, but it looks like something orthogonal to the
usual approach to goal systems.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email