Thanks for your response, Richard. I'm not on equal footing when it
comes to cognitive science, but I do want to comment on one idea.

Richard Loosemore wrote:

 >Instead, what you do is build the motivational system in such a way
 >that it must always operate from a massive base of thousands of small
 >constraints.  A system that is constrained in a thousand different
 >directions simply cannot fail in the way that one constrained by a
 >single supergoal is almost guaranteed to fail.

If you could formalize this, and specifically show how a massive base of
thousands of small constraints could have consistently reliable effects
on the system, it would probably be an important contribution to many
disciplines, not just AI theory.

But isn't a formal model of this idea sort of the Holy Grail of complex
systems theory in the first place?
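
Just to make concrete what I mean by "formalize," here is a toy sketch
(purely my own illustration, not anything Richard proposed; the constraint
definitions, numbers, and candidate actions are all made up) of how a
large bank of small constraints can veto an action that looks optimal
under a single supergoal:

# Toy sketch of the contrast Richard describes.  Everything here is
# invented for illustration; the constraints and actions are hypothetical.

N_CONSTRAINTS = 1000   # "a massive base of thousands of small constraints"

def supergoal_score(action):
    # A single scalar objective; an action that games this one number wins.
    return action["utility"]

def constraint_satisfied(i, action):
    # Each small constraint checks one side effect against its own
    # (arbitrarily chosen) tolerance.
    tolerance = (i % 10 + 1) / 10.0
    return action["side_effects"][i % len(action["side_effects"])] <= tolerance

def constrained_score(action):
    # The action must clear every one of the small constraints; a single
    # pathological action is unlikely to slip past all of them at once.
    violations = sum(1 for i in range(N_CONSTRAINTS)
                     if not constraint_satisfied(i, action))
    return action["utility"] if violations == 0 else float("-inf")

actions = [
    {"name": "modest",
     "utility": 1.0,
     "side_effects": [0.05] * 10},
    {"name": "pathological",          # games the supergoal...
     "utility": 100.0,
     "side_effects": [0.9] * 10},     # ...but trips many small constraints
]

print("supergoal alone picks:   ", max(actions, key=supergoal_score)["name"])
print("constrained system picks:", max(actions, key=constrained_score)["name"])

Of course this only captures the veto side of the idea; the open question
is the one above, namely whether the aggregate behavior of thousands of
such constraints can be shown to be consistently reliable.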

Keith  

