You have interpreted my post below in an overly defensive manner.
Sorry ... I'm dealing with some other frustrating things this morning, so
maybe the frustration unintentionally rubbed off on this email exchange
...
(Are you saying Novamente is not scalable to human level without
Well, this whole chapter of the wikibook deals with these issues...
http://www.opencog.org/wiki/OpenCogPrime:WikiBook#Probabilistic_Evolutionary_Learning
Most of that chapter is about various strategies for using background
knowledge to guide probabilistic evolutionary learning.
As you note:
Everything is a constraint.
Everything is not much of a constraint
;-)
On Wed, Dec 17, 2008 at 10:52 AM, Lukasz Stafiniak <lukst...@gmail.com> wrote:
Talking on a very abstract level, MOSES could ultimately be developed
so that it explores what you could call constraint relaxation
strategies; combo trees are built such that they meet the constraints
optimally, with more-or-less random exploration of different
trade-offs. Pure MOSES (current)
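The constraint-relaxation idea sketched above can be illustrated with a toy hill-climbing search. This is purely a hypothetical illustration, not the actual MOSES/combo API: the candidate, the particular constraint predicates, and the relaxation probability are all invented for the example. The point is only the shape of the strategy: score candidates against the currently active constraints, and occasionally drop (relax) a constraint to explore a different trade-off.

```python
import random

random.seed(0)

N = 8  # candidate length (toy bit-string stands in for a combo tree)

# Hypothetical constraints: each is a predicate on the candidate.
constraints = [
    lambda c: c[0] == 1,
    lambda c: sum(c) >= 4,
    lambda c: c[-1] == 0,
    lambda c: c[2] != c[3],
]

def fitness(cand, active):
    # Score only against the active (non-relaxed) constraints.
    return sum(1 for i in active if constraints[i](cand))

def mutate(cand):
    # Flip one random bit.
    out = list(cand)
    i = random.randrange(len(out))
    out[i] ^= 1
    return out

def search(steps=200, relax_prob=0.05):
    cand = [random.randint(0, 1) for _ in range(N)]
    active = set(range(len(constraints)))
    for _ in range(steps):
        # Occasionally relax one constraint, shifting the trade-off
        # the search is exploring.
        if active and random.random() < relax_prob:
            active.discard(random.choice(sorted(active)))
        nxt = mutate(cand)
        if fitness(nxt, active) >= fitness(cand, active):
            cand = nxt
    return cand, active
```

A real system would of course relax and re-impose constraints in a much more principled way (e.g. guided by background knowledge, as the wikibook chapter discusses); the sketch only shows the bare mechanics of random trade-off exploration.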
Ben,
Thanks for your reply; it was helpful.
Your answer leads me to ask: in what brain-like thinking processes would
MOSES be a win over just having the hypergraph itself compute candidate
solutions?
Hofstadter's Copycat has shown that: (a) various relaxations of a given
multiple