Hi Ian,

Right, the repo evolved on my system, so I'd never tried a clean install;
thanks for testing that :)


On Thu, Dec 5, 2013 at 12:04 AM, Ian Danforth <[email protected]> wrote:

> This is very very cool! I'm trying to run these but running into some issues:
>
> 1. Can you describe your development workflow / git setup?
>
> I just added breznak as a remote, did a git fetch, switched to the
> utility-encoder branch, and I'm now rebuilding.
>
Yes, I just switch to the utility-encoder branch, run ./build.sh, and then
switch to the ALife repo.


>
> Are you using the NTAX_DEVELOPER_BUILD flag? Or do you do ./build.sh for
> each new branch?
>
Never heard of it; what's this flag for?


>
> 2. It looks like there is another requirement 'vtk' which is a pretty huge
> install process all by itself. As much as I want to see the pretty graphs I
> can't get it to build on my machine (OSX 10.9)
>
This is true. vtk is a dependency of mayavi, so I silently assumed it. It's
also true that vtk is a big C++ package which can be trouble to get built.
I'd like to ditch mayavi for a more lightweight solution; any ideas? Ideally
something with Matlab-like syntax, so I'll try matplotlib? (which we are/were
shipping, but mpl was a pain on OSX too, afaik)
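For what it's worth, here's a minimal sketch of what a matplotlib-based
utility-map plot could look like (the grid and the distance-to-target
scoring are made up for illustration; none of these names come from the
repo):

```python
# Hypothetical sketch: plot a 2-D utility map with matplotlib instead of mayavi.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt

# Toy utility: Euclidean distance to a target at (1, 1), as in the encoder example.
xs, ys = np.meshgrid(np.linspace(0, 3, 50), np.linspace(0, 3, 50))
utility = np.sqrt((xs - 1.0) ** 2 + (ys - 1.0) ** 2)

fig, ax = plt.subplots()
im = ax.imshow(utility, origin="lower", extent=(0, 3, 0, 3), cmap="viridis")
fig.colorbar(im, ax=ax, label="utility (distance to target)")
ax.set_xlabel("position X")
ax.set_ylabel("position Y")
fig.savefig("utility_map.png")
```

That avoids vtk entirely; for a real 3-D surface, matplotlib's mplot3d
toolkit would be the equivalent of mayavi's surf.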


>
> 3. Ignoring the lack of vtk I tried running the scripts but got import
> errors, you need a __init__.py file in the alife dir.
>
Fixed, thanks.


>
> 4. After I added that I get this error:
>
Then it's already working for you (sort of :) ). Those are not real
errors.

> ians-air:ALife iandanforth$ python
> alife/experiments/utility_map/utility_map.py 3 6 24 24
>
> feval not set! do not forget to def(ine) the function and set it with
> setEvaluationFn()
>
This is an (intended) warning shown when the UtilityEncoder is initialized
without the evaluation function (which is required); I set it later in the
class that instantiates the encoder. Should I reword the warning to avoid
confusion?
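To illustrate the two-step setup that triggers the warning, here's a
simplified standalone mock (not the actual UtilityEncoder code from the
branch; only setEvaluationFn is a real name from the warning text):

```python
# Simplified mock of the two-phase setup: construct first, set the
# (required) evaluation function later via setEvaluationFn().
class UtilityEncoderMock:
    def __init__(self):
        self.feval = None
        print("feval not set! do not forget to def(ine) the function "
              "and set it with setEvaluationFn()")  # the warning Ian saw

    def setEvaluationFn(self, fn):
        self.feval = fn

    def getScore(self, inputValue):
        if self.feval is None:
            raise RuntimeError("evaluation function was never set")
        return self.feval(inputValue)

# The class that instantiates the encoder adds feval afterwards,
# so the warning at construction time is harmless:
enc = UtilityEncoderMock()
enc.setEvaluationFn(lambda xy: ((xy[0] - 1) ** 2 + (xy[1] - 1) ** 2) ** 0.5)
print(enc.getScore([4, 5]))  # Euclidean distance to target (1, 1) -> 5.0
```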

> Can't show you nice picture; couldn't import mayavi
>
That's expected: you don't have mayavi (and thus vtk) working, so the
program runs but isn't of much use without the plots.
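One way to make that failure softer is the optional-import pattern; a
sketch below (not necessarily how the repo handles it, and the function
name is made up):

```python
# Sketch: degrade gracefully when mayavi (and thus vtk) is not installed.
try:
    from mayavi import mlab  # heavy dependency, needs vtk
    HAVE_MAYAVI = True
except ImportError:
    HAVE_MAYAVI = False

def show_utility_map(utility):
    """Plot the map if mayavi is available; otherwise warn and carry on."""
    if not HAVE_MAYAVI:
        print("Can't show you nice picture; couldn't import mayavi")
        return False
    mlab.surf(utility)  # 3-D surface plot of the utility values
    return True

plotted = show_utility_map([[0.0, 1.0], [1.0, 2.0]])
print("plotted:", plotted)
```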

Btw, at least you can see some screenshots in the README and the /imgs dir.
But as you say, it's fun to play with and modify, so please do try to get it
running.

Cheers, breznak

> Thanks for all the cool work, I really want to play around with this!
>
> Ian
>
> P.S. I think you mean "homeostasis" rather than "osmosis."
>

>
>
>
>
> On Wed, Dec 4, 2013 at 1:35 PM, Marek Otahal <[email protected]> wrote:
>
>> This mail introduces my experiments with NuPIC on simulating behavior,
>> emotions, goals and learning.
>>
>> It uses a utility-encoder:
>> https://github.com/breznak/nupic/tree/utility-encoder
>> which I'd like to ask you for review, opinions and consideration for
>> mainline.
>> More than for practical issues, I hope this encoder could be an entry
>> point for a field of some very interesting experiments with CLAs.
>>
>> The principle of the encoder is very simple: it provides a kind of
>> postprocessing of the original input, which is then added to the
>> encoder's output as another field (score).
>>
>> A use case for the encoder is e.g. behavior modeling (which I'll show
>> further). A typical example: use a vector encoder where two fields carry
>> the meaning (position X, position Y); at initialization, the encoder is
>> passed a user-defined evaluation function, which accepts the input and
>> produces a score for it. For this example, the score could be the
>> Euclidean distance to a defined target (1,1). The resulting (post)input
>> would be "[x, y], score" -> which is converted to a bitmap as output.
>>
>>
>> ===================
>>
>> The behavior and emotions experiments with NuPIC can be found in my
>> https://github.com/breznak/ALife repo.
>>
>>
>> 1/ Emotions:
>> - I went on to assume that basic emotions (low-level ones, like hunger,
>> pain, "feeling good") can be hardwired into the program, as they are in
>> humans and animals, where they are encoded in hormone levels (adrenalin,
>> ..). Such emotions drive "osmosis", where the body wants to keep certain
>> conditions and inner states - like feeling hungry, keeping a reasonable
>> temperature, the "biological clock for mothers", ...
>>
>> This is modelled by the utility-encoder (above).
>>
>> Emotions can be used to model higher-level goals as well. Here it loses
>> biological plausibility, but the use of utility still holds. Such a case
>> could be "the will to reach a target position, get the highest profit in
>> trades, etc."
>>
>>
>> 2/ Actions' effects
>> Another interesting use is where the creature is discovering its
>> abilities (a young baby, a completely new environment [space], or an
>> artificial limb ["vision" through a taste gadget for blind people]).
>> A similar concept is used in Prolog programming/planning, where actions
>> have their prerequisites and effects (i.e. the classic examples of
>> cranes & cars, monkey & banana & box).
>> This nicely utilizes the concept of the SP (and TP) to learn the
>> effects, requirements and changes of actions.
>> An example: {"hungry", eat, chicken} -> inner state hunger goes down
>> -> high score!
>> while {"full", eat, chicken} -> not much improvement in inner states ->
>> medium score. And finally: {"extremely hungry", play violin, violin} ->
>> lowers food amount -> very low score.
>>
>> A stacked-actions example: the sequence {no food, hungry, walk} followed
>> by {have food, hungry, eat} has a high score, while the sequence {no
>> food, hungry, eat}, {no food, hungry, walk} does not.
>>
>> 3/ Behavior
>> This is the final stage; it combines the above plus some sort of
>> planning. It can be described as pursuing the main goal(s) while
>> switching to more immediate sub-goals as needed. E.g. "Get from NYC to
>> LA, avoid planes, and don't die (of hunger, being hit by cars, ...)"
>>
>> The utility map is quite hard to plot, because it actually changes with
>> position, action, and inner states (and time).
>>
>> This is modelled by the behavior agent, who perceives the world, keeps
>> an inner representation of the explored states (memory - "5 gold on pos
>> [1,5]; troll on [8,8]") and has a collection of inner states (hunger,
>> body temperature, oz in the car's gas tank, ..). This agent updates its
>> utility map for each {state, innerstate, action} taken (similar to
>> reinforcement learning).
>>
>> Like I said, the agent creates a utility map as it consumes its
>> resources and moves through the environment. Emotions allow shaping its
>> direction toward (sub)goals. Progress is made by minimization (or
>> maximization, it doesn't matter) of the utility function, following the
>> gradient. Here, the "choose the best" step can be done either
>> "artificially" (in a non-biological way), or there could be a
>> higher-level region which takes the possible inputs and chooses by the
>> minimum score. (Out of interest, such a minimizing CLA would be a nice
>> proof of concept.)
>>
>>
>> I'd like to hear your further ideas, other examples, flaws in my plan,
>> etc etc :)
>>
>> Cheers,
>> breznak
>>
>> --
>> Marek Otahal :o)
>>
>> _______________________________________________
>> nupic mailing list
>> [email protected]
>> http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
>>
>>
>
>
>


-- 
Marek Otahal :o)
