Defining an RL-agent environment with the RL-Glue API is a straightforward 
task. Apart from the (de)initialization calls, you only need to define the 
environment's response to an agent's action. This response should include an 
appropriate reward for the agent (there are separate placeholders for integer 
and real values, which is how the API deals with different types). If the 
environment is dynamic, its internal state can be programmed as you see fit. 
Examples are here: http://library.rl-community.org/wiki/Category:Environments.
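For a concrete picture, here is a minimal sketch written against the RL-Glue 
Python codec (the rlglue package). The env_init/env_start/env_step/env_cleanup 
callbacks and the Observation / Reward_observation_terminal types are the 
codec's interface; the "chain" dynamics, the task-spec string, the reward 
values, and the ChainEnvironment name are made up purely for illustration:

    # Minimal RL-Glue environment sketch (Python codec).
    # Dynamics, rewards, and task spec below are illustrative assumptions.
    from rlglue.environment.Environment import Environment
    from rlglue.environment import EnvironmentLoader as EnvironmentLoader
    from rlglue.types import Observation
    from rlglue.types import Reward_observation_terminal


    class ChainEnvironment(Environment):
        def env_init(self):
            # Task-spec string describing observations, actions, and rewards
            return ("VERSION RL-Glue-3.0 PROBLEMTYPE episodic "
                    "DISCOUNTFACTOR 1.0 OBSERVATIONS INTS (0 10) "
                    "ACTIONS INTS (0 1) REWARDS (-1.0 1.0)")

        def env_start(self):
            self.state = 0
            obs = Observation()
            obs.intArray = [self.state]   # integer placeholder
            return obs

        def env_step(self, action):
            # Move left/right along a 0..10 chain; reward +1 at the end
            self.state += 1 if action.intArray[0] == 1 else -1
            self.state = max(0, min(10, self.state))

            obs = Observation()
            obs.intArray = [self.state]

            ro = Reward_observation_terminal()
            ro.r = 1.0 if self.state == 10 else -1.0   # real-valued reward
            ro.o = obs
            ro.terminal = int(self.state == 10)
            return ro

        def env_cleanup(self):
            pass

        def env_message(self, message):
            return ""


    if __name__ == "__main__":
        EnvironmentLoader.loadEnvironment(ChainEnvironment())

Those few callbacks are essentially the whole contract; everything else (the 
chain dynamics here) is up to the environment author, and the linked example 
environments follow the same structure.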


On Tuesday, November 25, 2014 10:34:23 AM UTC-5, John Myles White wrote:
>
> Sounds like a cool project. Are the state space representations that 
> RL-Glue uses easy to work with?
>
>  — John
>
> On Nov 24, 2014, at 10:09 PM, wil...@gmail.com wrote:
>
> Reinforcement learning (RL) isn't covered much in Julia packages. There is 
> a collection of RL algorithms over MDPs in the package 
> https://github.com/cpritcha/MDP, and a collection of IJulia notebooks from 
> a Stanford course that covers more RL algorithms: 
> https://github.com/sisl/aa228-notebook/tree/master
>
> Unfortunately, more advanced function-approximation techniques, beyond 
> look-up tables, that make it possible to tackle large state-action spaces, 
> are nowhere to be found.
>
> A couple of months ago, Shane Conway, the guy behind RL-Glue 
> <http://glue.rl-community.org/wiki/Main_Page>, talked about developing a 
> Julia RL-Glue client. If that happens, it would be quite simple to use 
> various advanced RL algorithms, including value-function approximators, in 
> Julia. 
>
>
> On Saturday, November 22, 2014 11:12:29 PM UTC-5, Pileas wrote:
>>
>> Some problems suffer from the so-called curse of dimensionality and curse 
>> of modeling. For this reason Bertsekas and Tsitsiklis (at MIT) introduced 
>> the so-called Neuro-Dynamic Programming.
>>
>> Does Julia offer support for the aforementioned, and if not, what about 
>> the future?
>>
>
>
