Typically people use the reparameterization trick to handle this. See the original variational autoencoder paper and the example Lasagne implementation here: https://github.com/Lasagne/Recipes/blob/master/examples/variational_autoencoder/variational_autoencoder.py#L92
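For concreteness, here is a minimal sketch of the trick in Theano (variable names like mu and log_sigma are illustrative, not from your code): instead of sampling the policy output directly from N(mu, sigma), sample eps ~ N(0, 1) and compute mu + exp(log_sigma) * eps, so the sample is a deterministic function of the parameters and the gradient flows through them, while the raw noise is a constant with respect to them.

    import theano
    import theano.tensor as T
    from theano.sandbox.rng_mrg import MRG_RandomStreams

    srng = MRG_RandomStreams(seed=42)

    # mu and log_sigma would normally come from the policy network;
    # here they are free variables so the example is self-contained.
    mu = T.vector('mu')
    log_sigma = T.vector('log_sigma')

    # Raw noise that does not depend on the parameters...
    eps = srng.normal(mu.shape)
    # ...shifted and scaled, so the sample is a differentiable
    # function of mu and log_sigma.
    sample = mu + T.exp(log_sigma) * eps

    # Stand-in cost; in the RL setting this would be the policy objective.
    cost = T.sum(sample ** 2)

    # Gradients are now well-defined; differentiating through a direct
    # srng.normal(avg=mu, std=sigma) sample would raise NullTypeGradError.
    grads = T.grad(cost, [mu, log_sigma])
    f = theano.function([mu, log_sigma], grads)

This is essentially the same pattern as the sampling layer in the linked VAE example.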
On Friday, June 16, 2017 at 6:43:44 PM UTC-4, Sunjeet Jena wrote:
> I am working on code to implement a deep RL algorithm where the policy
> is given by values sampled from a normal distribution. But when I
> differentiate through the cost function (which of course depends on the
> distribution), I get the following error:
>
> "theano.gradient.NullTypeGradError: tensor.grad encountered a NaN. This
> variable is Null because the grad method for input 2
> (Subtensor{int64:int64:}.0) of the RandomFunction{normal} op is
> mathematically undefined. No gradient defined through raw random numbers op"
>
> Is there any way I can solve this?