Re: [theano-users] And operator doesn't work with theano logical operators
Don't use the Python "and" operator; it doesn't work on symbolic variables. Use theano.tensor.and_(a, b) instead. I think that will fix your problem.

On Wed., Jun 28, 2017 at 10:26, Sym wrote:

> I want to build a piecewise function with Theano, for instance a function
> that is nonzero only on the interval [2, 3].
>
> Here is the minimal code reproducing the error:
>
>     import theano
>     import theano.tensor as T
>     import numpy as np
>     import matplotlib.pyplot as plt
>
>     r = T.scalar()
>     gate = T.switch(T.ge(r, 2.) and T.le(r, 3.), 1., 0.)
>     f = theano.function([r], gate)
>     x = np.arange(0., 4., 0.05, dtype='float32')
>     y = [f(i) for i in x]
>     plt.plot(x, y)
>
> The result is the following: https://i.stack.imgur.com/XMQme.png
>
> Which is clearly not correct: only one condition is satisfied here.
>
> If I replace T.switch with theano.ifelse.ifelse, the result is the same...
>
> Is it a known bug, or am I missing something here?
>
> Thanks a lot!

--

---
You received this message because you are subscribed to the Google Groups "theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
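The failure mode is easy to reproduce without Theano at all: Python's "and" evaluates the truthiness of its first operand and, when that operand is truthy (as most objects without a special __bool__ are), simply returns the second operand. A minimal sketch of this mechanism (the Cond class is purely illustrative, not part of Theano):

```python
class Cond:
    """Stand-in for a symbolic condition; truthy by default, like most objects."""
    def __init__(self, name):
        self.name = name

a, b = Cond("r >= 2"), Cond("r <= 3")
combined = a and b
# Python's `and` sees that `a` is truthy and returns `b` unchanged,
# so the first condition is silently dropped -- matching the plot,
# where only the r <= 3 condition takes effect.
assert combined is b
```

This is why the posted plot shows only one condition being applied: the graph that reaches theano.function contains only the second comparison.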
[theano-users] Gradient Problem (always 0)
Hi all,

I am running a neuroscience experiment with a recurrent neural network model in Theano:

    def rnn(u_t, x_tm1, r_tm1, Wrec):
        x_t = (1 - alpha)*x_tm1 + alpha*(T.dot(r_tm1, Wrec) + brec + u_t[:, Nin:])
        r_t = f_hidden(x_t)
        return x_t, r_t

Then I define the scan function to iterate at each time step:

    [x, r], _ = theano.scan(fn=rnn,
                            outputs_info=[x0_, f_hidden(x0_)],
                            sequences=u,
                            non_sequences=[Wrec])

Wrec and brec are learnt by stochastic gradient descent:

    g = T.grad(cost, [Wrec, brec])

where cost is the cost function:

    cost = T.sum(f_loss(z, target[:, :, :Nout]))

with

    z = f_output(T.dot(r, Wout_.T) + bout)

Up to this point, everything works. Now I want to add two new vectors, let's call them u and v, so that the initial rnn function becomes:

    def rnn(u_t, x_tm1, r_tm1, Wrec, u, v):
        x_t = (1 - alpha)*x_tm1 + alpha*(T.dot(r_tm1, Wrec + T.dot(u, v)) + brec + u_t[:, Nin:])
        r_t = f_hidden(x_t)
        return x_t, r_t

    [x, r], _ = theano.scan(fn=rnn,
                            outputs_info=[x0_, f_hidden(x0_)],
                            sequences=u,
                            non_sequences=[Wrec, m, n])

m and n are the variables corresponding to u and v in the main function. Suddenly, the gradients T.grad(cost, m) and T.grad(cost, n) are zero.

I have been stuck on this problem for two weeks now. I verified that the values are not integers by using dtype=theano.config.floatX everywhere in the definition of the variables.

As you can see, the link between the cost and m (or n) is: the cost function depends on z, z depends on r, and r is one of the outputs of the rnn function that uses m and n in its equation.

Do you have any idea why this does not work? Any idea is welcome; I hope I can unblock this problem soon.

Thank you!
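For reference, here is what a single step of the modified recurrence computes, sketched in NumPy rather than Theano. All sizes, names, and the choice of tanh for f_hidden are illustrative assumptions, not taken from the original code; note that if u and v are 1-D, np.dot(u, v) (like T.dot) is their inner product, a single scalar added uniformly to every entry of Wrec:

```python
import numpy as np

rng = np.random.default_rng(0)
N, alpha = 4, 0.1                       # hidden size and leak rate (illustrative)
Wrec = rng.standard_normal((N, N))
brec = np.zeros(N)
u_vec, v_vec = rng.standard_normal(N), rng.standard_normal(N)

def rnn_step(inp, x_tm1, r_tm1):
    # np.dot(u_vec, v_vec) is a scalar here: it shifts Wrec uniformly.
    # A rank-one update of Wrec would instead use np.outer(u_vec, v_vec).
    x_t = (1 - alpha) * x_tm1 + alpha * (r_tm1 @ (Wrec + np.dot(u_vec, v_vec)) + brec + inp)
    r_t = np.tanh(x_t)                  # stand-in for f_hidden
    return x_t, r_t

x, r = np.zeros(N), np.tanh(np.zeros(N))
x, r = rnn_step(rng.standard_normal(N), x, r)
```

Writing the step out this way makes it easy to check numerically whether perturbing u or v actually changes the output at all, which is the first thing to verify when a gradient comes back exactly zero.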
[theano-users] Re: How can I calculate the size of output of convolutional operation in theano?
You should check this out:
http://deeplearning.net/software/theano_versions/dev/tutorial/conv_arithmetic.html

The output size is in general

    o = floor((i - r + 2p) / s) + 1

where i is the input size, o the output size, r the filter size, p the padding and s the stride of the convolution. This formula holds for every dimension independently (so for a 2D convolution, if the strides, padding, filter sizes, etc. differ per dimension, you can apply the formula to each dimension separately).

On Monday, June 26, 2017 at 15:36:21 UTC-4, Sunjeet Jena wrote:

> Is there any way I can calculate the size of the output after the
> Convolution Operation?
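The formula is a one-liner in code; a small helper (the function name and the example shapes below are my own, not from the thread) makes it easy to check:

```python
def conv_output_size(i, r, p=0, s=1):
    """Per-dimension output length of a convolution:
    input size i, filter size r, padding p, stride s.
    Integer division implements the floor in the formula."""
    return (i - r + 2 * p) // s + 1

# 28x28 input, 5x5 filter, no padding, stride 1 -> 24x24
assert conv_output_size(28, 5) == 24
# 32x32 input, 3x3 filter, padding 1, stride 2 -> 16x16
assert conv_output_size(32, 3, p=1, s=2) == 16
```

For a "same" convolution (output size equal to input size) with stride 1 and odd filter size r, setting p = (r - 1) // 2 recovers i, e.g. conv_output_size(7, 3, p=1) == 7.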
[theano-users] And operator doesn't work with theano logical operators
I want to build a piecewise function with Theano, for instance a function that is nonzero only on the interval [2, 3].

Here is the minimal code reproducing the error:

    import theano
    import theano.tensor as T
    import numpy as np
    import matplotlib.pyplot as plt

    r = T.scalar()
    gate = T.switch(T.ge(r, 2.) and T.le(r, 3.), 1., 0.)
    f = theano.function([r], gate)
    x = np.arange(0., 4., 0.05, dtype='float32')
    y = [f(i) for i in x]
    plt.plot(x, y)

The result is the following: https://i.stack.imgur.com/XMQme.png

Which is clearly not correct: only one condition is satisfied here.

If I replace T.switch with theano.ifelse.ifelse, the result is the same...

Is it a known bug, or am I missing something here?

Thanks a lot!
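For comparison, here is what the intended gate looks like when the two conditions are combined elementwise. This is sketched in NumPy rather than Theano; np.logical_and plays the role that theano.tensor.and_ (or the & operator) plays on symbolic variables:

```python
import numpy as np

x = np.arange(0., 4., 0.05, dtype='float32')
# Combine the two comparisons elementwise, not with Python's `and`.
gate = np.where(np.logical_and(x >= 2., x <= 3.), 1., 0.)

assert gate[x < 2.].max() == 0.                 # zero below the interval
assert gate[(x >= 2.) & (x <= 3.)].min() == 1.  # one inside [2, 3]
assert gate[x > 3.].max() == 0.                 # zero above the interval
```

The resulting curve is 1 on [2, 3] and 0 elsewhere, which is the plot the posted code was expected to produce.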