Re: [theano-users] Re: Theano lives on as Aesara

2021-09-13 Thread Frédéric Bastien
Great work!

On Fri, May 7, 2021 at 11:25 AM, Brandon T. Willard wrote:

> We have a GitHub Discussions set up for Aesara, so, if anyone has
> Aesara-specific questions, comments, etc., post them to
> https://github.com/pymc-devs/aesara/discussions.
>
> On Thursday, April 29, 2021 at 8:25:58 AM UTC-5 Thomas wrote:
>
>> Hi everyone,
>>
>> We have forked Theano to Aesara: https://github.com/pymc-devs/aesara,
>> completely refactored it, added a new JAX backend, and are in the process
>> of replacing the C backend with numba, among many other improvements.
>>
>> Best,
>> Thomas
>>



Re: [theano-users] Could not initialize pygpu error with CUDA 9.0 and latest Theano

2018-05-16 Thread Frédéric Bastien
Creating 2 environments seems like the right thing to do. I do not think
dnn.library_path overrides LD_LIBRARY_PATH.

On Thu, May 10, 2018 at 7:06 PM Michael Klachko 
wrote:

> What is the best way to do that? Should I use separate conda environments
> for theano and tensorflow, and set LD_LIBRARY_PATH in each? Does
> dnn.library_path in theanorc override LD_LIBRARY_PATH?
>
>
>
> On Thursday, May 10, 2018 at 3:09:29 PM UTC-7, nouiz wrote:
>
>> You could have multiple cuda version installed to have TF working.
>>
>> On Thu, May 10, 2018 at 4:28 PM, Michael Klachko wrote:
>>
>>> After struggling with this error for a day, I decided to upgrade CUDA to
>>> 9.1 and CuDNN to 7.1. After that I got "your driver might be too old"
>>> error, which was resolved by updating the driver to 396.24. Also, in the
>>> process I found out I had older CuDNN files in /usr/lib/x86_64-linux-gnu/
>>> directory. Not sure how they got there, perhaps because sometimes I
>>> installed CuDNN using .deb package, and sometimes by manually copying the
>>> files. So it's probably not a good idea to mix .deb and .run cuda
>>> installation methods.
>>>
>>> Anyway, theano works fine now, but unfortunately my Tensorflow is
>>> broken because it does not support cuda 9.1 yet... Will probably have to
>>> compile it from source.
>>>
>>>
>>>
>>> On Thursday, May 10, 2018 at 11:30:38 AM UTC-7, Arnaud Bergeron wrote:
>>>
 This is a new one.  It is also very weird since gemm doesn't involve
 cuLinkAddData.  This may be an error message from something else.

 First things first, since you are on cuda 9.0, I would recommend that
 you update your driver to 384.111 or 390.*.  If that doesn't help, then
 I'll need some help reproducing the problem since I don't get that in any
 of my environments.

>>> On May 8, 2018, at 6:15 PM, Michael Klachko wrote:

 I have CUDA 9.0 and CuDNN 7.0.5 on my Ubuntu 16.04, and Tensorflow
 works fine. In order to install theano, I first installed miniconda, then
 ran "conda install theano pygpu" and it seemed to have installed fine.



 However, here's what I get:


 $ python
 Python 3.6.5 |Anaconda, Inc.| (default, Apr 29 2018, 16:14:56)
 [GCC 7.2.0] on linux
 Type "help", "copyright", "credits" or "license" for more information.
 >>> import theano
 Using cuDNN version 7005 on context None
 ERROR (theano.gpuarray): Could not initialize pygpu, support disabled
 Traceback (most recent call last):
   File 
 "/home/michael/miniconda2/envs/las/lib/python3.6/site-packages/theano/gpuarray/__init__.py",
  line 227, in 
 use(config.device)
   File 
 "/home/michael/miniconda2/envs/las/lib/python3.6/site-packages/theano/gpuarray/__init__.py",
  line 214, in use
 init_dev(device, preallocate=preallocate)
   File 
 "/home/michael/miniconda2/envs/las/lib/python3.6/site-packages/theano/gpuarray/__init__.py",
  line 159, in init_dev
 pygpu.blas.gemm(0, tmp, tmp, 0, tmp, overwrite_c=True)
   File "pygpu/blas.pyx", line 149, in pygpu.blas.gemm
   File "pygpu/blas.pyx", line 47, in pygpu.blas.pygpu_blas_rgemm
 pygpu.gpuarray.GpuArrayException: (b'cuLinkAddData: CUDA_ERROR_UNKNOWN: 
 unknown error', 3)



 Here's the packages I have installed in this environment:


 $ conda list
 # packages in environment at /home/michael/miniconda2/envs/las:
 #
# Name                     Version      Build            Channel
binutils_impl_linux-64     2.28.1       had2808c_3
binutils_linux-64          7.2.0        26
ca-certificates            2018.03.07   0
certifi                    2018.4.16    py36_0
gcc_impl_linux-64          7.2.0        habb00fd_3
gcc_linux-64               7.2.0        26
gxx_impl_linux-64          7.2.0        hdf63c60_3
gxx_linux-64               7.2.0        26
intel-openmp               2018.0.0     8
libedit                    3.1          heed3624_0
libffi                     3.2.1        hd88cf55_4
libgcc-ng                  7.2.0        hdf63c60_3
libgfortran-ng             7.2.0        hdf63c60_3
libgpuarray                0.7.5        h14c3975_0
libstdcxx-ng               7.2.0        hdf63c60_3
mako                       1.0.7        py36h0727276_0
markupsafe                 1.0          py36hd9260cd_1
mkl                        2018.0.2     1
mkl-service                1.1.2        py36h17a0993_4
mkl_fft                    1.0.1        py36h3010b51_0
mkl_random                 1.0.1        py36h629b387_0
ncurses                    6.0

Re: [theano-users] Could not initialize pygpu error with CUDA 9.0 and latest Theano

2018-05-10 Thread Frédéric Bastien
You could have multiple cuda version installed to have TF working.

On Thu, May 10, 2018 at 4:28 PM, Michael Klachko wrote:

> After struggling with this error for a day, I decided to upgrade CUDA to
> 9.1 and CuDNN to 7.1. After that I got "your driver might be too old"
> error, which was resolved by updating the driver to 396.24. Also, in the
> process I found out I had older CuDNN files in /usr/lib/x86_64-linux-gnu/
> directory. Not sure how they got there, perhaps because sometimes I
> installed CuDNN using .deb package, and sometimes by manually copying the
> files. So it's probably not a good idea to mix .deb and .run cuda
> installation methods.
>
> Anyway, theano works fine now, but unfortunately my Tensorflow is
> broken because it does not support cuda 9.1 yet... Will probably have to
> compile it from source.
>
>
>
> On Thursday, May 10, 2018 at 11:30:38 AM UTC-7, Arnaud Bergeron wrote:
>
>> This is a new one.  It is also very weird since gemm doesn't involve
>> cuLinkAddData.  This may be an error message from something else.
>>
>> First things first, since you are on cuda 9.0, I would recommend that you
>> update your driver to 384.111 or 390.*.  If that doesn't help, then I'll
>> need some help reproducing the problem since I don't get that in any of my
>> environments.
>>
> On May 8, 2018, at 6:15 PM, Michael Klachko wrote:
>>
>> I have CUDA 9.0 and CuDNN 7.0.5 on my Ubuntu 16.04, and Tensorflow works
>> fine. In order to install theano, I first installed miniconda, then ran 
>> "conda
>> install theano pygpu" and it seemed to have installed fine.
>>
>>
>>
>> However, here's what I get:
>>
>>
>> $ python
>> Python 3.6.5 |Anaconda, Inc.| (default, Apr 29 2018, 16:14:56)
>> [GCC 7.2.0] on linux
>> Type "help", "copyright", "credits" or "license" for more information.
>> >>> import theano
>> Using cuDNN version 7005 on context None
>> ERROR (theano.gpuarray): Could not initialize pygpu, support disabled
>> Traceback (most recent call last):
>>   File 
>> "/home/michael/miniconda2/envs/las/lib/python3.6/site-packages/theano/gpuarray/__init__.py",
>>  line 227, in 
>> use(config.device)
>>   File 
>> "/home/michael/miniconda2/envs/las/lib/python3.6/site-packages/theano/gpuarray/__init__.py",
>>  line 214, in use
>> init_dev(device, preallocate=preallocate)
>>   File 
>> "/home/michael/miniconda2/envs/las/lib/python3.6/site-packages/theano/gpuarray/__init__.py",
>>  line 159, in init_dev
>> pygpu.blas.gemm(0, tmp, tmp, 0, tmp, overwrite_c=True)
>>   File "pygpu/blas.pyx", line 149, in pygpu.blas.gemm
>>   File "pygpu/blas.pyx", line 47, in pygpu.blas.pygpu_blas_rgemm
>> pygpu.gpuarray.GpuArrayException: (b'cuLinkAddData: CUDA_ERROR_UNKNOWN: 
>> unknown error', 3)
>>
>>
>>
>> Here's the packages I have installed in this environment:
>>
>>
>> $ conda list
>> # packages in environment at /home/michael/miniconda2/envs/las:
>> #
>> # Name                     Version      Build            Channel
>> binutils_impl_linux-64     2.28.1       had2808c_3
>> binutils_linux-64          7.2.0        26
>> ca-certificates            2018.03.07   0
>> certifi                    2018.4.16    py36_0
>> gcc_impl_linux-64          7.2.0        habb00fd_3
>> gcc_linux-64               7.2.0        26
>> gxx_impl_linux-64          7.2.0        hdf63c60_3
>> gxx_linux-64               7.2.0        26
>> intel-openmp               2018.0.0     8
>> libedit                    3.1          heed3624_0
>> libffi                     3.2.1        hd88cf55_4
>> libgcc-ng                  7.2.0        hdf63c60_3
>> libgfortran-ng             7.2.0        hdf63c60_3
>> libgpuarray                0.7.5        h14c3975_0
>> libstdcxx-ng               7.2.0        hdf63c60_3
>> mako                       1.0.7        py36h0727276_0
>> markupsafe                 1.0          py36hd9260cd_1
>> mkl                        2018.0.2     1
>> mkl-service                1.1.2        py36h17a0993_4
>> mkl_fft                    1.0.1        py36h3010b51_0
>> mkl_random                 1.0.1        py36h629b387_0
>> ncurses                    6.0          h9df7e31_2
>> nose                       1.3.7        py36hcdf7029_2
>> numpy                      1.14.2       py36hdbf6ddf_1
>> openssl                    1.0.2o       h20670df_0
>> pip                        10.0.1       py36_0
>> pygpu                      0.7.5        py36h14c3975_0
>> python                     3.6.5        hc3d631a_2
>> readline                   7.0          ha6073c6_4
>> scipy                      1.0.1        py36hfc37229_0
>> setuptools                 39.1.0       py36_0
>> six                        1.11.0       py36h372c433_1
>> sqlite

Re: [theano-users] NotImplementedError: We didn't implemented yet the case where scan do 0 iteration

2018-05-04 Thread Frédéric Bastien
Quick reply: the problem is that you end up with 0 iterations in your
recurrence, and we do not support that. So check why your code could end up
with 0 iterations.
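
A minimal sketch of the failure mode (a hypothetical toy graph, not your
code):

import theano
import theano.tensor as T

n = T.iscalar('n')  # number of scan steps, provided at call time
result, _ = theano.scan(lambda prev: prev + 1,
                        outputs_info=T.constant(0.0),
                        n_steps=n)
f = theano.function([n], result)

print(f(3))  # fine: 3 iterations
# f(0) fails at run time with this NotImplementedError, because scan
# cannot (yet) execute 0 iterations; guard your inputs so n_steps >= 1.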

Frédéric

On Thu, Apr 26, 2018 at 3:12 AM jau94  wrote:

> Hi,
>
> I am using Theano == 1.0.1 for the Sequential Matching Network. I have
> tested their code and it works well.
>
> But now I want to modify their *predict method* in *SMN_last.py*. I want
> to be able to provide the test data each time I call the theano.function,
> instead of only giving the index (line 195) into a fixed test data:
>
> val_model = theano.function([*index*], [y,predict,cost,error], givens=val_dic,
>> on_unused_input='ignore')
>>
>>
> What if the test data varies each time I want to call the
> theano.function? I don't want to be restricted to selecting only from
> the batches of a fixed test data.
>
> Therefore, I have made the following change to the *predict method* in
> *SMN_last.py*. I called this method *load_graph*.
>
>
>
> def load_graph(U,batch_size=20,max_l =
>>> 100,hidden_size=100,word_embedding_size=100,session_hidden_size=50,session_input_size
>>> =50, model_name = 'SMN_last.bin',save_file_pred=None, max_turn=10):
>>>   # for optimization
>>
>> hiddensize = hidden_size
>>
>> U = U.astype(dtype=theano.config.floatX) #Cast the embedding matrix
>>> to a floatX tensor (THIS IS STILL A NUMPY ARRAY)
>>
>> rng = np.random.RandomState(3435)  #A single random number is
>>> generated and returned
>>
>> lsize, rsize = max_l,max_l
>>
>>
>>> #DECLARE THE INPUT TENSORS!!!
>>
>> test_set = T.matrix(dtype='int32')  #Creates a tensor matrix
>>
>> sessionmask = T.matrix()  #Creates a tensor matrix
>>
>> lx = []
>>
>> lxmask = []
>>
>> for i in range(max_turn): #For max_turn (default=10), generate as
>>> many tensor matrices
>>
>> lx.append(T.matrix())
>>
>> lxmask.append(T.matrix())
>>
>> index = T.lscalar() #Declare a tensor scalar
>>
>> rx = T.matrix('rx') #Declare a tensor matrix with a name. I think
>>> this will be the response!
>>
>> rxmask = T.matrix() #Mask for the response as a tensor matrix
>>
>> y = T.ivector('y')  #Declare a tensor scalar
>>
>> Words = theano.shared(value = U, name = "Words") #Declare a shared
>>> variable with the embeddings
>>
>>
>>>
>>> llayer0_input = []
>>
>> for i in range(max_turn):
>>
>> llayer0_input.append(Words[T.cast(lx[i].flatten(),dtype="int32")]
>>
>> .reshape((lx[i].shape[0],lx[i].shape[1],Words.shape[1])))
>>
>>
>>> rlayer0_input =
>>> Words[T.cast(rx.flatten(),dtype="int32")].reshape((rx.shape[0],rx.shape[1],Words.shape[1]))
>>># input: word embeddings of the mini batch
>>
>>
>>> # # #Why is divided in train, dev, test when we are predicting?
>>
>> # # #test_set = datasets
>>
>>
>>>
>>> q_embedding = []
>>
>> offset = 2 * lsize
>>
>>
>>>
>>> test_set_lx = []
>>
>> test_set_lx_mask = []
>>
>> for i in range(max_turn):
>>
>> test_set_lx.append(T.cast(test_set[:,offset*i:offset*i +
>>> lsize],dtype=theano.config.floatX))
>>
>> test_set_lx_mask.append(T.cast(test_set[:,offset*i +
>>> lsize:offset*i + 2*lsize],dtype=theano.config.floatX))
>>
>>
>>> test_set_rx = T.cast(test_set[:,offset*max_turn:offset*max_turn +
>>> lsize],dtype=theano.config.floatX)
>>
>> test_set_rx_mask = T.cast(test_set[:,offset*max_turn
>>> +lsize:offset*max_turn +2 *lsize],dtype=theano.config.floatX)
>>
>> test_set_session_mask =
>>> T.cast(test_set[:,-max_turn-1:-1],dtype=theano.config.floatX)
>>
>> test_set_y =T.cast(test_set[:,-1],dtype='int32') #somehow put int32
>>> here
>>
>>
>>>
>>> test_dic = {}
>>
>> for i in range(max_turn):
>>
>> test_dic[lx[i]] =
>>> test_set_lx[i][index*batch_size:(index+1)*batch_size]
>>
>> test_dic[lxmask[i]] =
>>> test_set_lx_mask[i][index*batch_size:(index+1)*batch_size]
>>
>> test_dic[rx] = test_set_rx[index*batch_size:(index+1)*batch_size]
>>
>> test_dic[sessionmask] =
>>> test_set_session_mask[index*batch_size:(index+1)*batch_size]
>>
>> test_dic[rxmask] =
>>> test_set_rx_mask[index*batch_size:(index+1)*batch_size]
>>
>> test_dic[y] = test_set_y[index*batch_size:(index+1)*batch_size]
>>
>>
>>>
>>> sentence2vec =
>>> GRU(n_in=word_embedding_size,n_hidden=hiddensize,n_out=hiddensize)
>>
>>
>>> for i in range(max_turn):
>>
>> q_embedding.append(sentence2vec(llayer0_input[i],lxmask[i],True))
>>
>> r_embedding = sentence2vec(rlayer0_input,rxmask,True)
>>
>>
>>> pooling_layer =
>>> ConvSim(rng,max_l,session_input_size,hidden_size=hiddensize)
>>
>>
>>> poolingoutput = []
>>
>> #test =
>>> theano.function([index],pooling_layer(llayer0_input[-4],rlayer0_input,q_embedding[i],r_embedding),givens=test_dic,on_unused_input='ignore')
>>
>>
>>>
>>> for i in range(max_turn):
>>
>> 

Re: [theano-users] Using theano.tensor.repeat with repeats.ndim == 1

2018-05-04 Thread Frédéric Bastien
You can try to implement it with tensor.alloc() and set_subtensor with
broadcasting.

This will work with the gradient and on the GPU.
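
Another workaround (a sketch of a different trick than the
alloc/set_subtensor one, under the same shapes as the example below): repeat
an integer index vector instead of the data, so RepeatOp's missing gradient
never matters:

import theano
import theano.tensor as T

m = T.fmatrix('m')   # data we need gradients for
v = T.ivector('v')   # per-row repeat counts; integers, so no gradient needed

# Repeat the *row indices*, then gather the rows with advanced indexing,
# which does have a gradient.
idx = T.repeat(T.arange(m.shape[0]), v)
out = m[idx]         # same result as T.repeat(m, v, axis=0)

g = T.grad(out.sum(), m)             # gradient flows through the indexing
f = theano.function([m, v], [out, g])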

On Thu, May 3, 2018 at 7:02 PM Pascal Lamblin 
wrote:

> Oh, right.
>
> Then, I don't think it will be implemented.
> And I don't think RepeatOp is optimized to use the GPU anyway.
>
>
> Sorry about that
>
> On 2018-05-03 06:01 PM, Kristjan Arumae wrote:
> > No, it only works with two vectors. Two matrices are not supported in
> > numpy either, I don't think, since that is likely to mess up the output
> > shape.
> >
> > On Thursday, May 3, 2018 at 4:03:15 PM UTC-4, Pascal Lamblin wrote:
> >
> > Does it work with two matrices?
> > If so, you can try to use dimshuffle to make v a "row" instead of a
> > "vector".
> >
> > On 2018-05-03 03:24 PM, Kristjan Arumae wrote:
> >  > An example of what I am doing:
> >  >
> >  > Here m is an fmatrix, and v is an ivector
> >  >
> >  > out = T.repeat(m, v, axis=0)
> >  >
> >  > The forward pass works fine, but there is no gradient code
> > implemented.
> >  >
> >  > This works fine when both inputs are vectors but not as above.
> >  >
> >  > I am not familiar with theano enough to fill in the missing code
> in
> >  > grad() for class RepeatOp().  Does anyone have suggestions as to a
> >  > workaround?  I have not found anything even remotely helpful so
> far.
> >  > I've tried using tile with scan, but to no end.
> >  >
> >  > Thanks.
> >  >
> >
> > --
> > Pascal Lamblin
> >
>
> --
> Pascal Lamblin
>



Re: [theano-users] Convolution with input shape = output shape

2018-05-04 Thread Frédéric Bastien
You can use the parameter border_mode=... to do what you want.

see:
http://deeplearning.net/software/theano/library/tensor/nnet/conv.html#theano.tensor.nnet.conv2d
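
For example, a minimal sketch with the shapes from the question below:
border_mode='half' zero-pads by filter_size // 2, so odd filter sizes like
5x5 preserve the spatial shape.

import numpy as np
import theano
import theano.tensor as T
from theano.tensor.nnet import conv2d

X = T.tensor4('X')  # (batch, channels, rows, cols)
W = theano.shared(np.random.randn(48, 3, 5, 5).astype('float32'), name='W')

# 'half' pads the input with 2 rows/columns of zeros for a 5x5 filter,
# so the output height and width equal the input's.
conv_out = conv2d(input=X, filters=W, border_mode='half')

f = theano.function([X], conv_out)
print(f(np.zeros((1, 3, 32, 32), dtype='float32')).shape)  # (1, 48, 32, 32)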

On Fri, May 4, 2018 at 5:14 AM Tayeb Benzenati  wrote:

> I want to create a CNN using theano, but I want to keep the same shape for
> the input and the output image.
> conv_out = conv2d(input=X, filters=W)
> with W.shape = (48, 3, 5, 5)
> How can I do that?
> Thanks!
>



Re: [theano-users] Using NumPy C-API based implementation for BLAS functions.

2018-04-25 Thread Frédéric Bastien
It should install g++.

But your error message is probably different from what the original user
got. What is your own error?
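
As a quick check, here is a sketch of how to see which BLAS Theano will link
against (the dotted config access is the old-style Theano API):

import numpy as np
import theano

# Empty ldflags means Theano found no BLAS to link against and falls back
# to the NumPy C-API implementation, which is what the warning means.
print(theano.config.blas.ldflags)
np.__config__.show()  # which BLAS numpy itself was built with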

On Thu, Mar 29, 2018 at 8:03 PM, Qinpeng Wang wrote:

> Hi,
>
> This link: http://deeplearning.net/software/theano/install_windows.html
>  mentions:
>
> Optional requirements
>> GCC compiler with g++ (version >= 4.2.*)
>
>
> If I installed m2w64-toolchain, do I still need to worry about g++?
>
> Thanks!
>
> On Thursday, March 29, 2018 at 6:38:56 PM UTC-5, Qinpeng Wang wrote:
>>
>> Hi,
>>
>> I have the mkl-service package installed via conda, but why do I still get
>> this error message? How can I link Theano with the desired BLAS library?
>>
>> Best,
>> Qinpeng
>>
>> On Tuesday, November 28, 2017 at 11:59:29 AM UTC-6, Pascal Lamblin wrote:
>>>
>>> Hi,
>>>
>>> Usually, Theano tries to link directly with a BLAS library (MKL,
>>> OpenBlas...) if it is able to detect one, and uses it for dot products on
>>> CPU.
>>> If it does not, it falls back to using the C API of numpy
>>> instead, which can be slower and result in more memory copies.
>>>
>>> On 2017-11-23 05:00 AM, Mathias Müller wrote:
>>> >
>>> > Hi,
>>> >
>>> > With the newest Theano, I get the following warning:
>>> >
>>> > |
>>> > WARNING (theano.tensor.blas): Using NumPy C-API based implementation
>>> > for BLAS functions.
>>> > |
>>> >
>>> > What does this message mean? Does that mean there is an alternative to
>>> > Numpy C-API based BLAS functions?
>>> >
>>> > Thanks a lot for your help.
>>> > Mathias
>>> >
>>>
>>> --
>>> Pascal Lamblin
>>>

Re: [theano-users] can i use theano with one of these( Angular4, ionic3, JavaScript, phonegap)

2018-03-08 Thread Frédéric Bastien
Theano could work as a connection, but theano requires a compiler at run
time. You do not want that on a phone, I think. So Theano in its current
form does not seem like good software for your need.

Frédéric

On Thu, Feb 22, 2018 at 2:16 AM <3h1...@gmail.com> wrote:

> Hi everybody,
> I have a question: can I use theano with one of these (Angular4,
> ionic3, JavaScript, phonegap)?
>
> My project is about face recognition technology,
> and my teacher wants it to be a mobile application;
> specifically, she wants me to use ionic3 to build this app.
> I did not find any face recognition library for these languages,
> so
> could theano work as a connecting layer between python and javascript, for
> example?
>



Re: [theano-users] problem of "out of memory" for theano 0.9.0

2018-03-08 Thread Frédéric Bastien
Something else could be using the memory of the GPU, like the GUI or another
process in the background. Can you reboot the computer? If that does not fix
the problem, try executing the command "nvidia-smi" in a terminal.

Frédéric

On Tue, Feb 27, 2018 at 10:54 AM Fei Tao  wrote:

> Hi, Dear all,
>
> I have two computers, one with GTX 1070 (8GB g-mem) and the other with GTX
> 745 (4GB g-mem).
>
> For both of them, I am using Keras 2.0.8 and Theano 0.9.0. I copy the
> theano and keras configuration files to both computers so that they have
> the same setting. The only difference is GTX 1070 works with cuDNN 5105,
> while GTX 745 works with cuDNN 5110.
>
> I also use the same python codes for both computers (copy the program to
> both computers). However, I have a problem about GTX 1070, showing that it
> is 'GPU out of memory' during initialization, while the GTX 745 is totally
> fine.
>
> I am using Ubuntu 14.04.
>
> This is very weird since GTX 1070 has more memory than GTX 745. Is there
> any setting I can check to find out what the problem is?
>
> How can I fix this problem?
>
> Thank you very much!
>



Re: [theano-users] Avoiding HostFromGPU at every Index into Shared Variable?

2018-02-07 Thread Frédéric Bastien
On the GPU, not all indexing is fast. Slices are fast (just a view).
But for advanced indexing, only this version has been well optimized:

a_tensor[a_vector_of_int]

From memory, the vector_of_int can be on any of the dimensions, but for sure
on the first dimension.

We have code that supports more advanced indexing on the GPU, but sometimes
it is slower, sometimes faster. So it is not activated by default.
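
A minimal sketch of the fast pattern, using the same shapes as your example
below: one gather with the whole index vector instead of B scalar lookups,
so no per-index HostFromGpu transfer is needed.

import numpy as np
import theano
import theano.tensor as T

H = W = 3; N = 10; B = 3
src = theano.shared(np.random.rand(N, H, W).astype(np.float32), name="src")
dest = theano.shared(np.zeros([B, H, W], dtype=np.float32), name="dest")
idxs = T.ivector('idxs')

# One advanced-indexing gather: the index vector is moved to the GPU once
# and the selected rows never leave it.
f = theano.function(inputs=[idxs], updates=[(dest, src[idxs])])

f(np.random.randint(low=0, high=N, size=B).astype(np.int32))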

For the "other computation being slow". It will depend what is that
computation. Without seeing the profile of that part, I can't comment. But
we didn't spend a good amount of time optimizing those type of computation.
So I'm not suprised that there is case when the generated code isn't very
optimized.

Frédéric


On Fri, Jan 19, 2018 at 3:42 PM Adam Stooke  wrote:

> Hi,
>
>   I am holding an array on the GPU (in a shared variable), and I'm
> sampling random minibatches from it, but it seems there is a call to
> HostFromGpu at every index, which causes significant delay.  Is there a way
> to avoid this?
>
>   Here is a minimal code example, plus the debug and profiling printouts.
> The same thing happens if I use theano.map.  The problem is much worse in
> my actual code, which uses multiple levels of indexing--despite also using
> much larger data arrays, the time in the many calls to HostFromGpu
> dominates.
>
>
> Code example:
>
> import theano
> import theano.tensor as T
> import numpy as np
>
> H = W = 3
> N = 10
> B = 3
>
> src = theano.shared(np.random.rand(N, H, W).astype(np.float32), name="src")
> dest = theano.shared(np.zeros([B, H, W], dtype=np.float32), name="dest")
> idxs = T.ivector('idxs')
>
> selections = [src[idxs[i]] for i in range(B)]
> new_dest = T.stack(selections)
> updates = [(dest, new_dest)]
> f = theano.function(inputs=[idxs], updates=updates)
>
> np_idxs = np.random.randint(low=0, high=N, size=B).astype(np.int32)
> print(dest.get_value())
> f(np_idxs)
> print(dest.get_value())
>
> theano.printing.debugprint(f)
> for _ in range(10):
> f(np_idxs)
>
>
> Debugprint (notice the HostFromGpu listed with unique ID leading up to
> each ScalarFromTensor):
>
> GpuJoin [id A] ''   16
>  |TensorConstant{0} [id B]
>  |InplaceGpuDimShuffle{x,0,1} [id C] ''   15
>  | |GpuSubtensor{int32} [id D] ''   14
>  |   |src [id E]
>  |   |ScalarFromTensor [id F] ''   13
>  | |HostFromGpu(gpuarray) [id G] ''   12
>  |   |GpuSubtensor{int64} [id H] ''   11
>  | |GpuFromHost [id I] ''   0
>  | | |idxs [id J]
>  | |Constant{0} [id K]
>  |InplaceGpuDimShuffle{x,0,1} [id L] ''   10
>  | |GpuSubtensor{int32} [id M] ''   9
>  |   |src [id E]
>  |   |ScalarFromTensor [id N] ''   8
>  | |HostFromGpu(gpuarray) [id O] ''   7
>  |   |GpuSubtensor{int64} [id P] ''   6
>  | |GpuFromHost [id I] ''   0
>  | |Constant{1} [id Q]
>  |InplaceGpuDimShuffle{x,0,1} [id R] ''   5
>|GpuSubtensor{int32} [id S] ''   4
>  |src [id E]
>  |ScalarFromTensor [id T] ''   3
>|HostFromGpu(gpuarray) [id U] ''   2
>  |GpuSubtensor{int64} [id V] ''   1
>|GpuFromHost [id I] ''   0
>|Constant{2} [id W]
>
>
>
> Theano profile (in 10 calls to the function--notice 10 calls to
> GpuFromHost but 30 calls to HostFromGPU):
>
> Class
> ---
> <% time> <#call> <#apply>
> 
>   38.9%38.9%   0.001s   5.27e-05s C   10   1
>  theano.gpuarray.basic_ops.GpuJoin
>   31.5%70.4%   0.000s   1.42e-05s C   30   3
>  theano.gpuarray.basic_ops.HostFromGpu
>   15.0%85.4%   0.000s   2.03e-05s C   10   1
>  theano.gpuarray.basic_ops.GpuFromHost
>7.4%92.8%   0.000s   1.67e-06s C   60   6
>  theano.gpuarray.subtensor.GpuSubtensor
>6.0%98.8%   0.000s   2.69e-06s C   30   3
>  theano.gpuarray.elemwise.GpuDimShuffle
>1.2%   100.0%   0.000s   5.56e-07s C   30   3
>  theano.tensor.basic.ScalarFromTensor
>... (remaining 0 Classes account for   0.00%(0.00s) of the runtime)
>
>
>
> Appreciate any tips! Thanks!
> Adam
>
>
>
>
>
>



Re: [theano-users] Numpy error during optimization phase

2018-02-07 Thread Frédéric Bastien
I'm not able to reproduce it.

On which OS? Which Theano version? Can you try a Theano version at least
1.0.1?

You can ignore this "error". Mostly, some optimizations are skipped. But I
would still like to fix it.

I ran the tests like this:

THEANO_FLAGS=device=cuda,floatX=float32 nosetests test_ctc.py &> OUT

What are your Theano flags?
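
(A quick way to dump the effective configuration, as a sketch:)

import theano
# Prints every config value after .theanorc and THEANO_FLAGS are applied.
print(theano.config)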

On Wed, Jan 24, 2018 at 5:05 AM  wrote:

> Hi everyone,
>
> While using an OpFromGraph involving some operations with binary values,
> there is an optimization error:
>
> theano.gof.opt: ERROR: Optimization failure due to: local_add_canonizer
>> theano.gof.opt: ERROR: node:
>> Elemwise{add,no_inplace}(InplaceDimShuffle{0,1,x}.0,
>> InplaceDimShuffle{x,0,1}.0)
>> theano.gof.opt: ERROR: TRACEBACK:
>> theano.gof.opt: ERROR: Traceback (most recent call last):
>> File "/home/granger/dev/Theano/theano/gof/opt.py", line 2034, in
>> process_node
>> replacements = lopt.transform(node)
>> File "/home/granger/dev/Theano/theano/tensor/opt.py", line 4989, in
>> transform
>> num, denum = self.simplify(list(orig_num), list(orig_denum), out.type)
>> File "/home/granger/dev/Theano/theano/tensor/opt.py", line 4833, in
>> simplify
>> out_type=out_type)
>> File "/home/granger/dev/Theano/theano/tensor/opt.py", line 4919, in
>> simplify_constants
>> out_type=out_type)
>> File "/home/granger/dev/Theano/theano/tensor/opt.py", line 6328, in
>> add_calculate
>> v = reduce(np.add, num, zero) - reduce(np.add, denum, zero)
>> TypeError: numpy boolean subtract, the `-` operator, is deprecated, use
>> the bitwise_xor, the `^` operator, or the logical_xor function instead.
>
>
> This error does not happen when running on the CPU backend.
> I suspect it might be due to the use of binary values in my code, but the
> log message is not very helpful. Is there any way to get some more
> information to track down the error? Note that the fast_compile optimizer
> does not trigger the error, only the fast_run one.
>
> A demo code and the complete output is available here:
> https://gist.github.com/nlgranger/279bda7fff356cfe3f40ad6397d0ba04
>
> Best,
> Nicolas
>



Re: [theano-users] Re: global name 'float32_shared_constructor' is not defined

2018-02-07 Thread Frédéric Bastien
Thanks for your solution. But I think you had multiple Theano versions
installed. Removing the conda version and updating the pip version seems to
have fixed your problem.

Fred

On Fri, Feb 2, 2018 at 2:56 AM Rakesh Malviya 
wrote:

> Solved the issue by:
>
> 1. I replaced mkl 2018 to mkl 2017 by using conda install mkl=2017
> 2. Removed conda theano:  conda uninstall theano
> 3. Used pip theano : pip install theano
>
> Thanks and regards,
> Rakesh
>
>
> On Thursday, February 1, 2018 at 12:08:14 PM UTC+5:30, Rakesh Malviya
> wrote:
>>
>> Hi,
>>
>> I am running theano code from following repo
>> https://github.com/luheng/deep_srl
>>
>> I am getting following error:
>> Traceback (most recent call last):
>>   File "python/train.py", line 163, in 
>> train_tagger(args)
>>   File "python/train.py", line 85, in train_tagger
>> model = BiLSTMTaggerModel(data, config=config)
>>   File
>> "/home/holmes/rakesh_work/deepSRL/deep_srl/python/neural_srl/theano/tagger.py",
>> line 67, in __init__
>> self.is_train)
>>   File
>> "/home/holmes/rakesh_work/deepSRL/deep_srl/python/neural_srl/theano/layer.py",
>> line 248, in connect
>> return LSTMLayer.connect(self, inputs, mask, is_train)
>>   File
>> "/home/holmes/rakesh_work/deepSRL/deep_srl/python/neural_srl/theano/layer.py",
>> line 167, in connect
>> self.recurrent_dropout_layer.generate_mask([batch_size,
>> self.hidden_dim], is_train)
>>   File
>> "/home/holmes/rakesh_work/deepSRL/deep_srl/python/neural_srl/theano/layer.py",
>> line 472, in generate_mask
>> dtype=floatX)
>>   File
>> "/home/holmes/intel/intelpython2/lib/python2.7/site-packages/theano/sandbox/rng_mrg.py",
>> line 1392, in binomial
>> x = self.uniform(size=size, dtype=dtype, nstreams=nstreams)
>>   File
>> "/home/holmes/intel/intelpython2/lib/python2.7/site-packages/theano/sandbox/rng_mrg.py",
>> line 1357, in uniform
>> node_rstate = float32_shared_constructor(rstates)
>> NameError: global name 'float32_shared_constructor' is not defined
>>
>> I searched for similar issues on github and in this group, but with no
>> success. Please let me know if we can solve this?
>>
>> Thanks and regards,
>> Rakesh Malviya
>>



Re: [theano-users] Error while running code on gpu

2018-02-07 Thread Frédéric Bastien
Try just this:

python -c "import pygpu"

You need to install this package. We recommend conda:

conda install theano pygpu

On Tue, Feb 6, 2018 at 12:55 AM  wrote:

> This is the error I'm getting when I try to run the following command:
> CUDA_VISIBLE_DEVICES=0
> THEANO_FLAGS=mode=FAST_RUN,device=cuda0,floatX=float32 python main.py
>
> ERROR (theano.gpuarray): pygpu was configured but could not be imported or
> is too old (version 0.7 or higher required)
> Traceback (most recent call last):
>   File "/home/divya/Theano/theano/gpuarray/__init__.py", line 23, in
> 
> import pygpu
> ImportError: No module named pygpu
>
> How do I resolve this issue?
>



Re: [theano-users] Numpy error during optimization phase

2018-02-02 Thread Frédéric Bastien
Thanks for the report. Which version of numpy do you use?

The problem seems related to your numpy version, judging from the error message.

On Wed, Jan 24, 2018 at 5:05 AM  wrote:

> Hi everyone,
>
> While using an OpFromGraph involving some operations with binary values,
> there is an optimization error:
>
> theano.gof.opt: ERROR: Optimization failure due to: local_add_canonizer
>> theano.gof.opt: ERROR: node:
>> Elemwise{add,no_inplace}(InplaceDimShuffle{0,1,x}.0,
>> InplaceDimShuffle{x,0,1}.0)
>> theano.gof.opt: ERROR: TRACEBACK:
>> theano.gof.opt: ERROR: Traceback (most recent call last):
>> File "/home/granger/dev/Theano/theano/gof/opt.py", line 2034, in
>> process_node
>> replacements = lopt.transform(node)
>> File "/home/granger/dev/Theano/theano/tensor/opt.py", line 4989, in
>> transform
>> num, denum = self.simplify(list(orig_num), list(orig_denum), out.type)
>> File "/home/granger/dev/Theano/theano/tensor/opt.py", line 4833, in
>> simplify
>> out_type=out_type)
>> File "/home/granger/dev/Theano/theano/tensor/opt.py", line 4919, in
>> simplify_constants
>> out_type=out_type)
>> File "/home/granger/dev/Theano/theano/tensor/opt.py", line 6328, in
>> add_calculate
>> v = reduce(np.add, num, zero) - reduce(np.add, denum, zero)
>> TypeError: numpy boolean subtract, the `-` operator, is deprecated, use
>> the bitwise_xor, the `^` operator, or the logical_xor function instead.
>
>
> This error does not happen when running on the CPU backend.
> I suspect it might be due to the use of binary values in my code, but the
> log message is not very helpful. Is there any way to get some more
> information to track down the error? Note that the fast_compile optimizer
> does not trigger the error, only the fast_run one.
>
> A demo code and the complete output is available here:
> https://gist.github.com/nlgranger/279bda7fff356cfe3f40ad6397d0ba04
>
> Best,
> Nicolas
>



Re: [theano-users] global name 'float32_shared_constructor' is not defined

2018-02-01 Thread Frédéric Bastien
Which version of Theano do you use? Updating Theano could help.
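
(A one-liner to check, as a sketch:)

import theano
print(theano.__version__)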

Fred

On Thu, Feb 1, 2018 at 1:38 AM Rakesh Malviya 
wrote:

> Hi,
>
> I am running theano code from following repo
> https://github.com/luheng/deep_srl
>
> I am getting following error:
> Traceback (most recent call last):
>   File "python/train.py", line 163, in 
> train_tagger(args)
>   File "python/train.py", line 85, in train_tagger
> model = BiLSTMTaggerModel(data, config=config)
>   File
> "/home/holmes/rakesh_work/deepSRL/deep_srl/python/neural_srl/theano/tagger.py",
> line 67, in __init__
> self.is_train)
>   File
> "/home/holmes/rakesh_work/deepSRL/deep_srl/python/neural_srl/theano/layer.py",
> line 248, in connect
> return LSTMLayer.connect(self, inputs, mask, is_train)
>   File
> "/home/holmes/rakesh_work/deepSRL/deep_srl/python/neural_srl/theano/layer.py",
> line 167, in connect
> self.recurrent_dropout_layer.generate_mask([batch_size,
> self.hidden_dim], is_train)
>   File
> "/home/holmes/rakesh_work/deepSRL/deep_srl/python/neural_srl/theano/layer.py",
> line 472, in generate_mask
> dtype=floatX)
>   File
> "/home/holmes/intel/intelpython2/lib/python2.7/site-packages/theano/sandbox/rng_mrg.py",
> line 1392, in binomial
> x = self.uniform(size=size, dtype=dtype, nstreams=nstreams)
>   File
> "/home/holmes/intel/intelpython2/lib/python2.7/site-packages/theano/sandbox/rng_mrg.py",
> line 1357, in uniform
> node_rstate = float32_shared_constructor(rstates)
> NameError: global name 'float32_shared_constructor' is not defined
>
> I searched for similar issues on github and in this group, but with no
> success. Please let me know if we can solve this?
>
> Thanks and regards,
> Rakesh Malviya
>



Re: [theano-users] ModuleNotFoundError: No module named 'pkg_resources'

2018-01-31 Thread Frédéric Bastien
How did you install Theano? We only support installing Theano with our
conda package on Windows.

On Sun, Jan 14, 2018 at 10:32 AM, Karin Westin wrote:

> Hi!
>
> I'm going through the theano tutorial, and when running the function
> theano.scan, I get the following error:
>
> Traceback (most recent call last):
>   File "C:\Users\Karin\Desktop\test.py", line 10, in 
> results, updates = theano.scan(lambda v: T.tanh(T.dot(v, W) + b_sym),
> sequences=X)
>   File
> "C:\Users\Karin\AppData\Local\Programs\Python\Python36\lib\site-packages\theano\scan_module\scan.py",
> line 1005, in scan
> from theano import gpuarray
>   File
> "C:\Users\Karin\AppData\Local\Programs\Python\Python36\lib\site-packages\theano\gpuarray\__init__.py",
> line 33, in 
> from . import fft, dnn, opt, extra_ops, multinomial, reduction, sort,
> rng_mrg, ctc
>   File
> "C:\Users\Karin\AppData\Local\Programs\Python\Python36\lib\site-packages\theano\gpuarray\fft.py",
> line 14, in 
> from .opt import register_opt, op_lifter, register_opt2
>   File
> "C:\Users\Karin\AppData\Local\Programs\Python\Python36\lib\site-packages\theano\gpuarray\opt.py",
> line 85, in 
> from .linalg import (GpuCusolverSolve, MATRIX_STRUCTURES_SOLVE,
> GpuCholesky,
>   File
> "C:\Users\Karin\AppData\Local\Programs\Python\Python36\lib\site-packages\theano\gpuarray\linalg.py",
> line 5, in 
> import pkg_resources
> ModuleNotFoundError: No module named 'pkg_resources'
>
> I'm on a Windows 7, Python 3.6.3, installed with Miniconda3, conda 4.4.7.
> Setuptools 38.4.0 is also installed using conda.
>
> Any idea what's wrong?
>



Re: [theano-users] Re: GpuArrayException: cuInit: CUDA_ERROR_UNKNOWN: unknown error

2018-01-31 Thread Frédéric Bastien
Thanks for the answer.

Fred

On Thu, Jan 25, 2018 at 3:11 PM, Hendrik Weideman wrote:

> For anyone else coming across this post, it turns out CUDA 9.1 doesn't
> support Ubuntu 14.04:
>
> http://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#system-requirements
>
> Everything works with CUDA 8.0 and NVIDIA 375.26.
>
>
> On Thursday, January 25, 2018 at 10:17:36 AM UTC-5, Hendrik Weideman wrote:
>>
>> When I try to import Theano, I run into the following error message:
>>
>> ERROR (theano.gpuarray): Could not initialize pygpu, support disabled
>> Traceback (most recent call last):
>>   File "/home/user/Theano/theano/gpuarray/__init__.py", line 227, in
>> 
>> use(config.device)
>>   File "/home/user/Theano/theano/gpuarray/__init__.py", line 214, in use
>> init_dev(device, preallocate=preallocate)
>>   File "/home/user/Theano/theano/gpuarray/__init__.py", line 99, in
>> init_dev
>> **args)
>>   File "pygpu/gpuarray.pyx", line 658, in pygpu.gpuarray.init
>> (pygpu/gpuarray.c:9628)
>>   File "pygpu/gpuarray.pyx", line 587, in pygpu.gpuarray.pygpu_init
>> (pygpu/gpuarray.c:9038)
>> GpuArrayException: cuInit: CUDA_ERROR_UNKNOWN: unknown error
>>
>> I built and installed libgpuarray successfully, and Theano's installation
>> completes without any errors.
>> I'm running Ubuntu 14.04, with CUDA 9.1 and cuDNN 7.  I'm currently
>> running NVIDIA 375.66, but I've
>> also tried 384.111 with no luck.
>>
>> Output of nvidia-smi:
>> NVIDIA-SMI 375.66 Driver Version: 375.66
>> (And other information, showing three GPUs, GTX 660 and 2x TITAN X).
>>
>> Output of nvcc --version:
>> nvcc: NVIDIA (R) Cuda compiler driver
>> Copyright (c) 2005-2017 NVIDIA Corporation
>> Built on Fri_Nov__3_21:07:56_CDT_2017
>> Cuda compilation tools, release 9.1, V9.1.85
>>
>> Any ideas?  The error message doesn't really give me a good lead on what
>> I should be investigating.
>>



Re: [theano-users] Porting GPU operations from the old GPU backend to the CUDA backend

2018-01-31 Thread Frédéric Bastien
You should be able to do similar things.

Fred

On Wed, Jan 17, 2018 at 2:37 PM, Minh Ngo wrote:

> Hello,
>
> Before theano version 1.0 it was quite straightforward to port existing
> CUDA layers available in Caffe to Theano by writing a small piece of CUDA
> code and specifying the *.cu file using theano.sandbox.cuda.GpuOp. For
> instance, the correlation layer which I ported for the FlowNet
> architecture [1, 2].
>
> I would like to ask if there is any straightforward way to do the same
> using the newly introduced libgpuarray backend.
>
> - Minh
>
> [1]:
> https://github.com/Ignotus/theano-flownet/blob/master/correlation_layer.cu
> [2]:
> https://github.com/Ignotus/theano-flownet/blob/master/correlation_layer.py
>



Re: [theano-users] AttributeError: module 'theano' has no attribute 'ifelse'

2018-01-31 Thread Frédéric Bastien
It is not imported by default. Just

import theano.ifelse
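
For example, a minimal sketch once the import is done (using the from-import
variant):

import theano
import theano.tensor as T
from theano.ifelse import ifelse

a, b = T.scalars('a', 'b')
z = ifelse(T.lt(a, b), a, b)   # lazy: only the taken branch is computed
f = theano.function([a, b], z)
print(f(1, 2))  # 1.0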

Fred

On Sat, Jan 13, 2018 at 5:10 PM, ziqi zhang wrote:

> I am installing theano following the instructions here
> http://deeplearning.net/software/theano/install_ubuntu.html, on linux
> ubuntu.
>
> Specifically, the commands are:
>
> step 1, install anaconda: bash Anaconda3-5.0.1-Linux-x86_64.sh
> step 2, install theano: conda install numpy scipy mkl nose sphinx pydot-ng
> step 3, conda install theano pygpu
>
> Then testing:
>
> python
> Python 3.6.3 |Anaconda, Inc.| (default, Oct 13 2017, 12:02:49)
> [GCC 7.2.0] on linux
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import theano
> >>> theano.ifelse
> Traceback (most recent call last):
>   File "", line 1, in 
> AttributeError: module 'theano' has no attribute 'ifelse'
>
>
> Then as you can see, I get an error show above.
>
> Because this error, my code that works on a different platform cannot
> work. The code also uses Keras, which calls theano to create models.
>
> What does the error mean and how can I fix this?
>
> Many thanks
>
>



Re: [theano-users] Help use AMD graphics card with Theano 1.01

2018-01-31 Thread Frédéric Bastien
Yes

On Mon, Jan 29, 2018 at 11:57 AM, Mikel Esparza wrote:

> Oh, does it mean that I can only use the GPU if it is an NVIDIA gpu? Sorry,
> but I'm new in this world.
>
> Thank you for your quick answer!!
>
>
> On Monday, January 29, 2018 at 5:47:08 PM UTC+1, nouiz wrote:
>
>> Hi,
>>
>> we have only experimental support for OpenCL. It isn't in a usable state.
>> Due to the news below, we won't finish it:
>>
>>
>> https://groups.google.com/forum/#!msg/theano-users/7Poq8BZutbY/rNCIfvAEAwAJ
>>
>> Frédéric
>>
>> On Mon, Jan 29, 2018 at 11:14 AM Mikel Esparza 
>> wrote:
>>
> Hi! I'm trying to configure my computer to start working in deep learning.
>>> For that I would like to use Theano as the back end. I've tried to configure
>>> Theano to use my graphics card (AMD Radeon HD 7900 series) but so far
>>> I can't. I've created the .theanorc file in my home folder, and after
>>> running the test script, it always says that it is using the CPU. In the
>>> output of the script I have this:
>>>
>>> ERROR (theano.gpuarray): Could not initialize pygpu, support disabled
>>> Traceback (most recent call last):
>>>   File
>>> "C:\Users\maixi\AppData\Local\Continuum\anaconda3\lib\site-packages\theano\gpuarray\__init__.py",
>>> line 227, in 
>>> use(config.device)
>>>   File
>>> "C:\Users\maixi\AppData\Local\Continuum\anaconda3\lib\site-packages\theano\gpuarray\__init__.py",
>>> line 214, in use
>>> init_dev(device, preallocate=preallocate)
>>>   File
>>> "C:\Users\maixi\AppData\Local\Continuum\anaconda3\lib\site-packages\theano\gpuarray\__init__.py",
>>> line 99, in init_dev
>>> **args)
>>>   File "pygpu\gpuarray.pyx", line 651, in pygpu.gpuarray.init
>>>   File "pygpu\gpuarray.pyx", line 587, in pygpu.gpuarray.pygpu_init
>>> pygpu.gpuarray.GpuArrayException: b'Could not load "nvcuda.dll": No se
>>> puede encontrar el m\xf3dulo especificado.\r\n'
>>> [Elemwise{exp,no_inplace}()]
>>> Looping 1000 times took 11.217949 seconds
>>> Result is [ 1.23178029  1.61879337  1.52278066 ...,  2.20771813
>>> 2.29967761
>>>   1.62323284]
>>> Used the cpu
>>>
>>> Can anyone help me??
>>>
>>> Thanks in advance :D
>>>



Re: [theano-users] Help use AMD graphics card with Theano 1.01

2018-01-29 Thread Frédéric Bastien
Hi,

we have only experimental support for OpenCL. It isn't in a usable state.
Due to the news below, we won't finish it:

https://groups.google.com/forum/#!msg/theano-users/7Poq8BZutbY/rNCIfvAEAwAJ

Frédéric

On Mon, Jan 29, 2018 at 11:14 AM Mikel Esparza 
wrote:

> Hi! I'm trying to configure my computer to start working in deep learning.
> For that I would like to use Theano as the back end. I've tried to configure
> Theano to use my graphics card (AMD Radeon HD 7900 series) but so far
> I can't. I've created the .theanorc file in my home folder, and after
> running the test script, it always says that it is using the CPU. In the
> output of the script I have this:
>
> ERROR (theano.gpuarray): Could not initialize pygpu, support disabled
> Traceback (most recent call last):
>   File
> "C:\Users\maixi\AppData\Local\Continuum\anaconda3\lib\site-packages\theano\gpuarray\__init__.py",
> line 227, in 
> use(config.device)
>   File
> "C:\Users\maixi\AppData\Local\Continuum\anaconda3\lib\site-packages\theano\gpuarray\__init__.py",
> line 214, in use
> init_dev(device, preallocate=preallocate)
>   File
> "C:\Users\maixi\AppData\Local\Continuum\anaconda3\lib\site-packages\theano\gpuarray\__init__.py",
> line 99, in init_dev
> **args)
>   File "pygpu\gpuarray.pyx", line 651, in pygpu.gpuarray.init
>   File "pygpu\gpuarray.pyx", line 587, in pygpu.gpuarray.pygpu_init
> pygpu.gpuarray.GpuArrayException: b'Could not load "nvcuda.dll": No se
> puede encontrar el m\xf3dulo especificado.\r\n'
> [Elemwise{exp,no_inplace}()]
> Looping 1000 times took 11.217949 seconds
> Result is [ 1.23178029  1.61879337  1.52278066 ...,  2.20771813  2.29967761
>   1.62323284]
> Used the cpu
>
> Can anyone help me??
>
> Thanks in advance :D
>



Re: [theano-users] Re: GpuCorrMM encountered a CUBLAS error

2018-01-10 Thread Frédéric Bastien
Do you have multiple cuDNN versions installed? I have the impression Theano
is in an environment with multiple cuDNN versions available.

Can you delete your Theano cache? This could also help.

theano-cache purge
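
If it helps to see what that command clears, you can print the compilation
cache directory Theano is using (a quick sketch, not part of the original reply):

import theano
print(theano.config.compiledir)  # the per-platform cache that theano-cache purge deletes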



On Wed, Dec 13, 2017 at 9:20 AM Beatriz G.  wrote:

> After trying a lot of things, I decided to uninstall and reinstall
> theano, and a new version was installed. The new version requires cuda, so
> my theanorc file is now like:
>
> [global]
> device = cuda
>
> floatX = float32
>
>
> [blas]
> ldflags = -lopenblas
>
>
> [nvcc]
> # flags=-D_FORCE_INLINES
> optimizer_including=cudnn
>
> [cuda]
> root=/usr/local/cuda-9.1
>
>
> And I get the following output after trying Lenet:
>
>
> Using cuDNN version 7005 on context None
> Mapped name None to device cuda: GeForce GTX 750 Ti (:06:00.0)
>
> ... loading data
> ... building the model
> LENET.py:108: UserWarning: DEPRECATION: the 'ds' parameter is not going to
> exist anymore as it is going to be replaced by the parameter 'ws'.
>   ignore_border=True
>
> Traceback (most recent call last):
>   File "LENET.py", line 394, in 
> evaluate_lenet5()
>   File "LENET.py", line 228, in evaluate_lenet5
> y: test_set_y[index * batch_size: (index + 1) * batch_size]
>   File
> "/home/bea/anaconda2/lib/python2.7/site-packages/theano/compile/function.py",
> line 317, in function
> output_keys=output_keys)
>   File
> "/home/bea/anaconda2/lib/python2.7/site-packages/theano/compile/pfunc.py",
> line 486, in pfunc
> output_keys=output_keys)
>   File
> "/home/bea/anaconda2/lib/python2.7/site-packages/theano/compile/function_module.py",
> line 1841, in orig_function
> fn = m.create(defaults)
>   File
> "/home/bea/anaconda2/lib/python2.7/site-packages/theano/compile/function_module.py",
> line 1715, in create
> input_storage=input_storage_lists, storage_map=storage_map)
>   File
> "/home/bea/anaconda2/lib/python2.7/site-packages/theano/gof/link.py", line
> 699, in make_thunk
> storage_map=storage_map)[:3]
>   File "/home/bea/anaconda2/lib/python2.7/site-packages/theano/gof/vm.py",
> line 1084, in make_all
> impl=impl))
>   File "/home/bea/anaconda2/lib/python2.7/site-packages/theano/gof/op.py",
> line 955, in make_thunk
> no_recycling)
>   File "/home/bea/anaconda2/lib/python2.7/site-packages/theano/gof/op.py",
> line 858, in make_c_thunk
> output_storage=node_output_storage)
>   File "/home/bea/anaconda2/lib/python2.7/site-packages/theano/gof/cc.py",
> line 1217, in make_thunk
> keep_lock=keep_lock)
>   File "/home/bea/anaconda2/lib/python2.7/site-packages/theano/gof/cc.py",
> line 1157, in __compile__
> keep_lock=keep_lock)
>   File "/home/bea/anaconda2/lib/python2.7/site-packages/theano/gof/cc.py",
> line 1620, in cthunk_factory
> key=key, lnk=self, keep_lock=keep_lock)
>   File
> "/home/bea/anaconda2/lib/python2.7/site-packages/theano/gof/cmodule.py",
> line 1174, in module_from_key
> module = lnk.compile_cmodule(location)
>   File "/home/bea/anaconda2/lib/python2.7/site-packages/theano/gof/cc.py",
> line 1523, in compile_cmodule
> preargs=preargs)
>   File
> "/home/bea/anaconda2/lib/python2.7/site-packages/theano/gof/cmodule.py",
> line 2368, in compile_str
> return dlimport(lib_filename)
>   File
> "/home/bea/anaconda2/lib/python2.7/site-packages/theano/gof/cmodule.py",
> line 302, in dlimport
> rval = __import__(module_name, {}, {}, [module_name])
> ImportError: ('The following error happened while compiling the node',
> GpuDnnConv{algo='small', inplace=True, num_groups=1}(GpuContiguous.0,
> GpuContiguous.0, GpuAllocEmpty{dtype='float32', context_name=None}.0,
> GpuDnnConvDesc{border_mode='valid', subsample=(1, 1), dilation=(1, 1),
> conv_mode='conv', precision='float32', num_groups=1}.0, Constant{1.0},
> Constant{0.0}), '\n',
> '/home/bea/.theano/compiledir_Linux-4.4--generic-x86_64-with-debian-stretch-sid-x86_64-2.7.12-64/tmpPD9sEN/97ac95f817846a3cb0867215657bdc2150272dcddf165864039b936dd3b77309.so:
> undefined symbol: cudnnGetConvolutionGroupCount',
> "[GpuDnnConv{algo='small', inplace=True,
> num_groups=1}(,
> , ,
> , Constant{1.0}, Constant{0.0})]")
>
>
> Regards.
>
> El miércoles, 13 de diciembre de 2017, 13:50:44 (UTC+1), Beatriz G.
> escribió:
>>
>> Hi everyone.
>>
>> I used to work with Theano and it worked perfectly, but after installing
>> tensorflow with conda, and some dependencies to work with it, my Theano has
>> stopped working.
>>
>> I obtain the following error:
>>
>> Using gpu device 0: GeForce GTX 750 Ti (CNMeM is disabled, cuDNN not
>> available)
>> ... loading data
>> ... building the model
>> ... training
>> training @ iter =  0
>> Traceback (most recent call last):
>>   File "LENET.py", line 394, in 
>> evaluate_lenet5()
>>   File "LENET.py", line 301, in evaluate_lenet5
>> cost_ij = 

Re: [theano-users] Re: six package

2018-01-10 Thread Frédéric Bastien
Pylearn2 isn't supported anymore. Maybe it doesn't support newer Theano
versions. So make sure to try the Theano dev version, and if that doesn't
work, try an older Theano version.

Maybe you should investigate other frameworks. See:
https://groups.google.com/forum/#!msg/theano-users/7Poq8BZutbY/rNCIfvAEAwAJ

Frédéric

On Wed, Dec 13, 2017 at 11:40 AM Beatriz G.  wrote:

> If I do not use the pylearn2 package, I can run my code on the CPU, but with
> device=cuda0 or device=cuda the execution looks like it is running; yet if I
> type "top" in the Ubuntu bash, I do not see any Python process.
>
>
>
>
> El miércoles, 13 de diciembre de 2017, 17:35:28 (UTC+1), Beatriz G.
> escribió:
>>
>> When I execute my code I get the following error:
>>
>> File "/home/bea/Desktop/bla/Primera_prueba/casia_Lenet.py", line 6, in
>> 
>> from layers_gaussian_init import *
>>   File
>> "/home/bea/Desktop/Publicacion_Aris/Primera_prueba/layers_gaussian_init.py",
>> line 6, in 
>> from pylearn2.expr.normalize import CrossChannelNormalization
>>   File "/home/bea/pylearn2/pylearn2/__init__.py", line 4, in 
>> from pylearn2.utils.logger import configure_custom
>>   File "/home/bea/pylearn2/pylearn2/utils/__init__.py", line 11, in
>> 
>> from theano.compat.six.moves import input, zip as izip
>> ImportError: No module named six.moves
>>
>> I have tried to uninstall and install the six package manually and with pip
>> or conda (I am using Anaconda), but I still get the error and I do not know
>> what to do.
>>
>> Regards.
>>


Re: [theano-users] GSOC 2018

2018-01-10 Thread Frédéric Bastien
no, see:

https://groups.google.com/forum/#!msg/theano-users/7Poq8BZutbY/rNCIfvAEAwAJ

On Tue, Jan 2, 2018 at 10:46 PM achie27  wrote:

> Hi!
> Will Theano be participating in GSOC this year?
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Why is Theano.ifelse.ifelse executing both tensor functions?

2018-01-10 Thread Frédéric Bastien
Hi,

Theano is a functional programming system: for only one branch to be
executed, the branching must be part of the graph, not done in Python. Here
is an example with the Print op, which has printing as a side effect:

import theano
import theano.ifelse
import theano.tensor as T

a = theano.shared(2)
b = theano.shared(10)
# Only the selected branch runs, so this prints "T" but not "F":
theano.ifelse.ifelse(T.gt(b, a),
                     theano.printing.Print("T")(a),
                     theano.printing.Print("F")(b)).eval()

Two things to remember:
- This op has more overhead than switch, so only use it when it saves
significant computation (see the contrast example below).
- We do not guarantee that the minimal number of nodes will be executed.
There are interactions with some graph optimizations, and if you want more
nodes to be skipped you can disable some optimizations with the flag
"optimizer_excluding=inplace". Sadly, this disables more than the minimum
set of optimizations needed, and there is no way, without significant work,
to disable only that minimum.
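
For contrast, here is a small sketch (added for illustration, not from the
original message) showing that T.switch evaluates both branches, while the
lazy ifelse above only runs one:

import theano
import theano.tensor as T

a = theano.shared(2)
b = theano.shared(10)
# Both Print ops fire here: switch is elementwise and computes both inputs.
T.switch(T.gt(b, a),
         theano.printing.Print("T")(a),
         theano.printing.Print("F")(b)).eval()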

Fred

On Thu, Dec 14, 2017 at 2:21 PM Ines Ayed  wrote:

> I looked up a Theano variant of keras.backend.switch, because I did not
> want both operations to be executed, and I found this:
> https://github.com/Theano/Theano/blob/master/theano/ifelse.py
>
> Here it says that (lazy) ifelse executes only the branch corresponding to
> the condition and not both like switch. I tested it like this:
>
> import theano
> import theano.ifelse
> import theano.tensor as T
>
> def function1():
>     print("function 1 is executed")
>     return a
>
> def function2():
>     print("function 2 is executed")
>     return b
>
> a = 2
> b = 10
> result = theano.ifelse.ifelse(T.gt(b, a), function1(), function2())
>
> But when I run this both messages are printed which means that both
> branches are executed. This is confusing since the description of ifelse
> says that it should not. Am I missing something here?
>


Re: [theano-users] Re: Cannot update Theano to 1.0.0

2017-11-28 Thread Frédéric Bastien
We have problems supporting Mac computers with GPUs, as CUDA isn't well
supported there. Do you need the GPU? If not, just install Theano like this:

conda install -c mila-udem theano

From memory, pygpu isn't built anymore for Mac.

On Tue, Nov 28, 2017 at 10:32 AM mcomin  wrote:

> From Theano website :
>
>
>
> "Latest conda packages for theano (>= 0.9) and pygpu (>= 0.6*) currently
> don’t support Python 3.4 branch."
>
> And:
>
> "Moved Python 3.* minimum supported version from 3.3 to 3.4"
>
> Does it mean that I can't update Theano from conda if I have python 3.5 ?
>


Re: [theano-users] can not import theano: GpuArrayException: Could not load "nvrtc64_70.dll"

2017-11-17 Thread Frédéric Bastien
Update pygpu and libgpuarray to 0.7.5; it should fix this problem.
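
A quick way to confirm the updated version is the one being picked up (an
illustrative check, not from the original reply):

import pygpu
print(pygpu.__version__)  # should report 0.7.5 after the update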

Le mar. 14 nov. 2017 06:43,  a écrit :

> I am new to Keras and Theano:
>
> I have been trying to install Keras and Theano for a while.
>
> I get the following error:
>
> ERROR (theano.gpuarray): Could not initialize pygpu, support disabled
> Traceback (most recent call last):
>   File "C:\toolkits\Anaconda2\lib\site-packages\theano\gpuarray\__init__.py", line 164, in <module>
>     use(config.device)
>   File "C:\toolkits\Anaconda2\lib\site-packages\theano\gpuarray\__init__.py", line 151, in use
>     init_dev(device)
>   File "C:\toolkits\Anaconda2\lib\site-packages\theano\gpuarray\__init__.py", line 60, in init_dev
>     sched=config.gpuarray.sched)
>   File "pygpu\gpuarray.pyx", line 634, in pygpu.gpuarray.init
>   File "pygpu\gpuarray.pyx", line 584, in pygpu.gpuarray.pygpu_init
>   File "pygpu\gpuarray.pyx", line 1057, in pygpu.gpuarray.GpuContext.__cinit__
> GpuArrayException: Could not load "nvrtc64_70.dll": Das angegebene Modul
> wurde nicht gefunden. (German: "The specified module was not found.")
>
> I have installed CUDA 9.0
> Windows 10
> Anaconda 2, Python 2.7
>
> I tried to install it as described here:
> http://wiki.fast.ai/index.php/Local_install_(Windows_only:cpu)
>
>
>
>


Re: [theano-users] cuDNN

2017-11-17 Thread Frédéric Bastien
You need to install cudnn. We don't install it for you.

Le lun. 13 nov. 2017 20:33, roman.foell via theano-users <
theano-users@googlegroups.com> a écrit :

> Hello,
>
> I get the following error:
>
> Can not use cuDNN on context None: cannot compile
> with cuDNN. We got this error:
> In file included from C:\Program Files\NVIDIA GPU Computing
> Toolkit\CUDA\v7.0\include/driver_types.h:53:0,
>  from C:\Program Files\NVIDIA GPU Computing
> Toolkit\CUDA\v7.0\include/cudnn.h:63,
>  from
> c:\users\flo9fe\appdata\local\temp\try_flags_lx5h9t.c:4:
> C:\Program Files\NVIDIA GPU Computing
> Toolkit\CUDA\v7.0\include/host_defines.h:84:0: warning: "__cdecl" redefined
>  #define __cdecl
>  ^
> : note: this is the location of the previous definition
> C:/ProgramData/Anaconda2/Library/mingw-w64/bin/../lib/gcc/x86_64-w64-mingw32/5.3.0/../../../../x86_64-w64-mingw32/bin/ld.exe:
> cannot find -lcudnn
> collect2.exe: error: ld returned 1 exit status
>
> Mapped name None to device cuda: Quadro K2100M (:01:00.0)
> [GpuElemwise{exp,no_inplace}(),
> HostFromGpu(gpuarray)(GpuElemwise{exp,no_inplace}.0)]
> Looping 1000 times took 0.577000 seconds
> Result is [ 1.23178029  1.61879349  1.52278066 ...,  2.20771813  2.29967761
>   1.62323296]
> Used the gpu
>
>
> My .theanorc:
>
> [global]
> device = cuda
> floatX = float32
> MKL_THREADING_LAYER=GNU
>
> [lib]
> cnmem=90
>
> [cuda]
> root = C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.0
> base_compiledir=/tmp/%(user)s/theano.NOBACKUP
> nvcc.flags=-D_FORCE_INLINES
> optimizer_including=cudnn
>
> [nvcc]
> fastmath = True
> compiler_bindir=C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin
>
>
> Thanks for help.
>


Re: [theano-users] Unable to set up CUDA in optimus laptop in windows 10 OS

2017-11-10 Thread Frédéric Bastien
I never used optimus on windows, so I can't help.

I would highly recommend that you install Theano 1.0rc1 after you have made
the CUDA sample run. See this page:
https://github.com/Theano/Theano/wiki/Converting-to-the-new-gpu-back-end(gpuarray)

On Linux, for Optimus to work, you need to enable it on the command line:

optirun python ...

You will need to find out how to do this on Windows.

If you find out, tell us. It could help other people.

On Sat, Nov 4, 2017 at 10:04 AM Sanjaya Nayak 
wrote:

>
> Hello friends,
>
> Can you help me resolve my problem? I am a newbie to terms like
> 'Optimus laptop' and 'GPU'. I have a Lenovo ThinkPad notebook with an Intel i7
> processor, 32 GB RAM, and Windows 10. The notebook has two GPUs (iGPU =
> Intel(R) HD Graphics 620 and dGPU = NVIDIA GeForce 940MX). I installed
> Visual Studio 2015 Update 3 prior to the CUDA 8.0 installation, then installed
> CUDA 8.0 successfully. The notebook has gcc version "4.7.0 20111220
> (experimental)" and nvcc version "Cuda compilation tools, release
> 8.0, V8.0.60".
>
> I am able to build the CUDA samples with VS2015, but am unable to run the
> binaries. They show this error:
>
> CUDA error at C:\ProgramData\NVIDIA Corporation\CUDA
> Samples\v8.0\common\inc\helper_cuda.h:1133 code=38(cudaErrorNoDevice)
> "cudaGetDeviceCount(_count)"
>
>


Re: [theano-users] Failed to compile cuda_ndarray.cu

2017-11-03 Thread Frédéric Bastien
Here is the link to help you install it:

https://github.com/Theano/Theano/wiki/Converting-to-the-new-gpu-back-end(gpuarray)

On Fri, Nov 3, 2017 at 1:09 PM Frédéric Bastien <frederic.bast...@gmail.com>
wrote:

> We no longer support that version of Theano. Try Theano 1.0rc1:
> c697eeab
>
> On Mon, Oct 30, 2017 at 3:37 PM Adam Jones <ajones...@gmail.com> wrote:
>
>> *My system info:*
>> - OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu
>> 16.04.2
>> - Theano installed: via conda
>> - Theano version:'0.9.0.dev-c697eeab84e5b8a74908da654b66ec9eca4f1291'
>> - Python version: 2.7
>> - cuDNN version: 6.0.21
>> - CUDA compilation tools, release: 8.0, V8.0.61
>> - GPU model and memory: GeForce GTX Titan X
>> - Compiler: gcc version 5.4.0
>>
>> *Problem:*
>> Fresh install of cuda, cudnn, and theano as per docs
>> <http://deeplearning.net/software/theano/updating.html#updating>. I get
>> the following error message when I try importing theano. I know there are a
>> multitude of posts for this error message, but none have been of any help.
>> Any guidance would be *greatly* appreciated!
>>
>> *Error produced:*
>> ERROR (theano.sandbox.cuda): Failed to compile cuda_ndarray.cu:
>> libcublas.so.8.0: cannot open shared object file: No such file or directory
>> /home/adam/miniconda2/lib/python2.7/site-packages/theano/gpuarray/dnn.py:135:
>> UserWarning: Your cuDNN version is more recent than Theano. If you
>> encounter problems, try updating Theano or downgrading cuDNN to version 5.1.
>>   warnings.warn("Your cuDNN version is more recent than "
>> ERROR (theano.gpuarray): Could not initialize pygpu, support disabled
>> Traceback (most recent call last):
>>   File
>> "/home/adam/miniconda2/lib/python2.7/site-packages/theano/gpuarray/__init__.py",
>> line 164, in 
>> use(config.device)
>>   File
>> "/home/adam/miniconda2/lib/python2.7/site-packages/theano/gpuarray/__init__.py",
>> line 151, in use
>> init_dev(device)
>>   File
>> "/home/adam/miniconda2/lib/python2.7/site-packages/theano/gpuarray/__init__.py",
>> line 68, in init_dev
>> context.cudnn_handle = dnn._make_handle(context)
>>   File
>> "/home/adam/miniconda2/lib/python2.7/site-packages/theano/gpuarray/dnn.py",
>> line 85, in _make_handle
>> raise RuntimeError("error creating cudnn handle")
>> RuntimeError: error creating cudnn handle
>>
>>
>>  *Output of 'nvidia-smi':*
>>
>> +-----------------------------------------------------------------------------+
>> | NVIDIA-SMI 384.90                 Driver Version: 384.90                    |
>> |-------------------------------+----------------------+----------------------+
>> | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
>> | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
>> |===============================+======================+======================|
>> |   0  GeForce GTX TIT...  Off  | :01:00.0         Off |                  N/A |
>> | 22%   40C    P8    14W / 250W |      2MiB / 12207MiB |      0%      Default |
>> +-------------------------------+----------------------+----------------------+
>> |   1  GeForce GT 610      Off  | :07:00.0         N/A |                  N/A |
>> | 40%   35C    P8    N/A /  N/A |     48MiB /   963MiB |     N/A      Default |
>> +-------------------------------+----------------------+----------------------+
>>
>> +-----------------------------------------------------------------------------+
>> | Processes:                                                       GPU Memory |
>> |  GPU       PID   Type   Process name                             Usage      |
>> |=============================================================================|
>> |    1                    Not Supported                                       |
>> +-----------------------------------------------------------------------------+
>>
>>
>> *Relevant portion of .bashrc file:*
>> export CUDA_ROOT=/usr/local/cuda
>>
>> export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64"
>> # export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda-8.0/lib64"
>>
>> export PATH="/usr/local/cuda/bin:$PATH"
>> export PATH="/usr/local/cuda-8.0/bin:$PATH"
>>
>> export CUDA_VISIBLE_DEVICES=0
>>
>> export LIBRARY_PATH=$LIBRARY_PATH:/usr/local/cuda/lib64
>>


Re: [theano-users] Failed to compile cuda_ndarray.cu

2017-11-03 Thread Frédéric Bastien
We no longer support that version of Theano. Try Theano 1.0rc1:
c697eeab

On Mon, Oct 30, 2017 at 3:37 PM Adam Jones  wrote:

> *My system info:*
> - OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu
> 16.04.2
> - Theano installed: via conda
> - Theano version:'0.9.0.dev-c697eeab84e5b8a74908da654b66ec9eca4f1291'
> - Python version: 2.7
> - cuDNN version: 6.0.21
> - CUDA compilation tools, release: 8.0, V8.0.61
> - GPU model and memory: GeForce GTX Titan X
> - Compiler: gcc version 5.4.0
>
> *Problem:*
> Fresh install of cuda, cudnn, and theano as per docs
> . I get
> the following error message when I try importing theano. I know there are a
> multitude of posts for this error message, but none have been of any help.
> Any guidance would be *greatly* appreciated!
>
> *Error produced:*
> ERROR (theano.sandbox.cuda): Failed to compile cuda_ndarray.cu:
> libcublas.so.8.0: cannot open shared object file: No such file or directory
> /home/adam/miniconda2/lib/python2.7/site-packages/theano/gpuarray/dnn.py:135:
> UserWarning: Your cuDNN version is more recent than Theano. If you
> encounter problems, try updating Theano or downgrading cuDNN to version 5.1.
>   warnings.warn("Your cuDNN version is more recent than "
> ERROR (theano.gpuarray): Could not initialize pygpu, support disabled
> Traceback (most recent call last):
>   File
> "/home/adam/miniconda2/lib/python2.7/site-packages/theano/gpuarray/__init__.py",
> line 164, in 
> use(config.device)
>   File
> "/home/adam/miniconda2/lib/python2.7/site-packages/theano/gpuarray/__init__.py",
> line 151, in use
> init_dev(device)
>   File
> "/home/adam/miniconda2/lib/python2.7/site-packages/theano/gpuarray/__init__.py",
> line 68, in init_dev
> context.cudnn_handle = dnn._make_handle(context)
>   File
> "/home/adam/miniconda2/lib/python2.7/site-packages/theano/gpuarray/dnn.py",
> line 85, in _make_handle
> raise RuntimeError("error creating cudnn handle")
> RuntimeError: error creating cudnn handle
>
>
>  *Output of 'nvidia-smi':*
>
> +-----------------------------------------------------------------------------+
> | NVIDIA-SMI 384.90                 Driver Version: 384.90                    |
> |-------------------------------+----------------------+----------------------+
> | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
> | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
> |===============================+======================+======================|
> |   0  GeForce GTX TIT...  Off  | :01:00.0         Off |                  N/A |
> | 22%   40C    P8    14W / 250W |      2MiB / 12207MiB |      0%      Default |
> +-------------------------------+----------------------+----------------------+
> |   1  GeForce GT 610      Off  | :07:00.0         N/A |                  N/A |
> | 40%   35C    P8    N/A /  N/A |     48MiB /   963MiB |     N/A      Default |
> +-------------------------------+----------------------+----------------------+
>
> +-----------------------------------------------------------------------------+
> | Processes:                                                       GPU Memory |
> |  GPU       PID   Type   Process name                             Usage      |
> |=============================================================================|
> |    1                    Not Supported                                       |
> +-----------------------------------------------------------------------------+
>
>
> *Relevant portion of .bashrc file:*
> export CUDA_ROOT=/usr/local/cuda
>
> export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64"
> # export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda-8.0/lib64"
>
> export PATH="/usr/local/cuda/bin:$PATH"
> export PATH="/usr/local/cuda-8.0/bin:$PATH"
>
> export CUDA_VISIBLE_DEVICES=0
>
> export LIBRARY_PATH=$LIBRARY_PATH:/usr/local/cuda/lib64
>


Re: [theano-users] Re: Theanorc configuration

2017-11-03 Thread Frédéric Bastien
Sorry, but we don't have time to support Theano 0.9. You can try Theano
1.0rc1:

https://github.com/Theano/Theano/wiki/Converting-to-the-new-gpu-back-end(gpuarray)

Frédéric

On Mon, Oct 30, 2017 at 7:49 PM ephi5757 via theano-users <
theano-users@googlegroups.com> wrote:

> Got it working. I found that my CNN code favored Python 2.7, so I
> installed Miniconda2 and reinstalled Theano 0.9.0 and all of the software
> dependencies. I found that putting my .theanorc.txt file into the home
> directory worked, i.e., c:\users\atun... Now the .theanorc.txt file affects
> all of my .py programs that depend on Theano. However, I am not completely
> successful. I get the message "Using gpu device 0: Quadro P4000
> (CNMeM is enabled with initial size: 80.0% of memory, cuDNN not available)"
> and then the error "We can't determine the cudnn version as it is not
> available".
> Previously adding into the .theanorc.txt the following worked perfectly:
> [cuda]
> cuda.disable_gcc_cudnn_check=True
> Now it does not appear to be effective.
> Are there any solutions to this problem? I think I am very close to being up
> and running my programs on my new computer. Thank you for your help.
> Arnold
>
> here is my .theanorc.txt file:
> ===.
> [global]
> device = gpu
> REM device = cpu
> floatX=float32
> [cuda]
> root=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\
> cuda.disable_gcc_cudnn_check=True
> [nvcc]
> flags = -LC:\ProgramData\Miniconda2\libs
> compiler_bindir=C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin
> cxx=C:\ProgramData\Miniconda2\Library\mingw-w64\bin
> optimizer_including=dnn
> [lib]
> cnmem=0.8
> =.
>
>
> On Sunday, October 29, 2017 at 4:41:27 PM UTC-4, Arnold Tunick wrote:
>
>> Hi  Peter,
>> Hi Fred,
>>  How have you been. Sorry to hear that Theano is coming to an end. As
>> I recently wrote to Pascal, I truly appreciate all of your expert help.
>>
>> I am in the middle of a project using an implementation of a CNN in
>> Theano so I have to ask the following question:
>>
>>  I have just installed the latest version of Theano v0.9.0 on a new
>> windows notebook using Miniconda3 along with Python 3.6, MSVS 2015, and
>> Cuda 8.0.6.1.
>>
>>  I need to know how to implement the .theanorc.txt configurations in
>> the new version of Theano. I found in the document at
>> *http://deeplearning.net/software/theano/library/config.html*
>>  that;
>>
>> 1) The location[s] of the .theanorc file[s] in ConfigParser format. It
>> defaults to $HOME/.theanorc. On Windows, it defaults to
>> $HOME/.theanorc:$HOME/.theanorc.txt to make Windows users’ life easier.
>>
>> and
>>
>> 2) to load configuration files {.theanorc} in the current working
>> directory, append .theanorc to the list of configuration files, e.g.
>> THEANORC=~/.theanorc:.theanorc.
>>
>> Therefore in a python shell I did the following:
>> import theano
>> The python >> prompt returned with no error messages.
>> THEANORC="C:\SciSoft\.theanorc.txt"
>> The python >> prompt returned with no error messages.
>>
>> Is this a viable way to modify the .theanorc configurations?
>>
>> Best,
>> Arnold
>>
>>


Re: [theano-users] Find the variable that causes DisconnectedInputError

2017-11-03 Thread Frédéric Bastien
Update to Theano 1.0rc1. We updated error information heavily and I think
it will help your case:
https://github.com/Theano/Theano/wiki/Converting-to-the-new-gpu-back-end(gpuarray)
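
As a stopgap on 0.9, T.grad can also be told to warn instead of raising, which
prints the disconnected variables by name. This uses the documented
disconnected_inputs parameter of T.grad; the tiny graph below is illustrative,
not from the original question:

import theano
import theano.tensor as T

x = T.scalar('x')
unused = theano.shared(0.0, name='unused')
cost = x ** 2
# 'warn' treats the gradient of each disconnected input as zero and prints a
# warning naming it, instead of raising DisconnectedInputError.
g = T.grad(cost, wrt=[x, unused], disconnected_inputs='warn')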

On Wed, Nov 1, 2017 at 9:37 AM Mathias Müller  wrote:

> Hi,
>
> I am getting the following theano error:
>
> Traceback (most recent call last):
>   File "/home/user/mmueller/nematus-context2/nematus/nmt.py", line 1995,
> in 
> train(**vars(args))
>   File "/home/user/mmueller/nematus-context2/nematus/nmt.py", line 1439,
> in train
> grads = tensor.grad(cost, wrt=itemlist(updated_params))
>   File
> "/home/user/mmueller/.pythonz/pythons/CPython-2.7.13/lib/python2.7/site-packages/theano/gradient.py"
> , line 539, in grad
> handle_disconnected(elem)
>   File
> "/home/user/mmueller/.pythonz/pythons/CPython-2.7.13/lib/python2.7/site-packages/theano/gradient.py"
> , line 526, in handle_disconnected
> raise DisconnectedInputError(message)
> theano.gradient.DisconnectedInputError:
> Backtrace when that variable is created:
>
>   File "/home/user/mmueller/nematus-context2/nematus/nmt.py", line 1995,
> in 
> train(**vars(args))
>   File "/home/user/mmueller/nematus-context2/nematus/nmt.py", line 1377,
> in train
> tparams = init_theano_params(params)
>   File "/home/user/mmueller/nematus-context2/nematus/theano_util.py",
> line 57, in init_theano_params
> tparams[kk] = theano.shared(params[kk], name=kk)
>
>
> As far as I understand, this means that I am requesting the gradient with
> respect to a variable that was not used to compute the cost.
>
> My model has a large number of theano variables, and I find the error
> message a bit uninformative. *How can I find out which variable is
> causing this error?*
>
> $ python -c "import theano; print theano.version.version"
> 0.9.0
>
>
> Thanks so much,
> Mathias
>


Re: [theano-users] How to keep the same mask for Dropout layers for multiple batches?

2017-11-03 Thread Frédéric Bastien
Make two different Theano functions: one that generates the mask and either
returns it or stores it in a shared variable. The other Theano function then
uses that mask.
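
A minimal sketch of the shared-variable approach (the names, sizes, seed, and
keep probability are illustrative, not from the original thread):

import numpy as np
import theano
import theano.tensor as T
from theano.sandbox.rng_mrg import MRG_RandomStreams

rng = MRG_RandomStreams(seed=42)
p = 0.5  # keep probability
mask = theano.shared(np.ones(100, dtype=theano.config.floatX))

# Function 1: resample the mask and store it in the shared variable.
new_mask = T.cast(rng.binomial(size=mask.shape, p=p), theano.config.floatX)
resample_mask = theano.function([], [], updates=[(mask, new_mask)])

# Function 2: apply the stored mask; it stays fixed across batches until
# resample_mask() is called again (dividing by p gives inverted dropout).
x = T.vector('x')
apply_dropout = theano.function([x], x * mask / p)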

Frédéric

On Wed, Nov 1, 2017 at 12:29 PM Farhood Etaati 
wrote:

> Hello there!
>
> I'm trying to implement a custom dropout layer which can generate a random
> mask and hold on to it until a stop flag has been given to it. Does anyone
> have an idea how could I implement this?
>
> Thanks.
>


Re: [theano-users] THEANORC configuration

2017-11-03 Thread Frédéric Bastien
This line should be in a normal shell, not a python shell:

THEANORC="C:\SciSoft\.theanorc.txt"

If you want to do that inside a Python shell, you should do it before you
import theano:

import os
os.environ["THEANORC"] = r"C:\SciSoft\.theanorc.txt"  # a raw string keeps the backslashes intact

I strongly recommend that you use Theano 1.0rc1:

https://github.com/Theano/Theano/wiki/Converting-to-the-new-gpu-back-end(gpuarray)

On Thu, Oct 26, 2017 at 6:53 PM 'Arnold Tunick' via theano-users <
theano-users@googlegroups.com> wrote:

> Hi  Pascal,
>  How have you been. Sorry to hear that Theano is coming to an end. I
> sincerely appreciate all of the help that you and Fred have provided. My
> work with a CNN in Theano has benefited greatly from your expert
> recommendations.
>
>  I have now installed the latest version of Theano v0.9.0 on a new
> windows notebook along with Python 3.6 using Miniconda3.
>
>  I wanted to know how to implement the .theanorc.txt configurations in
> the new version of Theano. I found in the documents at
> http://deeplearning.net/software/theano/library/config.html that "to load
> configuration files {.theanorc} in the current working directory, append
> .theanorc to the list of configuration files, e.g.
> THEANORC=~/.theanorc:.theanorc.
>
> Therefore in a python shell I did the following:
> import theano
> THEANORC="C:\SciSoft\.theanorc.txt"
> The python >> prompt returned with no error messages.
>
> Is this the correct implementation to modify the .theanorc
> configurations?
>
> Best,
> Arnold
>


Re: [theano-users] Re: MILA and the future of Theano

2017-10-25 Thread Frédéric Bastien
Just a historical note.

Before Theano, we had another framework. It took 1 year to prototype
Theano, then it took 1 year to have most people use Theano internally, but
the previous framework was still used for 10 years. So I won't be surprised
if Theano continues to be used for a long time.



On Wed, Oct 25, 2017 at 1:58 PM Juan Camilo Gamboa Higuera <
juancami...@gmail.com> wrote:

>
> I understand that MILA has found that it doesn't need to continue
> supporting Theano. However, I believe that the statement "MILA will
> discontinue support of Theano" is not equivalent to "Theano is dead". I
> feel that PyTorch and TensorFlow, even though they're backed by large
> companies, are still lagging behind Theano as general machine learning
> frameworks (i.e. not just deep learning). Personally, I've benefited from
> all the effort that people have put into Theano, and will continue to use it
> as long as it is the best tool that suits my needs (Bayesian methods for
> model-based reinforcement learning). For this, I am very grateful to
> the Theano development team.
>
> Thanks!
>
>
>
> On Thursday, September 28, 2017 at 12:23:05 PM UTC-4, Pascal Lamblin wrote:
>
>> Dear users and developers,
>>
>> After almost ten years of development, we have the regret to announce
>> that we will put an end to our Theano development after the 1.0 release,
>> which is due in the next few weeks. We will continue minimal maintenance
>> to keep it working for one year, but we will stop actively implementing
>> new features. Theano will continue to be available afterwards, as per
>> our engagement towards open source software, but MILA does not commit to
>> spend time on maintenance or support after that time frame.
>>
>> The software ecosystem supporting deep learning research has been
>> evolving quickly, and has now reached a healthy state: open-source
>> software is the norm; a variety of frameworks are available, satisfying
>> needs spanning from exploring novel ideas to deploying them into
>> production; and strong industrial players are backing different software
>> stacks in a stimulating competition.
>>
>> We are proud that most of the innovations Theano introduced across the
>> years have now been adopted and perfected by other frameworks. Being
>> able to express models as mathematical expressions, rewriting
>> computation graphs for better performance and memory usage, transparent
>> execution on GPU, higher-order automatic differentiation, for instance,
>> have all become mainstream ideas.
>>
>> In that context, we came to the conclusion that supporting Theano is no
>> longer the best way we can enable the emergence and application of novel
>> research ideas. Even with the increasing support of external
>> contributions from industry and academia, maintaining an older code base
>> and keeping up with competitors has come in the way of innovation.
>>
>> MILA is still committed to supporting researchers and enabling the
>> implementation and exploration of innovative (and sometimes wild)
>> research ideas, and we will keep working towards this goal through other
>> means, and making significant open source contributions to other
>> projects.
>>
>> Thanks to all of you who for helping develop Theano, and making it
>> better by contributing bug reports, profiles, use cases, documentation,
>> and support.
>>
>> -- Yoshua Bengio,
>> Head of MILA
>>


Re: [theano-users] NameError: global name 'CVM' is not defined

2017-10-23 Thread Frédéric Bastien
Note: the Compute Canada cluster problem is independent of Theano; we also
have it ourselves. Do not install Anaconda manually yourself. Otherwise, you
will need to "fix" the installation due to the strange Compute Canada
configuration that ends up with the error you show.

It should work with their software stack (but this will require you to
manually build libgpuarray/pygpu).

They also have beta doc on how to install conda yourself in a way that it
will work with Theano:
https://docs.computecanada.ca/wiki/Anaconda

I also suggest that you update Theano to the dev version or the latest beta.
It has many great new features.

Also, check this page to use the new GPU interface:
https://github.com/Theano/Theano/wiki/Converting-to-the-new-gpu-back-end%28gpuarray%29




On Sun, Oct 22, 2017 at 12:35 PM Lucas Caccia  wrote:

> Hi,
>
> I'm getting the following error when running theano code :
> Traceback (most recent call last):
>   File "mnist_pixelvae_train.py", line 350, in 
> eps = T.cast(theano_srng.normal(mu.shape), theano.config.floatX)
>   File
> "/home/lpagec/anaconda2/envs/theano_cedar/lib/python2.7/site-packages/theano/sandbox/rng_mrg.py",
> line 1574, in normal
> nstreams=nstreams)
>   File
> "/home/lpagec/anaconda2/envs/theano_cedar/lib/python2.7/site-packages/theano/sandbox/rng_mrg.py",
> line 1354, in uniform
> rstates = self.get_substream_rstates(nstreams, dtype)
>   File
> "/home/lpagec/anaconda2/envs/theano_cedar/lib/python2.7/site-packages/theano/configparser.py",
> line 117, in res
> return f(*args, **kwargs)
>   File
> "/home/lpagec/anaconda2/envs/theano_cedar/lib/python2.7/site-packages/theano/sandbox/rng_mrg.py",
> line 1256, in get_substream_rstates
> multMatVect(rval[0], A1p72, M1, A2p72, M2)
>   File
> "/home/lpagec/anaconda2/envs/theano_cedar/lib/python2.7/site-packages/theano/sandbox/rng_mrg.py",
> line 66, in multMatVect
> [A_sym, s_sym, m_sym, A2_sym, s2_sym, m2_sym], o, profile=False)
>   File
> "/home/lpagec/anaconda2/envs/theano_cedar/lib/python2.7/site-packages/theano/compile/function.py",
> line 326, in function
> output_keys=output_keys)
>   File
> "/home/lpagec/anaconda2/envs/theano_cedar/lib/python2.7/site-packages/theano/compile/pfunc.py",
> line 486, in pfunc
> output_keys=output_keys)
>   File
> "/home/lpagec/anaconda2/envs/theano_cedar/lib/python2.7/site-packages/theano/compile/function_module.py",
> line 1795, in orig_function
> defaults)
>   File
> "/home/lpagec/anaconda2/envs/theano_cedar/lib/python2.7/site-packages/theano/compile/function_module.py",
> line 1661, in create
> input_storage=input_storage_lists, storage_map=storage_map)
>   File
> "/home/lpagec/anaconda2/envs/theano_cedar/lib/python2.7/site-packages/theano/gof/link.py",
> line 699, in make_thunk
> storage_map=storage_map)[:3]
>   File
> "/home/lpagec/anaconda2/envs/theano_cedar/lib/python2.7/site-packages/theano/gof/vm.py",
> line 1098, in make_all
> self.updated_vars,
>   File
> "/home/lpagec/anaconda2/envs/theano_cedar/lib/python2.7/site-packages/theano/gof/vm.py",
> line 952, in make_vm
> vm = CVM(
> NameError: global name 'CVM' is not defined
>
> After looking at previous posts, the fix seems to be to remove the ~/.theano
> directory. However, this does not work for me. The code runs fine on my
> school servers, but fails on the Compute Canada clusters.
> Any thoughts? I'm running theano=0.9 and lasagne='0.2.dev1'
>
>
> Thanks,
> Lucas
>


Re: [theano-users] Theano does not optimise CNN

2017-10-11 Thread Frédéric Bastien
Try to update Theano to the dev version. Maybe you ended up with one of the
few bad commits that produce wrong results.
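
If you are unsure which commit you are on, the installed version string embeds
it (as in the version report quoted below):

import theano
print(theano.__version__)  # e.g. 0.9.0.dev-c697eeab84e5b8a74908da654b66ec9eca4f1291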

Le lun. 2 oct. 2017 12:52, Fraser Robinson 
a écrit :

> Hi,
>
> I've been trying to work through Convolutional Neural Networks (LeNet).
> At this point I've
> just saved the code shared on the site and tried to run it. It runs without
> too many issues but the validation error stays at around 89%. I've run for
> 49,000 iterations and it still ended at roughly 89%.  Here's a print out of
> the function being called and the first few iterations:
>
> runfile('C:/Users/Fraser/Documents/Python
> Practice/DeepLearning/convolutional_mlp.py',
> wdir='C:/Users/Fraser/Documents/Python Practice/DeepLearning')
> Can not use cuDNN on context None: cannot compile with cuDNN. We got this
> error:
> In file included from C:\Program Files\NVIDIA GPU Computing
> Toolkit\CUDA\v9.0\include/host_defines.h:50:0,
>  from C:\Program Files\NVIDIA GPU Computing
> Toolkit\CUDA\v9.0\include/driver_types.h:53,
>  from C:\Program Files\NVIDIA GPU Computing
> Toolkit\CUDA\v9.0\include/cudnn.h:63,
>  from
> c:\users\fraser\appdata\local\temp\try_flags_muyptb.c:4:
> C:\Program Files\NVIDIA GPU Computing
> Toolkit\CUDA\v9.0\include/crt/host_defines.h:84:0: warning: "__cdecl"
> redefined
>  #define __cdecl
>  ^
> : note: this is the location of the previous definition
> C:/ProgramData/Anaconda22/Library/mingw-w64/bin/../lib/gcc/x86_64-w64-mingw32/5.3.0/../../../../x86_64-w64-mingw32/bin/ld.exe:
> cannot find -lcudnn
> collect2.exe: error: ld returned 1 exit status
>
> Mapped name None to device cuda: GeForce GTX 1080 Ti (:23:00.0)
> loading data
> building the model
> C:/Users/Fraser/Documents/Python
> Practice/DeepLearning/convolutional_mlp.py:104: UserWarning: DEPRECATION:
> the 'ds' parameter is not going to exist anymore as it is going to be
> replaced by the parameter 'ws'.
>   ignore_border=True
> C:\ProgramData\Anaconda22\lib\site-packages\nose_parameterized\__init__.py:7:
> UserWarning: The 'nose-parameterized' package has been renamed
> 'parameterized'. For the two step migration instructions, see:
> https://github.com/wolever/parameterized#migrating-from-nose-parameterized-to-parameterized
> (set NOSE_PARAMETERIZED_NO_WARN=1 to suppress this warning)
>   "The 'nose-parameterized' package has been renamed 'parameterized'. "
> training
> training @ iter =  0
> epoch 1, minibatch 100/100, validation error 89.13 %
>  epoch 1, minibatch 100/100, test error of best model 88.51 %
> training @ iter =  100
> epoch 2, minibatch 100/100, validation error 88.71 %
>  epoch 2, minibatch 100/100, test error of best model 88.25 %
> training @ iter =  200
> epoch 3, minibatch 100/100, validation error 88.72 %
> training @ iter =  300
> epoch 4, minibatch 100/100, validation error 88.67 %
>  epoch 4, minibatch 100/100, test error of best model 88.09 %
> training @ iter =  400
> epoch 5, minibatch 100/100, validation error 88.70 %
> training @ iter =  500
> epoch 6, minibatch 100/100, validation error 88.82 %
> training @ iter =  600
>
> As I said, I've left it running for around 500 epochs and it still just
> sits around the 88 - 89 mark. I can post more epochs if needed. I really
> don't know where to start in debugging this. Has anyone else implemented
> this tutorial on the same version of theano? Suggestions?
>
> Here's my system:
>
> Windows 10 Pro
> AMD Ryzen 7 1800X
> GTX 1080ti (only graphics installed - no onboard)
>
> Python
> Version: 2.7.13
>
> NumPy
> Version: 1.13.1
>
> SciPy
> Version: 0.19.1
>
> Nose
> Version 1.37
>
> Theano
> Version: 0.9.0.dev-c697eeab84e5b8a74908da654b66ec9eca4f1291
>


Re: [theano-users] Theano gradient of subtensor

2017-10-11 Thread Frédéric Bastien
You need to take the subtensor in the forward pass to save all that
computation. It is a very hard problem to remove useless computation caused
by a subtensor at the end of the graph; we cover very few optimizations
compared to what would be needed. So move the subtensor into the forward
pass.
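
A sketch of that rearrangement, reusing the toy example from the question
(moving the slice before the cost so the gradient graph never touches the
full matrix):

import theano
import theano.tensor as T

X = T.matrix('X')
X0 = X[0]                    # take the subtensor in the forward pass
Y = T.sum(X0 ** 2)           # the cost now depends only on the slice
g = T.grad(Y, X0)            # gradient w.r.t. the slice, no full-size gradient
f = theano.function([X], g)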

Fred

Le lun. 2 oct. 2017 23:35, dhern  a écrit :

> Thanks for the reply.
>
> Right, that method however seems to address the issue for gradients with
> respect to shared variables. I am interested, as in the code above in
> taking symbolic gradients with respect to subarrays of theano tensors. That
> doesn't seem to be possible, correct?. I will look more closely into taking
> a subtensor of the gradient, although I am not sure it reduces computation
> time in my actual code, since that is what I did to begin with and it is
> still very time consuming.
>
>
> On Thursday, September 28, 2017 at 3:32:19 PM UTC-4, Pascal Lamblin wrote:
>
>> Maybe the following can help you.
>>
>>
>> http://deeplearning.net/software/theano/tutorial/faq_tutorial.html#how-to-update-a-subset-of-weights
>>
>> Also, if you take a subtensor of the gradient itself, some optimizations
>> can apply that would avoid the computation of the full gradient.
>>
>> For instance, with your example, the "subtensor" and "* 2" operations
>> are swapped:
>>
>>  >>> grad0 = full_grad[0]
>>  >>> g0 = theano.function([X, Y], grad0)
>>
>>  >>> theano.printing.debugprint(g0)
>> Elemwise{mul,no_inplace} [id A] ''   1
>>   |TensorConstant{(1,) of 2.0} [id B]
>>   |Subtensor{int64} [id C] ''   0
>> | [id D]
>> |Constant{0} [id E]
>>
>>
>> On 2017-09-27 05:25 PM, Daniel Hernandez wrote:
>> > Hi,
>> >
>> > I was wondering if someone here had an answer to this unsolved question
>> > over in stack overflow:
>> >
>> >
>> https://stackoverflow.com/questions/37545325/theano-gradient-of-subtensor
>> >
>> > Basically, how do you compute gradients w.r.t. a subtensor?
>> >
>> > The question arises in the context of large tensors, say Y and X, where
>> > it is known that each entry in Y depends only on a small subset of the
>> > entries of X. Taking T.grad(Y, X) is computationally expensive since it
>> > will compute every possible gradient so one would like to be able to
>> > compute, e.g. T.grad(Y, X[i]) . Here is some basic code illustrating
>> the
>> > problem.
>> >
>> > X = T.matrix()
>> > Y = T.sum(X**2)
>> >
>> > full_grad = T.grad(Y, X) # This works
>> >
>> > X0 = X[0]
>> > test = T.grad(Y, X0) # This pukes a Disconnected Input error
>> >
>> > Silencing the Disconnected Input can be done in grad, but of course,
>> > that doesn't solve anything, evaluating the gradients only results in a
>> > bunch of 0s. So, is there a way of taking these gradients with respect
>> > to a subtensor?
>> >
>> >
>> > --
>> >
>> > ---
>> > You received this message because you are subscribed to the Google
>> > Groups "theano-users" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>>
> > an email to theano-users...@googlegroups.com
>>
> > .
>> > For more options, visit https://groups.google.com/d/optout.
>>
>> --
>> Pascal Lamblin
>>


Re: [theano-users] Re: RuntimeError: Mixed dnn version. The header is version 5105 while the library is version 5110.

2017-10-11 Thread Frédéric Bastien
Hi,

Make sure your environment variables contain the directory of only one
version. This should work; we work like that to allow users of the same
computer to select the cuDNN version they want to use.
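
One way to double-check which cuDNN build Theano actually linked against (an
illustrative check, assuming the gpuarray backend with a working GPU context;
theano.gpuarray.dnn.version() may vary across Theano versions):

import theano.gpuarray.dnn as dnn
print(dnn.version())  # e.g. 5110 for cuDNN 5.1.10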

Le mar. 10 oct. 2017 07:49, 艾小科  a écrit :

>
> Hello, I met a similar problem, and I have multiple installations of
> that library because several people share the same deep learning server.
> ImportError: cuDNN not available: Mixed dnn version. The header is from
> one version, but we link with a different version (5110,6021)
> On Friday, April 28, 2017 at 6:36:24 AM UTC+8, Michael Klachko wrote:
>
>> I had the same problem, and I solved it by overwriting all versions of
>> the files I could find, with the latest version. Also, I used CuDNN v6 with
>> the latest bleeding edge Theano, and it seems to work fine.
>>
>>
>> On Thursday, April 20, 2017 at 3:07:38 PM UTC-7, Robert wrote:
>>>
>>> I come from a Windows environment so I'm not familiar at all with the
>>> details of linux under the hood.  So I chose Anaconda Navigator since it
>>> makes installing packages and managing environments a lot easier.
>>>
>>> I did a search for 'cudnn.h' and 'libcudnn.so' and I did find those
>>> filenames in some anaconda directories.
>>> There is one cudnn.h in home/robert/anaconda3/pkgs/cudnn-5.1-0/include,
>>> this is not the only one.
>>> There is one libcudnn.so in home/robert/anaconda3/pkgs/cudnn-5.1-0/lib,
>>> this is not the only one.
>>> These are files installed by anaconda and they are in more than one
>>> place.  After I had installed cuda I copied the cudnn files using the
>>> commands:
>>> $sudo cp lib64/* /usr/local/cuda/lib64/
>>> $sudo cp include/* /usr/local/cuda/include/
>>>
>>> The revision given in the cudnn.h file that I copied using the command
>>> above is 5.1.5, but the revision that anaconda has in it's directories is
>>> 5.1.10.  It seems that anaconda actually has these files as part of it's
>>> package, and they are causing the conflict.
>>>
>>> In case it's useful, the following text is from the bottom of my .bashrc
>>> file:
>>>
>>> # added by Anaconda3 4.3.1 installer
>>> export PATH="/home/robert/anaconda3/bin:$PATH"
>>>
>>> # for cuda
>>> export PATH=/usr/local/cuda-8.0/bin:$PATH
>>> export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64:$LD_LIBRARY_PATH
>>>
>>> # for cudnn
>>> export LIBRARY_PATH=/usr/local/cuda/lib64
>>>
>>> The first export was added by the anaconda installer, and the other two
>>> were added by me after installing cuda.
>>>
>>> Do you see any way to fix the problem?
>>>
>>>
>>>
>>>
>>>
>>> On Wednesday, 19 April 2017 11:01:44 UTC-7, nouiz wrote:

 If after that, you still have the problem, search your filesystem for
 files like cudnn.h and libcudnn.so. There is another place where cudnn is
 installed, and it conflicts with your newly installed version.

 Fred

 On Tue, Apr 18, 2017 at 10:52 AM Robert Lee  wrote:

> Yes I copied the cudnn files using the following two commands:
> $sudo cp lib64/* /usr/local/cuda/lib64/
> $sudo cp include/* /usr/local/cuda/include/
>
> When I initially had this problem I purged cuda and the nvidia
> drivers, then I renamed the '/usr/local/cuda' and '/usr/local/cuda-8.0'
> directories and reinstalled cuda and nvidia.  This was to make sure that
> the files in these directories would only  come from the latest
> installation.
>
>
>
> On Monday, 17 April 2017 21:19:00 UTC-7, Jesse Livezey wrote:
>>
>> Sounds like the cudnn header and libraries are not consistent. When
>> you install cudnn, did you move all of the files into the correct cuda
>> folders?
>>
>> On Monday, April 17, 2017 at 8:30:03 PM UTC-7, Robert Lee wrote:
>>>
>>> I'm trying to get theano to work with keras.  My program runs fine
>>> with tensorflow but when I switch to theano I get the above error 
>>> message.
>>> My theano version is 0.9.0.  I'd appreciate any help in figuring this 
>>> out.
>>>


Re: [theano-users] cuDNN Mixed dnn version. The header is from one version, but we link with a different version(5110,6021)

2017-10-11 Thread Frédéric Bastien
Answered in the other email you sent about that.

On Tue, Oct 10, 2017 at 10:28, 艾小科  wrote:

> when I import theano, there is a problem.
> >>> import theano
> WARNING (theano.sandbox.cuda): The cuda backend is deprecated and will be
> removed in the next release (v0.10).  Please switch to the gpuarray
> backend. You can get more information about how to switch at this URL:
>
> https://github.com/Theano/Theano/wiki/Converting-to-the-new-gpu-back-end%28gpuarray%29
>
> Using gpu device 0: GeForce GTX 1080 (CNMeM is enabled with initial size:
> 50.0% of memory, cuDNN Mixed dnn version. The header is from one version,
> but we link with a different version (5110, 6021))
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Passing a np.float32 variable to user-defined GPUOP with a CUDAndArray variable receives unexpected values after decimal.

2017-10-11 Thread Frédéric Bastien
Python floats are always float64. So if you downcast them to float32, there
can be precision loss.

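For example, here is a quick NumPy check of that downcast (a minimal sketch
of my own; any integer-valued float above 2**24 loses its low bits in
float32):

import numpy as np

x = 16777217.0                # a Python float (64-bit), equal to 2**24 + 1
x32 = np.float32(x)           # downcast to 32-bit
print(x32)                    # 16777216.0 -- the last unit is lost
print(np.float64(x32) == x)   # False: the value changed in the downcast
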
Frédéric

On Sun, Oct 8, 2017 at 02:22, Adit Bhargav  wrote:

> Hello,
>
> I have a Python float32 variable. I am passing this variable to a GPU op
> written in Theano. The variable I am collecting in make_node of the Theano
> op is a CudaNdarray variable.
> I followed a C file to store my Apply-specific code, where I am collecting
> this variable as a CudaNdarray*.
>
> Then I pass it to a kernel to print its value. Unexpectedly, it takes the
> value up to the decimal point correctly, but after the decimal it adds
> some unknown digits. I don't know where they are coming from.
>
> For example:
>
> In Python if my value is 537633.0 , in CUDA kernel  I am getting
> as 537633.375000
> Similarly,  716264.0 as 716264.75
> 969777.0 as 969777.937500
> 963690.0 as 963690.875000
> and  602411.0 as  602411.812500
>
> I don't understand how CudaNdarray variables in my CUDA kernel get modified
> after the decimal point.
>
> Please let me know if anyone has any idea about this strange behaviour.
>
> Thanks !!
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Passing sequence of sparse matrices to scan

2017-09-25 Thread Frédéric Bastien
Sadly, we don't support what you want.

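If it helps, here is a minimal sketch of the dense workaround you describe
(my own example, not an official recipe): explicitly densify each sparse
matrix, stack the results into a 3-d tensor, and let scan iterate over the
first dimension.

import scipy.sparse as sp
import theano
import theano.tensor as T
from theano import sparse

s1 = sparse.csr_matrix('s1')
s2 = sparse.csr_matrix('s2')
# Explicit conversion to dense, then stack into a (2, n, m) tensor
seq = T.stack([sparse.dense_from_sparse(s1),
               sparse.dense_from_sparse(s2)])

out, _ = theano.scan(lambda m: m.sum(), sequences=seq)
f = theano.function([s1, s2], out)
print(f(sp.eye(3, format='csr'), sp.eye(3, format='csr')))  # [ 3.  3.]
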
Frédéric

On Thu, Sep 21, 2017 at 06:29 Fab  wrote:

>
> Hello,
> how could I pass a sequence of sparse matrices to theano.scan? I can't
> seem to find a Theano sparse tensor.
>
> Am I forced to convert to dense matrices and then create a tensor out of
> them so it will iterate on the first dimension?
>
>
> Thank you
> F.
>
>
>
>
>
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] AssertionError when adding Sparse to float

2017-09-15 Thread Frédéric Bastien
The problem is probably the + in
sparse.sqrt(new_a) + self.epsilon

Here you add a scalar to a sparse variable, which would make it non-sparse!
We force the user to make an explicit conversion from sparse to dense to
prevent unexpected memory growth. You can manually force the conversion of
the sparse variable like this:

sparse.dense_from_sparse(sparse.sqrt(new_a)) + self.epsilon

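A self-contained sketch of that fix (the variable names follow the snippet
above; epsilon is just a small constant here):

import numpy as np
import scipy.sparse as sp
import theano
from theano import sparse

new_a = sparse.csr_matrix('new_a')
epsilon = 1e-8
# dense_from_sparse makes the densification explicit, so the add is allowed
denom = sparse.dense_from_sparse(sparse.sqrt(new_a)) + epsilon
f = theano.function([new_a], denom)
print(f(sp.csr_matrix(np.array([[4.0, 0.0], [0.0, 9.0]]))))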

On Thu, Sep 14, 2017 at 9:35 PM Amir Alavi  wrote:

> I'm new to theano, and my research group is using it as the backend for
> Keras. We are using some Sparse matrices for our weights, and I wanted to
> use RMSprop as our optimizer, so I had to write my own to work with these
> Sparse matrices. However, I am running into errors that I don't understand.
> For example, here is the end of the Traceback:
>
>   File "/home/aalavi/single_cell_reducer/sparse_optimizers.py", line 83,
> in get_updates
> new_p = p - lr * g / (sparse.sqrt(new_a) + self.epsilon)
>   File
> "/home/aalavi/anaconda2/envs/scrna_new/lib/python3.6/site-packages/theano/sparse/basic.py"
> , line 225, in __add__
> return add(left, right)
>   File
> "/home/aalavi/anaconda2/envs/scrna_new/lib/python3.6/site-packages/theano/sparse/basic.py"
> , line 2174, in add
> return add_s_d(x, y)
>   File
> "/home/aalavi/anaconda2/envs/scrna_new/lib/python3.6/site-packages/theano/gof/op.py"
> , line 615, in __call__
> node = self.make_node(*inputs, **kwargs)
>   File
> "/home/aalavi/anaconda2/envs/scrna_new/lib/python3.6/site-packages/theano/sparse/basic.py"
> , line 2039, in make_node
> assert y.type.ndim == 2
> AssertionError
>
>
> To put into context, here is the part of the built-in RMSprop optimizer
> from Keras, which I am trying to get to work with Sparse:
>
> for p, g, a in zip(params, grads, accumulators):
> # update accumulator
> new_a = self.rho * a + (1. - self.rho) * K.square(g)
> self.updates.append(K.update(a, new_a))
> new_p = p - lr * g / (K.sqrt(new_a) + self.epsilon)
>
>
> # apply constraints
> if p in constraints:
> c = constraints[p]
> new_p = c(new_p)
> self.updates.append(K.update(p, new_p))
> return self.updates
>
> I originally had an error with the line:
> new_a = self.rho * a + (1. - self.rho) * K.square(g)
>
> and the error was:
>   File "/home/aalavi/single_cell_reducer/sparse_optimizers.py", line 73,
> in get_updates
> new_a = self.rho * a + (1. - self.rho) * K.square(g)
>   File
> "/home/aalavi/anaconda2/envs/scrna_new/lib/python3.6/site-packages/keras/backend/theano_backend.py"
> , line 472, in square
> return T.sqr(x)
>   File
> "/home/aalavi/anaconda2/envs/scrna_new/lib/python3.6/site-packages/theano/gof/op.py"
> , line 615, in __call__
> node = self.make_node(*inputs, **kwargs)
>   File
> "/home/aalavi/anaconda2/envs/scrna_new/lib/python3.6/site-packages/theano/tensor/elemwise.py"
> , line 576, in make_node
> inputs = list(map(as_tensor_variable, inputs))
>   File
> "/home/aalavi/anaconda2/envs/scrna_new/lib/python3.6/site-packages/theano/tensor/basic.py"
> , line 171, in as_tensor_variable
> "Variable type field must be a TensorType.", x, x.type)
> theano.tensor.var.AsTensorError: ('Variable type field must be a
> TensorType.', SparseVariable{csr,float32}, Sparse[float32, csr])
> I fixed this by using theano.sparse.sqr(g) in the calculation for new_a,
> but now I can't get past the error in calculating new_p, even after trying
> theano.sparse.sqrt(new_a) as above.
>
> I'd appreciate any help on this
>
> Is this similar to below?
> Discussion about comparing sparse to scalar:
> https://groups.google.com/d/topic/theano-users/sbKdzoWOCDI/discussion
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Plugging new optimser to Theano

2017-09-15 Thread Frédéric Bastien
I don't have time for that. You don't give enough information in your email
for other people to help you. So I would recommend that you send the full
code as an attachment, plus a code snippet of the important part in the email.

That way maybe someone else will have the time and would answer.

I would also recommend that you check the deep learning tutorial. It uses
SGD, but shows how it is done, so you can change the code for your
optimizer: http://deeplearning.net/tutorial/

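To illustrate the general pattern (a minimal sketch of my own, not tied to
any framework): an optimizer in Theano is just a list of
(shared_variable, update_expression) pairs passed to theano.function.

import numpy as np
import theano
import theano.tensor as T

w = theano.shared(np.zeros(3, dtype=theano.config.floatX), name='w')
x = T.vector('x')
cost = ((w - x) ** 2).sum()
grad = T.grad(cost, w)

# Replace this rule with your own optimizer's update formula.
lr = np.asarray(0.1, dtype=theano.config.floatX)
updates = [(w, w - lr * grad)]

train = theano.function([x], cost, updates=updates,
                        allow_input_downcast=True)
for _ in range(3):
    print(train(np.ones(3)))   # the cost decreases at each call
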
Frédéric

On Wed, Sep 13, 2017 at 7:12 PM  wrote:

> Hi everyone.
>
> I am pursuing a Masters and I am using Theano. I want to keep using Theano,
> but I don't want to use SGD. I have made an optimizer of my own. How do I
> plug it into Theano to train my model?
> Is it possible?
>
> Regards
> Bhaskar.
>
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Plugging new optimser to Theano

2017-09-14 Thread Frédéric Bastien
Hi,

Theano does not specify the optimizer to use. It is up to you to build your
own optimizer.

There are frameworks like Lasagne and Keras on top of Theano that provide
optimizers. If you use such a framework, you should ask on their mailing list
how to specify your own optimizer.

Frédéric

On Wed, Sep 13, 2017 at 7:12 PM  wrote:

> Hi everyone.
>
> I am pursuing a Masters and I am using Theano. I want to keep using Theano,
> but I don't want to use SGD. I have made an optimizer of my own. How do I
> plug it into Theano to train my model?
> Is it possible?
>
> Regards
> Bhaskar.
>
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Theano import * throws error

2017-09-08 Thread Frédéric Bastien
You probably have multiple copies of Theano installed. Uninstall all of them,
then try to import theano. Make sure Theano isn't found, then reinstall it.

Frédéric

On Sat, Sep 2, 2017 at 10:49 AM  wrote:

> Hi,
> I have installed Theano for the first time; when I import it, it always
> throws the following error. I have also tried updating six manually, but
> that didn't help. I have searched a lot but couldn't find a solution yet. Any
> help will be appreciated.
>
>
> from theano import *
> Traceback (most recent call last):
>   File "", line 1, in 
>   File "/usr/local/lib/python2.7/dist-packages/theano/__init__.py", line
> 80, in 
> from theano.scan_module import (scan, map, reduce, foldl, foldr, clone,
>   File
> "/usr/local/lib/python2.7/dist-packages/theano/scan_module/__init__.py",
> line 41, in 
> from theano.scan_module import scan_opt
>   File
> "/usr/local/lib/python2.7/dist-packages/theano/scan_module/scan_opt.py",
> line 60, in 
> from theano import tensor, scalar
> ImportError: cannot import name tensor
>
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] GPU convolutions implementation

2017-09-08 Thread Frédéric Bastien
Hi,

The fastest implementation is from cuDNN, which is closed source. But we have
one implementation based on GEMM that is open source. See the class
BaseGpuCorrMM
and its child classes:

https://github.com/Theano/Theano/blob/master/theano/gpuarray/blas.py#L444

Frédéric

On Fri, Sep 1, 2017 at 10:40 AM V  wrote:

> Hi,
> are *any of the several implementations* of the convolutional operations
> for GPU completely open source? Where exactly in the code does the
> convolutional operation happen? (probably written in CUDA)
> Thanks!
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] CUDNN_STATUS_INTERNAL_ERROR

2017-09-08 Thread Frédéric Bastien
Hi,

I strongly suggest that you update cuDNN to v7 (or at least v6). They fixed
some of those types of errors.

Updating Theano to 0.10.beta1 could also fix that, as we have workarounds
for some of those cases.

The ideal would be to update both.

If you update Theano to 0.10.beta2 or the dev version, you will need to
update libgpuarray version to 0.7.1

Frédéric

On Thu, Aug 31, 2017 at 1:56 PM  wrote:

> I installed theano 0.9.0-dev and lasagne 0.2.0 dev. When I run my code
> using CPU, it works. When I my code using GPU, it shows me error as below:
>
> sing cuDNN version 5110 on context None
> Preallocating 10295/11439 Mb (0.90) on cuda0
> Mapped name None to device cuda0: Tesla K80 (:04:00.0)
> Using Theano backend.
> Compiling...
> theano_layers.py:266: UserWarning: DEPRECATION: the 'ds' parameter is not
> going to exist anymore as it is going to be replaced by the parameter 'ws'.
>   impulse = pool.pool_2d(inp, ds=self.poolsize, st=self.stride,
> ignore_border=self.ignore_border, mode='average_inc_pad')
> Compiled.
> Running network...
> Traceback (most recent call last):
>
>   File "", line 1, in 
>
> runfile('/var/home/xzhang/code_repository/network_08_03/network_07_20/relulastlayer/test_convnet_binary_bias3.py',
> wdir='/var/home/xzhang/code_repository/network_08_03/network_07_20/relulastlayer')
>
>   File
> "/var/home/xzhang/anaconda2/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py",
> line 880, in runfile
> execfile(filename, namespace)
>
>   File
> "/var/home/xzhang/anaconda2/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py",
> line 94, in execfile
> builtins.execfile(filename, *where)
>
>   File
> "/var/home/xzhang/code_repository/network_08_03/network_07_20/relulastlayer/test_convnet_binary_bias3.py",
> line 161, in 
> main(**kargs)
>
>   File
> "/var/home/xzhang/code_repository/network_08_03/network_07_20/relulastlayer/test_convnet_binary_bias3.py",
> line 107, in main
> dt=dt, max_rate=1000, proc_fn=get_output,  reset_fn=final_dense)
>
>   File "spike_tester_theano.py", line 128, in run_tester
> out_mem, t, Ntransmittedspikes, conv1_spikes, conv2_spikes,
> conv3_spikes = proc_fn(inp_images.astype('float32'), float(t))
>
>   File
> "/var/home/xzhang/anaconda2/lib/python2.7/site-packages/theano/compile/function_module.py",
> line 898, in __call__
> storage_map=getattr(self.fn, 'storage_map', None))
>
>   File
> "/var/home/xzhang/anaconda2/lib/python2.7/site-packages/theano/gof/link.py",
> line 325, in raise_with_op
> reraise(exc_type, exc_value, exc_trace)
>
>   File
> "/var/home/xzhang/anaconda2/lib/python2.7/site-packages/theano/compile/function_module.py",
> line 884, in __call__
> self.fn() if output_subset is None else\
>
> RuntimeError: error doing operation: CUDNN_STATUS_INTERNAL_ERROR
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] I cann't import theano,help me please.(On ArchLinux)

2017-09-08 Thread Frédéric Bastien
Hi,

For about two weeks, Theano master wasn't working with the master of
libgpuarray. Now this has been fixed.

So update Theano again and it should work. Or downgrade libgpuarray to
0.6.9.

Theano 0.10.beta1 supports libgpuarray 0.6.*
Theano 0.10.beta2 supports libgpuarray 0.7.*

If you use a dev version in between, you must select the right one depending
on the commit.

Frédéric

On Sat, Sep 2, 2017 at 1:03 PM Ailick Guo  wrote:

> I installed pygpu and cuda (with nccl) etc.. but:
> [ailick@Ailick_Mj Build]$ DEVICE=cuda python -c "import
> pygpu;pygpu.test()"
> pygpu is installed in
> /usr/lib/python3.6/site-packages/pygpu-0.7.1-py3.6-linux-x86_64.egg/pygpu
> NumPy version 1.13.1
> NumPy relaxed strides checking option: True
> NumPy is installed in /usr/lib/python3.6/site-packages/numpy
> Python version 3.6.2 (default, Jul 20 2017, 03:52:27) [GCC 7.1.1 20170630]
> nose version 1.3.7
> *** Testing for GeForce GTX 760M
> mpi4py found: False
>
> 

Re: [theano-users] theano mean for certain indices in a vector

2017-09-08 Thread Frédéric Bastien
You must make your indices integer, like `B = T.ivector('B')`.

With that, you can do indexing to select just the values you want:

A[B]

Then compute the mean of them:

C = A[B].mean()

If E should be a vector, you can concatenate C and D like this:

theano.tensor.concatenate([C, D])

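Putting it together (a minimal sketch of my own; note that C is a 0-d
scalar, so I reshape it to a length-1 vector before concatenating it with D):

import theano
import theano.tensor as T

A = T.vector('A')
B = T.ivector('B')             # indices must be integers
D = T.vector('D')

C = A[B].mean()                # mean of the selected entries of A
E = T.concatenate([C.dimshuffle('x'), D])   # prepend C to D

func = theano.function([A, B, D], E, allow_input_downcast=True)
print(func([1.0, 2.0, 3.0, 4.0], [0, 2], [10.0, 20.0]))   # [ 2. 10. 20.]
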
Frédéric

On Mon, Sep 4, 2017 at 3:36 AM Shadekur Rahman 
wrote:

> I am quite new in theano. I am having problem to implement the following:
>
> import theano.tensor as T
> A = T.vector('A')
> B = T.vector('B') #represents list of indices of A
> C = T.scalar('C') #represents mean of A for certain indices stored in B
> D = T.vector('D')
> E = T.vector('E')
> # E should be concatenation of C and D with length (D.length+1)
> func = theano.function(inputs=[A,B,D],outputs=E)
>
> Can anyone give me idea how to calculate C and E?
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Re: Announcing Theano 0.10.0beta1

2017-09-08 Thread Frédéric Bastien
Yes, we continue to support Python 2.7.

Frédéric

On Sat, Sep 2, 2017 at 12:41 AM Jim Goodwin  wrote:

> Hi,
> Does this new release, Theano 0.10.0beta1, work with Python v2.7.13?
>
> I'm on Win7, using Theano with Keras. 2.0.8.
>
> (Yes, I'd  rather be using the newer Python, but also need to stay
> compatible
> with some others who are using old Python on a Mac.)
>
> Thanks
> Jim
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] MemoryError: alloc failed and Segmentation fault

2017-09-08 Thread Frédéric Bastien
For the alloc, Theano tries to allocate 5 GB for one node in the graph. So
even if you have 64 GB total on the computer, you will need way more than
5 GB.

Try to use smaller minibatches or lower other memory usage. You are really
using too much memory.

You could use scan_checkpoints, which is designed to lower the memory usage
by duplicating some computation. It is particularly useful for a very long
sequence like the one you have:

http://deeplearning.net/software/theano/library/scan.html#reducing-scan-s-memory-usage

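A minimal sketch of scan_checkpoints on a toy loop (my own example; the
save_every_N argument keeps only every N-th intermediate state and recomputes
the rest during the gradient pass):

import numpy as np
import theano
import theano.tensor as T

x0 = T.vector('x0')
w = T.scalar('w')

def step(prev, w):
    return prev * w            # one iteration of the loop

out, _ = theano.scan_checkpoints(step,
                                 outputs_info=[x0],
                                 non_sequences=[w],
                                 n_steps=1000,
                                 save_every_N=100)
cost = out[-1].sum()
g = T.grad(cost, w)
f = theano.function([x0, w], [cost, g], allow_input_downcast=True)
print(f(np.ones(3), 1.001))
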
The segfault could be caused by running out of memory; we hit that problem in
another place, but don't handle it well there. Updating
Theano could help fix that if you are using Theano 0.9: use the dev
version. Lowering the memory usage could also fix the segfault if my
assumption is right.

Frédéric

On Tue, Sep 5, 2017 at 3:17 AM roman.foell via theano-users <
theano-users@googlegroups.com> wrote:

> Hello,
>
> I am running two programs in Python/Theano where the data set, with around
> 65000 entries, is quite huge.
> For one of these programs I get the error below:
>
> Apply node that caused the error: Alloc(TensorConstant{(1, 1, 1) of 0.0},
> TensorConstant{65530}, TensorConstant{150}, TensorConstant{150})
> Toposort index: 26
> Inputs types: [TensorType(float64, (True, True, True)), TensorType(int64,
> scalar), TensorType(int64, scalar), TensorType(int64, scalar)]
> Inputs shapes: [(1, 1, 1), (), (), ()]
> Inputs strides: [(8, 8, 8), (), (), ()]
> Inputs values: [array([[[ 0.]]]), array(65530), array(150), array(150)]
> Outputs clients: [[IncSubtensor{InplaceInc;int64}(Alloc.0, Elemwise{
> Composite{(i0 * (((i1 + i2) * i3) - i4) * i5 * i6)}}[(0, 3)].0, Constant{-
> 1}), IncSubtensor{Inc;int64}(Alloc.0, Elemwise{Composite{(i0 * (((i1 + i2)
> * i3) - i4) * i5 * i6)}}[(0, 4)].0, Constant{-1})]]
>
> Backtrace when the node is created(use Theano flag traceback.limit=N to
> make it longer):
>   File
> "/home/flo9fe/anaconda2/lib/python2.7/site-packages/theano/gradient.py",
> line 1272, in access_grad_cache
> term = access_term_cache(node)[idx]
>   File
> "/home/flo9fe/anaconda2/lib/python2.7/site-packages/theano/gradient.py",
> line 967, in access_term_cache
> output_grads = [access_grad_cache(var) for var in node.outputs]
>   File
> "/home/flo9fe/anaconda2/lib/python2.7/site-packages/theano/gradient.py",
> line 1272, in access_grad_cache
> term = access_term_cache(node)[idx]
>   File
> "/home/flo9fe/anaconda2/lib/python2.7/site-packages/theano/gradient.py",
> line 967, in access_term_cache
> output_grads = [access_grad_cache(var) for var in node.outputs]
>   File
> "/home/flo9fe/anaconda2/lib/python2.7/site-packages/theano/gradient.py",
> line 1272, in access_grad_cache
> term = access_term_cache(node)[idx]
>   File
> "/home/flo9fe/anaconda2/lib/python2.7/site-packages/theano/gradient.py",
> line 967, in access_term_cache
> output_grads = [access_grad_cache(var) for var in node.outputs]
>   File
> "/home/flo9fe/anaconda2/lib/python2.7/site-packages/theano/gradient.py",
> line 1272, in access_grad_cache
> term = access_term_cache(node)[idx]
>   File
> "/home/flo9fe/anaconda2/lib/python2.7/site-packages/theano/gradient.py",
> line 1108, in access_term_cache
> new_output_grads)
>
>
>
> and on another machine
>
> Segmentation fault
>
>
> Actually I have in the other program, which works fine, a theano.scan
> loop, which is of the same size and which should produce also a tensor
> with  size (65000,150,150).
> I'm working on a machine with 64 GB, so the alloc should not be a problem
> I think.
>
> I also tried to set ulimit -s unlimited, which didn't work so far.
>
> The code which I think produces the error is of the form
>
> EPhiTPhi = np.zeros((150,150))
> loop = np.int32(-1)
> def EPhiTPhi_loop(..):
> EPhiTPhi = some calculations to produce a 150 times 150 matrix
> return EPhiTPhi
>
> result, _ = theano.scan(EPhiTPhi_loop,
>   outputs_info = [EPhiTPhi],
>   n_steps = 65000,
>   non_sequences = [...])
>
> EPhiTPhi_out = result[-1][-1]
>
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Re: Significant increase in GPU memory consumption with new GPU backend

2017-08-30 Thread Frédéric Bastien
What is the name of the flag you used? The name changed with the new
back-end.

Make sure to use the GitHub version, not a tagged version.

Frédéric

On Wed, Aug 30, 2017 at 11:20 AM Anton Murashov <muras...@qst.hk> wrote:

> Actually, initially I tried theano-0.10-dev-0b1 or something like this,
> which appears to be the most recent dev version; I later reinstalled
> theano-0.9, which is part of the Anaconda package.
>
> As per preallocate flag I tried following options:
>
> (a) 1 and 0 (big problems crash with OutOfMem, some problems work
> initially but crash with OutOfMem if fit is restarted after kernel
> interrupt).
>
> (b) -1 (model.fit crashes on problem of any size (even which work in (a)
> initially) with invalid argument error in cuMemAlloc) --> this one appears
> to be an outright bug.
>
> Should I open github ticket?
>
> On 30 Aug 2017 5:59 pm, "Frédéric Bastien" <frederic.bast...@gmail.com>
> wrote:
>
>> Update to the Theano dev version. There are many updates that could help
>> you.
>>
>> If that doesn't fix your problem, open an issue on GitHub.
>>
>> For preallocation, which flag do you use?
>>
>> On Tue, Aug 29, 2017 at 8:30 PM Anton Murashov <muras...@qst.hk> wrote:
>>
>>> Hello all!
>>>
>>> I have a very similar problem with the new gpuarray backend; it has the
>>> following undesired behaviour:
>>>
>>> (a) with preallocation turned ON (any value above and including zero) it
>>> crashes with cuMemAlloc error (OutOfMemory) on problem of my size (smaller
>>> problems work)
>>> (b) with preallocation turned ON and if small problem is being fitted -
>>> interrupting the kernel and restarting results in cuMemAlloc error
>>> (OutOfMemory)
>>> (c) with preallocation turned OFF (preallocation=-1) it does not even
>>> start fitting with cuMemAlloc error (invalid argument!!! NOT
>>> OutOfMemory)
>>>
>>> GpuArrayException: ('The following error happened while compiling the
>>> node', forall_inplace,gpu,grad_of_scan_fn}(TensorConstant{1000},
>>> GpuSubtensor{int64:int64:int64}.0, GpuElemwise{Composite{(i0 -
>>> sqr(i1))}}[].0, GpuElemwise{tanh,no_inplace}.0,
>>> InplaceGpuDimShuffle{0,2,1}.0, GpuAlloc{memset_0=True}.0,
>>> GpuSubtensor{int64:int64:int64}.0, GpuSubtensor{int64:int64:int64}.0,
>>> GpuSubtensor{int64:int64:int64}.0, GpuAlloc{memset_0=True}.0,
>>> GpuAlloc{memset_0=True}.0, GpuAlloc{memset_0=True}.0,
>>> TensorConstant{1000}, GpuSubtensor{::, int64:int64:}.0,
>>> InplaceGpuDimShuffle{1,0}.0, GpuSubtensor{::, :int64:}.0, GpuSubtensor{::,
>>> int64::}.0, InplaceGpuDimShuffle{1,0}.0, GpuSubtensor{::, int64:int64:}.0,
>>> InplaceGpuDimShuffle{1,0}.0, InplaceGpuDimShuffle{1,0}.0,
>>> GpuAlloc{memset_0=True}.0), '\n', 'cuMemAlloc:
>>> CUDA_ERROR_INVALID_VALUE: invalid argument')
>>>
>>> Needless to say, on old backend all works fine, just 20% slower (on
>>> problems which actually start fitting on both backends). I use versions
>>> currently supplied with Anaconda (theano-0.9, libgpuarray 0.6.9, pygpu
>>> 0.6.9)
>>>
>>> On Tuesday, July 11, 2017 at 3:23:44 AM UTC+2, Pascal Lamblin wrote:
>>>>
>>>> On Monday, July 10, 2017 at 2:42:39 AM UTC-4, Fabian Stemmer wrote:
>>>>>
>>>>> Thanks, by setting gpuarray.preallocate=-1 I now get similar behavior
>>>>> for the new backend as for the old.
>>>>>
>>>>> Do I understand correctly, that leaving preallocate at default
>>>>> behavior (new backend) will not result in higher memory consumption, but
>>>>> merely doesn't free memory once allocated, so what I see in nvidia-smi is
>>>>> max-memory consumption up to this point?
>>>>>
>>>>
>>>> Not really, it can actually result in higher memory consumption due to
>>>> the way new memory blocks are allocated. For instance, in the worse case,
>>>> if a tensor of 1 MB gets allocated and deallocated, then a 2 MB tensor, a
>>>> new 2 MB block will be added to the pool, however it will not be mergeable
>>>> with the first one, and if it gets freed, a 3 MB tensor cannot be "split"
>>>> between the first blocks. Due to that fragmentation effect, allocating /
>>>> deallocating 1 MB, then 2 MB, 3 MB, etc., will end up using 1 + 2 + 3 + ...
>>>> MB total on the GPU.
>>>>
>>>>
>>>>> A related question: When I run with profile=True,profile_memory=True -
>>

Re: [theano-users] Re: Significant increase in GPU memory consumption with new GPU backend

2017-08-30 Thread Frédéric Bastien
Update to the Theano dev version. There are many updates that could help you.

If that doesn't fix your problem, open an issue on GitHub.

For preallocation, which flag do you use?

On Tue, Aug 29, 2017 at 8:30 PM Anton Murashov  wrote:

> Hello all!
>
> I have a very similar problem with the new gpuarray backend; it has the
> following undesired behaviour:
>
> (a) with preallocation turned ON (any value above and including zero) it
> crashes with cuMemAlloc error (OutOfMemory) on problem of my size (smaller
> problems work)
> (b) with preallocation turned ON and if small problem is being fitted -
> interrupting the kernel and restarting results in cuMemAlloc error
> (OutOfMemory)
> (c) with preallocation turned OFF (preallocation=-1) it does not even
> start fitting with cuMemAlloc error (invalid argument!!! NOT
> OutOfMemory)
>
> GpuArrayException: ('The following error happened while compiling the
> node', forall_inplace,gpu,grad_of_scan_fn}(TensorConstant{1000},
> GpuSubtensor{int64:int64:int64}.0, GpuElemwise{Composite{(i0 -
> sqr(i1))}}[].0, GpuElemwise{tanh,no_inplace}.0,
> InplaceGpuDimShuffle{0,2,1}.0, GpuAlloc{memset_0=True}.0,
> GpuSubtensor{int64:int64:int64}.0, GpuSubtensor{int64:int64:int64}.0,
> GpuSubtensor{int64:int64:int64}.0, GpuAlloc{memset_0=True}.0,
> GpuAlloc{memset_0=True}.0, GpuAlloc{memset_0=True}.0,
> TensorConstant{1000}, GpuSubtensor{::, int64:int64:}.0,
> InplaceGpuDimShuffle{1,0}.0, GpuSubtensor{::, :int64:}.0, GpuSubtensor{::,
> int64::}.0, InplaceGpuDimShuffle{1,0}.0, GpuSubtensor{::, int64:int64:}.0,
> InplaceGpuDimShuffle{1,0}.0, InplaceGpuDimShuffle{1,0}.0,
> GpuAlloc{memset_0=True}.0), '\n', 'cuMemAlloc:
> CUDA_ERROR_INVALID_VALUE: invalid argument')
>
> Needless to say, on old backend all works fine, just 20% slower (on
> problems which actually start fitting on both backends). I use versions
> currently supplied with Anaconda (theano-0.9, libgpuarray 0.6.9, pygpu
> 0.6.9)
>
> On Tuesday, July 11, 2017 at 3:23:44 AM UTC+2, Pascal Lamblin wrote:
>>
>> On Monday, July 10, 2017 at 2:42:39 AM UTC-4, Fabian Stemmer wrote:
>>>
>>> Thanks, by setting gpuarray.preallocate=-1 I now get similar behavior
>>> for the new backend as for the old.
>>>
>>> Do I understand correctly, that leaving preallocate at default behavior
>>> (new backend) will not result in higher memory consumption, but merely
>>> doesn't free memory once allocated, so what I see in nvidia-smi is
>>> max-memory consumption up to this point?
>>>
>>
>> Not really, it can actually result in higher memory consumption due to
>> the way new memory blocks are allocated. For instance, in the worse case,
>> if a tensor of 1 MB gets allocated and deallocated, then a 2 MB tensor, a
>> new 2 MB block will be added to the pool, however it will not be mergeable
>> with the first one, and if it gets freed, a 3 MB tensor cannot be "split"
>> between the first blocks. Due to that fragmentation effect, allocating /
>> deallocating 1 MB, then 2 MB, 3 MB, etc., will end up using 1 + 2 + 3 + ...
>> MB total on the GPU.
>>
>>
>>> A related question: When I run with profile=True,profile_memory=True -
>>> shouldn't the max GPU memory stat in the profiling correspond to what I see
>>> in nvidia-smi when I run with preallocate on default?
>>>
>>
>> Again, not really, due to that fragmentation effect.
>>
>>
>>> Currently, I see ~400MB GPU memory usage in profiling and that's what I
>>> see with preallocate=-1 too (although I can't guarantuee there aren't
>>> higher spikes that I don't see with nvidia-smi). When I leave preallocate
>>> at default, I see GPU memory usage ~2GB (but the profiling still reports
>>> only 400MB).
>>>
>>
>> Preallocating 400 or 500 MB may avoid fragmentation and bring the total
>> consumption peak closer to what is actually allocated to arrays.
>>
>>
>>>
>>> Thanks
>>> Fabian
>>>
>>> On Thursday, June 22, 2017 at 3:45:07 PM UTC+2, nouiz wrote:

 The equivalent to the old back-end setting for memory is:
 gpuarray.preallocate=-1.

 The new back-end by default will cache all call to cudaMalloc() to
 speed up computation. This flag will disable this cache. THis is the same
 default as the old back-end.

 On Thu, Jun 22, 2017 at 9:41 AM Fabian Stemmer 
 wrote:

> When I did use preallocation I used lib.cnmem=1 for theano 0.8.2 and
> gpuarray.preallocate=1 for theano 0.9.0 and 0.10.dev.
> For most experiments (including those in the log files) I did not use
> preallocation, because the only way I could see the difference in memory
> usage was through nvidia-smi, which only shows the static pre-allocation
> when it is used.
> I believe the problem does not disappear with pre-allocation, since I
> see my training crash for much smaller models with the new backend even
> then. However, I cannot measure the effect of switching backends on GPU
> memory when I use preallocation.
>
>
> On Thursday, 

Re: [theano-users] ImageNet ILSVRC2012 training and validation data sets

2017-08-28 Thread Frédéric Bastien
Thanks for forwarding the confirmation.

Frédéric

On Mon, Aug 28, 2017 at 2:54 PM ephi5757 via theano-users <
theano-users@googlegroups.com> wrote:

> FYI, see email below.
>
>
>
> - Forwarded Message -
>
> From: E Park 
>
> To:  
>
> Cc: ImageNet Support ; ilsvrc2...@image-net.org <
> ilsvrc2...@image-net.org>
>
> Sent: Thursday, August 24, 2017 03:48:08 PM
>
> Subject: Re: ImageNet ILSVRC2012 validation dataset
>
>
>
> Hi, validation dataset is not overlapped with training images. Thanks!
>
>
>
> Best Regards,
>
> E Park
>
>
> On Monday, August 28, 2017 at 9:06:02 AM UTC-4, nouiz wrote:
>
>> I haven't used this dataset myself recently. But it would be a very big
>> error if the validation set were a subset of the training set. This should
>> never be the case.
>>
>> Fred
>>
>> On Wed, Aug 23, 2017 at 18:47 ephi5757 via theano-users <
>> theano...@googlegroups.com> wrote:
>>
> Is the ImageNet ILSVRC2012 validation dataset (50,000 images) a subset of
>>> the training dataset (1.2 million images) or are the validation
>>> images new/independent of the training images?
>>> .
>>> Best,
>>> Arnold
>>>
>>> --
>>>
>>> ---
>>> You received this message because you are subscribed to the Google
>>> Groups "theano-users" group.
>>>
>> To unsubscribe from this group and stop receiving emails from it, send an
>>> email to theano-users...@googlegroups.com.
>>
>>
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] ImageNet ILSVRC2012 training and validation data sets

2017-08-28 Thread Frédéric Bastien
I haven't used this dataset myself recently. But it would be a very big
error if the validation set were a subset of the training set. This should
never be the case.

Fred

On Wed, Aug 23, 2017 at 18:47 ephi5757 via theano-users <
theano-users@googlegroups.com> wrote:

> Is the ImageNet ILSVRC2012 validation dataset (50,000 images) a subset of
> the training dataset (1.2 million images) or are the validation
> images new/independent of the training images?
> .
> Best,
> Arnold
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Differences between theano.shared and numpy.ndarray.

2017-08-28 Thread Frédéric Bastien
It is not a bug. Currently, by default, the shape of a shared variable can
change during execution. So we can't use the shape to determine the
broadcasting pattern.

You can pass the parameter broadcastable=(...) with the broadcast pattern
you want. By default, no dimension is broadcastable.

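For instance, a minimal sketch: declaring the trailing axis as broadcastable
makes the shared variable behave like the NumPy array in your example.

import numpy as np
import theano

o = np.ones((1, 2, 3))
# Mark the second axis (of size 1) as broadcastable:
o2_shared = theano.shared(np.ones((2, 1)), broadcastable=(False, True))
print((o2_shared + o).shape.eval())   # [1 2 3], matching NumPy
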
Fred

On Fri, Aug 25, 2017 at 00:58 佐藤優  wrote:

> I saw the following differences:
> import theano
> import theano.tensor as T
> import numpy as np
>
> o = np.ones((1,2,3))
> o2= np.ones((2,1))
> o2_shared = theano.shared(np.ones((2, 1)))
>
> print((o2 + o).shape)
> print((o2_shared + o).shape)
>
> result is
>
> (1, 2, 3)
> [1 2 1]
>
>
> Maybe the broadcasting result is different.
>
> But changing the order of calculation:
> import theano
> import theano.tensor as T
> import numpy as np
>
>
> o = np.ones((1,2,3))
> o2= np.ones((2,1))
> o2_shared = theano.shared(np.ones((2, 1)))
>
> print((o + o2).shape)
> print((o + o2_shared).shape.eval())
>
>
>
> result is same as follows:
>
> (1, 2, 3)
> [1 2 3]
>
>
> Is this a theano.shared bug?
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Using multiple GPUs

2017-08-22 Thread Frédéric Bastien
The first one is probably faster most of the time.

But this interface isn't well supported in Theano. It is still
experimental and has known crash cases.

On Tue, Aug 22, 2017 at 12:33 Alfred Ferrer Florensa 
wrote:

> Hello,
>
> I am not sure where to ask this, but here seems the most appropriate place.
> I have a little question about how multiple GPUs work: would this option
> be equally fast (the one from the example at
> http://deeplearning.net/software/theano/tutorial/using_multi_gpu.html):
>
> import numpy
> import theano
>
> v01 = theano.shared(numpy.random.random((1024, 1024)).astype('float32'),
>                     target='dev0')
> v02 = theano.shared(numpy.random.random((1024, 1024)).astype('float32'),
>                     target='dev0')
> v11 = theano.shared(numpy.random.random((1024, 1024)).astype('float32'),
>                     target='dev1')
> v12 = theano.shared(numpy.random.random((1024, 1024)).astype('float32'),
>                     target='dev1')
>
> f = theano.function([], [theano.tensor.dot(v01, v02),
>                          theano.tensor.dot(v11, v12)])
> f()
>
> Or this one where I am using the same device for both dots:
>
> import numpy
> import theano
>
> v01 = theano.shared(numpy.random.random((1024, 1024)).astype('float32'),
>                     target='dev0')
> v02 = theano.shared(numpy.random.random((1024, 1024)).astype('float32'),
>                     target='dev0')
> v11 = theano.shared(numpy.random.random((1024, 1024)).astype('float32'),
>                     target='dev0')
> v12 = theano.shared(numpy.random.random((1024, 1024)).astype('float32'),
>                     target='dev0')
>
> f = theano.function([], [theano.tensor.dot(v01, v02),
>                          theano.tensor.dot(v11, v12)])
> f()
>
>
> Thanks for your time
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] ValueError: dimension mismatch in x,y_idx arguments

2017-08-22 Thread Frédéric Bastien
To get a better error message from Theano, disable the GPU and use this flag:
optimizer=fast_compile

That way, Theano will probably give you a stack trace pointing to where you
created the computation that causes the problem.

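For example, the flags can be set from Python before Theano is imported (a
sketch; they can equally go on the command line or in .theanorc):

import os
# Must be set before theano is imported:
os.environ['THEANO_FLAGS'] = 'device=cpu,optimizer=fast_compile'
import theano
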
On Mon, Aug 21, 2017 at 19:15 ephi5757 via theano-users <
theano-users@googlegroups.com> wrote:

> Hi Frederic,
>   I am pre-processing the image data again to regenerate the training
> and validation .hkl image files. I found in my code (alexnet\train.py) that
> the program crashes before it completes the first iteration, i.e., as it
> looks at the first of 5003 minibatches. In order to make room on my
> external solid state hard drive, I deleted the training and validation file
> folders named train_(or val_)hkl_b256_b_128, which I don't think are used
> but take up 237GB of space... and kept the folders named train_(or
> val_)hkl_b256_b_256. Perhaps in another day or two when the 1.2 M images
> are reshaped into 5003 files each containing 256 images that are size (256
> x 256)... then I can try to run the train.py again and see if the errors
> correct themselves.
>  This may have been my mistake for wanting to save space for my neural
> net model output (weights and biases).
> Best,
> Arnold
>
> On Wednesday, August 16, 2017 at 10:00:43 PM UTC-4, nouiz wrote:
>
>> I think the problem is the values in the index vector. Double check that.
>>
>> Frédéric
>>
>> On Wed, Aug 16, 2017 at 5:49 PM ephi5757 via theano-users <
>> theano...@googlegroups.com> wrote:
>>
> I'm retraining my implementation of the neural network model AlexNet in
>>> Theano, and not long after it initializes, the program crashes with the
>>> error "ValueError: dimension mismatch in x,y_idx arguments." See the
>>> traceback below.
>>> Any comments or suggestions that you may offer would be helpful. Note
>>> that the only discernible difference in this training in comparison to the
>>> previous one is that I am using 5003 .hkl training image data files instead
>>> of 5004. Nevertheless, I don't think this value needs to be fixed.
>>> Looking forward to your reply.
>>> Arnold
>>> ___.
>>>
>>>
>>> C:\SciSoft\Git\theano_alexnet>python train.py
>>> THEANO_FLAGS=mode=FAST_RUN, floatX=float32
>>> Using gpu device 0: Quadro K4000M (CNMeM is disabled, CuDNN 3007)
>>> Using gpu device 0: Quadro K4000M (CNMeM is disabled, CuDNN 3007)
>>> ... building the model
>>> conv (cudnn) layer with shape_in: (3, 227, 227, 1)
>>> conv (cudnn) layer with shape_in: (96, 27, 27, 1)
>>> conv (cudnn) layer with shape_in: (256, 13, 13, 1)
>>> conv (cudnn) layer with shape_in: (384, 13, 13, 1)
>>> conv (cudnn) layer with shape_in: (384, 13, 13, 1)
>>> fc layer with num_in: 9216 num_out: 4096
>>> dropout layer with P_drop: 0.5
>>> fc layer with num_in: 4096 num_out: 4096
>>> dropout layer with P_drop: 0.5
>>> softmax layer with num_in: 4096 num_out: 1000
>>> ... training
>>>
>>>
>>> __.
>>> Traceback (most recent call last):
>>>   File
>>> "C:\SciSoft\WinPython-64bit-2.7.9.4\python-2.7.9.amd64\lib\multiprocessing\process.py",
>>> line 266, in _bootstrap
>>> self.run()
>>>   File
>>> "C:\SciSoft\WinPython-64bit-2.7.9.4\python-2.7.9.amd64\lib\multiprocessing\process.py",
>>> line 120, in run
>>> self._target(*self._args, **self._kwargs)
>>>   File "C:\SciSoft\Git\theano_alexnet\train.py", line 128, in train_net
>>> recv_queue=load_recv_queue)
>>>   File "C:\SciSoft\Git\theano_alexnet\train_funcs.py", line 171, in
>>> train_model_wrap
>>> cost_ij = train_model()
>>>   File "c:\scisoft\git\theano\theano\compile\function_module.py", line
>>> 871, in __call__
>>> storage_map=getattr(self.fn, 'storage_map', None))
>>>   File "c:\scisoft\git\theano\theano\gof\link.py", line 314, in
>>> raise_with_op
>>> reraise(exc_type, exc_value, exc_trace)
>>>   File "c:\scisoft\git\theano\theano\compile\function_module.py", line
>>> 859, in __call__
>>> outputs = self.fn()
>>>
>>> ValueError: dimension mismatch in x,y_idx arguments
>>> Apply node that caused the error:
>>> GpuCrossentropySoftmaxArgmax1HotWithBias(GpuDot22.0,
>>> , GpuFromHost.0)
>>> Toposort index: 298
>>> Inputs types: [CudaNdarrayType(float32, matrix),
>>> CudaNdarrayType(float32, vector), CudaNdarrayType(float32, vector)]
>>> Inputs shapes: [(256, 1000), (1000,), (1,)]
>>> Inputs strides: [(1000, 1), (1,), (0,)]
>>> Inputs values: ['not shown', 'not shown', CudaNdarray([ 275.])]
>>> Outputs clients:
>>> [[GpuCAReduce{add}{1}(GpuCrossentropySoftmaxArgmax1HotWithBias.0)],
>>> [GpuCrossentropySoftmax1HotWithBiasDx(GpuElemwise{Inv}[(0, 0)].0,
>>> GpuCrossentropySoftmaxArgmax1HotWithBias.1, GpuFromHost.0)], []]
>>> .
>>> _.
>>>
>>> --
>>>
>>> ---
>>> You received this message because you are subscribed to the Google
>>> Groups "theano-users" group.
>>>
>> To 

Re: [theano-users] Get the name of the gpu device used in Theano

2017-08-22 Thread Frédéric Bastien
This gives you the string we print:

theano.gpuarray.init_dev.devmap['cuda0'].devname

example string: "GeForce GTX 750"

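A short usage sketch (this assumes the device was initialized as cuda0, e.g.
with THEANO_FLAGS=device=cuda0):

import theano

ctx = theano.gpuarray.init_dev.devmap['cuda0']
print(ctx.devname)   # e.g. "GeForce GTX 1070"
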
Frédéric

On Sun, Aug 13, 2017 at 12:41 PM Geppetto Null 
wrote:

> I am interested in getting the name of the gpu device I use. That is, when
> I import theano, I get the following message:
>
>
>
>
> In [1]: import theano
> Using cuDNN version 5110 on context None
> Mapped name None to device cuda0: GeForce GTX 1070 (:01:00.0)
>
>
> I would like to get a string with the name of the gpu (i.e., "GeForce GTX
> 1070"), or just the whole line as shown above. I tried
> theano.config.device, but it contains just the 'cuda0'.
>
> Many thanks,
> Christos
>
>
>
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Re: GpuElemwise working with old backend but not with new ?

2017-08-17 Thread Frédéric Bastien
Thanks. I don't know when @abergeron can check that. You can probably work
around it by introducing a cast to floatX before the log10. Can you make an
issue on GitHub so we don't lose track of this?

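The work-around would look like this (a sketch, untested on the affected
setup):

import theano
import theano.tensor as T

a = T.ivector('a')
# Cast the integer input to floatX before taking the log:
fun = theano.function([a], T.log10(T.cast(a, theano.config.floatX)))
print(fun([1, 10, 100]))   # [0. 1. 2.]
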
On Wed, Aug 16, 2017 at 11:00 PM Rodolphe Cambier 
wrote:

> I was able to pinpoint the problem to this part:
>
> This code does not work:
> import theano
> import theano.tensor as T
>
> a = T.ivector()
> fun = theano.function([a], T.log10(a))
>
> And this code does:
> import theano
> import theano.tensor as T
>
> a = T.vector()
> fun = theano.function([a], T.log10(a))
>
> So basically it is defining the vector as int32 that crashes
> GpuElemwise.
> GpuElemwise.
> And I really don't know why.
>
On Thursday, August 17, 2017 at 01:17:11 UTC+2, Rodolphe Cambier wrote:
>>
>> Hello,
>>
>> I have the same code running on two computers, one with the old backend
>> and one with the new one. The code is the following:
>>
>> import lasagne
>> import theano
>> import theano.tensor as T
>> import lasagne.layers as ll
>>
>> max_length = 1000
>> learning_rate = .1
>>
>>
>> l_in = ll.InputLayer(shape=(None, max_length, 1), name="InputLayer")
>> l_reshape = ll.ReshapeLayer(l_in, ([0], 1, [1]), name="ReshapeLayer")
>> l_conv0 = ll.Conv1DLayer(l_reshape, num_filters=15, filter_size=30,
>> stride=10,
>>
>>  nonlinearity=lasagne.nonlinearities.rectify, name="Conv1DLayer_0")
>> l_conv1 = ll.Conv1DLayer(l_conv0, num_filters=15, filter_size=4, stride=4,
>>
>>  nonlinearity=lasagne.nonlinearities.rectify, name="Conv1DLayer_1")
>> l_conv2 = ll.Conv1DLayer(l_conv1, num_filters=15, filter_size=1, stride=1,
>>
>>  nonlinearity=lasagne.nonlinearities.rectify, name="Conv1DLayer_2")
>> l_out = ll.DenseLayer(ll.dropout(l_conv2, p=0.3), num_units=1,
>>
>>  nonlinearity=lasagne.nonlinearities.linear, name="Denselayer")
>>
>>
>> predicted_values = lasagne.layers.get_output(l_out)
>> target_values = T.ivector('target_output')
>>
>> predict_log = T.sgn(predicted_values) *  T.log(1+T.abs_(predicted_values))
>> target_log = T.sgn(target_values) *  T.log(1+T.abs_(target_values))
>>
>> cost = T.mean(lasagne.objectives.squared_error(predict_log,target_log))
>> all_params = lasagne.layers.get_all_params(l_out)
>>
>> updates = lasagne.updates.adagrad(cost, all_params, learning_rate)
>> train = theano.function([l_in.input_var, target_values], [cost,
>> predicted_values, target_values], updates =updates,
>> allow_input_downcast=True)
>>
>>
>>
>> So I setup a simple convolutional net, then i try to measure a specific
>> cost on it, using T.sgn and T.log.
>> On the old backend, this works fine.
>> On the new backend, it worked fine for a day (i ran it maybe 15 times),
>> then at some point it outputted:
>>
>>
>> Using cuDNN version 5105 on context None
>> Mapped name None to device cuda0: Tesla K40c (:01:00.0)
>> Traceback (most recent call last):
>>   File "quicktest.py", line 34, in 
>> train = theano.function([l_in.input_var, target_values], [cost,
>> predicted_values, target_values], updates =updates,
>> allow_input_downcast=True)
>>   File
>> "/home/rcambier/miniconda2/envs/cardio_env/lib/python2.7/site-packages/theano/compile/function.py",
>> line 317, in function
>> output_keys=output_keys)
>>   File
>> "/home/rcambier/miniconda2/envs/cardio_env/lib/python2.7/site-packages/theano/compile/pfunc.py",
>> line 486, in pfunc
>> output_keys=output_keys)
>>   File
>> "/home/rcambier/miniconda2/envs/cardio_env/lib/python2.7/site-packages/theano/compile/function_module.py",
>> line 1838, in orig_function
>> fn = m.create(defaults)
>>   File
>> "/home/rcambier/miniconda2/envs/cardio_env/lib/python2.7/site-packages/theano/compile/function_module.py",
>> line 1712, in create
>> input_storage=input_storage_lists, storage_map=storage_map)
>>   File
>> "/home/rcambier/miniconda2/envs/cardio_env/lib/python2.7/site-packages/theano/gof/link.py",
>> line 699, in make_thunk
>> storage_map=storage_map)[:3]
>>   File
>> "/home/rcambier/miniconda2/envs/cardio_env/lib/python2.7/site-packages/theano/gof/vm.py",
>> line 1084, in make_all
>> impl=impl))
>>   File
>> "/home/rcambier/miniconda2/envs/cardio_env/lib/python2.7/site-packages/theano/gof/op.py",
>> line 955, in make_thunk
>> no_recycling)
>>   File
>> "/home/rcambier/miniconda2/envs/cardio_env/lib/python2.7/site-packages/theano/gof/op.py",
>> line 858, in make_c_thunk
>> output_storage=node_output_storage)
>>   File
>> "/home/rcambier/miniconda2/envs/cardio_env/lib/python2.7/site-packages/theano/gof/cc.py",
>> line 1215, in make_thunk
>> keep_lock=keep_lock)
>>   File
>> "/home/rcambier/miniconda2/envs/cardio_env/lib/python2.7/site-packages/theano/gof/cc.py",
>> line 1155, in __compile__
>> keep_lock=keep_lock)
>>   File
>> "/home/rcambier/miniconda2/envs/cardio_env/lib/python2.7/site-packages/theano/gof/cc.py",
>> line 1635, in cthunk_factory
>> *(in_storage + out_storage + orphd))
>> RuntimeError: ('The following error happened while compiling the 

Re: [theano-users] RuntimeError: error doing operation: CUDNN_STATUS_INTERNAL_ERROR if I switch to gpuarray backend

2017-08-16 Thread Frédéric Bastien
Can you update cuDNN to v7, which has been released? From memory, they also
fixed some errors of this type.

On Wed, Aug 16, 2017 at 2:14 AM  wrote:

> I was using theano (0.9.0.dev_RELEASE) with pygpu (0.6.9). If I switch to
> gpuarray backend (device=cuda), RuntimeError happens 'error doing
> operation: CUDNN_STATUS_INTERNAL_ERROR', but device=gpu is OK.
> How to deal with it ?
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] RuntimeError: error doing operation: CUDNN_STATUS_INTERNAL_ERROR if I switch to gpuarray backend

2017-08-16 Thread Frédéric Bastien
Update to the dev version of Theano. We have worked around many such cases.

On Wed, Aug 16, 2017 at 02:14  wrote:

> I was using theano (0.9.0.dev_RELEASE) with pygpu (0.6.9). If I switch to
> gpuarray backend (device=cuda), RuntimeError happens 'error doing
> operation: CUDNN_STATUS_INTERNAL_ERROR', but device=gpu is OK.
> How to deal with it ?
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Re: Low GPU utilization for larger batches

2017-08-14 Thread Frédéric Bastien
We merged the CTC wrapper into the Theano dev version. I would recommend that
you update to it.

On Mon, Aug 14, 2017 at 05:50, Ameretat Reith  wrote:

> Never mind; it turns out the warp-ctc binding I'm using just works with the
> CPU.
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] derivative of function based on eigenvalues in theano

2017-08-10 Thread Frédéric Bastien
The problem is smat(x).

It returns a list of lists of Theano variables, which isn't a Theano variable
itself. You can have Theano stack all of this correctly into a new
corresponding Theano variable with:

v,w=nlin.eigh(theano.tensor.stacklists(smat(x)))

Fred
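
For reference, a fuller runnable sketch of that fix, based on the code in the
original post below (the constant a is folded in as a plain Python value, and
the cost is reduced with .sum() because T.grad needs a scalar):

import theano
import theano.tensor as T
import theano.tensor.nlinalg as nlin

x = T.dvector('x')
a = 2.0  # constant from the original post

def smat(x):
    # a list of lists of Theano scalars; stacklists turns it into a matrix
    return [[x[0]**2, x[1],   x[2]],
            [x[1]**2, a*x[1], a*x[0]],
            [x[2]**2, x[0],   a*x[1]]]

v, w = nlin.eigh(T.stacklists(smat(x)))
g = T.grad(v.sum(), x)  # T.grad needs a scalar cost
f = theano.function([x], [v, g], allow_input_downcast=True)

ev, der = f([2., 3., 5.])
print(ev, der)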

On Tue, Aug 1, 2017 at 7:37 AM Jyotiranjan Beuria <
jyotiranjan.beu...@gmail.com> wrote:

> Hi All,
>
> I am trying to calculate the derivative of a function that
> depends on eigenvalues of a matrix. I am new to Theano.
> Here is a snippet of the code.
> import numpy as np
>
> import theano
> import theano.tensor as T
> import theano.tensor.nlinalg as nlin
>
> def myFun(X, a=2):
>     s = T.dmatrix('s')
>     x = T.dvector('x')
>     a = T.dscalar('a')
>     def smat(x):
>         return [[x[0]**2, x[1],   x[2]],
>                 [x[1]**2, a*x[1], a*X[0]],
>                 [x[2]**2, x[0],   a*x[1]]]
>     v, w = nlin.eigh(smat(x))
>     TG = T.grad(v, x)
>     Eigen, Grad = theano.function([x], [v, TG], allow_input_downcast=True)
>
>     ev = Eigen(X)
>     der = Grad(X)
>     print ev, der
>
> myFun([2,3,5])
>
> Can anyone help me to solve this problem?
>
> Regards,
> Jyotiranjan
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] why does this gradient is invalid?

2017-08-09 Thread Frédéric Bastien
This is a bug in one Theano optimization: local_dimshuffle_subtensor

Thanks for the report. I made an issue so that we don't forget it:

https://github.com/Theano/Theano/issues/6288

Frédéric

On Wed, Aug 9, 2017 at 4:50 AM 佐藤優  wrote:

> I wonder why the code below is invalid.
>
> from numpy import *
> import theano.tensor as T
> x = T.dmatrix("x")
> mx = x[...,None,:]
> a = T.ones((1,3))
> T.grad(mx[...,0].dot(a).sum(), a).eval({x:ones((5,10)).astype(float32)})
>
> The error below is raised.
>
> ---------------------------------------------------------------------------
> ValueError                                Traceback (most recent call last)
> /home/yu/anaconda3/lib/python3.5/site-packages/theano/compile/function_module.py in __call__(self, *args, **kwargs)
>     883             outputs =\
> --> 884                 self.fn() if output_subset is None else\
>     885                 self.fn(output_subset=output_subset)
> ValueError: Shape mismatch: A.shape[1] != x.shape[0]
>
> During handling of the above exception, another exception occurred:
>
> ValueError                                Traceback (most recent call last)
> <ipython-input> in <module>()
>       3 mx = x[...,None,:]
>       4 a = T.ones((1,3))
> ----> 5 T.grad(mx[...,0].dot(a).sum(), a).eval({x:ones((5,10)).astype(float32)})
> /home/yu/anaconda3/lib/python3.5/site-packages/theano/gof/graph.py in eval(self, inputs_to_values)
>     517         args = [inputs_to_values[param] for param in inputs]
> --> 519         rval = self._fn_cache[inputs](*args)
>     521         return rval
> /home/yu/anaconda3/lib/python3.5/site-packages/theano/compile/function_module.py in __call__(self, *args, **kwargs)
>     897                     thunk=thunk,
> --> 898                     storage_map=getattr(self.fn, 'storage_map', None))
> /home/yu/anaconda3/lib/python3.5/site-packages/theano/gof/link.py in raise_with_op(node, thunk, exc_info, storage_map)
> --> 325     reraise(exc_type, exc_value, exc_trace)
> /home/yu/anaconda3/lib/python3.5/site-packages/six.py in reraise(tp, value, tb)
> --> 685             raise value.with_traceback(tb)
> /home/yu/anaconda3/lib/python3.5/site-packages/theano/compile/function_module.py in __call__(self, *args, **kwargs)
> --> 884                 self.fn() if output_subset is None else\
> ValueError: Shape mismatch: A.shape[1] != x.shape[0]
> Apply node that caused the error: 
> CGemv{inplace}(AllocEmpty{dtype='float64'}.0, TensorConstant{1.0}, 
> InplaceDimShuffle{1,0}.0, Rebroadcast{0}.0, TensorConstant{0.0})
> Toposort index: 7
> Inputs types: [TensorType(float64, vector), TensorType(float64, scalar), 
> TensorType(float64, matrix), TensorType(float64, vector), TensorType(float64, 
> scalar)]
> Inputs shapes: [(3,), (), (3, 5), (1,), ()]
> Inputs strides: [(8,), (), (8, 24), (80,), ()]
> Inputs values: [array([  0.e+000,   4.94065646e-324,   
> 9.88131292e-324]), array(1.0), 'not shown', array([ 1.]), array(0.0)]
> Inputs type_num: [12, 12, 12, 12, 12]
> Outputs clients: [[InplaceDimShuffle{x,0}(CGemv{inplace}.0)]]
>
> Debugprint of the apply node:
> CGemv{inplace} [id A]  ''
>  |AllocEmpty{dtype='float64'} [id B]  ''
>  | |TensorConstant{3} [id C] 
>  |TensorConstant{1.0} [id D] 
>  |InplaceDimShuffle{1,0} [id E]  ''
>  | |Alloc [id F]  ''
>  |   |TensorConstant{(1, 1) of 1.0} [id G] 
>  |   |Shape_i{0} [id H]  ''
>  |   | |x [id I] 
>  |   |TensorConstant{3} [id C] 
>  |Rebroadcast{0} [id J]  ''
>  | |Subtensor{int8, ::, int64} [id K]  ''
>  |   |InplaceDimShuffle{0,x,1} [id L] <TensorType(float64, (False, True, False))> ''
>  |   | |x [id I] 
>  |   |Constant{0} [id M] 
>  |   |Constant{0} [id N] 
>  |TensorConstant{0.0} [id O] 
>
> Storage map footprint:
>  - x, Input, Shape: (5, 10), ElemSize: 8 Byte(s), TotalSize: 400 Byte(s)
>  - InplaceDimShuffle{0,x,1}.0, Shape: (5, 1, 10), ElemSize: 8 Byte(s), 
> TotalSize: 400 Byte(s)
>  - Alloc.0, Shape: (5, 3), ElemSize: 8 Byte(s), TotalSize: 120 Byte(s)
>  - InplaceDimShuffle{1,0}.0, Shape: (3, 5), ElemSize: 8 Byte(s), 

Re: [theano-users] Split Op (OpFromGraph) to save intermediate results for grad

2017-08-09 Thread Frédéric Bastien
Sorry, but I'm not able to answer this grad question. Hopefully someone
else who understands that part better can answer.

Fred

On Mon, Jul 31, 2017 at 9:43 AM  wrote:

> I am trying to build an Op with a custom/optimized gradient formula. To
> override the automatic differenciation, I'm trying to use OpFromGraph.
> The gradient formula can reuse intermediate results from the feed forward
> pass, so I have tried to split the Op in two: Op1 computes the intermediate
> and final result and gives all of it to Op2, Op2 forwards the final result
> and takes care of the gradient computation given all the necessary values.
>
> Note that the gradient of the loss wrt the intermediate results is never
> needed.
>
> Below is a what I believe to be a minimal working example of my problem,
> it exhibits a strange conversion error related to the gradient computation
> with the intermediate values. Please take note of the presence of an
> integral variable.
>
> import numpy as np
> import theano.tensor as T
> import theano
>
>
> def make_ops():
> x = T.vector()
> m = T.bvector()
>
> r = m.sum().astype('floatX')  # intermediate value
> z = x * m / r  # final result
>
>
> def grad_op1(inputs, output_gradients):
> return [
> output_gradients[0],  # gradient computation delegated to op2
> T.DisconnectedType()()  # variable has integral type
> # T.zeros_like(inputs[1])
> ]
>
>
> op1 = theano.OpFromGraph(
> inputs=[x, m],
> outputs=[z, m, r],
> grad_overrides=grad_op1,
> inline=True,
> name="op1")
>
>
> z = T.vector()
> r_forwarded = T.scalar()
>
> def grad_op2(inputs, output_gradients):
> _, m_, r_ = inputs
> dm_ = theano.gradient.DisconnectedType()(name="dm_")
> # I think the error could be around here  <<--
> # dr_ = theano.gradient.DisconnectedType()(name="dr_")
> dr_ = T.zeros_like(r_)
> return [m_ / r_, dm_, dr_]
>
> op2 = theano.OpFromGraph(
> inputs=[z, m, r_forwarded],
> outputs=[z],  # Op 2 forwards the precomputed output
> grad_overrides=grad_op2,
> inline=True,
> name="op2")
>
> return op1, op2
>
>
> def main():
> op1, op2 = make_ops()
> x = T.vector(name="x")
> m = T.bvector(name="m")
> z_intermediate, m_forwarded, r = op1(x, m)
> z = op2(z_intermediate, m, r)
>
> g = theano.grad(T.sum(z), wrt=x)
> print(g.eval({x: np.array([1., .3, .0, .2], dtype=np.float32),
>   m: np.array([1, 0, 1, 1], dtype=np.int8)}))
>
>
> if __name__ == "__main__":
> main()
>
> (Note: I had tried to hijack my previous question thread with this problem
> but it went unnoticed, sorry for double posting)
>
> Thank you
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Why is this GpuFromHost call generated?

2017-08-09 Thread Frédéric Bastien
Hi,

do you use float? I meant float32. The old back-end only supports
float32, so if you use float64 or int32, nothing will compute on the GPU.

The new back-end supports many dtypes, including float64 and int*, so it
should work better.

Note that if you do an operation between float32 and int32, the result is
float64. These are the normal C/NumPy casting rules; float32 combined with
int16 returns float32. So if you end up with float64, that is frequently
the cause.
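
A small sketch illustrating these promotion rules (the variable names are
illustrative):

import theano.tensor as T

f32 = T.fvector('f32')   # float32
i32 = T.ivector('i32')   # int32
i16 = T.wvector('i16')   # int16

print((f32 + i32).dtype)  # float64: float32 combined with int32 upcasts
print((f32 + i16).dtype)  # float32: int16 is narrow enough to stay float32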
Fred

On Wed, Aug 9, 2017 at 2:48 PM Haining Yu <hainin...@gmail.com> wrote:

> Thank you Fred.
>
> Yes I am using device=gpu0. I will switch to the new backend and test
> again.
>
> On float64, do you mean int64? If yes, I am puzzled by that too. In my code
> I never explicit cast to int64. Instead I use tensor.ivector() to index
> matrices and cast them explicitly into int32. For example:
>
> x = T.ivector()
>
> z = T.cast(y, dtype='int32')
>
> Do you think these things cause the problem?
>
> Thank you,
> Haining
>
> Haining Yu on Gmail
>
> On Wed, Aug 9, 2017 at 2:36 PM, Frédéric Bastien <
> frederic.bast...@gmail.com> wrote:
>
>> My guess is that you use the old GPU backend. Can you confirm that you use
>> the Theano flag device=gpu, and also that you have float64 in the graph? The
>> old backend doesn't support float64. I suggest that you install the just
>> released 0.10 beta and that you use the new backend with device=cuda.
>>
>> Also, you can use the flag warn_float64=pdb to find where the float64s come
>> from and make sure they are float32. This will be faster.
>>
>> Fred
>>
>> Le lun. 31 juil. 2017 14:42, Haining Yu <hainin...@gmail.com> a écrit :
>>
>>> Hi,
>>>
>>> I am running a RNN/GRU model for a fairly large dataset with the goal
>>> of sequence prediction. When I profile my code, I found one GpuFromHost
>>> takes ~30% of computation time. See part of profiling results below:
>>>
>>> <% time> <sum %> <apply time> <time per call> <#call> <id> <Apply name>
>>>   30.2%   73.0%   462.776s    3.71e-01s     1248    221
>>>   GpuFromHost(Subtensor{:int64:}.0)
>>> input 0: dtype=float32, shape=(512, 1024, 2048), strides=(-4096, 4,
>>> 2097152)
>>> output 0: dtype=float32, shape=(512, 1024, 2048), strides=(2097152,
>>> 2048, 1)
>>>
>>> theano.printing.debugprint shows that the call is generated in gradient
>>> calculation; see snippet below. There is also a HostFromGpu a couple of
>>> layers below.
>>>
>>>  | | | | |GpuFromHost [id FN] ''   221
>>>  | | | |   |Subtensor{:int64:} [id FO] ''   220
>>>  | | | | |Subtensor{::int64} [id FP] ''   219
>>>  | | | | | |InplaceDimShuffle{1,2,0} [id FQ] ''   218
>>>  | | | | | | |Reshape{3} [id FR] ''   217
>>>  | | | | | |   |CrossentropyCategorical1HotGrad [id FS] ''   216
>>>  | | | | | |   | |Elemwise{Second}[(0, 0)] [id FT] ''   215
>>>  | | | | | |   | | |CrossentropyCategorical1Hot [id FU] ''   209
>>>  | | | | | |   | | | |HostFromGpu [id FV] ''   206
>>>
>>> I have heard about the cost of using GpuFromHost (and its counterpart
>>> HostFromGpu) and had moved almost all data to GPU (via shared
>>> variables). So I don't understand why the call is needed. In particular I
>>> don't understand:
>>>
>>> 1. If all my data are on GPU and theano is optimized for GPU, why is the
>>> GpuFromHost even generated?
>>> 2. Is the call generated because the memory is too large? The call tries
>>> to move 512 x 1024 x 2048 x 4 = 4.2GB memory. But my Tesla K80 should have
>>> 12GB memory thus the need to move seems remote on the surface. Overall
>>> memory consumption seems OK under profiling.
>>> 3. Does the call have anything to do with CrossentropyCategorical1Hot? I
>>> assume CrossentropyCategorical1Hot  has been optimized for GPU. But the
>>> code shows that a HostFromGPU is called before CrossentropyCategorical1Hot
>>> is applied. I am not sure if CrossentropyCategorical1Hot has any memory
>>> requirement (e.g., c-contiguous).
>>> 4. Should I try any GPU assertion to debug the root cause of the problem?
>>>
>>> Any hint is appreciated.
>>>
>>> Thank you,
>>> Haining
>>>
>>> --
>>>
>>> ---
>>> You received this message because you are subscribed to the Google
>>> Groups "theano-users" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to theano-users+unsubscr...@googlegroups.com.
>>

Re: [theano-users] Error while compiling two theano functions

2017-08-09 Thread Frédéric Bastien
note, I made an issue about this:

https://github.com/Theano/Theano/issues/6287

Fred

On Mon, Jul 3, 2017 at 7:51 AM Frédéric Bastien <frederic.bast...@gmail.com>
wrote:

> This is still experimental and we don't have time to work on it now.
>
> For multiple GPUs, you should do data parallelism. There are 3 frameworks
> that can help you: theano-mpi, platoon and synkronous.
>
> Fred
>
> Le sam. 1 juil. 2017 16:33, Ramana Subramanyam <vxrram...@gmail.com> a
> écrit :
>
>> Hi,
>> This error that I reported was solved using the
>> flag optimizer_excluding=fusion. However, when I try to use multiple GPUs,
>> I get this error
>>
>> ERROR (theano.gof.opt): Optimization failure due to:
>> LocalOptGroup(local_abstractconv_cudnn,local_abstractconv_gw_cudnn,local_abstractconv_gi_cudnn,local_abstractconv_gemm,local_abstractconv3d_gemm,local_abstractconv_gradweights_gemm,local_abstractconv3d_gradweights_gemm,local_abstractconv_gradinputs_gemm,local_abstractconv3d_gradinputs_gemm)
>> ERROR (theano.gof.opt): node: AbstractConv2d{convdim=2, border_mode=(4,
>> 3), subsample=(1, 1), filter_flip=False, imshp=(None, None, None, None),
>> kshp=(None, None, None, None), filter_dilation=(1, 1)}(X,
>> CIFAR10.pixelCNN.pxCNN.vstack1.filter)
>> ERROR (theano.gof.opt): TRACEBACK:
>> ERROR (theano.gof.opt): Traceback (most recent call last):
>>   File
>> "/home/akshat/anaconda2/envs/ramana-test/lib/python2.7/site-packages/theano/gof/opt.py",
>> line 1982, in process_node
>> replacements = lopt.transform(node)
>>   File
>> "/home/akshat/anaconda2/envs/ramana-test/lib/python2.7/site-packages/theano/gof/opt.py",
>> line 1335, in transform
>> new_repl = opt.transform(node)
>>   File
>> "/home/akshat/anaconda2/envs/ramana-test/lib/python2.7/site-packages/theano/gpuarray/dnn.py",
>> line 2816, in local_abstractconv_cudnn
>> ctx = infer_context_name(*node.inputs)
>>   File
>> "/home/akshat/anaconda2/envs/ramana-test/lib/python2.7/site-packages/theano/gpuarray/basic_ops.py",
>> line 122, in infer_context_name
>> raise ValueError("Could not infer context from inputs")
>> ValueError: Could not infer context from inputs
>>
>> I used these THEANO_FLAGS:
>> contexts=dev0->cuda1;dev1->cuda3,floatX=float32,optimizer_excluding=fusion.
>> The same flags work well with the import and the sample code on this
>> page
>> <http://deeplearning.net/software/theano/tutorial/using_multi_gpu.html>. This
>> is my first time using multiple GPUs; I apologise if I have made some trivial
>> mistake.
>>
>> Ramana
>>
>>
>> On Tuesday, June 27, 2017 at 11:50:12 PM UTC+5:30, Ramana Subramanyam
>> wrote:
>>>
>>> Hi Fred,
>>> Since there wasn't any \n in the output, it was all in the same line.
>>> You have to scroll towards your left/right on this link
>>> <http://dpaste.com/0SSEM4E>. I am pasting a smaller copy of that below,
>>>
>>>
>>> (Composite{Switch((LT(i0, i1), i1, i0)}(Composite{Switch(GE(i0, i1), i1,
>>> i0)}(i0, i1), i2), i3), Composite{Switch(LT(i0, i1), i1,
>>> i0)}(Composite{Switch(GE(i0, i1), i1, i0)}(i0, i1), i2), i3) + i4)}(i8,
>>> Composite{((i0 + i1) - i2)}(i2, Composite{Switch(LT(Composite{Switch(GE(i0,
>>> i1), i1, i0)}(Composite{Switch(LT(i0, i1), i2, i0)}(Composite{((i0 + i1) -
>>> i2)}(i0, i1, i2), i3, i4), i5), i3), i3, Composite{Switch(GE(i0, i1), i1,
>>> i0)}(Composite{Switch(LT(i0, i1), i2, i0)}(Composite{((i0 + i1) - i2)}(i0,
>>> i1, i2), i3, i4), i5))}(i1, Composite{Switch(LT(Composite{Switch(GE(i0,
>>> i1), i1, i0)}(Composite{Switch(LT(i0, i1), i2, i0)}(Composite{((i0 + i1) -
>>> i2)}(i0, i1, i2), i3, i4), i5), i3), i3, Composite{Switch(GE(i0, i1), i1,
>>> i0)}(Composite{Switch(LT(i0, i1), i2, i0)}(Composite{((i0 + i1) - i2)}(i0,
>>> i1, i2), i3, i4), i5))}(i2, i3, i4, i5, i6, Composite{((i0 + i1) - i2)}(i7,
>>> i3, i4)), Composite{(Switch(LT(Composite{Switch(LT(i0, i1), i1,
>>> i0)}(Composite{Switch(GE(i0, i1), i1, i0)}(i0, i1), i2), i3),
>>> Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(GE(i0, i1), i1,
>>> i0)}(i0, i1), i2), i3) + i4)}(i8, Composite{((i0 + i1) - i2)}(i7, i3, i4),
>>> i5, Composite{Switch(LT(Composite{Switch(GE(i0, i1), i1,
>>> i0)}(Composite{Switch(LT(i0, i1), i2, i0)}(Composite{((i0 + i1) - i2)}(i0,
>>> i1, i2), i3, i4), i5), i3), i3, Composite{Switch(GE(i0, i1), i1,
>>> i0)}(Composite{Switch(LT(i0, i1), i2, i0)}(Composite{((i0 + i1) - i2)}(i0,
>>> i1, i2), i3, i4), i5))}(i2,

Re: [theano-users] Grouped Convolution Error

2017-08-09 Thread Frédéric Bastien
There has been a fix in Theano. Can you update and try again?
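
For what it's worth, a sketch of the shapes grouped convolution expects
(assuming the dev-version semantics: with num_groups=3 each group sees
input_channels / num_groups = 1 channel, so the kernel stack size must be 1
rather than 3):

import theano.tensor as T
from theano.tensor.nnet import conv2d

x = T.ftensor4('x')   # e.g. (128, 3, 32, 32)
w = T.ftensor4('w')   # (9, 1, 8, 8): 9 maps in 3 groups, 1 channel per group
y = conv2d(x, w, subsample=(8, 8), num_groups=3)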

Le lun. 24 juil. 2017 19:56, Michael Klachko  a
écrit :

> I'm trying the new grouped convolutions feature in the latest Theano
> version, so I ran a simple convnet with CIFAR-10: 32x32 RGB input images
> (batch size = 128), and the first convolutional layer has 9 feature maps. I
> want to have 3 feature maps per color, so if I understand it correctly, I
> should use num_groups=3 argument in conv2d op.
>
> Again: I want the first conv. layer to process input images with 3 filters
> per color, so that each color channel is connected to 3 feature maps.
> Filters are 8x8 with stride 8 (non-overlapping) so the output feature maps
> should be 4x4 pixels.
>
> After adding the num_groups arg I got the following error:
>
> ValueError: images and kernel must have the same stack size
> Apply node that caused the error: GpuDnnConv{algo='time_on_shape_change',
> inplace=True, num_groups=3}(GpuContiguous.0, GpuContiguous.0,
> GpuAllocEmpty{dtype='float32', context_name=None}.0, GpuDnnConvDesc{
> border_mode='valid', subsample=(8, 8), dilation=(1, 1), conv_mode='conv',
> precision='float32'}.0, Constant{1.0}, Constant{0.0})
> Toposort index: 62
> Inputs types: [GpuArrayType(float32, 4D), GpuArrayType(float32
> , 4D), GpuArrayType(float32, 4D),  at 0x7fa3900bc910>, Scalar(float32), Scalar(float32)]
> Inputs shapes: [(128, 3, 32, 32), (9, 3, 8, 8), (128, 9, 4, 4), 'No
> shapes', (), ()]
> Inputs strides: [(12288, 4096, 128, 4), (768, 256, 32, 4), (576, 64, 16, 4
> ), 'No strides', (), ()]
> Inputs values: ['not shown', 'not shown', 'not shown',  NULL at 0x7fa372027f30>, 1.0, 0.0]
> Outputs clients: [[GpuElemwise{Add}[(0, 0)](GpuDnnConv{algo=
> 'time_on_shape_change', inplace=True, num_groups=3}.0,
> InplaceGpuDimShuffle{x,0,x,x}.0)]]
>
>
>
> Thanks,
> Michael
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Error with theano. This was working fine earlier

2017-08-09 Thread Frédéric Bastien
You changed something in your installation. Try deleting your Theano
cache. If that doesn't fix it, try removing all of your Pythons and
reinstalling. You probably have mixed Pythons in your environment.

Le mer. 19 juil. 2017 10:37, SUNITHA  a écrit :

> Dear All,
>
> This is the error message I get:
>
> C:\Users\Sunitha\Anaconda2\libs/python27.lib: error adding symbols: File in wrong format
> collect2.exe: error: ld returned 1 exit status
>
> 1 #include 
> 2 #include "theano_mod_helper.h"
> 3 #include "structmember.h"
> 4 #include 
> 5
> 6 #if PY_VERSION_HEX >= 0x0300
> 7 #include "numpy/npy_3kcompat.h"
> 8 #define PyCObject_AsVoidPtr  NpyCapsule_AsVoidPtr
> 9 #define PyCObject_GetDesc  NpyCapsule_GetDesc
> 00010 #define PyCObject_Check NpyCapsule_Check
> 00011 #endif
> 00012
>
> Please help me fix this issue.
>
> Regards,
> Sunitha
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Theano ConftestImportFailure error after installing

2017-08-09 Thread Frédéric Bastien
We don't use py.test, but nosetests.

Fred

Le mar. 8 août 2017 12:12, Sara Saeed  a écrit :

>
> I am new to Ubuntu and I tried to install Theano using Anaconda.
>
> After tracking some other errors and solving them. I am stuck with this
> error, which I don't understand when running py.test.
>
> Can anyone help me to fix this.
>
> Thank you
>
>
>
>
>
>
>
>
> === 1 error in 6.61 seconds ===
>
> (theano_env) sara@sara-ubunto:~$ ^C
> (theano_env) sara@sara-ubunto:~$ ^C
> (theano_env) sara@sara-ubunto:~$ ^C
> (theano_env) sara@sara-ubunto:~$ ^C
> (theano_env) sara@sara-ubunto:~$ ^C
> (theano_env) sara@sara-ubunto:~$ theano-cache clear
> (theano_env) sara@sara-ubunto:~$ py.test
> === test session starts ===
> platform linux2 -- Python 2.7.13, pytest-3.1, py-1.4, pluggy-0.4.0
> rootdir: /home/sara, inifile:
> collected 0 items / 1 errors
>
> === ERRORS ===
> ___ ERROR collecting ___
> anaconda3/envs/theano_env/lib/python2.7/site-packages/py/_path/common.py:372: in visit
>     for x in Visitor(fil, rec, ignore, bf, sort).gen(self):
> anaconda3/envs/theano_env/lib/python2.7/site-packages/py/_path/common.py:421: in gen
>     for p in self.gen(subdir):
> anaconda3/envs/theano_env/lib/python2.7/site-packages/py/_path/common.py:421: in gen
>     for p in self.gen(subdir):
> anaconda3/envs/theano_env/lib/python2.7/site-packages/py/_path/common.py:421: in gen
>     for p in self.gen(subdir):
> anaconda3/envs/theano_env/lib/python2.7/site-packages/py/_path/common.py:421: in gen
>     for p in self.gen(subdir):
> anaconda3/envs/theano_env/lib/python2.7/site-packages/py/_path/common.py:411: in gen
>     if p.check(dir=1) and (rec is None or rec(p))])
> anaconda3/envs/theano_env/lib/python2.7/site-packages/_pytest/main.py:686: in recurse
>     ihook = self.gethookproxy(path)
> anaconda3/envs/theano_env/lib/python2.7/site-packages/_pytest/main.py:590: in gethookproxy
>     my_conftestmodules = pm._getconftestmodules(fspath)
> anaconda3/envs/theano_env/lib/python2.7/site-packages/_pytest/config.py:350: in _getconftestmodules
>     mod = self._importconftest(conftestpath)
> anaconda3/envs/theano_env/lib/python2.7/site-packages/_pytest/config.py:375: in _importconftest
>     raise ConftestImportFailure(conftestpath, sys.exc_info())
> ConftestImportFailure: ImportMismatchError('pandas.conftest',
> '/home/sara/anaconda3/envs/theano_env/lib/python2.7/site-packages/pandas/conftest.py',
> local('/home/sara/anaconda3/lib/python3.6/site-packages/pandas/conftest.py'))
> !! Interrupted: 1 errors during collection !!
> === 1 error in 11.59 seconds ===
>
> (theano_env) sara@sara-ubunto:~$
>
>
>
>
>
> 
>
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] plotting the inner scan operations in d3viz?

2017-08-08 Thread Frédéric Bastien
Hi,

there was a plan to make d3viz support Scan, but it wasn't finished and
no one is working on it now.

You can probably hack it with something like this:

# get the compiled Scan nodes:
f = theano.function(...)
scans = [n for n in f.maker.fgraph.apply_nodes
         if isinstance(n.op, theano.scan_module.scan_op.Scan)]
# I'll handle just the first scan for the demo; scan.op.fn is the compiled
# inner function, and d3viz also needs an output file
theano.d3viz.d3viz(scans[0].op.fn, 'scan0.html')

This way, you will have one graph for the function and one for each scan in
the graph.

Keep us updated if you try it.

Frédéric

On Tue, Aug 1, 2017 at 8:35 PM Juan Camilo Gamboa Higuera <
juancami...@gmail.com> wrote:

> Hi all,
>
> From a compiled function, how can i plot the inner graph of a scan with
> d3viz? Is there a way of getting all the inner scan functions from a
> compiled function? Can I pass those to d3viz?
>
> Thanks!
>
> -- Juan Camilo
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Batched matrix operations in Theano

2017-08-08 Thread Frédéric Bastien
Do you want to make a PR out of this?

Le dim. 23 juil. 2017 08:20, Maxim Kochurov  a
écrit :

> class BatchedDiag(tt.Op):
>     """
>     Fast BatchedDiag allocation
>     """
>     __props__ = ()
>
>     def make_node(self, diag):
>         diag = tt.as_tensor_variable(diag)
>         if diag.type.ndim != 2:
>             raise TypeError('data argument must be a matrix', diag.type)
>         return tt.Apply(self, [diag], [tt.tensor3(dtype=diag.dtype)])
>
>     def perform(self, node, ins, outs, params=None):
>         (C,) = ins
>         (z,) = outs
>         bc = C.shape[0]
>         dim = C.shape[-1]
>         Cd = np.zeros((bc, dim, dim), C.dtype)
>         bidx = np.repeat(np.arange(bc), dim)
>         didx = np.tile(np.arange(dim), bc)
>         Cd[bidx, didx, didx] = C.flatten()
>         z[0] = Cd
>
>     def grad(self, inputs, gout):
>         (gz,) = gout
>         idx = tt.arange(gz.shape[-1])
>         return [gz[..., idx, idx]]
>
>     def infer_shape(self, nodes, shapes):
>         return [(shapes[0][0], ) + (shapes[0][1],) * 2]
>
> Here is code for Custom Op that might work faster when taking gradients
>
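> A quick usage sketch (illustrative; it assumes the BatchedDiag class above
> is in scope):
>
> import numpy as np
> import theano
> import theano.tensor as tt
>
> diags = tt.matrix('diags')   # shape (batch, dim)
> C = BatchedDiag()(diags)     # shape (batch, dim, dim)
> f = theano.function([diags], C)
> print(f(np.arange(6).reshape(2, 3).astype(theano.config.floatX)))
>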
>
> On Saturday, May 7, 2016 at 16:00:54 UTC+3, Tambet Matiisen wrote:
>
>> OK, solved. I used Keras wrapper K.zeros(), but this created Numpy matrix
>> of zeros, which failed with Theano expression as dimension. After switching
>> to full Theano implementation the error went away. The final code looks
>> like this:
>>
>> # initialize with zeros
>> batch_size = x.shape[0]
>> a = T.zeros((batch_size, num_actuators, num_actuators))
>> # set diagonal elements
>> batch_idx = T.extra_ops.repeat(T.arange(batch_size), num_actuators)
>> diag_idx = T.tile(T.arange(num_actuators), batch_size)
>> b = T.set_subtensor(a[batch_idx, diag_idx, diag_idx],
>> T.flatten(T.exp(x[:, :num_actuators])))
>> # set lower triangle
>> cols = np.concatenate([np.array(range(i), dtype=np.uint) for i in
>> xrange(num_actuators)])
>> rows = np.concatenate([np.array([i]*i, dtype=np.uint) for i in
>> xrange(num_actuators)])
>> cols_idx = T.tile(T.as_tensor_variable(cols), batch_size)
>> rows_idx = T.tile(T.as_tensor_variable(rows), batch_size)
>> batch_idx = T.extra_ops.repeat(T.arange(batch_size), len(cols))
>> c = T.set_subtensor(b[batch_idx, rows_idx, cols_idx], T.flatten(x[:,
>> num_actuators:]))
>>
>> Thanks injecting me belief that it is possible!
>>
>>   Tambet
>>
>>> On Friday, May 6, 2016 at 17:57:02 UTC+3, nouiz wrote:
>>>
>>> what error do you get?
>>>
>>>
>>> On Fri, May 6, 2016 at 10:54 AM, Tambet Matiisen 
>>> wrote:
>>>
 I could not figure out how make broadcasting work here, so I
 implemented option 2.

 num_actuators=4
 x = K.variable([range(num_actuators*(num_actuators+1)/2)]*5)

 batch_size = K.shape(x)[0]
 a = K.zeros((batch_size.eval(), num_actuators, num_actuators))

 # populate diagonal
 batch_idx = T.extra_ops.repeat(T.arange(batch_size), num_actuators)
 diag_idx = T.tile(T.arange(num_actuators), batch_size)
 b = T.set_subtensor(a[batch_idx, diag_idx, diag_idx],
 T.flatten(K.exp(x[:, :num_actuators])))

 # populate lower triangle
 cols = np.concatenate([np.array(range(i), dtype=np.uint) for i in
 xrange(num_actuators)])
 rows = np.concatenate([np.array([i]*i, dtype=np.uint) for i in
 xrange(num_actuators)])
 cols_idx = T.tile(K.variable(cols, dtype=int), batch_size)
 rows_idx = T.tile(K.variable(rows, dtype=int), batch_size)
 batch_idx = T.extra_ops.repeat(T.arange(batch_size), len(cols))
 c = T.set_subtensor(b[batch_idx, rows_idx, cols_idx], T.flatten(x[:,
 num_actuators:]))

 It works nicely, but only because I eval() batch_size when creating all
 zeros array. In real application I don't know the batch size beforehand and
 using it without eval() gives an error. So the question is - can you create
 a matrix in Theano dynamically, depending on some value in computational
 graph?

   Tambet

On Friday, May 6, 2016 at 16:14:59 UTC+3, nouiz wrote:
>
> broadcasting could be in theory more efficient. So this would request
> that you try option 1.
>
> Otherwise, both should work.
>
> Fred
>
> On Fri, May 6, 2016 at 9:12 AM, Tambet Matiisen 
> wrote:
>
>> Actually I know the dimensions of the matrix beforehand, so I can do
>> those calculations in Python+Numpy. Following seems to do the trick:
>>
>> num_actuators = 3
>> x = [1,2,3,4,5,6]
>> a = K.zeros((num_actuators, num_actuators))
>>
>> # set diagonal elements
>> b = T.set_subtensor(a[range(num_actuators), range(num_actuators)],
>> K.exp(x[:num_actuators]))
>>
>> # set lower triangle
>> cols = np.concatenate([np.array(range(i), dtype=np.uint) for i in
>> xrange(num_actuators)])
>> 

Re: [theano-users] Memory issues

2017-08-08 Thread Frédéric Bastien
This error was fixed in the master. Update again.

Le mar. 1 août 2017 11:43, Eric Ma <ericmajingl...@gmail.com> a écrit :

> I did an install of the latest commit, but get an error:
>
> FileNotFoundError: [Errno 2] No such file or directory:
> '/home/ericmjl/anaconda/envs/bayesian/lib/python3.6/site-packages/theano/tensor/c_code/dimshuffle.c'
>
> I checked - `c_code/` is not installed when running `python setup.py
> install`. I'm a bit unfamiliar with how C-code gets installed - surely an
> __init__.py isn't needed?
>
> On Tue, Aug 1, 2017 at 9:50 AM Eric Ma <ericmajingl...@gmail.com> wrote:
>
>> Thank you for your reply, Frederic!
>>
>> I have 16GB of RAM on my CPU.
>>
>> Just to confirm, on GitHub, the dev version of Theano is the current
>> master branch, is that correct? Would it be sufficient for me to do a
>> "development" install (`python setup.py develop`), or is something more
>> complicated needed?
>>
>> On Tue, Aug 1, 2017 at 9:37 AM, Frédéric Bastien <
>> frederic.bast...@gmail.com> wrote:
>>
>>> The problem is that g++ is running out of memory. How much CPU RAM does
>>> your computer have?
>>>
>>> Can you update to the dev version of Theano on GitHub? It may already have
>>> this fixed, and it is stable.
>>>
>>> Fred
>>>
>>> On Fri, Jul 28, 2017 at 1:17 PM Eric Ma <ericmajingl...@gmail.com>
>>> wrote:
>>>
>>>> Hey everybody,
>>>>
>>>> I have an issue with Theano memory allocation on my GPU when using
>>>> PyMC3. I can't seem to figure out how to debug this. I'm not sure if this
>>>> problem is reproducible on other machines yet, but at least I know it's
>>>> consistently happening on my own machine (desktop ASUS i7 + GTX 1080). Is
>>>> there a good way to debug this and figure out how to make things work?
>>>>
>>>> I am trying out Bayesian neural nets, and have an example here:
>>>> https://github.com/ericmjl/bayesian-analysis-recipes/blob/master/multiclass-classification-neural-network.ipynb
>>>> .
>>>>
>>>> My compute environment in detail:
>>>> - PyMC3 version 3.1, installed from master and tracking latest commits.
>>>> - Theano version 0.9.0
>>>> - libgpuarray 0.6.8
>>>> - pygpu 0.6.8
>>>> - NVIDIA GTX1080 with driver version 384.59
>>>> - Ubuntu 16.04.2
>>>>
>>>> The error message shows up in the notebook at the cell with the
>>>> following code:
>>>>
>>>> ```python
>>>> with model:
>>>> samp_ppc = pm.sample_ppc(trace, samples=500)
>>>>
>>>> ```
>>>>
>>>> Full error message:
>>>>
>>>> ```python
>>>>   0%|          | 0/100 [00:00<?, ?it/s]Problem occurred during
>>>> compilation with the command line below:
>>>> /usr/bin/g++ -shared -g -O3 -fno-math-errno -Wno-unused-label -Wno-
>>>> unused-variable -Wno-write-strings -march=broadwell -mmmx -mno-3dnow -msse
>>>> -msse2 -msse3 -mssse3 -mno-sse4a -mcx16 -msahf -mmovbe -maes -mno-sha 
>>>> -mpclmul
>>>> -mpopcnt -mabm -mno-lwp -mfma -mno-fma4 -mno-xop -mbmi -mbmi2 -mno-tbm
>>>> -mavx -mavx2 -msse4.2 -msse4.1 -mlzcnt -mrtm -mhle -mrdrnd -mf16c 
>>>> -mfsgsbase
>>>> -mrdseed -mprfchw -madx -mfxsr -mxsave -mxsaveopt -mno-avx512f 
>>>> -mno-avx512er
>>>> -mno-avx512cd -mno-avx512pf -mno-prefetchwt1 -mclflushopt -mxsavec -mxsaves
>>>> -mno-avx512dq -mno-avx512bw -mno-avx512vl -mno-avx512ifma -mno-avx512vbmi
>>>> -mno-clwb -mno-pcommit -mno-mwaitx --param l1-cache-size=32 --param l1-
>>>> cache-line-size=64 --param l2-cache-size=8192 -mtune=generic -
>>>> DNPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION -m64 -fPIC -I/home/ericmjl/
>>>> anaconda/envs/bayesian/lib/python3.6/site-packages/pygpu -I/home/
>>>> ericmjl/anaconda/envs/bayesian/lib/python3.6/site-packages/numpy/core/include
>>>> -I/home/ericmjl/anaconda/envs/bayesian/include -I/home/ericmjl/anaconda
>>>> /envs/bayesian/lib/python3.6/site-packages/numpy/core/include -I/home/
>>>> ericmjl/anaconda/envs/bayesian/include/python3.6m -I/home/ericmjl/
>>>> github/software/Theano/theano/gof -L/home/ericmjl/anaconda/envs/
>>>> bayesian/lib -L/home/ericmjl/anaconda/envs/bayesian/lib -fvisibility=hidden
>>>> -o /home/ericmjl/.theano/compiledir_Linux-4.10--generic-x86_64-with-
>>>> debian-stretch-sid-x86_64-3.6.1-64/tmp595s6h99/
>>>

Re: [theano-users] Simplest op with gradient example

2017-08-01 Thread Frédéric Bastien
Just don't use itypes/otypes; implement the make_node() method instead. It
isn't hard. We won't be extending as_op or itypes/otypes in the short term.

If you have questions about make_node(), you can ask them.

Fred
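
For illustration, a minimal sketch of make_node() with a per-instance number
of outputs (the Op and its "integration" are placeholders, not a real ODE
solver):

import numpy as np
import theano
import theano.tensor as tt

class SolveODE(theano.Op):
    """Hypothetical Op whose number of outputs depends on a parameter."""
    __props__ = ('n_states',)  # makes instances comparable and picklable

    def __init__(self, n_states):
        self.n_states = n_states

    def make_node(self, t):
        t = tt.as_tensor_variable(t)
        # one dvector output per state: the count is per-instance,
        # which class-level itypes/otypes cannot express
        outputs = [tt.dvector() for _ in range(self.n_states)]
        return theano.Apply(self, [t], outputs)

    def perform(self, node, inputs, output_storage):
        t, = inputs
        for i in range(self.n_states):
            # placeholder "integration": each state decays at its own rate
            output_storage[i][0] = np.exp(-(i + 1) * t)

t = tt.dvector('t')
y0, y1 = SolveODE(2)(t)
f = theano.function([t], [y0, y1])
print(f(np.linspace(0., 1., 5)))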

On Tue, Aug 1, 2017 at 12:33 PM 'Michael Osthege' via theano-users <
theano-users@googlegroups.com> wrote:

> Sorry to dig up this old thread, but I am also working with pymc3 and have
> a related problem:
>
> I am trying to create custom Ops for integrating an ODE model. I can
> already do it with as_op, but that can't be pickled, leading to problems
> with parallelization in pymc3.
>
> I followed the theano documentation to implement a custom Op, but I
> noticed a problem with the otypes. The *Ops otypes is a list of dvector,
> but the length of that list can change with the Op parameters*. But the
> itypes/otypes are not instance-attributes but class attributes. So
> theoretically I *can't have multiple custom Ops of the same type that
> have different itypes/otypes*, right?
>
> Also, do you have any idea how I could circumvent this?
>
> cheers
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] theano0.9 and cuda-8

2017-07-05 Thread Frédéric Bastien
You are mixing CUDA 7.5 and 8 in your env variables. Never do this. Delete
your Theano cache; it is not safe to reuse it.

The new backend supports CUDA 8. Your warning is about the cuDNN version,
which is not the same thing.

If you still have problems in a clean environment, give us the error you get.

Fred

Le mer. 5 juil. 2017 08:51,  a écrit :

> I found out that it uses the CPU by running the program on a machine where
> all GPUs are already in use.
> On our setup, if all GPUs are already in use, 'import theano' causes the
> program to crash.
> When I remove the * from cuda*, it does use the GPU, but then the theano
> functions
> do not compile, and it dumps long header files and error descriptions on the
> screen.
> My suspicion is that this may be because I am using the Lasagne
> library, which
> seems not to be compatible with the new backend.
> I am OK using the old backend but it does not work with cuda-8. import
> theano
> gives me the following warning
> ---
> Using gpu device 0: GeForce GTX 1080 Ti (CNMeM is enabled with initial
> size: 95.0% of memory, cuDNN 6021)
> /var/local/miniconda2/lib/python2.7/site-packages/theano/sandbox/cuda/__init__.py:631:
> UserWarning: Your cuDNN version is more recent than the one Theano
> officially supports. If you see any problems, try updating Theano or
> downgrading cuDNN to version 5.1.
> 
> And once again theano functions do not compile. Long error messages with
> some header files are dumped as part of error.
> However when I the following settings
>
>
> CUDA_ROOT =/usr/local/cuda-7.5
> LD_PATH=/usr/local/cuda-8/lib64:...
> PATH=/usr/local/cuda-8/bin: ...
>
> Things work, but after a certain number of training epochs (reducing the
> error nicely)
> all training parameters suddenly become NaN. I have a complex network
> using 4 Bi-GRUs, plus Conv1DLayer and
> MaxPool1DLayer from lasagne, and some attention layers I implemented.
> I have spent numerous hours trying to make sure gradients remain bounded,
> by using 'theano.gradient.grad_clip'
> at various stages of computation and by using
> 'lasagne.updates.norm_constraint' and still have
> not been able to pin down the cause of parameters suddenly becoming 'nan'.
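> For reference, a minimal sketch of how grad_clip is applied (the names here
> are illustrative, not my actual network):
>
> import theano.tensor as T
> from theano.gradient import grad_clip
>
> x = T.vector('x')
> x_c = grad_clip(x, -1., 1.)   # forward value unchanged, but gradients
> cost = (x_c ** 2).sum()       # flowing back through x_c are clipped
> g = T.grad(cost, x)           # to [-1, 1]
>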
>
> I just would like to be sure that this is not happening because of my
> usage of cuda-8 and cuda-7.5 at
> the same time.
> I appreciate your help in resolving this.
>
>
>
>
> On Monday, July 3, 2017 at 6:13:09 PM UTC-4, Pascal Lamblin wrote:
>
>> How did you determine it is using the CPU?
>>
>> On Monday, July 3, 2017 at 10:20:40 AM UTC-4, ngu...@interactions.com
>> wrote:
>>>
>>> Changing it to cuda* results is CPU usage and not GPU
>>>
>>> On Friday, June 30, 2017 at 4:20:22 PM UTC-4, nouiz wrote:

 You should not mix cuda version...

 Do you still use the old gpu back-end (device=gpu*) or the new back-end
 (device=cuda*)?

 Fred

 On Fri, Jun 30, 2017 at 9:57 AM  wrote:

> I trying to understand some unexplained behavior of my code.
> To be sure that the problem is with my code and not with software
> incompatibility I would like to sure  about the correctness of my setup
> I have:
> theano version 0.9
>
> CUDA_ROOT =/usr/local/cuda-7.5
> LD_PATH=/usr/local/cuda-8/lib64:...
> PATH=/usr/local/cuda-8/bin: ...
>
> Essentially I am using some parts of cuda-8 and some of cuda-7.5.
>
> With CUDA_ROOT =/usr/local/cuda-8, I cannot compile the theano
> functions.
>
> Thanks
>
>
>
>
> ***
>
> This e-mail and any of its attachments may contain Interactions
> Corporation proprietary information, which is privileged, confidential, or
> subject to copyright belonging to the Interactions Corporation. This 
> e-mail
> is intended solely for the use of the individual or entity to which it is
> addressed. If you are not the intended recipient of this e-mail, you are
> hereby notified that any dissemination, distribution, copying, or action
> taken in relation to the contents of and attachments to this e-mail is
> strictly prohibited and may be unlawful. If you have received this e-mail
> in error, please notify the sender immediately and permanently delete the
> original and any copy of this e-mail and any printout. Thank You.
>
>
> ***
>
>
> --
>
> ---
> You received this message because you are subscribed to the Google
> Groups "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send
> an email to theano-users...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

>
> 

Re: [theano-users] Re: How to delete theano model from GPU before initiating another model

2017-07-05 Thread Frédéric Bastien
Pure Theano does not fix shapes; by default, shapes can change between
calls. You just need to be consistent in the computations you do on the
shapes.

If you set the batch-size shape to None, you are not using pure Theano.

Do you use lasagne? Keras?

Can you show the code where you set the shape to None?

Fred
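
A minimal sketch of what "shapes can change by default" means in pure
Theano (illustrative):

import numpy as np
import theano
import theano.tensor as T

x = T.matrix('x')  # the batch (first) dimension is not fixed anywhere
f = theano.function([x], T.nnet.softmax(x))

print(f(np.ones((4, 3), dtype=theano.config.floatX)).shape)    # (4, 3)
print(f(np.ones((128, 3), dtype=theano.config.floatX)).shape)  # (128, 3)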

Le mer. 5 juil. 2017 08:01, Feras Almasri  a écrit :

> I found that it is possible to change the batch size at run time by
> defining the batch size as None. But the pooling layer, in the case of
> average pooling or 'same' size, doesn't have this option and has to be
> defined in a different way.
>
>
> On Tuesday, July 4, 2017 at 11:21:04 PM UTC+2, Feras Almasri wrote:
>>
>> I'm re-initiating another model in a loop because I'm testing different
>> batch sizes, so I have to initiate the model again. It seems in my code
>> that every time I re-initiate the model, the old model is still on the GPU
>> and not deleted. Is there any way to delete the old model before initiating
>> the second?
>>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Re: How create activation function from scratch in python

2017-07-05 Thread Frédéric Bastien
Give the full error message. Without it I can't help.

Fred

Le mer. 5 juil. 2017 12:33, Bruno Messias  a
écrit :

> I need to call the "custom" function with a given variable x, such that
>
> type(x)
>
>
> On Wednesday, July 5, 2017 at 12:53:22 PM UTC-3, Bruno Messias wrote:
>>
>> For didactic reasons, I am trying to implement a  "activation"  function
>>
>>
>> a, x, y = T.matrices("a", 'x', 'y')
>> b = T.scalars("b")
>>
>> def custom(val):
>>     T.log(val)
>>     z_switch = T.switch(T.gt(a, b), T.true_div(T.add(T.pow(x, qEff), 0),
>>                                                2), T.log(y))
>>     f_switch = theano.function([a, b, x, y], z_switch,
>>                                mode=theano.Mode(linker='vm'))
>>     return f_switch(val, 0, val, val)
>>
>> Then I get the following error
>>
>> Expected an array-like object, but found a Variable: maybe you are trying to 
>> call a function on a (possibly shared) variable instead of a numeric array?
>>
>> Repeating: this is only for didactic purposes. Are there any good tutorials
>> about this?
>>
>> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] How to delete theano model from GPU before initiating another model

2017-07-04 Thread Frédéric Bastien
You don't need to create a new model if you just change the batch size.
What makes you think this is needed? Some frameworks on top of Theano will
request that you set the batch size to None to indicate that it will change.

Fred

Le mar. 4 juil. 2017 17:21, Feras Almasri  a écrit :

> I'm re-initiating another model in a loop because I'm testing different
> batch sizes, so I have to initiate the model again. It seems in my code
> that every time I re-initiate the model, the old model is still on the GPU
> and not deleted. Is there any way to delete the old model before initiating
> the second?
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] theano0.9 and cuda-8

2017-06-30 Thread Frédéric Bastien
You should not mix CUDA versions...

Do you still use the old gpu back-end (device=gpu*) or the new back-end
(device=cuda*)?

Fred

On Fri, Jun 30, 2017 at 9:57 AM  wrote:

> I trying to understand some unexplained behavior of my code.
> To be sure that the problem is with my code and not with software
> incompatibility I would like to sure  about the correctness of my setup
> I have:
> theano version 0.9
>
> CUDA_ROOT =/usr/local/cuda-7.5
> LD_PATH=/usr/local/cuda-8/lib64:...
> PATH=/usr/local/cuda-8/bin: ...
>
> Essentially I am using some parts of cuda-8 and some of cuda-7.5.
>
> With CUDA_ROOT =/usr/local/cuda-8, I cannot compile the theano functions.
>
> Thanks
>
>
>
>
> ***
>
> This e-mail and any of its attachments may contain Interactions
> Corporation proprietary information, which is privileged, confidential, or
> subject to copyright belonging to the Interactions Corporation. This e-mail
> is intended solely for the use of the individual or entity to which it is
> addressed. If you are not the intended recipient of this e-mail, you are
> hereby notified that any dissemination, distribution, copying, or action
> taken in relation to the contents of and attachments to this e-mail is
> strictly prohibited and may be unlawful. If you have received this e-mail
> in error, please notify the sender immediately and permanently delete the
> original and any copy of this e-mail and any printout. Thank You.
>
>
> ***
>
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Deleting .theano each time I change server/machine/node

2017-06-30 Thread Frédéric Bastien
You can use this Theano flag to append the hostname to the default
compiledir. This way you will have one per computer, but it will still
be in your home directory:

compiledir_format="compiledir_%(short_platform)s-%(processor)s-%(python_version)s-%(python_bitwidth)s-%(hostname)s"

But you can go simpler and change the base compiledir to be local to each
computer:

base_compiledir=/tmp/%(user)s/theano_base_compiledir

Fred

On Fri, Jun 30, 2017 at 8:54 AM André L  wrote:

> *Context of the issue *
>
> I work with Theano in my university server via SSH and with virtualenv.
> I dont have admin privileges.
> I can access several servers.
>
> Normally I work on server "D". But when I tried running the same
> experiment on server "R", it failed with an illegal instruction. So I
> deleted the .theano folder and it worked.
> However, if I want to run again on server "D", I must delete the .theano
> folder again.
>
>
> *Problem:*
> How can I have theano working on two different machines without deleting
> .theano each time I change servers? Also, what if I want to run a theano
> program on server "D" and on server "R" at the same time?
>
>
> *According to Daniel Renshaw:*
>
>
> *Now I look at the documentation I see that the full directory determined
> by the compiledir_format flag includes details that should, I would have
> thought, ensured that your compiled bits were kept separate on the two
> different architectures. Not idea why it isn't working. However, if you
> set base_compiledir differently on each machine you'll ensure each gets its
> own compilation cache.*
>
>
> However, I can't have two .locals since I'm not an admin and I'm always
> logging in via SSH with the same user.
>
> What should I do?
>
>
>
>
> References :
>
> 
>
> https://stackoverflow.com/questions/29338016/import-theano-gets-illegal-instruction
>
>
> https://groups.google.com/forum/#!searchin/theano-users/.local$20different$20computer|sort:relevance/theano-users/MxPMjrt-ZB8/930HYOa6BAAJ
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Always Segmentation fault(core dumped) when use theano(NOT when import)

2017-06-30 Thread Frédéric Bastien
Install the dev version of Theano. It contains segmentation fault fixes.

If that doesn't work, tell us, but I think it should.

Le ven. 30 juin 2017 06:00, noodles  a écrit :

> Hello,
>
> I encountered a strange problem when using theano. I recently bought
> a new computer and installed theano on it, and I can even import it in
> python with no error, but everytime I create a function, it corrupted with 
> "Segmentation
> fault(core dumped)". Below is the detail:
> I have installed theano on another two old machine, and they works 
> well.
> This new machine is : CPU: intel 7700; GPU  2xGTX1080Ti, OS: ubuntu16.04.
> CUDA 8.0, cudnn 5.1 .I use miniconda2 to install theano( conda install
> theano), python 2.7, theano 0.9.0
>
>   when I import theano in python, the output is:
>
>> nice@fat01:~$ python
>> Python 2.7.13 |Continuum Analytics, Inc.| (default, Dec 20 2016, 23:09:15)
>> [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
>> Type "help", "copyright", "credits" or "license" for more information.
>> Anaconda is brought to you by Continuum Analytics.
>> Please check out: http://continuum.io/thanks and https://anaconda.org
>> >>> import theano
>> Using cuDNN version 5110 on context None
>> Mapped name None to device cuda1: GeForce GTX 1080 Ti (0000:02:00.0)
>> >>>
>
>
> then I input the code from the exercise of
> http://deeplearning.net/software/theano/tutorial/using_gpu.html#gpuarray
>
> 
>
> *import numpy*
> *import theano*
> *import theano.tensor as T*
> *rng = numpy.random*
> *N = 400*
> *feats = 784*
> *D = (rng.randn(N, feats).astype(theano.config.floatX),*
> *rng.randint(size=N,low=0, high=2).astype(theano.config.floatX))*
> *training_steps = 1*
> *# Declare Theano symbolic variables*
> *x = T.matrix("x")*
> *y = T.vector("y")*
> *w = theano.shared(rng.randn(feats).astype(theano.config.floatX),
> name="w")*
> *b = theano.shared(numpy.asarray(0., dtype=theano.config.floatX),
> name="b")*
> *x.tag.test_value = D[0]*
> *y.tag.test_value = D[1]*
> *# Construct Theano expression graph*
> *p_1 = 1 / (1 + T.exp(-T.dot(x, w)-b)) # Probability of having a one*
> *prediction = p_1 > 0.5 # The prediction that is done: 0 or 1*
> *xent = -y*T.log(p_1) - (1-y)*T.log(1-p_1) # Cross-entropy*
> *cost = xent.mean() + 0.01*(w**2).sum() # The cost to optimize*
> *gw,gb = T.grad(cost, [w,b])*
> *# Compile expressions to functions*
> *train = theano.function(*
> *inputs=[x,y],*
> *outputs=[prediction, xent],*
> *updates=[(w, w-0.01*gw), (b, b-0.01*gb)],*
> *name = "train")*
>
>
> ==
> It corrupted at this line.
> I have run numpy.test() and scipy.test() and they work well, but when I
> run theano.test(), it corrupted too. The full log is too long, so I just
> post
> the end of it:
>
> */home/nice/miniconda2/lib/python2.7/site-packages/
>> theano/compile/nanguardmode.py:168:
>> RuntimeWarning: All-NaN axis encountered*
>> *  return np.isinf(np.nanmax(arr)) or np.isinf(np.nanmin(arr))*
>> *.E/home/nice/
>> miniconda2/lib/python2.7/site-packages/theano/gof/vm.py:851:
>> UserWarning: CVM does not support memory profile, using Stack VM.*
>> *  'CVM does not support memory profile, using Stack VM.')*
>> *...SS.0.930614401665*
>> *0.930614401665*
>> *0.930614401665*
>> *0.930614401665*
>> *...
>> 
>> ...E/home/nice/miniconda2/
>> lib/python2.7/site-packages/theano/gof/vm.py:854:
>> UserWarning: LoopGC does not support partial evaluation, using Stack VM.*
>> *  'LoopGC does not support partial evaluation, '*
>> *.Segmentation fault (core dumped)*
>
>
>
> I hope someone can help me.
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Re: Theano sort - new GPU implementation

2017-06-30 Thread Frédéric Bastien
There is a PR for topk, but its output is not sorted. You can use it and keep
the sort on the CPU for now. It will be faster that way, I think.

Le jeu. 29 juin 2017 21:11, Victor Campmany  a écrit :

> We are working on both sorting 1D arrays and sorting along an axis of an
> nd-array. We are trying to release it as soon as possible.
>
> El jueves, 29 de junio de 2017, 21:02:08 (UTC-4), Adam Becker escribió:
>>
>> Has there been any progress? I'm in need of sorted TopK on GPU. I can go
>> with CPU sort but seems a bit slow.
>>
>> On Thursday, June 15, 2017 at 4:01:00 AM UTC+8, Victor Campmany wrote:
>>>
>>> Hi,
>>>
>>> We are planning to implement a new GPU accelerated sorting algorithm.
>>> We'd like to know which are the most frequent sorting cases that you guys
>>> use and the data sizes you are dealing with. For example, sorting a large
>>> 1d array, sorting a given axis of a tensor or minibatch, or any other type
>>> of sorting you come up with.
>>>
>> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Gradient Problem (always 0)

2017-06-29 Thread Frédéric Bastien
You can also add names to your intermediate variables. theano.grad() will
use them to create names for the grad nodes. This will help you understand
what is going on. Maybe the debugprint parameter stop_on_name=True could
also help make that graph more readable.
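
For example, a minimal sketch with made-up shapes ("m" and "n" stand for the
variables discussed below):

import numpy
import theano
import theano.tensor as T

m = theano.shared(numpy.ones((2, 3), dtype=theano.config.floatX), name="m")
n = theano.shared(numpy.ones((3, 2), dtype=theano.config.floatX), name="n")
cost = T.dot(m, n).sum()
cost.name = "cost"
gm, gn = T.grad(cost, [m, n])
# the names show up in the printed graph and make it easier to navigate
theano.printing.debugprint([gm, gn], stop_on_name=True)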

On Thu, Jun 29, 2017 at 9:22 AM Frédéric Bastien <frederic.bast...@gmail.com>
wrote:

> The + in "Wrec + T.dot(u, v)".
>
> The debugprint command I gave you will help separate the forward
> computation from the grad computation.
>
> The grad of a dot is another dot. So what would explain a zero output
> would be too many zeros, or only zeros, in the inputs. Can you verify the
> values of m and n? Make sure there are no zeros in them.
>
> On Thu, Jun 29, 2017 at 9:05 AM Mohamed Akrout <mohammed.akr...@gmail.com>
> wrote:
>
>> Yes I printed the gradient function of m but it is extremely big. I find
>> it unreadable (file attached). I don't know how this tree will help me find
>> the problem. There are nodes that are Alloc and second, but I don't know how
>> to change and/or control them.
>>
>> When you say "Only the extra addition will be done at each iteration",
>> which extra addition are you talking about?
>>
>> Thank you Fred.
>>
>> Med
>>
>> Regarding your notice, if m and n are non sequence, Theano will not updat
>>
>>
>> On Thursday, June 29, 2017 at 8:34:32 AM UTC-4, nouiz wrote:
>>
>>> I don't know, but you can use theano.printing.debugprint([cost,
>>> grads...])
>>> to see the gradient function. Maybe it will help you understand what is
>>> going on.
>>>
>>> Don't forget m and n are non-sequences. This means the dot will be lifted
>>> out of the loop by Theano. Only the extra addition will be done at each
>>> iteration.
>>>
>>> Fred
>>>
>>> Le mer. 28 juin 2017 19:12, Mohamed Akrout <mohamme...@gmail.com> a
>>> écrit :
>>>
>> Hi all,
>>>>
>>>> I am running a neuroscience experiment with a recurrent neural network
>>>> model with Theano:
>>>>
>>>>
>>>>
>>>> def rnn(u_t, x_tm1, r_tm1, Wrec):
>>>>  x_t = ( (1 - alpha)*x_tm1 + alpha*(T.dot(r_tm1, Wrec ) + brec
>>>> + u_t[:,Nin:]) )
>>>>  r_t = f_hidden(x_t)
>>>>
>>>>
>>>> then I define the scan function to iterate at each time step iteration
>>>>
>>>> [x, r], _ = theano.scan(fn=rnn,
>>>> outputs_info=[x0_, f_hidden(x0_)],
>>>> sequences=u,
>>>> non_sequences=[Wrec])
>>>>
>>>> Wrec and brec are learnt by stochastic gradient descent: g =
>>>> T.grad(cost , [Wrec, brec])
>>>>
>>>> where cost is the cost function: T.sum(f_loss(z, target[:,:,:Nout]))
>>>> with z = f_output(T.dot(r, Wout_.T) + bout )
>>>>
>>>> Until now, everything works good.
>>>>
>>>>
>>>>
>>>> Now I want to add two new vectors, let's call them u and v so that the
>>>> initial rnn function becomes:
>>>>
>>>>
>>>> def rnn(u_t, x_tm1, r_tm1, Wrec, u, v):
>>>>  x_t = ( (1 - alpha)*x_tm1 + alpha*(T.dot(r_tm1, Wrec + T.dot(u,
>>>> v) ) + brec + u_t[:,Nin:]) )
>>>>  r_t = f_hidden(x_t)
>>>>
>>>> [x, r], _ = theano.scan(fn=rnn,
>>>> outputs_info=[x0_, f_hidden(x0_)],
>>>> sequences=u,
>>>> non_sequences=[Wrec, m, n])
>>>>
>>>> m and n are the variables corresponding to u and v in the main function.
>>>>
>>>> and suddenly, the gradient T.grad(cost, m) and T.grad(cost, n) are zeros
>>>>
>>>> I am blocked since 2 weeks now on this problem. I verified that the
>>>> values are not integer by using dtype=theano.config.floatX every where in
>>>> the definition of the variables.
>>>>
>>>> As you can see the link between the cost and m (or n) is: the cost
>>>> function depends on  z, and z depends on r and r is one of the outputs of
>>>> the rnn function that uses m and n in the equation.
>>>>
>>>> Do you have any ideas why this does not work ?
>>>>
>>>> Any idea is welcome. I hope I can unblock this problem soon.
>>>> Thank you!
>>>>
>>>> --
>>>>
>>>> ---
>>>> You received this message because you are subscribed to the Google
>>>> Groups "theano-users" group.
>>>>
>>> To unsubscribe from this group and stop receiving emails from it, send
>>>> an email to theano-users...@googlegroups.com.
>>>
>>>
>>>> For more options, visit https://groups.google.com/d/optout.
>>>>
>>> --
>>
>> ---
>> You received this message because you are subscribed to the Google Groups
>> "theano-users" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to theano-users+unsubscr...@googlegroups.com.
>> For more options, visit https://groups.google.com/d/optout.
>>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Gradient Problem (always 0)

2017-06-29 Thread Frédéric Bastien
I don't know, but you can use theano.printing.debugprint([cost, grads...])
to see the gradient function. Maybe it will help you understand what is
going on.

Don't forget m and n are non-sequences. This means the dot will be lifted out
of the loop by Theano. Only the extra addition will be done at each
iteration.

Fred
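
As a sanity check, here is a minimal sketch (hypothetical shapes and a tanh
nonlinearity) of a scan whose non-sequences m and n enter as Wrec +
T.dot(m, n); its gradients come out nonzero, so a zero gradient in the real
model points at the inputs or the cost rather than at scan itself:

import numpy
import theano
import theano.tensor as T

floatX = theano.config.floatX
rng = numpy.random.RandomState(0)

u = T.tensor3("u")   # (time, batch, units)
x0 = T.matrix("x0")  # (batch, units)
Wrec = theano.shared(rng.randn(8, 8).astype(floatX), name="Wrec")
m = theano.shared(rng.randn(8, 2).astype(floatX), name="m")
n = theano.shared(rng.randn(2, 8).astype(floatX), name="n")

def step(u_t, x_tm1, Wrec, m, n):
    # the low-rank term T.dot(m, n) is added to Wrec inside the loop
    return T.tanh(T.dot(x_tm1, Wrec + T.dot(m, n)) + u_t)

x, _ = theano.scan(step, sequences=u, outputs_info=[x0],
                   non_sequences=[Wrec, m, n])
cost = (x ** 2).mean()
gm, gn = T.grad(cost, [m, n])

f = theano.function([u, x0], [gm, gn])
gm_val, gn_val = f(rng.randn(5, 3, 8).astype(floatX),
                   numpy.zeros((3, 8), dtype=floatX))
print(abs(gm_val).max(), abs(gn_val).max())  # both should be nonzero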

Le mer. 28 juin 2017 19:12, Mohamed Akrout  a
écrit :

> Hi all,
>
> I am running a neuroscience experiment with a recurrent neural network
> model with Theano:
>
>
>
> def rnn(u_t, x_tm1, r_tm1, Wrec):
>  x_t = ( (1 - alpha)*x_tm1 + alpha*(T.dot(r_tm1, Wrec ) + brec +
> u_t[:,Nin:]) )
>  r_t = f_hidden(x_t)
>
>
> then I define the scan function to iterate at each time step iteration
>
> [x, r], _ = theano.scan(fn=rnn,
> outputs_info=[x0_, f_hidden(x0_)],
> sequences=u,
> non_sequences=[Wrec])
>
> Wrec and brec are learnt by stochastic gradient descent: g = T.grad(cost ,
> [Wrec, brec])
>
> where cost is the cost function: T.sum(f_loss(z, target[:,:,:Nout])) with
> z = f_output(T.dot(r, Wout_.T) + bout )
>
> Until now, everything works good.
>
>
>
> Now I want to add two new vectors, let's call them u and v so that the
> initial rnn function becomes:
>
>
> def rnn(u_t, x_tm1, r_tm1, Wrec, u, v):
>  x_t = ( (1 - alpha)*x_tm1 + alpha*(T.dot(r_tm1, Wrec + T.dot(u,
> v) ) + brec + u_t[:,Nin:]) )
>  r_t = f_hidden(x_t)
>
> [x, r], _ = theano.scan(fn=rnn,
> outputs_info=[x0_, f_hidden(x0_)],
> sequences=u,
> non_sequences=[Wrec, m, n])
>
> m and n are the variables corresponding to u and v in the main function.
>
> and suddenly, the gradient T.grad(cost, m) and T.grad(cost, n) are zeros
>
> I am blocked since 2 weeks now on this problem. I verified that the values
> are not integer by using dtype=theano.config.floatX every where in the
> definition of the variables.
>
> As you can see the link between the cost and m (or n) is: the cost
> function depends on  z, and z depends on r and r is one of the outputs of
> the rnn function that uses m and n in the equation.
>
> Do you have any ideas why this does not work ?
>
> Any idea is welcome. I hope I can unblock this problem soon.
> Thank you!
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] And operator doesn't work with theano logical operators

2017-06-28 Thread Frédéric Bastien
Don't use the Python "and" operator; it does not work on symbolic variables.
Use theano.tensor.and_(a, b) (or the & operator) instead. I think it will fix
your problem.
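
For example, a minimal sketch of the snippet below with the condition fixed:

import theano
import theano.tensor as T

r = T.scalar()
gate = T.switch(T.and_(T.ge(r, 2.), T.le(r, 3.)), 1., 0.)
f = theano.function([r], gate)
print([float(f(i)) for i in (1.5, 2.5, 3.5)])  # [0.0, 1.0, 0.0]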

Le mer. 28 juin 2017 10:26, Sym  a écrit :

>
> I want to build a piecewise function with theano, for instance a function
> that is nonzero only in the interval [2,3].
>
> Here is the minimal code reproducing the error :
>
>
> import theano
> import theano.tensor as T
> import numpy as np
> import matplotlib.pyplot as plt
>
> r = T.scalar()
> gate = T.switch( T.ge(r,2.) and T.le(r,3.) , 1., 0.)
> f = theano.function([r],gate)
> x = np.arange(0.,4.,0.05,dtype='float32')
> y = [f(i) for i in x]
> plt.plot(x,y)
>
>
>
> The result is the following : https://i.stack.imgur.com/XMQme.png
>
> Which is clearly not correct : only one condition is satisfied here.
>
>
> If I replace T.switch by theano.ifelse.ifelse the result is the same...
>
> Is it a known bug, or am I missing something here?
>
>
> Thanks a lot !
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Re: Significant increase in GPU memory consumption with new GPU backend

2017-06-22 Thread Frédéric Bastien
The equivalent of the old back-end's memory setting is:
gpuarray.preallocate=-1.

By default, the new back-end caches all calls to cudaMalloc() to speed up
computation. This flag disables that cache, which is the same default as
the old back-end.
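
For example (the script name is hypothetical):

# disable the allocation cache, like the old back-end:
THEANO_FLAGS='device=cuda,gpuarray.preallocate=-1' python train.py

# or pre-allocate a fixed fraction (here 100%) of GPU memory up front:
THEANO_FLAGS='device=cuda,gpuarray.preallocate=1' python train.py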

On Thu, Jun 22, 2017 at 9:41 AM Fabian Stemmer 
wrote:

> When I did use preallocation I used lib.cnmem=1 for theano 0.8.2 and
> gpuarray.preallocate=1 for theano 0.9.0 and 0.10.dev.
> For most experiments (including those in the log files) I did not use
> preallocation, because the only way I could see the difference in memory
> usage was through nvidia-smi, which only shows the static pre-allocation
> when it is used.
> I believe the problem does not disappear with pre-allocation, since I see
> my training crash for much smaller models with the new backend even then.
> However, I cannot measure the effect of switching backends on GPU memory
> when I use preallocation.
>
>
> On Thursday, June 22, 2017 at 3:23:15 PM UTC+2, nouiz wrote:
>
>> Do you use the Theano flag: gpuarray.preallocate=1? When you tried the
>> preallocation, how did you use it?
>>
>> It is mostly equivalent to lib.cnmem, but our default is different: by
>> default it gives more speedup, but can sometimes cause memory fragmentation.
>> The flag above fixes the fragmentation that can happen by default.
>>
>> On Thu, Jun 22, 2017 at 5:33 AM Fabian Stemmer 
>> wrote:
>>
> One addition:
>>> The theano 0.9.0 setup used libgpuarray v0.6.2.
>>> The theano 0.10.dev setup used libgpuarray v0.6.5 - I just updated to
>>> v0.6.7 and tested again, but I still get ~2GB memory usage.
>>>
>>>
>>> On Thursday, June 22, 2017 at 8:38:26 AM UTC+2, Fabian Stemmer wrote:

 Hi,

 I recently tried to switch my CNN implementation to the new theano GPU
 backend. To do so, I switched from "device=gpu" to "device=cuda" with
 theano9 and libgpuarray installed. My theano code then works with the new
 backend without any further changes.

 However, when I do this, I see my GPU memory consumption increase
 drastically. When I use theano memory profiling both GPU backends show the
 same memory consumption, but when I use nvidia-smi to monitor memory usage
 while the job is running, the old backend hovers somewhere around 400MB,
 while the new backend uses 2GB for the same model size and data. When I try
 to train larger models, the new GPU backend fails with memory errors for
 much smaller models than the old backend. This is also true when I activate
 memory pre-allocation.

 I tried to remove parts of my model or exclude certain theano
 optimizations (e.g. exclude conv_dnn to force theano to use a different
 convolution algorithm) but nothing I changed in the model structure had an
 impact on the discrepancy I see in memory usage.

 I use CUDA 8.0 and cuDNN 5105 for these experiments. For the old
 backend I see very similar behavior for both the 0.8.2 and 0.9.0 releases.
 For the new backend I tested the 0.9.0 release as well as a recent github
 checkout (commit c5cd87fa7895dc44c7acd54cb85e6d232b33bd3a) - both showed
 the same memory increase.

 I attached log files including my models computational graph and
 information on libraries, environment variables, etc. Please let me know if
 I can supply any additional information to make it easier to look into
 this. I tried to prepare a simple sample script to reproduce the behavior,
 but was so far unable to do so.

 Thanks
 Fabian

>>> --
>>>
>>> ---
>>> You received this message because you are subscribed to the Google
>>> Groups "theano-users" group.
>>>
>> To unsubscribe from this group and stop receiving emails from it, send an
>>> email to theano-users...@googlegroups.com.
>>
>>
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Re: Significant increase in GPU memory consumption with new GPU backend

2017-06-22 Thread Frédéric Bastien
Do you use the Theano flag: gpuarray.preallocate=1? When you tried the
preallocation, how did you use it?

It is mostly equivalent to lib.cnmem, but our default is different: by
default it gives more speedup, but can sometimes cause memory fragmentation.
The flag above fixes the fragmentation that can happen by default.

On Thu, Jun 22, 2017 at 5:33 AM Fabian Stemmer 
wrote:

> One addition:
> The theano 0.9.0 setup used libgpuarray v0.6.2.
> The theano 0.10.dev setup used libgpuarray v0.6.5 - I just updated to
> v0.6.7 and tested again, but I still get ~2GB memory usage.
>
>
> On Thursday, June 22, 2017 at 8:38:26 AM UTC+2, Fabian Stemmer wrote:
>>
>> Hi,
>>
>> I recently tried to switch my CNN implementation to the new theano GPU
>> backend. To do so, I switched from "device=gpu" to "device=cuda" with
>> theano9 and libgpuarray installed. My theano code then works with the new
>> backend without any further changes.
>>
>> However, when I do this, I see my GPU memory consumption increase
>> drastically. When I use theano memory profiling both GPU backends show the
>> same memory consumption, but when I use nvidia-smi to monitor memory usage
>> while the job is running, the old backend hovers somewhere around 400MB,
>> while the new backend uses 2GB for the same model size and data. When I try
>> to train larger models, the new GPU backend fails with memory errors for
>> much smaller models than the old backend. This is also true when I activate
>> memory pre-allocation.
>>
>> I tried to remove parts of my model or exclude certain theano
>> optimizations (e.g. exclude conv_dnn to force theano to use a different
>> convolution algorithm) but nothing I changed in the model structure had an
>> impact on the discrepancy I see in memory usage.
>>
>> I use CUDA 8.0 and cuDNN 5105 for these experiments. For the old backend
>> I see very similar behavior for both the 0.8.2 and 0.9.0 releases. For the
>> new backend I tested the 0.9.0 release as well as a recent github checkout
>> (commit c5cd87fa7895dc44c7acd54cb85e6d232b33bd3a) - both showed the same
>> memory increase.
>>
>> I attached log files including my models computational graph and
>> information on libraries, environment variables, etc. Please let me know if
>> I can supply any additional information to make it easier to look into
>> this. I tried to prepare a simple sample script to reproduce the behavior,
>> but was so far unable to do so.
>>
>> Thanks
>> Fabian
>>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] 'local_remove_all_assert' command not working.

2017-06-20 Thread Frédéric Bastien
Read my full previous email. There were many important points. Update to 0.9
or the dev version of Theano. If you still have the problem, give me the
full new error.

Le lun. 19 juin 2017 20:44, Sunjeet Jena  a écrit :

> I am using the theano version 0.8.2. I got this error while using Jacobian
> function.
>
>
> On Tuesday, 20 June 2017 03:45:46 UTC+5:30, nouiz wrote:
>
>> This flag won't help you with that error. Which version of Theano are you
>> using? Make sure to use 0.9 or more recent. Also give the full error message.
>>
>> Le dim. 18 juin 2017 18:51, Sunjeet Jena  a écrit :
>>
> I am trying to use 'local_remove_all_assert' in theano.flag to remove this
>>> error "  AssertionError: Scan has returned a list of updates. This should
>>> not happen! Report this to theano-users (also include the script that
>>> generated the error)" but still I am getting this error. Is this the right
>>> way to disable this function:
>>>
>>>
>>>
>>> * THEANO_FLAGS="floatX=float32,
>>> optimizer_including=local_remove_all_assert" python Deep_RL_4.py*
>>>
>>> --
>>>
>>> ---
>>> You received this message because you are subscribed to the Google
>>> Groups "theano-users" group.
>>>
>> To unsubscribe from this group and stop receiving emails from it, send an
>>> email to theano-users...@googlegroups.com.
>>
>>
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Running into "NameError: global name 'CVM' is not defined" when using Theano

2017-06-19 Thread Frédéric Bastien
Theano should work on Solaris, but we don't test it. What is the full error?

Probably the problem is that Theano doesn't find g++. Did you install it?

Can you confirm you have Theano 0.9 or more recent?

Le mer. 14 juin 2017 23:35, Avinash Thangali  a
écrit :

> I run into this error when I use Theano directly and from Keras. Both
> install with pip and for python 2.7 without raising any errors. I'm running
> this on a 64-bit version of Solaris, which I believe is a supported
> platform for Theano according to PyPi. I've looked around for possible
> causes of this error but they all seem to be related to windows users. The
> one suggestion I found for Unix was rm -rf .theano from the root directory
> but I tried that and I'm still getting this error.
>
> What could be causing this issue?
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Re: Cannot do a simple theano install (Python 2.7, Ubuntu 16.04, Theano 0.9, Cuda 8.0, TitanX GPU) due to pygpu errors

2017-06-19 Thread Frédéric Bastien
Your cudnn.h file should not be in the lib64 directory, but in an include
directory. TensorFlow does non-standard stuff related to imports that causes
problems in other setups, but it seems to tolerate your non-standard layout.
Theano expects the standard layout.

You can use the Theano flags dnn.include_path and dnn.library_path to tell
Theano where your cudnn.h and cudnn.so* files are.

I did not see your last error in full.
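
For example, pointing the flags at the directories that actually contain
cudnn.h and the cudnn.so* files (the script name is hypothetical; in the
layout described below, both sit in lib64):

THEANO_FLAGS='device=cuda,dnn.include_path=/usr/local/cuda-8.0/lib64,dnn.library_path=/usr/local/cuda-8.0/lib64' python your_script.py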

Le ven. 16 juin 2017 19:35, Daniel Seita  a écrit :

> Ack, sorry, half of my post got deleted! Hopefully you can still see it (i
> can find it by looking at the original post but it's in a really ugly
> format, sorry).
>
>
>
> On Friday, June 16, 2017 at 4:33:20 PM UTC-7, Daniel Seita wrote:
>
>> I was running into some more difficulties, so I gave up on getting this
>> to work and tried to uninstall and then reinstall Theano. Just to be extra
>> clear, here is my setup:
>>
>>- Ubuntu 16.04
>>- Cuda 8.0, stored in `usr/local/cuda-8.0`
>>- Titan X GPU with Pascal
>>
>> cuDNN is here:
>>
>> $ ls /usr/local/cuda-8.0/lib64/cudnn.h
>> /usr/local/cuda-8.0/lib64/cudnn.h
>>
>> To verify that I can use my GPU I started this quick TensorFlow
>> computation:
>>
>> In [1]: import tensorflow as tf
>>
>> In [2]: tf.__version__
>> Out[2]: '1.1.0'
>>
>> In [3]: tf.GPUOptions
>> Out[3]: tensorflow.core.protobuf.config_pb2.GPUOptions
>>
>> In [4]: with tf.device('/gpu:0'):
>>...: a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3],
>> name='a')
>>...: b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2],
>> name='b')
>>...: c = tf.matmul(a,b)
>>...:
>>
>> In [5]: with tf.Session() as sess:
>>...: print(sess.run(c))
>>...:
>> 2017-06-16 16:10:54.402311: W tensorflow/core/platform/cpu_feature_guard.
>> cc:45] The TensorFlow library wasn't compiled to use SSE4.1
>> instructions, but these are available on your machine and could speed up
>> CPU computations.
>> 2017-06-16 16:10:54.402328: W
>> tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library
>> wasn't compiled to use SSE4.2 instructions, but these are available on
>> your machine and could speed up CPU computations.
>> 2017-06-16 16:10:54.402346: W tensorflow/core/platform/cpu_feature_guard.
>> cc:45] The TensorFlow library wasn't compiled to use AVX instructions,
>> but these are available on your machine and could speed up CPU computations.
>> 2017-06-16 16:10:54.402350: W
>> tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library
>> wasn't compiled to use AVX2 instructions, but these are available on
>> your machine and could speed up CPU computations.
>> 2017-06-16 16:10:54.402356: W tensorflow/core/platform/cpu_feature_guard.
>> cc:45] The TensorFlow library wasn't compiled to use FMA instructions,
>> but these are available on your machine and could speed up CPU computations.
>> 2017-06-16 16:10:54.527167: I
>> tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:901] successful NUMA
>> node read from SysFS had negative value (-1), but there must be at least
>> one NUMA node, so returning NUMA node zero
>> 2017-06-16 16:10:54.527553: I
>> tensorflow/core/common_runtime/gpu/gpu_device.cc:887] Found device 0 with
>> properties:
>> name: TITAN X (Pascal)
>> major: 6 minor: 1 memoryClockRate (GHz) 1.531
>> pciBusID :01:00.0
>> Total memory: 11.90GiB
>> Free memory: 11.38GiB
>> 2017-06-16 16:10:54.527565: I
>> tensorflow/core/common_runtime/gpu/gpu_device.cc:908] DMA: 0
>> 2017-06-16 16:10:54.527568: I
>> tensorflow/core/common_runtime/gpu/gpu_device.cc:918] 0:   Y
>> 2017-06-16 16:10:54.527590: I
>> tensorflow/core/common_runtime/gpu/gpu_device.cc:977] Creating TensorFlow
>> device (/gpu:0) -> (device: 0, name: TITAN X (Pascal), pci bus id:
>> :01:00.0)
>> [[ 22.  28.]
>>  [ 49.  64.]]
>>
>>
>> This looks like it indicates a successful GPU and/or cuDNN installation.
>>
>> Great, now let's install the development version of Theano. The
>> instructions I'm following step-by-step:
>> http://deeplearning.net/software/theano_versions/dev/install_ubuntu.html
>>
>> The first step seems to be to install miniconda. I downloaded the bash
>> script for Python 2.7 and ran it:
>>
>> ~/Downloads$ bash Miniconda2-latest-Linux-x86_64.sh
>>
>> Welcome to Miniconda2 4.3.21 (by Continuum Analytics, Inc.)
>>
>> In order to continue the installation process, please review the license
>> agreement.
>> Please, press ENTER to continue
>>
>> and it seemed to work without issues.
>>
>> The next step is to install requirements through conda. Here I did:
>>
>> $ conda install numpy scipy mkl nose sphinx pydot-ng
>> Fetching package metadata .
>> Solving package specifications: .
>>
>> Package plan for installation in environment /home/daniel/miniconda2:
>>
>> The following NEW packages will be INSTALLED:
>>
>> alabaster:0.7.10-py27_0
>> babel:2.4.0-py27_0
>> docutils: 

Re: [theano-users] Is it possible to run two different cost functions in parallel since they share part of the network ?

2017-06-19 Thread Frédéric Bastien
You can update two Theano shared variables in the same Theano function. If
you want two different updates to the same shared variable, it is up to you
to combine them the way you want and give Theano just one update for that
shared variable.
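
For example, a minimal sketch with two made-up costs that share the
parameter w:

import numpy
import theano
import theano.tensor as T

floatX = theano.config.floatX
x = T.vector("x")
w = theano.shared(numpy.ones(3, dtype=floatX), name="w")
v = theano.shared(numpy.ones(3, dtype=floatX), name="v")

cost1 = T.dot(x, w) ** 2                  # first cost, uses w only
cost2 = (T.dot(x, w) + T.dot(x, v)) ** 2  # second cost, shares w

gw1 = T.grad(cost1, w)
gw2, gv2 = T.grad(cost2, [w, v])

# one update per shared variable: combine the two gradients on w yourself
train = theano.function(
    [x], [cost1, cost2],
    updates=[(w, w - 0.01 * (gw1 + gw2)), (v, v - 0.01 * gv2)])

print(train(numpy.ones(3, dtype=floatX)))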

Le jeu. 15 juin 2017 04:55, Feras Almasri <fsalma...@gmail.com> a écrit :

> Thanks Frédéric,
>
> Inside the function, is it possible to have two updates?
>
>
>
> On Thu, 15 Jun 2017 at 03:21, Frédéric Bastien <frederic.bast...@gmail.com>
> wrote:
>
>> You can use only one Theano function.
>>
>> Fred
>>
>> Le mer. 14 juin 2017 06:32, Feras Almasri <fsalma...@gmail.com> a écrit :
>>
>>> Hello,
>>>
>>> Part of the network is shared between two cost functions while the rest
>>> is not. Is it possible to use one theano function to run both updates in
>>> parallel, or should it be done in two different theano functions?
>>>
>>>
>>>
>>> --
>>>
>>> ---
>>>
>> You received this message because you are subscribed to the Google Groups
>>> "theano-users" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to theano-users+unsubscr...@googlegroups.com.
>>
>>
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>> --
>>
>> ---
>>
> You received this message because you are subscribed to a topic in the
>> Google Groups "theano-users" group.
>> To unsubscribe from this topic, visit
>> https://groups.google.com/d/topic/theano-users/dWahADLSke8/unsubscribe.
>> To unsubscribe from this group and all its topics, send an email to
>> theano-users+unsubscr...@googlegroups.com.
>
>
>> For more options, visit https://groups.google.com/d/optout.
>>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Re: Theano sort - new GPU implementation

2017-06-19 Thread Frédéric Bastien
Adam, what input shapes do you sort on right now? What axis?

This is to help us know which case to optimize.

Fred

Le mer. 14 juin 2017 22:03, Victor Campmany  a écrit :

> The implementation would be for Gpuarray, not Theano; I mixed things up,
> sorry.
>
>
> El miércoles, 14 de junio de 2017, 20:47:41 (UTC-4), Adam Becker escribió:
>>
>> I'd prefer a gpuarray implementation with similar interface as numpy:
>>
>> gpuarray.sort(arr, [axis=-1], [kind='radixsort'], [order='inc'])
>>
>> Deep Learning folks would need a fast batched version, especially float32
>> / int32 tensors on GPU. But anyway there should be a general algorithm
>> deals with all cases, never know what kind of model would come up in
>> future.
>>
>> On Thursday, June 15, 2017 at 4:01:00 AM UTC+8, Victor Campmany wrote:
>>>
>>> Hi,
>>>
>>> We are planning to implement a new GPU accelerated sorting algorithm.
>>> We'd like to know which are the most frequent sorting cases that you guys
>>> use and the data sizes you are dealing with. For example, sorting a large
>>> 1d array, sorting a given axis of a tensor or minibatch, or any other type
>>> of sorting you come up with.
>>>
>> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] MemoryError (Theano on CPU): Lenovo Thinkpad

2017-06-19 Thread Frédéric Bastien
Using the Theano dev version could help you. If that doesn't fix it, using
Python 3.5 could help. It fixed problems we were not able to reproduce for
some people.

Le sam. 17 juin 2017 21:51, Aaron Snoswell <aaron.snosw...@gmail.com> a
écrit :

> Hello.
>
> Thanks for the reply - I do indeed have mkl-service installed. I ran conda
> update --all and am still getting the same results. If anyone has any other
> suggestions I'm all ears.
>
> Thank you,
>
> On Tue, Jun 13, 2017 at 8:00 AM, Frédéric Bastien <
> frederic.bast...@gmail.com> wrote:
>
>> This is not normal.
>>
>> Did you install the conda package mkl-service?
>>
>> Try to update numpy. It could also help.
>>
>> Le lun. 12 juin 2017 07:52, Aaron Snoswell <aaron.snosw...@gmail.com> a
>> écrit :
>>
>>> I'm working through the DeepLearning.net tutorials using Windows 64
>>> bit, Python 3.6 and Theano installed through conda.
>>>
>>> I was able to run the Classifying MNIST digits using Logistic Regression
>>> <http://deeplearning.net/tutorial/logreg.html> demo fine, and got the
>>> same results as listed in the tutorial, hitting 4 epochs/second (about
>>> double the listed CPU performance in the tutorial). I then tried running
>>> the MLP tutorial code <http://deeplearning.net/tutorial/mlp.html> (classify
>>> MNIST digits using a simple MLP). During execution, the process gobbles up
>>> memory continuously until I get a MemoryError and the python crashes.
>>> Watching the task manager, I will occasionally see the memory usage drop -
>>> I assume this is the garbage collector kicking in, but it happens rarely.
>>>
>>>
>>> <https://lh3.googleusercontent.com/-4EYsaeVqr_w/WT5-SyEMmWI/EZw/z3aqQrLFVVcdVfqfnlLDvvS7n8WH8Qt9QCLcB/s1600/theano-running-memory.PNG>
>>>
>>>  I've tried adjusting the MLP 'batch_size' parameter;
>>>
>>>- With a value of 1000 (therefore n_train_batches == 50) the code
>>>runs until the patience condition causes it to stop (no crash)
>>>- With the default of 20 (n_train_batches == 2500) the code gets to
>>>epoch 17 and crashes
>>>- With a value of 10 (n_train_batches == 5000) I only get to epoch 3
>>>before it crashes
>>>
>>> Is this behavior expected with the hardware specs of the laptop I'm
>>> running on? I've attached my DxDiag results here, but I've got 20GB of ram
>>> on this machine.
>>>
>>> Just trying to figure out if this crashing behavior is expected, or if
>>> I'm seeing a memory leak of some sort.
>>>
>>> Thanks.
>>>
>>> --
>>>
>>> ---
>>> You received this message because you are subscribed to the Google
>>> Groups "theano-users" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to theano-users+unsubscr...@googlegroups.com.
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>> --
>>
>> ---
>> You received this message because you are subscribed to a topic in the
>> Google Groups "theano-users" group.
>> To unsubscribe from this topic, visit
>> https://groups.google.com/d/topic/theano-users/Rz408i5rx2k/unsubscribe.
>> To unsubscribe from this group and all its topics, send an email to
>> theano-users+unsubscr...@googlegroups.com.
>>
>
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>
>
> --
>
> Aaron Snoswell
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] How to close all OPTs?

2017-06-19 Thread Frédéric Bastien
Use the Theano flag optimizer=fast_compile to run only a few optimizations.
It is not recommended to use less optimization, but if you insist, you could
use optimizer=merge or, with even less, optimizer=None.

The last two are there just to help debugging. Never use them otherwise:
they will be very slow and do stupid stuff like computing the same thing
multiple times!

Fred
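
For example (the script name is hypothetical):

THEANO_FLAGS='optimizer=fast_compile' python my_script.py

or per compiled function (note that the predefined FAST_COMPILE mode also
switches to the slower Python linker):

f = theano.function([x], y, mode='FAST_COMPILE')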

Le jeu. 15 juin 2017 09:43, mutou  a écrit :

>
> Hi all,
>
> Is there a switch to close all OPTs?
>
> Thanks.
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] how to avoid NullTypeGradError in theano?

2017-06-19 Thread Frédéric Bastien
Don't create more than one thread for the same issue. If you have new
information to add or want to ask extra related questions about a previous
question, ask in the same thread.

This saves valuable time.

Fred
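
That said, one common workaround (a sketch of my own, not taken from this
thread) is to reparameterize: draw parameter-free noise and apply the
differentiable transformation outside the sampler, so the gradient never
goes through the random op:

import theano
import theano.tensor as T
from theano.tensor.shared_randomstreams import RandomStreams

srng = RandomStreams(seed=42)
mu, sigma = T.scalar("mu"), T.scalar("sigma")

# srng.normal(size=(5,), avg=mu, std=sigma) has no gradient w.r.t. mu
eps = srng.normal(size=(5,))        # parameter-free noise
cost = (mu + sigma * eps).sum()     # differentiable in mu and sigma
g_mu, g_sigma = T.grad(cost, [mu, sigma])
f = theano.function([mu, sigma], [cost, g_mu, g_sigma])
print(f(0.0, 1.0))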

Le ven. 16 juin 2017 18:51, Sunjeet Jena  a écrit :

> Is there any general way to avoid "NullTypeGradError" in Theano, which
> arises due to non-differentiable functions such as Theano's normal
> distribution function?
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Re: CUDNN_STATUS_EXECUTION_FAILED when using dnn.conv.algo_bwd_filter=deterministic

2017-06-19 Thread Frédéric Bastien
Try cuDNN v6. The GPUs that have problems are more recent. Maybe that case
was not implemented in v5.

Le lun. 19 juin 2017 16:02, Pascal Lamblin  a
écrit :

>
>
> On Monday, June 19, 2017 at 3:39:17 PM UTC-4, Pascal Lamblin wrote:
>>
>> Hi,
>>
>> Unfortunately, it looks like a runtime issue in cuDNN rather than
>> something in the Theano wrapper, but I could be wrong.
>> A recent PR introduced more algorithms that you can specify for
>> dnn.conv.algo_bwd_filter. In particular,
>> dnn.conv.algo_bwd_filter=fft_tiling should be deterministic as well.
>>
>
> Actually, I just realized the value gets rejected by the configuration,
> but if we bypass it in theano/configdefaults.py it should work. This should
> be fixed soon.
>
>
>>
>> Does it work with an input and kernel that are smaller than 541211 on
>> that dimension?
>> Does it work using corrMM instead of cuDNN?
>>
>> On Wednesday, June 7, 2017 at 11:19:31 AM UTC-4, Fabian Stemmer wrote:
>>>
>>> Hi,
>>>
>>> I'm using theano.tensor.nnet.conv2d in my model and I want to set
>>> dnn.conv.algo_bwd_filter=deterministic to make this run deterministically
>>> on GPUs. I work on three different GPU architectures (K10, M40, P6000) and
>>> setting the mentioned flag works well on the K10, but fails with error
>>> message CUDNN_STATUS_EXECUTION_FAILED on the other two. I have tried
>>> several combinations of theano, nvidia driver and cuDNN versions, but none
>>> fix the issue.
>>>
>>> Below are details about the respective GPU configurations I tried and
>>> the full error message. Any help you can give me is greatly appreciated.
>>>
>>> Thanks
>>> Fabian
>>>
>>>
>>> *Shared setup (all GPUs):*Theano 0.8.2 / 0.9.0 / 0.10.0.dev1 (commit
>>> 6b59449186b04225484b98951192c5867e0719ca, which was the latest at the time
>>> of this writing)
>>> cuda 8.0
>>> cuDNN 5105
>>> THEANO_FLAGS=mode=FAST_RUN,floatX=float32,lib.cnmem=1,
>>> *dnn.conv.algo_bwd_filter=deterministic*,device=cuda //device=gpu for
>>> theano 0.8.2
>>>
>>> *GPU and Nvidia driver:*
>>> Tesla K10 Architecture (Driver 361.93.03)
>>> Tesla M40 Architecture (Driver: 375.26)
>>> Quadro P6000 (Driver 375.26)
>>>
>>> Alternative driver versions (all tested on Tesla M40):
>>>
>>>1. 361.93.03 - Current Production Driver on K10/K20/K80 servers - No
>>>difference. Application fails on the M40 node
>>>2. 375.26 - Current Production driver on M40/P100/P6000 servers -
>>>App fails
>>>3. 375.51 - Most recent driver with CUDA Repo equivalent - App fails
>>>4. 375.66 - Most recent official driver for Quadro/Tesla cards - App
>>>fails
>>>
>>> I also tried upgrading to cuDNN 6.0 and still got the same error.
>>>
>>>
>>> *Full error message (on Quadro P6000, using theano 0.10.0.dev1:*
>>>
>>> Using cuDNN version 5105 on context None
>>> Mapped name None to device cuda: Quadro P6000 (:04:00.0)
>>> Traceback (most recent call last):
>>>   File
>>> "/gpfs/hcnlp/data/users/fabian_stemmer/n3lu/environments/n3lu_0.5.2/py/bin/n3lu_train",
>>> line 9, in 
>>> load_entry_point('n3lu', 'console_scripts', 'n3lu_train')()
>>>   File
>>> "/gpfs/hcnlp/data/users/fabian_stemmer/n3lu/environments/n3lu_0.5.2/n3lu/n3lu/training.py",
>>> line 507, in main
>>> valid_error, test_error = exp.run()
>>>   File
>>> "/gpfs/hcnlp/data/users/fabian_stemmer/n3lu/environments/n3lu_0.5.2/n3lu/n3lu/training.py",
>>> line 475, in run
>>> return self.run_one(self.train_corpus, self.valid_corpus)
>>>   File
>>> "/gpfs/hcnlp/data/users/fabian_stemmer/n3lu/environments/n3lu_0.5.2/n3lu/n3lu/training.py",
>>> line 384, in run_one
>>> learner.run()
>>>   File
>>> "/gpfs/hcnlp/data/users/fabian_stemmer/n3lu/environments/n3lu_0.5.2/n3lu/n3lu/learning.py",
>>> line 448, in run
>>> train_outputs = self.train(*batch)
>>>   File
>>> "/gpfs/hcnlp/data/users/fabian_stemmer/n3lu/environments/n3lu_0.5.2/py/lib/python2.7/site-packages/theano/compile/function_module.py",
>>> line 898, in __call__
>>> storage_map=getattr(self.fn, 'storage_map', None))
>>>   File
>>> "/gpfs/hcnlp/data/users/fabian_stemmer/n3lu/environments/n3lu_0.5.2/py/lib/python2.7/site-packages/theano/gof/link.py",
>>> line 325, in raise_with_op
>>> reraise(exc_type, exc_value, exc_trace)
>>>   File
>>> "/gpfs/hcnlp/data/users/fabian_stemmer/n3lu/environments/n3lu_0.5.2/py/lib/python2.7/site-packages/theano/compile/function_module.py",
>>> line 884, in __call__
>>> self.fn() if output_subset is None else\
>>> *RuntimeError: error doing operation: CUDNN_STATUS_EXECUTION_FAILED*
>>> Apply node that caused the error: GpuDnnConvGradW{algo='deterministic',
>>> inplace=True}(GpuContiguous.0, GpuContiguous.0,
>>> GpuAllocEmpty{dtype='float32', context_name=None}.0,
>>> GpuDnnConvDesc{border_mode=(1, 0), subsample=(1, 1), conv_mode='cross',
>>> precision='float32'}.0, Constant{1.0}, Constant{0.0})
>>> Toposort index: 234
>>> Inputs types: [GpuArrayType(float32, (True, True, False, False)),
>>> GpuArrayType(float32, (True, False, 

Re: [theano-users] Re: Theano 0.9.0: GPU is printed, but not used?

2017-06-19 Thread Frédéric Bastien
The code ran on the GPU. This code is very simple; I'm not surprised that
it doesn't always get a speedup.

You are using the GPU correctly. The problem is in the detection that selects
the print; it needs to be updated for the new backend.
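
For reference, the detection used in the 0.9 version of that tutorial looks
roughly like this (treating any op whose type name contains 'Gpu' as a GPU
op):

import numpy
from theano import function, config, shared
import theano.tensor as T

vlen = 10 * 30 * 768
x = shared(numpy.random.rand(vlen).astype(config.floatX))
f = function([], T.exp(x))
if numpy.any([isinstance(node.op, T.Elemwise) and
              ('Gpu' not in type(node.op).__name__)
              for node in f.maker.fgraph.toposort()]):
    print('Used the cpu')
else:
    print('Used the gpu')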

Le dim. 18 juin 2017 17:31, Meier Benjamin  a
écrit :

> Thanks for the hint :) You are right.
>
> I just searched the code for Theano 0.9 (link:
> http://deeplearning.net/software/theano/tutorial/using_gpu.html) and used
> it for another test. Unfortunately the effect is the same.
>
> Maybe it really works for this example code, but for my application it
> does not seem to work. It is as slow with the GPU flag as with the CPU
> flag. With older versions of theano (and lasagne) it worked, but I also
> changed the GPU (GTX 780 to Titan X pascal).
>
>
> Am Samstag, 17. Juni 2017 00:37:29 UTC+2 schrieb Daniel Seita:
>>
>> Not sure if this affects the result but note that the link you provided
>> is for theano 0.8.X, not theano 0.9.0 as your title implies.
>>
>> On Thursday, June 15, 2017 at 2:45:26 PM UTC-7, Meier Benjamin wrote:
>>>
>>> Hello,
>>>
>>> I use the follwing test program:
>>> https://theano.readthedocs.io/en/0.8.x/tutorial/using_gpu.html
>>>
>>> from theano import function, config, shared, sandbox
>>> import theano.tensor as T
>>> import numpy
>>> import time
>>>
>>> vlen = 10 * 30 * 768  # 10 x #cores x # threads per core
>>> iters = 1000
>>>
>>> rng = numpy.random.RandomState(22)
>>> x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
>>> f = function([], T.exp(x))
>>> print(f.maker.fgraph.toposort())
>>> t0 = time.time()
>>> for i in range(iters):
>>> r = f()
>>> t1 = time.time()
>>> print("Looping %d times took %f seconds" % (iters, t1 - t0))
>>> print("Result is %s" % (r,))
>>> if numpy.any([isinstance(x.op, T.Elemwise) for x in 
>>> f.maker.fgraph.toposort()]):
>>> print('Used the cpu')
>>> else:
>>> print('Used the gpu')
>>>
>>> And I get this output:
>>>
>>> root@21cfc9b009d4:/code/tmp/test# 
>>> THEANO_FLAGS='floatX=float32,device=cuda0' python gpu_test.py
>>> Using cuDNN version 5105 on context None
>>> Mapped name None to device cuda0: TITAN X (Pascal) (:87:00.0)
>>> [GpuElemwise{exp,no_inplace}(), 
>>> HostFromGpu(gpuarray)(GpuElemwise{exp,no_inplace}.0)]
>>> Looping 1000 times took 0.221684 seconds
>>> Result is [ 1.23178029  1.61879349  1.52278066 ...,  2.20771813  2.29967761
>>>   1.62323296]
>>> Used the cpu
>>>
>>>
>>> For some reason theano still uses the CPU? But it already prints the GPU 
>>> infos? Do I do something wrong?
>>>
>>> Thank you very much
>>>
>>>
>>> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

