On Wednesday, July 12, 2017 at 4:13:43 AM UTC-4, Christopher Bourez wrote: 
>
>  I don't know what you mean by "not modifying" the source for GpuEye:
> - In this example, I'm importing an unmodified GpuEye op from Theano's 
> basic ops
> - If I'm using theano.tensor.eye, then it does not use GpuEye
>

OK, I assumed that you had started from the implementation of GpuEye to 
implement a new GPU Op.
Your original example seems to work for me, though, so it may have to do 
with your setup:

In [3]: import theano
   ...: from theano.gpuarray.basic_ops import GpuEye
   ...: 
   ...: x = theano.tensor.iscalar('x')
   ...: y = theano.tensor.iscalar('y')
   ...: z = GpuEye(dtype='float32', context_name=None)(x, y, theano.tensor.constant(0))
   ...: 
   ...: theano.printing.debugprint(z)
   ...: print("Compiling")
   ...: f = theano.function( [x,y], z)
   ...: theano.printing.debugprint(f)
   ...: print("Results")
   ...: print(f(3, 3))
   ...: 
GpuEye{dtype='float32', context_name=None} [id A] ''   
 |x [id B]
 |y [id C]
 |TensorConstant{0} [id D]
Compiling
GpuEye{dtype='float32', context_name=None} [id A] ''   0
 |x [id B]
 |y [id C]
 |TensorConstant{0} [id D]
Results
[[ 1.  0.  0.]
 [ 0.  1.  0.]
 [ 0.  0.  1.]]

> Also, are you sure this test
> https://github.com/Theano/Theano/blob/2625464534147fd70da60a3a3ddcb63ed8e5a416/theano/gpuarray/tests/test_basic_ops.py#L401
> works well?
>

Yes, it gets tested by our continuous integration systems in the daily 
buildbot and on several pull requests per week. I also just launched it 
manually:
$ theano-nose theano/gpuarray/tests/test_basic_ops.py:test_gpueye
Can not use cuDNN on context None: Disabled by dnn.enabled flag
Mapped name None to device cuda: GeForce GTX 580 (0000:02:00.0)
.............................................
----------------------------------------------------------------------
Ran 45 tests in 21.645s

OK


> I've also tried to create an example with theano.gpuarray.nnet.GpuSoftmax, 
> but after compilation it got replaced by another implementation, 
> GpuDnnSoftmax:


Yes, there is an optimization that does that if cuDNN is available. You 
should be able to disable it with `optimizer_excluding=local_softmax_dnn`.
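For instance (a minimal sketch; `x` is an assumed input variable, and the 
same exclusion can be set globally through THEANO_FLAGS or .theanorc):

import theano

x = theano.tensor.fmatrix('x')
# Exclude the local_softmax_dnn optimization for this function only, so
# GpuSoftmax does not get rewritten into GpuDnnSoftmax:
mode = theano.compile.get_default_mode().excluding('local_softmax_dnn')
f = theano.function([x], theano.tensor.nnet.softmax(x), mode=mode)
theano.printing.debugprint(f)  # should still show GpuSoftmax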

> A second thing that is not clear to me in the documentation of Theano is 
> how you specify a C implementation and a GPU implementation of the same 
> custom op. Thank you
>

You do not specify C and GPU implementations for the same Op. What we have 
in general is two different Ops: one with CPU inputs and outputs that 
computes on the CPU, and another with GPU inputs and outputs that computes 
on the GPU.
This is necessary because Variables in Theano are strongly typed, and the 
device is part of the type.
There are optimizations that replace CPU Ops by GPU ones, inserting 
transfer Ops (GpuFromHost, HostFromGpu) where necessary, as in the sketch 
below.
GPU Ops, like CPU ones, can have C (using CUDA) or Python implementations 
(or both). 
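As an illustration (a minimal sketch, assuming a configured GPU context; 
the exact Ops in the printed graph may differ):

import theano
from theano.gpuarray.basic_ops import GpuFromHost, host_from_gpu

x = theano.tensor.fmatrix('x')   # CPU-typed variable
gx = GpuFromHost(None)(x)        # transfer to the default GPU context
gy = gx * 2                      # computed on the GPU (GpuElemwise)
y = host_from_gpu(gy)            # transfer the result back to the CPU
f = theano.function([x], y)
theano.printing.debugprint(f)    # GpuFromHost / HostFromGpu in the graph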

> What surprises me is to get seg faults in the call to theano.function, 
> while I would have expected them to occur during evaluation on values...
>

It is strange indeed. It is possible that some GPU operations are executed 
on the GPU during the compilation phase, for instance for constant folding 
(constant propagation).
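One quick check (a sketch, reusing x, y and z from your example): if the 
seg fault disappears when constant folding is excluded, the crash indeed 
comes from a kernel being run at compilation time:

import theano

# Exclude the constant_folding optimization so that no Op gets executed
# while the function is being compiled:
mode = theano.compile.get_default_mode().excluding('constant_folding')
f = theano.function([x, y], z, mode=mode)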
Does it happen as well with the latest master from GitHub?
 

On Wednesday, July 12, 2017 at 10:05:30 AM UTC+2, Christopher Bourez wrote:
>>
>> On Wednesday, July 12, 2017 at 9:58:34 AM UTC+2, Christopher Bourez wrote:
>>>
>>> Elemwise{mul,no_inplace} [id A] ''
>>>  |HostFromGpu(gpuarray) [id B] ''
>>>  | |GpuSoftmax [id C] ''
>>>  |   |GpuFromHost<dev0> [id D] ''
>>>  |     |x [id E]
>>>  |InplaceDimShuffle{x,x} [id F] ''
>>>  | |TensorConstant{2} [id G]
>>> Compiling
>>> HostFromGpu(gpuarray) [id A] ''   5
>>>  |GpuElemwise{Mul}[(0, 1)]<gpuarray> [id B] ''   4
>>>  | |GpuArrayConstant{[[ 2.]]} [id C]
>>>  | |InplaceGpuDimShuffle{0,1} [id D] ''   3
>>>  |   |GpuDnnSoftmax{mode='channel', algo='accurate'} [id E] ''   2
>>>  |     |GpuContiguous [id F] ''   1
>>>  |       |InplaceGpuDimShuffle{0,1,x,x} [id G] ''   0
>>>  |         |<GpuArrayType<dev0>(float32, (False, False))> [id H]
>>>
>>> I'm looking for a good example with a GPU Kernel.
>>>
>>> On Wednesday, July 12, 2017 at 9:56:08 AM UTC+2, Christopher Bourez 
>>> wrote:
>>>>
>>>> On Wednesday, July 12, 2017 at 2:48:44 AM UTC+2, Pascal Lamblin wrote:
>>>>>
>>>>> Does it work if you do not modify the source for GpuEye at all?
>>>>> If it does, then maybe sharing your new source would get you more help.
>>>>>
>>>>> On Tuesday, July 11, 2017 at 12:12:03 PM UTC-4, Christopher Bourez 
>>>>> wrote:
>>>>>>
>>>>>> Hi, 
>>>>>>
>>>>>> I'm trying to implement a simple GPU op, but it always gives me a 
>>>>>> segmentation fault during compilation, with no other message.
>>>>>>
>>>>>> For example:
>>>>>> import theano
>>>>>> from theano.gpuarray.basic_ops import GpuEye
>>>>>>
>>>>>> x = theano.tensor.iscalar('x')
>>>>>> y = theano.tensor.iscalar('y')
>>>>>> z = GpuEye(dtype='float32', context_name=None)(x, y, theano.tensor.constant(0))
>>>>>>
>>>>>> theano.printing.debugprint(z)
>>>>>> print("Compiling")
>>>>>> f = theano.function( [x,y], z)
>>>>>> theano.printing.debugprint(f)
>>>>>> print("Results")
>>>>>> print(f(3, 3))
>>>>>>
>>>>>> I've also tried with the softmax GPU function. Is there something I'm 
>>>>>> missing?
>>>>>>
>>>>>> I copied the file and created a completely new op, and the 
>>>>>> segmentation fault appears when I define a Kernel in the 
>>>>>> gpu_kernels() method of the op.
>>>>>>
>>>>>> Thanks a lot for your help
>>>>>>
>>>>>
