[theano-users] Re: How to implement binary activation in theano?

2017-07-12 Thread zxzhijia
I see. I'll try that. Thanks.

On Wednesday, July 12, 2017 at 11:57:57 AM UTC-8, Jesse Livezey wrote:
>
> Do you need to take derivatives through the activation? If not, then you 
> could use switch, i.e.
>
> x = some theano variable
> threshold = .5
> x_binary = T.switch(x > threshold, 1., 0.)
>
> On Wednesday, July 12, 2017 at 10:27:32 AM UTC-7, zxzh...@gmail.com wrote:
>>
>> In the binarized network GitHub code (), Matthieu used stochastic 
>> binarization. I'm wondering how to define a simple deterministic binary 
>> activation, instead of a stochastic one, in Theano?
>>
>



[theano-users] Re: About theano.function inside for loop

2017-07-12 Thread Jesse Livezey
Yes, you should be able to just call theano.function(...) before the loops.
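
For instance, a minimal sketch of that pattern (assuming the translated test 
set can be held in a shared variable that is updated in place; layer0, layer3, 
index, y, test_set_y, test_set_x, x_range, y_range, and theano_translation are 
from the original post, test_set_shape is a hypothetical placeholder, and the 
loop variables are renamed so they no longer shadow the symbolic y used in 
errors(y)):

import numpy
import theano

# Compile once; the givens clause reads from a shared variable whose
# contents are swapped in place inside the loops.
t_test_set_x = theano.shared(
    numpy.zeros(test_set_shape, dtype=theano.config.floatX),
    borrow=True)

predict_model = theano.function(
    inputs=[index],
    outputs=layer3.errors(y),
    givens={layer0.input: t_test_set_x[index * 500: (index + 1) * 500],
            y: test_set_y[index * 500: (index + 1) * 500]})

predicted_values = 0.0
for x_shift in range(x_range):
    for y_shift in range(y_range):
        # Refresh the data without rebuilding or recompiling the graph;
        # this assumes theano_translation can return a plain numpy array.
        t_test_set_x.set_value(
            theano_translation(test_set_x, x_shift, y_shift), borrow=True)
        for batch_value in range(20):
            predicted_values += predict_model(batch_value)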

On Wednesday, July 12, 2017 at 4:13:33 AM UTC-7, Kelvin Chiu wrote:
>
> for x in range(x_range):
>     for y in range(y_range):
>         t_test_set_x = theano_translation(test_set_x, x, y, borrow=True)
>         predict_model = theano.function(
>             inputs=[index],
>             outputs=layer3.errors(y),
>             givens={layer0.input: t_test_set_x[index * 500: (index + 1) * 500],
>                     y: test_set_y[index * 500: (index + 1) * 500]})
>         for batch_value in range(0, 20, 1):
>             temp_predicted_values = predict_model(batch_value)
>             predicted_values = temp_predicted_values + predicted_values
>
>
> This is part of my source code. Right now the theano function is compiled 
> inside two for loops, and my test set is updated in every iteration. Is there 
> any way to move the theano function outside the loops so that I can speed up 
> the computation?
>
>



[theano-users] Re: How to implement binary activation in theano?

2017-07-12 Thread Jesse Livezey
Do you need to take derivatives through the activation? If not, then you 
could use switch, i.e.

x = some theano variable
threshold = .5
x_binary = T.switch(x > threshold, 1., 0.)
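
As a runnable illustration of the above (a sketch with illustrative names; 
only the T.switch call itself comes from the answer):

import numpy
import theano
import theano.tensor as T

x = T.matrix('x')
threshold = 0.5
# 1 where the input exceeds the threshold, 0 elsewhere. No useful
# gradient flows through the hard comparison.
x_binary = T.switch(x > threshold, 1., 0.)

binarize = theano.function([x], x_binary)
print(binarize(numpy.array([[0.2, 0.7], [0.9, 0.1]],
                           dtype=theano.config.floatX)))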

On Wednesday, July 12, 2017 at 10:27:32 AM UTC-7, zxzh...@gmail.com wrote:
>
> In the binarized network GitHub code (), Matthieu used stochastic 
> binarization. I'm wondering how to define a simple deterministic binary 
> activation, instead of a stochastic one, in Theano?
>



[theano-users] How to implement binary activation in theano?

2017-07-12 Thread zxzhijia
In the binarized network GitHub code (), Matthieu used stochastic 
binarization. I'm wondering how to define a simple deterministic binary 
activation, instead of a stochastic one, in Theano?



[theano-users] About theano.function inside for loop

2017-07-12 Thread Chiu Chun Pang


for x in range(x_range):
    for y in range(y_range):
        t_test_set_x = theano_translation(test_set_x, x, y, borrow=True)
        predict_model = theano.function(
            inputs=[index],
            outputs=layer3.errors(y),
            givens={layer0.input: t_test_set_x[index * 500: (index + 1) * 500],
                    y: test_set_y[index * 500: (index + 1) * 500]})
        for batch_value in range(0, 20, 1):
            temp_predicted_values = predict_model(batch_value)
            predicted_values = temp_predicted_values + predicted_values


This is part of my source code. Right now the theano function is compiled 
inside two for loops, and my test set is updated in every iteration. Is there 
any way to move the theano function outside the loops so that I can speed up 
the computation?



[theano-users] Re: Implementing a GPU op

2017-07-12 Thread Christopher Bourez
What surprises me is that the seg faults happen inside the theano.function 
call itself; I would have expected them to occur during evaluation on actual 
values...

On Wednesday, July 12, 2017 at 10:05:30 AM UTC+2, Christopher Bourez wrote:
>
> A second thing that is not clear to me in the Theano documentation is 
> how you specify a C implementation and a GPU implementation of the same 
> custom op. Thank you
>
> On Wednesday, July 12, 2017 at 9:58:34 AM UTC+2, Christopher Bourez wrote:
>>
>> I've also tried to create an example with theano.gpuarray.nnet.GpuSoftmax,
>> but after compilation it got replaced by another implementation, 
>> GpuDnnSoftmax:
>>
>>
>> Elemwise{mul,no_inplace} [id A] ''
>>  |HostFromGpu(gpuarray) [id B] ''
>>  | |GpuSoftmax [id C] ''
>>  |   |GpuFromHost [id D] ''
>>  |     |x [id E]
>>  |InplaceDimShuffle{x,x} [id F] ''
>>    |TensorConstant{2} [id G]
>>
>> Compiling
>>
>> HostFromGpu(gpuarray) [id A] ''   5
>>  |GpuElemwise{Mul}[(0, 1)] [id B] ''   4
>>    |GpuArrayConstant{[[ 2.]]} [id C]
>>    |InplaceGpuDimShuffle{0,1} [id D] ''   3
>>      |GpuDnnSoftmax{mode='channel', algo='accurate'} [id E] ''   2
>>        |GpuContiguous [id F] ''   1
>>          |InplaceGpuDimShuffle{0,1,x,x} [id G] ''   0
>>            | [id H]
>>
>> I'm looking for a good example with a GPU kernel.
>>
>> On Wednesday, July 12, 2017 at 9:56:08 AM UTC+2, Christopher Bourez wrote:
>>>
>>> I don't know what you mean by "not modifying" the source for GpuEye:
>>> - In this example, I'm importing an unmodified GpuEye op from Theano's 
>>> basic ops
>>> - If I use theano.tensor.eye, it does not use GpuEye
>>>
>>> Also, are you sure this test
>>>
>>> https://github.com/Theano/Theano/blob/2625464534147fd70da60a3a3ddcb63ed8e5a416/theano/gpuarray/tests/test_basic_ops.py#L401
>>> works well?
>>>
>>> On Wednesday, July 12, 2017 at 2:48:44 AM UTC+2, Pascal Lamblin wrote:

 Does it work if you do not modify the source for GpuEye at all?
 If it does, then maybe sharing your new source would get you more help.

 On Tuesday, July 11, 2017 at 12:12:03 PM UTC-4, Christopher Bourez 
 wrote:
>
> Hi, 
>
> I'm trying to implement a simple GPU op, but it always gives me a 
> segmentation fault during compilation, with no other message.
>
> For example:
> import theano
> from theano.gpuarray.basic_ops import GpuEye
>
> x = theano.tensor.iscalar('x')
> y = theano.tensor.iscalar('y')
> z = GpuEye(dtype='float32', context_name=None)(
>     x, y, theano.tensor.constant(0))
>
> theano.printing.debugprint(z)
> print("Compiling")
> f = theano.function([x, y], z)
> theano.printing.debugprint(f)
> print("Results")
> print(f(3, 3))
>
> I've also tried with the softmax gpu function. Is there something I'm 
> missing?
>
> I copied the file, created a completely new op, and the segmentation 
> fault appears when I define a Kernel in the op's gpu_kernels() method.
>
> Thanks a lot for your help
>




[theano-users] Re: Implementing a GPU op

2017-07-12 Thread Christopher Bourez
A second thing that is not clear to me in the Theano documentation is 
how you specify a C implementation and a GPU implementation of the same 
custom op. Thank you
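
For reference, the pattern Theano's own source appears to follow for ops such 
as Eye/GpuEye is: the CPU op carries the perform()/c_code() implementation, a 
separate GPU op subclasses GpuKernelBase and returns its kernels from 
gpu_kernels(), and a registered "lifter" optimization rewrites the CPU node 
into the GPU node when a GPU context is active. A rough sketch modeled on 
theano/gpuarray/opt.py (an approximation of that file, not verbatim):

import theano.tensor as tensor
from theano.gpuarray.basic_ops import GpuEye
from theano.gpuarray.opt import register_opt, op_lifter

# Rewrite tensor.Eye nodes into GpuEye nodes during graph optimization;
# this is what links the CPU implementation to the GPU one.
@register_opt('fast_compile')
@op_lifter([tensor.Eye])
def local_gpua_eye(op, context_name, inputs, outputs):
    return GpuEye(dtype=op.dtype, context_name=context_name)

So the C and the GPU implementations live in two separate ops, connected only 
through such an optimization.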

On Wednesday, July 12, 2017 at 9:58:34 AM UTC+2, Christopher Bourez wrote:
>
> I've also tried to create an example with theano.gpuarray.nnet.GpuSoftmax, but 
> after compilation it got replaced by another implementation, GpuDnnSoftmax:
>
>
> Elemwise{mul,no_inplace} [id A] ''
>  |HostFromGpu(gpuarray) [id B] ''
>  | |GpuSoftmax [id C] ''
>  |   |GpuFromHost [id D] ''
>  |     |x [id E]
>  |InplaceDimShuffle{x,x} [id F] ''
>    |TensorConstant{2} [id G]
>
> Compiling
>
> HostFromGpu(gpuarray) [id A] ''   5
>  |GpuElemwise{Mul}[(0, 1)] [id B] ''   4
>    |GpuArrayConstant{[[ 2.]]} [id C]
>    |InplaceGpuDimShuffle{0,1} [id D] ''   3
>      |GpuDnnSoftmax{mode='channel', algo='accurate'} [id E] ''   2
>        |GpuContiguous [id F] ''   1
>          |InplaceGpuDimShuffle{0,1,x,x} [id G] ''   0
>            | [id H]
>
> I'm looking for a good example with a GPU kernel.
>
> On Wednesday, July 12, 2017 at 9:56:08 AM UTC+2, Christopher Bourez wrote:
>>
>> I don't know what you mean by "not modifying" the source for GpuEye:
>> - In this example, I'm importing an unmodified GpuEye op from Theano's 
>> basic ops
>> - If I use theano.tensor.eye, it does not use GpuEye
>>
>> Also, are you sure this test
>>
>> https://github.com/Theano/Theano/blob/2625464534147fd70da60a3a3ddcb63ed8e5a416/theano/gpuarray/tests/test_basic_ops.py#L401
>> works well?
>>
>> On Wednesday, July 12, 2017 at 2:48:44 AM UTC+2, Pascal Lamblin wrote:
>>>
>>> Does it work if you do not modify the source for GpuEye at all?
>>> If it does, then maybe sharing your new source would get you more help.
>>>
>>> On Tuesday, July 11, 2017 at 12:12:03 PM UTC-4, Christopher Bourez wrote:

 Hi, 

 I'm trying to implement a simple GPU op, but it always gives me a 
 segmentation fault during compilation, with no other message.

 For example:
 import theano
 from theano.gpuarray.basic_ops import GpuEye

 x = theano.tensor.iscalar('x')
 y = theano.tensor.iscalar('y')
 z = GpuEye(dtype='float32', context_name=None)(
     x, y, theano.tensor.constant(0))

 theano.printing.debugprint(z)
 print("Compiling")
 f = theano.function([x, y], z)
 theano.printing.debugprint(f)
 print("Results")
 print(f(3, 3))

 I've also tried with the softmax gpu function. Is there something I'm 
 missing?

 I copied the file, created a completely new op, and the segmentation 
 fault appears when I define a Kernel in the op's gpu_kernels() method.

 Thanks a lot for your help

>>>



[theano-users] Re: Implementing a GPU op

2017-07-12 Thread Christopher Bourez
I don't know what you mean by "not modifying" the source for GpuEye:
- In this example, I'm importing an unmodified GpuEye op from Theano's 
basic ops
- If I use theano.tensor.eye, it does not use GpuEye

Also, are you sure this test
https://github.com/Theano/Theano/blob/2625464534147fd70da60a3a3ddcb63ed8e5a416/theano/gpuarray/tests/test_basic_ops.py#L401
works well?

On Wednesday, July 12, 2017 at 2:48:44 AM UTC+2, Pascal Lamblin wrote:
>
> Does it work if you do not modify the source for GpuEye at all?
> If it does, then maybe sharing your new source would get you more help.
>
> On Tuesday, July 11, 2017 at 12:12:03 PM UTC-4, Christopher Bourez wrote:
>>
>> Hi, 
>>
>> I'm trying to implement a simple GPU op, but it always gives me a 
>> segmentation fault during compilation, with no other message.
>>
>> For example:
>> import theano
>> from theano.gpuarray.basic_ops import GpuEye
>>
>> x = theano.tensor.iscalar('x')
>> y = theano.tensor.iscalar('y')
>> z = GpuEye(dtype='float32', context_name=None)(
>>     x, y, theano.tensor.constant(0))
>>
>> theano.printing.debugprint(z)
>> print("Compiling")
>> f = theano.function([x, y], z)
>> theano.printing.debugprint(f)
>> print("Results")
>> print(f(3, 3))
>>
>> I've also tried with the softmax gpu function. Is there something I'm 
>> missing?
>>
>> I copied the file, created a completely new op, and the segmentation 
>> fault appears when I define a Kernel in the op's gpu_kernels() method.
>>
>> Thanks a lot for your help
>>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.