[theano-users] Re: cudnn detected in cuda backend but not in gpuarray backend

2017-07-03 Thread Pascal Lamblin
The test actually ran on GPU, as evidenced by it printing "GpuElemwise".
The issue is that you are using a really old version of "gputest", which does 
not correctly detect the new back-end. Please use the latest version at 
http://deeplearning.net/software/theano/tutorial/using_gpu.html#testing-the-gpu
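To illustrate why the old check misfires: with the gpuarray back-end, GPU element-wise ops appear in the graph as "GpuElemwise", which is not an instance of tensor.Elemwise, so the updated tutorial relies on inspecting op type names instead. The helper below is an illustrative sketch of that idea, not the tutorial's exact code; the op names are taken from the toposort output pasted later in this thread.

```python
# Illustrative sketch (not the tutorial's exact code): decide CPU vs GPU
# from the op type names of a compiled Theano graph. With the gpuarray
# back-end, GPU ops carry names like "GpuElemwise" and "HostFromGpu".
def looks_like_gpu_run(op_type_names):
    """Return True if any op name suggests the graph ran on the GPU."""
    return any('Gpu' in name for name in op_type_names)

# Names from the toposort output pasted in this thread:
print(looks_like_gpu_run(['GpuElemwise', 'HostFromGpu']))  # True
print(looks_like_gpu_run(['Elemwise']))                    # False
```

In a real script you would feed it the names from the compiled function, e.g. `[type(node.op).__name__ for node in f.maker.fgraph.toposort()]`.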

On Sunday, July 2, 2017 at 8:58:41 AM UTC-4, Akshay Chaturvedi wrote:
>
> I was able to solve the issue by setting CPLUS_INCLUDE_PATH. Now the 
> output looks like this
> Using cuDNN version 5110 on context None
> Mapped name None to device cuda0: GeForce GTX 960 (:01:00.0)
> [GpuElemwise{exp,no_inplace}(), 
> HostFromGpu(gpuarray)(GpuElemwise{exp,no_inplace}.0)]
> Looping 1000 times took 0.196515 seconds
> Result is [ 1.23178029 1.61879349 1.52278066 ..., 2.20771813 2.29967761
> 1.62323296]
> Used the cpu
>
> The file is unable to detect that the program ran on the GPU, but that's a 
> separate issue. I am attaching the file gputest.py. 
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[theano-users] Re: cudnn detected in cuda backend but not in gpuarray backend

2017-07-02 Thread Akshay Chaturvedi
I was able to solve the issue by setting CPLUS_INCLUDE_PATH. Now the output 
looks like this
Using cuDNN version 5110 on context None
Mapped name None to device cuda0: GeForce GTX 960 (:01:00.0)
[GpuElemwise{exp,no_inplace}(), 
HostFromGpu(gpuarray)(GpuElemwise{exp,no_inplace}.0)]
Looping 1000 times took 0.196515 seconds
Result is [ 1.23178029 1.61879349 1.52278066 ..., 2.20771813 2.29967761
1.62323296]
Used the cpu

The file is unable to detect that the program ran on the GPU, but that's a 
separate issue. I am attaching the file gputest.py. 

from theano import function, config, shared, sandbox
import theano.tensor as T
import numpy
import time

vlen = 10 * 30 * 768  # 10 x #cores x # threads per core
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], T.exp(x))
print(f.maker.fgraph.toposort())
t0 = time.time()
for i in range(iters):
    r = f()
t1 = time.time()
print("Looping %d times took %f seconds" % (iters, t1 - t0))
print("Result is %s" % (r,))
# NOTE: this check predates the gpuarray back-end: GpuElemwise is not a
# T.Elemwise, so it prints 'Used the cpu' even when the graph ran on the GPU.
if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.fgraph.toposort()]):
    print('Used the cpu')
else:
    print('Used the gpu')

[theano-users] Re: cudnn detected in cuda backend but not in gpuarray backend

2017-07-02 Thread Akshay Chaturvedi
I also ran pygpu.test() setting device to cuda0. The only error it gives is 
GpuArrayException: ('malloc: Resource temporarily unavailable', 6). 
Otherwise, it runs fine.
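For reference, running the pygpu test suite against a specific device is typically done by setting the DEVICE environment variable before invoking the tests; the one-liner below is a sketch using the same "cuda0" device name as above (it requires pygpu/libgpuarray and a working CUDA setup, so it can only run on a machine with a GPU).

```shell
# Run the pygpu test suite against the first CUDA device.
DEVICE="cuda0" python -c "import pygpu; pygpu.test()"
```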
