Hello, I am a bit confused about measuring time, so I need a little help.

I have code like this:

....
Rs_gpu=gpuarray.to_gpu(np.random.rand(numPointsRs*3).astype(np.float32))
Rp_gpu=gpuarray.to_gpu(np.random.rand(3).astype(np.float32))
....
start = drv.Event()
end = drv.Event()

mod =SourceModule("""
    __global__ void compute(float *Rs_mat, ...., float *Rp, ....)
""")

# get a handle to the kernel function
func = mod.get_function("compute")

start.record() # start timing

func(Rs_gpu, ..., Rp_gpu, ...)

end.record() # end timing

# calculate the run length
end.synchronize()
secs = start.time_till(end)*1e-3

#----- get data back from GPU-----
Rs=Rs_gpu.get()
Rp=Rp_gpu.get()


print "%s, %fsec, %s" % ('Time for Rs = ',secs,str(Rs))
print "%s, %fsec, %s" % ('Time for Rp = ',secs,str(Rp))     //here i am
computing the same thing!
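
For reference, here is a minimal self-contained version of the timing
pattern I am using. The kernel body, array sizes, and launch configuration
are placeholders I made up so the snippet runs on its own; my real kernel
does something different:

import numpy as np
import pycuda.autoinit                 # creates a context on the default device
import pycuda.driver as drv
import pycuda.gpuarray as gpuarray
from pycuda.compiler import SourceModule

numPointsRs = 1024                     # made-up size, just for the example

# dummy kernel: scales every element of Rs_mat by Rp[0], in place
mod = SourceModule("""
    __global__ void compute(float *Rs_mat, float *Rp)
    {
        int i = threadIdx.x + blockIdx.x * blockDim.x;
        Rs_mat[i] = Rs_mat[i] * Rp[0];
    }
""")
func = mod.get_function("compute")

Rs_gpu = gpuarray.to_gpu(np.random.rand(numPointsRs*3).astype(np.float32))
Rp_gpu = gpuarray.to_gpu(np.random.rand(3).astype(np.float32))

start = drv.Event()
end = drv.Event()

start.record()                         # enqueue "before" marker
func(Rs_gpu, Rp_gpu, block=(256, 1, 1), grid=(numPointsRs*3//256, 1))
end.record()                           # enqueue "after" marker
end.synchronize()                      # wait until the kernel and the event are done

secs = start.time_till(end)*1e-3       # time_till() returns milliseconds
print "kernel time: %f sec" % secs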
                

My questions are:

1) Is this the correct way to measure the GPU time?

2) How can I distinguish the timing results for Rs and for Rp (if that can be done)? A rough sketch of what I mean is below.
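
For question 2, the only approach I could think of is to split the work into
two kernels and give each one its own pair of events, roughly like the
self-contained sketch below. The kernel names and bodies are made-up
placeholders, and I am not sure this is the right way to go about it:

import numpy as np
import pycuda.autoinit
import pycuda.driver as drv
import pycuda.gpuarray as gpuarray
from pycuda.compiler import SourceModule

numPointsRs = 1024                     # made-up size

# made-up kernels: one produces Rs, the other produces Rp
mod = SourceModule("""
    __global__ void compute_Rs(float *Rs_mat)
    {
        int i = threadIdx.x + blockIdx.x * blockDim.x;
        Rs_mat[i] = Rs_mat[i] * 2.0f;
    }

    __global__ void compute_Rp(float *Rp)
    {
        int i = threadIdx.x;
        if (i < 3)
            Rp[i] = Rp[i] + 1.0f;
    }
""")
func_Rs = mod.get_function("compute_Rs")
func_Rp = mod.get_function("compute_Rp")

Rs_gpu = gpuarray.to_gpu(np.random.rand(numPointsRs*3).astype(np.float32))
Rp_gpu = gpuarray.to_gpu(np.random.rand(3).astype(np.float32))

# one event pair per kernel, so the two times can be reported separately
start_Rs, end_Rs = drv.Event(), drv.Event()
start_Rp, end_Rp = drv.Event(), drv.Event()

start_Rs.record()
func_Rs(Rs_gpu, block=(256, 1, 1), grid=(numPointsRs*3//256, 1))
end_Rs.record()
end_Rs.synchronize()
secs_Rs = start_Rs.time_till(end_Rs)*1e-3

start_Rp.record()
func_Rp(Rp_gpu, block=(32, 1, 1), grid=(1, 1))
end_Rp.record()
end_Rp.synchronize()
secs_Rp = start_Rp.time_till(end_Rp)*1e-3

print "Time for Rs = %f sec, %s" % (secs_Rs, str(Rs_gpu.get()))
print "Time for Rp = %f sec, %s" % (secs_Rp, str(Rp_gpu.get()))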

Thanks!




