Hi,

I have just started using PyCUDA for some GPU computing.

However, I found that transferring numpy arrays to the GPU costs a lot of time,
and so does compiling the source.

I am using SourceModule now. Suppose, for example, I have a file called
try.py containing a kernel function searching(float *arr). My questions are:

1) Every time I run try.py, the searching function is compiled once and then
cached until the script ends. Can I permanently save the compiled function
and load it back, so that I don't have to recompile it each time I run the
script?

2) Is there a way to make transferring data faster? I read the documentation;
would managed memory help with this?


Thanks a lot for help.

Best Regards,

Yuan Chen
_______________________________________________
PyCUDA mailing list
PyCUDA@tiker.net
https://lists.tiker.net/listinfo/pycuda
