Hi,

The conversion from Image to CudaImage is not optimal. The way I'm doing it now is shown in an example in these few lines: <https://github.com/SimonRit/RTK/blob/master/examples/FirstReconstruction/FirstCudaReconstruction.py#L64-L70>. I am aware of the problem and discussed it on the ITK forum (<https://discourse.itk.org/t/shadowed-functions-in-gpuimage-or-cudaimage/1614>), but I don't have a better solution yet.

I'm not sure what you mean by explicitly transferring data from/to the GPU, but you can always work with itk::Image and do your own CUDA computations in the GenerateData of your ImageFilter if you don't like the CudaImage mechanism.

I hope this helps,
Simon
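For reference, the pattern in the linked example lines can be sketched roughly as follows (my reading of it; this assumes a 3D float image, a CUDA-enabled RTK/ITK build, and an existing itk.Image named `image`, and it cannot run without those):

```python
import itk

# 'image' is an existing itk.Image[itk.F, 3]. Build a CudaImage that
# shares the CPU pixel buffer instead of copying the voxel data.
cuda_image = itk.CudaImage[itk.F, 3].New()
cuda_image.SetPixelContainer(image.GetPixelContainer())  # reuse the CPU buffer
cuda_image.CopyInformation(image)                        # origin, spacing, direction
cuda_image.SetBufferedRegion(image.GetBufferedRegion())
cuda_image.SetRequestedRegion(image.GetRequestedRegion())
# The CPU-to-GPU transfer itself is then handled implicitly by the
# CudaDataManager the first time a Cuda filter needs the GPU buffer.
```

Note that only the metadata and the pixel-container pointer are copied here; the actual upload to GPU memory is deferred, which is the implicit behavior discussed below.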
On Mon, Jul 8, 2019 at 10:06 PM C S <[email protected]> wrote:
> Dear RTK users,
>
> I'm looking for a way to use existing ITK Images (either on the GPU or in
> RAM) when transferring data from/to the GPU. That is, not only re-using the
> Image object, but writing into the memory where its buffer is.
>
> Why: As I'm using the Python bindings, I guess this ties in with ITK
> wrapping the CudaImage type. In
> https://github.com/SimonRit/RTK/blob/master/utilities/ITKCudaCommon/include/itkCudaImage.h#L32
> I read that the memory management is done implicitly and the CudaImage can
> be used with CPU filters. However, when using the bindings, only
> rtk.BackProjectionImageFilter can be used with CudaImages. The other
> filters complain about not being wrapped for that type.
>
> That is why I want to explicitly transfer the data from/to the GPU, but
> preferably using the existing Images and buffers. I can't rely on RTK
> managing GPU memory implicitly.
>
> Thank you very much for your help!
> Clemens
_______________________________________________
Rtk-users mailing list
[email protected]
https://public.kitware.com/mailman/listinfo/rtk-users
