Hi Simon,

> I'm not sure I understand but have you tried grafting the Image or
> CudaImage to an existing itk::Image (the Graft function)?

I tried that, but when I call itk.GetArrayFromImage(cuda_img) on the grafted image (cpu_img.Graft(cuda_img)), I get the error ```ValueError: PyMemoryView_FromBuffer(): info->buf must not be NULL``` from within ITK (or its Python bindings).
> Again, I'm not sure I understand but you should be able to graft a
> CudaImage to another CudaImage.

If anything, I'd like to graft an Image into a CudaImage. When I try something like `cuda_img.Graft(cpu_img)`, I get a TypeError. If this and the grafting above worked (including the array view), that would be exactly my initial wish.

> You can always ask explicit transfers by calling the functions of the data
> manager (accessible via CudaImage::GetCudaDataManager())

I assume you mean manager.UpdateCPUBuffer()? When I run that, the CPU image I used to create the GPU image (following <https://github.com/SimonRit/RTK/blob/master/examples/FirstReconstruction/FirstCudaReconstruction.py#L64-L70>) is not updated.

My scenario is this: I pass a numpy array as the volume to be forward projected. I get an ImageView from that array, set the origin and spacing of that image, and transfer it to the GPU via your method <https://github.com/SimonRit/RTK/blob/master/examples/FirstReconstruction/FirstCudaReconstruction.py#L64-L70>. For the output projections, I use an ImageView of a numpy.zeros array with matching shape, spacing, and origin, and transfer it to the GPU the same way. I then run the CudaForwardProjection filter. Now I'd like to have the projection data on the CPU. Unfortunately, none of the suggested methods worked for me other than using an itk.ImageDuplicator on the CudaImage :(

Sorry for the lengthy mail.

Best
Clemens

>> Best
>> Clemens
>>
>> On Mon, Jul 8, 2019 at 4:20 PM Simon Rit <[email protected]> wrote:
>>
>>> Hi,
>>> Conversion from Image to CudaImage is not optimal. The way I'm doing it
>>> now is shown in an example in these few lines
>>> <https://github.com/SimonRit/RTK/blob/master/examples/FirstReconstruction/FirstCudaReconstruction.py#L64-L70>.
>>> I am aware of the problem and discussed it on the ITK forum
>>> <https://discourse.itk.org/t/shadowed-functions-in-gpuimage-or-cudaimage/1614>
>>> but I don't have a better solution yet.
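[Editor's note: the reason the itk.ImageDuplicator workaround above is unsatisfying is the difference between a shared buffer (a view) and an independent copy. The following is a minimal numpy-only sketch of that distinction; it does not touch itk, rtk, or the CudaImage machinery, and the arrays are stand-ins for the projection data.]

```python
import numpy as np

# Stand-in for the numpy.zeros array used as the projection output above.
volume = np.zeros((4, 8, 8), dtype=np.float32)

# A view shares the underlying buffer, which is the behavior expected
# from an ImageView: writes through one side are visible through the other.
view = volume.view()
view[0, 0, 0] = 1.0
assert volume[0, 0, 0] == 1.0  # the write is visible in the original array

# A copy (analogous to what itk.ImageDuplicator produces) owns its own
# buffer, so writes to it never reach the original array.
copy = volume.copy()
copy[1, 0, 0] = 2.0
assert volume[1, 0, 0] == 0.0  # the original buffer is untouched
```

This is why duplicating the CudaImage works (the duplicate gets a fresh, valid CPU buffer) but defeats the goal of writing into the existing numpy buffer.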
>>> I'm not sure what you mean by explicitly transferring data from/to GPU,
>>> but I guess you can always work with itk::Image and do your own CUDA
>>> computations in the GenerateData of the ImageFilter if you don't like the
>>> CudaImage mechanism.
>>> I hope this helps,
>>> Simon
>>>
>>> On Mon, Jul 8, 2019 at 10:06 PM C S <[email protected]> wrote:
>>>
>>>> Dear RTK users,
>>>>
>>>> I'm looking for a way to use existing ITK Images (either on GPU or in
>>>> RAM) when transferring data from/to the GPU. That is, not only re-using
>>>> the Image object, but writing into the memory where its buffer is.
>>>>
>>>> Why: As I'm using the Python bindings, I guess this ties in with ITK
>>>> wrapping the CudaImage type. In
>>>> https://github.com/SimonRit/RTK/blob/master/utilities/ITKCudaCommon/include/itkCudaImage.h#L32
>>>> I read that the memory management is done implicitly and the CudaImage
>>>> can be used with CPU filters. However, when using the bindings,
>>>> only rtk.BackProjectionImageFilter can be used with CudaImages. The other
>>>> filters complain about not being wrapped for that type.
>>>>
>>>> That is why I want to explicitly transfer the data from/to the GPU, but
>>>> preferably using the existing Images and buffers. I can't rely on RTK
>>>> managing GPU memory implicitly.
>>>>
>>>> Thank you very much for your help!
>>>> Clemens
>>>>
>>>> _______________________________________________
>>>> Rtk-users mailing list
>>>> [email protected]
>>>> https://public.kitware.com/mailman/listinfo/rtk-users
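[Editor's note: the explicit transfer discussed in the thread (CudaImage::GetCudaDataManager() / UpdateCPUBuffer()) can be pictured with a toy, pure-Python model of dirty-flag synchronization between two buffers. Every name below is hypothetical; this is a conceptual sketch of the mechanism, not RTK's actual implementation.]

```python
class ToyDataManager:
    """Hypothetical stand-in for a CudaImage's data manager: two buffers
    with dirty flags, synchronized on explicit request."""

    def __init__(self, cpu_buffer):
        self.cpu = list(cpu_buffer)   # stand-in for host memory
        self.gpu = list(cpu_buffer)   # stand-in for device memory
        self.gpu_dirty = False        # True when the GPU holds newer data

    def write_gpu(self, i, value):
        # e.g. a CUDA filter writing its output on the device;
        # the CPU copy is now stale
        self.gpu[i] = value
        self.gpu_dirty = True

    def update_cpu_buffer(self):
        # analogous to an UpdateCPUBuffer() call:
        # copy device -> host only if the device side is newer
        if self.gpu_dirty:
            self.cpu = list(self.gpu)
            self.gpu_dirty = False


manager = ToyDataManager([0.0, 0.0, 0.0])
manager.write_gpu(1, 42.0)    # device-side result, host copy is stale
manager.update_cpu_buffer()   # explicit transfer back to the host
```

In this toy model the host buffer is updated in place, which is the behavior Clemens reports not observing on the real CPU image; whether the real data manager writes back into the original image's buffer or into a separate one is exactly the open question of the thread.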
