Hello everyone,

I was wondering whether RTK automatically spreads the workload over several GPUs when more than one is available on a machine. I tried to find the answer by myself, but so far the only information I could gather is that:

- CUDA provides a unified memory framework intended to simplify memory management,

- the class itkCudaUtil has members that identify all the GPUs present on the computer.

I had a look at the other itkCuda*** classes but found nothing that could help me understand whether multiple GPUs are managed by RTK.
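For what it is worth, the device-identification members I found in itkCudaUtil presumably wrap the standard CUDA runtime enumeration calls. A minimal sketch of what that enumeration looks like in plain CUDA (this is not RTK's API, just the underlying runtime calls):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaError_t err = cudaGetDeviceCount(&deviceCount);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceCount failed: %s\n",
                cudaGetErrorString(err));
        return 1;
    }

    // List every GPU visible to the runtime, as itkCudaUtil
    // would have to do before any multi-GPU work distribution.
    for (int i = 0; i < deviceCount; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s (%zu MB global memory)\n",
               i, prop.name, prop.totalGlobalMem / (1024 * 1024));
    }
    return 0;
}
```

Enumerating devices this way only tells you what is present; distributing a reconstruction across them would still require explicit cudaSetDevice calls and work partitioning somewhere in the pipeline, which is exactly what I could not find in RTK.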

Would someone be so kind as to help me find an answer?

Thank you very much in advance,

Best regards,

Vincent

_______________________________________________
Rtk-users mailing list
[email protected]
https://public.kitware.com/mailman/listinfo/rtk-users
