Hi Simon,
thank you for the quick reply. I'll try the splitting strategy.
Best regards,
Vincent
On 15.10.19 17:53, Simon Rit wrote:
Hi,
No. It would be quite a challenge to implement and we have no
resources on this topic. My first attempt would be to use
ASTRA <http://www.astra-toolbox.com/> from RTK. RTK only automagically
selects the "best" GPU, see here
<https://github.com/SimonRit/RTK/blob/master/utilities/ITKCudaCommon/src/itkCudaContextManager.cxx#L67>.
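(For illustration only: a minimal CPU-side sketch of what a "best GPU" heuristic could look like. `DeviceInfo` and `PickBestDevice` are hypothetical names, and choosing by largest global memory is an assumption, not necessarily the criterion the linked RTK code actually uses.)

```cpp
#include <cstddef>
#include <vector>

// Hypothetical device descriptor: device index and global memory in bytes.
struct DeviceInfo {
  int id;
  std::size_t globalMemory;
};

// Assumed heuristic: pick the device with the most global memory.
// Returns -1 if no devices are available.
int PickBestDevice(const std::vector<DeviceInfo>& devices) {
  int best = -1;
  std::size_t bestMem = 0;
  for (const auto& d : devices) {
    if (d.globalMemory > bestMem) {
      bestMem = d.globalMemory;
      best = d.id;
    }
  }
  return best;
}
```

In a real multi-GPU setup the per-device information would come from the CUDA runtime (e.g. `cudaGetDeviceCount` / `cudaGetDeviceProperties`), but the selection logic itself is plain C++ as above.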
For FDK, I think it would be easy to split the volume and ask each GPU
to reconstruct a specific part of it (but I have never done it, and
RTK would need to allow parameterization of the device, which it
currently doesn't).
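(For illustration only: one way to split the volume as suggested is to partition its axial slices into contiguous slabs, one per GPU. `SplitSlices` is a hypothetical helper, not an RTK API; each slab's reconstruction would then be dispatched to one device, which RTK does not currently allow.)

```cpp
#include <utility>
#include <vector>

// Hypothetical helper: split `numSlices` axial slices into `numGpus`
// contiguous slabs so each GPU can reconstruct its own part of the
// volume independently. Returns [first, last) slice ranges; the first
// (numSlices % numGpus) slabs receive one extra slice.
std::vector<std::pair<int, int>> SplitSlices(int numSlices, int numGpus) {
  std::vector<std::pair<int, int>> slabs;
  const int base = numSlices / numGpus;
  const int extra = numSlices % numGpus;
  int first = 0;
  for (int g = 0; g < numGpus; ++g) {
    const int count = base + (g < extra ? 1 : 0);
    slabs.emplace_back(first, first + count);
    first += count;
  }
  return slabs;
}
```

For example, 10 slices over 3 GPUs would yield the slabs [0,4), [4,7) and [7,10); the per-GPU dispatch (e.g. `cudaSetDevice` plus one reconstruction per slab) is omitted here.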
Note that we don't use the unified memory framework.
Simon
On Tue, Oct 15, 2019 at 5:43 PM vincent <[email protected]
<mailto:[email protected]>> wrote:
Hello everyone,
I was wondering whether RTK automagically spreads the workload over
several GPUs when available on a machine? I tried to find the answer
myself, but so far the only information I could gather is that:
- CUDA provides a unified memory framework that is supposed to simplify
memory management,
- the class itkCudaUtil has members that identify all the GPUs present
on the computer.
I had a look at the other itkCuda*** classes but found nothing that
could help me understand whether multiple GPUs are managed by RTK.
Would someone be so kind as to help me find an answer?
Thank you very much in advance,
best regards,
Vincent
_______________________________________________
Rtk-users mailing list
[email protected] <mailto:[email protected]>
https://public.kitware.com/mailman/listinfo/rtk-users