To answer the original questions: Open MPI will look at taking advantage of CUDA RDMA when it is available. Obviously, work needs to be done to figure out
the best way to integrate it into the library, much like there are a variety of
protocols under the hood to support host transfer of data
Dear Open MPI developers,
I'd like to add my 2 cents that this would be a very desirable feature
enhancement for me as well (and perhaps others).
Best regards,
Durga
On Tue, Aug 14, 2012 at 4:29 PM, Zbigniew Koza wrote:
> Hi,
>
> I've just found this information on nVidia's plans regarding enha
Hi,
I've just found this information on nVidia's plans regarding enhanced
support for MPI in their CUDA toolkit:
http://developer.nvidia.com/cuda/nvidia-gpudirect
The idea that two GPUs can talk to each other via network cards without
the CPU as a middleman looks very promising.
This technology
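For reference, whether a given Open MPI build can already handle CUDA device buffers (a prerequisite for benefiting from GPUDirect) can be checked with ompi_info; the parameter name below is the one the Open MPI FAQ documents for this purpose:

```shell
# Check whether this Open MPI installation was built with CUDA support.
# On a CUDA-aware build the matching line ends in ":value:true".
ompi_info --parsable --all | grep mpi_built_with_cuda_support:value
```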
Hi,
I meant to say that I have written an MPI code in which I have specified
only the communication between nodes, and I don't know whether I can run the
program as per my requirement, i.e. to use the cores present in my node (all
4 cores). So my doubt is: do I need to include pthreads, or will the program
w
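A minimal sketch of one common answer to the question above, assuming two nodes with 4 cores each (the hostfile name, node names, and program name are illustrative): plain MPI ranks can occupy all the cores without pthreads, by giving each node 4 slots in the hostfile and asking mpirun for that many processes:

```shell
# hosts: hypothetical hostfile giving each node one slot per core
cat > hosts <<'EOF'
node1 slots=4
node2 slots=4
EOF

# Launch 8 ranks total, 4 per node (one per core). No pthreads are needed:
# each rank is an ordinary OS process that the scheduler places on its own core.
mpirun -np 8 --hostfile hosts ./my_mpi_program
```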
On Aug 14, 2012, at 7:55 AM, seshendra seshu wrote:
> I haven't still changed my code to run when threading is needed (presently
> working).
I'm afraid I can't parse that sentence; I don't know what you mean.
> I have doubt that when i calculate the MPI ranks using the MPI command it
> give
Hi Tom,
Thank you.
I haven't still changed my code to run when threading is needed (presently
working).
I have a doubt: when I calculate the MPI ranks using the MPI command, it
gives only the nodes which have been given in a host file.
But how can I calculate the MPI ranks as you have told, i.e. N=H(