Dear all,

I'm using GPUDirect v1 with Open MPI 1.4.3 and am seeing blocking 
MPI_Send/MPI_Recv calls hang forever.

With two consecutive MPI_Recv calls, the second one hangs whenever its 
receive buffer pointer points somewhere inside the buffer (previously 
allocated with cudaMallocHost()) rather than at its beginning.
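
For reference, here is a minimal sketch of the pattern that hangs for 
us. The message size, tags, and ranks below are placeholders for 
illustration, not our actual code:

#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    int rank;
    char *buf;
    const int n = 1024;  /* bytes per message; arbitrary for this sketch */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* pinned host buffer, large enough for both messages */
    cudaMallocHost((void **)&buf, 2 * n);

    if (rank == 0) {
        MPI_Send(buf,     n, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
        MPI_Send(buf + n, n, MPI_BYTE, 1, 1, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* first recv at the start of the pinned buffer: works */
        MPI_Recv(buf,     n, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        /* second recv at an offset into the same pinned buffer:
           this is the call that never returns for us */
        MPI_Recv(buf + n, n, MPI_BYTE, 0, 1, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    cudaFreeHost(buf);
    MPI_Finalize();
    return 0;
}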

I tried the same with MVAPICH2 and did not see the problem.

Does anybody know about issues with GPUDirect v1 using Open MPI?

Thanks for your help,
Sebastian
