I am not aware of any issues.  Can you send me a test program so I can try it
out?
Which version of CUDA are you using?
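
For reference, here is a minimal sketch of the kind of reproducer you describe
further down (two receives, with the second one landing at an offset into a
cudaMallocHost() buffer). The message size, offset, and tags are illustrative
assumptions on my part, not your original code:

/* Hypothetical reproducer: rank 0 sends two messages; rank 1 receives the
 * first at the start of a cudaMallocHost()'d buffer and the second at an
 * offset into the same buffer. */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

#define N      4096   /* bytes per message (arbitrary) */
#define OFFSET 1024   /* offset of the second receive into the pinned buffer */

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        char *sendbuf = malloc(N);
        MPI_Send(sendbuf, N, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        MPI_Send(sendbuf, N, MPI_CHAR, 1, 1, MPI_COMM_WORLD);
        free(sendbuf);
    } else if (rank == 1) {
        char *recvbuf;
        /* pinned host buffer large enough for both receives */
        cudaMallocHost((void **)&recvbuf, N + OFFSET);
        MPI_Recv(recvbuf, N, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        /* second receive lands inside the buffer, not at its beginning */
        MPI_Recv(recvbuf + OFFSET, N, MPI_CHAR, 0, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("both receives completed\n");
        cudaFreeHost(recvbuf);
    }

    MPI_Finalize();
    return 0;
}

If something along these lines reproduces the hang on your side, please send
your actual version so I can run it here.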

Rolf

>-----Original Message-----
>From: devel-boun...@open-mpi.org [mailto:devel-boun...@open-mpi.org]
>On Behalf Of Sebastian Rinke
>Sent: Tuesday, January 17, 2012 8:50 AM
>To: Open MPI Developers
>Subject: [OMPI devel] GPUDirect v1 issues
>
>Dear all,
>
>I'm using GPUDirect v1 with Open MPI 1.4.3 and see blocking MPI_SEND/RECV
>calls hang forever.
>
>With two subsequent MPI_RECVs, the second call hangs if its receive buffer
>pointer points somewhere inside the buffer (previously allocated with
>cudaMallocHost()) rather than at its beginning.
>
>I tried the same with MVAPICH2 and did not see the problem.
>
>Does anybody know about issues with GPUDirect v1 using Open MPI?
>
>Thanks for your help,
>Sebastian