Date: Mar 2015 20:14:45
To: Open MPI Users <us...@open-mpi.org>
Subject: Re: [OMPI users] GPUDirect with OpenMPI
Hi Rob:
Sorry for the slow reply, but it took me a while to figure this out. It turns
out that this issue had to do with how some of the memory within the smcuda BTL
was being [...]
Sent: Wednesday, February 11, 2015 3:50 PM
To: Open MPI Users
Subject: Re: [OMPI users] GPUDirect with OpenMPI
Let me try to reproduce this. This should not have anything to do with GPU
Direct RDMA. However, to rule it out, you could run with:
--mca btl_openib_want_cuda_gdr 0.
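For example (only a sketch; the process count and executable name below are placeholders, not taken from your report):

  mpirun -np 4 --mca btl_openib_want_cuda_gdr 0 ./your_gpudirect_test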
Rolf
From: users <us...@open-mpi.org>
Subject: [OMPI users] GPUDirect with OpenMPI
Hi,
I built OpenMPI 1.8.3 using PGI 14.7 and enabled CUDA support for CUDA 6.0. I
have a Fortran test code that tests GPUDirect and have included it here. When
I run it across 2 nodes using 4 MPI procs, sometimes it fails with incorrect
results. Specifically, sometimes rank 1 does not receive [...]
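For reference, here is a minimal sketch of the kind of CUDA-aware exchange being described (this is not the attached test code; the message size, tag, and the simple rank 0 to rank 1 exchange are illustrative assumptions):

program gdr_sketch
  use mpi
  use cudafor
  implicit none
  integer, parameter :: n = 1024
  integer :: rank, nprocs, ierr
  integer :: stat(MPI_STATUS_SIZE)
  real(8), device, allocatable :: d_buf(:)   ! device (GPU) buffer passed directly to MPI
  real(8), allocatable :: h_check(:)         ! host copy used only for verification

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  allocate(d_buf(n), h_check(n))

  if (rank == 0) then
     ! Fill the device buffer and send it straight from GPU memory.
     d_buf = 1.0d0
     call MPI_Send(d_buf, n, MPI_DOUBLE_PRECISION, 1, 99, MPI_COMM_WORLD, ierr)
  else if (rank == 1) then
     ! Receive directly into GPU memory, then copy back to the host to verify.
     call MPI_Recv(d_buf, n, MPI_DOUBLE_PRECISION, 0, 99, MPI_COMM_WORLD, stat, ierr)
     h_check = d_buf
     if (any(h_check /= 1.0d0)) then
        print *, 'rank 1: received incorrect data'
     else
        print *, 'rank 1: received data OK'
     end if
  end if

  deallocate(d_buf, h_check)
  call MPI_Finalize(ierr)
end program gdr_sketch

Ranks other than 0 and 1 do nothing here; the attached test presumably exercises all four processes. Something like this would be built with the CUDA-aware Open MPI wrapper and PGI's CUDA Fortran flag, e.g. mpif90 -Mcuda gdr_sketch.f90.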