List-Post: users@lists.open-mpi.org
Date: Tue, 3 Mar 2015 20:14:45 +
To: Open MPI Users <us...@open-mpi.org<mailto:us...@open-mpi.org>>
Subject: Re: [OMPI users] GPUDirect with OpenMPI
Hi Rob:
Sorry for the slow reply, but it took me a while to figure this out. It turns
out tha
Sent: Wednesday, February 11, 2015 3:50 PM
To: Open MPI Users
Subject: Re: [OMPI users] GPUDirect with OpenMPI
Let me try to reproduce this. This should not have anything to do with GPU
Direct RDMA. However, to eliminate it, you could run with:
--mca btl_openib_want_cuda_gdr 0.
Rolf
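For reference, the MCA parameter Rolf mentions is passed on the mpirun command line. A sketch of the suggested run (the host names and executable name are placeholders, not from the thread):

```shell
# Launch the 4-rank test across 2 nodes with GPU Direct RDMA disabled
# via the openib BTL parameter suggested above.
# "node1,node2" and "./gpudirect_test" are placeholder names for the
# actual hosts and the Fortran test binary.
mpirun -np 4 -host node1,node2 \
    --mca btl_openib_want_cuda_gdr 0 \
    ./gpudirect_test
```

If the failure disappears with this flag set, that would point at the GPU Direct RDMA path; if it persists, the problem lies elsewhere in the CUDA-aware transfer path.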
From: users
To: us...@open-mpi.org
Subject: [OMPI users] GPUDirect with OpenMPI
Hi,
I built OpenMPI 1.8.3 using PGI 14.7 and enabled CUDA support for CUDA 6.0. I
have a Fortran test code that tests GPUDirect and have included it here. When
I run it across 2 nodes using 4 MPI procs, sometimes it fails with incorrect
results. Specifically, sometimes rank 1 does not