Answers below...
>-Original Message-
>From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Lev Givon
>Sent: Thursday, May 21, 2015 2:19 PM
>To: Open MPI Users
>Subject: Re: [OMPI users] cuIpcOpenMemHandle failure when using
>OpenMPI 1.8.5 with CUDA 7.0 and Multi-Process Service
>
>Received from Rolf vandeVaart on Wed, May 20, 2015 at 07:48:15AM EDT:
>
>(snip)
>
>> I see that you mentioned you are starting 4 MPS daemons. Are you following
>> the instructions here?
>>
>> http://cudamusing.blogspot.de/2013/07/enabling-cuda-multi-process-service-mps.html
>
>Yes - also
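For readers following along, the per-GPU setup described in that blog post looks roughly like the sketch below. The device count (4) and the directory paths are illustrative assumptions, not values taken from this thread.

```shell
# Sketch of starting one MPS control daemon per GPU, following the
# approach in the linked blog post. Device count and paths are
# illustrative assumptions, not values from this thread.
for i in 0 1 2 3; do
    export CUDA_VISIBLE_DEVICES=$i
    # Each daemon needs its own pipe and log directories.
    export CUDA_MPS_PIPE_DIRECTORY=/tmp/mps-pipe-$i
    export CUDA_MPS_LOG_DIRECTORY=/tmp/mps-log-$i
    mkdir -p "$CUDA_MPS_PIPE_DIRECTORY" "$CUDA_MPS_LOG_DIRECTORY"
    nvidia-cuda-mps-control -d   # start the daemon in daemon mode
done
```

Client processes must then set CUDA_MPS_PIPE_DIRECTORY to the pipe directory of the daemon serving their assigned GPU before initializing CUDA.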
>-Original Message-
>From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Lev Givon
>Sent: Tuesday, May 19, 2015 10:25 PM
>To: Open MPI Users
>Subject: Re: [OMPI users] cuIpcOpenMemHandle failure when using
>OpenMPI 1.8.5 with CUDA 7.0 and Multi-Process Service
>
>-Original Message-
>From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Lev Givon
>Sent: Tuesday, May 19, 2015 6:30 PM
>To: us...@open-mpi.org
>Subject: [OMPI users] cuIpcOpenMemHandle failure when using OpenMPI
>1.8.5 with CUDA 7.0 and Multi-Process Service
>
>I'm encountering intermittent errors while trying to use the Multi-Process
>Service with CUDA 7.0 for improving concurrent access to a Kepler K20Xm GPU by
>multiple MPI processes that perform GPU-to-GPU communication with each other
>(i.e., GPU pointers are passed to the MPI transmission primitives).