From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Fei Mao
Sent: Wednesday, June 17, 2015 1:48 PM
To: Open MPI Users
Subject: Re: [OMPI users] CUDA-aware MPI_Reduce problem in Openmpi 1.8.5

Hi Rolf,

Thank you very much for clarifying the problem. Is there any plan to support
GPU RDMA for reduction in the future?
> MPI_Reduce does not take advantage of CUDA IPC or GPU Direct RDMA
> in the reduction.
> Rolf
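For concreteness, here is a minimal sketch of the pattern under discussion (illustrative only, not code from this thread): device pointers passed directly to MPI_Reduce, assuming a CUDA-aware Open MPI build and one GPU per rank. The call is valid, but per the note above, the 1.8 series stages the reduction arithmetic through host memory, so CUDA IPC and GPU Direct RDMA do not accelerate this path.

#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 1 << 20;                     /* 1 Mi doubles per rank */
    double *h = (double *)malloc(n * sizeof(double));
    for (int i = 0; i < n; i++)
        h[i] = (double)rank;

    double *d_send, *d_recv;
    cudaMalloc((void **)&d_send, n * sizeof(double));
    cudaMalloc((void **)&d_recv, n * sizeof(double));
    cudaMemcpy(d_send, h, n * sizeof(double), cudaMemcpyHostToDevice);

    /* Device buffers handed straight to MPI: accepted by a CUDA-aware
     * build, but in 1.8.x the reduction itself runs on the host, so the
     * library copies device-to-host, reduces, and copies back. */
    MPI_Reduce(d_send, d_recv, n, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        cudaMemcpy(h, d_recv, n * sizeof(double), cudaMemcpyDeviceToHost);
        /* every element should hold the sum of ranks 0..size-1 */
        printf("result[0] = %.0f (expected %.0f)\n",
               h[0], (double)size * (size - 1) / 2.0);
    }

    free(h);
    cudaFree(d_send);
    cudaFree(d_recv);
    MPI_Finalize();
    return 0;
}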
From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Fei Mao
Sent: Wednesday, June 17, 2015 1:08 PM
To: us...@open-mpi.org
Subject: [OMPI users] CUDA-aware MPI_Reduce problem in Openmpi 1.8.5
Hi there,

I am doing benchmarks on a GPU cluster with two CPU sockets and 4 K80 GPUs per
node. Two K80s are attached to CPU socket 0 and the other two to socket 1. An
IB ConnectX-3 (FDR) HCA is also under socket 1. We are using Linux's OFED, so I
know there is no way to do GPU RDMA for inter-node communication...
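As a footnote on the topology: with 4 K80s per node, each rank typically has to be pinned to its own GPU (each K80 board actually enumerates as two CUDA devices). Below is a sketch of one common way to do this; it assumes Open MPI's OMPI_COMM_WORLD_LOCAL_RANK environment variable is set by the launcher, and the round-robin mapping policy is illustrative, not taken from the original benchmark.

#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    /* Pick the device before MPI_Init so that any CUDA context the
     * CUDA-aware library creates lands on the GPU we intend to use. */
    const char *lr = getenv("OMPI_COMM_WORLD_LOCAL_RANK");
    int local_rank = lr ? atoi(lr) : 0;

    int ndev = 0;
    cudaGetDeviceCount(&ndev);
    int dev = (ndev > 0) ? local_rank % ndev : 0;   /* simple round-robin */
    cudaSetDevice(dev);

    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("rank %d (local rank %d) -> CUDA device %d of %d\n",
           rank, local_rank, dev, ndev);

    /* ... benchmark body (e.g. an MPI_Reduce timing loop) goes here ... */

    MPI_Finalize();
    return 0;
}

In practice one would also place ranks near their GPU's socket, e.g. with mpirun's --map-by and --bind-to options, which matters here since the HCA hangs off socket 1.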