Hi, Martin

The environment variable:

MXM_RDMA_PORTS=device:port

is what you're looking for. You can specify a device/port pair on your OMPI
command line like:

mpirun -np 2 ... -x MXM_RDMA_PORTS=mlx4_0:1 ...
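
If you want to pair it explicitly with the yalla PML, something like the
following should work (just a sketch; the process count and the ./your_app
executable are placeholders):

mpirun -np 2 --mca pml yalla -x MXM_RDMA_PORTS=mlx4_0:1 ./your_app

The same -x option also works if you go through the MXM MTL instead
(--mca pml cm --mca mtl mxm).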


Best,

Josh

On Fri, Aug 12, 2016 at 5:03 PM, Audet, Martin <martin.au...@cnrc-nrc.gc.ca>
wrote:

> Hi OMPI_Users && OMPI_Developers,
>
> Is there an equivalent to the MCA parameter btl_openib_include_if when
> using MXM over InfiniBand (e.g. either (pml=cm  mtl=mxm) or (pml=yalla))?
>
> I ask this question because I’m working on a cluster where LXC containers
> are used on the compute nodes (with SR-IOV, I think), and lstopo reports
> multiple mlx4 interfaces (e.g. mlx4_0, mlx4_1, …, mlx4_16) even though only
> a single physical Mellanox ConnectX-3 HCA is present per node.
>
> I found that when I use the plain openib btl (e.g. (pml=ob1  btl=openib)),
> it is much faster if I specify the MCA parameter
> btl_openib_include_if=mlx4_0 to force Open MPI to use a single interface.
> With that setting the latency is lower and the bandwidth higher. I guess
> that otherwise Open MPI gets confused trying to use all of the “virtual”
> interfaces at once.
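>
> For reference, the kind of command line I compare against looks roughly
> like this (the process count and executable name are just placeholders):
>
> mpirun -np 2 --mca pml ob1 --mca btl openib,self,sm --mca btl_openib_include_if mlx4_0 ./my_benchmark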
>
> However, we all know that MXM is better than plain openib since it allows
> the HCAs to perform message matching, transfer messages in the background,
> and provide communication progress.
>
> So in this case, is there a way to use only mlx4_0?
>
> I mean when using the mxm mtl (pml=cm  mtl=mxm) or, preferably, using it
> more directly via the yalla pml (pml=yalla).
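>
> That is, something along these lines (a sketch only; ./my_app is a
> placeholder), but restricted to mlx4_0:
>
> mpirun -np 2 --mca pml cm --mca mtl mxm ./my_app
> mpirun -np 2 --mca pml yalla ./my_app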
>
> Note that for now I’m using Open MPI 1.10.3, which I compiled myself, but I
> could switch to Open MPI 2.0 if necessary.
>
> Thanks,
>
> Martin Audet
>
>
_______________________________________________
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users
