Joseph,

There is no specific case. We are working on supporting the use of OpenMPI with our software, in addition to Intel MPI. With Intel MPI, we find that using the I_MPI_TCP_NETMASK or I_MPI_NETMASK environment variables is useful in many cases in which the job hosts have multiple network interfaces.
I tried to use btl_tcp_if_include and btl_tcp_if_exclude, but neither seemed to have any effect. I also noticed that these options do not appear to be present in the source code. Although there were similar options for ptl in the source, my understanding is that ptl has been replaced by btl. I tested using version 3.1.2; the source that I examined was also version 3.1.2.

Charles Doland
charles.dol...@ansys.com<mailto:charles.dol...@ansys.com>
(408) 627-6621 [x6621]

________________________________
From: users <users-boun...@lists.open-mpi.org> on behalf of Joseph Schuchart via users <users@lists.open-mpi.org>
Sent: Tuesday, September 1, 2020 1:50 PM
To: users@lists.open-mpi.org <users@lists.open-mpi.org>
Cc: Joseph Schuchart <schuch...@hlrs.de>
Subject: Re: [OMPI users] Limiting IP addresses used by OpenMPI

[External Sender]

Charles,

What is the machine configuration you're running on? It seems that there are two MCA parameters for the tcp btl: btl_tcp_if_include and btl_tcp_if_exclude (see ompi_info for details). There may be other knobs I'm not aware of. If you're using UCX then my guess is that UCX has its own way to choose the network interface to be used...

Cheers
Joseph

On 9/1/20 9:35 PM, Charles Doland via users wrote:
> Yes. It is not unusual to have multiple network interfaces on each host
> of a cluster. Usually there is a preference to use only one network
> interface on each host due to higher speed or throughput, or other
> considerations. It would be useful to be able to explicitly specify the
> interface to use for cases in which the MPI code does not select the
> preferred interface.
>
> Charles Doland
> charles.dol...@ansys.com <mailto:charles.dol...@ansys.com>
> (408) 627-6621 [x6621]
> ------------------------------------------------------------------------
> *From:* users <users-boun...@lists.open-mpi.org> on behalf of John
> Hearns via users <users@lists.open-mpi.org>
> *Sent:* Tuesday, September 1, 2020 12:22 PM
> *To:* Open MPI Users <users@lists.open-mpi.org>
> *Cc:* John Hearns <hear...@gmail.com>
> *Subject:* Re: [OMPI users] Limiting IP addresses used by OpenMPI
>
> *[External Sender]*
>
> Charles, I recall using the I_MPI_NETMASK to choose which interface for
> MPI to use.
> I guess you are asking the same question for OpenMPI?
>
> On Tue, 1 Sep 2020 at 17:03, Charles Doland via users
> <users@lists.open-mpi.org <mailto:users@lists.open-mpi.org>> wrote:
>
>     Is there a way to limit the IP addresses or network interfaces used
>     for communication by OpenMPI? I am looking for something similar to
>     the I_MPI_TCP_NETMASK or I_MPI_NETMASK environment variables for
>     Intel MPI.
>
>     The OpenMPI documentation mentions the btl_tcp_if_include
>     and btl_tcp_if_exclude MCA options. These do not appear to be
>     present, at least in OpenMPI v3.1.2. Is there another way to do
>     this? Or are these options supported in a different version?
>
>     Charles Doland
>     charles.dol...@ansys.com <mailto:charles.dol...@ansys.com>
>     (408) 627-6621 [x6621]
>
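[For readers finding this thread later: the MCA parameters discussed above are passed to mpirun rather than exported like Intel MPI's I_MPI_NETMASK. A sketch of the usual invocations, assuming a hypothetical program ./my_app and an interface named eth0 (substitute your own); whether the parameters take effect also depends on the tcp BTL actually being selected, since a UCX or other high-speed transport bypasses them, as Joseph noted.]

```shell
# Confirm the parameters exist in this build; depending on the build,
# ompi_info may only list them at a higher verbosity level:
ompi_info --param btl tcp --level 9 | grep tcp_if

# Restrict the tcp BTL to one interface by name:
mpirun --mca btl_tcp_if_include eth0 -np 4 ./my_app

# Or by CIDR subnet, closest in spirit to I_MPI_NETMASK:
mpirun --mca btl_tcp_if_include 192.168.1.0/24 -np 4 ./my_app

# Any MCA parameter can also be set through the environment
# by prefixing its name with OMPI_MCA_:
export OMPI_MCA_btl_tcp_if_include=eth0
mpirun -np 4 ./my_app
```

Note that btl_tcp_if_include and btl_tcp_if_exclude are mutually exclusive; specify one or the other, not both.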