Thanks Ralph,

The problem was with the configuration of my OpenMPI installation. It prevented 
the btl tcp component from being found, which is why "ompi_info --param btl tcp 
--level 9" was not showing anything.
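
For anyone who hits the same thing: once the installation is built correctly, 
the component and its parameters are visible again. A quick sanity check 
(output details vary by version):

ompi_info | grep "MCA btl"             # the tcp component should be listed here
ompi_info --param btl tcp --level 9    # btl_tcp_if_include / btl_tcp_if_exclude show up here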

Charles

________________________________________
From: users <users-boun...@lists.open-mpi.org> on behalf of Ralph Castain via 
users <users@lists.open-mpi.org>
Sent: Monday, September 21, 2020 3:11 PM
To: Open MPI Users
Cc: Ralph Castain
Subject: Re: [OMPI users] Limiting IP addresses used by OpenMPI

[External Sender]

I'm not sure where you are looking, but those params are indeed present in the 
opal/mca/btl/tcp component:

/*
 *  Called by MCA framework to open the component, registers
 *  component parameters.
 */

static int mca_btl_tcp_component_register(void)
{
    char* message;

    /* register TCP component parameters */
    mca_btl_tcp_param_register_string("if_include",
                                      "Comma-delimited list of devices and/or CIDR notation of networks to use for MPI communication (e.g., \"eth0,192.168.0.0/16\").  Mutually exclusive with btl_tcp_if_exclude.",
                                      "",
                                      OPAL_INFO_LVL_1,
                                      &mca_btl_tcp_component.tcp_if_include);

    mca_btl_tcp_param_register_string("if_exclude",
                                      "Comma-delimited list of devices and/or CIDR notation of networks to NOT use for MPI communication -- all devices not matching these specifications will be used (e.g., \"eth0,192.168.0.0/16\").  If set to a non-default value, it is mutually exclusive with btl_tcp_if_include.",
                                      "127.0.0.1/8,sppp",
                                      OPAL_INFO_LVL_1,
                                      &mca_btl_tcp_component.tcp_if_exclude);

I added a little padding to make them clearer. This was from the v3.1.x branch, 
but those params have been there for a very long time. The 
"mca_btl_tcp_param_register_string" function adds the "btl_tcp_" prefix to the 
param.
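
So at run time it is the fully prefixed names that you set. A quick example 
(the interface name and netmask are just placeholders):

mpirun --mca btl_tcp_if_include eth0 -np 4 ./a.out
mpirun --mca btl_tcp_if_exclude 192.168.0.0/16 -np 4 ./a.out
# or via the environment
export OMPI_MCA_btl_tcp_if_include=eth0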


On Sep 4, 2020, at 5:39 PM, Charles Doland via users 
<users@lists.open-mpi.org> wrote:

Joseph,

There is no specific case. We are working on supporting the use of OpenMPI with 
our software, in addition to Intel MPI. With Intel MPI, we find that using the 
I_MPI_TCP_NETMASK or I_MPI_NETMASK environment variables is useful in many 
cases in which the job hosts have multiple network interfaces.
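
For reference, what we do today with Intel MPI looks roughly like this (the 
netmask is only an illustration):

export I_MPI_NETMASK=192.168.0.0/16     # or I_MPI_TCP_NETMASK

and what we expected to be the Open MPI equivalent:

mpirun --mca btl_tcp_if_include 192.168.0.0/16 ...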

I tried to use btl_tcp_if_include and btl_tcp_if_exclude, but neither seemed to 
have any effect. I also noticed that these options do not appear to be present 
in the source code. Although there were similar options for ptl in the source, 
my understanding is that ptl has been replaced by btl. I tested with version 
3.1.2; the source I examined was also version 3.1.2.

Charles Doland
charles.dol...@ansys.com
(408) 627-6621  [x6621]

________________________________
From: users <users-boun...@lists.open-mpi.org> on behalf of Joseph Schuchart 
via users <users@lists.open-mpi.org>
Sent: Tuesday, September 1, 2020 1:50 PM
To: users@lists.open-mpi.org <users@lists.open-mpi.org>
Cc: Joseph Schuchart <schuch...@hlrs.de>
Subject: Re: [OMPI users] Limiting IP addresses used by OpenMPI

[External Sender]

Charles,

What is the machine configuration you're running on? It seems that there
are two MCA parameters for the tcp btl: btl_tcp_if_include and
btl_tcp_if_exclude (see ompi_info for details). There may be other knobs
I'm not aware of. If you're using UCX then my guess is that UCX has its
own way to choose the network interface to be used...
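
(If it is UCX, I believe the knob there is UCX_NET_DEVICES, e.g. something like

mpirun --mca pml ucx -x UCX_NET_DEVICES=mlx5_0:1 ./app

but please check the UCX documentation rather than taking my word for it.)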

Cheers
Joseph

On 9/1/20 9:35 PM, Charles Doland via users wrote:
> Yes. It is not unusual to have multiple network interfaces on each host
> of a cluster. Usually there is a preference to use only one network
> interface on each host due to higher speed or throughput, or other
> considerations. It would be useful to be able to explicitly specify the
> interface to use for cases in which the MPI code does not select the
> preferred interface.
>
> Charles Doland
> charles.dol...@ansys.com
> (408) 627-6621  [x6621]
> ------------------------------------------------------------------------
> *From:* users <users-boun...@lists.open-mpi.org> on behalf of John
> Hearns via users <users@lists.open-mpi.org>
> *Sent:* Tuesday, September 1, 2020 12:22 PM
> *To:* Open MPI Users <users@lists.open-mpi.org>
> *Cc:* John Hearns <hear...@gmail.com>
> *Subject:* Re: [OMPI users] Limiting IP addresses used by OpenMPI
>
> *[External Sender]*
>
> Charles, I recall using the I_MPI_NETMASK to choose which interface for
> MPI to use.
> I guess you are asking the same question for OpenMPI?
>
> On Tue, 1 Sep 2020 at 17:03, Charles Doland via users
> <users@lists.open-mpi.org> wrote:
>
>     Is there a way to limit the IP addresses or network interfaces used
>     for communication by OpenMPI? I am looking for something similar to
>     the I_MPI_TCP_NETMASK or I_MPI_NETMASK environment variables for
>     Intel MPI.
>
>     The OpenMPI documentation mentions the btl_tcp_if_include
> and btl_tcp_if_exclude MCA options. These do not appear to be
>     present, at least in OpenMPI v3.1.2. Is there another way to do
>     this? Or are these options supported in a different version?
>
>     Charles Doland
>     charles.dol...@ansys.com
>     (408) 627-6621  [x6621]
>
