Re: [OMPI users] OpenMPI + InfiniBand

2016-10-31 Thread Jeff Squyres (jsquyres)
What does "ompi_info | grep openib" show?

Additionally, Mellanox provides alternate support through their MXM libraries, 
if you want to try that.

If that shows that you have the openib BTL plugin loaded, try running with 
"mpirun --mca btl_base_verbose 100 ..."  That will provide additional output 
about why / why not each point-to-point plugin is chosen.
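
For example, the two checks might look like this (a minimal sketch; the -np
count, host list, and ./a.out are placeholders for your own job):

  # Is the openib BTL plugin present in this Open MPI installation?
  $ ompi_info | grep openib

  # Re-run the job with verbose BTL selection output
  $ mpirun --mca btl_base_verbose 100 -np 2 --host node1,node2 ./a.out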


> On Oct 30, 2016, at 10:35 PM, Sergei Hrushev  wrote:
> 
> Hi Gilles!
> 
> 
> is there any reason why you configure with --with-verbs-libdir=/usr/lib ?
> As far as I understand, --with-verbs should be enough, and neither /usr/lib
> nor /usr/local/lib should ever be used on the configure command line.
> (And by the way, are you running on a 32-bit system? Shouldn't the 64-bit
> libs be in /usr/lib64?)
> 
> I'm on Ubuntu 16.04 x86_64, which has /usr/lib and /usr/lib32.
> As I understand it, on Ubuntu /usr/lib serves the role of /usr/lib64.
> So the library path is correct.
>  
> 
> Make sure you
> ulimit -l unlimited
> before you invoke mpirun, and that this value is correctly propagated to
> the remote nodes.
> /* the failure could be a side effect of a low ulimit -l */
>  
> Yes, ulimit -l returns "unlimited".
> So this is also correct.
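
A quick way to confirm the limit is also propagated to the remote nodes (a
minimal sketch; node1 and node2 are placeholder hostnames) is to run the
check itself through mpirun:

  $ ulimit -l
  $ mpirun --host node1,node2 -np 2 bash -c 'ulimit -l'

Each launched rank should report "unlimited".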
> 
> Best regards,
> Sergei.
> 


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/

___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users


Re: [OMPI users] MCA compilation later

2016-10-31 Thread Sean Ahern
Thanks. That's what I expected and hoped. But is there a pointer about how
to get started? If I've got an existing OpenMPI build, what's the process
to get a new MCA plugin built with a new set of header files?

(I'm a bit surprised only header files are necessary. Shouldn't the plugin
require at least runtime linking with a low-level transport library?)

-Sean

--
Sean Ahern
Computational Engineering International
919-363-0883

On Fri, Oct 28, 2016 at 3:40 PM, r...@open-mpi.org  wrote:

> You don’t need any of the hardware - you just need the headers. Things
> like libfabric and libibverbs are all publicly available, and so you can
> build all that support even if you cannot run it on your machine.
>
> Once your customer installs the binary, the various plugins will check for
> their required library and hardware and disqualify themselves if it isn’t
> found.
>
> On Oct 28, 2016, at 12:33 PM, Sean Ahern  wrote:
>
> There's been discussion on the OpenMPI list recently about static linking
> of OpenMPI with all of the desired MCAs in it. I've got the opposite
> question. I'd like to add MCAs later on to an already-compiled version of
> OpenMPI and am not quite sure how to do it.
>
> Let me summarize. We've got a commercial code that we deploy on customer
> machines in binary form. We're working to integrate OpenMPI into the
> installer, and things seem to be progressing well. (Note: because we're a
> commercial code, making the customer compile something doesn't work for us
> like it can for open source or research codes.)
>
> Now, we want to take advantage of OpenMPI's ability to find MCAs at
> runtime, pointing to the various plugins that might apply to a deployed
> system. I've configured and compiled OpenMPI on one of our build machines,
> one that doesn't have any special interconnect hardware or software
> installed. We take this compiled version of OpenMPI and use it on all of
> our machines. (Yes, I've read Building FAQ #39
>  about
> relocating OpenMPI. Useful, that.) I'd like to take our pre-compiled
> version of OpenMPI and add MCA libraries to it, giving OpenMPI the ability
> to communicate via transport mechanisms that weren't available on the
> original build machine. Things like InfiniBand, OmniPath, or one of Cray's
> interconnects.
>
> How would I go about doing this? And what are the limitations?
>
> I'm guessing that I need to go configure and compile the same version of
> OpenMPI on a machine that has the desired interconnect installation
> (headers and libraries), then go grab the corresponding
> lib/openmpi/mca_*{la,so} files. Take those files and drop them in our
> pre-built OpenMPI from our build machine in the same relative plugin
> location (lib/openmpi). If I stick with the same compiler (gcc, in this
> case), I'm hoping that symbols will all resolve themselves at runtime. (I
> probably will have to do some LD_LIBRARY_PATH games to be sure to find the
> appropriate underlying libraries unless OpenMPI's process for building MCAs
> links them in statically somehow.)
>
> Am I even on the right track here? (The various system-level FAQs (here
> , here
> , and especially here
> ) seem to suggest that I
> am.)
>
> Our first test platform will be getting OpenMPI via IB working on our
> cluster, where we have IB (and TCP/IP) functional and not OpenMPI. This
> will be a great stand-in for a customer that has an IB cluster and wants to
> just run our binary installation.
>
> Thanks.
>
> -Sean
>
> --
> Sean Ahern
> Computational Engineering International
> 919-363-0883
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users

[OMPI users] mpi4py+OpenMPI: Qs about submitting bugs and examples

2016-10-31 Thread Jason Maldonis
Hello everyone,

I am using mpi4py with OpenMPI for a simulation that uses dynamic resource
allocation via `mpi_spawn_multiple`.  I've been working on this problem for
about 6 months now and I have some questions and potential bugs I'd like to
submit.

Is this mailing list a good spot to submit bugs for OpenMPI? Or do I use
github? Are previous versions (like 1.10.2) still being developed for
bugfixes, or do I need to reproduce bugs for 2.x only?

I may also submit bugs to mpi4py, but I don't yet know exactly where the
bugs are originating from.  Do any of you know if github is the correct
place to submit bugs for mpi4py?

I have also learned some cool things that are not well documented on the
web, and I'd like to provide nice examples or something similar. Can I
contribute examples to either mpi4py or OpenMPI?

As a side note, OpenMPI 1.10.2 seems to be much more stable than 2.x for
the dynamic resource allocation code I am writing.

Thanks in advance,
Jason Maldonis
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users

Re: [OMPI users] mpi4py+OpenMPI: Qs about submitting bugs and examples

2016-10-31 Thread r...@open-mpi.org

> On Oct 31, 2016, at 10:39 AM, Jason Maldonis  wrote:
> 
> Hello everyone,
> 
> I am using mpi4py with OpenMPI for a simulation that uses dynamic resource 
> allocation via `mpi_spawn_multiple`.  I've been working on this problem for 
> about 6 months now and I have some questions and potential bugs I'd like to 
> submit.
> 
> Is this mailing list a good spot to submit bugs for OpenMPI? Or do I use 
> github?

You can use either - I would encourage the use of github “issues” when you have 
a specific bug, and the mailing list for general questions

> Are previous versions (like 1.10.2) still being developed for bugfixes, or do 
> I need to reproduce bugs for 2.x only?

The 1.10 series is still being supported - it has proven fairly stable and so 
the release rate has slowed down considerably in the last year. Primary 
development focus is on 2.x.

> 
> I may also submit bugs to mpi4py, but I don't yet know exactly where the bugs 
> are originating from.  Do any of you know if github is the correct place to 
> submit bugs for mpi4py?

I honestly don’t know, but I do believe mpi4py is on github as well

> 
> I have also learned some cool things that are not well documented on the web, 
> and I'd like to provide nice examples or something similar. Can I contribute 
> examples to either mpi4py or OpenMPI?

Please do!

> 
> As a side note, OpenMPI 1.10.2 seems to be much more stable than 2.x for the 
> dynamic resource allocation code I am writing.

Yes, there has been an outstanding bug on the 2.x series for dynamic 
operations. We just finally found the missing code change and it is being 
ported at this time.

> 
> Thanks in advance,
> Jason Maldonis
> 

___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users

Re: [OMPI users] MCA compilation later

2016-10-31 Thread r...@open-mpi.org
Here’s a link on how to create components:

https://github.com/open-mpi/ompi/wiki/devel-CreateComponent

and if you want to create a completely new framework:

https://github.com/open-mpi/ompi/wiki/devel-CreateFramework

If you want to distribute a proprietary plugin, you first develop and build it 
within the OMPI code base on your own machines. Then, just take the dll for 
your plugin from the lib/openmpi directory of the installation and distribute 
that “blob”.

I’ll correct my comment: you need the headers and the libraries. You just don’t 
need the hardware, though it means you cannot test those features.
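
A minimal sketch of that workflow for the openib BTL (the prefix, version, and
host names are assumptions for illustration; use the same Open MPI version and
compiler as the deployed build):

  # On a build host that has the verbs headers and libraries installed:
  $ ./configure --prefix=/opt/openmpi-1.10.2 --with-verbs
  $ make -j4 && make install

  # Copy just the plugin "blob" into the deployed tree:
  $ scp /opt/openmpi-1.10.2/lib/openmpi/mca_btl_openib.so \
        deploy-host:/opt/openmpi-1.10.2/lib/openmpi/

  # On the deployed machine, verify the plugin is now visible:
  $ ompi_info | grep openib

As noted in the correction above, the deployed machine still needs the
underlying library (e.g., libibverbs) available at run time for the plugin to
qualify itself.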


> On Oct 31, 2016, at 6:19 AM, Sean Ahern  wrote:
> 
> Thanks. That's what I expected and hoped. But is there a pointer about how to 
> get started? If I've got an existing OpenMPI build, what's the process to get 
> a new MCA plugin built with a new set of header files?
> 
> (I'm a bit surprised only header files are necessary. Shouldn't the plugin 
> require at least runtime linking with a low-level transport library?)
> 
> -Sean
> 
> --
> Sean Ahern
> Computational Engineering International
> 919-363-0883
> 
> On Fri, Oct 28, 2016 at 3:40 PM, r...@open-mpi.org wrote:
> You don’t need any of the hardware - you just need the headers. Things like 
> libfabric and libibverbs are all publicly available, and so you can build all 
> that support even if you cannot run it on your machine.
> 
> Once your customer installs the binary, the various plugins will check for 
> their required library and hardware and disqualify themselves if it isn’t 
> found.
> 
>> On Oct 28, 2016, at 12:33 PM, Sean Ahern wrote:
>> 
>> There's been discussion on the OpenMPI list recently about static linking of 
>> OpenMPI with all of the desired MCAs in it. I've got the opposite question. 
>> I'd like to add MCAs later on to an already-compiled version of OpenMPI and 
>> am not quite sure how to do it.
>> 
>> Let me summarize. We've got a commercial code that we deploy on customer 
>> machines in binary form. We're working to integrate OpenMPI into the 
>> installer, and things seem to be progressing well. (Note: because we're a 
>> commercial code, making the customer compile something doesn't work for us 
>> like it can for open source or research codes.)
>> 
>> Now, we want to take advantage of OpenMPI's ability to find MCAs at runtime, 
>> pointing to the various plugins that might apply to a deployed system. I've 
>> configured and compiled OpenMPI on one of our build machines, one that 
>> doesn't have any special interconnect hardware or software installed. We 
>> take this compiled version of OpenMPI and use it on all of our machines. 
>> (Yes, I've read Building FAQ #39 
>>  about 
>> relocating OpenMPI. Useful, that.) I'd like to take our pre-compiled version 
>> of OpenMPI and add MCA libraries to it, giving OpenMPI the ability to 
>> communicate via transport mechanisms that weren't available on the original 
>> build machine. Things like InfiniBand, OmniPath, or one of Cray's 
>> interconnects.
>> 
>> How would I go about doing this? And what are the limitations?
>> 
>> I'm guessing that I need to go configure and compile the same version of 
>> OpenMPI on a machine that has the desired interconnect installation (headers 
>> and libraries), then go grab the corresponding lib/openmpi/mca_*{la,so} 
>> files. Take those files and drop them in our pre-built OpenMPI from our 
>> build machine in the same relative plugin location (lib/openmpi). If I stick 
>> with the same compiler (gcc, in this case), I'm hoping that symbols will all 
>> resolve themselves at runtime. (I probably will have to do some 
>> LD_LIBRARY_PATH games to be sure to find the appropriate underlying 
>> libraries unless OpenMPI's process for building MCAs links them in 
>> statically somehow.)
>> 
>> Am I even on the right track here? (The various system-level FAQs (here 
>> , here 
>> , and especially here 
>> ) seem to suggest that I 
>> am.)
>> 
>> Our first test platform will be getting OpenMPI via IB working on our 
>> cluster, where we have IB (and TCP/IP) functional and not OpenMPI. This will 
>> be a great stand-in for a customer that has an IB cluster and wants to just 
>> run our binary installation.
>> 
>> Thanks.
>> 
>> -Sean
>> 
>> --
>> Sean Ahern
>> Computational Engineering International
>> 919-363-0883

Re: [OMPI users] Redusing libmpi.so size....

2016-10-31 Thread Mahesh Nanavalla
Hi Jeff Squyres,

Thank you for your reply...

My problem is that I want to reduce the library size by removing unwanted
plugins.

Here, libmpi.so.12.0.3 is 2.4 MB.

How can I find out which plugins were included when building
libmpi.so.12.0.3, and how can I remove them?

Thanks&Regards,
Mahesh N
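
Following the approach Jeff describes in the reply quoted below, one way to see
which plugins a build contains and then rebuild without the unwanted ones (a
hedged sketch; the components named in the list are examples only):

  # List every component the current build knows about:
  $ ompi_info | grep "MCA "

  # Rebuild, skipping components you know you will not use:
  $ ./configure --disable-dlopen --disable-static \
        --enable-mca-no-build=btl-usnic,coll-ml ...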

On Fri, Oct 28, 2016 at 7:09 PM, Jeff Squyres (jsquyres)  wrote:

> On Oct 28, 2016, at 8:12 AM, Mahesh Nanavalla <
> mahesh.nanavalla...@gmail.com> wrote:
> >
> > i have configured as below for arm
> >
> > ./configure --enable-orterun-prefix-by-default  
> > --prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi"
> CC=arm-openwrt-linux-muslgnueabi-gcc CXX=arm-openwrt-linux-muslgnueabi-g++
> --host=arm-openwrt-linux-muslgnueabi --enable-script-wrapper-compilers
> --disable-mpi-fortran --enable-dlopen --enable-shared --disable-vt
> --disable-java --disable-libompitrace --disable-static
>
> Note that there is a tradeoff here: --enable-dlopen will reduce the size
> of libmpi.so by splitting out all the plugins into separate DSOs (dynamic
> shared objects -- i.e., individual .so plugin files).  But note that some
> of plugins are quite small in terms of code.  I mention this because when
> you dlopen a DSO, it will load in DSOs in units of pages.  So even if a DSO
> only has 1KB of code, it will use a full page's worth of bytes in your running
> process (e.g., 4KB -- or whatever the page size is on your system).
>
> On the other hand, if you --disable-dlopen, then all of Open MPI's plugins
> are slurped into libmpi.so (and friends).  Meaning: no DSOs, no dlopen, no
> page-boundary-loading behavior.  This allows the compiler/linker to pack in
> all the plugins into memory more efficiently (because they'll be compiled
> as part of libmpi.so, and all the code is packed in there -- just like any
> other library).  Your total memory usage in the process may be smaller.
>
> Sidenote: if you run more than one MPI process per node, then libmpi.so
> (and friends) will be shared between processes.  You're assumedly running
> in an embedded environment, so I don't know if this factor matters (i.e., I
> don't know if you'll run with ppn>1), but I thought I'd mention it anyway.
>
> On the other hand (that's your third hand, for those at home counting...),
> you may not want to include *all* the plugins.  I.e., there may be a bunch
> of plugins that you're not actually using, and therefore if they are
> compiled in as part of libmpi.so (and friends), they're consuming space
> that you don't want/need.  So the dlopen mechanism might actually be better
> -- because Open MPI may dlopen a plugin at run time, determine that it
> won't be used, and then dlclose it (i.e., release the memory that would
> have been used for it).
>
> On the other (fourth!) hand, you can actually tell Open MPI to *not* build
> specific plugins with the --enable-mca-no-build=LIST configure option.
> I.e., if you know exactly what plugins you want to use, you can negate the
> ones that you *don't* want to use on the configure line, use
> --disable-static and --disable-dlopen, and you'll likely use the least
> amount of memory.  This is admittedly a bit clunky, but Open MPI's
> configure process was (obviously) not optimized for this use case -- it's
> much more optimized to the "build everything possible, and figure out which
> to use at run time" use case.
>
> If you really want to hit rock bottom on MPI process size in your embedded
> environment, you can do some experimentation to figure out exactly which
> components you need.  You can use repeated runs with "mpirun --mca
> ABC_base_verbose 100 ...", where "ABC" is each of Open MPI's framework
> names ("framework" = collection of plugins of the same type).  This verbose
> output will show you exactly which components are opened, which ones are
> used, and which ones are discarded.  You can build up a list of all the
> discarded components and --enable-mca-no-build them.
>
> > While I am running using mpirun,
> > I am getting the following error:
> > root@OpenWrt:~# /usr/bin/mpirun --allow-run-as-root -np 1
> /usr/bin/openmpiWiFiBulb
> > 
> --
> > Sorry!  You were supposed to get help about:
> > opal_init:startup:internal-failure
> > But I couldn't open the help file:
> > 
> > /home/nmahesh/Workspace/ARM_MPI/openmpi/share/openmpi/help-opal-runtime.txt:
> No such file or directory.  Sorry!
>
> So this is really two errors:
>
> 1. The help message file is not being found.
> 2. Something is obviously going wrong during opal_init() (which is one of
> Open MPI's startup functions).
>
> For #1, when I do a default build of Open MPI 1.10.3, that file *is*
> installed.  Are you trimming the installation tree, perchance?  If so, if
> you can put at least that one file back in its installation location (it's
> in the Open MPI source tarball), it might reveal more information on
> exactly what is failing.
>
> Additionally, I wonder if shared memory is not getting set up correctly.

Re: [OMPI users] Redusing libmpi.so size....

2016-10-31 Thread Mahesh Nanavalla
Hi all,

Thank you for your reply...

My problem is that I want to reduce the library size by removing unwanted
plugins.

Here, libmpi.so.12.0.3 is 2.4 MB.

How can I find out which plugins were included when building
libmpi.so.12.0.3, and how can I remove them?

Thanks&Regards,
Mahesh N

On Tue, Nov 1, 2016 at 11:43 AM, Mahesh Nanavalla <
mahesh.nanavalla...@gmail.com> wrote:

> Hi Jeff Squyres,
>
> Thank you for your reply...
>
> My problem is that I want to reduce the library size by removing unwanted
> plugins.
>
> Here, libmpi.so.12.0.3 is 2.4 MB.
>
> How can I find out which plugins were included when building
> libmpi.so.12.0.3, and how can I remove them?
>
> Thanks&Regards,
> Mahesh N
>
> On Fri, Oct 28, 2016 at 7:09 PM, Jeff Squyres (jsquyres) <
> jsquy...@cisco.com> wrote:
>
>> On Oct 28, 2016, at 8:12 AM, Mahesh Nanavalla <
>> mahesh.nanavalla...@gmail.com> wrote:
>> >
>> > i have configured as below for arm
>> >
>> > ./configure --enable-orterun-prefix-by-default
>> --prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi"
>> CC=arm-openwrt-linux-muslgnueabi-gcc CXX=arm-openwrt-linux-muslgnueabi-g++
>> --host=arm-openwrt-linux-muslgnueabi --enable-script-wrapper-compilers
>> --disable-mpi-fortran --enable-dlopen --enable-shared --disable-vt
>> --disable-java --disable-libompitrace --disable-static
>>
>> Note that there is a tradeoff here: --enable-dlopen will reduce the size
>> of libmpi.so by splitting out all the plugins into separate DSOs (dynamic
>> shared objects -- i.e., individual .so plugin files).  But note that some
>> of plugins are quite small in terms of code.  I mention this because when
>> you dlopen a DSO, it will load in DSOs in units of pages.  So even if a DSO
>> only has 1KB of code, it will use a full page's worth of bytes in your running
>> process (e.g., 4KB -- or whatever the page size is on your system).
>>
>> On the other hand, if you --disable-dlopen, then all of Open MPI's
>> plugins are slurped into libmpi.so (and friends).  Meaning: no DSOs, no
>> dlopen, no page-boundary-loading behavior.  This allows the compiler/linker
>> to pack in all the plugins into memory more efficiently (because they'll be
>> compiled as part of libmpi.so, and all the code is packed in there -- just
>> like any other library).  Your total memory usage in the process may be
>> smaller.
>>
>> Sidenote: if you run more than one MPI process per node, then libmpi.so
>> (and friends) will be shared between processes.  You're assumedly running
>> in an embedded environment, so I don't know if this factor matters (i.e., I
>> don't know if you'll run with ppn>1), but I thought I'd mention it anyway.
>>
>> On the other hand (that's your third hand, for those at home
>> counting...), you may not want to include *all* the plugins.  I.e., there
>> may be a bunch of plugins that you're not actually using, and therefore if
>> they are compiled in as part of libmpi.so (and friends), they're consuming
>> space that you don't want/need.  So the dlopen mechanism might actually be
>> better -- because Open MPI may dlopen a plugin at run time, determine that
>> it won't be used, and then dlclose it (i.e., release the memory that would
>> have been used for it).
>>
>> On the other (fourth!) hand, you can actually tell Open MPI to *not*
>> build specific plugins with the --enable-mca-no-build=LIST configure
>> option.  I.e., if you know exactly what plugins you want to use, you can
>> negate the ones that you *don't* want to use on the configure line, use
>> --disable-static and --disable-dlopen, and you'll likely use the least
>> amount of memory.  This is admittedly a bit clunky, but Open MPI's
>> configure process was (obviously) not optimized for this use case -- it's
>> much more optimized to the "build everything possible, and figure out which
>> to use at run time" use case.
>>
>> If you really want to hit rock bottom on MPI process size in your
>> embedded environment, you can do some experimentation to figure out exactly
>> which components you need.  You can use repeated runs with "mpirun --mca
>> ABC_base_verbose 100 ...", where "ABC" is each of Open MPI's framework
>> names ("framework" = collection of plugins of the same type).  This verbose
>> output will show you exactly which components are opened, which ones are
>> used, and which ones are discarded.  You can build up a list of all the
>> discarded components and --enable-mca-no-build them.
>>
>> > While I am running using mpirun,
>> > I am getting the following error:
>> > root@OpenWrt:~# /usr/bin/mpirun --allow-run-as-root -np 1
>> /usr/bin/openmpiWiFiBulb
>> > 
>> --
>> > Sorry!  You were supposed to get help about:
>> > opal_init:startup:internal-failure
>> > But I couldn't open the help file:
>> > 
>> > /home/nmahesh/Workspace/ARM_MPI/openmpi/share/openmpi/help-opal-runtime.txt:
>> No such file or directory.  Sorry!
>>
>> So this is really two errors:
>>
>> 1. The help message file is not being found.

Re: [OMPI users] OpenMPI + InfiniBand

2016-10-31 Thread Sergei Hrushev
Hi Jeff !

What does "ompi_info | grep openib" show?
>
>
$ ompi_info | grep openib
 MCA btl: openib (MCA v2.0.0, API v2.0.0, Component v1.10.2)

> Additionally, Mellanox provides alternate support through their MXM
> libraries, if you want to try that.

Yes, I know.
But we already have a hybrid cluster with OpenMPI, OpenMP, CUDA, Torque and
many other libraries installed, and because it works perfectly over the
Ethernet interconnect, my idea was to add InfiniBand support with a minimum
of changes, mainly because we already have some custom-written software for
OpenMPI.


> If that shows that you have the openib BTL plugin loaded, try running with
> "mpirun --mca btl_base_verbose 100 ..."  That will provide additional
> output about why / why not each point-to-point plugin is chosen.
>
>
Yes, I tried to get this info already.
And I saw in the log that rdmacm wants an IP address on the port.
So my question in the topic's first message was:

Is it enough for OpenMPI to have RDMA only, or should IPoIB also be
installed?
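
A quick, hedged check of whether an IPoIB interface is configured at all (the
interface name ib0 is an assumption; adjust for your system):

  $ ip addr show ib0    # does the IB port have an IP address assigned?
  $ ibstat              # verbs-level port state (from the infiniband-diags tools)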

The mpirun output is:

[node1:02674] mca: base: components_register: registering btl components
[node1:02674] mca: base: components_register: found loaded component openib
[node1:02674] mca: base: components_register: component openib register
function successful
[node1:02674] mca: base: components_register: found loaded component sm
[node1:02674] mca: base: components_register: component sm register
function successful
[node1:02674] mca: base: components_register: found loaded component self
[node1:02674] mca: base: components_register: component self register
function successful
[node1:02674] mca: base: components_open: opening btl components
[node1:02674] mca: base: components_open: found loaded component openib
[node1:02674] mca: base: components_open: component openib open function
successful
[node1:02674] mca: base: components_open: found loaded component sm
[node1:02674] mca: base: components_open: component sm open function
successful
[node1:02674] mca: base: components_open: found loaded component self
[node1:02674] mca: base: components_open: component self open function
successful
[node1:02674] select: initializing btl component openib
[node1:02674] openib BTL: rdmacm IP address not found on port
[node1:02674] openib BTL: rdmacm CPC unavailable for use on mlx4_0:1;
skipped
[node1:02674] select: init of component openib returned failure
[node1:02674] mca: base: close: component openib closed
[node1:02674] mca: base: close: unloading component openib
[node1:02674] select: initializing btl component sm
[node1:02674] select: init of component sm returned failure
[node1:02674] mca: base: close: component sm closed
[node1:02674] mca: base: close: unloading component sm
[node1:02674] select: initializing btl component self
[node1:02674] select: init of component self returned success
[node1:02674] mca: bml: Using self btl to [[16642,1],0] on node node1
[node1:02674] mca: base: close: component self closed
[node1:02674] mca: base: close: unloading component self

Best regards,
Sergei.
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users