Hello,

I am having two (possibly unrelated) issues when attempting to run Open
MPI under the Flux scheduler.

First, when attempting to run the legacy launcher mpirun inside a Flux
allocation (flux mini alloc -N 1), the command

mpirun -N 2 hostname

returns

mpirun: Error: unknown option "-N"
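
(In case the option spelling matters: I have not yet confirmed whether the
documented equivalents behave any differently, but the variants I would
try next are

mpirun --npernode 2 hostname
mpirun --map-by ppr:2:node hostname

both of which should be accepted by an Open MPI 4.1 mpirun.)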

In fact, just running mpirun with no arguments returns:
[fluxslurm181:1854484] procdir: /tmp/ompi.fluxslurm181.31638/pid.1854484/0/0
[fluxslurm181:1854484] jobdir: /tmp/ompi.fluxslurm181.31638/pid.1854484/0
[fluxslurm181:1854484] top: /tmp/ompi.fluxslurm181.31638/pid.1854484
[fluxslurm181:1854484] top: /tmp/ompi.fluxslurm181.31638
[fluxslurm181:1854484] tmp: /tmp
[fluxslurm181:1854484] sess_dir_cleanup: job session dir does not exist
[fluxslurm181:1854484] sess_dir_cleanup: top session dir not empty - leaving
[fluxslurm181:1854484] procdir: /tmp/ompi.fluxslurm181.31638/pid.1854484/0/0
[fluxslurm181:1854484] jobdir: /tmp/ompi.fluxslurm181.31638/pid.1854484/0
[fluxslurm181:1854484] top: /tmp/ompi.fluxslurm181.31638/pid.1854484
[fluxslurm181:1854484] top: /tmp/ompi.fluxslurm181.31638
[fluxslurm181:1854484] tmp: /tmp
[fluxslurm181:1854484] *** Process received signal ***
[fluxslurm181:1854484] Signal: Segmentation fault (11)
[fluxslurm181:1854484] Signal code: Address not mapped (1)
[fluxslurm181:1854484] Failing at address: (nil)
[fluxslurm181:1854484] [ 0] /lib64/libpthread.so.0(+0x12ce0)[0x15555395fce0]
[fluxslurm181:1854484] *** End of error message ***

Clearing that tmp directory has no effect, by the way.

Tracing around, the last function called before the segfault appears to be
opal_output_open() in opal/util/output.c.
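
If a fuller backtrace would help, I can try to capture one along these
lines (a sketch; it assumes the build has usable debug symbols, which I
have not verified):

# run mpirun under gdb inside the allocation, then dump the stack on the fault
gdb --args mpirun
(gdb) run
(gdb) bt full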

Second, when I do flux mini run -N 2 -n 8 osu_allreduce, everything works
fine. With any more ranks I get failures; e.g., flux mini run -N 2 -n 16
osu_allreduce produces many messages along the following lines:
[fluxslurm181][[0,0],7][connect/btl_openib_connect_udcm.c:1535:udcm_find_endpoint]
could not find endpoint with port: 1, lid: 107, msg_type: 100
[fluxslurm181][[0,0],7][connect/btl_openib_connect_udcm.c:2044:udcm_process_messages]
could not find associated endpoint.
[fluxslurm181][[0,0],6][connect/btl_openib_connect_udcm.c:1535:udcm_find_endpoint]
could not find endpoint with port: 1, lid: 107, msg_type: 100
[fluxslurm181][[0,0],6][connect/btl_openib_connect_udcm.c:2044:udcm_process_messages]
could not find associated endpoint.
[fluxslurm181][[0,0],5][connect/btl_openib_connect_udcm.c:1535:udcm_find_endpoint]
could not find endpoint with port: 1, lid: 107, msg_type: 100
[fluxslurm181][[0,0],5][connect/btl_openib_connect_udcm.c:2044:udcm_process_messages]
could not find associated endpoint.
[fluxslurm181:1859081] *** An error occurred in MPI_Barrier
[fluxslurm181:1859081] *** reported by process [0,0]
[fluxslurm181:1859081] *** on communicator MPI_COMM_WORLD
[fluxslurm181:1859081] *** MPI_ERR_INTERN: internal error
[fluxslurm181:1859081] *** MPI_ERRORS_ARE_FATAL (processes in this
communicator will now abort,
[fluxslurm181:1859081] ***    and potentially your MPI job)
[fluxslurm181:1859081] [0]
func:~/spack/linux-rhel8-x86_64/gcc-8.3.1/openmpi-4.1.2-fhzwbqwfjn2olmrai37dlfjhg6dwl6n6/lib/libopen-pal.so.40(opal_backtrace_buffer+0x23)
[0x7ffff64706f3]
[fluxslurm181:1859081] [1]
func:~/spack/linux-rhel8-x86_64/gcc-8.3.1/openmpi-4.1.2-fhzwbqwfjn2olmrai37dlfjhg6dwl6n6/lib/libmpi.so.40(ompi_mpi_abort+0x127)
[0x7ffff78cf437]
[fluxslurm181:1859081] [2]
func:~/spack/linux-rhel8-x86_64/gcc-8.3.1/openmpi-4.1.2-fhzwbqwfjn2olmrai37dlfjhg6dwl6n6/lib/libmpi.so.40(ompi_mpi_errors_are_fatal_comm_handler+0xcd)
[0x7ffff78be55d]
[fluxslurm181:1859081] [3]
func:~/spack/linux-rhel8-x86_64/gcc-8.3.1/openmpi-4.1.2-fhzwbqwfjn2olmrai37dlfjhg6dwl6n6/lib/libmpi.so.40(ompi_errhandler_invoke+0x115)
[0x7ffff78bda95]
[fluxslurm181:1859081] [4]
func:~/spack/linux-rhel8-x86_64/gcc-8.3.1/openmpi-4.1.2-fhzwbqwfjn2olmrai37dlfjhg6dwl6n6/lib/libmpi.so.40(MPI_Barrier+0x193)
[0x7ffff78e80e3]
[fluxslurm181:1859081] [5] func:osu_allreduce() [0x4018f6]
[fluxslurm181:1859081] [6]
func:/lib64/libc.so.6(__libc_start_main+0xf3) [0x7ffff69a2cf3]
[fluxslurm181:1859081] [7] func:osu_allreduce() [0x401d2e]
--------------------------------------------------------------------------
At least one pair of MPI processes are unable to reach each other for
MPI communications.  This means that no Open MPI device has indicated
that it can be used to communicate between these processes.  This is
an error; Open MPI requires that all MPI processes be able to reach
each other.  This error can sometimes be the result of forgetting to
specify the "self" BTL.

  Process 1 ([[0,0],5]) is on host: fluxslurm181
  Process 2 ([[0,0],13]) is on host: unknown!
  BTLs attempted: self openib vader

Your MPI job is now going to abort; sorry.
--------------------------------------------------------------------------
[fluxslurm181:1859086] *** An error occurred in MPI_Barrier
[fluxslurm181:1859086] *** reported by process [0,5]
[fluxslurm181:1859086] *** on communicator MPI_COMM_WORLD
[fluxslurm181:1859086] *** MPI_ERR_INTERN: internal error
[fluxslurm181:1859086] *** MPI_ERRORS_ARE_FATAL (processes in this
communicator will now abort,
[fluxslurm181:1859086] ***    and potentially your MPI job)
[fluxslurm181:1859086] [0]
func:~/spack/linux-rhel8-x86_64/gcc-8.3.1/openmpi-4.1.2-fhzwbqwfjn2olmrai37dlfjhg6dwl6n6/lib/libopen-pal.so.40(opal_backtrace_buffer+0x23)
[0x7ffff64706f3]
[fluxslurm181:1859086] [1]
func:~/spack/linux-rhel8-x86_64/gcc-8.3.1/openmpi-4.1.2-fhzwbqwfjn2olmrai37dlfjhg6dwl6n6/lib/libmpi.so.40(ompi_mpi_abort+0x127)
[0x7ffff78cf437]
[fluxslurm181:1859086] [2]
func:~/spack/linux-rhel8-x86_64/gcc-8.3.1/openmpi-4.1.2-fhzwbqwfjn2olmrai37dlfjhg6dwl6n6/lib/libmpi.so.40(ompi_mpi_errors_are_fatal_comm_handler+0xcd)
[0x7ffff78be55d]
[fluxslurm181:1859086] [3]
func:~/spack/linux-rhel8-x86_64/gcc-8.3.1/openmpi-4.1.2-fhzwbqwfjn2olmrai37dlfjhg6dwl6n6/lib/libmpi.so.40(ompi_errhandler_invoke+0x115)
[0x7ffff78bda95]
[fluxslurm181:1859086] [4]
func:~/spack/linux-rhel8-x86_64/gcc-8.3.1/openmpi-4.1.2-fhzwbqwfjn2olmrai37dlfjhg6dwl6n6/lib/libmpi.so.40(MPI_Barrier+0x193)
[0x7ffff78e80e3]
[fluxslurm181:1859086] [5] func:osu_allreduce() [0x4018f6]
[fluxslurm181:1859086] [6]
func:/lib64/libc.so.6(__libc_start_main+0xf3) [0x7ffff69a2cf3]
[fluxslurm181:1859086] [7] func:osu_allreduce() [0x401d2e]
[fluxslurm181:1859087] UNSUPPORTED TYPE 0

I have seen similar errors when resource limits were too low on the system
in question, but the current ulimit -a output is as follows:

core file size          (blocks, -c) 32
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 1028904
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 262144
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) unlimited
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1028904
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

and we're still seeing this issue.
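
One diagnostic I can still run is to take the openib BTL out of the
picture and force the cm PML over the psm2 MTL, which this build includes.
A sketch, assuming the usual OMPI_MCA_ environment-variable mapping and
that the fabric actually supports PSM2:

# bypass the openib BTL entirely and use the psm2 MTL via the cm PML
export OMPI_MCA_pml=cm
export OMPI_MCA_mtl=psm2
flux mini run -N 2 -n 16 osu_allreduce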

I have the following parameters set:
mca_base_component_show_load_errors = 1
opal_set_max_sys_limits = 1
orte_report_launch_progress = 1
orte_startup_timeout = 10000
orte_tmpdir_base = /tmp
orte_allocation_required = 1
btl_openib_want_fork_support = 0
rmaps_base_ranking_policy = core
mpi_show_handle_leaks = 0
mpi_warn_on_fork = 1
ras_base_launch_orted_on_hn = true
btl_openib_allow_ib = true
opal_abort_print_stack = true
orte_report_silent_errors = true
orte_debug = true
orte_debug_verbose = 100
orte_debug_daemons = true
orte_show_resolved_nodenames = true
orte_enable_recovery = true
orte_max_restarts = 100
orte_display_alloc = true

I also tried with defaults for all of those except btl_openib_allow_ib and
still got the same errors.
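
That minimal-configuration test was roughly the following (a sketch using
the standard OMPI_MCA_ environment prefix rather than the parameter file):

# everything at defaults except allowing the IB transport
export OMPI_MCA_btl_openib_allow_ib=true
flux mini run -N 2 -n 16 osu_allreduce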

ompi_info for this build is as follows:
                Package: Open MPI nate@fluxslurm211 Distribution
                Open MPI: 4.1.2
  Open MPI repo revision: v4.1.2
   Open MPI release date: Nov 24, 2021
                Open RTE: 4.1.2
  Open RTE repo revision: v4.1.2
   Open RTE release date: Nov 24, 2021
                    OPAL: 4.1.2
      OPAL repo revision: v4.1.2
       OPAL release date: Nov 24, 2021
                 MPI API: 3.1.0
            Ident string: 4.1.2
                  Prefix:
~/spack/linux-rhel8-x86_64/gcc-8.3.1/openmpi-4.1.2-fhzwbqwfjn2olmrai37dlfjhg6dwl6n6
 Configured architecture: x86_64-pc-linux-gnu
          Configure host: fluxslurm211
           Configured by: nate
           Configured on: Thu Aug 18 21:47:44 UTC 2022
          Configure host: fluxslurm211
  Configure command line:
'--prefix=~/spack/linux-rhel8-x86_64/gcc-8.3.1/openmpi-4.1.2-fhzwbqwfjn2olmrai37dlfjhg6dwl6n6'
                          '--enable-shared' '--disable-silent-rules'
                          '--disable-builtin-atomics' '--with-pmi=/usr'

'--with-pmix=~/spack/linux-rhel8-x86_64/gcc-8.3.1/pmix-4.1.2-2ctmnxymxfe65riwg2v66gue4vqahaeg'

'--with-zlib=~/spack/linux-rhel8-x86_64/gcc-8.3.1/zlib-1.2.11-ldshfne6a4s4rqnm2vonftb2nf6ytgvd'
                          '--enable-mpi1-compatibility' '--without-xpmem'
                          '--without-mxm' '--without-fca' '--without-ofi'
                          '--with-psm2=/usr' '--without-psm' '--with-cma'
                          '--with-verbs=/usr' '--without-knem'
                          '--without-ucx' '--without-hcoll' '--without-tm'
                          '--without-sge' '--with-slurm' '--without-alps'
                          '--without-lsf' '--without-loadleveler'
                          '--disable-memchecker' '--with-lustre=/usr'
                          '--with-libevent=/usr' '--disable-java'
                          '--disable-mpi-java' '--without-cuda'
                          '--enable-wrapper-rpath'
                          '--disable-wrapper-runpath' '--disable-mpi-cxx'
                          '--disable-cxx-exceptions'
                          '--disable-per-user-config-files'
                          '--with-ld=/usr/bin/ld'
                          '--with-memory-manager=linux'
                          '--enable-mca-no-build=crs,filem,pml-v,btl-tcp'

'--with-io-romio-flags=--with-file-system=ufs+nfs+lustre'
                          '--enable-mpi-fortran' '--with-flux'
                          '--with-flux-pmi'
                Built by: nate
                Built on: Thu Aug 18 21:52:22 UTC 2022
              Built host: fluxslurm211
              C bindings: yes
            C++ bindings: no
             Fort mpif.h: yes (all)
            Fort use mpi: yes (full: ignore TKR)
       Fort use mpi size: deprecated-ompi-info-value
        Fort use mpi_f08: yes
 Fort mpi_f08 compliance: The mpi_f08 module is available, but due to
                          limitations in the
                          /usr/WS1/nate/tce4/spack/lib/spack/env/gcc/gfortran
                          compiler and/or Open MPI, does not support the
                          following: array subsections, direct passthru
                          (where possible) to underlying Open MPI's C
                          functionality
  Fort mpi_f08 subarrays: no
           Java bindings: no
  Wrapper compiler rpath: rpath
              C compiler: /usr/WS1/nate/tce4/spack/lib/spack/env/gcc/gcc
     C compiler absolute:
  C compiler family name: GNU
      C compiler version: 8.5.0
            C++ compiler: /usr/WS1/nate/tce4/spack/lib/spack/env/gcc/g++
   C++ compiler absolute: none
           Fort compiler: /usr/WS1/nate/tce4/spack/lib/spack/env/gcc/gfortran
       Fort compiler abs:
         Fort ignore TKR: yes (!GCC$ ATTRIBUTES NO_ARG_CHECK ::)
   Fort 08 assumed shape: yes
      Fort optional args: yes
          Fort INTERFACE: yes
    Fort ISO_FORTRAN_ENV: yes
       Fort STORAGE_SIZE: yes
      Fort BIND(C) (all): yes
      Fort ISO_C_BINDING: yes
 Fort SUBROUTINE BIND(C): yes
       Fort TYPE,BIND(C): yes
 Fort T,BIND(C,name="a"): yes
            Fort PRIVATE: yes
          Fort PROTECTED: yes
           Fort ABSTRACT: yes
       Fort ASYNCHRONOUS: yes
          Fort PROCEDURE: yes
         Fort USE...ONLY: yes
           Fort C_FUNLOC: yes
 Fort f08 using wrappers: yes
         Fort MPI_SIZEOF: yes
             C profiling: yes
           C++ profiling: no
   Fort mpif.h profiling: yes
  Fort use mpi profiling: yes
   Fort use mpi_f08 prof: yes
          C++ exceptions: no
          Thread support: posix (MPI_THREAD_MULTIPLE: yes, OPAL support: yes,
                          OMPI progress: no, ORTE progress: yes, Event lib:
                          yes)
           Sparse Groups: no
  Internal debug support: no
  MPI interface warnings: yes
     MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
              dl support: yes
   Heterogeneous support: no
 mpirun default --prefix: no
       MPI_WTIME support: native
     Symbol vis. support: yes
   Host topology support: yes
            IPv6 support: no
      MPI1 compatibility: yes
          MPI extensions: affinity, cuda, pcollreq
   FT Checkpoint support: no (checkpoint thread: no)
   C/R Enabled Debugging: no
  MPI_MAX_PROCESSOR_NAME: 256
    MPI_MAX_ERROR_STRING: 256
     MPI_MAX_OBJECT_NAME: 64
        MPI_MAX_INFO_KEY: 36
        MPI_MAX_INFO_VAL: 256
       MPI_MAX_PORT_NAME: 1024
  MPI_MAX_DATAREP_STRING: 128
           MCA allocator: basic (MCA v2.1.0, API v2.0.0, Component v4.1.2)
           MCA allocator: bucket (MCA v2.1.0, API v2.0.0, Component v4.1.2)
           MCA backtrace: execinfo (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                 MCA btl: self (MCA v2.1.0, API v3.1.0, Component v4.1.2)
                 MCA btl: openib (MCA v2.1.0, API v3.1.0, Component v4.1.2)
                 MCA btl: vader (MCA v2.1.0, API v3.1.0, Component v4.1.2)
            MCA compress: bzip (MCA v2.1.0, API v2.0.0, Component v4.1.2)
            MCA compress: gzip (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                  MCA dl: dlopen (MCA v2.1.0, API v1.0.0, Component v4.1.2)
               MCA event: external (MCA v2.1.0, API v2.0.0, Component v4.1.2)
               MCA hwloc: external (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                  MCA if: linux_ipv6 (MCA v2.1.0, API v2.0.0, Component
                          v4.1.2)
                  MCA if: posix_ipv4 (MCA v2.1.0, API v2.0.0, Component
                          v4.1.2)
         MCA installdirs: env (MCA v2.1.0, API v2.0.0, Component v4.1.2)
         MCA installdirs: config (MCA v2.1.0, API v2.0.0, Component v4.1.2)
              MCA memory: patcher (MCA v2.1.0, API v2.0.0, Component v4.1.2)
               MCA mpool: hugepage (MCA v2.1.0, API v3.0.0, Component v4.1.2)
             MCA patcher: overwrite (MCA v2.1.0, API v1.0.0, Component
                          v4.1.2)
                MCA pmix: isolated (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                MCA pmix: ext3x (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                MCA pmix: flux (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                MCA pmix: s1 (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                MCA pmix: s2 (MCA v2.1.0, API v2.0.0, Component v4.1.2)
               MCA pstat: linux (MCA v2.1.0, API v2.0.0, Component v4.1.2)
              MCA rcache: grdma (MCA v2.1.0, API v3.3.0, Component v4.1.2)
           MCA reachable: weighted (MCA v2.1.0, API v2.0.0, Component v4.1.2)
           MCA reachable: netlink (MCA v2.1.0, API v2.0.0, Component v4.1.2)
               MCA shmem: mmap (MCA v2.1.0, API v2.0.0, Component v4.1.2)
               MCA shmem: posix (MCA v2.1.0, API v2.0.0, Component v4.1.2)
               MCA shmem: sysv (MCA v2.1.0, API v2.0.0, Component v4.1.2)
               MCA timer: linux (MCA v2.1.0, API v2.0.0, Component v4.1.2)
              MCA errmgr: default_app (MCA v2.1.0, API v3.0.0, Component
                          v4.1.2)
              MCA errmgr: default_hnp (MCA v2.1.0, API v3.0.0, Component
                          v4.1.2)
              MCA errmgr: default_orted (MCA v2.1.0, API v3.0.0, Component
                          v4.1.2)
              MCA errmgr: default_tool (MCA v2.1.0, API v3.0.0, Component
                          v4.1.2)
                 MCA ess: env (MCA v2.1.0, API v3.0.0, Component v4.1.2)
                 MCA ess: hnp (MCA v2.1.0, API v3.0.0, Component v4.1.2)
                 MCA ess: pmi (MCA v2.1.0, API v3.0.0, Component v4.1.2)
                 MCA ess: singleton (MCA v2.1.0, API v3.0.0, Component
                          v4.1.2)
                 MCA ess: tool (MCA v2.1.0, API v3.0.0, Component v4.1.2)
                 MCA ess: slurm (MCA v2.1.0, API v3.0.0, Component v4.1.2)
             MCA grpcomm: direct (MCA v2.1.0, API v3.0.0, Component v4.1.2)
                 MCA iof: hnp (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                 MCA iof: orted (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                 MCA iof: tool (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                MCA odls: default (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                MCA odls: pspawn (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                 MCA oob: tcp (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                 MCA plm: isolated (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                 MCA plm: rsh (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                 MCA plm: slurm (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                 MCA ras: simulator (MCA v2.1.0, API v2.0.0, Component
                          v4.1.2)
                 MCA ras: slurm (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                MCA regx: fwd (MCA v2.1.0, API v1.0.0, Component v4.1.2)
                MCA regx: naive (MCA v2.1.0, API v1.0.0, Component v4.1.2)
                MCA regx: reverse (MCA v2.1.0, API v1.0.0, Component v4.1.2)
               MCA rmaps: mindist (MCA v2.1.0, API v2.0.0, Component v4.1.2)
               MCA rmaps: ppr (MCA v2.1.0, API v2.0.0, Component v4.1.2)
               MCA rmaps: rank_file (MCA v2.1.0, API v2.0.0, Component
                          v4.1.2)
               MCA rmaps: resilient (MCA v2.1.0, API v2.0.0, Component
                          v4.1.2)
               MCA rmaps: round_robin (MCA v2.1.0, API v2.0.0, Component
                          v4.1.2)
               MCA rmaps: seq (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                 MCA rml: oob (MCA v2.1.0, API v3.0.0, Component v4.1.2)
              MCA routed: binomial (MCA v2.1.0, API v3.0.0, Component v4.1.2)
              MCA routed: direct (MCA v2.1.0, API v3.0.0, Component v4.1.2)
              MCA routed: radix (MCA v2.1.0, API v3.0.0, Component v4.1.2)
                 MCA rtc: hwloc (MCA v2.1.0, API v1.0.0, Component v4.1.2)
              MCA schizo: flux (MCA v2.1.0, API v1.0.0, Component v4.1.2)
              MCA schizo: ompi (MCA v2.1.0, API v1.0.0, Component v4.1.2)
              MCA schizo: orte (MCA v2.1.0, API v1.0.0, Component v4.1.2)
              MCA schizo: jsm (MCA v2.1.0, API v1.0.0, Component v4.1.2)
              MCA schizo: singularity (MCA v2.1.0, API v1.0.0, Component
                          v4.1.2)
              MCA schizo: slurm (MCA v2.1.0, API v1.0.0, Component v4.1.2)
               MCA state: app (MCA v2.1.0, API v1.0.0, Component v4.1.2)
               MCA state: hnp (MCA v2.1.0, API v1.0.0, Component v4.1.2)
               MCA state: novm (MCA v2.1.0, API v1.0.0, Component v4.1.2)
               MCA state: orted (MCA v2.1.0, API v1.0.0, Component v4.1.2)
               MCA state: tool (MCA v2.1.0, API v1.0.0, Component v4.1.2)
                 MCA bml: r2 (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                MCA coll: adapt (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                MCA coll: basic (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                MCA coll: han (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                MCA coll: inter (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                MCA coll: libnbc (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                MCA coll: self (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                MCA coll: sm (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                MCA coll: sync (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                MCA coll: tuned (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                MCA coll: monitoring (MCA v2.1.0, API v2.0.0, Component
                          v4.1.2)
                MCA fbtl: posix (MCA v2.1.0, API v2.0.0, Component v4.1.2)
               MCA fcoll: dynamic (MCA v2.1.0, API v2.0.0, Component v4.1.2)
               MCA fcoll: dynamic_gen2 (MCA v2.1.0, API v2.0.0, Component
                          v4.1.2)
               MCA fcoll: individual (MCA v2.1.0, API v2.0.0, Component
                          v4.1.2)
               MCA fcoll: two_phase (MCA v2.1.0, API v2.0.0, Component
                          v4.1.2)
               MCA fcoll: vulcan (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                  MCA fs: lustre (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                  MCA fs: ufs (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                  MCA io: ompio (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                  MCA io: romio321 (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                 MCA mtl: psm2 (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                  MCA op: avx (MCA v2.1.0, API v1.0.0, Component v4.1.2)
                 MCA osc: sm (MCA v2.1.0, API v3.0.0, Component v4.1.2)
                 MCA osc: monitoring (MCA v2.1.0, API v3.0.0, Component
                          v4.1.2)
                 MCA osc: pt2pt (MCA v2.1.0, API v3.0.0, Component v4.1.2)
                 MCA osc: rdma (MCA v2.1.0, API v3.0.0, Component v4.1.2)
                 MCA pml: cm (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                 MCA pml: monitoring (MCA v2.1.0, API v2.0.0, Component
                          v4.1.2)
                 MCA pml: ob1 (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                 MCA rte: orte (MCA v2.1.0, API v2.0.0, Component v4.1.2)
            MCA sharedfp: individual (MCA v2.1.0, API v2.0.0, Component
                          v4.1.2)
            MCA sharedfp: lockedfile (MCA v2.1.0, API v2.0.0, Component
                          v4.1.2)
            MCA sharedfp: sm (MCA v2.1.0, API v2.0.0, Component v4.1.2)
                MCA topo: basic (MCA v2.1.0, API v2.2.0, Component v4.1.2)
                MCA topo: treematch (MCA v2.1.0, API v2.2.0, Component
                          v4.1.2)
           MCA vprotocol: pessimist (MCA v2.1.0, API v2.0.0, Component
                          v4.1.2)

Any help would be very much appreciated, as I am pretty much out of ideas.

Thanks,
Nate
