All,
I am seeing a fatal error with Open MPI 2.0.2 when requesting support for
MPI_THREAD_MULTIPLE and afterwards creating a window with
MPI_Win_create. I am attaching a small reproducer. The output I get is
the following:
```
MPI_THREAD_MULTIPLE supported: yes
MPI_THREAD_MULTIPLE supported: yes
MPI_THREAD_MULTIPLE supported: yes
MPI_THREAD_MULTIPLE supported: yes
--------------------------------------------------------------------------
The OSC pt2pt component does not support MPI_THREAD_MULTIPLE in this
release.
Workarounds are to run on a single node, or to use a system with an RDMA
capable network such as Infiniband.
--------------------------------------------------------------------------
[beryl:10705] *** An error occurred in MPI_Win_create
[beryl:10705] *** reported by process [2149974017,2]
[beryl:10705] *** on communicator MPI_COMM_WORLD
[beryl:10705] *** MPI_ERR_WIN: invalid window
[beryl:10705] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[beryl:10705] *** and potentially your MPI job)
[beryl:10698] 3 more processes have sent help message help-osc-pt2pt.txt / mpi-thread-multiple-not-supported
[beryl:10698] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
[beryl:10698] 3 more processes have sent help message help-mpi-errors.txt / mpi_errors_are_fatal
```
I am running on a single node (my laptop). Both Open MPI and the
application were compiled with GCC 5.3.0. Naturally, there is no
InfiniBand support available. Should I signal to Open MPI that I am
indeed running on a single node? If so, how can I do that? Can't Open
MPI detect this automatically? The test succeeds if I only request
MPI_THREAD_SINGLE.
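For what it's worth, the only general mechanism I am aware of for inspecting and forcing the one-sided (OSC) component is via MCA parameters; whether any of the available components actually supports MPI_THREAD_MULTIPLE in this release is exactly my question. The component name below is taken from the attached ompi_info output, and I have not verified that forcing it avoids the error:

```shell
# List the available one-sided (OSC) components and their parameters:
ompi_info --param osc all --level 9

# Force a specific OSC component for a run, e.g. the rdma component
# (untested here -- it may simply fail for a different reason):
mpirun -n 4 --mca osc rdma ./reproducer
```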
Open MPI 2.0.2 was configured with only the
--enable-mpi-thread-multiple and --prefix configure parameters. I am
attaching the output of ompi_info.
Please let me know if you need any additional information.
Cheers,
Joseph
--
Dipl.-Inf. Joseph Schuchart
High Performance Computing Center Stuttgart (HLRS)
Nobelstr. 19
D-70569 Stuttgart
Tel.: +49(0)711-68565890
Fax: +49(0)711-6856832
E-Mail: schuch...@hlrs.de
```
#include <mpi.h>
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int provided;
    /* Request the highest thread support level. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    printf("MPI_THREAD_MULTIPLE supported: %s\n",
           (provided == MPI_THREAD_MULTIPLE) ? "yes" : "no");

    /* Creating a window is what triggers the fatal error. */
    MPI_Win win;
    char *base = malloc(sizeof(uint64_t));
    MPI_Win_create(base, sizeof(uint64_t), sizeof(uint64_t),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);
    MPI_Win_free(&win);
    free(base);

    MPI_Finalize();
    return 0;
}
```
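For completeness, I build and run the reproducer as follows (4 ranks, matching the four "supported: yes" lines in the output above):

```shell
# Compile with the Open MPI wrapper compiler and launch with 4 ranks:
mpicc -o reproducer reproducer.c
mpirun -n 4 ./reproducer
```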
Package: Open MPI joseph@beryl Distribution
Open MPI: 2.0.2
Open MPI repo revision: v2.0.1-348-ge291d0e
Open MPI release date: Jan 31, 2017
Open RTE: 2.0.2
Open RTE repo revision: v2.0.1-348-ge291d0e
Open RTE release date: Jan 31, 2017
OPAL: 2.0.2
OPAL repo revision: v2.0.1-348-ge291d0e
OPAL release date: Jan 31, 2017
MPI API: 3.1.0
Ident string: 2.0.2
Prefix: /home/joseph/opt/openmpi-2.0.2
Configured architecture: x86_64-unknown-linux-gnu
Configure host: beryl
Configured by: joseph
Configured on: Wed Feb 1 11:03:54 CET 2017
Configure host: beryl
Built by: joseph
Built on: Wed Feb 1 11:09:15 CET 2017
Built host: beryl
C bindings: yes
C++ bindings: no
Fort mpif.h: no
Fort use mpi: no
Fort use mpi size: deprecated-ompi-info-value
Fort use mpi_f08: no
Fort mpi_f08 compliance: The mpi_f08 module was not built
Fort mpi_f08 subarrays: no
Java bindings: no
Wrapper compiler rpath: runpath
C compiler: gcc
C compiler absolute: /usr/bin/gcc
C compiler family name: GNU
C compiler version: 5.4.1
C++ compiler: g++
C++ compiler absolute: /usr/bin/g++
Fort compiler: none
Fort compiler abs: none
Fort ignore TKR: no
Fort 08 assumed shape: no
Fort optional args: no
Fort INTERFACE: no
Fort ISO_FORTRAN_ENV: no
Fort STORAGE_SIZE: no
Fort BIND(C) (all): no
Fort ISO_C_BINDING: no
Fort SUBROUTINE BIND(C): no
Fort TYPE,BIND(C): no
Fort T,BIND(C,name="a"): no
Fort PRIVATE: no
Fort PROTECTED: no
Fort ABSTRACT: no
Fort ASYNCHRONOUS: no
Fort PROCEDURE: no
Fort USE...ONLY: no
Fort C_FUNLOC: no
Fort f08 using wrappers: no
Fort MPI_SIZEOF: no
C profiling: yes
C++ profiling: no
Fort mpif.h profiling: no
Fort use mpi profiling: no
Fort use mpi_f08 prof: no
C++ exceptions: no
Thread support: posix (MPI_THREAD_MULTIPLE: yes, OPAL support: yes, OMPI progress: no, ORTE progress: yes, Event lib: yes)
Sparse Groups: no
Internal debug support: no
MPI interface warnings: yes
MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
dl support: yes
Heterogeneous support: no
mpirun default --prefix: no
MPI I/O support: yes
MPI_WTIME support: native
Symbol vis. support: yes
Host topology support: yes
MPI extensions: affinity, cuda
MPI_MAX_PROCESSOR_NAME: 256
MPI_MAX_ERROR_STRING: 256
MPI_MAX_OBJECT_NAME: 64
MPI_MAX_INFO_KEY: 36
MPI_MAX_INFO_VAL: 256
MPI_MAX_PORT_NAME: 1024
MPI_MAX_DATAREP_STRING: 128
MCA allocator: basic (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA allocator: bucket (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA backtrace: execinfo (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA btl: vader (MCA v2.1.0, API v3.0.0, Component v2.0.2)
MCA btl: tcp (MCA v2.1.0, API v3.0.0, Component v2.0.2)
MCA btl: sm (MCA v2.1.0, API v3.0.0, Component v2.0.2)
MCA btl: self (MCA v2.1.0, API v3.0.0, Component v2.0.2)
MCA btl: openib (MCA v2.1.0, API v3.0.0, Component v2.0.2)
MCA dl: dlopen (MCA v2.1.0, API v1.0.0, Component v2.0.2)
MCA event: libevent2022 (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA hwloc: hwloc1112 (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA if: posix_ipv4 (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA if: linux_ipv6 (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA installdirs: env (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA installdirs: config (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA memory: patcher (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA mpool: sm (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA mpool: grdma (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA patcher: overwrite (MCA v2.1.0, API v1.0.0, Component v2.0.2)
MCA pmix: pmix112 (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA pstat: linux (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA rcache: vma (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA sec: basic (MCA v2.1.0, API v1.0.0, Component v2.0.2)
MCA shmem: posix (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA shmem: mmap (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA shmem: sysv (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA timer: linux (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA dfs: test (MCA v2.1.0, API v1.0.0, Component v2.0.2)
MCA dfs: app (MCA v2.1.0, API v1.0.0, Component v2.0.2)
MCA dfs: orted (MCA v2.1.0, API v1.0.0, Component v2.0.2)
MCA errmgr: default_hnp (MCA v2.1.0, API v3.0.0, Component v2.0.2)
MCA errmgr: default_orted (MCA v2.1.0, API v3.0.0, Component v2.0.2)
MCA errmgr: default_tool (MCA v2.1.0, API v3.0.0, Component v2.0.2)
MCA errmgr: default_app (MCA v2.1.0, API v3.0.0, Component v2.0.2)
MCA ess: env (MCA v2.1.0, API v3.0.0, Component v2.0.2)
MCA ess: singleton (MCA v2.1.0, API v3.0.0, Component v2.0.2)
MCA ess: pmi (MCA v2.1.0, API v3.0.0, Component v2.0.2)
MCA ess: slurm (MCA v2.1.0, API v3.0.0, Component v2.0.2)
MCA ess: hnp (MCA v2.1.0, API v3.0.0, Component v2.0.2)
MCA ess: tool (MCA v2.1.0, API v3.0.0, Component v2.0.2)
MCA filem: raw (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA grpcomm: direct (MCA v2.1.0, API v3.0.0, Component v2.0.2)
MCA iof: mr_orted (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA iof: orted (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA iof: mr_hnp (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA iof: hnp (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA iof: tool (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA notifier: syslog (MCA v2.1.0, API v1.0.0, Component v2.0.2)
MCA odls: default (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA oob: ud (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA oob: usock (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA oob: tcp (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA plm: isolated (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA plm: slurm (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA plm: rsh (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA ras: simulator (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA ras: slurm (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA ras: loadleveler (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA rmaps: ppr (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA rmaps: rank_file (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA rmaps: mindist (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA rmaps: resilient (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA rmaps: round_robin (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA rmaps: staged (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA rmaps: seq (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA rml: oob (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA routed: debruijn (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA routed: direct (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA routed: binomial (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA routed: radix (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA rtc: hwloc (MCA v2.1.0, API v1.0.0, Component v2.0.2)
MCA rtc: freq (MCA v2.1.0, API v1.0.0, Component v2.0.2)
MCA schizo: ompi (MCA v2.1.0, API v1.0.0, Component v2.0.2)
MCA state: tool (MCA v2.1.0, API v1.0.0, Component v2.0.2)
MCA state: orted (MCA v2.1.0, API v1.0.0, Component v2.0.2)
MCA state: dvm (MCA v2.1.0, API v1.0.0, Component v2.0.2)
MCA state: staged_hnp (MCA v2.1.0, API v1.0.0, Component v2.0.2)
MCA state: app (MCA v2.1.0, API v1.0.0, Component v2.0.2)
MCA state: hnp (MCA v2.1.0, API v1.0.0, Component v2.0.2)
MCA state: novm (MCA v2.1.0, API v1.0.0, Component v2.0.2)
MCA state: staged_orted (MCA v2.1.0, API v1.0.0, Component v2.0.2)
MCA bml: r2 (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA coll: basic (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA coll: sm (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA coll: tuned (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA coll: inter (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA coll: libnbc (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA coll: self (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA coll: sync (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA fbtl: posix (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA fcoll: individual (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA fcoll: static (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA fcoll: two_phase (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA fcoll: dynamic (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA fs: ufs (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA io: romio314 (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA io: ompio (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA osc: pt2pt (MCA v2.1.0, API v3.0.0, Component v2.0.2)
MCA osc: rdma (MCA v2.1.0, API v3.0.0, Component v2.0.2)
MCA osc: sm (MCA v2.1.0, API v3.0.0, Component v2.0.2)
MCA pml: v (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA pml: cm (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA pml: ob1 (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA rte: orte (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA sharedfp: lockedfile (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA sharedfp: individual (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA sharedfp: sm (MCA v2.1.0, API v2.0.0, Component v2.0.2)
MCA topo: basic (MCA v2.1.0, API v2.2.0, Component v2.0.2)
MCA vprotocol: pessimist (MCA v2.1.0, API v2.0.0, Component v2.0.2)
_______________________________________________
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users