[OMPI devel] OpenMPI and maker - Multiple messages

2021-02-10 Thread Thomas Eylenbosch via devel
Hello

We are trying to run maker (http://www.yandell-lab.org/software/maker.html) in
combination with Open MPI.

When we submit a job that runs maker with Open MPI, we see the following error
in the log file:
--Next Contig--
#-
Another instance of maker is processing this contig!!
SeqID: chrA10
Length: 17398227
#-

According to
http://gmod.827538.n3.nabble.com/Does-maker-support-muti-processing-for-a-single-long-fasta-sequence-using-openMPI-td4061342.html
we have to run the following command:

mpiexec -mca btl ^openib -n 40 maker -help

"If you get a single help message then everything is fine. If you get 40 help
messages, then MPI is not communicating correctly."

We are using the following command to demonstrate what is going wrong:
mpiexec -mca btl ^openib -N 5 gcc --version
gcc (GCC) 10.2.0
Copyright (C) 2020 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

gcc (GCC) 10.2.0
Copyright (C) 2020 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

gcc (GCC) 10.2.0
Copyright (C) 2020 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

gcc (GCC) 10.2.0
Copyright (C) 2020 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

gcc (GCC) 10.2.0
Copyright (C) 2020 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.



So we are getting the message five times. Does this mean that Open MPI is not
correctly installed on our cluster?
We are using EasyBuild to build and install our Open MPI module (with the
default OpenMPI easyblock).
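
For reference, here is a minimal MPI check program (a hypothetical mpi_check.c,
sketched as an illustration; not part of maker). Unlike gcc --version, which is
not an MPI program and therefore prints once per launched copy no matter what,
each rank of an MPI program can report its rank and the shared job size, which
tests whether the processes actually join a single MPI job:

#include <mpi.h>
#include <stdio.h>

/* Each rank should report a distinct rank and the same job size. */
int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}

Built with "mpicc mpi_check.c -o mpi_check" and launched with
"mpiexec -mca btl ^openib -N 5 ./mpi_check", a working installation should print
"rank 0 of 5" through "rank 4 of 5"; five copies of "rank 0 of 1" would suggest
the processes are not communicating through MPI.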

Best regards
Thomas Eylenbosch
DevOps Engineer (OnSite), Gluo N.V.


[OMPI devel] Open MPI 4.1.1rc1

2021-02-10 Thread Raja, Raghu via devel
Open MPI v4.1.1rc1 is now available at 
https://www.open-mpi.org/software/ompi/v4.1/

Changes and fixes since the 4.1.0 release include:

- Reverted temporary solution that worked around launch issues in
  SLURM v20.11.{0,1,2}. SchedMD encourages users to avoid these
  versions and to upgrade to v20.11.3 or newer.
- Fixed configuration issue on Apple Silicon observed with
  Homebrew. Thanks to François-Xavier Coudert for reporting the issue.
- Disabled gcc built-in atomics by default on aarch64 platforms.
- Fixed SLURM support to mark ORTE daemons as non-MPI tasks.
- Improved AVX detection to more accurately detect supported
  platforms.  Also improved the generated AVX code, and switched to
  using word-based MCA params for the op/avx component (vs. numeric
  bit flags).
- Improved OFI compatibility support and fixed memory leaks in error
  handling paths.
- Fixed MPI debugger support (i.e., the MPIR_Breakpoint() symbol).
  Thanks to @louisespellacy-arm for reporting the issue.
- Removed PML uniformity check from the UCX PML to address performance
  regression.
- Fixed MPI_Init_thread(3) statement about the C++ binding and updated
  references about MPI_THREAD_MULTIPLE (see the sketch after this list).
  Thanks to Andreas Lösel for bringing the outdated docs to our attention.
- Added fence_nb to Flux PMIx support to address segmentation faults.
- Ensured progress of AIO requests in the POSIX FBTL component to
  prevent exceeding the maximum number of pending requests on macOS.
- Used OPAL's multi-thread support in the orted to leverage atomic
  operations for object refcounting.
- Fixed segv when launching with static TCP ports.
- Fixed --debug-daemons mpirun CLI option.
- Fixed bug where mpirun did not honor --host in a managed job
  allocation.
- Made a managed allocation filter a hostfile/hostlist.
- Fixed bug to mark a generalized request as pending once initiated.
- Fixed external PMIx v4.x check.
- Fixed OSHMEM build with `--enable-mem-debug`.
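
For context on the MPI_THREAD_MULTIPLE item above, here is a minimal sketch (an
illustration, not part of the release notes) of how an application requests that
thread level at initialization and checks what the library actually provided:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;
    /* Request full multi-threaded support; MPI reports the level it grants. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE)
        fprintf(stderr, "MPI_THREAD_MULTIPLE not provided (got level %d)\n",
                provided);
    MPI_Finalize();
    return 0;
}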

Please test and report issues via email to devel@lists.open-mpi.org, or open a
GitHub issue at https://github.com/open-mpi/ompi/issues/.

Raghu