I think I fixed the MPI error by using MPICH 4.3.2, configured with:

CC=gcc CXX=g++ F77=ifx FC=ifx MPIF77=mpiifx MPIF90=mpiifx MPIFC=mpiifx MPICC=mpigcc \
../configure --prefix=/usr/local --with-device=ch3 --with-pm=hydra \
  --enable-shared --enable-fast=all,O3 \
  MPICHLIB_CFLAGS="-O3 -march=native -mavx2" \
  MPICHLIB_FFLAGS="-O3 -xCORE-AVX2 -march=native" \
  MPICHLIB_CXXFLAGS="-O3 -march=native -mavx2" \
  MPICHLIB_FCFLAGS="-O3 -xCORE-AVX2 -march=native" \
  CFLAGS="-O3 -march=native -mavx2" FCFLAGS="-O3 -xCORE-AVX2 -march=native"
On 01.09.2025 at 15:26, Michael Fechtelkord via Wien wrote:
Hello Gavin,
thanks for your comprehensive report on this issue. I was using openSUSE
Leap 15.6 before, which uses an older kernel and older libraries.
Tumbleweed uses the most recent developments, so things like this can
happen. I have had good experiences with Tumbleweed because it supports
the newest hardware. I am still using the latest ifort version because
it produces nearly no SIGSEGV errors with WIEN2k compared to the newest
ifx version.
I think the easiest fix may be to switch the MPI to MPICH (which is
updated quite often) for WIEN2k and ELPA, or to use the workaround I
described.
Anyway, thanks for your advice again!
Best regards,
Michael
On 01.09.2025 at 13:23, Gavin Abo wrote:
At [1], the current latest version of oneAPI is 2025.2. It lists
"SuSE LINUX Enterprise Server* 15 SP4, SP5, SP6" under supported Linux
distributions. Those look like they are more aligned with the openSUSE
Leap 15.x versions seen in the list at [2]. openSUSE Tumbleweed is a
rolling release that may always be ahead of what Intel develops oneAPI
against. If that is the case, the incompatibility risk is probably
higher with Tumbleweed than with one of the distributions on Intel's
oneAPI supported list.
The error message has:
`/lib64/libm.so.6' for IFUNC symbol `modff'
The modff is a function in the libm library [3] that looks to be part
of the identified GLIBC. I see that you mentioned downgrading GLIBC
(which could mean using an older openSUSE Leap version) is not a
solution for you.
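As a quick check (a sketch; the exact output columns depend on your
binutils version), you can confirm that modff is exported by the system
libm as an indirect function:

# an 'i' in the symbol type column marks a GNU indirect function (IFUNC)
objdump -T /lib64/libm.so.6 | grep modff
nm -D /lib64/libm.so.6 | grep modff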
A few other ideas below that you may or may not want to consider.
Intel has a libimf Compiler Math Library [4] that seems to contain a
modff function [5]. So, statically linking libimf.a might be a
possibility. There is a post about a RELINK error that might be of
interest at [6].
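For instance (untested; -static-intel is a compiler driver option that
links the Intel-provided runtime libraries, libimf among them,
statically, and myprog.f90 is just a placeholder), the link step might
look like:

# link the Intel runtime libraries, including libimf, statically
ifx -O2 myprog.f90 -o myprog -static-intel

# or force only libimf to be linked statically
ifx -O2 myprog.f90 -o myprog -Wl,-Bstatic -limf -Wl,-Bdynamic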
IFUNC in the above message is likely a GNU indirect function, i.e., a
function whose implementation is picked by a resolver at runtime; glibc
uses this to provide multiple implementations optimized for different
architecture levels [7]. I don't recall what architecture was targeted
for your compile, but say it was AVX and the new glibc has an issue
with it. Falling back to an architecture (see [8]) where optimization
isn't as good, say SSE3, might be something to try if the processor
you're using also supports it.
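For example (a sketch; -arch is the Intel compiler option documented at
[8], and myprog.f90 is a placeholder), replacing the AVX-targeting
flags with a lower level:

# target SSE3 instead of e.g. -xHost / -march=native
ifx -O2 -arch sse3 myprog.f90 -o myprog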
If the Intel MPI source code for libmpi.so.12 has to be recompiled
against GLIBC 2.42 by Intel in a future release to fix the issue, an
alternate solution might be to use a different Message Passing
Interface (MPI) implementation such as MPICH or Open MPI [9].
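For example (untested; standard Open MPI configure options, install
prefix chosen arbitrarily), building Open MPI with gcc and ifx might
look like:

./configure --prefix=/opt/openmpi CC=gcc CXX=g++ FC=ifx
make -j 8 all
make install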
[1] https://www.intel.com/content/www/us/en/developer/articles/system-requirements/oneapi-base-toolkit/2025.html
[2] https://en.wikipedia.org/wiki/OpenSUSE#Version_history
[3] https://sourceware.org/newlib/libm.html#modf
[4] https://www.intel.com/content/www/us/en/docs/dpcpp-cpp-compiler/developer-guide-reference/2025-2/compiler-math-library.html
[5] https://www.intel.com/content/www/us/en/docs/dpcpp-cpp-compiler/developer-guide-reference/2025-2/nearest-integer-functions.html
[6] https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Trouble-linking-libimf-a-on-Linux/td-p/1342415
[7] https://maskray.me/blog/2021-01-18-gnu-indirect-function
[8] https://www.intel.com/content/www/us/en/docs/dpcpp-cpp-compiler/developer-guide-reference/2025-2/arch.html
[9] https://en.wikipedia.org/wiki/MPICH#MPICH_derivatives
Kind Regards,
Gavin
WIEN2k user
On 9/1/2025 1:30 AM, Michael Fechtelkord via Wien wrote:
Hello all,
thanks again for all your advice. I have now used ifort 2021.1.1 and
icc 2021.4.0 for compilation (with flag -no-gcc), but the SIGSEGV is
still there. As you can see, it is a C library related communication
error: the new GLIBC 2.42 core library does not work correctly with
Intel's MPI (all versions; see error reports below). It will be
difficult to solve that problem, and downgrading a core library is not
recommendable, so as a workaround I use the sequential integration on
one core. The integration step does not really take long, and the
parallel calculation of the electron current still works.
I am sure this problem will soon arise for most who use mpirun and
have GLIBC 2.42 in their Linux operating system.
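Concretely, the workaround is to run the integration step serially,
e.g. (a sketch; I am assuming here that the serial nmr binary accepts
the same options as nmr_mpi):

# run the integration on one core instead of via mpirun
/usr/local/WIEN2k/nmr -case MgF2 -mode integ -green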
Best regards,
Michael
------------------------------------
EXECUTING: mpirun -np 4 -machinefile .machine_nmrinteg /usr/local/WIEN2k/nmr_mpi -case MgF2 -mode integ -green
/usr/local/WIEN2k/nmr_mpi: Relink `/opt/intel/oneapi/mpi/2021.1.1//lib/release/libmpi.so.12' with `/lib64/libm.so.6' for IFUNC symbol `modff'
/usr/local/WIEN2k/nmr_mpi: Relink `/opt/intel/oneapi/mpi/2021.1.1//lib/release/libmpi.so.12' with `/lib64/libm.so.6' for IFUNC symbol `modff'
/usr/local/WIEN2k/nmr_mpi: Relink `/opt/intel/oneapi/mpi/2021.1.1//lib/release/libmpi.so.12' with `/lib64/libm.so.6' for IFUNC symbol `modff'
/usr/local/WIEN2k/nmr_mpi: Relink `/opt/intel/oneapi/mpi/2021.1.1//lib/release/libmpi.so.12' with `/lib64/libm.so.6' for IFUNC symbol `modff'
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= RANK 0 PID 391888 RUNNING AT localhost
= KILLED BY SIGNAL: 11 (Segmentation fault)
===================================================================================
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= RANK 1 PID 391889 RUNNING AT localhost
= KILLED BY SIGNAL: 11 (Segmentation fault)
===================================================================================
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= RANK 2 PID 391890 RUNNING AT localhost
= KILLED BY SIGNAL: 11 (Segmentation fault)
===================================================================================
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= RANK 3 PID 391891 RUNNING AT localhost
= KILLED BY SIGNAL: 11 (Segmentation fault)
===================================================================================
stop
/opt/fftw-3.3.10/mpi/.libs/mpi-bench: Relink `/opt/intel/oneapi/mpi/2021.1.1//lib/release/libmpi.so.12' with `/lib64/libm.so.6' for IFUNC symbol `modff'
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= RANK 0 PID 66889 RUNNING AT Marlowe
= KILLED BY SIGNAL: 11 (Segmentation fault)
===================================================================================
FAILED mpirun -np 1 /opt/fftw-3.3.10/mpi/mpi-bench:
On 30.08.2025 at 10:15, Michael Fechtelkord via Wien wrote:
Hello Peter,
lapw0 crashes with the same error:
/usr/local/WIEN2k/lapw0_mpi: Relink `/opt/intel/oneapi/mpi/2021.13/lib/libmpi.so.12' with `/lib64/libm.so.6' for IFUNC symbol `modff'
/usr/local/WIEN2k/lapw0_mpi: Relink `/opt/intel/oneapi/mpi/2021.13/lib/libmpi.so.12' with `/lib64/libm.so.6' for IFUNC symbol `modff'
/usr/local/WIEN2k/lapw0_mpi: Relink `/opt/intel/oneapi/mpi/2021.13/lib/libmpi.so.12' with `/lib64/libm.so.6' for IFUNC symbol `modff'
/usr/local/WIEN2k/lapw0_mpi: Relink `/opt/intel/oneapi/mpi/2021.13/lib/libmpi.so.12' with `/lib64/libm.so.6' for IFUNC symbol `modff'
[1] Exitcode 255 mpirun -np 4 -machinefile .machine0 /usr/local/WIEN2k/lapw0_mpi lapw0.def >> .time00
OMP_NUM_THREADS=1 does not change anything.
I will try the 2021.1.0 version of oneAPI and get back with the
results soon.
Best regards,
Michael
On 29.08.2025 at 16:10, Peter Blaha wrote:
Just for some more info:
Does this crash also occur in an mpirun of lapw0? (That would point
to fftw.)
What is your OMP_NUM_THREADS variable? Set it to one (also in
.bashrc).
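For example, in ~/.bashrc:

# limit OpenMP to a single thread per MPI process
export OMP_NUM_THREADS=1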
On 29.08.2025 at 12:54, Michael Fechtelkord via Wien wrote:
Dear all,
I am back now to ifort 2021.13 because it crashes much less than the
new ifx compiler, which also produces OMP errors using fftw3 during
compilation.
All worked fine until the last openSUSE Tumbleweed version using
kernel 6.16.3-1 (openSUSE Tumbleweed 20250827). Parallel jobs
terminate when using "mpirun" with the following error. The same
error appears when I compile fftw 3.3.10 with gcc 15.1 and run
"make check": the check routine crashes under mpirun with the same
error.
Does somebody know what that relink error means and how to solve
it? Is it maybe the openSUSE libm.so.6 library (glibc version
2.42.1) causing the error, because the new version does not contain
the symbol libmpi is requesting?
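For what it is worth, the installed glibc version and libmpi's
dependencies can be checked like this (a sketch; ldd and rpm are
standard tools, and the Relink warning may already appear when the
dynamic linker resolves libmpi directly):

# installed glibc version
rpm -q glibc
ldd --version | head -1

# trace libmpi's dynamic dependencies
ldd /opt/intel/oneapi/mpi/2021.13/lib/libmpi.so.12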
Best regards,
Michael
---------------
EXECUTING: mpirun -np 4 -machinefile .machine_nmrinteg /usr/local/WIEN2k/nmr_mpi -case MgF2 -mode integ -green
/usr/local/WIEN2k/nmr_mpi: Relink `/opt/intel/oneapi/mpi/2021.13/lib/libmpi.so.12' with `/lib64/libm.so.6' for IFUNC symbol `modff'
/usr/local/WIEN2k/nmr_mpi: Relink `/opt/intel/oneapi/mpi/2021.13/lib/libmpi.so.12' with `/lib64/libm.so.6' for IFUNC symbol `modff'
/usr/local/WIEN2k/nmr_mpi: Relink `/opt/intel/oneapi/mpi/2021.13/lib/libmpi.so.12' with `/lib64/libm.so.6' for IFUNC symbol `modff'
/usr/local/WIEN2k/nmr_mpi: Relink `/opt/intel/oneapi/mpi/2021.13/lib/libmpi.so.12' with `/lib64/libm.so.6' for IFUNC symbol `modff'
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= RANK 0 PID 396989 RUNNING AT localhost
= KILLED BY SIGNAL: 11 (Segmentation fault)
===================================================================================
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= RANK 1 PID 396990 RUNNING AT localhost
= KILLED BY SIGNAL: 11 (Segmentation fault)
===================================================================================
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= RANK 2 PID 396991 RUNNING AT localhost
= KILLED BY SIGNAL: 11 (Segmentation fault)
===================================================================================
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= RANK 3 PID 396992 RUNNING AT localhost
= KILLED BY SIGNAL: 11 (Segmentation fault)
===================================================================================
stop
/opt/fftw-3.3.10/mpi/.libs/mpi-bench: Relink `/opt/intel/oneapi/mpi/2021.13/lib/libmpi.so.12' with `/lib64/libm.so.6' for IFUNC symbol `modff'
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= RANK 0 PID 183831 RUNNING AT planck
= KILLED BY SIGNAL: 11 (Segmentation fault)
===================================================================================
FAILED mpirun -np 1 /opt/fftw-3.3.10/mpi/mpi-bench:
--
Dr. Michael Fechtelkord
Institut für Geowissenschaften
Ruhr-Universität Bochum
Universitätsstr. 150
D-44780 Bochum
Phone: +49 (234) 32-24380
Fax: +49 (234) 32-04380
Email: [email protected]
Web Page:
https://www.ruhr-uni-bochum.de/kristallographie/kc/mitarbeiter/fechtelkord/
_______________________________________________
Wien mailing list
[email protected]
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at:
http://www.mail-archive.com/[email protected]/index.html