Hi,
I can confirm the problem (only with gfortran; it does not appear with
ifort). There is an uninitialized variable iz, which can cause a problem
in a write statement (if it does not happen to be zero).
Fortunately, the problem occurs only for single atoms in huge unit cells,
and only with gfortran.
Fix:
Could you now help me locate the "libmpi_usempif08.so.40"?
The libmpi_usempif08.so.40 seems to be an Open MPI file, based on the
webpage at:
https://superuser.com/questions/1500931/error-in-linking-libmpi-so
It looks like a shared-memory Open MPI build might have been linked in
when you compiled.
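If it helps, a shared library like this can usually be tracked down with
standard Linux tools; a minimal sketch (the lapw1_mpi path under $WIENROOT
is an assumption, adjust to your installation):

  # ask the dynamic linker cache, if the library is registered there
  ldconfig -p | grep libmpi_usempif08
  # or check which MPI libraries a WIEN2k MPI binary actually resolves to
  ldd $WIENROOT/lapw1_mpi | grep libmpi
  # fallback: search the filesystem directly
  find / -name 'libmpi_usempif08.so*' 2>/dev/null

The ldd output also tells you which MPI installation was really linked in
at compile time.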
Please send your struct file to my private email for testing.
On 06.08.2021 at 13:28, SM Alay-e-Abbas wrote:
Dear WIEN2k mailing list,
While computing atomic energies using a large fcc cell (→
http://www.wien2k.at/reg_user/faq/cohesive_energies.html) with WIEN2k 21, I
have encountered the following during the execution of the nn program:
Dear Prof. Blaha,
Thanks for your clarification. Could you now help me locate the
"libmpi_usempif08.so.40"?
I would like to know whether there is any relation between the mentioned
error and OMP_SWITCH.
Do you suggest that I should install WIEN2k by loading the cray-mpich
module instead of the Intel one?
In any queuing system there is a way for your job to find out which nodes
it has been assigned.
Then use a script to write the .machines file on the fly. Examples at:
http://www.wien2k.at/reg_user/faq/
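On SLURM, for example, the assigned nodes are available through
$SLURM_JOB_NODELIST. A minimal sketch for such a job script (the 4 MPI
processes per node and the one-job-per-node layout are assumptions; adapt
them to your case and check the FAQ examples for the exact .machines
syntax):

  # expand the compressed SLURM nodelist into one hostname per line
  hosts=$(scontrol show hostnames $SLURM_JOB_NODELIST)
  rm -f .machines
  echo "granularity:1" >> .machines
  for h in $hosts; do
      # weight 1, run the MPI job on this node with 4 processes (assumed)
      echo "1:$h:4" >> .machines
  done
  echo "extrafine:1" >> .machines

Other queuing systems (PBS, LSF, ...) provide the nodelist in a similar
environment variable or hostfile; only the first line of the sketch changes.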
On 8/6/21 at 8:12 AM, venky ch wrote:
Dear Prof. Blaha,
Thank you for your reply.
Yes, I have also loaded the intel module in the job script.
Further, I would like to know: if there is no way to get the nodelist
from an HPC system, how could one write the .machines files to run the MPI
parallelization? Is there any way to have a