What case is it that you are running on 32 cores? How many atoms?


Remember: more cores does not always mean faster; in fact, it can also mean a crash or MUCH slower runs.


Please read the parallelization section of the user's guide (UG).
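For a small case, k-point parallelism alone is usually enough. A minimal sketch of such a .machines file, reusing the node names from the post below; the number of "1:node" lines (one lapw1/lapw2 job each) is an assumption and should be adapted to the number of k-points and cores you actually want to use:

=========
# k-point parallelism only: each "1:node" line is one k-parallel job of weight 1
1:node16
1:node16
1:node16
1:node16
1:node22
1:node22
1:node22
1:node22
granularity:1
extrafine:1
=========

Without a lapw0 line, lapw0 runs serially on the first node, so the lapw0_mpi code path is not used at all.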


On 23.03.2022 at 09:31, venky ch wrote:
Dear Wien2k users,

I have successfully installed the wien2k.21 version on the HPC cluster. However, while running a test calculation, I get the following error and lapw0_mpi crashes.

=========

/home/proj/21/phyvech/.bashrc: line 43: ulimit: stack size: cannot modify limit: Operation not permitted
/home/proj/21/phyvech/.bashrc: line 43: ulimit: stack size: cannot modify limit: Operation not permitted
setrlimit(): WARNING: Cannot raise stack limit, continuing: Invalid argument
  [the warning above is repeated 16 times in the output]
Abort(744562703) on node 1 (rank 1 in comm 0): Fatal error in PMPI_Bcast: Other MPI error, error stack:
PMPI_Bcast(432).........................: MPI_Bcast(buf=0x7ffd8f8d359c, count=1, MPI_INTEGER, root=0, comm=MPI_COMM_WORLD) failed
PMPI_Bcast(418).........................:
MPIDI_Bcast_intra_composition_gamma(391):
MPIDI_NM_mpi_bcast(153).................:
MPIR_Bcast_intra_tree(219)..............: Failure during collective
MPIR_Bcast_intra_tree(211)..............:
MPIR_Bcast_intra_tree_generic(176)......: Failure during collective
[1]    Exit 15    mpirun -np 32 -machinefile .machine0 /home/proj/21/phyvech/soft/win2k2/lapw0_mpi lapw0.def >> .time00
cat: No match.
grep: *scf1*: No such file or directory
grep: lapw2*.error: No such file or directory

=========
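As an aside, the ulimit messages at the top of the log come from a .bashrc that tries to raise the stack size on nodes where the hard limit cannot be changed. They are separate from the MPI abort, but they can be silenced with a guarded call; a minimal sketch, assuming the line in question currently runs something like "ulimit -s unlimited":

=========
# raise the stack limit only where the shell permits it; stay silent otherwise
ulimit -s unlimited 2>/dev/null || true
=========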

the .machines file is

======= for 102 reduced k-points =========

#
lapw0:node16:16 node22:16
51:node16:16
51:node22:16
granularity:1
extrafine:1

========

"export OMP_NUM_THREADS=1" has been used in the job submission script.

"run_lapw -p -NI -i 400 -ec 0.00001 -cc 0.0001" has been used to start the parallel calculations in available nodes.

Can someone please explain where I am going wrong here? Thanks in advance.

Regards,
Venkatesh
Physics department
IISc Bangalore, INDIA



--
-----------------------------------------------------------------------
Peter Blaha,  Inst. f. Materials Chemistry, TU Vienna, A-1060 Vienna
Phone: +43-158801165300
Email: peter.bl...@tuwien.ac.at   WWW: http://www.imc.tuwien.ac.at   WIEN2k: http://www.wien2k.at
-------------------------------------------------------------------------
_______________________________________________
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at:  
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html
