Hello all,
just a short final note on the "-quota 8" option running on 8 nodes
(from Peter: "PPS: -quota 8 (or 24) might help and still utilizing all
cores, but I'm not sure if it would save enough memory in the current
steps.").
I did run the nmr calculation with "x_nmr_lapw -p
Dear Laurence,
I used 40 k-points.
The integration part (-mode integ) causes no problems; the
memory-consuming part is the current part (-mode current).
Your hint for lapw1 makes it even clearer that it would be safer to use
4 parallel calculations instead of eight without losing much speed.
For my own curiosity, is it 40,000 k-points or 40 k-points?
N.B., as Peter suggested, did you try using mpi? That would be four
lines of
nmr_integ:localhost:2
I suspect (but might be wrong) that this will reduce your memory usage by a
factor of 2, and will only be slightly slower than what you have.
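Spelled out, the suggestion would mean four 2-core MPI entries in .machines instead of eight single-core ones, roughly like the sketch below. The 1:localhost:2 form for the k-parallel lines and the exact core split are my assumptions based on common WIEN2k .machines syntax, not tested settings from this thread:

```
granularity:1
omp_lapw0:8
omp_global:2
1:localhost:2
1:localhost:2
1:localhost:2
1:localhost:2
nmr_integ:localhost:2
nmr_integ:localhost:2
nmr_integ:localhost:2
nmr_integ:localhost:2
```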
Hello all,
as far as I can see, a job with 8 cores may be faster, but it uses
double the scratch space (8 partial NMR vectors, with sizes depending
on the k-mesh per direction, e.g. nmr_mqx, instead of 4 partial
vectors), and it also doubles the RAM usage of the NMR current
calculation.
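The scaling argument above can be sketched numerically. This is purely illustrative; the per-vector size is a hypothetical placeholder, not a value measured in this thread:

```python
# Each k-parallel NMR job writes its own set of partial current vectors
# (e.g. nmr_mqx), so the scratch footprint (and, analogously, the RAM of
# the current step) grows linearly with the number of parallel jobs.
# gb_per_vector is a made-up placeholder size.

def partial_vector_footprint(n_jobs: int, gb_per_vector: float = 10.0) -> float:
    """Rough total size of the partial NMR vectors for n_jobs k-parallel jobs."""
    return n_jobs * gb_per_vector

print(partial_vector_footprint(4))  # → 40.0 (4 partial vectors)
print(partial_vector_footprint(8))  # → 80.0 (twice the footprint of 4 jobs)
```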
It shows EXECUTING: /usr/local/WIEN2k/nmr_mpi -case MS_2M1_Al2
-mode current -green -scratch /scratch/WIEN2k/ -noco
in all cases, and htop shows the values I provided below.
Best regards,
Michael
On 12.05.2024 at 16:01, Peter Blaha wrote:
This makes sense.
Please let me know if it shows
EXECUTING: /usr/local/WIEN2k/nmr_mpi -case MS_2M1_Al2 -mode current
-green -scratch /scratch/WIEN2k/ -noco
or only nmr -case ...
In any case, it is running correctly.
PS: I know that the current step also needs a lot of memory.
Hello all, hello Peter,
This is what is really running in the background (from htop; this is a
new job with 4 nodes, but it was the same with 8 nodes, -p 1-8), so no
nmr_mpi.
CPU% MEM% TIME+ Command
96.0 14.9 19h06:05 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode current
-green -scratch
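Instead of scanning htop by eye, a generic ps one-liner can confirm which binary (nmr vs. nmr_mpi) is actually running. This is a standard Linux idiom, not something from the WIEN2k scripts; the [n] bracket trick stops grep from matching its own command line, and the echo fallback just keeps the pipeline from failing when nothing matches:

```shell
# Show CPU%, MEM%, cumulative CPU time and the full command line of any
# running process whose arguments contain "nmr".
ps -eo pcpu,pmem,time,args | grep '[n]mr' || echo "no nmr process found"
```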
Hello Peter,
I just use "x_nmr_lapw -p" and the rest is initiated by the nmr script.
The line "/usr/local/WIEN2k/nmr_mpi -case MS_2M1_Al2 -mode current
-green -scratch /scratch/WIEN2k/ -noco" is just part of the
whole procedure and not initiated by me manually. (I only copied the
line from the output.)
Hello Michael,
I don't understand the line:
/usr/local/WIEN2k/nmr_mpi -case MS_2M1_Al2 -mode current -green
-scratch /scratch/WIEN2k/ -noco
The current mode should run only k-parallel, not in mpi?
PS: The repetition of nmr_integ:localhost is useless; nmr -mode integ
runs only
Hello Peter,
this is the .machines file content:
granularity:1
omp_lapw0:8
omp_global:2
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
Hmm?
Are you using k-parallel AND mpi-parallel? This could overload
the machine.
What does the .machines file look like?
On 10.05.2024 at 18:15, Michael Fechtelkord via Wien wrote:
Dear all,
the following problem occurs when using the NMR part of WIEN2k (23.2)
on an openSUSE LEAP 15.5 Intel platform. WIEN2k was compiled using
oneAPI 2024.1 ifort and gcc 13.2.1. I am using ELPA 2024.03.01, Libxc
6.22, fftw 3.3.10, MPICH 4.2.1, and the oneAPI 2024.1 MKL libraries.