Re: [Wien] WIen2k install using Oneapi

2024-06-21 Thread Michael Fechtelkord via Wien

Just a short note concerning the recent OpenMP problem ...


A new one-api version has now been published (in the online repositories) 
which no longer contains that bug. Compilation runs fine without setting 
the extra flag now.


Version: 2024.2.0
Date: June 18, 2024


Best regards,

Michael


On 28.05.2024 at 07:13, 夏宇阳 wrote:

Thank you, all problems have been solved.

The key is to find the location of omp_lib.mod. In the most recent oneAPI, 
you should add the flag -I/opt/intel/oneapi/2024.1/opt/compiler/include/intel64 
to the compiler options.
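A minimal way to test this (a sketch; the include path is the one quoted 
above and may differ between oneAPI layouts) is to compile a tiny OpenMP 
program against that directory:

program check_omp
  use omp_lib
  implicit none
  ! if the USE line compiles, the module path is correct
  print *, 'max OpenMP threads:', omp_get_max_threads()
end program check_omp

compiled, e.g., with

ifort -qopenmp -I/opt/intel/oneapi/2024.1/opt/compiler/include/intel64 check_omp.f90 -o check_omp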

Best wishes!

Xiayuyang




- Original Message -
From: "Peter Blaha"
To: "wien"
Sent: Tuesday, May 28, 2024, 4:53:15 AM
Subject: Re: [Wien] WIen2k install using Oneapi

Seems to be a problem with the most recent ONEAPI. The include path for
the compiler should be automatically set properly when you source the
compilervars.sh files.

Try to define an additional include path:

O   Compiler options: -O -FR -mp1 -w -prec_div -pc80 -pad -ip 
-DINTEL_VML -traceback -assume buffered_io -I$(MKLROOT)/include 
-I$(IFORTROOT)/linux/compiler/include/intel64


Please check whether, in your oneAPI, the IFORTROOT variable is set and the
file structure is still identical to mine.

I do have an omp_lib.mod in the include path.
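A quick way to check both (a sketch, assuming a default installation under 
/opt/intel/oneapi):

echo $IFORTROOT
find /opt/intel/oneapi -name omp_lib.mod 2>/dev/null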



On 27.05.2024 at 20:19, 夏宇阳 wrote:

mstar.f90(57): error #7002: Error in opening the compiled module file.  Check 
INCLUDE paths.   [OMP_LIB]
USE OMP_LIB
^

sumpara.F(4): error #7002: Error in opening the compiled module file.  Check 
INCLUDE paths.   [OMP_LIB]
use omp_lib
--^
sumpara.F(407): error #6363: The intrinsic data types of the arguments must be 
the same.   [MAX]
BUFSIZE=MAX(NKKVL/OMP_GET_NUM_THREADS()+1, 1000)
-^
Three errors still exist in compile.msg.

It seems that we need the omp_lib.mod module file. What is that? And how can we get it?

Best wishes!
--

---
Peter Blaha,  Inst. f. Materials Chemistry, TU Vienna, A-1060 Vienna
Phone: +43-158801165300
Email: peter.bl...@tuwien.ac.at
WWW: http://www.imc.tuwien.ac.at   WIEN2k: http://www.wien2k.at
-



--
Dr. Michael Fechtelkord

Institut für Geologie, Mineralogie und Geophysik
Ruhr-Universität Bochum
Universitätsstr. 150
D-44780 Bochum

Phone: +49 (234) 32-24380
Fax:  +49 (234) 32-04380
Email: michael.fechtelk...@ruhr-uni-bochum.de
Web Page: https://www.ruhr-uni-bochum.de/kristallographie/kc/mitarbeiter/fechtelkord/
___
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at:  
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html


Re: [Wien] WIen2k install using Oneapi

2024-05-27 Thread Michael Fechtelkord via Wien
I have the same problem with the oneAPI online repositories using zypper 
(it does not find the OpenMP libs). However, the offline installer using 
the installation script still works fine:



basekit:

https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit-download.html?operatingsystem=linux&distributions=offline


hpckit:

https://www.intel.com/content/www/us/en/developer/tools/oneapi/hpc-toolkit-download.html?operatingsystem=linux&distributions=offline
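After downloading, the offline kits are plain shell archives that are run 
directly (a sketch; the actual file names depend on the downloaded version):

sh ./l_BaseKit_p_2024.*_offline.sh
sh ./l_HPCKit_p_2024.*_offline.sh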


Regards,

 Michael


On 27.05.2024 at 22:53, Peter Blaha wrote:
Seems to be a problem with the most recent ONEAPI. The include path 
for the compiler should be automatically set properly when you source 
the compilervars.sh files.


Try to define an additional include path:

  O   Compiler options:    -O -FR -mp1 -w -prec_div -pc80 -pad -ip 
-DINTEL_VML -traceback -assume buffered_io -I$(MKLROOT)/include 
-I$(IFORTROOT)/linux/compiler/include/intel64



Please check whether, in your oneAPI, the IFORTROOT variable is set and the 
file structure is still identical to mine.


I do have an omp_lib.mod in the include path.



On 27.05.2024 at 20:19, 夏宇阳 wrote:
mstar.f90(57): error #7002: Error in opening the compiled module 
file.  Check INCLUDE paths. [OMP_LIB]

USE OMP_LIB
^

sumpara.F(4): error #7002: Error in opening the compiled module 
file.  Check INCLUDE paths.   [OMP_LIB]

   use omp_lib
--^
sumpara.F(407): error #6363: The intrinsic data types of the 
arguments must be the same.   [MAX]

   BUFSIZE=MAX(NKKVL/OMP_GET_NUM_THREADS()+1, 1000)
-^
Three errors still exist in compile.msg.

It seems that we need the omp_lib.mod module file. What is that? And how can we 
get it?


Best wishes!
--


---
Peter Blaha,  Inst. f. Materials Chemistry, TU Vienna, A-1060 Vienna
Phone: +43-158801165300
Email: peter.bl...@tuwien.ac.at
WWW:   http://www.imc.tuwien.ac.at  WIEN2k: http://www.wien2k.at
-

___
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at: 
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html




Re: [Wien] WIen2k install using Oneapi

2024-05-27 Thread Michael Fechtelkord via Wien

The SRC_wplot error was already solved by Jan Doumont in a previous thread:


Dear Peter,

Interestingly, I get the same error when using IFORT with the newest 
oneapi...

ifort  -O -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback 
-assume buffered_io -I/opt/intel/oneapi/mkl/2024.0/include 
-DHAVE_PTR_ALLOC_GENERICS  -Ilib -free -gen-interface nosource 
-traceback -g  -I../SRC_w2w/lib -I../SRC_w2w/lib -c modules.f 
-olib/modules.o -module lib
ifort: remark #10448: Intel(R) Fortran Compiler Classic (ifort) is now 
deprecated and will be discontinued late 2024. Intel recommends that 
customers transition now to using the LLVM-based Intel(R) Fortran 
Compiler (ifx) for continued Windows* and Linux* support, new language 
support, new language features, and optimizations. Use 
'-diag-disable=10448' to disable this message.
modules.f(195): error #6911: The syntax of this substring is invalid.   [CART]

   inw%grid%len = (/ ( sqrt(sum( inw%grid%Cart(:,i)**2 )), i=1,3 ) /)
--^
compilation aborted for modules.f (code 1)
make: *** [Makefile:140: lib/modules.o] Error 1

However, I found the following workaround works with both ifort and ifx 
on oneapi 2024:

   do i=1,3
      inw%grid%len(i) = sqrt(sum(inw%grid%cart(:,i)**2 ))
   end do

i.e. to replace the implicit loop by an explicit one.

BW
Jan Doumont
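For testing a compiler against this, a self-contained sketch of the 
offending construct (with hypothetical, simplified names mirroring the 
wplot module) and the workaround:

program wplot_repro
  implicit none
  type grid_t
     real :: Cart(3,3)
     real :: len(3)
  end type grid_t
  type inw_t
     type(grid_t) :: grid
  end type inw_t
  type(inw_t) :: inw
  integer :: i
  inw%grid%Cart = 1.0
  ! the implied-do array constructor that oneAPI 2024.0 rejects (error #6911):
  ! inw%grid%len = (/ ( sqrt(sum( inw%grid%Cart(:,i)**2 )), i=1,3 ) /)
  ! Jan Doumont's workaround - the equivalent explicit loop:
  do i = 1, 3
     inw%grid%len(i) = sqrt(sum( inw%grid%Cart(:,i)**2 ))
  end do
  print *, inw%grid%len
end program wplot_repro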


Best regards,

Michael Fechtelkord


On 27.05.2024 at 12:59, 夏宇阳 wrote:

These are my compiler options.

Current settings:
   M   OpenMP switch:   -qopenmp
   O   Compiler options:    -O -FR -mp1 -w -prec_div -pc80 -pad -ip 
-DINTEL_VML -traceback -assume buffered_io -I$(MKLROOT)/include
   L   Linker Flags:$(FOPT) -L$(MKLROOT)/lib/$(MKL_TARGET_ARCH) 
-lpthread -lm -ldl -liomp5
   P   Preprocessor flags   '-DParallel'
   R   R_LIBS (LAPACK+BLAS):-lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core
   F   FFTW options:-DFFTW3 -I/home/xiayuyang/fftw-3.3.10/include
   FFTW-LIBS:   -L/home/xiayuyang/fftw-3.3.10/lib -lfftw3
   FFTW-PLIBS:  -lfftw3_mpi
   X   LIBX options:-DLIBXC -I/home/xiayuyang/libxc-6.2.2/include
   LIBXC-LIBS:  -L/home/xiayuyang/libxc-6.2.2/lib -lxcf03 -lxc

And line 195 of modules.f is:

inw%grid%len = (/( sqrt(sum( inw%grid%Cart(:,i)**2 )), i=1,3 )/)

Best wishes!
Xiayuyang
- Original Message -
From: "Nestoklon Mikhail"
To: "A Mailing list for WIEN2k users"
Sent: Monday, May 27, 2024, 6:16:24 PM
Subject: Re: [Wien] WIen2k install using Oneapi

For mstar I have no idea why the error occurs; did you forget to add the -qopenmp 
flag to the compiler options?
For reformat, add int to the definition of the function in reformat.c (line 3 of the file 
SRC_reformat/reformat.c should be "int main(argc,argv)"), as sketched below.
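As a minimal sketch of that fix (only the function header changes; the 
original body of reformat.c is elided here):

/* line 3 of SRC_reformat/reformat.c: add the explicit return type */
int main(argc, argv)   /* was: main(argc,argv) - implicit int, an error under C99 and later */
int argc;
char *argv[];
{
    (void)argc;
    (void)argv;
    /* ... original body of reformat.c continues unchanged ... */
    return 0;
}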

M.

On Mon, 27 May 2024 at 10:06, 夏宇阳 <harri...@sjtu.edu.cn> wrote:


Thank you, sir.

But some errors still exist.

SRC_mstar/compile.msg:mstar.f90(57): error #7002: Error in opening the compiled 
module file. Check INCLUDE paths. [OMP_LIB]
SRC_reformat/compile.msg:reformat.c:3:1: error: type specifier missing, 
defaults to 'int'; ISO C99 and later do not support implicit int 
[-Wimplicit-int]
SRC_reformat/compile.msg:1 warning and 1 error generated.
SRC_sumpara/compile.msg:sumpara.F(4): error #7002: Error in opening the 
compiled module file. Check INCLUDE paths. [OMP_LIB]
SRC_wplot/compile.msg:modules.f(195): error #6911: The syntax of this substring 
is invalid. [CART]
SRC_wplot/compile.msg:modules.f(195): error #6911: The syntax of this substring 
is invalid. [CART]

Best wishes!
Xiayuyang
- Original Message -
From: "Nestoklon Mikhail" <nestok...@gmail.com>
To: "A Mailing list for WIEN2k users" <wien@zeus.theochem.tuwien.ac.at>
Sent: Monday, May 27, 2024, 3:38:06 PM
Subject: Re: [Wien] WIen2k install using Oneapi

Dear Xiayuyang,
 From the errors it is clear that you did not recompile libxc with the new 
compiler.
Note that fftw and elpa (if you use it) should also be recompiled.
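A sketch of such a rebuild for libxc (assumptions: the autotools tarball 
layout and install prefix shown here; the exact configure options may 
differ between libxc versions, and fftw/elpa are rebuilt analogously with 
their own configure flags):

cd ~/libxc-6.2.2                 # hypothetical source path; adjust to your tree
make distclean                   # only needed if the tree was built with the old compiler
./configure CC=icx FC=ifx --prefix=$HOME/libxc-6.2.2
make -j
make install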

Sincerely yours,
Mikhail

On Mon, 27 May 2024 at 07:36, 夏宇阳 <harri...@sjtu.edu.cn> wrote:


Dear all,

When I installed WIEN2k using oneAPI, I found that "icc" was deprecated. There is only 
"icx".

I followed the steps of Gavin Abo's guide and just replaced all "icc" with "icx". 
Errors then came out after compiling.

SRC_lapw0/compile.msg:libxc_mod.F(4): error #7013: This module file was not 
generated by any release of this compiler. [XC_F03_LIB_M]
SRC_lapw0/compile.msg:libxc_mod.F(9): error #6457: This derived type name has 
not been declared. [XC_F03_FUNC_T]
SRC_lapw0/compile.msg:libxc_mod.F(10): error #6457: This derived type name has 
not been declared. [XC_F03_FUNC_INFO_T]
SRC_lapw0/compile.msg:libxc_mod.F(5): error #6580: Name in only-list

Re: [Wien] [WIEN2k] abort of CPU core parallel jobs in NMR calculations of the current

2024-05-13 Thread Michael Fechtelkord via Wien

Hello all,


just a short final note on the "-quota 8" option running on 8 
nodes (from Peter: "PPS:   -quota 8 (or 24)  might help and still 
utilizing all cores, but I'm not sure if it would save enough memory in 
the current steps.").


I ran the nmr calculation with "x_nmr_lapw -p -quota 8". There is 
not really a difference from the previous runs without quota 
concerning RAM in the -mode current step. The calculation occupies 122 GB of 
RAM out of 128 GB and 20 GB of swap out of 32 GB.


I will use only 4 nodes for further NMR calculations.


Best regards,

Michael


On 13.05.2024 at 10:00, Michael Fechtelkord via Wien wrote:

Hello all,


as far as I can see, a job with 8 cores may be faster, but it uses 
double the scratch space (8 partial nmr vectors per direction, with 
size depending on the k-mesh, e.g. nmr_mqx, instead of 4 partial 
vectors), and that also doubles the RAM usage of the NMR current 
calculation because 8 partial vectors per direction are used.


I will try the -quota 8 option, but currently it seems that 
calculations on eight cores are at high risk of crashing because of the 
memory and scratch space they need, and that already for 40k points. I 
never had problems with calculations on 4 cores, even with only 64 GB 
RAM and 1000k points.



Best regards,

Michael


On 12.05.2024 at 18:02, Michael Fechtelkord via Wien wrote:
It shows  EXECUTING: /usr/local/WIEN2k/nmr_mpi -case MS_2M1_Al2 -mode 
current -green -scratch /scratch/WIEN2k/ -noco


in all cases and in htop the values I provided below.


Best regards,

Michael


On 12.05.2024 at 16:01, Peter Blaha wrote:

This makes sense.
Please let me know if it shows

 EXECUTING: /usr/local/WIEN2k/nmr_mpi -case MS_2M1_Al2 -mode 
current    -green -scratch /scratch/WIEN2k/ -noco


or only    nmr -case ...

In any case, it is running correctly.

PS: I know that also the current step needs a lot of memory, after 
all it needs to read the eigenvectors of all eigenvalues, ...


PPS:   -quota 8 (or 24)  might help and still utilizing all cores, 
but I'm not sure if it would save enough memory in the current steps.




On 12.05.2024 at 10:09, Michael Fechtelkord via Wien wrote:

Hello all, hello Peter,


That is what is really running in the background (from htop: this 
is a new job with 4 nodes but it was the same with 8 nodes -p 1 - 
8), so no nmr_mpi.



TIME+ Command

96.0 14.9 19h06:05 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode 
current -green -scratch /scratch/WIEN2k/ -noco -p 3


95.8 14.9 19h05:10 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode 
current -green -scratch /scratch/WIEN2k/ -noco -p 1


95.1 14.9 19h06:00 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode 
current -green -scratch /scratch/WIEN2k/ -noco -p 2


95.5 15.4 19h08:10 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode 
current -green -scratch /scratch/WIEN2k/ -noco -p 4


94.6 14.9 18h35:33 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode 
current -green -scratch /scratch/WIEN2k/ -noco -p 3


93.3 15.4 18h36:24 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode 
current -green -scratch /scratch/WIEN2k/ -noco -p 4


93.3 14.9 18h33:02 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode 
current -green -scratch /scratch/WIEN2k/ -noco -p 2


94.0 14.9 18h38:44 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode 
current -green -scratch /scratch/WIEN2k/ -noco -p 1



Regards,

Michael


On 11.05.2024 at 20:10, Michael Fechtelkord via Wien wrote:

Hello Peter,


I just use "x_nmr_lapw -p" and the rest is initiated by the nmr 
script. The Line "/usr/local/WIEN2k/nmr_mpi -case MS_2M1_Al2 -mode 
current -green -scratch /scratch/WIEN2k/ -noco " is just 
part of the whole procedure and not initiated by me manually.. (I 
only copied the last lines of the calculation).



Best regards,

Michael


On 11.05.2024 at 18:08, Peter Blaha wrote:

Hallo Michael,

I don't understand the line:

/usr/local/WIEN2k/nmr_mpi -case MS_2M1_Al2 -mode current 
-green -scratch /scratch/WIEN2k/ -noco


The mode current should run only k-parallel, not in mpi ??

PS: The repetition of

nmr_integ:localhost    is useless.

nmr mode integ runs only once (not k-parallel, sumpara has 
already summed up the currents)


But one can use       nmr_integ:localhost:8
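For illustration, a sketch of a .machines file along those lines (keeping 
the 8 k-parallel lines from the file quoted below and replacing the 
repeated nmr_integ lines by a single 8-way one):

granularity:1
omp_lapw0:8
omp_global:2
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
nmr_integ:localhost:8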


Best regards

On 11.05.2024 at 16:19, Michael Fechtelkord via Wien wrote:

Hello Peter,

this is the .machines file content:

granulartity:1
omp_lapw0:8
omp_global:2
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost


Best regards,

Michael


On 11.05.2024 at 14:58, Peter Blaha wrote:

Hmm. ?

Are you using   k-parallel  AND  mpi-parallel ?? This could 
overload the machine.


How does the .machines file look like ?


On 10.05.2024 at 18:15, Michael Fechtelkord via Wien wrote:

Re: [Wien] [WIEN2k] abort of CPU core parallel jobs in NMR calculations of the current

2024-05-13 Thread Michael Fechtelkord via Wien

Dear Laurence,


I used 40 k-points.


The integration part causes no problems (-mode integ); the memory-consuming 
part is the current part (-mode current).


Your hint about lapw1 shows even more that it would be safer to use 4 
parallel calculations instead of eight without losing much performance 
(the 14900K has only 8 performance cores; the other 16, the efficiency 
cores, are slower).


Best regards,

Michael


On 13.05.2024 at 10:14, Laurence Marks wrote:

For my own curiosity, is it 40,000 k-points or 40 k-points?

N.B., as Peter suggested, did you try using mpi, which would be four 
lines of nmr_integ:localhost:2 (see the sketch below)?
I suspect (but might be wrong) that this will reduce your memory usage 
by a factor of 2, and it will only be slightly slower than what you have. 
If needed you can also go to 4 mpi. Of course you have to have 
compiled it...
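As a sketch, that suggestion would replace the eight nmr_integ lines in 
the .machines file with four 2-way mpi ones:

nmr_integ:localhost:2
nmr_integ:localhost:2
nmr_integ:localhost:2
nmr_integ:localhost:2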


N.N.B., you presumably realise that you are using 16 cores for lapw1, 
as each k-point has 2 cores.




On Mon, May 13, 2024 at 4:00 PM Michael Fechtelkord via Wien 
 wrote:


Hello all,


as far as I can see it, a job with 8 cores may be faster, but uses
double of the space on scratch (8 partial nmr vectors with size
depending on the kmesh per direction eg. nmr_mqx instead of 4 partial
vectors) and that also doubles the RAM usage of the NMR current
calculation because 8 partial vectors per direction are used.

I will try the -quota 8 option, but currently it seems that
calculations
on eight cores  are at high risk to crash because of the memory and
scratch space it needs and that already for 40k points. I never had
problems with calculations on 4 cores even with only 64 GB RAM and
1000k
points.


Best regards,

Michael


On 12.05.2024 at 18:02, Michael Fechtelkord via Wien wrote:
> It shows  EXECUTING: /usr/local/WIEN2k/nmr_mpi -case MS_2M1_Al2
> -mode current    -green -scratch /scratch/WIEN2k/ -noco
>
> in all cases and in htop the values I provided below.
>
>
> Best regards,
>
> Michael
>
>
> On 12.05.2024 at 16:01, Peter Blaha wrote:
>> This makes sense.
>> Please let me know if it shows
>>
>>  EXECUTING: /usr/local/WIEN2k/nmr_mpi -case MS_2M1_Al2 -mode
>> current    -green -scratch /scratch/WIEN2k/ -noco
>>
>> or only    nmr -case ...
>>
>> In any case, it is running correctly.
>>
>> PS: I know that also the current step needs a lot of memory, after
>> all it needs to read the eigenvectors of all eigenvalues, ...
>>
>> PPS:   -quota 8 (or 24)  might help and still utilizing all cores,
>> but I'm not sure if it would save enough memory in the current
steps.
>>
>>
>>
>> On 12.05.2024 at 10:09, Michael Fechtelkord via Wien wrote:
>>> Hello all, hello Peter,
>>>
>>>
>>> That is what is really running in the background (from htop:
this is
>>> a new job with 4 nodes but it was the same with 8 nodes -p 1 -
8),
>>> so no nmr_mpi.
>>>
>>>
>>> TIME+ Command
>>>
>>> 96.0 14.9 19h06:05 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode
>>> current -green -scratch /scratch/WIEN2k/ -noco -p 3
>>>
>>> 95.8 14.9 19h05:10 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode
>>> current -green -scratch /scratch/WIEN2k/ -noco -p 1
>>>
>>> 95.1 14.9 19h06:00 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode
>>> current -green -scratch /scratch/WIEN2k/ -noco -p 2
>>>
>>> 95.5 15.4 19h08:10 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode
>>> current -green -scratch /scratch/WIEN2k/ -noco -p 4
>>>
>>> 94.6 14.9 18h35:33 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode
>>> current -green -scratch /scratch/WIEN2k/ -noco -p 3
>>>
>>> 93.3 15.4 18h36:24 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode
>>> current -green -scratch /scratch/WIEN2k/ -noco -p 4
>>>
>>> 93.3 14.9 18h33:02 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode
>>> current -green -scratch /scratch/WIEN2k/ -noco -p 2
>>>
>>> 94.0 14.9 18h38:44 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode
>>> current -green -scratch /scratch/WIEN2k/ -noco -p 1
>>>
>>>
>>> Regards,
>>>
>>> Michael
>>>
>>>
>>>> On 11.05.2024 at 20:10, Michael Fechtelkord via Wien wrote:
>>>> Hello Peter,
>>>>
>>>>
>>>> I just use "x_nmr_l

Re: [Wien] [WIEN2k] abort of CPU core parallel jobs in NMR calculations of the current

2024-05-13 Thread Michael Fechtelkord via Wien

Hello all,


as far as I can see, a job with 8 cores may be faster, but it uses 
double the scratch space (8 partial nmr vectors per direction, with 
size depending on the k-mesh, e.g. nmr_mqx, instead of 4 partial 
vectors), and that also doubles the RAM usage of the NMR current 
calculation because 8 partial vectors per direction are used.


I will try the -quota 8 option, but currently it seems that calculations 
on eight cores are at high risk of crashing because of the memory and 
scratch space they need, and that already for 40k points. I never had 
problems with calculations on 4 cores, even with only 64 GB RAM and 1000k 
points.



Best regards,

Michael


On 12.05.2024 at 18:02, Michael Fechtelkord via Wien wrote:
It shows  EXECUTING: /usr/local/WIEN2k/nmr_mpi -case MS_2M1_Al2 
-mode current    -green -scratch /scratch/WIEN2k/ -noco


in all cases and in htop the values I provided below.


Best regards,

Michael


On 12.05.2024 at 16:01, Peter Blaha wrote:

This makes sense.
Please let me know if it shows

 EXECUTING: /usr/local/WIEN2k/nmr_mpi -case MS_2M1_Al2 -mode 
current    -green -scratch /scratch/WIEN2k/ -noco


or only    nmr -case ...

In any case, it is running correctly.

PS: I know that also the current step needs a lot of memory, after 
all it needs to read the eigenvectors of all eigenvalues, ...


PPS:   -quota 8 (or 24)  might help and still utilizing all cores, 
but I'm not sure if it would save enough memory in the current steps.




On 12.05.2024 at 10:09, Michael Fechtelkord via Wien wrote:

Hello all, hello Peter,


That is what is really running in the background (from htop: this is 
a new job with 4 nodes but it was the same with 8 nodes -p 1 - 8), 
so no nmr_mpi.



TIME+ Command

96.0 14.9 19h06:05 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode 
current -green -scratch /scratch/WIEN2k/ -noco -p 3


95.8 14.9 19h05:10 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode 
current -green -scratch /scratch/WIEN2k/ -noco -p 1


95.1 14.9 19h06:00 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode 
current -green -scratch /scratch/WIEN2k/ -noco -p 2


95.5 15.4 19h08:10 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode 
current -green -scratch /scratch/WIEN2k/ -noco -p 4


94.6 14.9 18h35:33 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode 
current -green -scratch /scratch/WIEN2k/ -noco -p 3


93.3 15.4 18h36:24 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode 
current -green -scratch /scratch/WIEN2k/ -noco -p 4


93.3 14.9 18h33:02 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode 
current -green -scratch /scratch/WIEN2k/ -noco -p 2


94.0 14.9 18h38:44 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode 
current -green -scratch /scratch/WIEN2k/ -noco -p 1



Regards,

Michael


On 11.05.2024 at 20:10, Michael Fechtelkord via Wien wrote:

Hello Peter,


I just use "x_nmr_lapw -p" and the rest is initiated by the nmr 
script. The Line "/usr/local/WIEN2k/nmr_mpi -case MS_2M1_Al2 -mode 
current -green -scratch /scratch/WIEN2k/ -noco " is just 
part of the whole procedure and not initiated by me manually.. (I 
only copied the last lines of the calculation).



Best regards,

Michael


On 11.05.2024 at 18:08, Peter Blaha wrote:

Hallo Michael,

I don't understand the line:

/usr/local/WIEN2k/nmr_mpi -case MS_2M1_Al2 -mode current 
-green -scratch /scratch/WIEN2k/ -noco


The mode current should run only k-parallel, not in mpi ??

PS: The repetition of

nmr_integ:localhost    is useless.

nmr mode integ runs only once (not k-parallel, sumpara has already 
summed up the currents)


But one can use       nmr_integ:localhost:8


Best regards

On 11.05.2024 at 16:19, Michael Fechtelkord via Wien wrote:

Hello Peter,

this is the .machines file content:

granulartity:1
omp_lapw0:8
omp_global:2
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost


Best regards,

Michael


On 11.05.2024 at 14:58, Peter Blaha wrote:

Hmm. ?

Are you using   k-parallel  AND  mpi-parallel ??  This could 
overload the machine.


How does the .machines file look like ?


On 10.05.2024 at 18:15, Michael Fechtelkord via Wien wrote:

Dear all,


the following problem occurs to me using the NMR part of WIEN2k 
(23.2) on a opensuse LEAP 15.5 Intel platform. WIEN2k was 
compiled using one-api 2024.1 ifort and gcc 13.2.1. I am using 
ELPA 2024.03.01, Libxc 6.22, fftw 3.3.10 and MPICH 4.2.1 and 
the one-api 2024.1 MKL libraries. The CPU is a I9 14900k with 
24 cores where I use eight for the calculations. The RAM is 130 
Gb and a swap file of 16 GB on a Samsung PCIE 4.0 NVME SSD. The 
BUS width is 5600 MT / s.


The structure is a layersilicate and to simulate the ratio of 
Si:Al = 3:1 I use a 1:1:2 supercell currently. The monoclinic 
symmetry of the new structure (original is C 2/c) is P

Re: [Wien] [WIEN2k] abort of CPU core parallel jobs in NMR calculations of the current

2024-05-12 Thread Michael Fechtelkord via Wien
It shows  EXECUTING: /usr/local/WIEN2k/nmr_mpi -case MS_2M1_Al2 
-mode current    -green -scratch /scratch/WIEN2k/ -noco


in all cases and in htop the values I provided below.


Best regards,

Michael


On 12.05.2024 at 16:01, Peter Blaha wrote:

This makes sense.
Please let me know if it shows

 EXECUTING: /usr/local/WIEN2k/nmr_mpi -case MS_2M1_Al2 -mode 
current    -green -scratch /scratch/WIEN2k/ -noco


or only    nmr -case ...

In any case, it is running correctly.

PS: I know that also the current step needs a lot of memory, after all 
it needs to read the eigenvectors of all eigenvalues, ...


PPS:   -quota 8 (or 24)  might help and still utilizing all cores, but 
I'm not sure if it would save enough memory in the current steps.




On 12.05.2024 at 10:09, Michael Fechtelkord via Wien wrote:

Hello all, hello Peter,


That is what is really running in the background (from htop: this is 
a new job with 4 nodes but it was the same with 8 nodes -p 1 - 8), so 
no nmr_mpi.



TIME+ Command

96.0 14.9 19h06:05 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode 
current -green -scratch /scratch/WIEN2k/ -noco -p 3


95.8 14.9 19h05:10 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode 
current -green -scratch /scratch/WIEN2k/ -noco -p 1


95.1 14.9 19h06:00 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode 
current -green -scratch /scratch/WIEN2k/ -noco -p 2


95.5 15.4 19h08:10 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode 
current -green -scratch /scratch/WIEN2k/ -noco -p 4


94.6 14.9 18h35:33 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode 
current -green -scratch /scratch/WIEN2k/ -noco -p 3


93.3 15.4 18h36:24 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode 
current -green -scratch /scratch/WIEN2k/ -noco -p 4


93.3 14.9 18h33:02 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode 
current -green -scratch /scratch/WIEN2k/ -noco -p 2


94.0 14.9 18h38:44 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode 
current -green -scratch /scratch/WIEN2k/ -noco -p 1



Regards,

Michael


On 11.05.2024 at 20:10, Michael Fechtelkord via Wien wrote:

Hello Peter,


I just use "x_nmr_lapw -p" and the rest is initiated by the nmr 
script. The Line "/usr/local/WIEN2k/nmr_mpi -case MS_2M1_Al2 -mode 
current -green -scratch /scratch/WIEN2k/ -noco " is just 
part of the whole procedure and not initiated by me manually.. (I 
only copied the last lines of the calculation).



Best regards,

Michael


On 11.05.2024 at 18:08, Peter Blaha wrote:

Hallo Michael,

I don't understand the line:

/usr/local/WIEN2k/nmr_mpi -case MS_2M1_Al2 -mode current 
-green -scratch /scratch/WIEN2k/ -noco


The mode current should run only k-parallel, not in mpi ??

PS: The repetition of

nmr_integ:localhost    is useless.

nmr mode integ runs only once (not k-parallel, sumpara has already 
summed up the currents)


But one can use       nmr_integ:localhost:8


Best regards

On 11.05.2024 at 16:19, Michael Fechtelkord via Wien wrote:

Hello Peter,

this is the .machines file content:

granulartity:1
omp_lapw0:8
omp_global:2
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost


Best regards,

Michael


On 11.05.2024 at 14:58, Peter Blaha wrote:

Hmm. ?

Are you using   k-parallel  AND  mpi-parallel ??  This could 
overload the machine.


How does the .machines file look like ?


On 10.05.2024 at 18:15, Michael Fechtelkord via Wien wrote:

Dear all,


the following problem occurs to me using the NMR part of WIEN2k 
(23.2) on a opensuse LEAP 15.5 Intel platform. WIEN2k was 
compiled using one-api 2024.1 ifort and gcc 13.2.1. I am using 
ELPA 2024.03.01, Libxc 6.22, fftw 3.3.10 and MPICH 4.2.1 and the 
one-api 2024.1 MKL libraries. The CPU is a I9 14900k with 24 
cores where I use eight for the calculations. The RAM is 130 Gb 
and a swap file of 16 GB on a Samsung PCIE 4.0 NVME SSD. The BUS 
width is 5600 MT / s.


The structure is a layersilicate and to simulate the ratio of 
Si:Al = 3:1 I use a 1:1:2 supercell currently. The monoclinic 
symmetry of the new structure (original is C 2/c) is P 2/c and 
contains 40 atoms (K, Al, Si, O, and F).


I use 3 NMR LOs for K and O and 10 for Si, Al, and F (where I 
need the chemical shifts). The k mesh is 40k points.


The interesting thing is that the RAM is sufficient during NMR 
vector calculations (always under 100 Gb RAM occupied) and at 
the beginning of the electron current calculation. However, the 
RAM use increases to a critical point in the calculation and 
more and more data is outsourced into the SWAP File which is 
sometimes 80% occupied.


As you see this time only one core failed because of memory 
overflow. But using 48k points 3 cores crashed and so the whole 
current calculation. The reason is of the crash clear to me. But 
I do not understand, why 

Re: [Wien] [WIEN2k] abort of CPU core parallel jobs in NMR calculations of the current

2024-05-12 Thread Michael Fechtelkord via Wien

Hello all, hello Peter,


That is what is really running in the background (from htop: this is a 
new job with 4 nodes but it was the same with 8 nodes -p 1 - 8), so no 
nmr_mpi.



TIME+ Command

96.0 14.9 19h06:05 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode current 
-green -scratch /scratch/WIEN2k/ -noco -p 3


95.8 14.9 19h05:10 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode current 
-green -scratch /scratch/WIEN2k/ -noco -p 1


95.1 14.9 19h06:00 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode current 
-green -scratch /scratch/WIEN2k/ -noco -p 2


95.5 15.4 19h08:10 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode current 
-green -scratch /scratch/WIEN2k/ -noco -p 4


94.6 14.9 18h35:33 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode current 
-green -scratch /scratch/WIEN2k/ -noco -p 3


93.3 15.4 18h36:24 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode current 
-green -scratch /scratch/WIEN2k/ -noco -p 4


93.3 14.9 18h33:02 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode current 
-green -scratch /scratch/WIEN2k/ -noco -p 2


94.0 14.9 18h38:44 /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode current 
-green -scratch /scratch/WIEN2k/ -noco -p 1



Regards,

Michael


On 11.05.2024 at 20:10, Michael Fechtelkord via Wien wrote:

Hello Peter,


I just use "x_nmr_lapw -p" and the rest is initiated by the nmr 
script. The Line "/usr/local/WIEN2k/nmr_mpi -case MS_2M1_Al2 -mode 
current -green -scratch /scratch/WIEN2k/ -noco " is just part 
of the whole procedure and not initiated by me manually.. (I only 
copied the last lines of the calculation).



Best regards,

Michael


On 11.05.2024 at 18:08, Peter Blaha wrote:

Hallo Michael,

I don't understand the line:

/usr/local/WIEN2k/nmr_mpi -case MS_2M1_Al2 -mode current 
-green -scratch /scratch/WIEN2k/ -noco


The mode current should run only k-parallel, not in mpi ??

PS: The repetition of

nmr_integ:localhost    is useless.

nmr mode integ runs only once (not k-parallel, sumpara has already 
summed up the currents)


But one can use       nmr_integ:localhost:8


Best regards

On 11.05.2024 at 16:19, Michael Fechtelkord via Wien wrote:

Hello Peter,

this is the .machines file content:

granulartity:1
omp_lapw0:8
omp_global:2
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost


Best regards,

Michael


On 11.05.2024 at 14:58, Peter Blaha wrote:

Hmm. ?

Are you using   k-parallel  AND  mpi-parallel ??  This could 
overload the machine.


How does the .machines file look like ?


On 10.05.2024 at 18:15, Michael Fechtelkord via Wien wrote:

Dear all,


the following problem occurs to me using the NMR part of WIEN2k 
(23.2) on a opensuse LEAP 15.5 Intel platform. WIEN2k was compiled 
using one-api 2024.1 ifort and gcc 13.2.1. I am using ELPA 
2024.03.01, Libxc 6.22, fftw 3.3.10 and MPICH 4.2.1 and the 
one-api 2024.1 MKL libraries. The CPU is a I9 14900k with 24 cores 
where I use eight for the calculations. The RAM is 130 Gb and a 
swap file of 16 GB on a Samsung PCIE 4.0 NVME SSD. The BUS width 
is 5600 MT / s.


The structure is a layersilicate and to simulate the ratio of 
Si:Al = 3:1 I use a 1:1:2 supercell currently. The monoclinic 
symmetry of the new structure (original is C 2/c) is P 2/c and 
contains 40 atoms (K, Al, Si, O, and F).


I use 3 NMR LOs for K and O and 10 for Si, Al, and F (where I need 
the chemical shifts). The k mesh is 40k points.


The interesting thing is that the RAM is sufficient during NMR 
vector calculations (always under 100 Gb RAM occupied) and at the 
beginning of the electron current calculation. However, the RAM 
use increases to a critical point in the calculation and more and 
more data is outsourced into the SWAP File which is sometimes 80% 
occupied.


As you see this time only one core failed because of memory 
overflow. But using 48k points 3 cores crashed and so the whole 
current calculation. The reason is of the crash clear to me. But I 
do not understand, why the current calculation reacts so sensitive 
with so few atoms and a small k mesh. I made calculations with 
more atoms and a 1000K point mesh on 4 cores .. they worked fine. 
So can it be that the Intel MKL library is the source of failure? 
So I better get back to 4 cores, even with longer calculation times?


Have all a nice weekend!


Best wishes from

Michael Fechtelkord

---

cd ./  ...  x lcore  -f MS_2M1_Al2
 CORE  END
0.685u 0.028s 0:00.71 98.5% 0+0k 2336+16168io 5pf+0w

lcore      ready


 EXECUTING: /usr/local/WIEN2k/nmr_mpi -case MS_2M1_Al2 -mode 
current    -green -scratch /scratch/WIEN2k/ -noco


[1] 20253
[2] 20257
[3] 20261
[4] 20265
[5] 20269
[6] 20273
[7] 20277
[8] 20281
[8]  + Aborted   ( cd $dir; $exec2 >> 
nmr.ou

Re: [Wien] [WIEN2k] abort of CPU core parallel jobs in NMR calculations of the current

2024-05-11 Thread Michael Fechtelkord via Wien

Hello Peter,


I just use "x_nmr_lapw -p" and the rest is initiated by the nmr script. 
The Line "/usr/local/WIEN2k/nmr_mpi -case MS_2M1_Al2 -mode current 
-green -scratch /scratch/WIEN2k/ -noco " is just part of the 
whole procedure and not initiated by me manually.. (I only copied the 
last lines of the calculation).



Best regards,

Michael


On 11.05.2024 at 18:08, Peter Blaha wrote:

Hallo Michael,

I don't understand the line:

/usr/local/WIEN2k/nmr_mpi -case MS_2M1_Al2 -mode current 
-green -scratch /scratch/WIEN2k/ -noco


The mode current should run only k-parallel, not in mpi ??

PS: The repetition of

nmr_integ:localhost    is useless.

nmr mode integ runs only once (not k-parallel, sumpara has already 
summed up the currents)


But one can use       nmr_integ:localhost:8


Best regards

On 11.05.2024 at 16:19, Michael Fechtelkord via Wien wrote:

Hello Peter,

this is the .machines file content:

granulartity:1
omp_lapw0:8
omp_global:2
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost


Best regards,

Michael


On 11.05.2024 at 14:58, Peter Blaha wrote:

Hmm. ?

Are you using   k-parallel  AND  mpi-parallel ??  This could 
overload the machine.


How does the .machines file look like ?


On 10.05.2024 at 18:15, Michael Fechtelkord via Wien wrote:

Dear all,


the following problem occurs to me using the NMR part of WIEN2k 
(23.2) on a opensuse LEAP 15.5 Intel platform. WIEN2k was compiled 
using one-api 2024.1 ifort and gcc 13.2.1. I am using ELPA 
2024.03.01, Libxc 6.22, fftw 3.3.10 and MPICH 4.2.1 and the one-api 
2024.1 MKL libraries. The CPU is a I9 14900k with 24 cores where I 
use eight for the calculations. The RAM is 130 Gb and a swap file 
of 16 GB on a Samsung PCIE 4.0 NVME SSD. The BUS width is 5600 MT / s.


The structure is a layersilicate and to simulate the ratio of Si:Al 
= 3:1 I use a 1:1:2 supercell currently. The monoclinic symmetry of 
the new structure (original is C 2/c) is P 2/c and contains 40 
atoms (K, Al, Si, O, and F).


I use 3 NMR LOs for K and O and 10 for Si, Al, and F (where I need 
the chemical shifts). The k mesh is 40k points.


The interesting thing is that the RAM is sufficient during NMR 
vector calculations (always under 100 Gb RAM occupied) and at the 
beginning of the electron current calculation. However, the RAM use 
increases to a critical point in the calculation and more and more 
data is outsourced into the SWAP File which is sometimes 80% occupied.


As you see this time only one core failed because of memory 
overflow. But using 48k points 3 cores crashed and so the whole 
current calculation. The reason is of the crash clear to me. But I 
do not understand, why the current calculation reacts so sensitive 
with so few atoms and a small k mesh. I made calculations with more 
atoms and a 1000K point mesh on 4 cores .. they worked fine. So can 
it be that the Intel MKL library is the source of failure? So I 
better get back to 4 cores, even with longer calculation times?


Have all a nice weekend!


Best wishes from

Michael Fechtelkord

---

cd ./  ...  x lcore  -f MS_2M1_Al2
 CORE  END
0.685u 0.028s 0:00.71 98.5% 0+0k 2336+16168io 5pf+0w

lcore      ready


 EXECUTING: /usr/local/WIEN2k/nmr_mpi -case MS_2M1_Al2 -mode 
current    -green -scratch /scratch/WIEN2k/ -noco


[1] 20253
[2] 20257
[3] 20261
[4] 20265
[5] 20269
[6] 20273
[7] 20277
[8] 20281
[8]  + Aborted   ( cd $dir; $exec2 >> 
nmr.out.${loop} ) >& nmr.err.$loop
[7]  + Done    ( cd $dir; $exec2 >> 
nmr.out.${loop} ) >& nmr.err.$loop
[6]  + Done    ( cd $dir; $exec2 >> 
nmr.out.${loop} ) >& nmr.err.$loop
[5]  + Done    ( cd $dir; $exec2 >> 
nmr.out.${loop} ) >& nmr.err.$loop
[4]  + Done    ( cd $dir; $exec2 >> 
nmr.out.${loop} ) >& nmr.err.$loop
[3]  + Done    ( cd $dir; $exec2 >> 
nmr.out.${loop} ) >& nmr.err.$loop
[2]  + Done    ( cd $dir; $exec2 >> 
nmr.out.${loop} ) >& nmr.err.$loop
[1]  + Done    ( cd $dir; $exec2 >> 
nmr.out.${loop} ) >& nmr.err.$loop


 EXECUTING: /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode 
sumpara  -p 8    -green -scratch /scratch/WIEN2k/



current      ready


 EXECUTING: mpirun -np 1 -machinefile .machine_nmrinteg 
/usr/local/WIEN2k/nmr_mpi -case MS_2M1_Al2 -mode integ -green



nmr:  integration  ... done in   4032.3s


stop


Re: [Wien] [WIEN2k] abort of CPU core parallel jobs in NMR calculations of the current

2024-05-11 Thread Michael Fechtelkord via Wien

Hello Peter,

this is the .machines file content:

granulartity:1
omp_lapw0:8
omp_global:2
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost
nmr_integ:localhost


Best regards,

Michael


On 11.05.2024 at 14:58, Peter Blaha wrote:

Hmm. ?

Are you using   k-parallel  AND  mpi-parallel ??  This could overload 
the machine.


How does the .machines file look like ?


On 10.05.2024 at 18:15, Michael Fechtelkord via Wien wrote:

Dear all,


the following problem occurs to me using the NMR part of WIEN2k 
(23.2) on a opensuse LEAP 15.5 Intel platform. WIEN2k was compiled 
using one-api 2024.1 ifort and gcc 13.2.1. I am using ELPA 
2024.03.01, Libxc 6.22, fftw 3.3.10 and MPICH 4.2.1 and the one-api 
2024.1 MKL libraries. The CPU is a I9 14900k with 24 cores where I 
use eight for the calculations. The RAM is 130 Gb and a swap file of 
16 GB on a Samsung PCIE 4.0 NVME SSD. The BUS width is 5600 MT / s.


The structure is a layersilicate and to simulate the ratio of Si:Al = 
3:1 I use a 1:1:2 supercell currently. The monoclinic symmetry of the 
new structure (original is C 2/c) is P 2/c and contains 40 atoms (K, 
Al, Si, O, and F).


I use 3 NMR LOs for K and O and 10 for Si, Al, and F (where I need 
the chemical shifts). The k mesh is 40k points.


The interesting thing is that the RAM is sufficient during NMR vector 
calculations (always under 100 Gb RAM occupied) and at the beginning 
of the electron current calculation. However, the RAM use increases 
to a critical point in the calculation and more and more data is 
outsourced into the SWAP File which is sometimes 80% occupied.


As you see this time only one core failed because of memory overflow. 
But using 48k points 3 cores crashed and so the whole current 
calculation. The reason is of the crash clear to me. But I do not 
understand, why the current calculation reacts so sensitive with so 
few atoms and a small k mesh. I made calculations with more atoms and 
a 1000K point mesh on 4 cores .. they worked fine. So can it be that 
the Intel MKL library is the source of failure? So I better get back 
to 4 cores, even with longer calculation times?


Have all a nice weekend!


Best wishes from

Michael Fechtelkord

---

cd ./  ...  x lcore  -f MS_2M1_Al2
 CORE  END
0.685u 0.028s 0:00.71 98.5% 0+0k 2336+16168io 5pf+0w

lcore      ready


 EXECUTING: /usr/local/WIEN2k/nmr_mpi -case MS_2M1_Al2 -mode 
current    -green -scratch /scratch/WIEN2k/ -noco


[1] 20253
[2] 20257
[3] 20261
[4] 20265
[5] 20269
[6] 20273
[7] 20277
[8] 20281
[8]  + Aborted   ( cd $dir; $exec2 >> 
nmr.out.${loop} ) >& nmr.err.$loop
[7]  + Done    ( cd $dir; $exec2 >> 
nmr.out.${loop} ) >& nmr.err.$loop
[6]  + Done    ( cd $dir; $exec2 >> 
nmr.out.${loop} ) >& nmr.err.$loop
[5]  + Done    ( cd $dir; $exec2 >> 
nmr.out.${loop} ) >& nmr.err.$loop
[4]  + Done    ( cd $dir; $exec2 >> 
nmr.out.${loop} ) >& nmr.err.$loop
[3]  + Done    ( cd $dir; $exec2 >> 
nmr.out.${loop} ) >& nmr.err.$loop
[2]  + Done    ( cd $dir; $exec2 >> 
nmr.out.${loop} ) >& nmr.err.$loop
[1]  + Done    ( cd $dir; $exec2 >> 
nmr.out.${loop} ) >& nmr.err.$loop


 EXECUTING: /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode sumpara  
-p 8    -green -scratch /scratch/WIEN2k/



current      ready


 EXECUTING: mpirun -np 1 -machinefile .machine_nmrinteg 
/usr/local/WIEN2k/nmr_mpi -case MS_2M1_Al2 -mode integ -green



nmr:  integration  ... done in   4032.3s


stop




[Wien] [WIEN2k] abort of CPU core parallel jobs in NMR calculations of the current

2024-05-10 Thread Michael Fechtelkord via Wien

Dear all,


the following problem occurs to me using the NMR part of WIEN2k (23.2) 
on an openSUSE Leap 15.5 Intel platform. WIEN2k was compiled using 
one-api 2024.1 ifort and gcc 13.2.1. I am using ELPA 2024.03.01, Libxc 
6.2.2, fftw 3.3.10, MPICH 4.2.1, and the one-api 2024.1 MKL libraries. 
The CPU is an i9-14900K with 24 cores, of which I use eight for the 
calculations. The RAM is 130 GB, with a 16 GB swap file on a Samsung 
PCIe 4.0 NVMe SSD. The memory speed is 5600 MT/s.


The structure is a layer silicate, and to simulate the ratio Si:Al = 
3:1 I currently use a 1:1:2 supercell. The monoclinic symmetry of the 
new structure (the original is C 2/c) is P 2/c, and the cell contains 
40 atoms (K, Al, Si, O, and F).


I use 3 NMR LOs for K and O and 10 for Si, Al, and F (where I need the 
chemical shifts). The k mesh is 40k points.


The interesting thing is that the RAM is sufficient during the NMR 
vector calculations (always under 100 GB of RAM occupied) and at the 
beginning of the electron current calculation. However, the RAM use 
increases to a critical point during the calculation, and more and more 
data is pushed out into the swap file, which is sometimes 80% occupied.


As you can see, this time only one core failed because of memory 
overflow. But using 48k points, 3 cores crashed, and with them the 
whole current calculation. The reason for the crash is clear to me. 
But I do not understand why the current calculation reacts so 
sensitively with so few atoms and a small k mesh. I made calculations 
with more atoms and a 1000k-point mesh on 4 cores, and they worked 
fine. So can it be that the Intel MKL library is the source of the 
failure? Should I rather go back to 4 cores, even with longer 
calculation times?


Have all a nice weekend!


Best wishes from

Michael Fechtelkord

---

cd ./  ...  x lcore  -f MS_2M1_Al2
 CORE  END
0.685u 0.028s 0:00.71 98.5% 0+0k 2336+16168io 5pf+0w

lcore      ready


 EXECUTING: /usr/local/WIEN2k/nmr_mpi -case MS_2M1_Al2 -mode 
current    -green -scratch /scratch/WIEN2k/ -noco


[1] 20253
[2] 20257
[3] 20261
[4] 20265
[5] 20269
[6] 20273
[7] 20277
[8] 20281
[8]  + Aborted   ( cd $dir; $exec2 >> 
nmr.out.${loop} ) >& nmr.err.$loop
[7]  + Done    ( cd $dir; $exec2 >> 
nmr.out.${loop} ) >& nmr.err.$loop
[6]  + Done    ( cd $dir; $exec2 >> 
nmr.out.${loop} ) >& nmr.err.$loop
[5]  + Done    ( cd $dir; $exec2 >> 
nmr.out.${loop} ) >& nmr.err.$loop
[4]  + Done    ( cd $dir; $exec2 >> 
nmr.out.${loop} ) >& nmr.err.$loop
[3]  + Done    ( cd $dir; $exec2 >> 
nmr.out.${loop} ) >& nmr.err.$loop
[2]  + Done    ( cd $dir; $exec2 >> 
nmr.out.${loop} ) >& nmr.err.$loop
[1]  + Done    ( cd $dir; $exec2 >> 
nmr.out.${loop} ) >& nmr.err.$loop


 EXECUTING: /usr/local/WIEN2k/nmr -case MS_2M1_Al2 -mode sumpara  
-p 8    -green -scratch /scratch/WIEN2k/



current      ready


 EXECUTING: mpirun -np 1 -machinefile .machine_nmrinteg 
/usr/local/WIEN2k/nmr_mpi -case MS_2M1_Al2 -mode integ -green



nmr:  integration  ... done in   4032.3s


stop



Re: [Wien] NMR Chemical Shift NMR-LOs - here: possibilty to focus on more then one atom

2024-01-26 Thread Michael Fechtelkord via Wien

Dear all,


following the suggestions, I created a set of in1_nmr files focussing 
on the nuclei I want chemical shifts for, e.g. (x_nmr_lapw -mode in1 
-focus F), and after that I renamed the in1_nmr file (mv case.in1_nmr 
case.in1_nmr_F).


I did that for F, Al, and Si. After that I substituted all NMR LO= 3 
parts in the case.in1_nmr file with the NMR LO= 10 parts from the 
created case.in1_nmr_X files, using a text editor; the commands are 
sketched below.
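Spelled out as commands, the workflow reads roughly like this (a sketch; 
"case" stands for the actual case name, and -focus follows the usage 
described above):

x_nmr_lapw -mode in1 -focus F
mv case.in1_nmr case.in1_nmr_F
x_nmr_lapw -mode in1 -focus Al
mv case.in1_nmr case.in1_nmr_Al
x_nmr_lapw -mode in1 -focus Si
mv case.in1_nmr case.in1_nmr_Si
# then, in a text editor, replace the NMR LO= 3 blocks in case.in1_nmr
# with the corresponding NMR LO= 10 blocks from the case.in1_nmr_X files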



The NMR calculation worked fine.


Thanks again for the help!


Best regards

Michael Fechtelkord



On 03.01.2024 at 16:48, Peter Blaha wrote:

The only specific option besides the number of LOs for mode in1 is 
-focus nat-nr.


But this will set NMR-los only for the atom with index nat-nr.

Your desired in1_nmr file needs to be done by hand, maybe by 
copy/paste from 2 different runs with 3 and 10 LOs.


Regards


On 03.01.2024 at 16:25, Michael Fechtelkord via Wien wrote:

Dear All,


I have a short question concerning the NMR Chemical Shift 
calculations. I am calculating Chemical Shifts on Lepidolites, e.g. 
Trilithionite which is K(Li1.5Al1.5)[Si3AlO10]F2 . To reduce the 
calculation time and reduce the number of NMR-LOs I am asking myself 
if it is possible to focus on more than one atom, e.g., I am 
interested in Chemical Shifts of F, Al, and Si, but not in those of 
K, Li and O, where a reduced number of LOs (n=3) is ok. I think I 
could do this by merge the values in the in1_nmr files together using 
the values of n=3 for K, Li and O and n=10 for F, Al, and Si.


Is there an easier way to create a in1_nmr file?


Thanks in advance and happy new year to all!


Best regards,

Michael Fechtelkord





Re: [Wien] Intels Oneapi 2024: Compiler bug ?

2024-01-26 Thread Michael Fechtelkord via Wien

Hello all,


I also tried to use ifx .. it works for elpa, mpich, fftw, and libxc, 
but the compilation of WIEN2k produces too many errors. With the classic 
compiler ifort the compilation works fine, and the workaround for 
SRC_wplot also resolves the compilation error.


ELPA recommends flags for certain CPU architectures (AVX512, AVX2, etc.) 
and uses -O3 instead.



I was wondering if using "-O3 -xAVX2" in the compiler flags brings 
better performance of the WIEN2k code, or if it is counterproductive and 
I should stay with the recommendations?



Best regards,

Michael


On 25.01.2024 at 23:54, Laurence Marks wrote:
From what I can see, ifx is not ready; too much is missing. I suggest 
sticking with ifort.


---
Professor Laurence Marks (Laurie)
www.numis.northwestern.edu 
https://scholar.google.com/citations?user=zmHhI9gJ&hl=en 

"Research is to see what everybody else has seen, and to think what 
nobody else has thought" Albert Szent-Györgyi


On Fri, Jan 26, 2024, 07:21 Jan Doumont  wrote:

Dear Peter,

Interestingly, I get the same error when using IFORT with the newest
oneapi...

ifort  -O -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback
-assume buffered_io -I/opt/intel/oneapi/mkl/2024.0/include
-DHAVE_PTR_ALLOC_GENERICS  -Ilib -free -gen-interface nosource
-traceback -g  -I../SRC_w2w/lib -I../SRC_w2w/lib -c modules.f
-olib/modules.o -module lib
ifort: remark #10448: Intel(R) Fortran Compiler Classic (ifort) is
now
deprecated and will be discontinued late 2024. Intel recommends that
customers transition now to using the LLVM-based Intel(R) Fortran
Compiler (ifx) for continued Windows* and Linux* support, new
language
support, new language features, and optimizations. Use
'-diag-disable=10448' to disable this message.
modules.f(195): error #6911: The syntax of this substring is invalid.
[CART]
    inw%grid%len = (/ ( sqrt(sum( inw%grid%Cart(:,i)**2 )),
i=1,3 ) /)
--^
compilation aborted for modules.f (code 1)
make: *** [Makefile:140: lib/modules.o] Error 1

However, I found the following workaround works with both ifort
and ifx
on oneapi 2024:

    do i=1,3
   inw%grid%len(i) = sqrt(sum(inw%grid%cart(:,i)**2 ))
    end do

i.e. to replace the implicit loop by an explicit one.

BW
Jan Doumont

On 25/01/2024 19:52, Jan Doumont wrote:
> Dear Peter,
>
> I could compile wien2k 23.2 with no issues using gfortran 13.2.1
(the
> version supplied with Fedora 39). I double checked the
compile.msg of
> SRC_wplot and there are no errors.
>
> Best Wishes
>
> Jan Doumont
>
>
>
> On 25/01/2024 19:00, Peter Blaha wrote:
>> Dear users,
>>
>> Maybe there is a Fortran expert who knows if this syntax is
correct
>> or not.
>>
>> A user reported recently a compilation problem using   the most
>> recent ifort (or ifx, which will become soon the new fortran
>> compiler) (oneapi-2024.0)   in SRC_wplot:
>>
>> ifx -O -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback
>> -assume buffered_io -I/home/aarav/intel/mkl/2024.0/include
>> -DHAVE_PTR_ALLOC_GENERICS -Ilib -free -gen-interface nosource
>> -traceback -g -I../SRC_w2w/lib -I../SRC_w2w/lib -c modules.f
>> -olib/modules.o -module lib
>> modules.f(195): error #6911: The syntax of this substring is
invalid.
>> [CART]
>>    inw%grid%len = (/( sqrt(sum( inw%grid%Cart(:,i)**2 )),
i=1,3 )/)
>> -^
>>
>> So the error is in line 195 of SRC_wplot/modules.f.
>>
>> It appear ONLY with the most recent oneapi 2024.0, not with older
>> versions nor with gfortran-12.
>>
>> Thus the question is: Is this a compiler bug or is this due to
a very
>> new fortran-standard which this version enforces ?
>> Has anybody an even newer gfortran (higher than version 12) and
can
>> test it with this compiler ?
>>
>> Best regards
>> Peter Blaha
>

[Wien] NMR Chemical Shift NMR-LOs - here: possibilty to focus on more then one atom

2024-01-03 Thread Michael Fechtelkord via Wien

Dear All,


I have a short question concerning the NMR chemical shift calculations. 
I am calculating chemical shifts on lepidolites, e.g. trilithionite, 
which is K(Li1.5Al1.5)[Si3AlO10]F2. To reduce the calculation time and 
the number of NMR-LOs, I am asking myself whether it is possible to 
focus on more than one atom; e.g., I am interested in the chemical 
shifts of F, Al, and Si, but not in those of K, Li, and O, where a 
reduced number of LOs (n=3) is OK. I think I could do this by merging 
the values of the in1_nmr files together, using n=3 for K, Li, and O 
and n=10 for F, Al, and Si.


Is there an easier way to create an in1_nmr file?


Thanks in advance and happy new year to all!


Best regards,

Michael Fechtelkord




[Wien] Ifort compiler error for wplot for newest one api Version 2024.0

2023-11-28 Thread Michael Fechtelkord via Wien

Dear colleagues,


I just compiled WIEN2k with the newest Intel one-api version 2024.0 and 
got the following compiler error for modules.f in SRC_wplot:



ifort  -O -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback 
-assume buffered_io -I/opt/intel/oneapi/mkl/2024.0/include 
-DHAVE_PTR_ALLOC_GENERICS -Ilib -free
 -gen-interface nosource -traceback -g  -I../SRC_w2w/lib 
-I../SRC_w2w/lib -c modules.f -olib/modules.o -module lib
modules.f(195): error #6911: The syntax of this substring is invalid.   
[CART]

   inw%grid%len = (/( sqrt(sum( inw%grid%Cart(:,i)**2 )), i=1,3 )/)
-^
compilation aborted for modules.f (code 1)
make: *** [Makefile:140: lib/modules.o] Error 1


There are no compiler errors when using the 2023.2 version of oneAPI.
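
To narrow down whether this is a compiler regression, one can try a 
standalone reproducer along the following lines (the derived type and 
values here are invented and only mimic the construct in 
SRC_wplot/modules.f; if ifort 2024.0 rejects the constructor line but 
accepts the loop, it is a compiler bug rather than a WIEN2k problem):

program repro
  ! standalone sketch mimicking line 195 of SRC_wplot/modules.f;
  ! only the syntax pattern matters, the numbers are placeholders
  implicit none
  type grid_t
     real(8) :: Cart(3,3) = 1.0d0
     real(8) :: len(3)
  end type grid_t
  type(grid_t) :: grid
  integer :: i
  ! the array-constructor form that ifort 2024.0 flags as error #6911:
  grid%len = (/( sqrt(sum( grid%Cart(:,i)**2 )), i = 1, 3 )/)
  ! an equivalent explicit loop, as a possible local workaround:
  do i = 1, 3
     grid%len(i) = sqrt(sum( grid%Cart(:,i)**2 ))
  end do
  print *, grid%len
end program repro

Saved as free-form source (e.g. repro.f90), this isolates the construct 
from the rest of wplot.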


Best regards,

Michael Fechtelkord


--
Dr. Michael Fechtelkord

Institut für Geologie, Mineralogie und Geophysik
Ruhr-Universität Bochum
Universitätsstr. 150
D-44780 Bochum

Phone: +49 (234) 32-24380
Fax:  +49 (234) 32-04380
Email: michael.fechtelk...@ruhr-uni-bochum.de
Web Page: 
https://www.ruhr-uni-bochum.de/kristallographie/kc/mitarbeiter/fechtelkord/

___
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at:  
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html


Re: [Wien] [WIEN2k] forrtl IO error in x_nmr_lapw for Heavy metal structures (TlF3, HgF2)

2023-11-16 Thread Michael Fechtelkord via Wien

Dear Prof. Blaha,


I recompiled lapw1 with LOMAX = 4, but then the scf cycle fails. The real 
problem was the cut-off energy of -11 Ry used in init_lapw. That also 
introduces too many orbitals in x_nmr -mode in1 and QTL-B errors in the 
first two loops. After using the default value of -6 Ry for the core / 
valence separation, the scf cycles converge without QTL-B errors, the 
x_nmr initialization starts with fewer atomic orbitals, and the 
calculations (x_nmr -p) no longer produce forrtl I/O errors.
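
For reference, the recompile route mentioned above boils down to a 
one-parameter change. A sketch of the relevant line (the exact layout of 
param.inc differs between releases, so search for LOMAX in the sources 
as suggested below; the surrounding file content is not shown here):

C     in SRC_lapw1/param.inc (and the corresponding modules of lapw2/nmr):
C     raising LOMAX from its default 3 to 4 allows NMR local orbitals up
C     to l=4, which the 4f-as-valence setup would require
      PARAMETER (LOMAX=4)

As described above, though, the cleaner fix in this case was the default 
-6 Ry core/valence separation rather than the recompile.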



Thanks again for the help!


Best regards,

Michael Fechtelkord


Am 12.11.2023 um 23:28 schrieb Peter Blaha:

Now that I've seen your in1 file, the solution is probably very simple:

I did not know that you included the 4f states of Tl (near -8 Ry) as 
valence.
By default, the nmr code constructs NMR local orbitals up to 
"l-exception" + 1, i.e. up to l=4 when you have l=3 states listed in 
the regular case.in1.


While this is possible, it requires recompiling lapw1/2,nmr with a 
modified parameter LOMAX = 4 (param.inc in lapw1, in other codes in 
modules - do a search).


This is necessary if you handle 4f elements or early 5d metals; 
however, I very much doubt that it is a good idea to include the 4f 
states for Tl (with RMT=2.5) as valence. I would not use -ecut -11.
All it produces is noise, as the 4f convergence can be quite 
problematic and SO effects might be of importance.


Best regards
Peter Blaha

Am 12.11.2023 um 22:12 schrieb Michael Fechtelkord:

Dear Prof. Blaha,

first of all, many thanks for the quick help, even on the weekend. 
Attached are the requested data; I proceeded as follows:


in the directory TlF3:

1) cif2struct TlF3.cif

2) checked and post-edited with the struct generator in w2web

3) set RMT with 0% reduction in the w2web struct generator (set 
automatically RMT and continue editing)


4) finalized the struct file (save file and cleanup)

then, in the terminal window:

5) init_lapw -b -rkmax 7 -numk 1000 -ecut -11 (finished with ok)

6) run_lapw -p -ec 0.0001 -cc 0.0001 (converged after about 13 cycles)

7) save_lapw TlF3_pbe_rkmax_7_numk_1000_ecut_11_cc_0001

8) x kgen with 1 k points (I also tried it with fewer; that is 
probably not the cause)


9)  x_nmr_lapw -mode in1

10)  x_nmr_lapw -p

For completeness, I also attach the cif file and the machines file. 
NMATMAX is 4, NUME 6000, OMP_NUM_THREADS 2



Should you need any additional data, just send me a short note.


Many thanks in advance and a good start to the week,

Michael Fechtelkord





--
Dr. Michael Fechtelkord

Institut für Geologie, Mineralogie und Geophysik
Ruhr-Universität Bochum
Universitätsstr. 150
D-44780 Bochum

Phone: +49 (234) 32-24380
Fax:  +49 (234) 32-04380
Email: michael.fechtelk...@ruhr-uni-bochum.de
Web Page: 
https://www.ruhr-uni-bochum.de/kristallographie/kc/mitarbeiter/fechtelkord/

___
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at:  
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html


Re: [Wien] [WIEN2k] forrtl IO error in x_nmr_lapw for Heavy metal structures (TlF3, HgF2)

2023-11-13 Thread Michael Fechtelkord via Wien

Dear Prof. Blaha,


thanks for the fast reply. I will try that later. Currently calculations 
are running. I wanted to calculate the 19F chemical shift for TlF3 just 
as a model compound for experimental / computational shift correlations. 
So it is not that important for my work.



Thanks again and best regards,

Michael Fechtelkord


Am 12.11.2023 um 23:28 schrieb Peter Blaha:

Now that I've seen your in1 file, the solution is probably very simple:

I did not know that you included the 4f states of Tl (near -8 Ry) as 
valence.
By default, the nmr code constructs NMR local orbitals up to 
"l-exception" + 1, i.e. up to l=4 when you have l=3 states listed in 
the regular case.in1.


While this is possible, it requires recompiling lapw1/2,nmr with a 
modified parameter LOMAX = 4 (param.inc in lapw1, in other codes in 
modules - do a search).


This is necessary if you handle 4f elements or early 5d metals; 
however, I very much doubt that it is a good idea to include the 4f 
states for Tl (with RMT=2.5) as valence. I would not use -ecut -11.
All it produces is noise, as the 4f convergence can be quite 
problematic and SO effects might be of importance.


Best regards
Peter Blaha

Am 12.11.2023 um 22:12 schrieb Michael Fechtelkord:

Dear Prof. Blaha,

first of all, many thanks for the quick help, even on the weekend. 
Attached are the requested data; I proceeded as follows:


in the directory TlF3:

1) cif2struct TlF3.cif

2) checked and post-edited with the struct generator in w2web

3) set RMT with 0% reduction in the w2web struct generator (set 
automatically RMT and continue editing)


4) finalized the struct file (save file and cleanup)

then, in the terminal window:

5) init_lapw -b -rkmax 7 -numk 1000 -ecut -11 (finished with ok)

6) run_lapw -p -ec 0.0001 -cc 0.0001 (converged after about 13 cycles)

7) save_lapw TlF3_pbe_rkmax_7_numk_1000_ecut_11_cc_0001

8) x kgen with 1 k points (I also tried it with fewer; that is 
probably not the cause)


9)  x_nmr_lapw -mode in1

10)  x_nmr_lapw -p

For completeness, I also attach the cif file and the machines file. 
NMATMAX is 4, NUME 6000, OMP_NUM_THREADS 2



Should you need any additional data, just send me a short note.


Many thanks in advance and a good start to the week,

Michael Fechtelkord





--
Dr. Michael Fechtelkord

Institut für Geologie, Mineralogie und Geophysik
Ruhr-Universität Bochum
Universitätsstr. 150
D-44780 Bochum

Phone: +49 (234) 32-24380
Fax:  +49 (234) 32-04380
Email: michael.fechtelk...@ruhr-uni-bochum.de
Web Page: 
https://www.ruhr-uni-bochum.de/kristallographie/kc/mitarbeiter/fechtelkord/

___
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at:  
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html


Re: [Wien] [WIEN2k] forrtl IO error in x_nmr_lapw for Heavy metal structures (TlF3, HgF2)

2023-11-12 Thread Michael Fechtelkord via Wien

I checked TlF3.in1_nmr  and  TlF3/nmr_q0/nmr_q0.in1. They are identical.


I will send you the requested files and a description of what I did 
directly as soon as possible.



Best regards,

Michael Fechtelkord


Am 12.11.2023 um 18:27 schrieb Peter Blaha:

I've done NMR for TlCl or TlBr previously. No problem.

Are TlF3.in1_nmr  and  TlF3/nmr_q0/nmr_q0.in1   identical??

Please send the struct file and the case.in1_nmr to my private email, 
together with a description of what you did.


The error is in lapw1 when it tries to read the case.in1 file. So 
there should be a problem with the case.in1 file or something with 
your lapw1 version.


Regards


Am 12.11.2023 um 12:36 schrieb Michael Fechtelkord via Wien:

Hello Prof. Blaha,


thanks for the reply ... I did run x_nmr -mode in1. I checked the 
case.in1_nmr file and did not find anything suspicious.


I can send the file by direct e-mail if you like. I do not want to 
make the messages for the mailing list unnecessarily long.



Best regards,

Michael Fechtelkord


Peter Blaha wrote on Sat Nov 11 18:26:57 CET 2023:

Did you forget to run   x_nmr -mode in1 ???

The error is in lapw1; it cannot read the in1 file. All other errors are
follow-up ...

One needs to inspect case.in1_nmr
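
As a side note on what forrtl severe (59) means, here is a tiny 
illustration (nothing WIEN2k-specific; the unit number and text are made 
up): a list-directed READ aborts exactly like this when a record cannot 
be parsed as the expected type, which is why a malformed in1 file is the 
prime suspect.

program listio59
  ! illustration of forrtl severe (59): list-directed input that does
  ! not parse as the requested type; iostat= turns the abort into a code
  implicit none
  integer :: n, ios
  open(10, status='scratch')
  write(10, '(a)') 'CONT   <- text where a number is expected'
  rewind(10)
  read(10, *, iostat=ios) n   ! without iostat= this stops with error 59
  print *, 'iostat =', ios    ! a positive value flags the syntax error
end program listio59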

--
Dr. Michael Fechtelkord

Institut für Geologie, Mineralogie und Geophysik
Ruhr-Universität Bochum
Universitätsstr. 150
D-44780 Bochum

Phone: +49 (234) 32-24380
Fax:  +49 (234) 32-04380
Email: michael.fechtelk...@ruhr-uni-bochum.de
Web Page: 
https://www.ruhr-uni-bochum.de/kristallographie/kc/mitarbeiter/fechtelkord/



___
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at: 
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html



--
Dr. Michael Fechtelkord

Institut für Geologie, Mineralogie und Geophysik
Ruhr-Universität Bochum
Universitätsstr. 150
D-44780 Bochum

Phone: +49 (234) 32-24380
Fax:  +49 (234) 32-04380
Email: michael.fechtelk...@ruhr-uni-bochum.de
Web Page: 
https://www.ruhr-uni-bochum.de/kristallographie/kc/mitarbeiter/fechtelkord/

___
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at:  
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html


[Wien] [WIEN2k] forrtl IO error in x_nmr_lapw for Heavy metal structures (TlF3, HgF2)

2023-11-12 Thread Michael Fechtelkord via Wien

Hello Prof. Blaha,


thanks for the reply ... I did run x_nmr -mode in1. I checked the 
case.in1_nmr file and did not find anything suspicious.


I can send the file by direct e-mail if you like. I do not want to make 
the messages for the mailing list unnecessarily long.



Best regards,

Michael Fechtelkord


Peter Blaha wrote on Sat Nov 11 18:26:57 CET 2023:


Did you forget to run   x_nmr -mode in1 ???

The error is in lapw1; it cannot read the in1 file. All other errors are
follow-up ...

One needs to inspect case.in1_nmr

--
Dr. Michael Fechtelkord

Institut für Geologie, Mineralogie und Geophysik
Ruhr-Universität Bochum
Universitätsstr. 150
D-44780 Bochum

Phone: +49 (234) 32-24380
Fax:  +49 (234) 32-04380
Email: michael.fechtelk...@ruhr-uni-bochum.de
Web Page: 
https://www.ruhr-uni-bochum.de/kristallographie/kc/mitarbeiter/fechtelkord/
___
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at:  
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html


[Wien] [WIEN2k] forrtl IO error in x_nmr_lapw for Heavy metal structures (TlF3, HgF2)

2023-11-11 Thread Michael Fechtelkord via Wien

Hello all,


I got a Fortran error during the lapw1 / lapw2 subroutines in the 
x_nmr_lapw script. The structures are simple (two atoms, mostly cubic 
Fm-3m) but contain heavy-metal atoms like Hg or Tl. I am interested in 
the theoretical 19F chemical shift, to compare with the experimental one.


The scf cycles converge after initialization (RMT reduction 0%, rkmax 7, 
ecut -11, 1000 k points, PBE, cc 0.0001, ec 0.0001).


The nmr initialization works fine with default parameters; the k mesh 
was set to 100 k points. The I/O errors are listed as follows:


klist    ready

nmr:  klists  done

cd ./nmr_q0  ...  x lapw1 -nmr    -scratch /scratch/WIEN2K/
 forrtl: severe (59): list-directed I/O syntax error, unit 5, file 
/home/nmr/WIEN2k/19F_shifts_fluorides/TlF3/nmr_q0/nmr_q0.in1

Image  PC    Routine Line    Source
lapw1  004DD47E  Unknown Unknown  Unknown
lapw1  004DC95C  Unknown Unknown  Unknown
lapw1  0042DEBC  find_nloat_ 15  find_nloat_tmp_.F
lapw1  0045CF17  inilpw_ 256  inilpw.f
lapw1  004617D1  MAIN__ 48  lapw1_tmp_.F
lapw1  00405B4D  Unknown Unknown  Unknown
libc-2.31.so   14D053D9E24D  __libc_start_main Unknown  Unknown
lapw1  00405A7A  Unknown Unknown  Unknown
0.004u 0.004s 0:00.02 0.0%  0+0k 16+8io 1pf+0w
error: command   /usr/local/WIEN2K/lapw1 lapw1.def   failed



cd ./nmr_q0  ...  x lapw2  -fermi   -scratch /scratch/WIEN2K/
forrtl: severe (24): end-of-file during read, unit 30, file 
/home/nmr/WIEN2k/19F_shifts_fluorides/TlF3/nmr_q0/nmr_q0.energy

Image  PC    Routine Line    Source
lapw2  0050D0E6  Unknown Unknown  Unknown
lapw2  00443014  fermi_ 48  fermi_tmp_.F
lapw2  00496ED7  MAIN__ 416  lapw2_tmp_.F
lapw2  00404ACD  Unknown Unknown  Unknown
libc-2.31.so   14573490924D  __libc_start_main Unknown  Unknown
lapw2  004049FA  Unknown Unknown  Unknown
0.010u 0.007s 0:00.02 50.0% 0+0k 0+320io 1pf+0w
error: command   /usr/local/WIEN2K/lapw2 lapw2.def   failed

...


lapw2    ready

cd ./  ...  x lcore  -f TlF3
 CORE  END
0.023u 0.003s 0:00.02 100.0%    0+0k 0+1592io 1pf+0w

lcore      ready


 EXECUTING: /usr/local/WIEN2K/nmr -case TlF3 -mode current 
-green -scratch /scratch/WIEN2K/   -noco


forrtl: severe (24): end-of-file during read, unit 11, file 
/scratch/WIEN2K/nmr_q0.vector

Image  PC    Routine Line    Source
nmr    00544843  Unknown Unknown  Unknown
nmr    0041BA19  read_vector0_ 21  read_vector_tmp_.F
nmr    00467106  make_current_ 35  make_current_tmp_.F
nmr    0041B706  MAIN__ 28  nmr.f
nmr    0040468D  Unknown Unknown  Unknown
libc-2.31.so   146A73B0924D  __libc_start_main Unknown  Unknown
nmr    004045BA  Unknown Unknown  Unknown

stop error

I don't know whether the nmr routine has problems handling the heavy 
atoms or whether I just did something wrong. Calculations with lighter 
atoms work well (AlF3, KAlF4, Na2AlF6, etc.).



Best regards,

Michael Fechtelkord

--
Dr. Michael Fechtelkord

Institut für Geologie, Mineralogie und Geophysik
Ruhr-Universität Bochum
Universitätsstr. 150
D-44780 Bochum

Phone: +49 (234) 32-24380
Fax:  +49 (234) 32-04380
Email: michael.fechtelk...@ruhr-uni-bochum.de
Web Page: 
https://www.ruhr-uni-bochum.de/kristallographie/kc/mitarbeiter/fechtelkord/

___
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at:  
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html