Re: [Wien] Segmentation fault (core dumped) error in w2w with spin-orbit

2019-07-26 Thread Kyohoon Ahn
Dear Fhokrul,


Thank you for your kind response :)

Then, shall we try to use "x joinvec"?



In my case, the following procedure works fine for me (for a case without
inversion symmetry):



### PART 1. WIEN2K   ###


 \cp case.vns case.vnsup
 \cp case.vns case.vnsdn
 \cp case.vsp case.vspup
 \cp case.vsp case.vspdn

 x kgen -fbz (no shift)

 x lapw1 -up -c -p
 x lapw1 -dn -c -p
 x lapwso -up -c -p

 x joinvec -up -so
 x joinvec -dn -so

# Now we can delete the useless vector files:
# rm *.vectorup* -f
# rm *.vectordn* -f
# rm *.vectorsoup_* -f
# rm *.vectorsodn_* -f
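To reduce typing, the PART 1 sequence above can be wrapped in a small shell function. This is only a sketch: it assumes WIEN2k's `x` driver is installed and the usual case.* files are in place, and a guard makes it harmless to source on a machine without WIEN2k.

```shell
# Sketch of PART 1 as a POSIX shell function. Assumes the current
# directory is a converged WIEN2k case directory; all file names follow
# the usual case.* conventions.
part1_w2w_prep() {
    # Bail out gracefully if WIEN2k's 'x' driver is not installed.
    command -v x >/dev/null 2>&1 || { echo "WIEN2k not in PATH"; return 0; }
    c=$(basename "$PWD")

    # Duplicate the non-spin-polarized potentials under up/dn names.
    cp "$c.vns" "$c.vnsup"
    cp "$c.vns" "$c.vnsdn"
    cp "$c.vsp" "$c.vspup"
    cp "$c.vsp" "$c.vspdn"

    x kgen -fbz              # full Brillouin zone, no shift

    x lapw1 -up -c -p
    x lapw1 -dn -c -p
    x lapwso -up -c -p

    # Merge the per-process (parallel) vector files into single files.
    x joinvec -up -so
    x joinvec -dn -so
}
```

Calling `part1_w2w_prep` inside a case directory then runs the steps in order; without WIEN2k it only prints a message.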




### PART 2. WIEN2WANNIER & WANNIER90 ###


 write_inwf -up
 write_inwf -dn
 write_win -up

 write case.fermiup and case.fermidn

 x wannier90 -pp
 x w2w -up -c -so (please note: there should be no "-p" here when using joinvec)
 x w2w -dn -c -so
 x wannier90 -so




Have a nice day!

Best regards,


- Kyohoon

On Fri, Jul 26, 2019 at 1:10 AM, Md. Fhokrul Islam wrote:

> Hi Kyohoon,
>
> 1) I think the -c switch is the default for spin-orbit calculations, but I
> tried both with and without -c; it doesn't make any difference.
>
> 2) No, there is no case.in1 file, only case.in1c.
>
> 3) I did rerun the calculation the same way you did, but it still doesn't
> create the case.amn* or case.mmn* files (they are empty). It creates the
> upw2w.error file right away with the single message: "Error in W2W".
> Again, the -c switch doesn't make any difference.
>
> By the way, these are parallel calculations, so I have the -p switch in all
> steps of the calculation.
>
>
> Thanks,
> Fhokrul
> --
> *From:* Wien  on behalf of
> Kyohoon Ahn 
> *Sent:* Thursday, July 25, 2019 8:12 PM
> *To:* A Mailing list for WIEN2k users 
> *Subject:* Re: [Wien] Segmentation fault (core dumped) error in w2w with
> spin-orbit
>
> Dear Fhokrul,
>
> Your procedure looks fine.
>
>
>
> Here I have some questions:
>
> (1) Is there no [-c] in your lapwso?
>
> (2) Are there both [in1] and [in1c] in your directory?
> If yes, could you delete the [in1] before the [lapw1~lapwso] steps?
>
> (3) Could you try [x wannier90 -pp] just before [x w2w]?
> And could you try [x w2w -up -c -so] instead of [x w2w -up -so]?
> (<<< I think they are the same, but just for checking ...)
>
>
>
> Best regards,
>
> - Kyohoon
>
> On Thu, Jul 25, 2019 at 6:49 PM, Md. Fhokrul Islam wrote:
>
> Dear Kyohoon,
>
> Thank you for your reply. Our procedures are almost the same. First I use
> prepare_w2wdir [directory name] and then I initialize the calculation with
> init_w2w -up, so it does all the steps you mentioned, both for the DFT part
> and for Wannier90.
>
> After initialization I run:
>
> x lapw1 -c -up   (my system doesn't have inversion symmetry)
> x lapw1 -c -dn
>
> x lapwso -up
>
> x w2w -up -so   ---> job crashes here with core dumped error
> x w2w -dn -so
>
> x wannier90 -so
>
> I have used this procedure for several other calculations with different
> systems and it worked. But for some reason, for this system it works only
> up to lapwso. I will check if the problem goes away with a smaller k-mesh.
>
>
> Regards,
> Fhokrul
>
>
>
>
>
>
> --
> *From:* Wien  on behalf of
> Kyohoon Ahn 
> *Sent:* Thursday, July 25, 2019 3:53 PM
> *To:* A Mailing list for WIEN2k users 
> *Subject:* Re: [Wien] Segmentation fault (core dumped) error in w2w with
> spin-orbit
>
> Dear Fhokrul,
>
>
> I also usually do some calculations with [run_lapw -so].
> (i.e., not runsp_lapw)
>
> So maybe I can share my experiences.
>
>
> In my case, the procedure is as follows:
>
>
> 
> ### PART 1. WIEN2K   ###
> 
>
>  \cp case.vns case.vnsup
>  \cp case.vns case.vnsdn
>  \cp case.vsp case.vspup
>  \cp case.vsp case.vspdn
>
>  x kgen -fbz (no shift)
>
>  x lapw1 -up (and with the additional options from the dayfile)
>  x lapw1 -dn
>  x lapwso -up
>
>
> 
> ### PART 2. WIEN2WANNIER & WANNIER90 ###
> 
>
>  write_inwf -up
>  write_inwf -dn
>  write_win -up
>
>  write case.fermiup and case.fermidn
>
>  x wannier90 -pp
>  x w2w -so -up
>  x w2w -so -dn
>  x wannier90 -so
>
>
> The above procedure works fine for me.
> Is there any difference from yours?
>
>
> Have a nice day!
>
> Best regards,
>
>
> - Kyohoon
>
> On Tue, Jul 23, 2019 at 3:04 PM, Md. Fhokrul Islam wrote:
>
> Hi Wien2k users and developers,
>
> I encountered a couple of problems running w2w with SO for a tetragonal
> Cd3As2 system (with some impurity). I am using WIEN2k 18.2.
>
> 1. This is a non-magnetic system, so I ran spin-unpolarized
> calculations (x lapw1, x lapwso) following a note by Elias Assmann, but it
> crashes when it starts
>
> x w2w -so -up
> x w2w -so -dn
>
> It asks for spin-polarized files vspup and vspdn f

[Wien] lapw2c tries to read an anomalous amount of data

2019-07-26 Thread Luc Fruchter
I think it is unlikely related to a specific machine or OS problem: I
encountered the same situation with different machine types, different
OSes (RedHat Scientific Linux, Ubuntu), and different Intel compiler
versions (from 2017 to 2019). But it could be some common configuration problem.


I don't use MPI, and yes, there are four 9.6 GB .vector files (4 CPUs
are used), whose total size exceeds the total memory (24 GB), while
the four lapw1 processes only used 20 GB.


I still have to figure out the memory needed by the lapw2 processes, even
when it is sufficient for lapw1. I will try to reduce the number of
CPUs to see whether the total .vector file size is the problem for lapw2.
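As a quick pre-flight check (an illustrative shell helper, not part of WIEN2k), the total .vector size can be compared against the installed RAM before starting the k-parallel lapw2 steps:

```shell
# Total size (bytes) of all *.vector* files in a case directory.
vector_total_bytes() {
    cat "${1:-.}"/*.vector* 2>/dev/null | wc -c
}

# The situation above: 4 files x 9.6 GB = 38.4 GB of vector data, but
# only 24 GB of RAM -> the OS cannot cache the files, and the four
# parallel lapw2 processes end up hammering the disk on every re-read.
total_gb=38
ram_gb=24
[ "$total_gb" -gt "$ram_gb" ] && echo "vector files exceed RAM"
```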

___
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at:  
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html


Re: [Wien] lapw2c tries to read an anomalous amount of data

2019-07-26 Thread Laurence Marks
Aaaah, it sounds like you are swapping, either each core or the different
cores (assuming that you have shared memory). That is, since all four cores
cannot run at the same time with the memory available, they are being
swapped to and from swap space (which is probably on disk).

WIEN2k is not well optimized for this type of issue, and memory can often
be a limitation. At one time one of my students was running a large problem on
a supercomputer with 32 cores/node but only 64 GB of memory, and ended up only
being able to use 16-20 cores.

The mpi versions may work better for you, as they somewhat (not perfectly)
split the code and so can use less memory. Whether this is faster will depend
upon your hardware; I have always found mpi to be faster, but I know that on
some of Peter's computers this is not true.

Or buy some more RAM!

On Fri, Jul 26, 2019 at 8:58 AM Luc Fruchter  wrote:



-- 
Professor Laurence Marks
Department of Materials Science and Engineering
Northwestern University
www.numis.northwestern.edu
Corrosion in 4D: www.numis.northwestern.edu/MURI
Co-Editor, Acta Cryst A
"Research is to see what everybody else has seen, and to think what nobody
else has thought"
Albert Szent-Gyorgi


[Wien] lapw2c tries to read an anomalous amount of data

2019-07-26 Thread Luc Fruchter
Yes, I have shared memory. Swap on disk is disabled, so the system must
manage differently here.


I just wonder now: is there a way to estimate the memory needed for the
lapw2 processes without running scf up to that point? Is it the total
.vector size?



Re: [Wien] lapw2c tries to read an anomalous amount of data

2019-07-26 Thread Laurence Marks
If I remember right, the largest piece of memory is the vector file, so this
should be a reasonable estimate.

During the scf convergence you can reduce this by *carefully* changing the
numbers at the end of case.in1(c). You don't really need to go to 1.5 Ryd
above E_F (and you can similarly reduce nband for ELPA). For DOS etc. later,
you increase these values and rerun lapw1 etc.

On Fri, Jul 26, 2019 at 9:27 AM Luc Fruchter  wrote:





Re: [Wien] lapw2c tries to read an anomalous amount of data

2019-07-26 Thread Pavel Ondračka
The easiest way to reduce memory pressure is to reduce the number
of k-points run in parallel, so you should experiment with the other
parallelization options. If running 4 k-points in parallel does not fit
in your memory (or is slow), try running with just two, but with 2 OpenMP
threads per process. Using MPI is another option and also reduces the
memory required per CPU.
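Concretely (the host name and file contents here are an assumed sketch; see the WIEN2k user guide for the authoritative .machines syntax), 2 k-parallel jobs with 2 OpenMP threads each on a 4-core shared-memory machine could look like:

```shell
# Two k-parallel job slots on the local machine, one line per slot.
cat > .machines <<'EOF'
1:localhost
1:localhost
granularity:1
EOF

# Two OpenMP threads per lapw1/lapw2 process: 2 jobs x 2 threads = 4 cores.
export OMP_NUM_THREADS=2

# Then run, e.g.:  run_lapw -p ...   (not executed in this sketch)
```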

On Fri, 2019-07-26 at 09:37 +0100, Laurence Marks wrote:



[Wien] how to control occupancy matrix

2019-07-26 Thread Ranasinghe, Jayangani
Dear wien2k community


What is the procedure for controlling the occupancy matrix in WIEN2k to tackle
the meta-stable states of an f-electron system?


Thank you


Jayangani




Re: [Wien] lapw2c tries to read an anomalous amount of data

2019-07-26 Thread Gavin Abo
Maybe this doesn't help, since I don't remember whether the lapw1 memory usage
is larger than lapw2's or not. However, I have found "x lapw1 -nmat_only"
helpful for estimation in the past:


https://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/msg08136.html


On 7/26/2019 2:27 AM, Luc Fruchter wrote:


Re: [Wien] how to control occupancy matrix

2019-07-26 Thread tran

Hi,

The steps are:

1) Edit the files case.dmatup and case.dmatdn and manually
change the occupation. To understand the format of case.dmatup/dn,
you have to look at how these files are read in
$WIENROOT/SRC_orb/init.f (search for the read(ifile) statements).

2) execute "x orb -up" and "x orb -dn" to generate case.vorbup and
case.vorbdn.

3) Do a calculation with the option -orbc:
runsp_lapw -orbc ...
This calculation will force the system to have the chosen occupation.

4) Save the calculation with save_lapw

5) Do the final calculation with -orb:
runsp_lapw -orb ...
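Steps 2-5 can be strung together as follows (a sketch only; step 1, editing case.dmatup/dn by hand, has to come first, and the save-directory name here is made up). A guard keeps the function harmless on a machine without WIEN2k:

```shell
# Sketch of the constrained-occupation sequence (steps 2-5 above).
fix_occupation() {
    # Bail out gracefully if WIEN2k is not installed.
    command -v runsp_lapw >/dev/null 2>&1 || { echo "WIEN2k not in PATH"; return 0; }

    x orb -up                 # generate case.vorbup from case.dmatup
    x orb -dn                 # generate case.vorbdn from case.dmatdn

    runsp_lapw -orbc          # constrained run: occupation is frozen
    save_lapw -d fixed_occ    # 'fixed_occ' is a made-up directory name

    runsp_lapw -orb           # final run with the orbital potential
}
```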

F. Tran

On Friday 2019-07-26 13:35, Ranasinghe, Jayangani wrote:




Re: [Wien] Segmentation fault (core dumped) error in w2w with spin-orbit

2019-07-26 Thread Md. Fhokrul Islam
Dear Kyohoon,

Thank you for taking the time to help me fix the problem. I started from
scratch and used joinvec to create single vector and energy files as you
suggested, but unfortunately it is still giving me the same error,
"Segmentation fault (core dumped)". I noticed that the case.eigup/dn files,
which are created during 'x w2w' from the case.energysoup/dn files, have some
problem. I tried different k-meshes, and in each case the eigenvalues at the
last k-point of the case.klist file are incomplete. I checked the
case.energysoup/dn files, but there is nothing wrong in those files.

From what I found online, a core dump is associated with a program trying
to access memory that is not there. But I am not sure why the problem appears
only in this particular case.


Regards,
Fhokrul

Re: [Wien] how to control occupancy matrix

2019-07-26 Thread Ranasinghe, Jayangani
Dear Dr. Tran and Dr. Gavin Abo,


Thank you for your responses. I will try those and let you know of any
difficulties.


Thank you

Jayangani









Re: [Wien] lapw2c tries to read an anomalous amount of data

2019-07-26 Thread Peter Blaha

Dear all,

This is a long thread by now, and it comes from insufficient
information at the beginning. I guess we all were thinking of a huge
mpi-calculation ...


The thread should start like:

I'm running a case with XX atoms (matrix-size Y), with/without 
inversion symmetry; NN k-points.
It is running on a single PC (Intel XX with 24 GB RAM and 4 cores) and I 
use WIEN2k_19.1 with ifort/mkl compilation.


I'm running 4 k-parallel jobs and lapw1 runs fine. However, in lapw2 .


Size: lapw2 reads the vector files k-point by k-point (one at a time),
so even with many k-points the size of the vector file has nothing to do
with the memory use of lapw2.
This size will be NMAT*NUME only. There are other large arrays of size
(NMAT, lm, NUME) (ALMs, BLMs, CLMs, ...).

However, lapw2 runs a loop over atoms, so when you have more than
1 k-point it needs to read these vector files NATOM times.
And with 4 k-parallel threads this is a lot of PARALLEL disk I/O for a
poor SATA disk. I can imagine that this is the bottleneck.


Disk I/O can be reduced in at least 2 ways:

As Laurence Marks suggested, reduce E-top in case.in1 such that only
a few unoccupied states are included.


The second, even more efficient way was suggested by Pavel Ondracka: use
OMP parallelization, e.g. export OMP_NUM_THREADS=2 and only 2 k-parallel
jobs. If this is still too much parallel I/O, you could also use 4
OpenMP threads and no k-parallelization at all, but this will
be a little slower.


You may also try the mpi-parallel version, but definitely ONLY if you
have a recent ELPA installed; otherwise it will be much slower. However,
the mpi version of lapw1 needs more memory (but still less than 4
k-parallel lapw1 jobs) ...


Regards


On 26.07.2019 at 10:37, Laurence Marks wrote:



--
Peter BLAHA, Inst. f. Materials Chemistry, TU Vienna, A-1060 Vienna
Phone: +43-1-58801-165300  FAX: +43-1-58801-165982
Email: bl...@theochem.tuwien.ac.at    WIEN2k: http://www.wien2k.at
WWW: http://www.imc.tuwien.ac.at/tc_blaha
