[Wien] MBJ fails to produce the ~0.6 eV gap for the VO2 M1 phase

2020-05-11 Thread Wasim Raja Mondal
Dear Experts,
 I am doing some calculations for the VO2 M1 phase. To obtain
the correct band gap, I applied MBJ, but I get a zero gap. To open the gap,
I increased the c value; with such a large c value my calculation shows no
sign of converging.
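
To be concrete, by "applied MBJ" I mean the usual two-step init_mbj_lapw
procedure, roughly as sketched below (command names from memory of the
userguide; I edit the c value in case.in0abp, please correct me if that is
not the right place):

   run_lapw -cc 0.0001       # converge a normal PBE calculation first
   save_lapw vo2_m1_pbe      # (label is arbitrary)
   init_mbj_lapw             # phase 1: prepare case.in0 for the response files
   run_lapw -i 1 -NI         # one more iteration to create them
   init_mbj_lapw             # phase 2: choose the mBJ parametrization
                             # (parameters, including c, go into case.in0abp)
   run_lapw -NI -cc 0.0001   # converge with the mBJ potential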

I would appreciate any comments and suggestions from the experts.

Regards
Wasim
___
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at:  
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html


Re: [Wien] Out-of-memory problems in parallel jobs

2020-05-11 Thread Laurence Marks
I suggest that you talk to a sysadmin to get some clarification. In
particular, see if this is just memory, or a combination of memory and file
space. From what I can see it is probably memory, but there seems to be
some flexibility in how it is configured.

One other possibility is a memory leak. What mpi are you using?

N.B., I would be a bit concerned that srun is not working for you. Talk to
a sysadmin; you might be running outside/around your memory allocation.

Two relevant sources:
https://community.pivotal.io/s/article/the-application-crashes-with-the-message-cgroup-out-of-memory?language=en_US

https://bugs.schedmd.com/show_bug.cgi?id=2614
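
If it is the cgroup limit, standard Slurm tools will show what was requested
versus what was actually used; a quick sketch (JOBID is a placeholder):

   # memory requested vs. peak memory actually used by each step
   sacct -j JOBID --format=JobID,State,Elapsed,ReqMem,MaxRSS
   # the limits Slurm attached to the job
   scontrol show job JOBID | grep -i -E 'mem|tres'
   # if the request is too small, raise it in the batch script, e.g.
   #   #SBATCH --mem=50G          (per node)   or
   #   #SBATCH --mem-per-cpu=6G   (per core)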

On Mon, May 11, 2020 at 10:55 AM MA Weiliang wrote:

> Dear Wien users,
>
> The WIEN2k 18.2 I use was compiled on a shared-memory cluster with the Intel
> compiler 2019, MKL 2019 and Intel MPI 2019. Because 'srun' does not give a
> correct parallel calculation on this system, I commented out the line
> setenv WIEN_MPIRUN "srun -K -N_nodes_ -n_NP_ -r_offset_ _PINNING_ _EXEC_"
> in the parallel_options file and used the second choice,
> mpirun='mpirun -np _NP_ _EXEC_'.
>
> Parallel jobs run well in the SCF cycles, but when I increase the number of
> k-points (to about 5000) to calculate the DOS, lapw1 crashes halfway with
> the cgroup out-of-memory handler. That is very strange: with the same
> parameters, the job runs fine on a single core.
>
> A similar problem occurs in the nlvdw_mpi step. I also increased the memory
> up to 50 GB for this cell of fewer than 10 atoms, but it still did not work.
>
> [Parallel job output:]
> starting parallel lapw1 at lun. mai 11 16:24:48 CEST 2020
> ->  starting parallel LAPW1 jobs at lun. mai 11 16:24:48 CEST 2020
> running LAPW1 in parallel mode (using .machines)
> 1 number_of_parallel_jobs
> [1] 12604
> [1]  + Done  ( cd $PWD; $t $ttt; rm -f
> .lock_$lockfile[$p] ) >> .time1_$loop
>  lame25 lame25 lame25 lame25 lame25 lame25 lame25 lame25(5038)
> 4641.609u 123.862s 10:00.69 793.3%   0+0k 489064+2505080io 7642pf+0w
>    Summary of lapw1para:
>    lame25k=0 user=0  wallclock=0
> **  LAPW1 crashed!
> 4643.674u 126.539s 10:03.50 790.4%  0+0k 490512+2507712io 7658pf+0w
> error: command   /home/mcsete/work/wma/Package/wien2k.18n/lapw1para
> lapw1.def   failed
> slurmstepd: error: Detected 1 oom-kill event(s) in step 86112.batch
> cgroup. Some of your processes may have been killed by the cgroup
> out-of-memory handler.
>
> [Single mode output: ]
>  LAPW1 END
> 11651.205u 178.664s 3:23:49.07 96.7%0+0k 19808+22433688io 26pf+0w
>
> Do you have any ideas? Thank you in advance!
>
> Best regards,
> Liang


-- 
Professor Laurence Marks
Department of Materials Science and Engineering
Northwestern University
www.numis.northwestern.edu
Corrosion in 4D: www.numis.northwestern.edu/MURI
Co-Editor, Acta Cryst A
"Research is to see what everybody else has seen, and to think what nobody
else has thought"
Albert Szent-Gyorgi


[Wien] Out-of-memory problems in parallel jobs

2020-05-11 Thread MA Weiliang
Dear Wien users,

The WIEN2k 18.2 I use was compiled on a shared-memory cluster with the Intel
compiler 2019, MKL 2019 and Intel MPI 2019. Because 'srun' does not give a
correct parallel calculation on this system, I commented out the line
setenv WIEN_MPIRUN "srun -K -N_nodes_ -n_NP_ -r_offset_ _PINNING_ _EXEC_"
in the parallel_options file and used the second choice,
mpirun='mpirun -np _NP_ _EXEC_'.
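
In other words, the relevant part of parallel_options now looks roughly like
this (csh syntax, reconstructed from the two alternatives above; the exact
pinning arguments depend on how siteconfig was run):

   #setenv WIEN_MPIRUN "srun -K -N_nodes_ -n_NP_ -r_offset_ _PINNING_ _EXEC_"
   setenv WIEN_MPIRUN "mpirun -np _NP_ _EXEC_"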

Parallel jobs run well in the SCF cycles, but when I increase the number of
k-points (to about 5000) to calculate the DOS, lapw1 crashes halfway with the
cgroup out-of-memory handler. That is very strange: with the same parameters,
the job runs fine on a single core.

A similar problem occurs in the nlvdw_mpi step. I also increased the memory up
to 50 GB for this cell of fewer than 10 atoms, but it still did not work.
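
For completeness, the .machines file behind these runs corresponds to one
k-parallel job running 8 MPI processes on lame25, i.e. roughly (layout assumed
from the lapw1para output below; node names are of course site-specific):

   granularity:1
   1:lame25:8
   extrafine:1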

[Parallel job output:]
starting parallel lapw1 at lun. mai 11 16:24:48 CEST 2020
->  starting parallel LAPW1 jobs at lun. mai 11 16:24:48 CEST 2020
running LAPW1 in parallel mode (using .machines)
1 number_of_parallel_jobs
[1] 12604
[1]  + Done  ( cd $PWD; $t $ttt; rm -f 
.lock_$lockfile[$p] ) >> .time1_$loop
 lame25 lame25 lame25 lame25 lame25 lame25 lame25 lame25(5038) 4641.609u 
123.862s 10:00.69 793.3%   0+0k 489064+2505080io 7642pf+0w
   Summary of lapw1para:
   lame25k=0 user=0  wallclock=0
**  LAPW1 crashed!
4643.674u 126.539s 10:03.50 790.4%  0+0k 490512+2507712io 7658pf+0w
error: command   /home/mcsete/work/wma/Package/wien2k.18n/lapw1para lapw1.def   
failed
slurmstepd: error: Detected 1 oom-kill event(s) in step 86112.batch cgroup. 
Some of your processes may have been killed by the cgroup out-of-memory handler.

[Single mode output: ]
 LAPW1 END
11651.205u 178.664s 3:23:49.07 96.7%0+0k 19808+22433688io 26pf+0w

Do you have any ideas? Thank you in advance!

Best regards,
Liang


Re: [Wien] Characters of atoms in the fold2bloch bands.

2020-05-11 Thread Oleg Rubel

Hi Artem,

What you describe sounds very interesting: it would be a QTL-resolved
unfolded band structure. Are you doing the calculations with SOC?


Thanks
Oleg
On 2020-05-10 8:51 p.m., Артем Тарасов wrote:

Hello Oleg,

I am sorry that my question was not clear enough. I will try to describe my
situation in terms of the lapw1, lapw2 and fold2bloch output files. After
"lapw1 -band" completes, I have the energy eigenvalues of all states at the
k-vectors listed in case.klist_band. Then I can use "lapw2 -qtl" to find the
contribution of each atom of the unit cell to every (E,k) state calculated by
lapw1; this gives a table with the columns k-vector, energy, and contribution
(of an atom or one of its orbitals). If I apply fold2bloch to the eigenstates
in case.vector, I obtain the case.f2b file with the columns k-vector, energy,
and spectral Bloch weight. So I was trying to identify the total contribution
of selected atoms to each state listed in case.f2b.

To be honest, I think I have solved this problem, because I obtain realistic
results in my tests. I examined the code of fold2bloch and found that each
k-vector of the original case.klist_band file is transformed into a certain
number of k-vectors in the case.f2b file, and this number is determined by
the size of the supercell (for example, 1 k-vector of a 4x4x1 cell transforms
into 16 new k-vectors). So, to solve my problem, I simply multiply the
spectral Bloch weights of these 16 k-vectors, which share the same energy
eigenvalue, by the atomic contribution of the folded (E,k) state, and I
repeat this for all spectral Bloch weights of the unfolded (E,k) states. I
suppose this procedure is quite acceptable, and I get good results with it.
By atomic contributions I mean, of course, the partial charges.
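
For concreteness, this multiplication can be sketched with a single awk call
(assumptions: the spectral Bloch weight is the last column of case.f2b, the 16
unfolded entries belonging to one folded state are stored consecutively, and
folded_qtl.dat is a hypothetical file holding one atomic partial charge per
folded (E,k) state in the same order):

   # append a QTL-weighted Bloch weight as an extra column
   awk -v n=16 'NR==FNR { q[NR] = $NF; next }
                { i = int((FNR-1)/n) + 1; print $0, $NF * q[i] }' \
       folded_qtl.dat case.f2b > case.f2b_weighted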


Sincerely yours,
Artem Tarasov
Department of Solid State Electronics
Saint Petersburg State University

