[Wien] [Wien2k Users] Spin orbit coupling of Atom with many nonequivalent positions

2010-01-30 Thread Ghosh SUDDHASATTWA
Dear Wien2k users, 

Consider a heavy element A that occupies, let us say, 5 different
non-equivalent positions in the crystal lattice.

During the spin-orbit coupling initialization, we have to modify the
case.inso file.

If we include only 4 of these atoms for SO coupling and leave out one of the
non-equivalent positions, will it really matter for the :ENE value in the
SCF cycle?
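For example, if I understand the case.inso format correctly, switching SO off
for (say) the fifth non-equivalent position would be done in the last line of
case.inso (this is only my reading of the template; the exact format is
described in the userguide):

 1  5                         number of atoms for which SO is switch off; atoms

where the first number says for how many atoms SO is switched off, followed by
their indices.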

 

Suddhasattwa Ghosh 

 

 



[Wien] Problems of HCP Tb!!! (the second time)

2010-01-30 Thread Hui Wang
*
 Spin-polarized + s-o calculation, M||  0.000  0.000  1.000
  Calculation of , X=c*Xr(r)*Xls(l,s)
  Xr(r)=   I
  Xls(l,s) = L(dzeta)
  c=  1.0
  atom   Lup  dn total
:XOP  1  3-0.00187 0.00626 0.00439
*
 
 The orbital moment is only 0.00439 uB, which left me absolutely confused.
 In short, although I adopted Stefaan's suggestion (perhaps I applied it in
the wrong way or did not understand it well), the calculated total magnetic
moment of a single Tb atom is still far too small compared with the
experimental magnetic moment of about 10 uB.
 
 So my questions are as follows:
   (1) What is my problem? I have been puzzled by this for almost a month, and this
question is the most important one. If your time permits, can anyone give me
a detailed reply (step by step would be greatly appreciated) about how to
correctly calculate the total magnetic moment of a single Tb atom?
   (2) Is there any difference in the priority order of +SO and +U? (The
userguide, page 85, says: first +SO, then +U.) Some systems do not have
spin-orbit coupling.
   (3) What is the difference between case.indm and case.indmc? In my case, their
contents are exactly the same; only the names differ. (I already know that
case.indmc is used for spin-orbit coupling and case.indm for GGA+U.)
   (4) I saw a mailing-list answer by Stefaan which said: "Convergence with
LDA+U might be problematic, that often happens. Converge with LDA first,
then save_lapw. Now converge with -orbc, then save_lapw. Finally do
unconstrained LDA+U (-orb). If it doesn't work even that way, then consider
cranking up the U slowly (first 0.1, converge, save, then 0.2, etc.)."
My question is: when should one use -orbc, and when -orb? In my case, I
just used -orb. (A command-line sketch of this staged procedure is given right
after this list.)
   (5) Stefaan told me that I have to change the last line to '1 3' after
convergence and run 'x lapwdm -c -so -up' to find the orbital moment in
case.scfdmup. My question is: when should '0 0' be changed to '1 3' in case.indmc?
   (6) Do you think it is very important to use hybrid functionals for the HCP Tb
system?
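For reference, this is how I read Stefaan's staged procedure as plain command
lines (only a sketch: the save_lapw labels are arbitrary, -so is left out for
brevity, and I may well have a flag wrong):

runsp -ec 0.0001 -cc 0.0001           # step 1: converge plain LDA/GGA
save_lapw Tb_plain                    # keep that result
runsp -ec 0.0001 -cc 0.0001 -orbc     # step 2: constrained LDA+U (-orbc)
save_lapw Tb_orbc
runsp -ec 0.0001 -cc 0.0001 -orb      # step 3: unconstrained LDA+U (-orb)
# if step 3 still fails: set U = 0.1 Ry in case.inorb, converge, save_lapw,
# then increase U gradually (0.2, 0.3, ...) up to the intended value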
 
I am sorry to cause you so much trouble, but I really need your help with
HCP Tb, which has puzzled me for almost a month. Your kind help will be
greatly appreciated.
Anyway, many thanks to Stefaan for his suggestions.
I am looking forward to your reply.
Cheers.
 
Yours sincerely
Hui Wang




=
Magnetism and Magnetic Materials Division
Shenyang Materials Science National Laboratory
Institute of Metal Research
Chinese Academy of Sciences
72 Wenhua Road, Shenyang 110016, P. R. China
Tel: +86-24-83978845
PHD: Wanghui
Email: hwang at imr.ac.cn 
= 



[Wien] Problems of HCP Tb!!! (the second time)

2010-01-30 Thread Stefaan Cottenier

> The reason I use *kpoints = 1000* is that I referred to S.
> Cottenier's book --- Density Functional Theory and the Family of
> (L)APW-methods: a step-by-step introduction --- which uses kpoints = 912 for
> hcp-Cd

That is in the irreducible part of the Brillouin zone (= "number of
lines in case.klist"). That corresponds to (roughly) 2 as input for
kgen.

> reasonable. I also used kpoints = 4000 for HCP Tb; the total energy only
> changed by *0.4 mRy*, so kpoints = 1000 is enough.

That might be fine -- depends on the properties you are finally
interested in.
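If you want to check this more carefully, a crude convergence test could look
like the sketch below (assuming kgen accepts the k-point count and the shift
flag on stdin; the save_lapw label is arbitrary):

echo -e "1000\n0" | x kgen          # regenerate the k-mesh, no shift
runsp -ec 0.0001 -cc 0.0001         # reconverge with the same criteria
grep :ENE case.scf | tail -1        # note the final total energy
save_lapw Tb_1000k
# repeat with 2000, 4000, ... k-points and compare :ENE and, more importantly,
# the property you are actually interested in.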

> According to Stefaan's suggestions, I recalculated HCP Tb. (The 2009
> userguide, page 85, says to first use +SO, then +U.)
>  
> *order: GGA==>>GGA+SO==>>GGA+SO+U*
> ** 
> ***(1)*run a regular GGA scf, command line: *runsp -ec 0.0001 -cc
> 0.0001 -i 1000*
>then i got some results as follows:
>:ENE:-46876.362493 Ry
>:MMTOT:  12.01107  uB
>:MMI001: 5.86470   uB
>:MMINT:  0.28167   uB
>  
> ***(2)*based on the GGA scf, I added the spin-orbit coupling.
>case.indmc, case.inso and case.inorb were prepared as follows:
>  
> *case.indmc*
> *
> -9.  Emin cutoff energy
>  1   number of atoms for which density matrix is 
> calculated
>  1  1  3  index of 1st atom, number of L's, L1
>  0 0   r-index, (l,s)index
> *
> *case.inorb  *
> *
>   1  1  0 nmod, natorb, ipr
> PRATT  1.0BROYD/PRATT, mixing
>   1 1 3  iatom nlorb, lorb
>   1  nsic 0..AFM, 1..SIC, 2..HFM
>   0.50 0.00U J (Ry)   Note: we recommend to use U_eff = U-J and J=0
> *
> *case.inso*
> *
> WFFIL
>  4  1  0  llmax,ipr,kpot
>  -9.   2.   emin,emax (output energy window)
>0.  0.  1. direction of magnetization (lattice vectors)
>  1   number of atoms for which RLO is added
>  1   -1.7  0.01   atom number,e-lo,de (case.in1), repeat NX times
>  0 0 0 0 0number of atoms for which SO is switch 
> off; atoms
> *
>  
>run a regular GGA+SO scf, command line: *runsp -ec 0.0001 -cc
> 0.0001 -so -i 1000*
>then I got some results as follows:
>:ENE:-46876.608343 Ry
>:MMTOT:  11.92563  uB
>:MMI001: 5.81891   uB
>:MMINT:  0.28781   uB
>*I added spin orbit coupling, but there is no :ORB information; I
> don't know why?*
> ** 

That is just an output feature. :ORB is written only when LDA+U is used.
If you want to know the orbital moment with SO only, you have to run
lapwdm once after the scf cycle with '1 3'.
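In command form this is just (a sketch, assuming your case.indmc holds the
'0 0' line shown above):

# after the spin-polarized + SO scf cycle has converged:
# edit case.indmc and change the last line from '0 0' to '1 3'
x lapwdm -c -so -up        # one extra lapwdm run for the spin-up density matrix
grep :XOP case.scfdmup     # the orbital moment appears on the :XOP line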

>  *(3)*based on the GGA+SO scf, I then added the U parameter.
> case.inorb and case.indm were prepared as follows:
>  
> *case.indm*
> *
> -9.  Emin cutoff energy
>  1   number of atoms for which density matrix is 
> calculated
>  1  1  3  index of 1st atom, number of L's, L1
>  0 0   r-index, (l,s)index
> *
> *case.inorb  *
> *
>   1  1  0 nmod, natorb, ipr
> PRATT  1.0BROYD/PRATT, mixing
>   1 1 3  iatom nlorb, lorb
>   1  nsic 0..AFM, 1..SIC, 2..HFM
>   0.50 0.00U J (Ry)   Note: we recommend to use U_eff = U-J and J=0
> *
>  
>run a regular GGA+SO+U scf, command line: *runsp -ec 0.0001 -cc
> 0.0001 -so -orb -i 1000*
>then I got some results as follows:
>:ENE:-46876.47444 Ry
>:MMTOT:  13.45593  uB
>:MMI001: 6.25394   uB
>:MMINT:  0.94806   uB
>*:ORB001: 0.00439   uB ( I don't know why it is so small ?)*
> ** 

That can be due to the m-orbitals occupied by the f-electrons of the
minority spin. If one is in m=-2 and the other in m=+2, the orbital
moment is +2-2=0. If they are equally distributed over all m-values
(fractional occupation), the sum is zero too.

You can inspect this by looking at the diagonal elements of the f
density matrix -- you find the full matrix (complex numbers) in
case.dmatup and case.dmatdn.
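As a toy illustration (the occupation numbers below are made up, not read from
your case.dmatup/case.dmatdn):

# orbital moment of one spin channel = sum over m of m * n_m
echo "-3 0  -2 1  -1 0  0 0  1 0  2 1  3 0" | \
awk '{ for (i = 1; i <= NF; i += 2) L += $i * $(i + 1); print "L =", L }'
# prints L = 0: one electron in m=-2 and one in m=+2 cancel exactly, and a
# uniform fractional occupation over all m values sums to zero as well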

If you indeed have a case where the occupation is such that the orbital
moment is zero, you might try to write an occupation into the dmat
files that 

[Wien] Fwd: MPI segmentation fault

2010-01-30 Thread Md. Fhokrul Islam






Hi Marks,

I have followed your suggestions and am now using openmpi 1.4.1 compiled with icc.
I have also compiled fftw with cc instead of gcc and recompiled Wien2k with the
mpirun option in parallel_options:

current:MPIRUN:mpirun -np _NP_ -machinefile _HOSTS_ _EXEC_ -x LD_LIBRARY_PATH

Although I no longer get the segmentation fault, the job still crashes at lapw1
with a different error message. I have pasted case.dayfile and case.error below,
along with the ompi_info and stacksize info. I am not even sure where to look for
the solution. Please let me know if you have any suggestions regarding this MPI
problem.

Thanks,
Fhokrul 

case.dayfile:

cycle 1 (Sat Jan 30 16:49:55 CET 2010)  (200/99 to go)

>   lapw0 -p(16:49:55) starting parallel lapw0 at Sat Jan 30 16:49:56 CET 
> 2010
 .machine0 : 4 processors
1863.235u 21.743s 8:21.32 376.0%0+0k 0+0io 1068pf+0w
>   lapw1  -c -up -p(16:58:17) starting parallel lapw1 at Sat Jan 30 
> 16:58:18 CET 2010
->  starting parallel LAPW1 jobs at Sat Jan 30 16:58:18 CET 2010
running LAPW1 in parallel mode (using .machines)
1 number_of_parallel_jobs
 mn117.mpi mn117.mpi mn117.mpi mn117.mpi(1) 1263.782u 28.214s 36:47.58 
58.5%0+0k 0+0io 49300pf+0w
**  LAPW1 crashed!
1266.358u 37.286s 36:53.31 58.8%0+0k 0+0io 49425pf+0w
error: command   /disk/global/home/eishfh/Wien2k_09_2/lapw1cpara -up -c 
uplapw1.def   failed

Error file:

 LAPW0 END
 LAPW0 END
 LAPW0 END
 LAPW0 END
--
mpirun noticed that process rank 0 with PID 8837 on node mn117.local exited on 
signal 9 (Killed).


[eishfh at milleotto s110]$ ompi_info

                 Package: Open MPI root at milleotto.local Distribution
                Open MPI: 1.4.1
                  Prefix: /sw/pkg/openmpi/1.4.1/intel/11.1
 Configured architecture: x86_64-unknown-linux-gnu
          Configure host: milleotto.local
           Configured by: root
           Configured on: Sat Jan 16 19:40:36 CET 2010
              Built host: milleotto.local
 Fortran90 bindings size: small
              C compiler: icc
     C compiler absolute: /sw/pkg/intel/11.1.064//bin/intel64/icc
            C++ compiler: icpc
   C++ compiler absolute: /sw/pkg/intel/11.1.064//bin/intel64/icpc
      Fortran77 compiler: ifort
  Fortran77 compiler abs: /sw/pkg/intel/11.1.064//bin/intel64/ifort
      Fortran90 compiler: ifort
  Fortran90 compiler abs: /sw/pkg/intel/11.1.064//bin/intel64/ifort


stacksize:

 [eishfh at milleotto s110]$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 73728
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) unlimited
cpu time               (seconds, -t) unlimited
max user processes              (-u) 73728
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited





> 
> In essence, you have a mess and you are going to have to talk to your
> sysadmin (hikmpn) to get things sorted out. Issues:
> 
> a) You have openmpi-1.3.3. This works for small problems, fails for
> large ones. This needs to be updated to 1.4.0 or 1.4.1 (the older
> versions of openmpi have bugs).
> b) The openmpi was compiled with ifort 10.1 but you are using 11.1.064
> for Wien2k -- could lead to problems.
> c) The openmpi was compiled with gcc and ifort 10.1, not icc and ifort
> which could lead to problems.
> d) The fftw library you are using was compiled with gcc not icc, this
> could lead to problems.
> e) Some of the shared libraries are in your LD_LIBRARY_PATH, you will
> need to add -x LD_LIBRARY_PATH to how mpirun is called (in
> $WIENROOT/parallel_options) -- look at man mpirun.
> f) I still don't know what the stack limits are on your machine --
> this can lead to severe problems in lapw0_mpi
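For reference, the quoted points can be checked with standard tools -- only a
rough sketch, where lapw1c_mpi is just an example binary name:

ompi_info | grep -iE "open mpi:|compiler"        # (a)-(c): openmpi version and build compilers
ldd $WIENROOT/lapw1c_mpi | grep -iE "mpi|fftw"   # (d): which shared mpi/fftw libraries get resolved
echo $LD_LIBRARY_PATH                            # (e): what mpirun would need to export
ulimit -s                                        # (f): the current stack limit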

  




[Wien] Fwd: MPI segmentation fault

2010-01-30 Thread Laurence Marks
OK, it looks like you have cleaned up many of the issues. The SIGSEGV is
(I think) now one of two things:

a) memory limitations (how much do you have, 8 GB or 16-24 GB?)

While the process is running, do a "top" and see how much memory is
allocated and whether this is essentially all of it. If you have ganglia
available, you can use it to see this readily. Similar information is also
available via cat /proc/meminfo or the nmon utility from IBM
(google it, it is easy to compile). I suspect that you are simply
running out of memory by running too many tasks at the same time on one
machine -- you would need to use more machines so the memory usage on
any one of them is smaller.
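For example (mn117 is the node name from your dayfile; adjust as needed):

ssh mn117 'top -b -n 1 | head -20'                             # snapshot of the largest processes
ssh mn117 'grep -E "MemTotal|MemFree|SwapFree" /proc/meminfo'  # total and free memory / swap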

b) stacksize issue (less likely)

This is an issue with openmpi, see
http://www.open-mpi.org/community/lists/users/2008/09/6491.php . In a
nutshell, the stacksize limit is not an environment variable, so there is
no direct way to set it correctly with openmpi except to use a wrapper.
I have a patch for this, but let's try something simpler first (which I
think is OK, but I might have it slightly wrong).

* Create a file called wrap.sh in your search path (e.g. ~/bin or even
$WIENROOT) and put in it
#!/bin/bash
source $HOME/.bashrc
ulimit -s unlimited
#write a line so we know we got here
echo "Hello Fhorkul"
$1 $2 $3 $4

* Do a "chmod a+x wrap.sh" (appropriate location of course)

* Edit parallel_options in $WIENROOT so it reads
setenv WIEN_MPIRUN "mpirun -x LD_LIBRARY_PATH -x PATH -np _NP_
-machinefile _HOSTS_ wrap.sh _EXEC_"

This does the same as what is described in the email linked above: it forces
the Wien2k mpi commands to be executed from within a bash shell so that the
parameters are set up correctly. If this works, I can provide details for a
more general patch.


-- 
Laurence Marks
Department of Materials Science and Engineering
MSE Rm 2036 Cook Hall
2220 N Campus Drive
Northwestern University
Evanston, IL 60208, USA
Tel: (847) 491-3996 Fax: (847) 491-7820
email: L-marks at northwestern dot edu
Web: www.numis.northwestern.edu
Chair, Commission on Electron Crystallography of IUCR
www.numis.northwestern.edu/
Electron crystallography is the branch of science that uses electron
scattering and imaging to study the structure of matter.

