[Wien] [Wien2k Users] mpi parallel and k-point parallel
Dear Wien2k users,

If both MPI parallelization and k-point parallelization have been configured in Wien2k and we want to run only the MPI-parallel version, how do we select this option from w2web?

Moreover, I am still confused about how job queuing works in Wien2k. As root administrator I know my system uses SGE, but how does Wien2k itself know that its jobs run through SGE?

Suddhasattwa
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://zeus.theochem.tuwien.ac.at/pipermail/wien/attachments/20100129/da087495/attachment.htm
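[Editor's sketch] As far as I know, Wien2k does not detect SGE itself: the queuing system only provides the node list, and the choice between k-point and MPI parallelism is made entirely by the .machines file that the job script writes. A minimal SGE submission script for an MPI-only run might look like the following. Everything here is an assumption to be checked against your installation: the parallel environment name (`mpi`), the `.machines` syntax `1:host:n` (one MPI job over n cores on `host`), and the `$PE_HOSTFILE` format (`hostname slots ...` per line). A demo fallback is included so the sketch runs outside SGE too.

```shell
#!/bin/sh
# Hypothetical SGE job script for an MPI-only Wien2k run (verify all
# names and .machines syntax against your own setup and the UG).

#$ -N wien2k_mpi
#$ -pe mpi 8
#$ -cwd

# Fall back to a demo hostfile so the sketch also runs outside SGE.
if [ -z "$PE_HOSTFILE" ] || [ ! -f "$PE_HOSTFILE" ]; then
    PE_HOSTFILE=pe_hostfile.demo
    printf 'node01 8\n' > "$PE_HOSTFILE"
fi

rm -f .machines
lapw0_line="lapw0:"
while read -r host slots _rest; do
    echo "1:$host:$slots" >> .machines    # one MPI group per node (lapw1/lapw2)
    lapw0_line="$lapw0_line $host:$slots"
done < "$PE_HOSTFILE"
echo "$lapw0_line" >> .machines           # MPI-parallel lapw0
echo "granularity:1" >> .machines

cat .machines
# run_lapw -p -ec 0.0001                  # then start the parallel SCF cycle
```

With only `1:host:n` lines (and no list of single-core entries), no k-point parallelization is requested, so the run is MPI-only regardless of what is clicked in w2web.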
[Wien] Problems of HCP Tb
(... doesn't converge, so I use -orbc)

by GGA+U (-orbc):

U (Ry)   E (Ry/cell)      M_spin (uB/atom)
0.2      -46876.263915    6.20827
0.3      -46876.270606    6.25566
0.4      -46876.260620    6.26501
0.5      -46876.232366    6.26697
0.6      -46876.243298    6.27250
0.7      -46876.216162    6.27217

by GGA+U+SO (based on the result of GGA+U, after rm *.broy*); command line: runsp -ec 0.0001 -cc 0.0001 -orb -so -i 1000

U (Ry)   E (Ry/cell)      M_spin (uB/atom)   M_orbit (uB/atom)
0.0      -46876.609319    5.81268            1.25824
0.1      -46876.335016    6.04722            0.0      (GGA+U+SO doesn't converge, I don't know why)
0.2      -46876.507404    6.19715            1.40041
0.3      -46876.498692    6.23715            0.01548
0.4      -46876.487014    6.25003            0.00605
0.5      -46876.470927    6.25129            1.33299
0.6      -46876.467519    6.26165            0.00113
0.7      -46876.452793    6.25998            1.36656

Two questions:

(1) From the data above, the largest magnetic moment of the Tb atom is about 7.6 uB, which is still smaller than the experimental magnetic moment (10 uB). How can I get the right magnetic moment of Tb?

(2) For GGA+U, does it need case.indm or case.indmc? runsp -orb or runsp -orbc? For GGA+U+SO, there are two Tb atoms at inequivalent positions; how should the case.inso file be changed?

case.inso:
WFFIL
4 1 0        llmax, ipr, kpot
-15. 5.      emin, emax (output energy window)
0. 0. 1.     direction of magnetization (lattice vectors)
1            number of atoms for which RLO is added
1 -1.7 0.01  atom number, e-lo, de (case.in1), repeat NX times
             (if I want to add an RLO to the second Tb atom, what should I do?)
0 0 0 0 0    number of atoms for which SO is switched off; atoms

I know heavy rare-earth elements are hard to deal with, and I really need someone who can give me some enlightenment. Any suggestion will be greatly appreciated. I am looking forward to your reply. Cheers.

Yours sincerely,
Hui Wang
=
Magnetism and Magnetic Materials Division
Shenyang Materials Science National Laboratory
Institute of Metal Research
Chinese Academy of Sciences
72 Wenhua Road, Shenyang 110016, P. R. China
Tel: +86-24-83978845
PhD: Wanghui
Email: hwang at imr.ac.cn
=
[Wien] [Wien2k Users] Spin polarized and spin orbit calculations for f-systems
Dear Wien2k users,

I did three types of calculations for gamma-uranium:
1. SCF without spin-orbit and without spin polarization
2. SCF with spin-orbit and without spin polarization
3. SCF with spin-orbit and with spin polarization

The ENE for these SCF cycles was -56165.956 Ry, -56166.379 Ry and -56166.372 Ry, respectively. The separation energy between core and valence was chosen as -6.0 Ry, Emax in case.in1 was changed to 2.50 (is that right?), and RKmax was 8.00.

I seek answers to the following: What would be the effect of Emax on ENE? I guess 2.50 is a good choice. The ENE values for all three cycles are more or less similar; then why do we emphasize spin-orbit coupling in f-systems?

Suddhasattwa
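[Editor's sketch] It may help to put the quoted ENE values on an eV scale before calling them "more or less similar"; the numbers below are copied from the message above, with 1 Ry taken as 13.6057 eV:

```python
# Energy differences between the three quoted SCF totals (values from
# the email above), converted from Ry to eV.
RY_TO_EV = 13.605693  # 1 Ry in eV

e_plain = -56165.956   # no SO, no spin polarization (Ry)
e_so    = -56166.379   # with SO (Ry)
e_so_sp = -56166.372   # with SO and spin polarization (Ry)

d_so = (e_so - e_plain) * RY_TO_EV    # change from adding spin-orbit
d_sp = (e_so_sp - e_so) * RY_TO_EV    # further change from spin polarization

print(f"spin-orbit changes ENE by {d_so:+.2f} eV")
print(f"spin polarization changes it by a further {d_sp:+.3f} eV")
```

On this scale the spin-orbit contribution is several eV, far from negligible on a chemical scale, which is one reason it is emphasized for f-systems even when the total energies look similar at the Ry level.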
[Wien] Problems of HCP Tb
Hi Wang,

I have a rather small question on the files that you have shown. 0.30 as a global energy parameter seems too low for f-systems; I am really surprised that your SCF cycle has not shown any QTL-type error. Can you please also tell me why, in the case.in1 file, you changed emax from the default 2.0 to 5.0?

Suddhasattwa
[Wien] Problems of HCP Tb
> I am running wien version wien2k 08_03

Consider upgrading -- this version is almost 2 years old, and you are missing quite some features and fixes.

> I have been trying to simulate hcp Tb for weeks. No matter what I did (including changing the parameters and using different methods such as GGA+U or SO), I still couldn't get the right magnetic moment of the Tb atom, even though the convergence criterion was met. The experimental magnetic moment of Tb is around 10 uB, but the largest magnetic moment I got with wien2k_08.3 was around 7.6 uB (GGA+U+SO).

Let us first estimate what to expect (in a free atom). The free-atom configuration of Tb is 6H_15/2 (Xe 4f^9 6s^2): 7 f-up and 2 f-dn electrons. That gives a spin moment of 7-2 = 5 mu_B. The fully occupied up-shell does not contribute to the orbital moment. The two dn-electrons occupy the m=3 and m=2 orbitals (+3, not -3, due to Hund's third rule for the second half of the lanthanide series), which adds 3+2 mu_B. That gives a total (f-)moment of 5+3+2 = 10 mu_B per Tb atom, which is the experimental moment you quote (small modifications are possible because the solid-state configuration is slightly different from the free-atom configuration).

LDA/GGA will not work, because it will put the partly occupied f-dn shell at E_F, in the wrong way. Hence, you are right to go to LDA+U. But with LDA+U you can stabilize multiple configurations of the f-shell, and you might have to try different dmat occupations to find the 'right' one.

The main reason why your moment is (apparently) too small is probably this one:

> case.indmc:
> -15.      Emin cutoff energy
> 1         number of atoms for which density matrix is calculated
> 1 1 3     index of 1st atom, number of L's, L1
> 0 0       r-index, (l,s)index

If you want to calculate the orbital moment explicitly, you have to change the last line to '1 3' *after* convergence, and run 'x lapwdm -c -so -up' to find the orbital moment in case.scfdmup (although for LDA+U there is also a :ORB line in case.scf).
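[Editor's sketch] The free-atom electron counting above can be written out explicitly. This merely reproduces the counting argument for 4f^9 (fill the 7 majority orbitals first, then place the minority electrons in the highest-m orbitals, per Hund's third rule for the second half of the series); it is not a WIEN2k calculation:

```python
# Hund's-rules electron counting for a free Tb atom (4f^9), as in the text.
n_f = 9
m_values = [3, 2, 1, 0, -1, -2, -3]   # f-orbital magnetic quantum numbers

n_up = min(n_f, 7)          # filled majority shell: 7 electrons
n_dn = n_f - n_up           # 2 minority electrons

spin_moment = n_up - n_dn   # in mu_B (g_s ~ 2, S = (n_up - n_dn)/2)

# The full up-shell contributes zero orbital moment; the minority
# electrons occupy the largest-m orbitals first: m = 3 and m = 2.
orbital_moment = sum(m_values[:n_dn])

total_moment = spin_moment + orbital_moment
print(spin_moment, orbital_moment, total_moment)  # 5, 5, 10 mu_B
```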
If you didn't do that, then for sure your total moment is too small.

Another reason could be this one:

> case.inorb (here, the value of U varies from 0.0 to 0.7 Ry):
> 1 1 0       nmod, natorb, ipr
> PRATT 1.0   BROYD/PRATT, mixing
> 1 1 3       iatom, nlorb, lorb
> 1           nsic (0..AFM, 1..SIC, 2..HFM)
> 0.10 0.00   U J (Ry)   Note: we recommend to use U_eff = U-J and J=0

For lanthanides, U = 0.70 Ry is a typical value. 0.10 is way too small and will give you almost LDA-like results, with too small moments (for GGA+U, you might rather need 0.40 or so).

> other parameters: RMT = 2.5, k-points: 1000, IBZ: 76

1000 k-points is pretty small; did you test convergence with respect to the k-mesh? (8000-15000 is more plausible.)

> (2) For GGA+U, does it need case.indm or case.indmc? runsp -orb or runsp -orbc?

Use GGA+U preferably in connection with spin-orbit coupling. Then you need case.indmc.

> For GGA+U+SO, there are two Tb atoms which are at inequivalent positions; how should the case.inso file be changed?

No, there should still be 2 equivalent Tb atoms.

> I know heavy rare-earth elements are hard to deal with, and I really need someone who can give me some enlightenment. Any suggestion will be greatly appreciated.

You can find more details in http://dx.doi.org/10.1103/PhysRevB.74.014409 and http://dx.doi.org/10.1103/PhysRevB.77.155101

Stefaan
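[Editor's sketch] Since U values are more commonly quoted in eV in the literature, it may help to convert the Ry values discussed above (simple arithmetic, 1 Ry ≈ 13.6057 eV):

```python
# Convert the U_eff values discussed above from Ry to eV.
RY_TO_EV = 13.605693  # 1 Ry in eV

for u_ry in (0.10, 0.40, 0.70):
    print(f"U_eff = {u_ry:.2f} Ry = {u_ry * RY_TO_EV:.2f} eV")
```

So 0.70 Ry is roughly 9.5 eV, on the scale usually quoted for lanthanide 4f shells, while 0.10 Ry is only about 1.4 eV, which is why it gives almost LDA-like results.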
[Wien] MPI segmentation fault
A brief explanation of why I was requesting all this information:

1) ompi_info will tell what the openmpi version is, and whether it was correctly compiled with icc/ifort or not. I know that versions 1.3.2 and 1.3.3 have problems.

2) The values of LD_LIBRARY_PATH and PATH, and an ldd on lapw0_mpi, will say whether your sysadmin has set up the shared libraries in ld.so.conf so that they are global or not. You may need to add -x LD_LIBRARY_PATH -x PATH to MPIRUN, as by default openmpi does not export environment variables.

3) What the stacksize parameter shows without it being set from within bash/csh/tcsh will say whether your sysadmin has set the soft limit high or not. Unfortunately, the way openmpi works, this is not passed to other processes from the parent, and some fixes are needed if the soft limit is low.

4) Information about which process is crashing is basic: is it a bug in lapw2_mpi (posted a few weeks ago), or elsewhere?

On Thu, Jan 28, 2010 at 8:04 AM, Laurence Marks L-marks at northwestern.edu wrote:

Also, in 3), while still on the node do

which lapw0_mpi
which mpirun

and include this information.

On Thu, Jan 28, 2010 at 8:01 AM, Laurence Marks L-marks at northwestern.edu wrote:

We need a bit more information than this.

1) Please do ompi_info and paste the output at the end of your response to this email.

2) Also paste the output of echo $LD_LIBRARY_PATH

3) If you have a ulimit -s unlimited in your .bashrc, please edit it out (temporarily), then ssh into one of the child nodes. (If you are using csh/tcsh, edit out limit stacksize unlimited.) Then do ulimit -s (or, in csh/tcsh, limit stacksize) and include the result. If you are using qsub you may need to launch an interactive job. Also do echo $LD_LIBRARY_PATH while on the node, pasting the result, and ldd $WIENROOT/lapw0_mpi. (If $LD_LIBRARY_PATH is not blank, repeat the ldd $WIENROOT/lapw0_mpi after doing LD_LIBRARY_PATH= , or setenv LD_LIBRARY_PATH .)
4) Tell us which program gave a SIGSEGV: lapw0_mpi, lapw1_mpi or lapw2_mpi?

2010/1/28 Md. Fhokrul Islam fislam at hotmail.com:

Hi Marks,

Thank you very much for your reply. I am using the Wien2k_09.2 version and I have used the following OPTIONS file for the MPI compilation. I would like to mention that MPI works fine when I tested it with an 8-atom system.

current:FOPT:-FR -mp1 -w -prec_div -pc80 -pad -align -DINTEL_VML -traceback
current:FPOPT:$(FOPT)
current:LDFLAGS:-L/sw/pkg/mkl/10.0/lib/em64t/
current:DPARALLEL:'-DParallel'
current:R_LIBS:-lmkl_intel_lp64 -lmkl_sequential -lmkl_core -liomp5 -lsvml -lpthread
current:RP_LIBS:-L/sw/pkg/mkl/10.0/lib/em64t/ -lmkl_scalapack_lp64 -lmkl_blacs_openmpi_lp64 -L/home/eishfh/fftw-2.1.5-gcc/lib -lfftw_mpi -lfftw -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -liomp5 -lsvml -lpthread
current:MPIRUN:mpirun -np _NP_ -machinefile _HOSTS_ _EXEC_

Thanks,
Fhokrul

Date: Thu, 28 Jan 2010 07:16:37 -0600
From: L-marks at northwestern.edu
To: wien at zeus.theochem.tuwien.ac.at
Subject: Re: [Wien] MPI segmentation fault

What version of mpi are you using -- please be specific, including the release.

2010/1/28 Md. Fhokrul Islam fislam at hotmail.com:

Dear Wien2k users,

I am trying to do a surface-supercell calculation with 96 atoms (1 k-point) using MPI. I have used 8 processors for this job, but it crashes in the 1st cycle with the error message:

mpirun noticed that process rank 7 with PID 6532 on node mn003.mpi exited on signal 11 (Segmentation fault).

Since many of you have experience in running large systems with MPI, I am wondering if anyone can suggest how to fix this problem.

Thanks,
Fhokrul
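[Editor's sketch] Given the OPTIONS file quoted above, the change suggested earlier in the thread (exporting the environment through openmpi's -x flag; the -x flag is standard openmpi, the rest of the line is unchanged) would amount to:

```
current:MPIRUN:mpirun -x LD_LIBRARY_PATH -x PATH -np _NP_ -machinefile _HOSTS_ _EXEC_
```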
___
Wien mailing list
Wien at zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien

--
Laurence Marks
Department of Materials Science and Engineering
MSE Rm 2036 Cook Hall
2220 N Campus Drive
Northwestern University
Evanston, IL 60208, USA
Tel: (847) 491-3996 Fax: (847) 491-7820
email: L-marks at northwestern dot edu
Web: www.numis.northwestern.edu
Chair, Commission on Electron Crystallography of IUCR
www.numis.northwestern.edu/
Electron crystallography is the branch of science that uses electron scattering and imaging to study the structure of matter.
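[Editor's sketch] The diagnostics requested in this thread can be gathered in one pass. The following script only bundles the commands already named above (ompi_info, echo $LD_LIBRARY_PATH, ulimit -s, ldd $WIENROOT/lapw0_mpi); each probe is guarded so the report completes even on a machine where a tool is missing. Run it on a child node, with any ulimit/limit lines temporarily removed from your shell startup files, and post mpi_diag.txt:

```shell
#!/bin/sh
# Gather the MPI diagnostics requested above into a report file.

{
    echo "== openmpi version =="
    if command -v ompi_info >/dev/null 2>&1; then
        ompi_info | head -n 3
    else
        echo "ompi_info not found"
    fi

    echo "== environment =="
    echo "LD_LIBRARY_PATH=$LD_LIBRARY_PATH"
    echo "PATH=$PATH"

    echo "== stack soft limit (should be high/unlimited) =="
    ulimit -s

    echo "== shared libraries of lapw0_mpi =="
    if [ -n "$WIENROOT" ] && [ -x "$WIENROOT/lapw0_mpi" ]; then
        ldd "$WIENROOT/lapw0_mpi"
    else
        echo "WIENROOT not set or lapw0_mpi not found"
    fi
} > mpi_diag.txt 2>&1

cat mpi_diag.txt
```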
[Wien] Fwd: MPI segmentation fault