Re: [Wien] Problem with wien2k 13.1 parallel for Slurm+intel mpi

2013-11-26 Thread Natalia Pavlenko

Dear Prof. Blaha,

thanks a lot for your reply. I have corrected the .machines file
(the node with 6 cores is automatically chosen):
-
lapw0: alcc92:6
1:alcc92:6
granularity:1
extrafine:1
-
but nevertheless got the following output in case.dayfile:
---case.dayfile

Calculating case in 
/alcc/gpfs1/home/exp6/pavlenna/work/laosto/ovac/case

on alcc92 with PID 9804
using WIEN2k_13.1 (Release 17/6/2013) in 
/alcc/gpfs1/home/exp6/pavlenna/wien



start   (Tue Nov 26 13:41:14 CET 2013) with lapw0 (50/99 to go)

cycle 1 (Tue Nov 26 13:41:14 CET 2013)  (50/99 to go)

  lapw0 -p(13:41:15) starting parallel lapw0 at Tue Nov 26 
13:41:15 CET 2013

 .machine0 : 6 processors
0.024u 0.024s 0:12.00 0.3%  0+0k 1632+8io 6pf+0w
  lapw1  -up -p   (13:41:27) starting parallel lapw1 at Tue Nov 26 
13:41:27 CET 2013

-  starting parallel LAPW1 jobs at Tue Nov 26 13:41:27 CET 2013
running LAPW1 in parallel mode (using .machines)
1 number_of_parallel_jobs
 alcc92 alcc92 alcc92 alcc92 alcc92 alcc92(6) 0.016u 0.004s 0:00.75 
1.3%0+0k 0+8io 0pf+0w

   Summary of lapw1para:
   alcc92k=0 user=0  wallclock=0
0.068u 0.020s 0:02.19 3.6%  0+0k 0+104io 0pf+0w
  lapw1  -dn -p   (13:41:29) starting parallel lapw1 at Tue Nov 26 
13:41:29 CET 2013

-  starting parallel LAPW1 jobs at Tue Nov 26 13:41:29 CET 2013
running LAPW1 in parallel mode (using .machines.help)
1 number_of_parallel_jobs
 alcc92 alcc92 alcc92 alcc92 alcc92 alcc92(6) 0.020u 0.004s 0:00.42 
4.7%0+0k 0+8io 0pf+0w

   Summary of lapw1para:
   alcc92k=0 user=0  wallclock=0
0.072u 0.028s 0:02.11 4.2%  0+0k 0+104io 0pf+0w

  lapw2 -up -p(13:41:31) running LAPW2 in parallel mode

**  LAPW2 crashed!
0.248u 0.012s 0:00.73 34.2% 0+0k 8+16io 0pf+0w
error: command   /alcc/gpfs1/home/exp6/pavlenna/wien/lapw2para -up 
uplapw2.def   failed



  stop error

-
In the uplapw2.err I have the following error messages:

Error in LAPW2
 'LAPW2' - can't open unit: 30
 'LAPW2' -filename: case.energyup_1
**  testerror: Error in Parallel LAPW2
-
and the following error output messages:

--
starting on alcc92
 LAPW0 END
 LAPW0 END
 LAPW0 END
 LAPW0 END
 LAPW0 END
 LAPW0 END
Fatal error in PMPI_Comm_size: Invalid communicator, error stack:
PMPI_Comm_size(123): MPI_Comm_size(comm=0x5b, size=0x7e356c) failed
PMPI_Comm_size(76).: Invalid communicator
Fatal error in PMPI_Comm_size: Invalid communicator, error stack:
PMPI_Comm_size(123): MPI_Comm_size(comm=0x5b, size=0x7e356c) failed
PMPI_Comm_size(76).: Invalid communicator
Fatal error in PMPI_Comm_size: Invalid communicator, error stack:
PMPI_Comm_size(123): MPI_Comm_size(comm=0x5b, size=0x7e356c) failed
PMPI_Comm_size(76).: Invalid communicator
Fatal error in PMPI_Comm_size: Invalid communicator, error stack:
PMPI_Comm_size(123): MPI_Comm_size(comm=0x5b, size=0x7e356c) failed
PMPI_Comm_size(76).: Invalid communicator
Fatal error in PMPI_Comm_size: Invalid communicator, error stack:
PMPI_Comm_size(123): MPI_Comm_size(comm=0x5b, size=0x7e356c) failed
PMPI_Comm_size(76).: Invalid communicator
Fatal error in PMPI_Comm_size: Invalid communicator, error stack:
PMPI_Comm_size(123): MPI_Comm_size(comm=0x5b, size=0x7e356c) failed
PMPI_Comm_size(76).: Invalid communicator
case.scf1up_1: No such file or directory.
grep: No match.
Fatal error in PMPI_Comm_size: Invalid communicator, error stack:
PMPI_Comm_size(123): MPI_Comm_size(comm=0x5b, size=0x7e356c) failed
PMPI_Comm_size(76).: Invalid communicator
Fatal error in PMPI_Comm_size: Invalid communicator, error stack:
PMPI_Comm_size(123): MPI_Comm_size(comm=0x5b, size=0x7e356c) failed
PMPI_Comm_size(76).: Invalid communicator
Fatal error in PMPI_Comm_size: Invalid communicator, error stack:
PMPI_Comm_size(123): MPI_Comm_size(comm=0x5b, size=0x7e356c) failed
PMPI_Comm_size(76).: Invalid communicator
Fatal error in PMPI_Comm_size: Invalid communicator, error stack:
PMPI_Comm_size(123): MPI_Comm_size(comm=0x5b, size=0x7e356c) failed
PMPI_Comm_size(76).: Invalid communicator
Fatal error in PMPI_Comm_size: Invalid communicator, error stack:
PMPI_Comm_size(123): MPI_Comm_size(comm=0x5b, size=0x7e356c) failed
PMPI_Comm_size(76).: Invalid communicator
Fatal error in PMPI_Comm_size: Invalid communicator, error stack:
PMPI_Comm_size(123): MPI_Comm_size(comm=0x5b, size=0x7e356c) failed
PMPI_Comm_size(76).: Invalid communicator
case.scf1dn_1: No such file or directory.
grep: No match.
FERMI - Error
cp: cannot stat `.in.tmp': No such file or directory


  stop error

---
Please let me know whether something might be wrong in the MPI configuration.
Intel MPI is installed on the cluster.


Best regards, N.Pavlenko


Am 2013-11-23 13:07, schrieb Peter Blaha:

You completely misunderstand how parallelization in wien2k works.
Please read the UG carefully (parallelization), also notice the

Re: [Wien] Problem with wien2k 13.1 parallel for Slurm+intel mpi

2013-11-26 Thread Laurence Marks
The 'PMPI_Comm_size: Invalid communicator' error stack is almost
always due to issues with how the MPI version was compiled and linked.
Common issues include:
1) Not using the ifort/icc MPI compilers.
2) Not using the correct linking options for the flavor of MPI that
you are using.
3) Problems with the InfiniBand or similar drivers on the system
(rare, but not unknown).

For 1), please check that the mpif77 (or mpif90) you used is the Intel
one -- you may need to source the scripts in the Intel bin directory
to set these up correctly.

For 2), the Intel Math Kernel Library at
http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor
is useful.
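Point 1) can be checked from the command line. A minimal sketch (Python; the wrapper names are the common Intel MPI/MPICH ones and may differ on your cluster) that finds each wrapper on PATH and asks it, via the MPICH-style "-show" flag, which backend compiler it actually invokes:

```python
# Minimal sketch: locate MPI compiler wrappers on PATH and ask each which
# backend compiler it invokes.  "-show" is the MPICH/Intel MPI convention
# (Open MPI uses "--showme" instead) -- adjust to your installation.
import shutil
import subprocess

def wrapper_backend(wrapper):
    """Return the command line a wrapper would run, or None if not found."""
    path = shutil.which(wrapper)
    if path is None:
        return None
    try:
        out = subprocess.run([path, "-show"], capture_output=True, text=True)
    except OSError:
        return None
    return out.stdout.strip() or None

if __name__ == "__main__":
    for w in ("mpiifort", "mpif90", "mpicc"):
        backend = wrapper_backend(w)
        print(f"{w}: {backend or 'not found'}")
```

If mpif90 does not report an Intel compiler (ifort), source the Intel environment scripts first so the Intel wrappers come first in PATH.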

N.B., I suspect that lapw0_mpi did not run, just like lapw1_mpi.

On Tue, Nov 26, 2013 at 7:03 AM, Natalia Pavlenko
natalia.pavle...@physik.uni-augsburg.de wrote:
 Dear Prof. Blaha,

 [...]
[Wien] Exchange-correlation energy

2013-11-26 Thread Martin Gmitra
Dear Wien2k users,

Some time ago a topic about the xc energy was discussed:
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/msg01719.html.
Since then WIEN2k has the KVC flag, which allows printing the :EXC
contribution to the total energy. Could you please advise me how:

1. to get separate exchange and correlation contributions?

2. to get the separate contributions orbitally resolved for each atom in the cell?

3. to get the xc contributions in a particular region of the cell (say a
cylinder, including part of the interstitial and muffin-tins)?

I would like to do it for PBE (option 13).
Thanks in advance for your help.
Martin Gmitra
Uni Regensburg
___
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at:  
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html


Re: [Wien] Exchange-correlation energy

2013-11-26 Thread tran

Hi,

For 1, there is IGRAD.EQ.33 in vxclm2.f, which is exchange-only PBE
(the correlation is then c = xc - x). For such a comparison the correct
procedure is to plug the PBE orbitals (obtained from a usual PBE
calculation) into the exchange-only PBE:
1) replace 13 by 33 in case.in0
2) x lapw0
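Step 1) is a one-token edit of case.in0. A small sketch of that edit (the sample line below is only illustrative; the exact layout of your case.in0 may differ):

```python
# Sketch: swap the XC option on the first line of case.in0 from 13 (PBE)
# to 33 (exchange-only PBE); the correlation then follows as c = xc - x.
# The sample content is hypothetical -- check your own case.in0.
import re

def switch_xc(in0_text, old="13", new="33"):
    """Replace the first standalone occurrence of `old` on line 1."""
    lines = in0_text.splitlines()
    lines[0] = re.sub(rf"\b{re.escape(old)}\b", new, lines[0], count=1)
    return "\n".join(lines)

sample = "TOT  13   (5...CA-LSDA, 13...GGA-PBE)\nNR2V  IFFT"
print(switch_xc(sample).splitlines()[0])
```

The word-boundary match avoids touching the "13" that appears inside the comment on the same line.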

I'm not really sure I understand point 2.

For 3, the contributions from the spheres and the interstitial are
in case.output0. This is the 3rd column when you grep for TOTAL=
(the values are in Hartree). It is not really possible to easily get
the contribution from an arbitrary region of space. A solution
would be to use lapw5 to write the values of exc on a regular 3D grid
to a file and then integrate yourself. But the integration
close to the nuclei will then be very inaccurate.
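The "integrate yourself" step could look like the sketch below, where a synthetic Gaussian stands in for the exc values lapw5 would write out; the grid size and the cylindrical region are assumptions for illustration, not taken from any real case:

```python
# Sketch: integrate a quantity sampled on a regular 3-D grid (a synthetic
# Gaussian standing in for exc(r) from lapw5), restricted to a cylinder
# by masking.  Grid spacing and region are illustrative assumptions.
import numpy as np

n = 61
x = np.linspace(-3.0, 3.0, n)
h = x[1] - x[0]
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
f = np.exp(-(X**2 + Y**2 + Z**2))        # stand-in for exc(r)

full = f.sum() * h**3                    # simple Riemann sum over the cell
cyl = (f * ((X**2 + Y**2) <= 1.0)).sum() * h**3   # cylinder of radius 1

print(full)   # close to the analytic value pi**1.5
```

As noted above, such a uniform-grid sum is exactly the scheme that becomes inaccurate near the nuclei, where exc varies rapidly.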

One thing: it is dangerous and not recommended to compare two values
of Exc obtained from two different electron densities.

F. Tran


On Tue, 26 Nov 2013, Martin Gmitra wrote:


 Dear Wien2k users,
 [...]




Re: [Wien] Slab dielectric function vs. bulk dielectric function

2013-11-26 Thread phlhj phlhj
Dear Prof. Blaha,

Thank you for your suggestion. I tried the hexagonal structure of bulk Al
with 39 MLs and a 61x61x1 k-mesh. I can indeed reproduce the plasma frequency
and dielectric function compared with the results from the one-Al-atom
unit-cell calculations.

I plotted the calculated slab plasma frequency as a function of Al(111)
film thickness and find that the value is approaching convergence, even
though the converged value is 2 eV smaller than the plasma frequency from
the bulk-geometry calculation.

For the thickness dependence of the dielectric function, I also get similarly
converged results. The sizable difference lies in the low-energy
range (1.2 eV); it converges to the bulk value at higher energies, say above
2.5 eV.

I am trying a very dense k-mesh calculation (e.g. 99x99x1). However,
such a dense mesh does not sound practical.

Thank you again,
Wenmei


2013/11/25 Peter Blaha pbl...@theochem.tuwien.ac.at

 Of course, in principle slabs should converge to bulk epsilon. But:

 In your slabs with a k-mesh of 69x69x1 you are effectively using a k-mesh
 of 69x69x39 instead of a 69x69x69 mesh.
 In addition, in the z direction you use root sampling instead of the
 tetrahedron
 method. It is like integrating with the rectangular rule instead of the
 trapezoidal rule.
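This analogy can be made concrete with a toy one-dimensional band: root sampling counts k-points below E_F (a rectangular rule applied to a step-function integrand), while a linear-interpolation scheme, the 1-D analogue of the tetrahedron method, locates the Fermi crossing inside each interval. All numbers below are illustrative:

```python
# Toy 1-D "band" occupation integral: N = integral dk theta(E_F - eps(k)),
# k in [-1, 1], with eps(k) = k^2; the exact answer is 2*sqrt(E_F).
import math

EF = 0.5
EXACT = 2.0 * math.sqrt(EF)

def eps(k):          # toy band dispersion
    return k * k

def root_sampling(n):
    """Rectangular rule: weight h for every sampled k-point below E_F."""
    h = 2.0 / n
    return h * sum(1.0 for i in range(n) if eps(-1.0 + (i + 0.5) * h) < EF)

def linear_interp(n):
    """Linearize eps(k) per interval and locate the Fermi crossing."""
    h = 2.0 / n
    total = 0.0
    for i in range(n):
        f0 = EF - eps(-1.0 + i * h)
        f1 = EF - eps(-1.0 + (i + 1) * h)
        if f0 > 0 and f1 > 0:
            total += h                                # fully occupied
        elif f0 > 0 or f1 > 0:
            total += h * max(f0, f1) / abs(f0 - f1)   # partially occupied
    return total

for n in (10, 40):
    print(n, abs(root_sampling(n) - EXACT), abs(linear_interp(n) - EXACT))
```

The root-sampling error falls off only linearly with the mesh density, while the interpolating scheme converges much faster, which is the point of the rectangular-vs-trapezoidal comparison.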

 Try fcc Al with a small tetragonal distortion during setup, so that you
 get only 16 sym.ops.
 Then change c/a back to 1, use a k-mesh of 69x69x39, and compare the
 dielectric function
 to the 69x69x1 mesh. (This mimics the k-mesh problem, but there is still
 the
 integration method!)

 You probably need even more layers 

 Am 26.11.2013 04:51, schrieb phlhj phlhj:

 Dear Prof. Blaha,

 Thanks so much for your suggestion.

 I tried a bulk Al supercell with 39 ML and no vacuum, with the same k-mesh
 as used in the 39 ML thin-film supercell. In fact I get the same results for
 the plasma frequency and
 dielectric function as those from the Al unit cell with only one Al atom. I
 think the 61x61x1 k-mesh I used in my calculation is dense enough to
 give a precise result.

 The main difference in the dielectric function between the thin-film
 geometry and the bulk geometry is in the low-energy range (1.2 eV). I
 looked at some papers studying the
 anisotropic surface reflectance of semiconductor surfaces, say GaAs(110).
 Even when 15 atomic layers are used in the LDA calculation, some
 difference around the band-gap
 regime in the dielectric function is still found between the surface
 calculation and the bulk calculation. I think the difference I encountered
 in the dielectric function between
 the Al(111) slab and bulk Al might be similar to the semiconductor
 case. However, from the physical point of view, it is hard to understand
 why there is still an
 appreciable difference even though a very thick film is used.
 Physically, the dielectric function of a very thick slab should converge
 to that of the bulk counterpart.

 Thank you so much for sharing any understanding about this,

 Wenmei


 2013/11/24 Peter Blaha pbl...@theochem.tuwien.ac.at mailto:
 pbl...@theochem.tuwien.ac.at


 As you probably know, the dielectric function of Al converges VERY
 slowly
 with respect to the k-mesh.

 When you do slab calculations, you include the surface effect, but
 you also replace
 the periodicity in k_z (and thus the k-mesh in k_z) by a backfolding
 according to
 your slab. Even a 39 ML slab probably does not correspond to a very large
 k_z mesh, and
 in addition the integration over k_z is limited to root sampling
 instead of the
 tetrahedron method. I could even imagine large numerical problems in
 this 2-D integration
 using a 3-D algorithm in joint, due to the large degeneracy of the
 tetrahedra.

 At least you could differentiate between integration problems and
 surface effects
 by using a 39-layer bulk structure (i.e. remove the vacuum in your
 supercell, so that
 you get 3D Al again, but restrict yourself to 1 k-point in k_z) and
 compare the
 resulting eps to bulk Al (with 1 atom/cell and good k-meshes).

 Am 23.11.2013 16:54, schrieb phlhj phlhj:

 Dear all,

 I was trying to calculate the optical properties of an Al(111) slab.
 For bulk FCC Al, I can very precisely reproduce the dielectric functions
 and plasma frequency
 reported in the literature. However, I did find some differences
 between the slab dielectric functions and the corresponding bulk values.

 In particular, even though I used a very thick slab, say 39 MLs, in the
 low photon energy range (1 eV) the imaginary part is much larger than in
 the bulk. I suspect this may be
 related to the band folding and symmetry reduction in the direction
 normal to the surface.

 Also, I found the plasma frequency of the slab to be smaller than
 the bulk plasma frequency.

 Mathematically, this behavior of the imaginary parts of the
 interband and intraband transitions contributions seems to be able to be
 

[Wien] LSDA+U+SO orbital moment and spin moment

2013-11-26 Thread njudyp
Hello, everyone.
I use WIEN2k with the LSDA+U+SO method to calculate an AFM system, and I set
the direction of magnetization to (1,0,0).
However, I found that the spin moment and orbital moment have not only an x
component but also a y component.
Sometimes the y component is even larger than the x component. My question is:
how do I select the spin moment and orbital moment?
Should I just take the PROJECTION value, or take all three components into
consideration? And how do I determine the magnetic moment direction?
Is the direction the one given in the *.inso file, or is it determined by the
output in *.scf (i.e. SPIN MOMENT: -0.25635 0.00270 0.0)?
Thank you very much!
 
***
:SPI019:  SPIN MOMENT:  -0.25635   0.00270   0.0 PROJECTION ON M -0.25635
:SPI020:  SPIN MOMENT:   0.25636  -0.00270   0.0 PROJECTION ON M  0.25636
:SPI021:  SPIN MOMENT:   0.25675   0.00286   0.0 PROJECTION ON M  0.25675
:SPI022:  SPIN MOMENT:  -0.25675  -0.00286   0.0 PROJECTION ON M -0.25675
:SPI023:  SPIN MOMENT:   2.66505   0.00241   0.0 PROJECTION ON M  2.66505
:SPI024:  SPIN MOMENT:  -2.66505  -0.00241   0.0 PROJECTION ON M -2.66505
:SPI025:  SPIN MOMENT:  -2.66425   0.00239   0.0 PROJECTION ON M -2.66425
:SPI026:  SPIN MOMENT:   2.66425  -0.00239   0.0 PROJECTION ON M  2.66425

:ORB019:  ORBITAL MOMENT: -0.03831 -0.01570  0.0 PROJECTION ON M -0.03831
:ORB020:  ORBITAL MOMENT:  0.03831  0.01570  0.0 PROJECTION ON M  0.03831
:ORB021:  ORBITAL MOMENT:  0.03811 -0.01582  0.0 PROJECTION ON M  0.03811
:ORB022:  ORBITAL MOMENT: -0.03811  0.01582  0.0 PROJECTION ON M -0.03811
:ORB023:  ORBITAL MOMENT: -0.35968  0.71525  0.0 PROJECTION ON M -0.35968
:ORB024:  ORBITAL MOMENT:  0.35968 -0.71525  0.0 PROJECTION ON M  0.35968
:ORB025:  ORBITAL MOMENT:  0.35830  0.71689  0.0 PROJECTION ON M  0.35830
:ORB026:  ORBITAL MOMENT: -0.35830 -0.71689  0.0 PROJECTION ON M -0.35830
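For what it is worth, the PROJECTION ON M column appears to be simply the scalar projection of the (x, y, z) moment onto the magnetization direction M, here (1,0,0), so the projection equals the x component and the transverse y part is discarded. A small sketch reproducing the :SPI019 line above:

```python
# Sketch: PROJECTION ON M as the scalar projection of a moment vector
# onto the magnetization direction M = (1,0,0) chosen in case.inso.
import math

M = (1.0, 0.0, 0.0)

def projection(moment, direction=M):
    """Scalar projection of a 3-vector moment onto the direction M."""
    norm = math.sqrt(sum(d * d for d in direction))
    return sum(m * d for m, d in zip(moment, direction)) / norm

spi019 = (-0.25635, 0.00270, 0.0)        # spin moment from :SPI019 above
print(projection(spi019))                # -0.25635, as in the :SPI019 line
print(math.hypot(spi019[0], spi019[1]))  # full in-plane magnitude
```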